
Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server 2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft
Exam Code: 70-296
Section Name: Guide Contents
Total Sections: 10

Guide Contents

1. Planning and Implementing Server Roles and Server Security

1.1 Planning a Secure Environment


1.2 Overview of a Secure Windows Server 2003 Environment
1.3 Establishing a Secure Shared IT Infrastructure
1.4 Creating and Enhancing Security Boundaries
1.5 Domains, Forests, and Organizational Units
1.6 Security overview
1.6.1 Authentication
1.6.2 Object-based access control
1.6.3 Security policy
1.6.4 Auditing
1.6.5 Active Directory and security
1.6.6 Data protection
1.6.7 Network data protection
1.7 Server roles
1.7.1 File server role overview
1.7.2 Print server role overview
1.7.3 Application server role overview
1.7.4 Mail server role overview
1.7.5 Terminal server role overview
1.7.6 Remote access/VPN server role overview
1.7.7 Domain controller role overview
1.7.8 DNS server role overview
1.7.9 DHCP server role overview
1.7.10 Streaming media server role overview
1.7.11 WINS server role overview
1.8 Security Configuration and Analysis overview
1.8.1 Security analysis
1.8.2 Security configuration
1.9 Security Templates overview
1.9.1 Security Templates
1.9.2 Local security policy overview
1.9.3 How policy is applied to a computer that is joined to a domain
1.9.4 Using Security Settings

Section 2

2. Planning, Implementing, and Maintaining a Network Infrastructure


2.1 Overview of Designing a TCP/IP Network
2.1.1 Process for Designing a TCP/IP Network
2.1.2 Benefits of Windows Server 2003 TCP/IP
2.1.3 Planning the IP-Based Infrastructure
2.1.4 Designing the Access Tier
2.1.5 Designing the Distribution Tier
2.1.6 Designing the Core Tier
2.2 Developing Routing Strategies
2.2.1 Choosing Hardware or Software Routing
2.2.2 Choosing Static or Dynamic Routing
2.2.3 Distance Vector Routing Protocols
2.2.4 Link State Routing Protocols
2.3 Designing an IP Addressing Scheme
Page 1 of 312 Testking

2.3.1 Creating a Structured Address Assignment Model


2.3.2 Planning Classless IP Addressing
2.3.3 Determining the Number of Subnets and Hosts
2.4 Planning Classless Routing
2.4.1 Planning Classless Noncontiguous Subnets
2.4.2 Noncontiguous subnets with classful routing
2.4.3 Noncontiguous subnets with classless routing
2.5 Using Route Summarization
2.6 Planning Variable Length Subnet Masks (VLSM)
2.7 Choosing an Address Allocation Method
2.8 Choosing Public or Private Addresses
2.8.1 Public Addresses
2.8.2 Private Addresses
2.8.3 Unauthorized Addresses
2.8.4 Network Address Translation
2.9 Planning an IP Configuration Strategy
2.10 DHCP Integration with DNS and WINS
2.11 Setting Up the Physical Network
2.12 Name Resolution Technologies
2.12.1 DNS Name Resolution
2.12.2 WINS Name Resolution
2.12.3 Overview of DNS Deployment
2.12.4 Process for Deploying DNS
2.12.5 DNS Concepts
2.12.6 DNS Roles
2.12.7 DNS designer role
2.12.8 Tools for Deploying DNS
2.12.9 Designing a DNS Namespace
2.12.9.1 Identifying Your DNS Namespace Requirements
2.12.9.2 Creating Internal and External Domains
2.12.9.3 Configuring Name Resolution for Disjointed Namespaces
2.12.10 Designing DNS Zones
2.12.10.1 Choosing a Zone Type
2.12.10.2 Primary Zones
2.12.10.3 Secondary Zones
2.12.10.4 Stub Zones
2.12.10.5 Stub Zones and Conditional Forwarding
2.12.10.6 Active Directory-Integrated Zones
2.12.10.7 Storing Active Directory-Integrated Zones
2.12.10.8 Using Forwarding
2.12.10.9 Securing Your DNS Infrastructure
2.12.10.10 Developing a DNS Security Policy
2.12.10.11 Low-Level DNS Security Policy
2.12.10.12 Mid-Level DNS Security Policy
2.13 Overview of WINS Deployment
2.13.1 WINS Deployment Process
2.13.2 Designing Your WINS Replication Strategy
2.13.3 Specifying Automatic Partner Configuration
2.13.4 Determining Replication Partners
2.13.5 Configuring Replication Across WANs
2.13.6 Configuring Replication Across LANs
2.14 Troubleshooting DNS clients
2.15 Troubleshooting WINS servers
Section 3

3. Planning, Implementing, and Maintaining Server Availability


3.1 Planning for High Availability and Scalability
3.1.1 Overview
3.1.2 High Availability and Scalability Planning Process
3.1.3 Basic High Availability and Scalability Concepts
3.1.4 Defining Availability and Scalability Goals
3.1.5 Quantifying Availability and Scalability for Your Organization
3.2 Determining Availability Requirements
3.2.1 Determining Reliability Requirements
3.2.2 Determining Scalability Requirements
3.2.3 Analyzing Risk
3.2.4 Developing Availability and Scalability Goals
3.2.5 Details on Record That Help Define Availability Requirements
3.2.6 Users of IT Services
3.2.7 Requirements and Requests of End Users
3.2.8 Requirements for User Accounts, Networks, or Similar Types of Infrastructure
3.2.9 Time Requirements and Variations
3.3 Using IT Procedures to Increase Availability and Scalability
3.3.1 Planning and Designing Fault-Tolerant Hardware Solutions
3.3.2 Using Standardized Hardware
3.3.3 Using Spares and Standby Server
3.3.4 Using Fault-Tolerant Components
3.3.5 Storage Strategies
3.3.6 Safeguarding the Physical Environment of the Servers
3.4 Implementing Software Monitoring and Error-Detection Tools
3.4.1 Choosing Monitoring Tools
3.4.2 Windows Management Instrumentation
3.4.3 Microsoft Operations Manager 2000
3.4.5 Event logs
3.4.6 Service Control Manager
3.4.7 Performance Logs and Alerts
3.4.8 Shutdown Event Tracker
3.5 Evaluating the Benefits of Clustering
3.5.1 Benefits of Clustering
3.5.2 Limitations of Clustering
3.5.3 Evaluating Cluster Technologies
3.5.4 Server Clusters
3.5.5 Network Load Balancing
3.5.6 Using Clusters to Increase Availability and Scalability
3.5.7 Scaling by Adding Servers
3.5.8 Scaling by Adding CPUs and RAM
3.5.9 Availability and Server Consolidation with Server Clusters
3.6 Overview of the Server Clusters Design Process
3.6.1 Server Cluster Design and Deployment Process
3.6.2 Server Cluster Fundamentals
3.6.3 Cluster Hardware Requirements
3.6.4 New in Windows Server 2003
3.6.5 Deploying Server Clusters
3.6.6 Designing Network Load Balancing
3.6.7 Overview of the NLB Design Process
3.6.8 How NLB Provides Improved Scalability and Availability
3.6.9 Deploying Network Load Balancing
3.6.10 Overview of the NLB Deployment Process
3.6.11 Selecting the Automated Deployment Method

3.6.12 Implementing a New Cluster


3.6.13 Preparing to Implement the Cluster
3.7 Implementing the Network Infrastructure
3.7.1 Implementing the Cluster
3.7.2 Installing and Configuring the Hardware and Windows Server 2003
3.7.3 Installing and Configuring the First Cluster Host
3.7.4 Installing and Configuring Additional Cluster Hosts
3.8 What Is Backup
3.8.1 Types of Backup
3.8.2 Permissions and User Rights Required to Back Up
3.8.3 Automated System Recovery
3.8.4 Restoring File Security Settings
3.8.5 Restoring Distributed Services
3.8.6 What Is Shadow Copies for Shared Folders
3.8.7 How Shadow Copies for Shared Folders Works
3.8.8 Shadow Copies for Shared Folders Architecture

Section 4

4. Planning and Maintaining Network Security


4.1 What Is IPSec
4.1.1 IPSec Scenarios
4.1.2 Recommended Scenarios for IPSec
4.1.3 Securing Communication Between Domain Members and their
Domain Controllers
4.1.4 Securing All Traffic in a Network
4.1.5 Securing Traffic for Remote Access VPN Connections by
Using IPSec Tunnel Mode
4.1.6 Securing Traffic Sent over 802.11 Networks
4.1.7 Securing Traffic in Home Networking Scenarios
4.1.8 Securing Traffic in Environments That Use Dynamic IP Addresses
4.2 IPSec Dependencies
4.2.1 Active Directory
4.2.2 Successful Mutual Authentication
4.2.3 IPSec and ICF
4.3 How IPSec Works
4.4 IPSec Architecture
4.4.1 Logical Architecture
4.4.2 IPSec Protocols and Algorithms for Authentication and Encryption
4.5 Windows Server 2003 IPSec Architecture
4.5.1 IPSec Components
4.5.2 Policy Agent Architecture
4.5.3 Policy Agent Architecture
4.5.4 Policy Agent Components
4.5.5 Policy store
4.5.6 Policy Agent
4.5.7 Policy Agent Service Retrieving and Delivering IPSec Policy
Information
4.5.8 Local registry
4.5.9 Local cache
4.5.10 Interface Manager
4.6 IKE Module Architecture
4.6.1 IKE Module Architecture
4.6.2 IKE Module Components

4.6.3 IPSec Driver Architecture


4.6.4 IPSec Driver Architecture
4.6.5 IPSec Driver Components
4.6.6 Policy Data Structure
4.6.7 IPSec Policy Structure
4.6.8 IPSec Policy Components
4.6.9 IPSec Rule Components
4.6.10 Default response rule
4.6.11 Default Security Methods for the Default Response Rule
4.7 IPSec Protocols
4.7.1 IPSec Protocol Architecture
4.7.2 IPSec AH and ESP Protocols
4.7.3 IPSec AH and ESP Protocols in IPSec Transport Mode
4.7.4 AH transport mode
4.7.5 AH Transport Mode Packet Structure
4.7.6 ESP transport mode
4.7.7 ESP Transport Mode Packet Structure
4.7.8 IPSec AH and ESP Protocols in IPSec Tunnel Mode
4.7.9 AH tunnel mode
4.8 IPSec Processes and Interactions
4.8.1 Policy Agent Initialization
4.8.2 Policy Data Retrieval and Distribution
4.8.3 IKE Main Mode and Quick Mode Negotiation
4.8.4 Main Mode Negotiation
4.8.5 IKE certificate acceptance process
4.8.6 IPSec CRL checking
4.8.7 Certificate-to-account mapping
4.8.8 Quick Mode Negotiation
4.8.9 Quick Mode SA negotiation
4.8.10 Generating and regenerating session key material
4.9 IPSec Driver Processes
4.9.1 IPSec Driver Responsibilities
4.9.2 IPSec Driver Communication
4.9.3 Default exemptions to IPSec filtering
4.9.4 Hardware acceleration (offloading)
4.10 Network Ports and Protocols Used by IPSec
4.10.1 IPSec Port and Protocol Assignments
4.10.2 Firewall Filters
4.10.3 IPSec NAT-T
4.10.4 Configuring Wireless Network Policies
4.10.5 Network authentication services
4.10.6 Defining Wireless Configuration Options for Preferred Networks
4.10.7 Securing Network Traffic
4.10.8 Securing Servers

Section 5

5. Planning, Implementing, and Maintaining Security Infrastructure


5.1 Overview of the PKI Design Process
5.2 public key infrastructure
5.2.1 Process for Designing a PKI
5.2.2 Basic PKI Concepts
5.2.3 Windows Server 2003 PKI
5.2.4 How a Public Key Infrastructure Works

5.2.5 Defining Certificate Requirements


5.2.6 Determining Secure Application Requirements
5.3 Security Applications Supported by Windows Server 2003 PKI
5.3.1 Digital Signatures
5.3.2 Secure E-mail
5.3.3 Software Code Signing
5.3.4 Internet Authentication
5.3.5 IP Security
5.3.6 Smart Card Logon
5.3.7 Encrypting File System Use and Recovery
5.3.8 Wireless (802.1x) Authentication
5.4 Determining Certificate Requirements for Users, Computers, and Services
5.5 Designing Your CA Infrastructure
5.5.1 Planning Core CA Options
5.5.2 Designing Root CAs
5.5.3 Selecting Internal CAs vs. Third-Party CAs
5.5.4 Evaluating CA Capacity, Performance, and Scalability
5.6 Integrating the Active Directory Infrastructure
5.7 Configuring Public Key Group Policy
5.8 Defining PKI Management and Delegation
5.9 Defining CA Types and Roles
5.9.1 Enterprise vs. Stand-Alone CAs
5.9.2 Root CAs
5.9.3 Subordinate CAs
5.9.4 Using Hardware CSPs
5.10 Establishing a CA Naming Convention
5.11 Selecting a CA Database Location
5.12 Overview of Smart Card Deployment
5.12.1 Process for Planning a Smart Card Deployment
5.12.2 Smart Card Fundamentals
5.12.3 Components of a Smart Card Infrastructure
5.12.4 Creating a Plan for Smart Card Use
5.12.5 Identifying the Processes That Require Smart Cards
5.12.6 Interactive User Logons
5.12.7 Remote Access Logons
5.12.8 Terminal Services and Shared Clients
5.12.9 Using Smart Cards for Individual Administrative Operations
5.12.10 Defining Smart Card Service Level Requirements
5.12.11 Selecting Smart Card Hardware
5.12.12 Smart Card Roles
5.12.13 Evaluating Smart Cards and Readers
5.12.14 Planning Smart Card Certificate Templates
5.12.15 Establishing Issuance Processes
5.13 Software Update Services Overview
5.13.1 Implementing a SUS Solution
5.13.2 SUS Security Features
Section 6

6. Planning and Implementing an Active Directory Infrastructure

6.1 Introduction to Active Directory


6.2 Windows 2000 Domain Upgrade to Windows Server 2003
6.2.1 The role of the global catalog
6.2.2 Global catalog replication

6.2.3 Adding attributes


6.2.4 Customizing the global catalog
6.2.5 Global catalogs and sites
6.2.6 Universal group membership caching
6.2.7 To cache universal group memberships

6.3 Creating a new domain tree


6.3.1 Creating a new child domain
6.3.2 Creating a new forest
6.3.3 When to create a new forest
6.4 Operations master roles in a new forest
6.5 Adding new domains to your forest

6.6 Trust
6.6.1 Trusts in Windows NT
6.6.2 Trusts in Windows Server 2003 and Windows 2000 server operating systems
6.6.3 Trust protocols
6.6.4 Trust types
6.6.5 Trust direction
6.6.6 Trust transitivity

6.7 Organizational units


6.7.1 Organizational Unit Design Concepts
6.7.2 Organizational Unit Owner Role
6.7.3 Delegating Administration by Using OU Objects
6.7.4 Administration of Default Containers and OUs
6.7.5 Delegating Administration of Account and Resource OUs
6.7.6 Administrative Group Types
6.7.7 Creating Account OUs
6.7.8 Creating Resource OUs
Section 7

Managing and Maintaining an Active Directory Infrastructure

7. Understanding Domains and Forests


7.1 Domain controllers
7.1.1 Determining the number of domain controllers you need
7.1.2 Physical security
7.1.3 Backing up domain controllers
7.1.4 Upgrading domain controllers

7.2 Creating a domain


7.2.1 Planning for multiple domains
7.2.2 Removing a domain
7.2.3 Trust relationships between domains
7.2.4 Renaming domains
7.2.5 Forest restructuring
7.2.6 Domain and forest functionality

7.3 Domain functionality


7.4 Forest functionality
7.5 Raising domain and forest functional levels

7.6 Application directory partitions



7.6.1 Application directory partition naming


7.6.2 Application directory partition replication
7.6.3 Security descriptor reference domain
7.6.4 Application directory partitions and domain controller demotion
7.6.5 Identify the applications that use the application directory partition
7.6.6 Determine if it is safe to delete the last replica
7.6.7 Identify the partition deletion tool provided by the application
7.6.8 Remove the application directory partition using the tool provided

7.7 Managing application directory partitions


7.7.1 Creating an application directory partition
7.7.2 Deleting an application directory partition
7.7.3 Adding and removing a replica of an application directory partition
7.7.4 Setting application directory partition reference domain
7.7.5 Setting replication notification delays
7.7.6 Displaying application directory partition information
7.7.7 Delegating the creation of application directory partitions

7.8 Operations master roles


7.8.1 Forest-wide operations master roles
7.8.2 Transferring operations master roles
7.8.3 Responding to operations master failures

7.9 Sites overview


7.9.1 Using sites
7.9.2 Defining sites using subnets
7.9.3 Assigning computers to sites
7.9.4 Understanding sites and domains
7.9.5 Replication overview
7.9.6 How replication works

7.10 Managing replication


7.10.1 Configuring site links
7.10.2 Site link cost
7.10.3 Replication frequency
7.10.4 Site link availability
7.10.5 Configuring site link bridges
7.10.6 Configuring preferred bridgehead servers

Section 8

8. Planning and Implementing User, Computer, and Group Strategies


8.1 User and computer accounts
8.1.1 User accounts
8.1.2 Computer accounts
8.1.3 Access control in Active Directory
8.1.4 Security descriptors
8.1.5 Object inheritance
8.1.6 User authentication
8.1.7 Organizational units

8.2 Manage Groups in Windows Server 2003


8.2.1 Manage Groups
8.2.2 Summary

8.2.3 Add a Group


8.2.4 Convert a Group to another Group Type
8.2.5 Change Group Scope
8.2.6 Delete a Group
8.2.7 Find a Group
8.2.8 Find Groups where a User Is a Member
8.3 Modify Group Properties
8.3.1 Remove a Member from a Group
8.3.2 Rename a Group

8.4 Nesting groups


8.5 Special identities
8.6 Authentication
8.6.1 Authentication types
8.6.2 Introduction to authentication
8.6.3 Interactive logon
8.6.4 Network authentication

8.7 Smart card


8.7.1 Understanding smart cards
8.7.2 Stored User Names and Passwords overview
8.7.3 How Stored User Names and Passwords works

Section 9

9. Planning and Implementing Group Policy


9.1 What Is Core Group Policy
9.1.2 Change and Configuration Management
9.1.3 Change and Configuration Management Process
9.1.4 Core Group Policy Infrastructure
9.1.5 Group Policy and Active Directory
9.1.6 Sample Active Directory Organizational Structure
9.1.7 Group Policy Inheritance
9.1.8 Viewing and Reporting of Policy Settings
9.1.9 Delegating Administration of Group Policy
9.1.10 Core Group Policy Scenarios

9.2 Group Policy Dependencies


9.2.1 Core Group Policy Architecture
9.2.2 Group Policy Engine Architecture
9.2.3 RSoP Architecture
9.2.4 Planning Mode (Group Policy Modeling)
9.2.5 Core Group Policy Physical Structure

9.3 How a Group Policy Container is Named


9.3.1 Default Group Policy Container Permissions
9.3.2 GroupPolicyContainer Subcontainers
9.3.3 Group Policy Container-Related Attributes of Domain, Site, and OU
9.3.4 Managing Group Policy Links for a Site, Domain, or OU
9.3.5 How WMIPolicy Objects are Stored and Associated with Group Policy
9.3.6 Group Policy Template
9.3.7 Default Group Policy Template Permissions
9.3.8 Core Group Policy Processes and Interactions


9.3.9 Group Policy Processing Rules


9.3.10 Targeting GPOs

9.4 How Security Filtering is Processed


9.4.1 WMI Filtering
9.4.2 How WMI Filtering is Processed
9.4.3 WMI Filtering Scenarios
9.4.4 Application of Group Policy
9.4.5 Group Policy Loopback Support
9.4.6 How the Group Policy Engine Processes Client-Side Extensions
9.4.7 How Group Policy Processing History Is Maintained on the Client Computer

9.5 Group Policy Replication


9.5.1 Network Ports Used by Group Policy

9.6 What Is Resultant Set of Policy?


9.6.1 Resultant Set of Policy Snap-in Core Scenario
9.6.2 Similar Technologies for Viewing Resultant Set of Policy Data
9.6.3 Resultant Set of Policy Snap-in Dependencies
9.6.4 How Resultant Set of Policy Works
9.6.5 Resultant Set of Policy Snap-in Architecture
9.7 Group Policy Tools
Section 10

10. Managing and Maintaining Group Policy


10.1 Group Policy Administrative Tools
10.1.1 Group Policy Administrative Tools Architecture
10.1.2 Group Policy Administrative Tools Components
10.1.3 Group Policy Management Console
10.1.4 Group Policy Object Editor
10.1.5 Resultant Set of Policy Snap-in

10.2 Group Policy Administrative Tools Deployment Scenarios


10.2.1 Group Policy Management Console
10.2.2 Group Policy Object Editor
10.2.3 Resultant Set of Policy snap-in
10.2.4 Group Policy Management Console Core Scenarios
10.2.5 Creating and Editing GPOs
10.2.6 Manipulating Inheritance
10.2.7 Reporting of GPO Settings

10.3 Group Policy Modeling


10.4 Group Policy Results

10.5 Group Policy Management Console Dependencies


10.5.1 GPMC System Installation Requirements
10.5.2 GPMC Feature Requirements

10.6 How Group Policy Management Console Works


10.7 Group Policy Management Console Architecture

10.8 Group Policy Management Console Interfaces


10.8.1 Group Policy Management Console Processes and Interactions


10.8.2 GPO Operations in Group Policy Management Console


10.8.3 Specifying the discretionary access control list (DACL) on the new GPO
10.8.4 Copying within a domain compared with copying to another domain
10.8.5 Migration Tables
10.8.6 Settings impacted by migration tables
10.8.7 Options for specifying migration tables
10.8.8 Contents of Migration tables

10.9 Administrative Templates in GPMC and Group Policy Object Editor


10.9.1 Administrative Templates
10.9.2 Handling Administrative Template Files in GPMC
10.9.3 Handling Administrative Template Files in Group Policy Object Editor

Section 1

1. Planning and Implementing Server Roles and Server Security

1.1 Planning a Secure Environment


1.2 Overview of a Secure Windows Server 2003 Environment
1.3 Establishing a Secure Shared IT Infrastructure
1.4 Creating and Enhancing Security Boundaries
1.5 Domains, Forests, and Organizational Units
1.6 Security overview
1.6.1 Authentication
1.6.2 Object-based access control
1.6.3 Security policy
1.6.4 Auditing
1.6.5 Active Directory and security
1.6.6 Data protection
1.6.7 Network data protection
1.7 Server roles
1.7.1 File server role overview
1.7.2 Print server role overview
1.7.3 Application server role overview
1.7.4 Mail server role overview
1.7.5 Terminal server role overview
1.7.6 Remote access/VPN server role overview
1.7.7 Domain controller role overview
1.7.8 DNS server role overview
1.7.9 DHCP server role overview
1.7.10 Streaming media server role overview
1.7.11 WINS server role overview
1.8 Security Configuration and Analysis overview
1.8.1 Security analysis
1.8.2 Security configuration
1.9 Security Templates overview
1.9.1 Security Templates
1.9.2 Local security policy overview
1.9.3 How policy is applied to a computer that is joined to a domain
1.9.4 Using Security Settings


1.1 Planning a Secure Environment


To plan a secure environment, you need a clear and consistent strategy for addressing the many
aspects of the Microsoft Windows Server 2003 operating system, including security-related
issues and features. You can begin by identifying the user-related requirements that impact
security, and the other aspects of the network that comprise a secure common infrastructure. You
can use this chapter to locate the chapters in this book that can help you address those
requirements.

1.2 Overview of a Secure Windows Server 2003 Environment


Secure IT environments are the result of careful planning from the time you decide to deploy a
new feature or service to the moment when responsibility for those features and services is
handed over to those responsible for the day-to-day operation of the network. To ensure a secure
IT environment, security must be addressed in every possible area of network design and
planning, including such diverse areas as the Active Directory directory service, networking, and
client configuration. Even then, the best security plans and designs in the world cannot protect an
organization if security is not an essential part of its operating procedures.
Every organization has its own unique mix of clients, servers, and user requirements that make
planning a comprehensive, secure environment a major challenge. Without a consistent approach
to security, some areas of the network might benefit from extremely rigorous security while others
are only minimally secured.
Fortunately, the Microsoft Windows Server 2003, Standard Edition; Windows Server 2003,
Enterprise Edition; Windows Server 2003, Datacenter Edition; and Windows Server 2003,
Web Edition operating systems, and the Microsoft Windows XP Professional operating system
provide many features and capabilities that you can use to configure and maintain a secure
network operating environment. In fact, there are security capabilities in nearly every area of
Windows Server 2003 and Windows XP Professional. Many of these security features and
capabilities have been added or enhanced since the introduction of the Microsoft
Windows 2000 Professional and Windows 2000 Server operating systems.
The Windows Server 2003 security planning process takes a high-level view of the security
configuration options and capabilities. It is built on two organizing principles:
Users need access to resources. This access can be very basic, including only desktop
logon and the availability of access control lists (ACLs) on resources. This access can also
include optional services such as remote network logons, wireless network access, and
access for external users, such as business partners or customers.
The network requires a secure shared IT infrastructure. This infrastructure includes
security boundaries, secure servers and services, secure networking, and an effective plan
for delegating administration.
Together, these building blocks of network operating system security can provide the trust and
integrity needed in today's complex operating environments. By analyzing your organization's
security requirements through a structured planning process, you can establish a high-level
security framework for your Windows Server 2003 deployment.
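The role that access control lists play in the first principle can be illustrated with a minimal model. This is a conceptual sketch only: the class and function names are invented, and real Windows DACL evaluation works on SIDs and ordered access control entries with inheritance, not plain strings. It does capture the essential rule that an explicit deny overrides an allow.

```python
from dataclasses import dataclass

@dataclass
class AccessControlEntry:
    principal: str   # user or group name (hypothetical; real ACEs use SIDs)
    permission: str  # e.g. "read", "write"
    allow: bool      # True for an allow entry, False for a deny entry

def is_access_granted(acl, principals, permission):
    """Simplified DACL check: a deny matching any of the caller's
    principals overrides any allow; otherwise an allow is required."""
    matching = [ace for ace in acl
                if ace.principal in principals and ace.permission == permission]
    if any(not ace.allow for ace in matching):
        return False
    return any(ace.allow for ace in matching)

# A payroll share readable by Accounting but explicitly denied to Interns.
acl = [
    AccessControlEntry("Accounting", "read", allow=True),
    AccessControlEntry("Interns", "read", allow=False),
]
print(is_access_granted(acl, {"alice", "Accounting"}, "read"))           # True
print(is_access_granted(acl, {"bob", "Accounting", "Interns"}, "read"))  # False
```

Because group membership feeds into the set of principals checked, placing users into well-chosen groups (rather than granting per-user ACEs) is what keeps such ACLs manageable at scale.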

1.3 Establishing a Secure Shared IT Infrastructure


Not all security-related features apply directly to users. Many basic network services and
configuration decisions involve creating and defining explicit boundaries, securing network traffic,
and securing your servers.

1.4 Creating and Enhancing Security Boundaries


As networks become more complex, they can become more difficult to manage. Windows NT 4.0
introduced the concept of domains to enhance an administrator's ability to manage the users and
computers in a network. The domain concept has been enhanced and expanded considerably in


Windows 2000 and Windows Server 2003 to address security concerns of organizations,
including the following concerns:
Organizations that have acquired subsidiaries with significantly different administrative
and security requirements.
International organizations that want to divide up administrative and security
responsibilities along national or regional boundaries.
Rapidly growing organizations that want to unify security and administrative
responsibilities across disparate business units.
Active Directory in Microsoft Windows Server 2003 enables organizations to simplify user and
resource management; support directory-enabled programs; and create a scalable, secure, and
manageable infrastructure. A well-designed Active Directory logical structure provides the
following benefits:
Centralized management of Windows networks that contain large numbers of objects
A consolidated domain structure and reduced hardware and administration costs
The supporting framework for Group Policy-based user, data, and software management
The ability to delegate administrative control over resources
Integration with services such as Microsoft Exchange 2000 Server, a PKI, and domain-based distributed file system (DFS)
Achieving these results requires careful planning of the following elements:
Domains. An administrative unit in a computer network that groups a number of
capabilities for management convenience, including network-wide user identity,
authentication, trust relationships, policy administration, and replication.
Forests. One or more Active Directory domains that share a schema and global catalog.
Each forest is a single instance of the directory and defines a security boundary.
Organizational units. Active Directory containers where you can place users, groups,
computers, and other organizational units. You can use organizational units to create
containers within a domain to represent the hierarchical and logical structures within your
organization.
The way that you plan your domains, forests, and organizational units plays a critical role in
defining your network's security boundaries. The relationship might sometimes be based on
administrative requirements; at other times, the relationship might be defined by operational
requirements such as controlling replication. Additionally, if you have multiple forests, you need to
plan the logical trust relationships between forests that allow pass-through authentication.
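Pass-through authentication along trust relationships can be thought of as reachability in a graph of trusts: within a forest, parent-child trusts are two-way and transitive, so a chain of trusts connects every domain pair. The sketch below is a conceptual model only, with hypothetical domain names and only one direction of each two-way trust shown for brevity:

```python
from collections import deque

def authenticating_domains(trusts, start):
    """Return all domains reachable from `start` along a chain of
    transitive trust links (trusts maps a domain to the domains it
    has a direct trust with)."""
    seen, queue = {start}, deque([start])
    while queue:
        domain = queue.popleft()
        for trusted in trusts.get(domain, ()):
            if trusted not in seen:
                seen.add(trusted)
                queue.append(trusted)
    return seen - {start}

# Hypothetical forest: the root domain has two child domains, and one
# child domain has a child of its own.
trusts = {
    "corp.example.com": ["emea.corp.example.com", "us.corp.example.com"],
    "emea.corp.example.com": ["dublin.emea.corp.example.com"],
}
print(sorted(authenticating_domains(trusts, "corp.example.com")))
```

Here the root reaches all three other domains even though it has direct trusts with only two of them, which is exactly why multi-forest designs need explicit planning: a trust between forests does not automatically extend to every domain chain you might expect.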

1.5 Domains, Forests, and Organizational Units


Active Directory is a distributed database that stores and manages information about network
resources. The way that you organize Active Directory determines how well you can manage
network resources and distribute administrative responsibilities.
Active Directory allows administrators to organize elements of a network (such as users,
computers, and devices) into a hierarchical, tree-like structure based on containers. The top-level
Active Directory container is the forest. Forests include domains, and domains include
organizational units. Administrative ownership and control in Active Directory containers are
organized in the following ways:
The default administrative owner of a forest is the Domain Admins group of the forest root
domain. The Domain Admins group of the forest root controls the membership of the
Enterprise Admins and Schema Admins groups. By default, the Enterprise Admins and
Schema Admins groups have control over forest-wide configuration settings, which also
makes them service administrators.
The default administrator of a domain is the Domain Admins group of the domain.
Because the Domain Admins control domain controllers, they are also service administrators.
All non-root Domain Admins in a forest are peers, regardless of their domain's position in the
naming hierarchy.

Page 14 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

Control over an organizational unit and the objects within it is determined by the ACLs on
the organizational unit and on the objects in the organizational unit. Users and groups that
have control over objects in organizational units are data administrators.
To facilitate the management of large numbers of objects, Active Directory supports
administrative delegation at the container level. If administrative control is the priority for your
organization, base your logical structure design on forests and organizational units. Forests and
organizational units are used to control the delegation of authority throughout the directory. Many
organizations consolidate divisions into a single forest to enhance their users' ability to
collaborate and to reduce costs.
If you choose to organize Active Directory according to geographic location, you must apply a
domain model to your logical design. Domains let you control where information is replicated and
let you partition data so that it can be stored where it is used most frequently. A well-designed
domain model prevents unnecessary replication and promotes more efficient use of available
bandwidth between remote locations.
To determine the number of forests that your organization requires, identify the isolation
requirements for each division of the organization that will be using the directory service.
Consider the following elements:
Generally, a single forest deployment isolates data from parties outside the organization.
If your organization includes more than one IT group, the only way to achieve isolation in a
single forest environment is to select one IT group to act as the administrators of the forest,
and then make the other IT groups in the organization relinquish control of the directory.
If divisions of your organization require that you isolate data from the rest of the
organization, you must deploy multiple forests. For example, you might need multiple forests
if legal or contractual obligations require that your organization guarantees the security of
data for a particular project.
If your organization includes multiple divisions with separate IT groups, each IT group
might prefer to manage its own forest; however, your business needs might require resource
sharing between divisions. You can deploy multiple forests, each of which is managed by an
individual IT group, and then establish external trusts between the forests to facilitate
collaboration. In this type of environment, be careful to avoid granting administrative access
to users in other forests.
Trusts
If your organization includes more than one forest, you must enable the forests to allow
authentication and resource sharing. You can do this by establishing trust relationships between
some or all of the domains in the forests. The types of trust relationships that you can establish
depend on the versions of the operating system that are running in each forest:
Authentication between Windows Server 2003 forests. When all domains in two
forests trust each other and must authenticate users, establish a forest trust between the
forests. When only some of the domains in two Windows Server 2003 forests trust each
other, establish one-way or two-way external trusts between the domains that require
interforest authentication.
Authentication between Windows Server 2003 and Windows 2000 forests. It is not
possible to establish transitive forest trusts between Windows Server 2003 and
Windows 2000 forests. To enable authentication with Windows 2000 forests, establish one-
way or two-way external trusts between the domains that need to share resources.
Authentication between Windows Server 2003 and Windows NT 4.0 forests. It is not
possible to establish transitive forest trusts between Windows Server 2003 and
Windows NT 4.0 domains. Establish one-way or two-way external trusts between the
domains that need to share resources.
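As an informal summary, the trust rules above can be sketched in Python. The function and its string inputs are illustrative helpers, not part of any Windows API:

```python
# Sketch of the trust-selection rules above. Illustrative only; the
# function name and its inputs are made up for this example.

def choose_trust(forest_a, forest_b, all_domains_trust):
    """forest_a / forest_b: "2003", "2000", or "nt4".
    all_domains_trust: True when every domain in each forest must
    trust every domain in the other."""
    if forest_a == forest_b == "2003" and all_domains_trust:
        # Transitive forest trusts exist only between two
        # Windows Server 2003 forests.
        return "forest trust"
    # Otherwise: one-way or two-way external trusts between the
    # individual domains that need to share resources.
    return "external trust"

print(choose_trust("2003", "2003", True))   # forest trust
print(choose_trust("2003", "2000", True))   # external trust
print(choose_trust("2003", "2003", False))  # external trust
```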

1.6 Security overview


The primary features of the Microsoft Windows Server 2003 family security model are user
authentication and access control. The Active Directory directory service ensures that


administrators can manage these features easily and efficiently. The following sections describe
these features of the security model.

1.6.1 Authentication
Interactive logon confirms the user's identification to the user's local computer or Active
Directory account.
Network authentication confirms the user's identification to any network service that the
user is attempting to access. To provide this type of authentication, the security system
includes these authentication mechanisms: Kerberos V5, public key certificates, Secure
Sockets Layer/Transport Layer Security (SSL/TLS), Digest, and NTLM (for compatibility with
Windows NT 4.0 systems).
Single sign-on makes it possible for users to access resources over the network without
having to repeatedly supply their credentials. For the Windows Server 2003 family, users
need to authenticate only once to access network resources; subsequent authentication is
transparent to the user.

1.6.2 Object-based access control


Along with user authentication, administrators can control access to resources or
objects on the network. To do this, administrators assign security descriptors to objects that are
stored in Active Directory. A security descriptor lists the users and groups that are granted access
to an object and the specific permissions assigned to those users and groups. A security
descriptor also specifies the various access events to be audited for an object. Examples of
objects include files, printers, and services. By managing properties on objects, administrators
can set permissions, assign ownership, and monitor user access.
Not only can administrators control access to a specific object, they can also control access to a
specific attribute of that object. For example, through proper configuration of an object's security
descriptor, a user could be allowed to access a subset of information, such as employees' names
and phone numbers, but not their home addresses.
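The access check described above can be modeled with a short Python sketch. This is a simplification for illustration; real security descriptors also carry an owner, a SACL for auditing, and inheritance data:

```python
# Minimal model of a security descriptor's DACL: a list of access
# control entries (ACEs) granting named permissions to users or
# groups. Illustrative only; this is not the Windows data format.

GROUPS = {"jeff": {"HR"}, "ana": {"IT"}}  # sample group memberships

def access_check(dacl, user, permission):
    """Grant access if any ACE names the user, or a group the user
    belongs to, with the requested permission."""
    principals = {user} | GROUPS.get(user, set())
    return any(ace["principal"] in principals and
               permission in ace["permissions"]
               for ace in dacl)

# DACL for a hypothetical file object: HR may read; ana may read and write.
dacl = [
    {"principal": "HR", "permissions": {"read"}},
    {"principal": "ana", "permissions": {"read", "write"}},
]

print(access_check(dacl, "jeff", "read"))   # True (via HR membership)
print(access_check(dacl, "jeff", "write"))  # False
```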

1.6.3 Security policy


You can control security on your local computer or on multiple computers by controlling password
policies, account lockout policies, Kerberos policies, auditing policies, user rights, and other
policies. To create a systemwide policy, you can use security templates, apply templates by using
Security Configuration and Analysis, or edit policies on the local computer, organizational unit, or
domain.

1.6.4 Auditing
Monitoring the creation or modification of objects gives you a way to track potential security
problems, helps to ensure user accountability, and provides evidence in the event of a security
breach.

1.6.5 Active Directory and security


Active Directory provides protected storage of user account and group information by using
access control on objects and user credentials. Because Active Directory stores not only user
credentials but also access control information, users who log on to the network obtain both
authentication and authorization to access system resources. For example, when a user logs on
to the network, the security system authenticates the user with information stored in Active
Directory. Then, when the user attempts to access a service on the network, the system checks
the properties defined in the discretionary access control list (DACL) for that service.
Because Active Directory allows administrators to create group accounts, administrators can
manage system security more efficiently. For example, by adjusting a file's properties, an
administrator can permit all users in a group to read that file. In this way, access to objects in
Active Directory is based on group membership.


1.6.6 Data protection


Stored data (online or offline) can be protected using:
Encrypting File System (EFS). EFS uses public key encryption to encrypt local NTFS
data.
Digital signatures. Digital signatures sign software components, ensuring their validity.

1.6.7 Network data protection


Network data within your site (local network and subnets) is secured by the authentication
protocol. For an additional level of security, you can also choose to encrypt network data within a
site. Using Internet Protocol security, you can encrypt all network communication for specific
clients, or for all clients in a domain.
Network data passing in and out of your site (across intranets, extranets, or an Internet gateway)
can be secured using the following utilities:
Internet Protocol Security (IPSec). A suite of cryptography-based protection services
and security protocols.
Routing and Remote Access. Configures remote access protocols and routing
Internet Authentication Service (IAS). Provides security and authentication for dial-in
users

1.7 Server roles


The Windows Server 2003 family provides several server roles. To configure a server role, install
the server role by using the Configure Your Server Wizard and manage your server roles by
using Manage Your Server. After you finish installing a server role, Manage Your Server starts
automatically.
To determine which server role is appropriate for you, review the following information about the
server roles that are available with the Windows Server 2003 family:
File server role overview
Print server role overview
Application server role overview
Mail server role overview
Terminal server role overview
Remote access/VPN server role overview
Domain controller role overview
DNS server role overview
DHCP server role overview
Streaming media server role overview
WINS server role overview

1.7.1 File server role overview


File servers provide and manage access to files. If you plan to use disk space on this computer to
store, manage, and share information such as files and network-accessible applications,
configure this computer as a file server.
After configuring the file server role, you can do the following:
Use disk quotas on volumes formatted with the NTFS file system to monitor and limit the
amount of disk space available to individual users. You can also specify whether to log an
event when a user exceeds the specified disk space limit or when a user exceeds the
specified disk space warning level (that is, the point at which a user is nearing his or her
quota limit).
Use Indexing Service to quickly and securely search for information, either locally or on
the network.


Search in files that are in different formats and languages, either through the Search
command on the Start menu or through HTML pages that users view in a browser.
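The disk-quota behavior described in the first bullet can be sketched as follows. This is a conceptual model only; NTFS tracks and enforces quotas per user, per volume, internally:

```python
# Conceptual sketch of NTFS disk-quota checks: compare a user's disk
# usage against a warning level and a hard limit, and report which
# event (if any) would be logged. Illustrative only.

def quota_status(used_bytes, warning_bytes, limit_bytes):
    if used_bytes > limit_bytes:
        return "limit exceeded: log event (and optionally deny disk space)"
    if used_bytes > warning_bytes:
        return "warning level exceeded: log event"
    return "ok"

MB = 1024 * 1024
print(quota_status(120 * MB, warning_bytes=90 * MB, limit_bytes=100 * MB))
print(quota_status(95 * MB, warning_bytes=90 * MB, limit_bytes=100 * MB))
print(quota_status(50 * MB, warning_bytes=90 * MB, limit_bytes=100 * MB))
```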

1.7.2 Print server role overview


Print servers provide and manage access to printers. If you plan to manage printers remotely,
manage printers by using Windows Management Instrumentation (WMI) , or print from a server or
client computer to a print server by using a URL, configure this computer as a print server.
After configuring the print server role, you can do the following:
Use a browser to manage printers. You can pause, resume, or delete a print job, and
view the printer and print job's status.
Use the new standard port monitor, which simplifies installation of most TCP/IP printers
on your network.
Use Windows Management Instrumentation (WMI), which is the management API
created by Microsoft that enables you to monitor and control all system components, either
locally or remotely. The WMI Print Provider enables you to manage print servers, print
devices, and other printing-related objects from the command line. With WMI Print Provider,
you can use Visual Basic (VB) scripts to perform administrative printer functions.
Print from Windows XP clients to print servers running Windows Server 2003 by using a
Uniform Resource Locator (URL)
Connect to printers on your network by using Web point-and-print for single-click
installation of a shared printer. You can also install drivers from a Web site.

1.7.3 Application server role overview


An application server is a core technology that provides key infrastructure and services to
applications hosted on a system. Typical application servers include the following services:
Resource pooling (for example, database connection pooling and object pooling)
Distributed transaction management
Asynchronous program communication, typically through message queuing
A just-in-time object activation model
Automatic XML Web Service interfaces to access business objects
Failover and application health detection services
Integrated security
The Windows Server 2003 family includes an application server that contains all of this
functionality and other services for development, deployment, and runtime management of XML
Web services, Web applications, and distributed applications.
When you configure this server as an application server, you install Internet Information
Services (IIS) along with other optional technologies and services such as COM+ and ASP.NET.
Together, IIS and the Windows Server 2003 family provide integrated, reliable, scalable, secure,
and manageable Web server capabilities over an intranet, the Internet, or through an extranet. IIS
is a tool for creating a strong communications platform of dynamic network applications.

1.7.4 Mail server role overview


To provide e-mail services to users, you can use the Post Office Protocol 3 (POP3) and Simple
Mail Transfer Protocol (SMTP) components included with the Windows Server 2003 family. The
POP3 service implements the standard POP3 protocol for mail retrieval, and you can pair it with
the SMTP service to enable mail transfer. If you plan to have clients connect to this POP3 server
and download e-mail to local computers by using a POP3 capable mail client, configure this
server as a mail server.
After configuring the mail server role, you can do the following:
Use the POP3 service to store and manage e-mail accounts on the mail server.
Enable user access to the mail server so that users can retrieve e-mail from their local
computer by using an e-mail client that supports the POP3 protocol (for example, Microsoft
Outlook).

1.7.5 Terminal server role overview


With Terminal Server, you can provide a single point of installation that gives multiple users
access to any computer that is running a Windows Server 2003 operating system. Users can run
programs, save files, and use network resources all from a remote location, as if these resources
were installed on their own computer.
After configuring the terminal server role, you can do the following:
Confirm Internet Explorer Enhanced Security Configuration settings.
Centralize the deployment of programs on one computer.
Ensure that all clients use the same version of a program.
Important
In addition to configuring a terminal server, you must install Terminal Server Licensing and
configure a Terminal Server License Server. Otherwise, your terminal server will stop accepting
connections from unlicensed clients when the evaluation period ends 120 days after the first
client logon.

1.7.6 Remote access/VPN server role overview


Routing and Remote Access provides a full-featured software router and both dial-up and virtual
private network (VPN) connectivity for remote computers. It offers routing services for local area
network (LAN) and wide area network (WAN) environments. It also enables remote or mobile
workers to access corporate networks as if they were directly connected, either through dial-up
connection services or over the Internet by using VPN connections. If you plan to connect remote
workers to business networks, configure this server as a remote access/VPN server.
Remote access connections enable all of the services that are typically available to a LAN-
connected user, including file and print sharing, Web server access, and messaging.
After configuring the remote access/VPN server role, you can do the following:
Control how and when remote users access your network.
Provide network address translation (NAT) services for the computers on your network.
Create custom networking solutions using application programming interfaces (APIs)

1.7.7 Domain controller role overview


Domain controllers store directory data and manage communication between users and domains,
including user logon processes, authentication, and directory searches. If you plan to provide the
Active Directory directory service to manage users and computers, configure this server as a
domain controller.
Computers running Windows Server 2003, Web Edition, cannot function as domain
controllers.
After configuring the domain controller role, you can do the following:
Store directory data and make this data available to network users and administrators.
Active Directory stores information about user accounts (for example, names, passwords,
phone numbers, and so on), and enables other authorized users on the same network to
access this information.
Add additional domain controllers to an existing domain to improve the availability and
reliability of network services.
Improve network performance between sites by placing a domain controller in each site.
With a domain controller in each site, you can handle client logon processes within the site
without using the slower network connection between sites.

1.7.8 DNS server role overview


The Domain Name System (DNS) is the TCP/IP name resolution service that is used on the
Internet. The DNS service enables client computers on your network to register and resolve user-
friendly DNS names. If you plan to make resources in your network available on the Internet,
configure this server as a DNS server.


Important
If you plan to include computers on the Internet on your network, use a unique DNS
domain name. Domain names consist of a sequence of name labels separated by periods.
After configuring the DNS server role, you can do the following:
Host records of a distributed DNS database and use these records to answer DNS
queries sent by DNS client computers, such as queries for the names of Web sites or
computers in your network or on the Internet.
Name and locate network resources using user-friendly names.
Control name resolution for each network segment and replicate changes to either the
entire network or globally on the Internet.
Reduce DNS administration by dynamically updating DNS information.
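As a quick illustration of the label structure mentioned in the Important note above, the sketch below applies a common simplification of the DNS syntax rules (labels separated by periods, each label 1 to 63 characters); full validation per the DNS specifications is stricter:

```python
# Quick syntactic check of DNS name structure: a sequence of name
# labels separated by periods, each label 1-63 characters long.
# A simplification for illustration; real DNS validation also limits
# total name length and permitted characters.

def is_valid_dns_name(name):
    labels = name.rstrip(".").split(".")  # tolerate a trailing root dot
    return all(1 <= len(label) <= 63 for label in labels)

print(is_valid_dns_name("server1.example.com"))  # True
print(is_valid_dns_name("bad..example.com"))     # False (empty label)
```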

1.7.9 DHCP server role overview


Dynamic Host Configuration Protocol (DHCP) is an IP standard designed to reduce the complexity
of administering address configurations by using a server computer to centrally manage IP
addresses and other related configuration details used on your network. If you plan to perform
multicast address allocation, and obtain client IP address and related configuration parameters
dynamically, configure this server as a DHCP server.
After configuring the DHCP server role, you can do the following:
Centrally manage IP addresses and related information.
Use DHCP to prevent address conflicts by preventing a previously assigned IP address
from being used again to configure a new computer on the network.
Configure your DHCP server to supply a full range of additional configuration values
when assigning address leases. This will greatly decrease the time you spend configuring
and reconfiguring computers on your network.
Use the DHCP lease renewal process to ensure that client configurations that need to be
updated often (such as users with mobile or portable computers that change locations
frequently) can be updated efficiently and automatically by clients communicating directly
with DHCP servers.
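The conflict-prevention idea above (never offering an address that is already leased) can be sketched with a toy Python model; the class and method names are made up for illustration:

```python
# Toy model of DHCP address allocation from a scope: an address that
# is already leased is never offered to a new client, which is how a
# DHCP server prevents address conflicts. Illustrative only.

class DhcpScope:
    def __init__(self, prefix, first, last):
        self.free = [f"{prefix}.{i}" for i in range(first, last + 1)]
        self.leases = {}  # client id -> leased address

    def lease(self, client_id):
        if client_id in self.leases:   # renewal: keep the same address
            return self.leases[client_id]
        address = self.free.pop(0)     # offer the next unused address
        self.leases[client_id] = address
        return address

scope = DhcpScope("192.168.0", 10, 20)
print(scope.lease("laptop-a"))  # 192.168.0.10
print(scope.lease("laptop-b"))  # 192.168.0.11
print(scope.lease("laptop-a"))  # 192.168.0.10 (renewal, no conflict)
```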

1.7.10 Streaming media server role overview


Streaming media servers provide Windows Media Services to your organization. Windows Media
Services manages, delivers, and archives Windows Media content, including streaming audio and
video, over an intranet or the Internet. If you plan to use digital media in real time over dial-up
Internet connections or local area networks (LANs), configure this server as a streaming media
server.
After configuring the streaming media server role, you can do the following:
Provide digital video in real time over networks that range from low-bandwidth, dial-up
Internet connections to high-bandwidth local area networks (LANs).
Provide streaming digital audio to clients and other servers across the Internet or your
intranet.

1.7.11 WINS server role overview


Windows Internet Name Service (WINS) servers map IP addresses to NetBIOS computer names
and NetBIOS computer names back to IP addresses. With WINS servers in your organization, you
can search for resources by computer name instead of IP address, which can be easier to
remember. If you plan to map NetBIOS names to IP addresses or centrally manage the name-to-
address database, configure this server as a WINS server.
After configuring the WINS server role, you can do the following:
Reduce NetBIOS-based broadcast traffic on subnets by permitting clients to query WINS
servers to directly locate remote systems.


Support earlier Windows and NetBIOS-based clients on your network by permitting these
types of clients to browse lists for remote Windows domains without requiring a local domain
controller to be present on each subnet.
Support DNS-based clients by enabling those clients to locate NetBIOS resources when
WINS lookup integration is implemented.

1.8 Security Configuration and Analysis overview


Security Configuration and Analysis is a tool for analyzing and configuring local system security.

1.8.1 Security analysis


The state of the operating system and applications on a computer is dynamic. For example, you
may need to temporarily change security levels so that you can immediately resolve an
administration or network issue. However, such a change often goes unreversed, which means
that the computer may no longer meet the requirements for enterprise security.
Regular analysis enables an administrator to track and ensure an adequate level of security on
each computer as part of an enterprise risk management program. An administrator can tune the
security levels and, most importantly, detect any security flaws that may occur in the system over
time.
Security Configuration and Analysis enables you to quickly review security analysis results. It
presents recommendations alongside current system settings and uses visual flags or remarks
to highlight any areas where the current settings do not match the proposed level of security.
Security Configuration and Analysis also offers the ability to resolve any discrepancies that
analysis reveals.

1.8.2 Security configuration


Security Configuration and Analysis can also be used to directly configure local system security.
Through its use of personal databases, you can import security templates that have been created
with Security Templates and apply these templates to the local computer. This immediately
configures the system security with the levels specified in the template.

1.9 Security Templates overview


With the Security Templates snap-in for Microsoft Management Console, you can create a
security policy for your computer or for your network. It is a single point of entry where the full
range of system security can be taken into account. The Security Templates snap-in does not
introduce new security parameters; it simply organizes all existing security attributes into one
place to ease security administration.
Importing a security template into a Group Policy object eases domain administration by
configuring security for a domain or organizational unit all at once.

1.9.1 Security Templates


A security template is a file that represents a security configuration. Security templates can be
applied to a local computer, imported into a Group Policy object, or used to analyze security.
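Security templates are stored as plain-text .inf files with an INI-like layout of sections and key = value entries, so a standard INI parser can read them. The fragment below is a minimal, made-up example (real templates also carry sections such as [Unicode] and [Version]):

```python
# Security templates (.inf) use an INI-like layout of sections and
# key = value lines, so Python's standard configparser can read them.
# The fragment below is a made-up minimal example, not a complete
# template.

import configparser

template = """
[System Access]
MinimumPasswordLength = 8
PasswordComplexity = 1
LockoutBadCount = 5
"""

parser = configparser.ConfigParser()
parser.read_string(template)

section = parser["System Access"]
print(section["MinimumPasswordLength"])        # 8
print(section.getint("LockoutBadCount") >= 3)  # True
```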

1.9.2 Local security policy overview


A security policy is a combination of security settings that affect the security on a computer. You
can use your local security policy to edit account policies and local policies on your local
computer. With the local security policy, you can control:
Who accesses your computer.
What resources users are authorized to use on your computer.
Whether or not a user or group's actions are recorded in the event log.


1.9.3 How policy is applied to a computer that is joined to a domain


If your local computer is joined to a domain, it obtains security policy from the domain's policy or
from the policy of any organizational unit that it is a member of. If policy comes from more than
one source, any conflicts are resolved in this order of precedence, from highest to lowest:
Organizational unit policy
Domain policy
Site policy
Local computer policy
When you modify the security settings on your local computer by using the local security policy,
you are directly modifying the settings on your computer. Therefore, the settings take effect
immediately, but this may be only temporary. The settings remain in effect on your
local computer until the next refresh of Group Policy security settings, when the security settings
that are received from Group Policy will override your local settings wherever there are conflicts.
The security settings are refreshed every 90 minutes on a workstation or server and every 5
minutes on a domain controller. The settings are also refreshed every 16 hours, whether or not
there are any changes.
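The precedence order above amounts to a layered merge in which higher-precedence sources overwrite lower ones; a minimal sketch, with made-up setting names:

```python
# Sketch of policy precedence: apply settings from lowest to highest
# precedence, so later (higher-precedence) sources overwrite earlier
# ones. The setting names here are illustrative.

def effective_policy(local, site, domain, ou):
    result = {}
    # lowest precedence first, highest precedence last
    for source in (local, site, domain, ou):
        result.update(source)
    return result

policy = effective_policy(
    local={"MinimumPasswordLength": 6, "AuditLogonEvents": "off"},
    site={},
    domain={"MinimumPasswordLength": 8},
    ou={"AuditLogonEvents": "on"},
)
print(policy)  # {'MinimumPasswordLength': 8, 'AuditLogonEvents': 'on'}
```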

1.9.4 Using Security Settings


You can change the security configuration on multiple computers in two ways:
Create a security policy by using a security template with Security Templates, and then
import it through Security Settings into a Group Policy object.
To analyze system security
Using the Windows interface
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, and then click
Open Database.
Where?
o Console Root
o Security Configuration and Analysis
3. In Open database, do one of the following:
o To create a new database, in File name, type a file name, and then click Open.
o To open an existing database, click a database, and then click Open.
4. If you are creating a new database, in Import Template, click a template, and then click
Open.
5. In the details pane, right-click Security Configuration and Analysis, and then click
Analyze Computer Now.
6. Do one of the following:
o To use the default log, in Error log file path, click OK.
o To specify a different log, in Error log file path, type a valid path and file name,
and then click OK.
To configure local computer security
Using the Windows interface
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, and then click
Open Database.
Where?
o Console Root
o Security Configuration and Analysis
3. In Open database, do one of the following:
o To create a new database, in File name, type a file name, and then click Open.
o To open an existing database, click a database, and then click Open.


4. If you are creating a new database, in Import Template, click a template, and then click
Open.
5. In the console tree, right-click Security Configuration and Analysis, and then click
Configure Computer Now.
6. Do one of the following:
o To use the default log, in Error log file path, click OK.
o To specify a different log, in Error log file path, type a valid path and file name.
To import a security template to a Group Policy object
1. Perform one of these steps:
If you are on a workstation or server that is joined to a domain and you would like to
import a security template for a Group Policy object:
o Click Start, click Run, type mmc, and then click OK.
o On the File menu, click Add/Remove Snap-in.
o In Add/Remove Snap-in, click Add, and then in Add Standalone Snap-in,
double-click Group Policy Object Editor.
o In Select Group Policy Object, click Browse, select the policy object you
would like to modify, click OK, and then click Finish.
o Click Close, and then click OK.
If you are on a domain controller and you would like to import a security template for
a domain or organizational unit:
o Open Active Directory Users and Computers.
o In the console tree, right-click the domain or organizational unit you want to
set Group Policy for.
o Click Properties, and then click the Group Policy tab.
o Click Edit to open the Group Policy object you want to edit, or click New to
create a new Group Policy object, and then click Edit.
2. In the Group Policy console tree, right-click Security Settings.
Where?
o Group Policy Object Policy
o Computer Configuration
o Windows Settings
o Security Settings
3. Click Import Policy, click the security template you want to import, and then click Open.
4. (Optional) If you want to clear the database of any previously stored security templates,
select the Clear this database before importing check box.
To import a security template
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, and then click
Import Template.
Where?
o Console Root
o Security Configuration and Analysis
3. (Optional) To clear the database of any template that has been previously stored, select
the Clear this database before importing check box.
4. Click a template file, and then click Open.
5. Repeat these steps for each template that you want to merge into the database.
To export a security template
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, and then click
Export Template.
Where?
o Console Root
o Security Configuration and Analysis
3. Type a valid file name and a path to the location where your template will be saved.
To edit the analysis database
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, click Analyze
Computer Now, and then click OK.
Where?
o Console Root
o Security Configuration and Analysis
3. After Security Configuration and Analysis has finished analyzing the computer, in the
console tree, click the group of settings that contains the security attribute you want to edit.
4. In the details pane, double-click the security attribute that you want to edit.
5. Confirm that the Define this policy in the database check box is selected.
6. Configure the security attribute setting as you want it to be recorded in the database, and
then click OK.
7. Repeat the last four steps for each security attribute that you want to edit.
8. When you are finished modifying the database settings, in the console tree, right-click
Security Configuration and Analysis, and then click Save.
To customize a predefined security template
1. Open Security Templates.
2. In the console tree, double-click the default path folder (systemroot\Security\Templates),
and in the details pane, right-click the predefined template you want to modify.
3. Click Save As, type a new file name for the security template, and click Save.
4. In the console tree, double-click the new security template to display the security policies,
and navigate until the security attribute you want to modify appears in the details pane.
5. In the details pane, right-click the security attribute and click Properties.
6. Select the Define this policy setting in the template check box, make your changes,
then click OK.
To define a security template
1. Open Security Templates.
2. Right-click the folder where you want to store the new template and click New Template.
3. In Template name, type the name for your new security template.
4. In Description, type a description of your new security template, and then click OK.
5. In the console tree, double-click the new security template to display the security areas
and navigate until the security setting you want to configure is in the details pane.
6. In the details pane, right-click the security setting you want to configure and click
Properties.
7. Select the Define this policy setting in the template check box, edit the settings, and
then click OK.
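A security template created this way is saved as a plain-text .inf file under the template search path. The sketch below shows the general shape of such a file; the section names follow the standard security template format, but every specific value (password length, lockout count, and so on) is an illustrative assumption, not a recommendation from this guide:

```ini
[Unicode]
Unicode=yes

[Version]
signature="$CHICAGO$"
Revision=1

; Illustrative account policy settings (assumed values)
[System Access]
MinimumPasswordLength = 8
PasswordComplexity = 1
MaximumPasswordAge = 42
LockoutBadCount = 5
```

Editing a setting through the Security Templates snap-in, as in the procedure above, simply writes entries like these into the template file.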
To set a new security template path
1. Open Security Templates.
2. In the console tree, right-click Security Templates, click New Template Search Path,
and select the new location.
A folder with the path of the new location appears in the console tree.
3. (Optional) To add a description to the new template path (in addition to the name), in the
console tree, right-click the new template folder, click Set Description, and then type the
description you want to add.
To edit local security settings
1. Open Local Security Settings.
2. Do one of the following:
o To edit Password Policy or Account Lockout Policy, in the console tree, click
Account Policies.
o To edit an Audit Policy, User Right Assignment, or Security Options, in the
console tree, click Local Policies.
3. In the console tree, click the folder that contains the policy you want to modify, and then,
in the details pane, double-click the policy that you want to modify.
4. Make the changes you want and click OK.
5. To change other policies, repeat the three previous steps.
To apply a security template to local policy
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, and then click
Open Database.
Where?
o Console Root
o Security Configuration and Analysis
3. In File name, type a file name and then click Open.
4. (Optional) If you are creating a new policy, in Import Template, do one of the following:
o To clear the database of any previously imported templates, select the Clear this
database before importing check box.
o To append this template to any previously imported templates, confirm that the
Clear this database before importing check box is cleared. (The last template that was
imported has precedence over any conflicting values.)
5. In Import Template, click a template, and then click Open.
6. In the console tree, right-click Security Configuration and Analysis, and then click
Configure Computer Now.
7. Do one of the following:
o To use the default error log, in Error log file path, click OK.
o To specify a different error log, in Error log file path, type a valid path and file
name.
To assign user rights for your local computer
1. Open Local Security Settings.
2. In the console tree, click User Rights Assignment.
Where?
o Security Settings
o Local Policies
o User Rights Assignment
3. In the details pane, double-click the user right you want to change.
4. In the Properties dialog box for that user right, click Add User or Group.
5. Add the user or group and click OK.
To reapply default security settings
Using the Windows interface
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, and then click
Open Database.
Where?
o Console Root
o Security Configuration and Analysis
3. In File name, type the file name, and then click Open.
4. Do one of the following:
o For a domain controller, in the console tree, right-click Security Configuration
and Analysis, click Import Template, and then click DC security.
o For other computers, in the console tree, right-click Security Configuration and
Analysis, click Import Template, and then click setup security.
5. Select the Clear this database before importing check box, and then click Open.
6. In the console tree, right-click Security Configuration and Analysis, and then click
Configure Computer Now.
7. Do one of the following:
o To use the default log specified in Error log file path, click OK.
o To specify a different log, in Error log file path, type a valid path and file name,
and then click OK.
8. When the configuration is done, right-click Security Configuration and Analysis, and
then click View Log File.
Section 2

2. Planning, Implementing, and Maintaining a Network Infrastructure
2.1 Overview of Designing a TCP/IP Network
2.1.1 Process for Designing a TCP/IP Network
2.1.2 Benefits of Windows Server 2003 TCP/IP
2.1.3 Planning the IP-Based Infrastructure
2.1.4 Designing the Access Tier
2.1.5 Designing the Distribution Tier
2.1.6 Designing the Core Tier
2.2 Developing Routing Strategies
2.2.1 Choosing Hardware or Software Routing
2.2.2 Choosing Static or Dynamic Routing
2.2.3 Distance Vector Routing Protocols
2.2.4 Link State Routing Protocols
2.3 Designing an IP Addressing Scheme
2.3.1 Creating a Structured Address Assignment Model
2.3.2 Planning Classless IP Addressing
2.3.3 Determining the Number of Subnets and Hosts
2.4 Planning Classless Routing
2.4.1 Planning Classless Noncontiguous Subnets
2.4.2 Noncontiguous subnets with classful routing
2.4.3 Noncontiguous subnets with classless routing
2.5 Using Route Summarization
2.6 Planning Variable Length Subnet Masks (VLSM)
2.7 Choosing an Address Allocation Method
2.8 Choosing Public or Private Addresses
2.8.1 Public Addresses
2.8.2 Private Addresses
2.8.3 Unauthorized Addresses
2.8.4 Network Address Translation
2.9 Planning an IP Configuration Strategy
2.10 DHCP Integration with DNS and WINS
2.11 Setting Up the Physical Network
2.12 Name Resolution Technologies
2.12.1 DNS Name Resolution
2.12.2 WINS Name Resolution
2.12.3 Overview of DNS Deployment
2.12.4 Process for Deploying DNS
2.12.5 DNS Concepts
2.12.6 DNS Roles
2.12.7 DNS designer role
2.12.8 Tools for Deploying DNS
2.12.9 Designing a DNS Namespace
2.12.9.1 Identifying Your DNS Namespace Requirements
2.12.9.2 Creating Internal and External Domains
2.12.9.3 Configuring Name Resolution for Disjointed Namespaces
2.12.10 Designing DNS Zones
2.12.10.1 Choosing a Zone Type
2.12.10.2 Primary Zones
2.12.10.3 Secondary Zones
2.12.10.4 Stub Zones
2.12.10.5 Stub Zones and Conditional Forwarding
2.12.10.6 Active Directory-Integrated Zones
2.12.10.7 Storing Active Directory-Integrated Zones
2.12.10.8 Using Forwarding
2.12.10.9 Securing Your DNS Infrastructure
2.12.10.10 Developing a DNS Security Policy
2.12.10.11 Low-Level DNS Security Policy
2.12.10.12 Mid-Level DNS Security Policy
2.13 Overview of WINS Deployment
2.13.1 WINS Deployment Process
2.13.2 Designing Your WINS Replication Strategy
2.13.3 Specifying Automatic Partner Configuration
2.13.4 Determining Replication Partners
2.13.5 Configuring Replication Across WANs
2.13.6 Configuring Replication Across LANs
2.14 Troubleshooting DNS clients
2.15 Troubleshooting WINS servers
2.1 Overview of Designing a TCP/IP Network
Designing your IP deployment includes deciding how you want to implement IP in a new
environment or, for most organizations, examining your existing infrastructure and deciding
what to change. Windows Server 2003 TCP/IP, the most widely used networking protocol, can
connect different types of systems, provide a framework for client/server applications, and give
users access to the Internet. TCP/IP is included in the Microsoft Windows Server 2003,
Standard Edition; Windows Server 2003, Enterprise Edition; Windows Server 2003,
Datacenter Edition; and Windows Server 2003, Web Edition operating systems.
Before you start the TCP/IP design process, inventory your hardware and software and create or
update a map of your network topology. Preparing an inventory and network map can save time
and help you focus on the design decisions you want to address. After you review your existing
network, you might upgrade several servers to Windows Server 2003 in order to take advantage
of end-to-end support for TCP/IP, or you might decide to redesign your entire network to improve
its efficiency and prepare for the future of IP networking. Determine which design tasks are
relevant to your environment, and then decide what changes you want to make to your network.
2.1.1 Process for Designing a TCP/IP Network
Figure 1.1 shows the design stages involved in deploying TCP/IP. Although the figure lists the
stages sequentially, you must consider each topic in relation to the others rather than as a linear
step-by-step process.
Figure Designing a TCP/IP Network
2.1.2 Benefits of Windows Server 2003 TCP/IP
Using TCP/IP in a Windows Server 2003 configuration offers the following advantages:
Enables the most widely used network protocol. Windows Server 2003 TCP/IP is a
complete, standards-based implementation of the most widely accepted networking protocol
in the world. IP is routable, scalable, and efficient. IP forms the basis for the Internet, and it is
also used as the primary network technology on most major enterprise networks in
production today. You can configure computers running Windows Server 2003 with TCP/IP to
perform nearly any role that a networked computer requires.
Connects dissimilar systems. Although all modern networking operating systems offer
TCP/IP support, Windows Server 2003 TCP/IP provides the best platform for connecting
Windows-based systems to earlier Windows systems and to non-Windows systems. Most
standard connectivity utilities are available in Windows Server 2003 TCP/IP, including the
File Transfer Protocol (FTP) program, the Line Printer (LPR) program, and Telnet, a terminal
emulation protocol.
Provides client/server framework. Windows Server 2003 TCP/IP provides a cross-
platform client/server framework that is robust, scalable, and secure. Windows Server 2003
TCP/IP offers the Windows Sockets programming interface, which is ideal for developing
client/server applications that can run on Windows Sockets-compliant TCP/IP protocol
implementations from other vendors.
Provides access to the Internet. Windows Server 2003 TCP/IP can provide users with
a method of gaining access to the Internet. A computer running Windows Server 2003 can be
configured to serve as an Internet Web site, it can function in a variety of other roles as an
Internet client or server, and it can use nearly all of the Internet-related software available
today.
2.1.3 Planning the IP-Based Infrastructure
To create or expand an enterprise network, you can choose from many design models, including
a network infrastructure model based on the three-tier design model. This model, a hierarchical
network design model described by Cisco Systems, Inc. and other networking vendors, is widely
used as a reference in the design of enterprise networks.
Figure shows the tasks involved in creating a three-tier TCP/IP infrastructure.
Figure Planning the IP-Based Infrastructure
The modular nature of a hierarchical model such as the three-tier model can simplify deployment,
capacity planning, and troubleshooting in a large internetwork. In this design model, the tiers
represent the logical layers of functionality within the network. In some cases, network devices
serve only one function; in other cases, the same device may function within two or more tiers.
The three tiers of this hierarchical model are referred to as the core, distribution, and access tiers.
Figure illustrates the relationship between network devices operating within each tier.
Figure Three-Tier Network Design Model
2.1.4 Designing the Access Tier
The access tier is the layer in which users connect to the rest of the network, including individual
workstations and workgroup servers. The access tier usually includes a relatively large number of
low- to medium-speed access ports, whereas the distribution and core tiers usually contain fewer,
but higher-speed network ports. Design the access tier with efficiency and economy in mind, and
balance the number and types of access ports to keep the volume of access requests within the
capacity of the higher layers.
2.1.5 Designing the Distribution Tier
The distribution tier distributes network traffic between related access layers, and separates the
locally destined traffic from the network traffic destined for other tiers through the core.
Network security and access control policies are often implemented within this tier. Network
devices in this layer can incorporate technologies such as firewalls and address translators.
The distribution tier is often the layer in which you define subnets; through the definition of
subnets, distribution devices often function as routers. Decisions about routing methods and
routing protocols affect the scalability and performance of the network in this tier.
A server network in the distribution layer might house critical network services and centralized
application servers. Computers running Windows Server 2003 can be used there to run the
Active Directory directory service, DNS, DHCP, and other core infrastructure services.
2.1.6 Designing the Core Tier
The core tier facilitates the efficient transfer of data between interconnected distribution tiers. The
core tier typically functions as the high-speed backbone of the enterprise network. This tier can
include one or more building-wide or campus-wide backbone local area networks (LANs),
metropolitan area network (MAN) backbones, and high-speed regional wide area network (WAN)
backbones.
The primary design goal for the core is reliable, high-speed network performance. As a general
rule, locate any feature that might affect the reliability or performance of this tier in an access or
distribution tier instead.
Select highly reliable network equipment for the core tier, and design a fault-tolerant core system
whenever possible. Many products meet these criteria, and most major network vendors offer
complete solutions to meet the requirements of the core tier.
2.2 Developing Routing Strategies
After planning your network infrastructure based on your design model, plan how to implement
routing. Figure shows the tasks involved in developing a unicast routing strategy.
Figure Developing a Routing Strategy
To plan an effective routing solution for your environment, you must understand the differences
between hardware routers and software routers; static routing and dynamic routing; and distance
vector routing protocols and link state routing protocols.
2.2.1 Choosing Hardware or Software Routing
A router is a device that holds information about the state of its own network interfaces and
contains a list of possible sources and destinations for network traffic. The router directs incoming
and outgoing packets based on that information. By projecting network traffic and routing needs
based on the number and types of hardware devices and applications used in your environment,
you can better decide whether to use a dedicated hardware router, a software-based router, or a
combination of both. Generally, dedicated hardware routers handle heavier routing demands
best, and less expensive software-based routers are sufficient to handle lighter routing loads.
A software-based routing solution, such as the Windows Server 2003 Routing and Remote
Access service, can be ideal on a small, segmented network with relatively light traffic between
subnets. Conversely, enterprise network environments that have a large number of network
segments and a wide range of performance requirements might need a variety of hardware-
based routers to perform different roles throughout the network.
2.2.2 Choosing Static or Dynamic Routing
Routing can be either static or dynamic, depending on how routing information is generated and
maintained:
In static routing, routing information is entered manually by an administrator and remains constant throughout the router's operation.
In dynamic routing, a router is configured to automatically generate routing information and share the information with neighboring routers.
You must decide where best to implement each type of routing.
Static Routing
In static routing, a network administrator enters static routes in the routing table manually by
indicating:
The network ID, consisting of a destination IP address and a subnet mask.
The IP address of a neighboring router (the next hop).
The router interface through which to forward the packets to the destination.
Static routing has significant drawbacks. Because a network administrator defines a static route,
errors are more likely than with a dynamically assigned route. A simple typographical error can
create chaos on the network. An even greater problem is the inability of a static route to adapt to
topology changes. When the topology changes, the administrator might have to make changes to
the routing tables on every static router. This does not scale well on a large internetwork.
However, static routing can be effective when used in combination with dynamic routing. Instead
of using static routing exclusively, you can use a static route as the redundant backup for a
dynamically configured route. In addition, you might use dynamic routing for most paths but
configure a few static paths where you want the network traffic to follow a particular route. For
example, you might configure routers to force traffic over a given path to a high-bandwidth link.
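A static routing table of the kind described above (network ID, next hop, interface) can be modeled in a few lines. This Python sketch uses invented addresses and interface names; it shows the longest-prefix-match lookup a router performs, with a static default route (0.0.0.0/0) as the fallback:

```python
import ipaddress

# Each entry: (destination network ID, next-hop router IP, outgoing interface).
# All values below are made-up examples, not recommendations.
static_routes = [
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.0.2", "eth0"),
    (ipaddress.ip_network("10.1.5.0/24"), "192.168.0.3", "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"),  "192.168.0.1", "eth0"),  # default route
]

def lookup(dest_ip):
    """Return the route entry a packet to dest_ip would follow."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [r for r in static_routes if dest in r[0]]
    # Longest-prefix match: the most specific matching network ID wins.
    return max(matches, key=lambda r: r[0].prefixlen)

lookup("10.1.5.9")   # matches 10.1.5.0/24 -> next hop 192.168.0.3 via eth1
lookup("8.8.8.8")    # only the default route matches -> 192.168.0.1
```

A typographical error in any one of these manually entered rows would silently misroute traffic, which is the scaling problem the text describes.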
Dynamic Routing Protocols
Conceptually, the dynamic routing method has two parts: the routing protocol that is used
between neighboring routers to convey information about their network environment, and the
routing algorithm that determines paths through that network. The protocol defines the method
used to share the information externally, whereas the algorithm is the method used to process the
information internally.
The routing tables on dynamic routers are updated automatically based on the exchange of
routing information with other routers. The most common dynamic routing protocols are:
Distance vector routing protocols
Link state routing protocols
Understanding how these protocols work enables you to choose the type of dynamic routing that
best suits your network needs.
2.2.3 Distance Vector Routing Protocols
A distance vector routing protocol advertises the number of hops to a network destination (the
distance) and the direction in which a packet can reach a network destination (the vector). The
distance vector algorithm, also known as the Bellman-Ford algorithm, enables a router to pass
route updates to its neighbors at regularly scheduled intervals. Each neighbor then adds its own
distance value and forwards the routing information on to its immediate neighbors. The result of
this process is a table containing the cumulative distance to each network destination.
Distance vector routing protocols, the earliest dynamic routing protocols, are an improvement
over static routing, but have some limitations. When the topology of the internetwork changes,
distance vector routing protocols can take several minutes to detect the change and make the
appropriate corrections.
One advantage of distance vector routing protocols is simplicity. Distance vector routing protocols
are easy to configure and administer. They are well suited for small networks with relatively low
performance requirements.
Most distance vector routing protocols use a hop count as a routing metric. A routing metric is a
number associated with a route that a router uses to select the best of several matching routes in
the IP routing table. The hop count is the number of routers that a packet must cross to reach a
destination.
Routing Information Protocol (RIP) is the best known and most widely used of the distance vector
routing protocols. RIP version 1 (RIP v1), which is now outmoded, was the first routing protocol
accepted as a standard for TCP/IP. RIP version 2 (RIP v2) provides authentication support,
multicast announcing, and better support for classless networks. The Windows Server 2003
Routing and Remote Access service supports both RIP v1 and RIP v2 (for IPv4 only).
Using RIP, the maximum hop count from the first router to the destination is 15. Any destination
greater than 15 hops away is considered unreachable. This limits the diameter of a RIP
internetwork to 15. However, if you place your routers in a hierarchical structure, 15 hops can
cover a large number of destinations.
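The distance vector mechanism can be sketched briefly. In this illustrative Python model (the router names and topology are invented), each relaxation round stands in for neighbors exchanging their routing tables and adding their own distance value, and a count of 16 marks a destination unreachable, as in RIP:

```python
# Adjacency list for a chain of routers; each link counts as one hop.
links = {
    "R1": ["R2"],
    "R2": ["R1", "R3"],
    "R3": ["R2", "R4"],
    "R4": ["R3"],
}

INFINITY = 16  # RIP treats 16 hops as unreachable

def hop_counts(source):
    """Bellman-Ford style hop counts from source to every router."""
    dist = {router: INFINITY for router in links}
    dist[source] = 0
    # Repeated relaxation models neighbors passing updates at intervals;
    # after len(links) - 1 rounds the table has converged.
    for _ in range(len(links) - 1):
        for router, neighbors in links.items():
            for n in neighbors:
                dist[n] = min(dist[n], dist[router] + 1)
    return dist

hop_counts("R1")  # R4 ends up 3 hops from R1
```

The repeated full-table exchanges are also why the text notes that distance vector protocols can take minutes to converge after a topology change.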
2.2.4 Link State Routing Protocols
Link state routing protocols address some of the limitations of distance vector routing protocols.
For example, link state routing protocols provide faster convergence than do distance vector
routing protocols. Convergence is the process by which routers update routing tables after a
change in network topology; the change is replicated to all routers that need to know about it.
Although link state routing protocols are more reliable and require less bandwidth than do
distance vector routing protocols, they are also more complex, more memory-intensive, and place
a greater load on the CPU.
Unlike distance vector routing protocols, which broadcast updates to all routers at regularly
scheduled intervals, link state routing protocols provide updates only when a network link
changes state. When such an event occurs, a notification in the form of a link state advertisement
is sent throughout the network.
The Windows Server 2003 Routing and Remote Access service supports the Open Shortest Path
First (OSPF) protocol, the best known and most widely used link state routing protocol. OSPF is
an open standard developed by the Internet Engineering Task Force (IETF) as an alternative to
RIP. OSPF compiles a complete topological database of the internetwork. The shortest path first
(SPF) algorithm, also known as the Dijkstra algorithm, is used to compute the least-cost path to
each destination. Whereas RIP calculates cost on the basis of hop count only, OSPF can
calculate cost on the basis of metrics such as link speed and reliability in addition to hop count.
Unlike RIP, OSPF can support an internetwork diameter of 65,535 (assuming that each link is
assigned a cost of 1). OSPF transmits multicast frames, reducing CPU usage on a LAN. You can
hierarchically subdivide OSPF networks into areas, reducing router memory overhead and CPU
overhead. Like RIP v2, OSPF supports variable length subnet masks (VLSM) and noncontiguous
subnets.
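The shortest path first computation at the heart of OSPF can be illustrated with a small Dijkstra sketch. The topology and link costs below are invented; a real OSPF deployment derives costs from metrics such as link speed and reliability, as the text notes:

```python
import heapq

# Weighted links: unlike a pure hop count, a cost can reflect link quality.
# Here the direct A-C link is "slow" (cost 5), so A-B-C (cost 2) is preferred.
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 1},
    "C": {"A": 5, "B": 1},
}

def shortest_costs(source):
    """Dijkstra's algorithm: least cost from source to every router."""
    costs = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > costs.get(node, float("inf")):
            continue  # stale entry
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return costs

shortest_costs("A")  # reaches C at cost 2 via B, not over the direct cost-5 link
```

Because every router runs this computation over the same topological database, link state networks converge faster than distance vector networks that propagate tables hop by hop.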
2.3 Designing an IP Addressing Scheme
Before assigning addresses, design an IP addressing scheme that meets the requirements of
your networking infrastructure. Figure 1.5 shows the tasks involved in designing your IP
addressing system, including planning your address assignment model, address allocation, and
public or private addressing. Most organizations choose to use classless IP addressing, classless
IP routing protocols, and route summarization.
Figure Designing an IP Addressing Scheme
2.3.1 Creating a Structured Address Assignment Model
You can ease the burden of enterprise internetwork administration by designing a structured
address assignment model. A structured address assignment model makes troubleshooting
easier and more systematic and helps you interpret network maps and locate specific devices. It
also simplifies the use of network management software. For enterprise scalability, assign
address blocks hierarchically.
The structured address assignment model reflects more than just hierarchical concerns. To
maximize network stability and scalability, assign a block of addresses based on a physical
network rather than on membership within a department or team, to avoid complications when
you move a workstation to a new location.
As a general rule, assign static addresses to routers and servers, and assign dynamic addresses
to workstations. This scheme minimizes manual addressing, reducing the chances of address
duplication and stabilizing the network's addressing structure. You can assign meaningful
numbers when using static addresses; for example, reserve host addresses in the low or high
portion of the range, and manually assign these addresses to routers or servers.
To design a structured model for assigning addresses:
Plan classless IP addressing.
Plan classless routing.
Use route summarization.
Plan variable length subnet masks (VLSM).
Plan supernetting and classless interdomain routing (CIDR).

2.3.2 Planning Classless IP Addressing
Classless IP addressing makes traditional classful IP addressing methods (restricted to the
standard IP address classes in their default formats) out of date for enterprise networks. Of the
five address classes, Class A, B, and C addresses, collectively known as IPv4 unicast addresses,
are assigned to specific devices on an IPv4 network. Class D addresses, known as multicast
addresses, are used for IP multicasting (simultaneously sending a message to more than one
network destination). Class E addresses are reserved for experimental purposes.
To be able to use subnetting or supernetting, you must first understand the default formats of the
unicast addresses. Unicast addresses have the following formats:
All 32-bit IPv4 addresses contain four octets of 8 bits each, often represented as four
decimal numbers separated by dots (known as dotted decimal notation).
In Class A addresses, the first byte, or octet, represents the network ID, and the three
remaining bytes are used for node addresses.
In Class B addresses, the first 2 bytes represent the network ID, and the last 2 bytes are
used for nodes.
In Class C addresses, the first 3 bytes are used for the network ID, and the final byte is
used for nodes.
Without some means of subdividing class-designated networks, all available IP addresses would
have been depleted long ago. Classless IP addressing, which allows subnetting, was developed
to handle this problem.
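The default classful formats above are determined entirely by the first octet of the address. A small Python sketch makes the mapping concrete (the special handling of 0 and the 127 loopback block is a simplification):

```python
def address_class(ip: str) -> str:
    """Return the classful designation of a dotted-decimal IPv4 address."""
    first = int(ip.split(".")[0])
    if first in (0, 127):
        return "reserved"   # 0 is reserved; 127 is the loopback block
    if first <= 126:
        return "A"          # first octet = network ID, three octets for nodes
    if first <= 191:
        return "B"          # first two octets = network ID
    if first <= 223:
        return "C"          # first three octets = network ID
    if first <= 239:
        return "D"          # multicast
    return "E"              # experimental

address_class("131.107.65.37")  # "B" -- the example address used below
```

Classless addressing discards this fixed mapping and instead lets the subnet mask (or prefix length) say where the network ID ends.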
2.3.3 Determining the Number of Subnets and Hosts
To better use the address space, instead of using the unicast addresses in their default formats,
you can use subnet addressing, which lets you "borrow" additional bits from the host part of the
address to divide the network into subnets. In subnetting, the subnet mask consists of the octets
assigned to the network plus the bits added for the subnet. You can use subnet mask notation to
indicate these leftmost contiguous bits.
For example, for a Class B address, which has a default subnet mask of 255.255.0.0, you might
allocate an additional 8 bits for subnets. That is, for a Class B address such as 131.107.65.37,
you can use the following subnet mask, shown in both decimal and binary notation.
Subnet Mask in Decimal Notation Subnet Mask in Binary Notation
255.255.255.0 11111111 11111111 11111111 00000000
By using 8 host bits for subnetting, you obtain 256 (that is, 2^8) subnetted network IDs (subnets),
each supporting as many as 254 hosts. The number of hosts per subnet is 254 (2^8 minus 2)
because 8 bits remain for the host ID, and you subtract 2 because subnetting rules exclude
the host IDs consisting of all ones or all zeros.
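The arithmetic above can be checked with Python's standard ipaddress module; 131.107.0.0/16 is the example Class B network from the text:

```python
import ipaddress

# A Class B network subnetted with a /24 mask (255.255.255.0):
# 8 of the 16 host bits are borrowed for subnetting.
net = ipaddress.ip_network("131.107.0.0/16")
subnets = list(net.subnets(new_prefix=24))

print(len(subnets))             # 256 subnets (2**8)
print(subnets[0].netmask)       # 255.255.255.0
print(subnets[0].num_addresses - 2)  # 254 usable hosts (2**8 - 2)
```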
An alternative to subnet mask notation is the network prefix length notation. A network prefix is
shorthand for a subnet mask, expressing the number of high-order bits that constitute the
subnetted network ID portion of the address in the format <IP address>/<# of bits>, where # of
bits defines the network/subnet part of the IP address, and the remaining bits represent the host
ID portion of the address.
The following is the network prefix length notation for the Class B address in the previous
example:
131.107.65.37/24
The bit notation "/24" refers to the number of high-order bits set to 1 in the binary notation for the
subnet mask, leaving 8 bits for hosts (the eight bits set to 0).
By contrast, if you anticipate needing only 32 subnets rather than 256, each of the 32 subnets
can support as many as 2,046 hosts (2^11 minus 2). That subnet mask has the following decimal
and binary notations.
Subnet Mask in Decimal Notation Subnet Mask in Binary Notation
255.255.248.0 11111111 11111111 11111000 00000000
The following network prefix length notation indicates the 21 bits needed to create as many as
32 subnets:


131.107.65.37/21.
Again, "/21" indicates the number of high-order bits set to 1 in binary notation, leaving 11 bits (the
11 zeros) for the host ID portion of the address.
To determine the appropriate number of subnets versus hosts for your organization's network,
consider the following:
More subnets. Allocating more host bits for subnetting supports more subnets but fewer
hosts per subnet.
More hosts. Allocating fewer host bits for subnetting supports more hosts per subnet, but
limits the growth in the number of subnets.
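This trade-off can be sketched with Python's standard ipaddress module on the same example Class B network (131.107.0.0/16): each additional subnet bit doubles the number of subnets and roughly halves the hosts per subnet.

```python
import ipaddress

# Compare two subnetting choices for the same Class B address space.
net = ipaddress.ip_network("131.107.0.0/16")
for prefix in (21, 24):
    subs = list(net.subnets(new_prefix=prefix))
    hosts = subs[0].num_addresses - 2  # exclude all-zeros and all-ones host IDs
    print(f"/{prefix}: {len(subs)} subnets, {hosts} hosts each")
# /21: 32 subnets, 2046 hosts each
# /24: 256 subnets, 254 hosts each
```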

2.4 Planning Classless Routing


Organizations today typically implement classless routing solutions. With classful routing
protocols, IP hosts and routers recognize only the network address designated by the standard
address classes. An IP host device or a router using a classful protocol such as RIP v1 cannot
recognize subnets.
Classless routing protocols extend the standard Class A, B, or C IP addressing scheme by using
a subnet mask or mask length to indicate how routers must interpret an IP network ID. Classless
routing protocols include the subnet mask along with the IP address when advertising routing
information. Subnet masks representing the network ID are not restricted to those defined by the
address classes, but can contain a variable number of high-order bits. Such subnet mask
flexibility enables you to group several networks as a single entry in a routing table, significantly
reducing routing overhead. In addition to RIP v2 and OSPF, described earlier, classless routing
protocols include Border Gateway Protocol version 4 (BGP4) and Intermediate System to
Intermediate System (IS-IS).
If your network contains routers that support only RIP v1 and you want to upgrade from classful to
classless routing, upgrade the RIP v1 routers to support RIP v2 or use another protocol such as
OSPF. For example, you might use VLSM to implement subnets of different sizes or CIDR to
implement supernetting. (VLSM and CIDR are described later in this chapter.)

2.4.1 Planning Classless Noncontiguous Subnets


One reason that classful routing is out of date is that classful routing protocols cannot reliably
handle noncontiguous subnets of a subnetted class-based network ID. As mentioned earlier,
classful routing protocols recognize only those networks indicated by an address class. Because
classful protocols do not transmit subnet mask or prefix length information, noncontiguous
subnets, when summarized by a classful routing protocol, can have the same class-based
network ID.

2.4.2 Noncontiguous subnets with classful routing


Noncontiguous subnets occur when another network with a different network ID separates
subnets of a classful network. For example, the two routers in Figure 1.6 separate two subnets
that each use the base prefix 10.0.0.0/8, which is a Class A private network. A segment of
another class-based network connects the two routers.
Figure 1.6: Classful Routing Not Appropriate for Noncontiguous Subnets


Each router in Figure 1.6 must use a subnet mask to look up a match in the routing table.
Because a classful address, by definition, has only its class-based default subnet mask, the
router uses the network mask that corresponds to the class of the subnet ID when advertising the
route for the subnet. With classful routing, each of the routers in Figure 1.6 summarizes and
advertises the class-based network ID of 10.0.0.0/8, resulting in two routes to 10.0.0.0/8, each of
which might have a different metric. Therefore, a packet meant for one subnet could be
incorrectly routed to the other subnet. In the figure, the arrows represent the routes advertised by
the routers.

2.4.3 Noncontiguous subnets with classless routing


The figure below also shows an unrelated network connecting two noncontiguous subnets. In this
example, using classless routing, the locations on the noncontiguous subnets are unambiguous
because the classless protocol includes a subnet mask when advertising the route. Routers in the
intermediate network can distinguish between the two noncontiguous subnets.
Figure Classless Routing Appropriate for Noncontiguous Subnets
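As an illustration of why the advertised mask resolves the ambiguity, a minimal longest-prefix-match lookup can be sketched as follows; the two /16 routes and the next-hop names are hypothetical:

```python
import ipaddress

# Hypothetical classless routing table: two noncontiguous subnets of
# 10.0.0.0/8, each advertised with its own prefix length, so they remain
# distinguishable to routers in the intermediate network.
routes = {
    ipaddress.ip_network("10.1.0.0/16"): "Router A",
    ipaddress.ip_network("10.2.0.0/16"): "Router B",
}

def next_hop(addr):
    """Return the route with the most specific (longest) matching prefix."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    if not matches:
        return None
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.2.33.7"))  # Router B
```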

2.5 Using Route Summarization


With route summarization, or aggregation, in a hierarchical routing infrastructure, one route in a
routing table represents many routes. A routing table entry for the highest level (the network) is
also the route used for subnets and sub-subnets. In contrast, in a flat routing infrastructure, the
routing table on every router in the network contains an entry for each network segment. When
you use flat routing, the network IDs have no network/subnet structure and cannot be
summarized. RIP-based Internet Packet Exchange (IPX) internetworks use flat network
addressing and have a flat routing infrastructure.
Using route summarization, you can contain topology changes occurring in one area of the
network within that area. Route summarization simplifies routing tables and reduces the
exchange of routing information, but it requires more planning than does a flat routing
infrastructure.
To support route summarization, your IP addressing scheme must meet the following
requirements:
Classless routing protocols (those including subnet mask or prefix length information
along with the IP address) must be used.
All IP addresses used in route summarization must share identical high-order bits.
The length of the prefix can be any number of bits up to 32 (for IPv4).
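As a sketch of these requirements, Python's standard ipaddress module can collapse contiguous routes that share their high-order bits into a single summary entry; the 10.1.x.0/24 subnets are hypothetical examples:

```python
import ipaddress

# Route summarization: four contiguous /24 subnets that share the same
# high-order 22 bits collapse into one /22 routing-table entry.
subnets = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```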

2.6 Planning Variable Length Subnet Masks (VLSM)


Variable length subnet masks (VLSMs) allow you to use different prefix lengths at different
locations so that subnets of different sizes can coexist on the same network. Instead of using one
subnet mask throughout the network, you apply several masks to the same address space,
producing subnets of different sizes. For example, given the Class B network ID of 131.107.0.0,
you can configure one subnet with as many as 32,766 hosts, 15 subnets with as many as 2,046
hosts, and 8 subnets with as many as 254 hosts.
Tip


When using VLSM, do not accidentally overlap blocks of addresses. If possible, start with
equal-size subnets and then subdivide them.
VLSM also can be used when a point-to-point WAN link connects two routers. One way to handle
such a WAN link is to create a small subnet consisting of only two addresses. Without VLSM, you
might divide a Class C network ID into 64 equal-size two-host subnets. If only one WAN
link is in use, all the subnets but one serve no purpose, wasting 252 addresses.
Alternatively, you can divide the Class C network into 16 workgroup subnets of 14 nodes each by
using a prefix length of 28 bits (or, in subnet mask terms, 255.255.255.240). By using VLSM, you
can then subdivide one of those 16 subnets into 4 smaller subnets, each supporting only 2 nodes.
You can use one of the 4 subnets for your existing WAN link and reserve the remaining 3 subnets
for similar links that you might need in the future. To accomplish this sub-subnetting by
using VLSM, use a prefix length of 30 bits (or, in subnet mask terms, 255.255.255.252).
The figure below shows variable length subnetting for two-host WAN subnets.
Figure Variable Length Subnetting of 131.107.106.0
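The sub-subnetting above can be reproduced with Python's standard ipaddress module; note that borrowing two more bits turns one /28 into four /30 sub-subnets (2^2):

```python
import ipaddress

# VLSM sketch: divide the example Class C network 131.107.106.0/24 into
# 16 /28 workgroup subnets (14 hosts each), then sub-subnet one /28 into
# /30 WAN subnets of 2 hosts each.
net = ipaddress.ip_network("131.107.106.0/24")
workgroups = list(net.subnets(new_prefix=28))
print(len(workgroups))                 # 16 workgroup subnets
wan_links = list(workgroups[-1].subnets(new_prefix=30))
print(len(wan_links))                  # 4 two-host sub-subnets per /28
print(wan_links[0].num_addresses - 2)  # 2 usable addresses per WAN link
print(wan_links[0].netmask)            # 255.255.255.252
```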

If your network includes numerous WAN links, each with its own subnet, this approach can
require significant administrative overhead. If you do not use route summarization, each subnet
requires another entry in the routing table, increasing the overhead of the routing process.
Some routers support unnumbered connections; a link with unnumbered connections does not
require its own subnet.

2.7 Choosing an Address Allocation Method


Choose an address allocation method that best fits your structured address model. Addressing by
topology is recommended. However, you can choose one or more of the following methods:
Random address allocation. Under a random addressing structure, you can assign
blocks of addresses randomly. Random address allocation might be the most frequently used
address allocation method, but it is the least desirable. For a small network where no
significant growth is anticipated, this approach might be appropriate. However, if the network
does grow, random address allocation can cause extra work for network administrators.
Summarizing the random collection of routes might be difficult or impossible. This method
can cause stability problems, with numerous routes being advertised to the core tier.
Addressing by organization chart. To base your address structure on your
organization chart, you create subnets based on a pool of addresses preassigned to a
department or team. If, for example, you designate the Sales department as 10.2.0.0/16, the
address 10.2.1.0/24 might be the subnet for the sales team at one site and 10.2.2.0/24 might
be the subnet for the sales team at another site. To the extent that contiguous subnets
remain unassigned, this address allocation method offers limited possibilities for route
summarization, but, as a rule, this kind of addressing scheme does not scale well.
Addressing by geographical region. When you base your address structure on
location, a greater degree of summarization is possible. However, as the internetwork of a
geographically diverse organization continues to grow, fewer routes are available for
summarization.
Addressing by topology. By basing your address structure on topology, you can ensure
that summarization takes place and that an internetwork remains scalable and stable.
Addressing by topology makes the addressing structure router-centric, enhancing efficiency.


2.8 Choosing Public or Private Addresses


If you use a direct (routed) connection to the Internet, you must use public addresses. If you use
an indirect connection such as a proxy server or Network Address Translator (NAT), use private
addresses. If your organization is not connected to the Internet, use private addresses (rather
than "unauthorized" addresses) so that if you later connect to the Internet using an indirect
connection, you do not need to change addresses already in use.
If you connect to the Internet by using an Internet service provider (ISP), the ISP might provide
only private addresses. The ISP itself uses public addresses to connect to the Internet.

2.8.1 Public Addresses


IANA assigns public addresses and guarantees them to be globally unique on the Internet. In
addition, routes are programmed into the routers on the Internet so that traffic can reach those
assigned public addresses. That is why public addresses can be reached on the Internet.

2.8.2 Private Addresses


Private addresses are a predefined set of IPv4 addresses that the designers of the Internet
provided for those hosts within an organization that do not require direct access to the Internet.
These addresses do not duplicate already assigned public addresses. RFC 1918, "Address
Allocation for Private Internets," defines the following three private address blocks:
10.0.0.0/8. The 10.0.0.0/8 private network is a Class A network ID that supports the
following range of valid IP addresses: 10.0.0.1 through 10.255.255.254. The 10.0.0.0/8
private network has 24 host bits that a private organization can use for any subnetting
scheme within the organization.
172.16.0.0/12. The 172.16.0.0/12 private network can be interpreted either as a block of
16 Class B network IDs or as a 20-bit assignable address space (20 host bits) that can be
used for any subnetting scheme within the private organization. The 172.16.0.0/12 private
network supports the following range of valid IP addresses: 172.16.0.1 through
172.31.255.254.
192.168.0.0/16. The 192.168.0.0/16 private network can be interpreted either as a block
of 256 Class C network IDs or as a 16-bit assignable address space (16 host bits) that can
be used for any subnetting scheme within the private organization. The 192.168.0.0/16
private network supports the following range of valid IP addresses: 192.168.0.1 through
192.168.255.254.
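The three RFC 1918 blocks can be checked quickly with Python's standard ipaddress module; 131.107.65.37 is the public example address used earlier in this chapter:

```python
import ipaddress

# The three RFC 1918 private blocks; the ipaddress module flags them.
for block in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.ip_network(block)
    print(block, net.is_private, net.num_addresses)

# Boundary check: the last valid host of the 172.16.0.0/12 block.
print(ipaddress.ip_address("172.31.255.254") in
      ipaddress.ip_network("172.16.0.0/12"))        # True
print(ipaddress.ip_address("131.107.65.37").is_private)  # False: public
```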
Because IANA never assigns IP addresses in the private address space as public addresses,
routes for private addresses never exist on the Internet routers. Any number of organizations can
repeatedly use the private address space, which helps to prevent the depletion of public
addresses.
Private addresses cannot be reached on the Internet. Therefore, Internet traffic from a host that
has a private address must either send its requests to an application layer gateway (such as a
proxy server), which has a valid public address, or have its private address translated into a valid
public address by a NAT before it is sent over the Internet.

2.8.3 Unauthorized Addresses


Network administrators of private networks who have no plans to connect to the Internet can
choose any IP addresses they want, even public addresses that IANA has assigned to other
organizations. Such potentially duplicate addresses are known as unauthorized (or illegal)
addresses. Later, if the organization decides to connect directly to the Internet after all, its current
addressing scheme might include addresses that IANA has assigned to other organizations. You
cannot connect to the Internet by using unauthorized addresses.
Do not use unauthorized addresses if even the slightest possibility exists of ever establishing a
connection between your network and the Internet. On some future date, discovering that you


need to quickly replace the IP addresses of all the nodes on a large private network can require
considerable time and interrupt network operation.

2.8.4 Network Address Translation


Network address translation, defined in RFC 3022, is the translation process performed by an IP
router functioning as a network address translator (NAT). A NAT translates IP addresses from
private network addresses used inside an organization to public addresses used outside the
organization. Typically, a NAT-enabled router connects an internal corporate network with the
Internet and builds a table that maps the connections between hosts inside the network and hosts
outside on the Internet.
You can use NAT to map multiple internal private addresses to a single external public IP
address. For example, a small business might obtain an ISP-allocated public IP address for each
computer on its network. By using NAT, however, the business could use private addressing
internally and have NAT map its private addresses to one or more public IP addresses that the
ISP allocates.
NAT makes it more difficult for external users to attack systems on a private network. NAT also
allows several nodes on the private network, each with its own private address, to share a smaller
number of scarcer public addresses to access the Internet. However, although NAT allows you to
reuse the private address space, it does not support standards-based network layer security or
the correct mapping of all higher layer protocols. One purpose for the large number of addresses
made available with the introduction of IPv6 is to make address conservation techniques such as
NAT unnecessary.
Windows Server 2003 also supports IPSec NAT traversal (NAT-T), which allows nodes located
behind a NAT (that is, they use private addresses) to use Encapsulating Security Payload (ESP)
to protect traffic. This capability allows the creation of Layer Two Tunneling Protocol with IPSec
(L2TP/IPSec) connections from remote access clients and routers located behind NATs.

2.9 Planning an IP Configuration Strategy


Every computer on an IP network must have a unique IP address. As noted earlier, using static
addressing for clients is time-consuming and prone to error. To provide an alternative for IPv4,
the IETF developed the Dynamic Host Configuration Protocol (DHCP), based on the earlier
bootstrap protocol (BOOTP) standard. Figure 1.9 shows the stage in the TCP/IP design process
during which you decide what to use for IP configuration. Most organizations choose to use
DHCP for IPv4.
Figure 1.9: Planning an IP Configuration Strategy


Although BOOTP and DHCP hosts can interoperate, DHCP is easier to configure. BOOTP
requires maintenance by a network administrator, whereas DHCP requires minimal maintenance
after the initial installation and configuration.
The DHCP standard, defined in RFC 2131, defines a DHCP server as any computer running the
DHCP service. Compared with static addressing, DHCP simplifies IP address management
because the DHCP server automatically allocates IP addresses and related TCP/IP configuration
settings to DHCP-enabled clients on the network. This is especially useful on a network with
frequent configuration changes, for example, in an organization that has a large number of
mobile users.
The DHCP server dynamically assigns specific addresses from a manually designated range of
addresses called a scope. By using scopes, you can dynamically assign addresses to clients on
the network no matter where the clients are located or how often they move.

2.10 DHCP Integration with DNS and WINS


The DHCP implementation in Windows Server 2003 is closely linked to name resolution services
such as the Domain Name System (DNS) service and the Windows Internet Name Service
(WINS). Network administrators benefit from combining all three when planning a deployment.
If you use DHCP servers for Windows-based network clients, you must use a name resolution
service. In addition to name resolution, Windows Server 2003 networks use DNS to support
Active Directory. Domain-based networks supporting clients running Windows NT version 4.0 or
earlier or NetBIOS applications must use WINS servers. Networks supporting a combination of
clients running Windows XP, Windows 2000, Windows Server 2003, and Windows NT 4.0 must
implement both WINS and DNS.


DHCP, APIPA, and IP Address Allocation


DHCP clients receive IP addresses as follows:
Dynamic allocation from DHCP server. After you configure DHCP, the DHCP server
automatically assigns an IP address from a specified scope to a client for a finite period of
time called a lease. Most clients receive a dynamic IP address.
Static allocation from DHCP server. For a specific computer (such as a DHCP,
DNS, or WINS server, or a print server, firewall, or router), you can manually configure the
TCP/IP properties, including the IP address, the DNS and WINS parameters, and default
gateway information. For the static clients to be on the same subnet as other, dynamically
allocated computers, the static IP addresses must be within the scope or subnet defined for
dynamic address allocation. You can use the DHCP snap-in to set an exclusion range to
prevent the DHCP server from dynamically allocating the static IP address.
Client reservation from DHCP server. By using the DHCP snap-in, you can also
reserve a specific IP address for permanent use by a given DHCP client.
Automatic allocation (APIPA). In the absence of a DHCP server, Automatic Private IP
Addressing (APIPA) lets a workstation configure itself with an address in the range from
169.254.0.1 to 169.254.255.254. Computers using APIPA addresses can communicate only
with other computers that are also using APIPA addresses within a single subnet. In this
case, a computer has an IP address but cannot connect outside the subnet. APIPA regularly
checks for the presence of a DHCP server; if it detects one, it yields to the DHCP service,
which then assigns a dynamic address to replace the APIPA address. APIPA is designed
primarily for simple networks with only one subnet, such as small or home-based networks.
On a larger network, APIPA can be useful for identifying problems with DHCP: when a client
uses an APIPA address, this indicates that a DHCP server has not been found.
Alternate configuration (user configured). In the absence of a DHCP server,
alternate configuration lets a computer use an IP address configured manually by the user.
Alternate configuration is designed for a computer that is used on more than one network,
such as a laptop used both at the office and at home. The user can specify an IP address on
the computer's TCP/IP properties Alternate Configuration tab if at least one of the networks
(for example, the home office) does not have a DHCP server and APIPA addressing is not
wanted. If an alternate configuration is not specified, the computer uses APIPA instead.
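Because APIPA always draws from 169.254.0.0/16 (the IPv4 link-local range), a simple membership test identifies clients that likely failed to reach a DHCP server; a minimal sketch:

```python
import ipaddress

# APIPA addresses fall in 169.254.0.0/16; on a larger network, seeing one
# on a client usually means no DHCP server answered.
APIPA_RANGE = ipaddress.ip_network("169.254.0.0/16")

def looks_like_apipa(addr):
    """Return True if addr falls in the APIPA (IPv4 link-local) range."""
    return ipaddress.ip_address(addr) in APIPA_RANGE

print(looks_like_apipa("169.254.17.42"))  # True: DHCP likely unreachable
print(looks_like_apipa("10.0.0.15"))      # False: DHCP- or statically assigned
```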

2.11 Setting Up the Physical Network


The components that you use in setting up your physical network will vary depending on the
equipment that you already have in place, your organization's specific needs, and the purpose of
this network, that is, whether you are building a test LAN or an initial production LAN. The
configuration documented here is that of a basic small network, which can be easily scaled to fit
your computing needs. The router used here is a standard 5-port NAT router with a built-in
firewall. Specific router instructions are not included, as those depend on the router that you have
purchased.
To configure your router
Follow the directions in the documentation for your router to configure the router to these
specifications:
Ensure that port 53 on the router is enabled to support DNS. (This is the default state of
many routers.)
If the router is wireless, enable 128-bit WEP security.
Set a strong administrator password on the router.
Use the instructions that you received with your router to configure the router to receive
its IP configuration from your ISP using DHCP (this is the default state for many routers).
To configure your LAN-router connection
1. Connect the LAN cable from the computer that is to be the DC to an available port on the
NAT router.
2. Connect the LAN cable from the broadband modem to the WAN port on the NAT router.

Page 44 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

3. Turn on the router and the modem.


Your network should be similar to the one illustrated in the figure below.
Figure LAN Router Connection for a Simple Managed Environment

2.12 Name Resolution Technologies


Microsoft Windows Server 2003 operating systems use name resolution to translate computer
names, which are easier for users to remember, into the numerical IP addresses used for TCP/IP
communications. With name resolution, computer names are assigned to the IP addresses of the
source and destination hosts, and those names, rather than the 32-bit IP addresses, are then
used to contact the hosts.
In Windows Server 2003, there are two types of TCP/IP names to resolve:
Host names
NetBIOS names
Name Resolution Components
Windows Server 2003 provides Domain Name System (DNS) for host name resolution and
Windows Internet Name Service (WINS) for NetBIOS name resolution.

2.12.1 DNS Name Resolution


DNS name resolution means successfully mapping a DNS domain or host name to an IP
address. A host name is an alias that is assigned to an IP node to identify it as a TCP/IP host.
The host name can be up to 255 characters long and can contain alphabetic and numeric
characters, hyphens, and periods. Multiple host names can be assigned to the same host.
Windows Sockets (Winsock) programs, such as Internet Explorer and the FTP utility, can use one
of two values for the destination host: the IP address or a host name. When the IP address is
specified, DNS name resolution is not needed. When a host name is specified, the host name
must be resolved to an IP address before IP-based communication with the desired resource can
begin.
Host names can take various forms. The two most common forms are a nickname and a domain
name. A nickname is an alias to an IP address that individuals can assign and use. A domain
name is a structured name in a hierarchical namespace called DNS. An example of a domain
name is www.microsoft.com.
Nicknames are resolved through entries in the Hosts file, which is stored in the
systemroot\System32\Drivers\Etc folder.
Domain names are resolved by sending DNS name queries to a DNS server. The DNS server is
a computer that stores domain name-to-IP address mapping records or has knowledge of other
DNS servers. The DNS server resolves the queried domain name to an IP address and returns
the name-to-IP address mapping records in response to a query.
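The two-step lookup described above, Hosts-file entries first and then a DNS query, can be sketched as follows; the host entries and the dns_lookup stub are hypothetical stand-ins, not real system state:

```python
# Sketch of nickname vs. domain-name resolution: consult Hosts-file
# entries first, then fall back to a DNS name query.
HOSTS_FILE = """
# address        name
131.107.65.37    fileserver
10.0.0.5         intranet
"""

def parse_hosts(text):
    """Build a name-to-address table from Hosts-file formatted text."""
    table = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if line:
            addr, name = line.split()
            table[name] = addr
    return table

def resolve(name, hosts, dns_lookup):
    if name in hosts:          # Hosts-file entry wins
        return hosts[name]
    return dns_lookup(name)    # otherwise send a DNS name query

hosts = parse_hosts(HOSTS_FILE)
print(resolve("fileserver", hosts, lambda n: None))  # 131.107.65.37
```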

2.12.2 WINS Name Resolution


WINS name resolution means successfully mapping a NetBIOS name to an IP address. A
NetBIOS name is a 16-byte address that is used to identify a NetBIOS resource on the network.
A NetBIOS name is either a unique (exclusive) or group (nonexclusive) name. When a NetBIOS

Page 45 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

process is communicating with a specific process on a specific computer, a unique name is used.
When a NetBIOS process is communicating with multiple processes on multiple computers, a
group name is used.
The exact mechanism by which NetBIOS names are resolved to IP addresses depends on the
NetBIOS node type that is configured for the node. RFC 1001, Protocol Standard for a NetBIOS
Service on a TCP/UDP Transport: Concepts and Methods, defines the NetBIOS node types, as
listed in the following table.
NetBIOS Node Types

Node Type            Description
B-node (broadcast)   Uses broadcast NetBIOS name queries for name registration and
                     resolution. B-node has two major limitations: (1) broadcasts disturb
                     every node on the network, and (2) routers typically do not forward
                     broadcasts, so only NetBIOS names on the local network can be resolved.
P-node (peer-peer)   Uses a NetBIOS name server (NBNS), such as a WINS server, to resolve
                     NetBIOS names. P-node does not use broadcasts; instead, it queries the
                     name server directly.
M-node (mixed)       A combination of B-node and P-node. By default, an M-node functions as
                     a B-node. If an M-node is unable to resolve a name by broadcast, it
                     queries an NBNS using P-node.
H-node (hybrid)      A combination of P-node and B-node. By default, an H-node functions as
                     a P-node. If an H-node is unable to resolve a name through the NBNS, it
                     uses a broadcast to resolve the name.

Computers running Windows Server 2003 operating systems are B-node by default and become
H-node when they are configured with a WINS server. Those computers can also use a local
database file called Lmhosts to resolve remote NetBIOS names. The Lmhosts file is stored in the
systemroot\System32\Drivers\Etc folder.
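The H-node order described above, NBNS first with broadcast as a fallback, can be sketched as a small function; the WINS database and the resolver callables are hypothetical stand-ins:

```python
# Sketch of H-node NetBIOS resolution order: query the NBNS (WINS) first,
# and fall back to a broadcast only if that query fails.
def resolve_h_node(name, query_nbns, broadcast):
    addr = query_nbns(name)    # P-node behavior first
    if addr is None:
        addr = broadcast(name) # B-node fallback
    return addr

wins_db = {"PRINTSRV": "10.0.0.9"}  # hypothetical WINS database
print(resolve_h_node("PRINTSRV", wins_db.get, lambda n: None))  # 10.0.0.9
print(resolve_h_node("LEGACY01", wins_db.get,
                     lambda n: "10.0.0.77"))  # broadcast fallback: 10.0.0.77
```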

2.12.3 Overview of DNS Deployment


DNS is the primary method for name resolution in the Microsoft Windows Server 2003,
Standard Edition; Windows Server 2003, Enterprise Edition; and Windows Server 2003,
Datacenter Edition operating systems (collectively referred to as "Windows Server 2003" in this
chapter). DNS is also a requirement for deploying Active Directory, but Active Directory is not a
requirement for deploying DNS. However, integrating DNS with Active Directory enables DNS
servers to take advantage of the security, performance, and fault tolerance capabilities of Active
Directory.

2.12.4 Process for Deploying DNS


Deploying DNS involves planning and designing your DNS infrastructure, including the DNS
namespace, DNS server placement, DNS zones, and DNS client configuration. In addition, if you
are integrating DNS with Active Directory, you must plan the level of integration and identify your
security, scalability, and performance requirements. Figure 3.1 shows the DNS deployment
process.
Figure 3.1: Deploying DNS


2.12.5 DNS Concepts


Windows Server 2003 DNS is based on Request for Comments (RFC) standards developed
by the Internet Engineering Task Force (IETF) and is therefore interoperable with other
standards-compliant DNS implementations. DNS uses a distributed database that implements a
hierarchical naming system. This naming system enables an organization to expand its presence
on the Internet and enables the creation of names that are unique both on the Internet and on
private TCP/IP-based intranets.
By using DNS, any computer on the Internet can look up the name of any other computer in the
Internet namespace. Computers running Windows Server 2003 and Microsoft Windows 2000
also use DNS to locate domain controllers and other servers running Active Directory.

2.12.6 DNS Roles


Deploying a DNS infrastructure involves design, implementation, and maintenance tasks. The
individuals who are responsible for these tasks include DNS designers and the DNS
administrators. Before you begin designing your DNS deployment, it is helpful to identify the
individuals in your organization who are responsible for these roles. Table 3.1 lists the
responsibilities of the DNS designer and DNS administrator roles.
Table DNS Roles
DNS designer: Designing the DNS namespace; placing DNS servers and zones within the DNS
namespace; creating a secure DNS infrastructure; designing DNS integration with Active
Directory.
DNS administrator: Deploying, configuring, and managing the DNS infrastructure; managing
Active Directory integration.


2.12.7 DNS designer role


If you are deploying DNS to support Active Directory in an environment that does not already
have a DNS infrastructure, the DNS designer is responsible for the DNS integration with the
entire Active Directory forest. The DNS designer works closely with the DNS administrator for
Active Directory.
If you are deploying DNS to support Active Directory in an environment that has an existing DNS
infrastructure, the DNS designer works with the DNS administrator for Active Directory to
delegate the forest root DNS name to Active Directory. The Active Directory forest administrator
delegates management of DNS to a DNS administrator.
DNS administrator role
DNS administrators manage and maintain the DNS namespace, DNS servers, DNS clients, DNS
zones, and zone propagation. DNS administrators are also responsible for maintaining network
security by anticipating and mitigating new security threats. In addition, DNS administrators are
responsible for DNS integration with other Windows Server 2003 services.
New in Windows Server 2003
Windows Server 2003 DNS includes several new features, including:
Conditional forwarding. Conditional forwarding enables a DNS server to forward DNS
queries based on the DNS domain name in the query. For more information about conditional
forwarding, see Help and Support Center for Windows Server 2003.
DNS application directory partitions. DNS application directory partitions enable you to
set the replication scope for Active Directory-integrated DNS data. By limiting the scope of
replication traffic to a subset of the servers running Active Directory in your forest, you can
reduce replication traffic.
DNSSEC. DNS provides basic support for the DNS Security Extensions (DNSSEC)
protocol as defined in RFC 2535: Domain Name System Security Extensions. For more
information about DNSSEC, see Help and Support Center for Windows Server 2003.
EDNS0. Extension Mechanisms for DNS (EDNS0) enable DNS requestors to advertise
the size of their UDP packets and facilitate the transfer of packets larger than 512 octets, the
original DNS limit for UDP packet size. For more information about EDNS0, see Help and
Support Center for Windows Server 2003.

2.12.8 Tools for Deploying DNS


Windows Server 2003 includes a number of tools to assist you in deploying a DNS infrastructure.
Netdiag.exe
The Netdiag.exe tool assists you in isolating networking and connectivity problems. Netdiag.exe
performs a series of tests that you can use to determine the state of your network client. For more
information about Netdiag.exe, in Help and Support Center for Windows Server 2003, click
Tools, and then click Windows Support Tools.
Nslookup.exe
You can use the Nslookup.exe command-line tool to perform query testing of the DNS domain
namespace and to diagnose problems with DNS servers.
Dnscmd.exe
You can use the Dnscmd.exe command-line tool to perform the same administrative tasks on a
DNS server that you can perform by using the DNS Microsoft Management Console (MMC) snap-in.
DNSLint
DNSLint is a command-line tool that you can use to address some common DNS name resolution
issues, such as lame delegation and DNS record verification. DNSLint is in the Support.cab file in
the \Support\Tools folder on the Windows Server 2003 operating system CD. You can install
DNSLint by running Suptools.msi.
Terms and Definitions
The following are some important DNS-related terms.


Authoritative DNS server A DNS server that hosts a primary or secondary copy of zone data.
Each zone has at least one authoritative DNS server.
Conditional forwarding A DNS query setting that enables a DNS server to route a request for
a particular name to another DNS server by specifying a name and IP address. For example, a
DNS server in contoso.com can be configured to forward queries for names in treyresearch.com
to a DNS server hosting the treyresearch.com zone.
Delegation The process of using resource records to provide pointers from parent zones to
child zones in a namespace hierarchy. This enables DNS servers in a parent zone to route queries
to DNS servers in a child zone for names within their branch of the DNS namespace. Each
delegation corresponds to at least one zone.
DNS client resolver A service that runs on client computers and sends DNS queries to a DNS
server. Some resolvers use a cache to improve name resolution performance.
DNS namespace The hierarchical naming structure of the domain tree. Each domain label that
is used in a fully qualified domain name (FQDN) indicates a node or branch in the domain tree.
For example, host1.contoso.com is an FQDN that represents the node host1, under the node
Contoso, under the node com, under the DNS root.
DNS server A computer that hosts DNS zone data, resolves DNS queries, and caches the
query responses.
Domain tree In DNS, the inverted hierarchical tree structure that is used to index domain names
within a namespace. Domain trees are similar in purpose and concept to the directory trees used
by computer filing systems for disk storage.
Public namespace A namespace on the Internet, such as www.microsoft.com, that can be
accessed by any connected device. Beneath the top-level domains, the Internet Corporation for
Assigned Names and Numbers (ICANN), the Internet Assigned Numbers Authority (IANA), and
other Internet naming authorities delegate domains to organizations such as Internet Service
Providers (ISPs), which in turn delegate subdomains to their customers or host zones for their
customers.
Forward lookup zone An authoritative DNS zone that is primarily used to resolve network
resource names to IP addresses.
Fully qualified domain name (FQDN) A DNS name that uniquely identifies a node in a DNS
namespace. The FQDN of a computer is a concatenation of the computer name (for example,
client1) and the primary DNS suffix of the computer (for example, contoso.com), and a
terminating dot (for example, contoso.com.).
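The FQDN construction described in the definition above can be sketched in a few lines of Python. This is an illustrative sketch only; the host name and suffix are the chapter's example values, not real computers:

```python
def make_fqdn(host_name: str, dns_suffix: str) -> str:
    """Concatenate a computer name and its primary DNS suffix into an FQDN.

    The terminating dot denotes the DNS root, which is what makes the
    name fully qualified rather than relative.
    """
    return f"{host_name}.{dns_suffix}."

# Example from the definition: computer name "client1", suffix "contoso.com".
print(make_fqdn("client1", "contoso.com"))  # client1.contoso.com.
```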
Internal namespace A namespace that is internal to an organization and to which the
organization can control access. An organization can use the internal namespace to shield the
names and IP addresses of its internal computers from the Internet. A single organization might have multiple internal
namespaces. Organizations can create their own root servers and any subdomains as needed.
The internal namespace can coexist with an external namespace.
Iterative query A query made by a client to a DNS server, to which the server responds with the
best answer that it can provide from its own zone data or cache (either an authoritative answer or
a referral to another DNS server) without itself querying other DNS servers.
Primary DNS server A DNS server that hosts read-write copies of zone data, has a DNS
database of resource records, and resolves DNS queries.
Secondary DNS server A DNS server that hosts a read-only copy of zone data. A secondary
DNS server periodically checks for changes made to the zone on its configured primary DNS
server, and performs full or incremental zone transfers, as needed.
Recursive query A query made by either a client or a DNS server on behalf of a client, the
response to which can be an authoritative answer or a referral to another server. Recursive
queries continue until the DNS server receives an authoritative answer for the queried name. By
default, recursion is enabled for Windows Server 2003 DNS.
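The distinction between iterative and recursive queries in the two definitions above can be sketched with a toy resolver in Python. Every server key, domain name, and address below is invented for illustration; a real resolver also handles caching, timeouts, and error responses:

```python
# Toy model: each "server" maps a queried name either to an authoritative
# answer ("A", ip) or to a referral ("NS", key of the next server to ask).
SERVERS = {
    "root":    {"host1.contoso.com.": ("NS", "com")},
    "com":     {"host1.contoso.com.": ("NS", "contoso")},
    "contoso": {"host1.contoso.com.": ("A", "172.16.10.10")},
}

def iterative_query(server: str, name: str):
    """One iterative query: the server returns its best local answer,
    either an authoritative record or a referral, without asking anyone else."""
    return SERVERS[server][name]

def recursive_resolve(name: str, start: str = "root") -> str:
    """A recursive resolver chases referrals on the client's behalf
    until it obtains an authoritative answer for the queried name."""
    server = start
    while True:
        rtype, value = iterative_query(server, name)
        if rtype == "A":
            return value      # authoritative answer reached
        server = value        # follow the referral to the next server

print(recursive_resolve("host1.contoso.com."))  # 172.16.10.10
```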
Resource record (RR) A DNS database structure containing name information for a particular
zone. For example, an address (A) resource record can map the IP address 172.16.10.10 to the
name DNSserverone.contoso.com, or a name server (NS) resource record can map the name
contoso.com to the server name DNS1.contoso.com. Most of the basic RR types are defined in
RFC 1035, Domain Names - Implementation and Specification, but additional RR types are
defined in other RFCs.

Reverse lookup zone An authoritative DNS zone that is primarily used to resolve IP addresses
to network resource names.
Stub zone A partial copy of a zone that can be hosted by a DNS server and used to resolve
recursive or iterative queries. Stub zones contain the start of authority (SOA) resource record of
the zone, the name server (NS) resource records that list the zone's authoritative servers, and the
glue address (A) resource records that are required for contacting the zone's authoritative servers.
Stub zones are used to reduce the number of DNS queries on a network, and to decrease the
network load on the primary DNS servers hosting a particular name.
Zone In a DNS database, a contiguous portion of the domain tree that is administered as a
single separate entity by a DNS server. The zone contains resource records for all of the names
within the zone.
Zone file A file that consists of the DNS database resource records that define the zone. DNS
data that is Active Directory-integrated is not stored in zone files because the data is stored in
Active Directory. However, DNS data that is not Active Directory-integrated is stored in zone files.
Zone transfer The process of copying the contents of the zone file located on a primary DNS
server to a secondary DNS server. Using zone transfer provides fault tolerance by synchronizing
the zone file in a primary DNS server with the zone file in a secondary DNS server. The
secondary DNS server can continue performing name resolution if the primary DNS server fails.
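The trigger for a zone transfer can be sketched as a comparison of SOA serial numbers: the secondary server pulls a fresh copy when the primary's serial is higher than the serial in its own copy. This is a simplified model (real servers also honor the SOA refresh interval, DNS NOTIFY, and incremental transfers), and the serial values below are invented:

```python
def needs_zone_transfer(primary_serial: int, secondary_serial: int) -> bool:
    """A secondary DNS server requests a zone transfer when the primary's
    SOA serial number is newer than the serial in its own copy of the zone."""
    return primary_serial > secondary_serial

# Invented serials in the common YYYYMMDDnn convention.
print(needs_zone_transfer(2003041002, 2003041001))  # True  (transfer needed)
print(needs_zone_transfer(2003041002, 2003041002))  # False (zone is current)
```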

2.12.9 Designing a DNS Namespace


Before you deploy a DNS infrastructure, the DNS designer in your organization must design a
DNS namespace. You can design an external namespace that is visible to Internet users and
computers, or you can design an internal namespace that is accessible only to users and
computers that are within the internal network. After your DNS namespace has been deployed,
DNS administrators are responsible for managing and maintaining the DNS namespace.
Figure 3.3 shows the process for designing a DNS namespace.
Figure Designing a DNS Namespace


2.12.9.1 Identifying Your DNS Namespace Requirements


The first step in designing a DNS namespace is to determine whether you need a new
namespace for your organization, or whether you can retain an existing Windows or third-party
DNS namespace.
Table DNS Namespace Design Requirements

Scenario: You are upgrading an existing DNS infrastructure from a version of Windows earlier
than Windows Server 2003.
Design requirement: The DNS namespace design can remain the same.

Scenario: You are upgrading from a third-party DNS infrastructure that uses DNS software that
adheres to standard DNS domain naming guidelines.
Design requirement: The DNS namespace design can remain the same.

Scenario: Your existing DNS software does not conform to standard DNS domain naming
guidelines.
Design requirement: Bring your existing DNS namespace design into compliance with DNS
domain naming guidelines before deploying a Windows Server 2003 DNS namespace.

Scenario: You are integrating Windows Server 2003 DNS into an existing third-party DNS
infrastructure whose software adheres to standard DNS domain naming guidelines.
Design requirement: Integrate Windows Server 2003 DNS with your current DNS infrastructure.
You do not need to change the namespace design of the third-party DNS infrastructure or your
existing namespace.

Scenario: You are deploying a new Windows Server 2003 DNS infrastructure.
Design requirement: Design a logical naming convention for your DNS namespace based on
DNS domain naming guidelines.

Scenario: You are deploying Windows Server 2003 DNS to support Active Directory.
Design requirement: Create a DNS namespace design that is based on your Active Directory
naming convention.

Scenario: You are modifying your existing DNS namespace to support Active Directory, but you
do not want to redesign your DNS namespace.
Design requirement: Ensure that Active Directory domain names match your existing DNS
names. This enables you to deploy the highest level of security.

2.12.9.2 Creating Internal and External Domains


Organizations that require an Internet presence as well as an internal namespace must deploy
both an internal and an external DNS namespace and manage each namespace separately. You
can create a mixed internal and external DNS namespace in one of two ways:
By making the internal domain a subdomain of the external domain.
By using different names for the internal and external domains.

Using an Internal Subdomain


The recommended configuration option for a mixed internal and external DNS namespace is to
make your internal domain a subdomain of your external domain. For example, an organization
that has an external namespace domain name of contoso.com might use the internal namespace
domain name corp.contoso.com. Using an internal domain that is a subdomain of an external
domain:
Requires you to register only one name with an Internet name authority even if you later
decide to make part of your internal namespace publicly accessible.
Ensures that all of your internal domain names are globally unique.
Simplifies administration by enabling you to administer internal and external domains
separately.
You can use your internal subdomain as a parent for additional child domains that you create to
manage divisions within your company. Child domain names are immediately subordinate to the
DNS domain name of the parent. For example, a child domain for the human resources
department that is added to the us.corp.contoso.com namespace might have the domain name
hr.us.corp.contoso.com.
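The guarantee that internal names stay subordinate to the registered external domain can be sketched as a label-by-label suffix check. This helper is hypothetical (it is not part of Windows Server 2003); the names are the chapter's examples:

```python
def is_subdomain(candidate: str, parent: str) -> bool:
    """True if candidate is the parent domain itself or sits below it in
    the DNS tree. Comparison is on whole labels, so a name such as
    'badcontoso.com' does not falsely match 'contoso.com'."""
    c = candidate.lower().rstrip(".").split(".")
    p = parent.lower().rstrip(".").split(".")
    return c[-len(p):] == p

print(is_subdomain("corp.contoso.com", "contoso.com"))        # True
print(is_subdomain("hr.us.corp.contoso.com", "contoso.com"))  # True
print(is_subdomain("corp.internal", "contoso.com"))           # False
```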
Using Different Internal and External Domain Names
If it is not possible for you to configure your internal domain as a subdomain of your external
domain, use a stand-alone internal domain. This way, your internal and external domain names
are unrelated. For example, an organization that uses the domain name contoso.com for its
external namespace might use the name corp.internal for its internal namespace.
The advantage to this approach is that it provides you with a unique internal domain name. The
disadvantage is that this configuration requires you to manage two separate namespaces. Also,
using a stand-alone internal domain that is unrelated to your external domain might create
confusion for users because the namespaces do not reflect a relationship between resources
within and outside of your network. In addition, you might have to register two DNS names with
an Internet name authority if you want to make the internal domain publicly accessible.
Deciding Whether to Deploy an Internal DNS Root
If you have a large distributed network and a complex DNS namespace, it is best to use an
internal DNS root that is isolated from public networks. Using an internal DNS root streamlines
the administration of your DNS namespace by enabling you to administer your DNS infrastructure
as if the entire namespace consists of the DNS data within your network.


If you use an internal DNS root, a private DNS root zone is hosted on a DNS server on your
internal network. This private DNS root zone is not exposed to the Internet. Just as the DNS root
zone contains delegations to all of the top-level domain names on the Internet, such as .com,
.net, and .org, a private root zone contains delegations to all of the top-level domain names on
your network. The DNS server that hosts the private root zone in your namespace is considered
to be authoritative for all of the names in the internal DNS namespace.
Using an internal DNS root provides the following benefits:
Simplicity. If your network spans multiple locations, an internal DNS root might be the
best method for administering DNS data in a network.
Secure name resolution. With an internal DNS root, DNS clients and servers on your
network never contact the Internet to resolve internal names. In this way, the DNS data for
your network is not broadcast over the Internet. You can enable name resolution for any
name in another namespace by adding a delegation from your root zone. For example, if
your computers need access to resources in a partner organization, you can add a
delegation from your root zone to the top level of the DNS namespace of the partner
organization.
Important
Do not reuse names that exist on the Internet in your internal namespace. If you repeat
Internet DNS names on your intranet, it can result in name resolution errors.
If name resolution is required by computers that do not support a software proxy, or by computers
that support only local address tables (LATs), then you cannot use an internal root for your DNS namespace. In this
case, you must configure one or more internal DNS servers to forward queries that cannot be
resolved locally to the Internet.
The following table lists the types of client proxy capabilities and whether you can use an internal DNS root for
each type.
Table Client Proxy Capabilities

Proxy capability: No Proxy
Microsoft software with corresponding proxy capabilities: Generic Telnet
Can you use an internal root? No

Proxy capability: Local Address Table (LAT)
Microsoft software with corresponding proxy capabilities: Winsock Proxy (WSP) 1.x and later;
Microsoft Internet Security and Acceleration (ISA) Server 2000 and later
Can you use an internal root? No

Proxy capability: Name Exclusion List
Microsoft software with corresponding proxy capabilities: WSP 1.x and later; ISA Server 2000
and later; all versions of Microsoft Internet Explorer
Can you use an internal root? Yes

Proxy capability: Proxy Auto-configuration (PAC) File
Microsoft software with corresponding proxy capabilities: WSP 2.x; ISA Server 2000 and later;
Internet Explorer 3.01 and later
Can you use an internal root? Yes

2.12.9.3 Configuring Name Resolution for Disjointed Namespaces


If you need to create or merge two DNS namespaces when you deploy Windows Server 2003
DNS, this can result in disjointed namespaces: a DNS infrastructure that includes two or more
top-level DNS domain names. To configure internal name resolution for multiple DNS top-level
domains, you must do one of the following:
If you have an internal DNS root, add delegations for each top-level DNS zone to the
internal DNS root zone.
If you want to reduce cross-domain DNS query traffic, configure the DNS servers that
host the DNS zones in the first and second namespaces to host secondary zones for the
DNS zones in each other's namespaces. In this configuration, the DNS servers that host the
DNS zones in each namespace are aware of the DNS servers in the other namespace. This
solution requires increased storage space for hosting secondary copies of zones in different
namespaces, and generates increased zone transfer traffic.
If storage capacity on DNS servers is a consideration, configure the DNS servers that
host the DNS zones in one namespace to forward name resolution queries in a second
namespace to the DNS servers that are hosting the DNS zones for the second namespace.
Then configure the DNS servers that host the DNS zones in the second namespace to
forward name resolution queries in the first namespace to the DNS servers that are hosting
the DNS zones for the first namespace. You can use Windows Server 2003 DNS conditional
forwarders for this configuration.
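The conditional-forwarder option above can be sketched as a table that maps a DNS domain suffix to the forwarder addresses for the other namespace; queries that match no suffix fall back to ordinary resolution. The second namespace name follows the chapter's treyresearch.com example, and the IP address is invented:

```python
# Hypothetical conditional-forwarder table on a DNS server in the first
# namespace (contoso.com): queries for the second namespace
# (treyresearch.com) go straight to that namespace's DNS servers.
CONDITIONAL_FORWARDERS = {
    "treyresearch.com": ["10.1.0.53"],   # invented forwarder address
}

def pick_forwarder(query_name: str):
    """Return forwarder IPs for the longest matching domain suffix,
    or None to fall back to ordinary (root/forwarder) resolution."""
    labels = query_name.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in CONDITIONAL_FORWARDERS:
            return CONDITIONAL_FORWARDERS[suffix]
    return None

print(pick_forwarder("www.treyresearch.com"))  # ['10.1.0.53']
print(pick_forwarder("www.contoso.com"))       # None
```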

2.12.10 Designing DNS Zones


Each zone type that is available in Windows Server 2003 DNS has a specific purpose. The DNS
designer in your organization selects the type of zones to deploy based on the practical purpose
of each zone. The DNS administrators in your organization manage and maintain your DNS
zones. Figure shows the process for designing DNS zones.
Figure Designing DNS Zones

2.12.10.1 Choosing a Zone Type


Design zones to correspond to your network administration infrastructure. If a site in your network
is administered locally, deploy a zone for the subdomain. If a department has a subdomain, but
no administrator, keep the subdomain in the parent zone. Decide whether or not to store your
zones in Active Directory. Active Directory distributes data using a multimaster replication model
that provides more security than standard DNS. With the exception of secondary zones, you can
store all zone types in Active Directory because all other zones are considered primary zones.
When designing DNS zones, host each zone on more than one DNS server.


Decide which type of zone to use, based on your domain structure. For each zone type, with the
exception of secondary zones, decide whether to deploy file-based zones or Active Directory
integrated zones.

2.12.10.2 Primary Zones


Deploy primary zones that correspond to your planned DNS domain names. You cannot store
both an Active Directory-integrated and a file-based primary copy of the same zone on the same
DNS server.

2.12.10.3 Secondary Zones


Add secondary zones if you do not have an Active Directory infrastructure. If you do have an
Active Directory infrastructure, use secondary zones on DNS servers that are not serving as
domain controllers. A secondary zone contains a complete copy of a zone. Therefore, use
secondary zones to improve zone availability at remote sites if you do not want zone data
propagated across a WAN link by means of Active Directory replication.

2.12.10.4 Stub Zones


A stub zone is a copy of a zone that contains only the original zone's start of authority (SOA)
resource record, the name server (NS) resource records listing the authoritative servers for the
zone, and the glue address (A) resource records that are needed to identify these authoritative
servers.
A DNS server that is hosting a stub zone is configured with the IP address of the authoritative
server from which it loads. DNS servers can use stub zones for both iterative and recursive
queries. When a DNS server hosting a stub zone receives a recursive query for a computer name
in the zone to which the stub zone refers, the DNS server uses the IP address to query the
authoritative server, or, if the query is iterative, returns a referral to the DNS servers listed in the
stub zone.
Stub zones are updated at regular intervals, determined by the refresh interval of the SOA
resource record for the stub zone. When a DNS server loads a stub zone, it queries the zone's
primary servers for SOA resource records, NS resource records at the zone's root, and glue
address (A) resource records. The DNS server attempts to update its resource records at the end
of the SOA resource record's refresh interval. To update its records, the DNS server queries the
primary servers for the resource records listed earlier.
You can use stub zones to ensure that the DNS server that is authoritative for a parent zone
automatically receives updates about the DNS servers that are authoritative for a child zone. To
do this, add the stub zone to the server that is hosting the parent zone. Stub zones can be either
file-based or Active Directory-integrated. If you use Active Directory-integrated stub zones, you
can configure them on one computer and let Active Directory replication propagate them to other
DNS servers running on domain controllers.
Although conditional forwarding is the recommended method for making your servers aware of
other namespaces, you can also use stub zones for this purpose. For more information about using stub
zones, see Help and Support Center for Windows Server 2003.
Note
Only DNS servers running Windows Server 2003 or BIND 9 support stub zones.
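The contents of a stub zone, and how a server hosting one answers recursive versus iterative queries from those contents, can be sketched as follows. All zone names, server names, and addresses below are invented:

```python
# A stub zone holds only three kinds of records (all data here is invented):
# the zone's SOA record, the NS records naming its authoritative servers,
# and the glue A records giving those servers' addresses.
stub_zone = {
    "name": "child.contoso.com",
    "SOA": {"primary": "ns1.child.contoso.com", "serial": 42},
    "NS": ["ns1.child.contoso.com", "ns2.child.contoso.com"],
    "glue_A": {"ns1.child.contoso.com": "10.2.0.1",
               "ns2.child.contoso.com": "10.2.0.2"},
}

def handle_query(zone, recursive: bool):
    """Recursive query: the stub-zone host itself queries an authoritative
    server for the child zone. Iterative query: it returns a referral
    listing the child zone's authoritative servers."""
    targets = [zone["glue_A"][ns] for ns in zone["NS"]]
    if recursive:
        return ("query-authoritative", targets[0])
    return ("referral", targets)

print(handle_query(stub_zone, recursive=True))   # ('query-authoritative', '10.2.0.1')
print(handle_query(stub_zone, recursive=False))  # ('referral', ['10.2.0.1', '10.2.0.2'])
```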

2.12.10.5 Stub Zones and Conditional Forwarding


Stub zones and conditional forwarding are Windows Server 2003 DNS features that enable you
to control the routing of DNS traffic over a network. These features enable a DNS server to
respond to a query by doing one of the following:
Providing a referral to another DNS server.
Forwarding the query to another DNS server.
A stub zone enables a DNS server that is hosting a parent zone to be aware of the names and IP
addresses of DNS servers that are authoritative for a child zone, even if the DNS server does not
have a complete copy of the child zone. In addition, when a stub zone is used, the DNS server
does not have to send queries to the DNS root servers. If the stub zone for a child zone is hosted
on the same DNS server as the parent zone, the DNS server that is hosting the stub zone
receives a list of all new authoritative DNS servers for the child zone when it requests an update
from the stub zone's primary server. In this way, the DNS server that is hosting the parent zone
maintains a current list of the authoritative DNS servers for the child zone as the authoritative
DNS servers are added and removed.
Use conditional forwarding if you want DNS servers in one network to perform name resolution
for DNS clients in another network. You can configure DNS servers in separate networks to
forward queries to each other without querying DNS servers on the Internet. If DNS servers in
separate networks forward DNS client names to each other, the DNS servers cache this
information. This enables you to create a direct point of contact between DNS servers in each
network and reduces the need for recursion.
If you are using a stub zone and you have a firewall between DNS servers in the networks, then
DNS servers on the query/resolution path must have port 53 open. However, if you are using
conditional forwarding and you have a firewall between DNS servers in each of the networks, the
requirement to have port 53 open only applies to the two DNS servers on either side of the
firewall.

2.12.10.6 Active Directory-Integrated Zones


If your DNS topology includes Active Directory, use Active Directory-integrated zones. Active
Directory-integrated zones enable you to store zone data in the Active Directory database, where
zone information on any primary DNS server within an Active Directory-integrated zone is always
replicated.
Because standard DNS zone replication is single-master, a primary DNS server in a standard
primary DNS zone can be a single point of failure. In an Active Directory-integrated zone, a
primary DNS server cannot be a single point of failure because Active Directory uses multimaster
replication: updates that are made to any domain controller are replicated to all domain
controllers. Active Directory-integrated zones:
Enable you to secure zones by using secure dynamic update.
Provide increased fault tolerance. Every Active Directory-integrated zone can be
replicated to all domain controllers within the Active Directory domain or forest. All DNS
servers running on these domain controllers can act as primary servers for the zone and
accept dynamic updates.
Enable replication that propagates changed data only, compresses replicated data, and
reduces network traffic.
If you have an Active Directory infrastructure, you can use Active Directory-integrated zones
only on Active Directory domain controllers. If you are using Active Directory-integrated zones,
you must decide whether or not to store them in an application directory
partition.
You can combine Active Directory-integrated zones and file-based zones in the same design. For
example, if the DNS server that is authoritative for the private root zone is running on an
operating system other than Windows Server 2003 or Windows 2000, it cannot act as an Active
Directory domain controller. Therefore, you must use file-based zones on that server. However,
you can delegate this zone to any domain controller running either Windows Server 2003 or
Windows 2000.

2.12.10.7 Storing Active Directory-Integrated Zones in Application Directory Partitions


Windows Server 2003 Active Directory enables you to configure an application directory partition
that limits the scope of replication. Data stored in an application directory partition is replicated to
a subset of domain controllers. This subset is determined by the replication scope of the data. In
the default configuration of Windows Server 2003 Active Directory, DNS application directory
partitions are present only on the domain controllers that run the DNS Server service. By storing
Active Directory-integrated zones in an application directory partition, you can reduce the number
of objects that are stored in the global catalog, and you can reduce the amount of replication
traffic within a domain.
In contrast, Active Directory-integrated zones that are stored in domain directory partitions are
replicated to all domain controllers in the domain. Storing Active Directory-integrated zones in an
application directory partition allows replication of DNS data to domain controllers anywhere in
the same Active Directory forest.
When you are setting up your Active Directory environment and installing the first Windows
Server 2003 domain controller in the forest, if you install DNS, two Windows Server 2003 DNS
application directory partitions are created by default: a forest-wide DNS application directory
partition called ForestDnsZones and, for each domain in the forest, a domain-wide DNS
application directory partition called DomainDnsZones.
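The replication-scope difference between storing a zone in the domain directory partition, in DomainDnsZones, and in ForestDnsZones can be sketched with a toy model. The domain controllers listed are hypothetical:

```python
# Hypothetical domain controllers: (name, domain, runs the DNS Server service)
DCS = [
    ("DC1", "corp.contoso.com", True),
    ("DC2", "corp.contoso.com", False),
    ("DC3", "eu.corp.contoso.com", True),
]

def replication_targets(scope, domain=None):
    """Which domain controllers receive a zone, by storage choice
    (simplified model of the partition replication scopes)."""
    if scope == "domain-partition":      # domain partition: every DC in the domain
        return [n for n, d, _ in DCS if d == domain]
    if scope == "DomainDnsZones":        # only DNS-server DCs in the domain
        return [n for n, d, dns in DCS if d == domain and dns]
    if scope == "ForestDnsZones":        # only DNS-server DCs, forest-wide
        return [n for n, _, dns in DCS if dns]
    raise ValueError(scope)

print(replication_targets("domain-partition", "corp.contoso.com"))  # ['DC1', 'DC2']
print(replication_targets("DomainDnsZones", "corp.contoso.com"))    # ['DC1']
print(replication_targets("ForestDnsZones"))                        # ['DC1', 'DC3']
```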

2.12.10.8 Using Forwarding

If a DNS server does not have the data to resolve a query in its cache or in its zone data, it
forwards the query to another DNS server, known as a forwarder. Forwarders are ordinary DNS
servers and require no special configuration; a DNS server is called a forwarder because it is the
recipient of a query forwarded by another DNS server.
Use forwarding for off-site or Internet traffic. For example, a branch office DNS server can forward
all off-site traffic to a forwarder at the company headquarters, and an internal DNS server can
forward all Internet traffic to a forwarder on the external network. To ensure fault tolerance,
forward queries to more than one forwarder.
Forwarders can increase network security by minimizing the list of DNS servers that
communicate across a firewall.
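The forwarder configuration described above can be sketched with dnscmd; the server name and forwarder addresses are placeholders:

```shell
rem Forward all off-site queries to two forwarders for fault tolerance;
rem wait up to 5 seconds for a forwarder before trying the next one
dnscmd BranchDNS1 /ResetForwarders 10.0.0.5 10.0.0.6 /TimeOut 5
```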
You can use conditional forwarding to more precisely control the name resolution process.
Conditional forwarding enables you to designate specific forwarders for specific DNS names. You
can use conditional forwarding to resolve the following:
Queries for names in off-site internal domains
Queries for names in other namespaces
Using Conditional Forwarding to Query for Names in Off-Site Internal Domains
In Windows Server 2003 DNS, non-root servers resolve names for which they are not
authoritative, for which they do not have a delegation, and that are not in their cache by doing
one of the following:
Querying a root server.
Forwarding queries to a forwarder.
Both of these methods generate additional network traffic. For example, a non-root server in Site
A is configured to forward queries to a forwarder in Site B, and it must resolve a name in a zone
hosted by a server in Site C. Because the non-root server can forward queries only to Site B, it
cannot directly query the server in Site C. Instead, it forwards the query to the forwarder in Site B,
and the forwarder queries the server in Site C.
When you use conditional forwarding, you can configure your DNS servers to forward queries to
different servers based on the domain name specified in the query. This eliminates steps in the
forwarding chain and reduces network traffic. When conditional forwarding is applied, the server
in Site A can forward queries to forwarders in Site B or Site C, as appropriate.
For example, the computers in the Seville site need to query computers in the Hong Kong site.
Both sites use a common DNS root server, DNS3.corp.fabrikam.com, located in Seville.
Before the Contoso Corporation upgraded to Windows Server 2003, the server in Seville
forwarded all queries that it could not resolve to its parent server, DNS1.corp.contoso.com, in
Seattle. When the server in Seville queried for names in the Hong Kong site, the server in Seville
first forwarded those queries to Seattle.


After upgrading to Windows Server 2003, administrators configured the DNS server in Seville to
forward queries destined for the Hong Kong site directly to a server in that site, instead of first
detouring to Seattle, as shown in Figure
Figure Conditional Forwarding to an Off-Site Server

Administrators configured DNS3.corp.fabrikam.com to forward any queries for
corp.treyresearch.com to DNS5.corp.treyresearch.com or DNS6.corp.treyresearch.com.
DNS3.corp.fabrikam.com forwards all other queries to DNS1.corp.contoso.com or
DNS2.corp.contoso.com.
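In Windows Server 2003, a conditional forwarder of this kind is modeled as a forwarder zone and can be created with dnscmd; the server name and addresses below are placeholders for the Hong Kong servers:

```shell
rem Forward queries for corp.treyresearch.com directly to the Hong Kong servers
dnscmd DNS3 /ZoneAdd corp.treyresearch.com /Forwarder 172.16.8.5 172.16.8.6
```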
Using Conditional Forwarding to Query for Names in Other Namespaces
If your internal network does not have a private root and your users need access to other
namespaces, such as a network belonging to a partner company, use conditional forwarding to
enable servers to query for names in other namespaces. Conditional forwarding in Windows
Server 2003 DNS eliminates the need for secondary zones by configuring DNS servers to
forward queries to different servers based on the domain name.
For example, the Contoso Corporation includes two namespaces: Contoso and Trey Research.
Computers in each division need access to the other namespace. In addition, computers in both
divisions need access to computers in the Supplier private namespace.
Before upgrading to Windows Server 2003, the Trey Research division created secondary zones
to ensure that computers in both the Contoso and Trey Research namespace can resolve names
in the Contoso, Trey Research, and Supplier namespaces. After upgrading to Windows
Server 2003, the Trey Research division deleted its secondary zones and configured conditional
forwarding instead.

2.12.10.9 Securing Your DNS Infrastructure


Because DNS was designed to be an open protocol, DNS data can be vulnerable to security
attacks. Windows Server 2003 DNS provides improved security features to decrease this
vulnerability. The DNS designer in your organization is responsible for creating a secure DNS
infrastructure. The DNS administrators in your organization are responsible for maintaining
network security by anticipating and mitigating new security threats.
Figure shows the process for securing your DNS infrastructure.
Figure Securing Your DNS Infrastructure


Identifying DNS Security Threats


A DNS infrastructure is vulnerable to a number of types of security threats.
Footprinting The process of building a diagram, or footprint, of a DNS infrastructure by
capturing DNS zone data such as domain names, computer names, and IP addresses for
sensitive network resources. DNS domain and computer names often indicate the function or
location of domains and computers.
Denial-of-service attack An attack in which the attacker attempts to deny the availability of
network services by flooding one or more DNS servers in the network with recursive queries.
When a DNS server is flooded with queries, its CPU usage eventually reaches its maximum and
the DNS Server service becomes unavailable. Without a fully operating DNS server on the
network, network services that use DNS are unavailable to network users.
Data modification The use of valid IP addresses in IP packets that an attacker has created to
destroy data or conduct other attacks. Data modification is typically attempted on a DNS
infrastructure that has already been footprinted. If the attack is successful, the packets appear to
be coming from a valid IP address on the network. This is commonly called IP spoofing. With a
valid IP address (an IP address within the IP address range of a subnet), an attacker can gain
access to the network.
Redirection An attack in which an attacker is able to redirect queries for DNS names to servers
that are under the control of the attacker. One method of redirection involves the attempt to
pollute the DNS cache of a DNS server with erroneous DNS data that might direct future queries
to servers that are under the control of an attacker. For example, if a query is made to
example.contoso.com and a referral answer provides a record for a name that is outside of the
contoso.com domain, the DNS server uses the cached data to resolve a query for the external
name. Redirection can be accomplished when an attacker has writable access to DNS data, such
as with non-secure dynamic updates.


2.12.10.10 Developing a DNS Security Policy


If your DNS data is compromised, attackers can gain information about your network that can be
used to compromise other services. For example, attackers can harm your organization in the
following ways:
By using zone transfer, attackers can retrieve a list of all the hosts and their IP addresses
in your network.
By using denial-of-service attacks, attackers can prevent e-mail from being delivered to
and from your network, and they can prevent your Web server from being visible.
If attackers can change your zone data, they can set up fake Web servers, or cause e-
mail to be redirected to their servers.
Your risk of attack varies depending on your exposure to the Internet. For a DNS server in a
private network that uses a private namespace, a private addressing scheme, and an effective
firewall, the risk of attack is lower and the possibility of discovering the intruder is greater. For a
DNS server that is exposed to the Internet, the risk is higher.
Developing a DNS security policy involves:
Deciding what access your clients need, what tradeoffs you want to make between
security and performance, and what data you most want to protect.
Familiarizing yourself with the security issues common to internal and external DNS
servers.
Studying your name resolution traffic to see which clients can query which servers.
You can choose to adopt a low-level, mid-level, or high-level DNS security policy.

2.12.10.11 Low-Level DNS Security Policy


Low-level security does not require any additional configuration of your DNS deployment. Use
this level of DNS security in a network environment in which you are not concerned about the
integrity of your DNS data, or in a private network in which no external connectivity is possible. A
low-level security policy includes the following characteristics:
All DNS servers in your network perform standard DNS resolution.
All DNS servers are configured with root hints that point to the root servers for the
Internet.
All DNS servers permit zone transfers to any server.
All DNS servers are configured to listen on all of their IP addresses.
Secure cache against pollution is disabled on all DNS servers.
Dynamic update is allowed for all DNS zones.
User Datagram Protocol (UDP) and TCP port 53 are open on the firewall for your
network for both source and destination addresses.

2.12.10.12 Mid-Level DNS Security Policy


Mid-level DNS security consists of the DNS security features that are available without running
DNS servers on domain controllers and storing DNS zones in Active Directory. A mid-level
security policy includes the following characteristics:
The DNS infrastructure of your organization has limited exposure to the Internet.
All DNS servers are configured to use forwarders to point to a specific list of internal DNS
servers when they cannot resolve names locally.
All DNS servers limit zone transfers to servers listed in the NS records in their zones.
DNS servers are configured to listen on specified IP addresses.
Secure cache against pollution is enabled on all DNS servers.
Secure dynamic update is allowed for all DNS zones.
Internal DNS servers communicate with external DNS servers through the firewall with a
limited list of allowed source and destination addresses.


External DNS servers in front of your firewall are configured with root hints pointing to the
root servers for the Internet.
All Internet name resolution is performed by using proxy servers and gateways.
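Several of the mid-level characteristics above map directly to dnscmd settings. A sketch, with placeholder server, zone, and address names:

```shell
rem Limit zone transfers to the servers listed in the zone's NS records
dnscmd NS1 /ZoneResetSecondaries corp.contoso.com /SecureNs

rem Listen only on a specified IP address
dnscmd NS1 /ResetListenAddresses 10.0.0.2

rem Enable secure cache against pollution (1 = enabled)
dnscmd NS1 /Config /SecureResponses 1
```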

2.12.10.13 High-Level DNS Security Policy


High-level DNS security uses the same configuration as mid-level security and also uses the
security features available when the DNS Server service is running on a domain controller and
DNS zones are stored in Active Directory. Also, high-level security completely eliminates DNS
communication with the Internet. This is not a typical configuration, but it is recommended
whenever Internet connectivity is not required. High-level security policy includes the following
characteristics:
The DNS infrastructure of your organization has no Internet communication by means of
internal DNS servers.
Your network uses an internal DNS root and namespace, where all authority for DNS
zones is internal.
DNS servers that are configured with forwarders use internal DNS server IP addresses
only.
All DNS servers limit zone transfers to specified IP addresses.
DNS servers are configured to listen on specified IP addresses.
Secure cache against pollution is enabled on all DNS servers.
Internal DNS servers are configured with root hints that point to the internal DNS servers
hosting the root zone for your internal namespace.
Secure dynamic update is configured for all DNS zones except for the top-level and root
zones, which do not allow dynamic updates at all.
All DNS servers are running on domain controllers. An access control list (ACL) is
configured on the DNS Server service to allow only specific individuals to perform
administrative tasks on DNS servers.
All DNS zones are stored in Active Directory. An ACL is configured to allow only specific
individuals to create, delete, or modify DNS zones.
ACLs are configured on DNS resource records to allow only specific individuals to create,
delete, or modify DNS data.
Note
Windows Server 2003 DNS does not support the use of DACLs on zones to control which
clients or users can send queries to the DNS server.
Cache Pollution Protection
When cache pollution protection is enabled, the DNS server disregards DNS resource records
that originate from DNS servers that are not authoritative for the resource records. Cache
pollution protection is a significant security enhancement; however, when cache pollution
protection is enabled, the number of DNS queries can increase.
In Windows Server 2003 DNS, cache pollution protection is enabled by default. You can disable
cache pollution protection to reduce the number of DNS queries; however, to ensure the security
of your system, it is strongly recommended that you leave cache pollution protection enabled on
your DNS servers.
Securing DNS Servers That Are Exposed to the Internet
DNS servers that are exposed to the Internet are especially vulnerable to attack. You can secure
your DNS servers that are exposed to the Internet by doing the following:
Place the DNS server on a perimeter network instead of your internal network.
Use one DNS server for publicly accessed services inside your perimeter network and a
separate DNS server for your private internal network. This reduces the risk of exposing your
private namespace, which can expose sensitive names and IP addresses to Internet-based
users. It also increases performance because it decreases the number of resource records
on the DNS server.


Add a secondary server on another subnet or network, or on an ISP. This protects you
against denial-of-service attacks.
Eliminate single points of failure by securing your routers and DNS servers, and
distributing your DNS servers geographically. Add secondary copies of your zones to at least
one offsite DNS server.
Encrypt zone replication traffic by using Internet Protocol security (IPSec) or virtual
private network (VPN) tunnels to hide the names and IP addresses from Internet-based
users.
Configure firewalls to enforce packet filtering for UDP and TCP port 53.
Restrict the list of DNS servers that are allowed to initiate a zone transfer on the DNS
server. Do this for each zone in your network.
Monitor the DNS logs and monitor your external DNS servers by using Event Viewer.
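The packet-filtering step above can be sketched on the server's own firewall, assuming the netsh firewall context available with Windows Server 2003 SP1; the rule names are placeholders:

```shell
rem Permit DNS traffic through the local firewall on UDP and TCP port 53 only
netsh firewall add portopening protocol=UDP port=53 name=DNS-UDP
netsh firewall add portopening protocol=TCP port=53 name=DNS-TCP
```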

Securing Internal DNS Servers


Internal DNS servers are less vulnerable to attack than external DNS servers, but you still need to
protect them. To secure your internal DNS servers:
Eliminate any single point of failure. Note, however, that DNS redundancy cannot help
you if your clients cannot access any network services. Think about where the clients of each
DNS zone are located, and how they resolve names if the DNS server is compromised and
unable to answer queries.
Prevent unauthorized access to your servers. Allow only secure dynamic update for your
zones and limit the list of DNS servers that are allowed to obtain a zone transfer.
Monitor the DNS logs and monitor your internal DNS servers by using Event Viewer.
Monitoring your logs and your server can help you detect unauthorized modifications to your
DNS server or zone files.
Implement Active Directory-integrated zones with secure dynamic update.

Securing Dynamically Updated DNS Zones


Use Active Directory-integrated zones and configure them for secure dynamic update. Secure
dynamic update resolves the security risks associated with using dynamic update. Because
dynamic update allows any computer to modify any record, an attacker can modify zone data,
then impersonate existing servers.
For example, if you install the Web server, web.contoso.com, and it registers its IP address in
DNS by using dynamic update, an attacker can install a second Web server, also name it
web.contoso.com, and use dynamic update to modify the corresponding IP address in the DNS
record. In this way, the attacker can impersonate the original Web server and capture secure
information.
To prevent server impersonation, implement secure dynamic update. By using secure dynamic
update, only the computers and users specified in an access control list (ACL) can modify objects
within a zone.
If your security policy demands stricter security, modify these settings to further restrict access.
Restrict access by computer, group, or user account, and assign permissions for the entire DNS
zone and for the individual DNS names within the zone.
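Secure dynamic update can be enforced per zone with dnscmd; the server and zone names are placeholders, and the zone must be Active Directory-integrated for the secure-only setting to apply:

```shell
rem Store the zone in Active Directory so secure dynamic update is available
dnscmd DC1 /ZoneResetType corp.contoso.com /DsPrimary

rem 2 = allow secure dynamic updates only
dnscmd DC1 /Config corp.contoso.com /AllowUpdate 2
```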
Securing DNS Zone Replication
Zone replication can occur either by means of zone transfer or as part of Active Directory
replication. If you do not secure zone replication, you run the risk of exposing the names and IP
addresses of your computers to attackers. You can secure DNS zone replication by doing the
following:
Using Active Directory replication.
Encrypting zone replication sent over public networks such as the Internet.
Restricting zone transfer to authorized servers.
Using Active Directory Replication
Replicating zones as part of Active Directory replication provides the following security benefits:


Active Directory replication traffic is encrypted; therefore zone replication traffic is
encrypted automatically.
The Active Directory domain controllers that perform replication are mutually
authenticated, and impersonation is not possible.
Note
Use Active Directory-integrated zones whenever possible, because they are replicated
as part of Active Directory replication, which is more secure than file-based zone transfer.
Encrypting Replication Traffic Sent Over Public Networks
Encrypt all replication traffic sent over public networks by using IPSec or VPN tunnels. When
encrypting replication traffic sent over public networks:
Use the strongest level of encryption or VPN tunnel authentication that your servers can
support.
Use the Windows Server 2003 Routing and Remote Access service to create the IPSec
or VPN tunnel.
Restricting Zone Transfer to Authorized Servers
If you have secondary servers and you replicate your zone data by using zone transfer, configure
your DNS servers to specify the secondary servers that are authorized to receive zone transfers.
This prevents an attacker from using zone transfer to download zone data. If you are using Active
Directory-integrated zones instead, configure your servers to disallow zone transfer.
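A sketch of the two configurations described above, using dnscmd with placeholder server, zone, and address names:

```shell
rem Authorize only the named secondary server to pull this zone
dnscmd NS1 /ZoneResetSecondaries corp.contoso.com /SecureList 10.0.0.21

rem For an Active Directory-integrated zone with no secondaries, disallow transfers
dnscmd DC1 /ZoneResetSecondaries corp.contoso.com /NoXfr
```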

2.13 Overview of WINS Deployment


WINS provides a dynamic solution for network basic input/output system (NetBIOS) name
resolution in enterprise networks. Although most large networks currently have a WINS
infrastructure, some still rely on other methods of NetBIOS name resolution, such as the Lmhosts
file.
If your organization does not currently use WINS, and intends to continue operating with
Microsoft Windows 95, Windows 98, Windows Millennium Edition, or Microsoft
Windows NT version 4.0, consider implementing WINS when you deploy Windows Server 2003
in order to automate NetBIOS name resolution. Certain applications, such as Microsoft
Exchange Server, also rely on NetBIOS name resolution. Therefore, even if all of your computers
are running Microsoft Windows 2000, Windows XP, or Windows Server 2003, you might still
require NetBIOS name resolution based on the applications running in your environment.
If you are upgrading your current WINS servers to Windows Server 2003, determine whether your
existing hardware is compatible with Windows Server 2003, and then migrate your WINS solution to
Windows Server 2003.
By deploying WINS, you provide NetBIOS name resolution for clients on
your network. WINS implements a distributed database for NetBIOS names and their
corresponding addresses. WINS clients register their names at a local WINS server, and the
WINS servers replicate the entries to the other WINS servers. This ensures the uniqueness of
NetBIOS names and makes local name resolution possible.
Lmhosts file
A local text file that maps network basic input/output (NetBIOS) names (commonly used for
computer names) to IP addresses for hosts that are not located on the local subnet. In this
version of Windows, this file is stored in the systemroot\System32\Drivers\Etc folder.
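The Lmhosts format described above is the same one documented in the sample file Lmhosts.sam; a minimal sketch with placeholder names and addresses follows (#PRE preloads the entry into the name cache, and #DOM associates a domain controller with a domain):

```
192.168.5.10    FILESRV1    #PRE
192.168.5.11    DC-HQ       #PRE    #DOM:CORPDOM
```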

2.13.1 WINS Deployment Process


Deploying WINS involves building a server strategy, designing a replication strategy, securing
your WINS solution, integrating WINS with other services, and implementing your WINS solution.
Figure shows the general WINS deployment process.
Figure Deploying WINS


2.13.2 Designing Your WINS Replication Strategy


A good replication design is essential to your WINS availability and performance. Designs
encompassing multiple WINS servers distribute NetBIOS name resolution across LAN and WAN
environments, confining WINS client traffic to localized areas. To ensure consistent, network-wide
name resolution, WINS servers must replicate their local entries to other servers.
Figure shows the process for designing your WINS replication strategy.
Figure Designing Your WINS Replication Strategy


Before configuring replication, carefully design and review your WINS replication topology. For
WANs, this planning can be critical to the success of your deployment and use of WINS.
WINS provides the following choices when you are configuring replication:
You can manually configure WINS replication for a WAN environment.
For larger networks, you can configure WINS to replicate within a LAN environment.
In smaller or bounded LAN installations, consider enabling and using WINS automatic
partner configuration for simplified setup of WINS replication.
In larger or global installations, you might have to configure WINS across untrusted
Windows NT domains.
If your network uses only two WINS servers, configure them as push/pull replication partners to
each other. When configuring replication partners, avoid push-only or pull-only servers except
where necessary to accommodate slow links. In general, push/pull replication is the simplest
and most effective way to ensure full WINS replication between partners. This also ensures that the
primary and secondary WINS servers for any particular WINS client are push/pull partners of
each other, a requirement for proper WINS functioning in the event of a failure of the primary
server of the client.
In most cases, the hub-and-spoke model provides a simple and effective design for organizations
that require complete convergence with minimal administrative intervention. For example, this
model works well for organizations with centralized headquarters or a corporate data center (the
model works well for organizations with centralized headquarters or a corporate data center (the
hub) and several branch offices (the spokes). Also, a second or redundant hub (that is, a second
WINS server in the central location) can increase the fault tolerance for WINS.
In some large enterprise WINS networks, limited replication partnering can effectively support
replication over slow network links. However, when you plan limited WINS replication, ensure that
each server has at least one replication partner. Furthermore, balance each slow link that
employs a unidirectional link by a unidirectional link elsewhere in the network that carries updated
entries in the opposite direction.

2.13.3 Specifying Automatic Partner Configuration


You can configure a WINS server to automatically configure other WINS server computers as
replication partners. By using this automatic partner configuration, other WINS servers are
discovered when they join the network and are added as replication partners.
When using automatic partner configuration, each WINS server announces its presence on the
network by using periodic multicasts. These announcements are sent as Internet Group
Management Protocol (IGMP) messages for the multicast group address of 224.0.1.24, which is
reserved for WINS server use.
Automatic partner configuration is typically useful in small networks, such as single subnet LAN
environments. However, you can use automatic partner configuration in routed networks. For
WINS multicast support in routed networks, the forwarding of multicast traffic is made possible by
configuring routers for each subnet to forward traffic to the WINS multicast group address of
224.0.1.24.
Because periodic multicast announcements between WINS servers can add traffic to your
network, automatic partner configuration is recommended only if you have a small number of
installed WINS servers (typically, three or fewer).
Automatic partner configuration monitors multicast announcements from other WINS servers, and
performs the following configuration steps:
Adds the IP addresses for the discovered servers to its list of replication partner servers.
Configures the discovered servers as push/pull partners.
Configures pull replication at two-hour intervals with the discovered servers.
If a remote server is discovered and added as a partner by means of multicasting, it is removed
as a replication partner when WINS shuts down properly. To have automatic partner information
persist when WINS restarts, you must manually configure the partners.

2.13.4 Determining Replication Partners


Choosing whether to configure a WINS server as a push partner, pull partner, or push/pull partner
depends on several considerations, including the specific configuration of servers at your site,
whether the partner is across a WAN, and how important it is to distribute changes immediately
throughout the network.
In the hub-and-spoke configuration, you can configure one WINS server as the central server and
all other WINS servers as push/pull partners of this central server. Such a configuration ensures
that the WINS database on each server contains addresses for every node on the WAN.
Figure 4.6 shows replication using a hub-and-spoke topology.
Figure WINS Replication in a Hub-and-Spoke Topology


You can select other configurations for replication partner configurations to meet the particular
needs of your site. For example, Figure 4.7 shows replication in a T network topology, in which
Server1 has only Server2 as a partner, but Server2 has three partners. So Server1 gets all the
replicated information from Server2, but Server2 gets information from Server1, Server3, and
Server4.
Figure Replication in a T Network Topology

If Server2 needs to perform pull replications with Server3, make sure it is a push partner of
Server3. If Server2 needs to push replications to Server3, configure it as a pull partner of
Server3. Determine whether to configure WINS servers as either pull or push partners, and set
partner preferences for each server.
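As a sketch, a push/pull partnership can be configured from the command line, assuming the netsh wins context available in Windows Server 2003; the server name, address, and Type value mapping should be verified against your build:

```shell
rem Connect to the hub WINS server and add a push/pull replication partner
rem (Type=2 requests a both-direction partnership in this sketch)
netsh wins server \\WINS-HUB add partner Server=192.168.10.21 Type=2
```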

2.13.5 Configuring Replication Across WANs


When configuring WINS replication across WANs, the two most important issues are:
Whether your WINS replication occurs over slower WAN links.
The length of time required for all replicated changes in the WINS database to converge
and achieve consistency on the network.
The frequency of WINS database replication between WINS servers is a major design issue. The
WINS server database must be replicated frequently enough to prevent the downtime of a single
WINS server from affecting the reliability of the mapping information in other WINS servers.
However, the time interval between replications cannot be so small that it interferes with network
throughput.
Network topology can influence your decision on replication frequency. For example, if your
network has multiple hubs connected by relatively slow WAN links, you can configure WINS
database replication between WINS servers on the slow links to occur less frequently than
replication on the LAN or on fast WAN links. This reduces traffic across the slow link and reduces
contention between replication traffic and WINS client name queries.


After determining the replication strategy that works best for your organization, map the strategy
to your physical network. For example, if you have chosen a hub-and-spoke strategy, indicate on
your network topology map which sites have the "hub" server, and which have the "spoke"
servers. Also indicate whether the replication is push/pull, push-only, or pull-only.

2.13.6 Configuring Replication Across LANs


When configuring WINS replication across LANs, the issues are similar to those that occur in
WAN environments, although less critical.
Because the data throughput of the underlying network links for LANs is much greater than for
WANs, it might be acceptable to increase the frequency of WINS database replication by
specifying push and pull parameters for LAN-based replication partners. For push/pull partners,
you can do this by decreasing the Number of changes in version ID before replication and
Replication interval settings from what you use for WAN-based partners on slower links.
For example, between LAN-based replication partners it often works to enable WINS to use a
persistent connection between the servers. Without a persistent connection, the normal update
count threshold defaults to a minimum of 20. You can specify a smaller update count with a
persistent connection.
Next, you can specify a much smaller number, such as a value of one to three in the Number of
changes in version ID before replication setting before WINS sends a push replication trigger
to the other partner. For pull partners, you might also consider setting the Replication interval
setting to a value in minutes, instead of hours.
As in WAN replication planning, the WINS server database must replicate frequently enough to
prevent the downtime of a single WINS server from affecting the reliability of the mapping
information in other WINS servers. However, the time interval between replications cannot be so
small that it interferes with network throughput.
In environments with a large amount of network traffic it is a good idea to use a network
monitoring tool, such as Network Monitor, to help measure and determine how to optimize your
WINS replication strategy.
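The interaction between the update-count threshold and the pull replication interval can be sketched numerically. The following Python sketch is a simplified back-of-the-envelope model, not a WINS API: the default threshold of 20 and the parameter names echo the settings described above, while the name-registration rate is a hypothetical input.

```python
def worst_case_replication_delay(update_count_threshold: int,
                                 pull_interval_minutes: float,
                                 registrations_per_minute: float) -> float:
    """Rough upper bound, in minutes, on how stale a partner's WINS data
    can get: a push trigger fires after `update_count_threshold` version-ID
    changes accumulate, and a pull occurs every `pull_interval_minutes`
    regardless, so whichever happens first bounds the delay."""
    minutes_until_push = update_count_threshold / registrations_per_minute
    return min(minutes_until_push, pull_interval_minutes)

# WAN-style settings: default update count of 20, pull every 30 minutes,
# one name registration per minute on average.
wan = worst_case_replication_delay(20, 30, 1.0)   # 20.0 minutes

# LAN-style settings with a persistent connection: threshold lowered to 3.
lan = worst_case_replication_delay(3, 30, 1.0)    # 3.0 minutes
```

Lowering the threshold (or the pull interval) shrinks the window during which partners disagree, at the price of more replication traffic, which is usually acceptable on a LAN.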

2.14 Troubleshooting DNS clients


What problem are you having?
The DNS client received a "Name not found" error message.
Cause: The DNS client computer does not have a valid IP configuration for the network.
Solution: Verify that TCP/IP configuration settings for the client computer are correct,
particularly those used for DNS name resolution.
To verify a client IP configuration, use the ipconfig command. In the command output, verify that
the client has a valid IP address, subnet mask, and default gateway for the network where it is
attached and being used.
If the client does not have a valid TCP/IP configuration, you can either:
a. For dynamically configured clients, use the ipconfig /renew command to manually force
the client to renew its IP address configuration with the DHCP server.
b. For statically configured clients, modify the client TCP/IP properties to use valid
configuration settings or complete its DNS configuration for the network.
Cause: The client was not able to contact a DNS server because of a network or hardware
related failure.
Solution: Verify that the client computer has a valid and functioning network connection. First,
check that related client hardware (cables and network adapters) are working properly at the
client using basic network and hardware troubleshooting steps.
If the client hardware appears to be prepared and functioning properly, verify that it can ping other
computers on the same network.
Cause: The DNS client cannot contact its configured DNS servers.
Solution: If the DNS client has basic connectivity to the network, verify that it can contact a
preferred (or alternate) DNS server.

To verify whether a client has basic TCP/IP access to the DNS server, first try pinging the
preferred DNS server by its IP address.
For example, if the client uses a preferred DNS server of 10.0.0.1, type ping 10.0.0.1 at the
command prompt on the client computer. If you are not sure what the IP address is for the
preferred DNS server, you can observe it by using the ipconfig command.
For example, at the client computer, type ipconfig /all | more (piping the output through more
pauses the display if necessary) and note any IP addresses listed under DNS Servers in the output.
If no configured DNS servers respond to a direct pinging of their IP address, it indicates that the
source of the problem is more likely a network connectivity problem between the client and the
DNS servers. If that is the case, follow basic TCP/IP network troubleshooting steps to fix the
problem.
Cause: The DNS server is not running or responding to queries.
Solution: If the DNS client can ping the DNS server computer, verify that the DNS server is
started and able to listen for and respond to client requests. Try using the nslookup command to
test whether the server can respond to DNS clients.
Cause: The DNS server the client is using does not have authority for the failed name and
cannot locate the authoritative server for this name.
Solution: Confirm whether the DNS domain name the client is trying to resolve is one for which
its configured DNS servers are authoritative.
For example, if the client is attempting to resolve the name host.example.microsoft.com, verify
that the preferred (or an alternate, if one is being used) DNS server queried by the client loads
the authoritative zone where a host (A) resource record (RR) for the failed name should exist.
(A resource record is a standard DNS database structure containing information used to process
DNS queries. For example, an address (A) resource record contains an IP address corresponding
to a host name. Most of the basic resource record types are defined in RFC 1035, but additional
RR types have been defined in other RFCs and approved for use with DNS.)
If the preferred server is authoritative for the failed name and loads the applicable zone,
determine whether the zone is missing the appropriate RRs. If needed, add the RRs to the zone.
If the preferred server is not authoritative for the failed name, it indicates that configuration errors
at the DNS server are the likely cause. As needed, further troubleshoot the problem at the DNS
server.
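The first-pass checks above can be partly automated. This Python sketch uses the standard socket module as a rough stand-in for nslookup; the function name and the returned strings are illustrative, not part of any Windows tool:

```python
import socket

def diagnose_name(name: str) -> str:
    """Triage a DNS client lookup: try to resolve `name` with the
    system's configured resolver and report the outcome. A bad name and
    an unreachable DNS server both surface here as a resolution failure
    to investigate further with the steps described above."""
    try:
        addr = socket.gethostbyname(name)
    except socket.gaierror as err:
        return f"resolution failed: {err}"
    return f"{name} resolves to {addr}"
```

A failure here does not say which cause applies; it only tells you to continue with the ping and ipconfig checks described above.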
I am having a problem related to zone transfers.
Cause: The DNS Server service is stopped or the zone is paused.
Solution: Verify that the master (source) and secondary (destination) DNS servers involved in
completing the transfer of the zone are both started and that the zone is not paused at either server.
Cause: The DNS servers used during a transfer do not have network connectivity with each
other.
Solution: Eliminate the possibility of a basic network connectivity problem between the two
servers.
Using the ping command, ping each DNS server by its IP address from its remote counterpart.
For example, at the source server, use the ping command to test IP connectivity with the
destination server. At the destination server, repeat the ping test, substituting the IP address for
the source server.
Both ping tests should succeed. If not, investigate and resolve intermediate network connectivity
issues.

2.15 Troubleshooting WINS servers


What problem are you having?
The server cannot resolve names for clients.
Cause: Configuration details might be incorrect or missing.

Solution: To help prevent the most common types of problems, review WINS best practices for
deploying and managing your WINS servers. Most WINS-related problems start as failed queries
at a client, so it is best to start there.
Cause: The WINS server might not be able to service the client.
Solution: At the primary or secondary WINS server for the client that cannot locate a name, use
Event Viewer or the WINS console to see if WINS is started and currently running. If WINS is
running on the server, search for the name previously requested by the client to see if it is in the
WINS server database.
The server intermittently loses its ability to resolve names.
Cause: There might be a split registration problem, where the WINS server is registering its
names in WINS at two servers on the network. This is possible when the WINS server settings
configured in TCP/IP properties at the server computer point to the IP addresses of remote
WINS servers instead of the IP address of the local WINS server.
Solution: Re-configure client TCP/IP properties at the WINS server to have its primary and
secondary WINS servers point to the IP address of the local server computer.
I can't locate the source of "duplicate name" error messages.
Cause: You might need to manually remove static records, or enable static mappings to be
overwritten during replication.
Solution: If the duplicate name exists already as a static mapping, you can tombstone or delete
the entry. If possible, this should be done at the WINS server that owns the static mapping for the
duplicate name record. As a preventive measure, in cases where a static mapping needs to be
replaced and updated by a dynamic mapping, you can also enable Overwrite unique static
mappings at this server (migrate on) in Replication Partners Properties.
I need to locate the source of "network path not found" error messages on a WINS client.
Cause: The network path might contain the name of a server computer configured as a p-node,
m-node, or h-node and its IP address is different from the one in the WINS database. In this case,
the IP address for this computer might have changed recently and the new address has not yet
replicated to the local server.
Solution: Check the WINS database for the name and IP address mapping information. If they
are not current, you can start replication at the WINS server that owns the name record requiring
an update at other servers.

Section 3
3. Planning, Implementing, and Maintaining Server Availability
3.1 Planning for High Availability and Scalability
3.1.1 Overview
3.1.2 High Availability and Scalability Planning Process
3.1.3 Basic High Availability and Scalability Concepts
3.1.4 Defining Availability and Scalability Goals
3.1.5 Quantifying Availability and Scalability for Your Organization
3.2 Determining Availability Requirements
3.2.1 Determining Reliability Requirements
3.2.2 Determining Scalability Requirements
3.2.3 Analyzing Risk
3.2.4 Developing Availability and Scalability Goals
3.2.5 Details on Record That Help Define Availability Requirements
3.2.6 Users of IT Services
3.2.7 Requirements and Requests of End Users
3.2.8 Requirements for User Accounts, Networks, or Similar Types of Infrastructure
3.2.9 Time Requirements and Variations
3.3 Using IT Procedures to Increase Availability and Scalability

3.3.1 Planning and Designing Fault-Tolerant Hardware Solutions


3.3.2 Using Standardized Hardware
3.3.3 Using Spares and Standby Server
3.3.4 Using Fault-Tolerant Components
3.3.5 Storage Strategies
3.3.6 Safeguarding the Physical Environment of the Servers
3.4 Implementing Software Monitoring and Error-Detection Tools
3.4.1 Choosing Monitoring Tools
3.4.2 Windows Management Instrumentation
3.4.3 Microsoft Operations Manager 2000
3.4.5 Event logs
3.4.6 Service Control Manager
3.4.7 Performance Logs and Alerts
3.4.8 Shutdown Event Tracker
3.5 Evaluating the Benefits of Clustering
3.5.1 Benefits of Clustering
3.5.2 Limitations of Clustering
3.5.3 Evaluating Cluster Technologies
3.5.4 Server Clusters
3.5.5 Network Load Balancing
3.5.6 Using Clusters to Increase Availability and Scalability
3.5.7 Scaling by Adding Servers
3.5.8 Scaling by Adding CPUs and RAM
3.5.9 Availability and Server Consolidation with Server Clusters
3.6 Overview of the Server Clusters Design Process
3.6.1 Server Cluster Design and Deployment Process
3.6.2 Server Cluster Fundamentals
3.6.3 Cluster Hardware Requirements
3.6.4 New in Windows Server 2003
3.6.5 Deploying Server Clusters
3.6.6 Designing Network Load Balancing
3.6.7 Overview of the NLB Design Process
3.6.8 How NLB Provides Improved Scalability and Availability
3.6.9 Deploying Network Load Balancing
3.6.10 Overview of the NLB Deployment Process
3.6.11 Selecting the Automated Deployment Method
3.6.12 Implementing a New Cluster
3.6.13 Preparing to Implement the Cluster
3.7 Implementing the Network Infrastructure
3.7.1 Implementing the Cluster
3.7.2 Installing and Configuring the Hardware and Windows Server 2003
3.7.3 Installing and Configuring the First Cluster Host
3.7.4 Installing and Configuring Additional Cluster Hosts
3.8 What Is Backup
3.8.1 Types of Backup
3.8.2 Permissions and User Rights Required to Back Up
3.8.3 Automated System Recovery
3.8.4 Restoring File Security Settings
3.8.5 Restoring Distributed Services
3.8.6 What Is Shadow Copies for Shared Folders
3.8.7 How Shadow Copies for Shared Folders Works
3.8.8 Shadow Copies for Shared Folders Architecture


3.1 Planning for High Availability and Scalability


Business-critical applications such as corporate databases and e-mail often need to reside on
systems and network structures that are designed for high availability. The same is true for retail
Web sites and other Web-based businesses. Knowing about high availability concepts and
practices can help you to maximize the availability (extremely low downtime) and scalability (the
ability to grow as demand increases) of your server systems. Using sound IT practices and fault-
tolerant hardware solutions in your deployment can increase availability and scalability.
Additionally, the Microsoft Windows Server 2003 operating system offers two clustering
technologies, server clusters and Network Load Balancing, that provide the reliability
and scalability that most enterprises need.

3.1.1 Overview
A highly available system reliably provides an acceptable level of service with minimal downtime.
Downtime penalizes businesses, which can experience reduced productivity, lost sales, and lost
faith from clients, partners, and customers.
By implementing recommended IT practices, you can increase the availability of key services,
applications, and servers. These practices also help you minimize both planned downtime, such
as for maintenance or service pack installations, and unplanned downtime, such as downtime
caused by a server failure.

3.1.2 High Availability and Scalability Planning Process


The figure illustrates the high availability design process and defines the steps you can take to
ensure a deployment that meets your requirements for high availability.
Figure Planning for High Availability and Scalability


3.1.3 Basic High Availability and Scalability Concepts


To begin designing a Windows Server 2003 deployment for maximum availability and scalability,
familiarize yourself with these fundamental high availability and scalability concepts.
Basic High Availability Concepts
An integral part of building large-scale, mission-critical systems that your business and your users
can rely on is to ensure that no single point of failure can render a server or network unavailable.
There are several types of failures you must plan against to keep your system highly available.
Storage failures
There are many ways to protect against failures of individual storage devices using techniques
such as RAID. Storage vendors provide hardware solutions that support many different types of
hardware redundancy for storage devices, allowing devices as well as individual components in
the storage controller itself to be exchanged without losing access to the data. Software solutions
also provide similar capabilities.

Network failures
There are many components to a computer network, and there are many typical network
topologies that provide highly available connectivity. All types of networks need to be considered,
including client access networks and management networks. In storage area networks (SANs),
failures might include the storage fabrics that link the computers to the storage units
Computer failures
Many enterprise-level server platforms provide redundancy inside the computer itself, such as
through redundant power supplies and fans. Vendors also allow components such as peripheral
component interconnect (PCI) cards and memory to be swapped in and out without removing the
computer from service. In cases where a computer fails or needs to be taken out of service for
routine maintenance or upgrades, clustering provides redundancy to enable applications or
services to continue. This redundancy happens automatically in clustering, either through failover
of the application (transferring client requests from one computer to another) or by having multiple
instances of the same application available for client requests.
Site failures
In extreme cases, a complete site can fail due to a total power loss, a natural disaster, or other
unusual occurrences. More and more businesses are recognizing the value of deploying mission-
critical solutions across multiple geographically dispersed sites. For disaster tolerance, a data
center's hardware, applications, and data can be duplicated at one or more geographically
remote sites. If one site fails, the other sites continue offering service until the failed site is
repaired. Sites can be active-active, where all sites carry some of the load, or active-passive,
where one or more sites are on standby.
You can prevent most of these failures by using the following methods:
Proven IT practices. IT strategies can help your organization avoid some or all of the
above failures. IT practices take on added importance when a clustering solution is not
applicable to, or even possible in, your particular deployment. Whether or not you choose to
deploy a clustering solution, all Windows deployments should at a minimum follow the
guidelines and reference the resources listed in this chapter for fault-tolerant hardware
solutions.
Clustering. This chapter introduces Windows Server 2003 clustering technologies and
provides an overview of how they work. Different kinds of clusters can be used together to
provide a true end-to-end high availability and scalability solution
Basic Scalability Concepts
In general deployments, scalability is the measure of how well a service or application can grow
to meet increasing performance demands. When applied to clustering, scalability is the ability to
incrementally add systems to an existing cluster when the overall load of the cluster exceeds the
cluster's capabilities.
Scaling up
Scaling up involves increasing system resources (such as processors, memory, disks, and
network adapters) to your existing hardware or replacing existing hardware with greater system
resources. Scaling up is appropriate when you want to improve client response time, such as on
a Network Load Balancing cluster. If the required number of users is not properly supported,
adding random access memory (RAM) or central processing units (CPUs) to the server is one
way to meet the demand.
Windows Server 2003 supports single or multiple CPUs that conform to the symmetric
multiprocessing (SMP) standard. Using SMP, the operating system can run threads on any
available processor, which makes it possible for applications to use multiple processors when
additional processing power is required to increase the capability of a system
Scaling out
Scaling out involves adding servers to meet demand. In a server cluster, this means adding
nodes to the cluster. Scaling out is also appropriate when you want to improve client response
time with your servers, and when you have the hardware budget to purchase additional servers
as needed
Testing and Pilot Deployments
Before you deploy any new solution, whether it is a fault-tolerant hardware or networking
component, a software monitoring tool, or a Windows clustering solution, you should thoroughly
test the solution before deploying it. After testing in an isolated lab, test the solution in a pilot
deployment in which only a few users are affected, and make any necessary adjustments to the
design. After you are satisfied with the pilot deployment, perform a full-scale deployment in your
production environment. Depending on the number of users you have, you might want to perform
your full-scale deployment in stages. After each stage, verify that your system can accommodate
the increased processing load from the additional users before deploying the next group of users.

3.1.4 Defining Availability and Scalability Goals


Defining availability and scalability goals is the first step toward ensuring that your efforts are
focused on the elements of the system that matter the most to your organization. Figure 6.2
shows how to quantify and determine your availability goals.
Figure Defining Availability and Scalability Goals

Availability goals allow you to accomplish the following tasks:


Design, operate, and evaluate your systems in relationship to a consistent set of
priorities, and place new requests or problems in context.
Keep efforts focused where they are needed. Without clear goals, efforts can become
uncoordinated, or resources can be spread so thin that none of the organization's most
important needs are met.
Limit costs. You can direct expenditures toward the areas where they make the most
difference.
Recognize when tradeoffs must be made, and make them in appropriate ways.
Clarify areas where one set of goals might conflict with another, and avoid making plans
that require a system to handle two conflicting goals simultaneously.
Provide a way for operators and support staff to prioritize unexpected problems when
they arise by referring to the availability goals for that component or service.

3.1.5 Quantifying Availability and Scalability for Your Organization

Your goal in quantifying availability is to compare the costs of your current IT environment,
including the actual costs of outages, with the cost of implementing high availability solutions.
These solutions include training costs for your staff as well as facilities costs, such as costs for
new hardware. After you have calculated the costs, IT managers can use these numbers to make
business decisions, not just technical decisions, about your high availability solution. Monitoring
tools can help you measure the availability of your services and systems. Scalability is more
difficult to quantify because it is based on future needs and therefore requires a certain amount
of estimation and prediction. Remember, though, that scalability is tied to availability: if your
system cannot grow to meet increased demand, certain services will become less available to
your users.

3.2 Determining Availability Requirements


Availability can be expressed numerically as the percentage of the time that a service is available
for use. The exact level of availability must be determined in the context of the service and the
organization that uses the service. Table 6.1 displays common availability levels that many
organizations try to achieve. The following formula is used to calculate these levels:
Percentage of availability = (total elapsed time - sum of downtime) / total elapsed time
Table Availability Measurements and Yearly Downtime

Availability    Yearly Downtime
99.999%         5 minutes
99.99%          53 minutes
99.9%           8 hours, 45 minutes
Availability requirements can vary depending on the server role. Your users can probably
continue to work if a print server is down, for example, but if a server hosting a mission-critical
database fails, your business might feel the effects immediately.
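The availability formula and the downtime table above can be verified with a few lines of Python; this sketch simply inverts the formula for a 365-day year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def yearly_downtime_minutes(availability_percent: float) -> float:
    """Downtime = total elapsed time * (1 - availability), the inverse of
    the availability formula given above."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

print(round(yearly_downtime_minutes(99.999)))        # 5 (minutes; "five nines")
print(round(yearly_downtime_minutes(99.99)))         # 53 (minutes)
print(round(yearly_downtime_minutes(99.9) / 60, 2))  # 8.76 (hours, i.e. about 8 h 45 min)
```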

3.2.1 Determining Reliability Requirements


Reliability is related to availability, and it is generally measured by computing the time between
failures. Mean time between failures (MTBF) is calculated by using the following equation:
MTBF = (total elapsed time - sum of downtime) / number of failures
A related measurement is mean time to repair (MTTR), which is the average amount of time that
it takes to bring an IT service or component back to full functionality after a failure.
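Both definitions translate directly into code. The sketch below is a minimal illustration with made-up sample numbers:

```python
def mtbf(total_elapsed_hours: float, downtime_hours: float,
         failures: int) -> float:
    """Mean time between failures, per the formula above."""
    return (total_elapsed_hours - downtime_hours) / failures

def mttr(downtime_hours: float, failures: int) -> float:
    """Mean time to repair: average downtime per failure."""
    return downtime_hours / failures

# Example: 30 days (720 hours) of operation, 6 hours of downtime
# spread across 3 separate failures.
print(mtbf(720, 6, 3))  # 238.0 hours of uptime, on average, between failures
print(mttr(6, 3))       # 2.0 hours, on average, to restore service
```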
A system is more reliable if it is fault tolerant. Fault tolerance is the ability of a system to continue
functioning when part of the system fails. This is achieved by designing the system with a high
degree of hardware redundancy. If any single component fails, the redundant component takes
its place with no appreciable downtime. For more information, see section 3.3.4, Using
Fault-Tolerant Components.

3.2.2 Determining Scalability Requirements


You need to consider scalability now to provide your organization a certain amount of flexibility in
the future. If you believe your hardware budget will be sufficient, you can plan to purchase
hardware at regular intervals to add to your existing deployment. The amount of hardware you
purchase depends on the exact increase in demand. If you have budget limitations, purchase
servers that you can scale up later by adding RAM or CPUs to meet a rise in users or client
requests.
Looking at past growth can help you determine how demand on your IT system might grow.
However, because business technology is becoming increasingly complex, and reliance on that
technology is growing every year, you must consider other factors as well. If you anticipate
growth, realize that some aspects of your deployment may grow at different rates. You might
need many more Web servers than print servers, for example, over a certain period of time. For
some types of servers, it might be sufficient to add CPU power when network traffic increases,
while in other cases, such as with a Network Load Balancing cluster, the most practical scaling
solution might be to add more servers.


Recreate your Windows deployment as accurately as possible in a test environment and, either
manually or through a simulation program, put as much workload as possible on different areas of
your deployment. Observing your system under such circumstances can help you formulate
scaling priorities and anticipate where you might need to scale first.
After your system is deployed, software-monitoring tools can alert you when certain components
of your system are near or at capacity. Use these tools to monitor performance levels and system
capacity so that you know when a scaling solution is needed. For more information, see
section 3.4, Implementing Software Monitoring and Error-Detection Tools.

3.2.3 Analyzing Risk


When planning a highly available Windows Server 2003 environment, consider all available
alternatives and measure the risk of failure for each alternative. Begin with your current
organization and then implement design changes that increase reliability to varying degrees.
Evaluate the costs of each alternative against its risk factors and the impact of downtime to your
organization.
Often, achieving a certain level of availability can be relatively inexpensive, but to go from
98 percent availability to 99 percent, for example, or from 99.9 percent availability to
99.99 percent, can be very costly. This is because bringing your organization to the next level of
availability might entail a combination of new or costly hardware solutions, additional staff, and
support staff for non-peak hours. As you determine how important it is to maintain productivity in
your IT environment, consider whether those added days, hours, and minutes of availability are
worth the price.
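The diminishing returns described above are easy to put numbers on. This sketch compares the downtime removed by the step from 99.9 to 99.99 percent availability; the $10,000-per-hour outage cost is a hypothetical figure you would replace with your own estimate:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def yearly_downtime_hours(availability_percent: float) -> float:
    """Yearly downtime implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

# Moving from three nines to four nines removes roughly 7.9 hours of
# downtime per year...
saved_hours = yearly_downtime_hours(99.9) - yearly_downtime_hours(99.99)

# ...so at a hypothetical outage cost of $10,000 per hour, that last
# nine is worth about $79,000 a year. Weigh this against the hardware,
# staffing, and support costs required to reach it.
saved_dollars = saved_hours * 10_000
print(round(saved_hours, 2), round(saved_dollars))  # 7.88 78840
```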
Every operations center needs a risk management plan. When assessing risks in your proposed
Windows Server 2003 deployment, remember that network and server failures can cause
considerable loss to businesses. After you evaluate risks versus costs, and after you design and
deploy your system, your IT staff needs sound guidelines and plans of action in case a failure in
the system does occur.

3.2.4 Developing Availability and Scalability Goals


Begin establishing goals by reviewing information that is readily available within your
organization. Existing Service Level Agreements, for example, define the availability goals for
specific IT services or systems. Gather information from those individuals and groups who are
most directly affected, such as the users or departments that depend on the services and the
people who make decisions about IT staffing.
The following questions provide a starting point for developing a list of availability goals. These
goals, and the factors that influence them, vary from organization to organization. By identifying
the goals appropriate to your situation, you can clarify your priorities as you work to increase
system availability and reliability.
Organization's Central Purposes
These fundamental questions will help you prioritize the applications and services that are most
important to your organization and the extent to which you rely on your IT infrastructure for certain
key tasks.
What are the organization's central purposes?
What must the organization accomplish to survive and flourish?

3.2.5 Details on Record That Help Define Availability Requirements


The questions in this section can help you quantify your availability needs, which is the first step
in addressing those needs.
If your organization has attempted to evaluate the need for high availability in the past, do
you have existing documents that already outline availability goals?
Do you have current or previous Service Level Agreements, Operating Level
Agreements, or similar agreements that define service levels?
Have you defined acceptable and unacceptable service levels?
Do you have data about the cost of outages or the effect of service delays or outages (for
example, information about the cost of an outage at 9 A.M. versus the cost of an outage at
9 P.M.)?
Do you have any data from groups that practice incident management, problem
management, availability management, or similar disciplines?

3.2.6 Users of IT Services


It is important to define the needs of your users to provide them with the availability they need to
do their work. There is often a tradeoff between providing high availability and paying the cost of
hardware, training, and support. Categorizing your users can make these kinds of business
decisions easier.
Who are the end users? What groups or categories do they fall into? What expertise
levels do they have?
How important is each user group or category to the organization's central goals?

3.2.7 Requirements and Requests of End Users


These questions help pinpoint the needs of your users. You can more easily customize your high
availability solutions and anticipate scalability issues if you know exactly what your users need.
Among the tasks that users commonly perform, which are the most important to the
organization's central purposes?
When end users try to accomplish the most important tasks, what do they expect to see
on their screens (or access through some other device)? Described another way, what data
(or other resources) do users need to access, and what applications or services do they need
when working with that data?
For the users and tasks most important to the organization, what defines a satisfactory
level of service?

3.2.8 Requirements for User Accounts, Networks, or Similar Types of Infrastructure


It is important to know about supporting services, even services that you do not control, when
evaluating availability needs and defining availability goals. Your system will be only as fault
tolerant as the systems that support it.
What types of network infrastructure and directory services are required so that users can
accomplish the tasks that you have identified as requirements for end users? In other words,
what types of behind-the-scenes services do users require?
For these behind-the-scenes services, what defines the difference between satisfactory
and unsatisfactory results for the organization?

3.2.9 Time Requirements and Variations


Keeping support staff on-site to maintain a system can be expensive. Costs can be minimized if
support personnel are on-site only during critical periods. Similarly, knowing when workload is
highest can help you anticipate when availability is most important, and possibly when a failure is
likely to occur.
Are services needed on a 24-hours-a-day, 7-days-a-week basis, or on some other
schedule (such as 9 A.M. to 5 P.M. on weekdays)?
What are the normal variations in load over time?
What increments of downtime are significant (for example, five seconds, five minutes, an
hour) during peak and nonpeak hours?

3.3 Using IT Procedures to Increase Availability and Scalability


The following sections introduce best practices for optimizing your Windows Server 2003
deployment for high availability and scalability. A well-planned deployment strategy can increase
system availability and scalability while reducing the support costs and failure recovery times of a
system. Figure 6.3 displays the process for deploying your servers and network infrastructure in a
fault-tolerant manner that also provides manageability.
Figure 6.3 Using IT Procedures to Increase Availability and Scalability

To aid in this planning, Microsoft recommends the Microsoft Operations Framework (MOF). MOF
is a flexible, open-ended set of guidelines and concepts that you can adapt to your specific
operations needs. Adopting MOF practices provides greater organization and contributes to
regular communication between your IT department, your end users, and other departments in
your company that might be affected.

3.3.1 Planning and Designing Fault-Tolerant Hardware Solutions


An effective hardware strategy can improve the availability of a system. These strategies can
range from adopting commonsense practices to using expensive fault-tolerant equipment.

3.3.2 Using Standardized Hardware


To ensure full compatibility with Windows operating systems, choose hardware only from the
Windows Server Catalog.
When selecting your hardware from the Windows Server Catalog, standardize on one hardware
platform as much as possible. To do this, pick one type of computer and use the same kinds of
components, such as network cards, disk controllers, and graphics cards, on all your computers.
Use this computer type for all applications, even if it is more than you need for some applications.
The only parameters that you should modify are the amount of memory, the number of CPUs,
and the hard disk configurations.
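As an illustration, the standardization rule above (vary only memory, CPUs, and disks while everything else stays fixed) can be expressed as a small configuration model. The type and field names here are hypothetical, invented for this sketch:

```python
from dataclasses import dataclass

# Hypothetical sketch: one standard build in which only memory, CPU count,
# and disk layout may vary, as the guidance above recommends.
@dataclass(frozen=True)
class StandardServer:
    model: str = "StdBuild-1"          # fixed platform
    nic: str = "StdNIC"                # fixed network card
    disk_controller: str = "StdCtrl"   # fixed disk controller
    memory_gb: int = 4                 # tunable per role
    cpus: int = 1                      # tunable per role
    disks: tuple = ("C:",)             # tunable per role

def same_platform(a: StandardServer, b: StandardServer) -> bool:
    """True when two servers share the standard platform (all fixed fields match)."""
    return (a.model, a.nic, a.disk_controller) == (b.model, b.nic, b.disk_controller)

file_server = StandardServer(memory_gb=8, cpus=2, disks=("C:", "D:"))
web_server = StandardServer(memory_gb=4, cpus=1)
print(same_platform(file_server, web_server))  # True: one platform, fewer spares to stock
```

Because every role shares one platform, a single driver or firmware test covers the whole fleet, which is exactly the advantage listed below.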
Standardizing hardware has the following advantages:


Having only one platform reduces the amount of testing needed.
When testing driver updates or application-software updates, only one test is needed
before deploying to all your computers.
With only one system type, fewer spare parts are required.
Because only one type of system must be supported, support personnel require less
training.

3.3.3 Using Spares and Standby Servers


This chapter discusses clustering as a means of providing high availability for your applications
and services to your end users. However, there are two clustering alternatives that provide
flexibility or redundancy in your hardware design: spares and standby systems.
Spares
Keep spare parts on-site, and include spares in any hardware budget. One of the advantages of
using a standard configuration is the reduced number of spares that must be kept on-site. If all of
the hard drives are of the same type and manufacturer, for example, you can keep fewer drives in
stock as spares. This reduces the cost and complexity associated with providing spares.
The number of spares that you need to keep on hand varies according to the configuration and
failure conditions that users and operations personnel can tolerate. Another concern is availability
of replacement parts. Some parts, such as memory and CPU, are easy to find years later. Other
parts, like hard drives, are often difficult to locate after only a few years. For parts that may be
hard to find, and where exact matches must be used, plan to buy spares when you buy the
equipment. Consider using service companies or contracts with a vendor to delegate the
responsibility, or consider keeping one or two of each of the critical components in a central
location.
Standby Systems
Consider the possibility of maintaining an entire standby system, possibly even a hot standby to
which data is replicated automatically. For file servers, for example, the Windows Server 2003
Distributed File System (DFS) allows you to logically group folders located on different servers by
transparently connecting them to one or more hierarchical namespaces. When DFS is combined
with the File Replication service (FRS), clients can access data even if one of the file servers
goes down, because the other servers
have identical content. If the costs of downtime are very high and clustering is not a viable option,
you can use standby systems to decrease recovery times. Using standby systems can also be
important if failure of the computer can result in high costs, such as lost profits from server
downtime or penalties from a Service Level Agreement violation.
A standby system can quickly replace a failed system or, in some cases, act as a source of spare
parts. Also, if a system has a catastrophic failure that does not involve the hard drives, it might be
possible to move the drives from the failed system to a working system (possibly in combination
with using backup media) to restore operations relatively quickly. This scenario does not happen
often, but it does happen, in particular with CPU or motherboard component failures. (Note that
this transfer of data after a failure is performed automatically in a server cluster.)
One advantage to using standby equipment to recover from an outage is that the failed unit is
available for careful after-the-fact diagnosis to determine the cause of the failure. Getting to the
root cause of the failure is extremely important in preventing repeated failures.
Standby equipment should be certified and running on a 24-hours-a-day, 7-days-a-week basis,
just like the production equipment. If you do not keep the standby equipment operational, you
cannot be sure it will be available when you need it.

3.3.4 Using Fault-Tolerant Components


Using fault-tolerant technology improves both availability and performance. The following sections
describe some basic fault-tolerant considerations in two key areas of your deployment: storage
and network components. In both cases you should also consult hardware vendors for details
specific to each product, especially if you are considering deploying server clusters.


3.3.5 Storage Strategies


When planning how to store your data, consider the following points:
The type and quantity of information that must be stored. For example, will a particular
computer be used to store a large database needing frequent reads and writes?
The cost of the equipment. It does not make sense to spend more money on the storage
system than you expect to recover in saved time and data if a failure occurs.
Specific needs for protecting data or making data constantly available. Do you need to
prevent data loss, or do you need to make data constantly available? Or are both necessary?
For preventing data loss, a RAID arrangement is recommended. For high availability of an
application or service, consider multiple disk controllers, a RAID array, or a Windows
clustering solution. (Clustering is discussed later in this chapter.)
A good backup and recovery plan is essential. Downtime is inevitable, but a sound and
proven backup and recovery plan can minimize the time it takes to restore services to your
users.
Physical memory copying, or memory mirroring, provides fault tolerance through memory
replication. Memory-mirroring techniques include having two sets of RAM in one computer,
each a mirror of the other, or mirroring the entire system state, which includes RAM, CPU,
adapter, and bus states. Memory mirroring must be developed and implemented in
conjunction with the original equipment manufacturer (OEM).

3.3.6 Safeguarding the Physical Environment of the Servers


An important practice for high availability of servers is to maintain high standards for the
environment in which the servers must run. The following list contains information to consider if
you want to increase the longevity and reliability of your hardware:
Temperature and humidity. Install mission-critical servers in a room set aside for that
purpose, where you can carefully control temperature and humidity. Computers perform best
at approximately 70 degrees Fahrenheit. In an office, temperature is not normally an issue,
but be aware of the effect of a long holiday weekend in the summer with the air conditioning
turned off.
Dust or contaminants. Protect servers and other equipment from dust and
contaminants where possible, and check for dust periodically. Dust and other contaminants
can cause components to short-circuit or overheat, which can cause intermittent failures.
Whenever the case of a server is opened for any reason, perform a quick check to determine
whether the unit needs cleaning. If so, check all the other units in the area.
Power supplies. Planning for power outages, like any disaster-recovery planning, is best
done long before you anticipate outages, and it involves identifying the resources that are
most critical to the operation of the company. When possible, provide power from at least two
different circuits to the computer room and divide redundant power supplies between the
power sources. Ideally, the circuits should originate from two different sources external to the
building. Be aware of the maximum amount of power a location can provide. It is possible
that a location could have so many servers that there is not sufficient power for any additional
servers you might want to install. Consider a backup power supply for use in the event of a
power failure in your computer center. It may be necessary to continue providing computer
service to other buildings in the area or to areas geographically remote from the computer
center. Short outages can be dealt with through uninterruptible power supply (UPS) units.
Longer duration outages can be handled using standby generators. Include network
equipment, such as routers, when reviewing equipment that requires backup power during an
outage.
Maintenance of cables. Prevent physical damage to cables in the computer room by
making sure cables are neat and orderly, either with a cable management system or tie
wraps. Cables should never be loose in a cabinet, where they can be disconnected by
mistake. Make sure all cables are securely attached at both ends where possible, and make
sure pull-out, rack-mounted equipment has enough slack in the cables, and that the cables
do not bind and are not pinched or scraped. Set up good pathways for redundant sets of
cables. If you use multiple sources of power or network communications, try to route the
cables into the cabinets from different points. If one cable is severed, the other can continue
to function. Do not plug dual power supplies into the same power strip. If possible, use
separate power outlets or UPS units (ideally, connected to separate circuits) to avoid a single
point of failure.
Security of the computer room. For servers that must maintain high availability, restrict
physical access for all but designated individuals. In addition, consider the extent to which
you need to restrict physical access to network hardware. The details of how you implement
this depend on your physical facilities and your organization's structure and policies. When
reviewing the security in place for the computer room, also review your methods for
restricting access to remote administration of servers. Make sure that only designated
individuals have remote access to your configuration information and your administration
tools.

3.4 Implementing Software Monitoring and Error-Detection Tools


Constant vigilance of your network and applications is essential for high availability. Software-
monitoring tools and techniques allow you to determine the health of your system and identify
potential trouble spots before an error occurs.
This section assumes that you have selected software that supports the high availability features
you require. Not all software supports features such as redundancy or clustering. For an
application that requires 99 percent uptime, this might not matter. An application that requires
99.9 percent or greater availability must support such features. Monitoring tools can reveal
performance trends and other indications that a potential loss of service is imminent before an
error actually occurs. If an error does occur, monitoring tools can provide analytic data that
administrators can use to prevent the problem from happening again.
Before deploying software, check the application's hardware requirements and consult the
documentation or software vendor to be sure the application supports online backup. When your
monitoring tools detect a problem or error in an application, online backup allows you to fix the
problem with no disruption of service.
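The gap between the 99 percent and 99.9 percent targets mentioned above is larger than it looks. A quick illustrative calculation of the downtime budget each target allows per year:

```python
# Downtime allowed per year at a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    minutes = downtime_minutes_per_year(pct)
    print(f"{pct}% availability allows {minutes:,.0f} minutes of downtime per year")
# 99% allows roughly 5,256 minutes (about 3.6 days);
# 99.9% allows only about 526 minutes (under 9 hours).
```

This is why an application with a 99.9 percent or greater target needs redundancy or clustering support: there is almost no room in the budget for unassisted manual recovery.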

3.4.1 Choosing Monitoring Tools


After you deploy your software, establish routine and automated monitoring and error detection
for your operating system and applications. If you can detect application and system errors
immediately after they occur, you have a better chance of responding before a system shutdown.
Monitoring can also alert you if scaling is necessary somewhere in your organization. For
example, if one or more servers are operating at capacity some or all of the time, you can decide
if you need to add more servers or increase the CPUs of existing servers.

3.4.2 Windows Management Instrumentation


Windows Management Instrumentation (WMI) helps you manage your network and applications
as they become larger and more complex. With WMI, you can monitor, track, and control system
events that are related to software applications, hardware components, and networks. WMI
includes a uniform scripting application programming interface (API), which defines all managed
objects under a common object framework that is based on the Common Information Model
(CIM). Scripts use the WMI API to access information from different sources. You can submit
queries that filter requests for very specific information, and you can subscribe to WMI events
based on your particular interests, rather than being limited to events predefined by the original
developers.


3.4.3 Microsoft Operations Manager 2000


The Microsoft Operations Manager 2000 management packs have a full set of features to help
administrators monitor and manage both the events and performance of their IT systems based
on the Microsoft Windows 2000 or Windows Server 2003 operating systems. Microsoft
Operations Manager 2000 Application Management Pack improves the availability of Windows-
based networks and server applications. Microsoft Operations Manager 2000 and Windows
Server 2003 are sold separately.

3.4.4 Simple Network Management Protocol


Simple Network Management Protocol (SNMP) allows you to capture configuration and status
information on systems in your network and have the information sent to a designated computer
for event monitoring.

3.4.5 Event logs


When you diagnose a system problem, event logs are often the best place to start. By using the
event logs in Event Viewer you can gather important information about hardware, software, and
system problems. Windows Server 2003 records this information in the system log, the
application log, and the security log. In addition, some system components such as the Cluster
service and FRS also record events in a log.

3.4.6 Service Control Manager


Service Control Manager (SCM), a tool introduced with the release of the Microsoft
Windows NT version 4.0 operating system, maintains a database of installed services in the
registry. SCM can provide high availability because you can configure it to autorestart services
after they have failed. For more information about SCM, see the topic "Service Control Manager"
in the Windows Platform SDK documentation.
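SCM's configurable auto-restart behavior can be pictured as a supervision loop. The sketch below is a generic simplification in Python, not how the Service Control Manager itself is implemented:

```python
def supervise(service, max_restarts: int = 3):
    """Run `service` (a callable); restart it up to max_restarts times on failure.
    A simplified sketch of auto-restart, not the real Service Control Manager."""
    attempts = 0
    while True:
        try:
            return service()
        except Exception:
            attempts += 1
            if attempts > max_restarts:
                raise  # give up after the configured restart limit

# A flaky "service" that fails twice, then succeeds.
state = {"calls": 0}
def flaky_service():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("service crashed")
    return "running"

print(supervise(flaky_service))  # -> "running" after two automatic restarts
```

The restart limit matters: a service that crashes immediately on every start should eventually surface the failure to an administrator rather than restart forever.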

3.4.7 Performance Logs and Alerts


Performance Logs and Alerts collects performance data automatically from local or remote
computers. You can collect a variety of information on key resources such as CPU, memory, disk
space, and the resources needed by the application. When planning your performance logging,
determine the information you need and collect it at regular intervals. Be aware, however, that
performance sampling consumes CPU and memory resources, and that excessively large
performance logs are hard to store and hard to extract useful information from.
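One way to honor both pieces of advice above (sample at regular intervals, but keep logs from growing unmanageably) is a bounded sample window. This is a minimal sketch of the idea, not how Performance Logs and Alerts actually stores its data:

```python
from collections import deque

# Sketch: record samples at regular intervals, keeping only a bounded
# window so the log cannot grow without limit.
class PerformanceLog:
    def __init__(self, max_samples: int = 5):
        self.samples = deque(maxlen=max_samples)  # oldest samples drop off

    def record(self, value: float):
        self.samples.append(value)

    def average(self) -> float:
        return sum(self.samples) / len(self.samples)

log = PerformanceLog(max_samples=5)
for cpu_pct in [10, 20, 30, 40, 50, 60, 70]:  # simulated CPU readings
    log.record(cpu_pct)
print(list(log.samples))  # only the 5 most recent: [30, 40, 50, 60, 70]
print(log.average())      # 50.0
```

A rolling average over recent samples is also a cheap way to spot the capacity trends mentioned in the monitoring discussion above.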

3.4.8 Shutdown Event Tracker


You can document the reasons for shutdowns and save the information in a standard format by
using Shutdown Event Tracker. You can use codes to categorize the major and minor reasons for
each shutdown and record a comment for the shutdown.

3.5 Evaluating the Benefits of Clustering


A cluster is two or more computers working together to provide higher availability, reliability, and
scalability than can be obtained by using a single system. When failure occurs in a cluster,
resources are redirected and the workload is redistributed. Microsoft cluster technologies guard
against three specific types of failure:
Application and service failures, which affect application software and essential services.
System and hardware failures, which affect hardware components such as CPUs, drives,
memory, network adapters, and power supplies.
Site failures in multisite organizations, which can be caused by natural disasters, power
outages, or connectivity outages.


3.5.1 Benefits of Clustering


If one server in a cluster stops working, a process called failover automatically shifts the workload
of the failed server to another server in the cluster. Failover ensures continuous availability of
applications and data.
This ability to handle failure allows clusters to meet two requirements that are typical in most data
center environments:
High availability. The ability to provide end users with access to a service for a high
percentage of time while reducing unscheduled outages.
High reliability. The ability to reduce the frequency of system failure.
Additionally, Network Load Balancing clusters address the need for high scalability, which is the
ability to add resources and computers to improve performance.
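High availability and high reliability, as defined above, are linked by the standard steady-state formula A = MTBF / (MTBF + MTTR): you can raise availability either by failing less often (higher reliability) or by recovering faster, which is what failover provides. The formula is general engineering practice rather than something stated in this guide:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# One failure per 1,000 hours, one hour to repair: about "three nines".
print(f"{availability(1000, 1):.4%}")
# Fast failover cuts effective repair time to 6 minutes: about "four nines".
print(f"{availability(1000, 0.1):.4%}")
```

Note that failover improves the MTTR term without touching MTBF, which is why clustering raises availability even though individual servers fail just as often.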

3.5.2 Limitations of Clustering


Server clusters are designed to keep applications available, rather than keeping data available.
To protect against viruses, corruption, and other threats to data, organizations need solid data
protection and recovery plans. Cluster technology cannot protect against failures caused by
viruses, software corruption, or human error.
The Cluster service, the service behind server clusters, depends on compatible applications and
services to operate properly. The software must respond appropriately when a failure occurs.
Administrators must be able to configure where an application stores its data on the server
cluster. Also, clients that are accessing a clustered application or service must be able to
reconnect to the cluster virtual server after a failure has occurred and a new cluster node has
taken over the application.
Only services and applications that use TCP/IP for client-server communication are supported on
Network Load Balancing clusters and server clusters.

3.5.3 Evaluating Cluster Technologies


If you decide to deploy clustering with Windows Server 2003, choosing a cluster technology
depends greatly on the application or service that you want to host on the cluster. Server clusters
and Network Load Balancing both provide failover support for IP-based applications and services
that require high scalability and availability. Each type of cluster, however, is intended for different
kinds of services. Your choice of cluster technologies depends primarily on whether you run
stateful or stateless applications:
Server clusters are designed for stateful applications. Stateful applications have
long-running in-memory state, or they have large, frequently updated data states. A database
such as Microsoft SQL Server 2000 is an example of a stateful application.
Network Load Balancing is intended for stateless applications. Stateless
applications do not have long-running in-memory state. A stateless application treats each
client request as an independent operation, and therefore it can load-balance each request
independently. Stateless applications often have read-only data or data that changes
infrequently.
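Because a stateless application treats every request as independent, any clone can serve it, and requests can be assigned to nodes without coordination. A toy dispatcher makes the idea concrete; this hashing scheme is purely illustrative and is not Network Load Balancing's actual distribution algorithm:

```python
import hashlib

nodes = ["node1", "node2", "node3", "node4"]  # identical stateless clones

def pick_node(client_ip: str) -> str:
    """Map a client to a node (hypothetical scheme, for illustration only).
    Any dispatcher computing this gets the same answer, so each request
    can be balanced independently with no shared state."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return nodes[digest[0] % len(nodes)]

# The same client consistently maps to the same node;
# different clients spread across the pool.
print(pick_node("192.168.1.10") == pick_node("192.168.1.10"))  # True
```

A stateful application breaks this model: the node chosen for a request must hold that client's in-memory state, which is why such applications belong on server clusters instead.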

3.5.4 Server Clusters


Server clusters provide high availability for more complex, stateful applications and services by
allowing the failover of resources. Server clusters also maintain client connections to applications
and services. If your application is stateful, with frequent changes to data, a server cluster is a
more appropriate solution. Server clusters run on Windows Server 2003, Enterprise Edition, and
Windows Server 2003, Datacenter Edition.
Server clusters, which use the Cluster service, maintain data integrity and provide failover support
and high availability for mission-critical applications and services on your back-end servers,
including databases, messaging systems, and file and print services. Organizations can use
server clusters to make applications and data available on multiple servers that are linked
together in a server cluster configuration. Back-end applications and services, such as messaging
applications like Microsoft Exchange or database applications like Microsoft SQL Server, are
ideal candidates for server clusters.
In server clusters, nodes share access to data. Nodes can be either active or passive, and the
configuration of each node depends on the operating mode (active or passive) and how you
configure failover in the cluster. A server that is designated to handle failover must be sized to
handle the workload of the failed node in addition to its own workload.
In Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition,
server clusters can contain up to eight nodes. Each node is attached to one or more cluster
storage devices, which allow different servers to share the same data. Because nodes in a server
cluster share access to data, the type and method of storage in the server cluster is very
important.

3.5.5 Network Load Balancing


If your application is stateless or can otherwise be cloned with no decline in performance,
consider deploying Network Load Balancing. Network Load Balancing provides failover support
for IP-based applications and services that require high scalability and availability. Network Load
Balancing can run on all editions of Windows Server 2003.
Network Load Balancing addresses bottlenecks caused by front-end services, providing
continuous availability for IP-based applications and services that require high scalability.
Network Load Balancing clusters are used to provide scalability for Web services and other front-
end servers, such as VPN servers and firewalls. Organizations can build groups of clustered
computers to support load balancing of TCP and User Datagram Protocol (UDP) traffic requests.
Network Load Balancing clusters are groups of identical, typically cloned computers that, through
their numbers, enhance the availability of Web servers, Microsoft Internet Security and
Acceleration (ISA) servers (for proxy and firewall servers), and other applications that receive
TCP and UDP traffic. Because Network Load Balancing cluster nodes are usually identical clones
of each other and can therefore operate independently, all nodes in a Network Load Balancing
cluster are active.
You can scale out Network Load Balancing clusters by adding additional servers. As demand on
the cluster increases, you can scale out Network Load Balancing clusters to as many as 32
servers if necessary. Each node runs a copy of the IP-based application or service that is being
load balanced and stores all the data necessary for the application or service to run on local
drives.
In clusters, stateless applications are typically cloned, so that multiple instances of the same code
are executed on the same dataset. Figure 6.5 shows a cloned application (called "App") deployed
in a cluster. Each instance of the cloned application is self-contained, so that a client can make a
request to any instance and will always receive the same result.
Figure 6.5 Cloned Application

Changes made to one instance of a cloned stateless application can be replicated to the other
instances, because the dataset of stateless applications is relatively static. Because stateful
applications such as Microsoft Exchange or Microsoft SQL Server are updated with new data
frequently, they cannot be easily cloned and they are not good candidates for hosting on Network
Load Balancing clusters.
Component Load Balancing
Component Load Balancing (CLB) clusters address the unique scalability and availability needs of middle-tier (business)
applications that use the COM+ programming model. Organizations can load balance COM+
components over more than one node to dramatically enhance the availability and scalability of
software applications. CLB clusters, however, are a feature of Microsoft Application Center 2000.
For information about CLB clusters, see your Microsoft Application Center 2000 documentation.

3.5.6 Using Clusters to Increase Availability and Scalability


Different types of clusters provide different benefits. Network Load Balancing clusters are
designed to provide scalability because you can add nodes as your workload increases. Server
clusters increase availability of stateful applications, and they can also allow you to consolidate
servers and save on hardware costs. Figure 6.6 shows the steps for planning cluster deployment.
Figure 6.6 Planning Cluster Deployment

3.5.7 Scaling by Adding Servers


Scaling by adding servers to a Network Load Balancing cluster is also known as scaling out. In
Windows Server 2003, Network Load Balancing clusters can contain up to 32 nodes. Windows
Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, support server
clusters containing up to eight nodes. For this reason, it is recommended that you use Network
Load Balancing rather than server clusters to increase scalability. Additional servers allow you to
meet increased demand on your Network Load Balancing clusters; however, not every
organization has the budget to readily add hardware.


Table summarizes the number of servers a cluster can contain, by cluster type and operating
system.
Table Maximum Number of Nodes in a Cluster
Operating System                         | Network Load Balancing | Component Load Balancing* | Server Cluster
Microsoft Windows 2000 Advanced Server   | 32                     | 12                        | 2
Microsoft Windows 2000 Datacenter Server | 32                     | 12                        | 4
Windows Server 2003, Standard Edition    | 32                     | 12                        | N/A
Windows Server 2003, Enterprise Edition  | 32                     | 12                        | 8
Windows Server 2003, Datacenter Edition  | 32                     | 12                        | 8
* Component Load Balancing is not included with the Windows Server 2003 operating system
and runs only on Microsoft Application Center 2000. You can use CLB clusters with Windows
Server 2003, provided that you use Microsoft Application Center 2000 Service Pack 2 or later.
For complete information about Component Load Balancing, see your Microsoft Application
Center 2000 documentation.
Network Load Balancing clusters also run on Windows Server 2003, Web Edition; the maximum
number of nodes is 32.

3.5.8 Scaling by Adding CPUs and RAM


Your options for adding CPUs and RAM, also known as scaling up, depend on the server
operating system. This is a less expensive scaling option than adding nodes, but you are limited
by the capacity of the operating system.
Table illustrates the CPU and RAM capacity in Windows 2000 and Windows Server 2003.
Table Maximum Number of Processors and RAM
Operating System                        | Number of Processors | Maximum RAM
Windows 2000 Advanced Server            | 8                    | 8 GB
Windows 2000 Datacenter Server          | 32                   | 64 GB
Windows Server 2003, Enterprise Edition | 8                    | 32 GB
Windows Server 2003, Datacenter Edition | 32                   | 64 GB

3.5.9 Availability and Server Consolidation with Server Clusters


While Network Load Balancing can meet your scaling needs, server clusters provide high
availability for hosted applications by reducing the interruption of service due to unplanned
outages and hardware failure.
Increasing the number of nodes in a server cluster proportionally increases the availability of the
services and applications running on that server cluster. This is because spreading the workload
over a greater number of nodes allows each node to run at a lower capacity. When failover
occurs in a server cluster with many nodes, there are more servers available to accept the
workload of the failed node. Similarly, when a server running at low capacity takes on additional
processing after a failover, the failover is less likely to result in a decline in performance.
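The effect of node count on failover can be sketched numerically, assuming the failed node's workload redistributes evenly across the survivors (a simplifying assumption):

```python
def utilization_after_failures(n_nodes: int, per_node_load: float, failed: int) -> float:
    """Utilization of each surviving node after `failed` nodes fail over,
    assuming the workload redistributes evenly (a simplifying assumption)."""
    total = n_nodes * per_node_load
    return total / (n_nodes - failed)

# Two nodes at 50% each: one failure pushes the survivor to 100%.
print(utilization_after_failures(2, 50.0, 1))              # 100.0
# Eight nodes at 50% each: one failure raises survivors only to ~57%.
print(round(utilization_after_failures(8, 50.0, 1), 1))    # 57.1
```

This is the arithmetic behind the point above: more nodes means each node runs at lower capacity, so absorbing a failed node's share causes a smaller decline in performance.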
The hardware required for server cluster nodes is expensive, however. In some cases, it might be
possible for you to save money on hardware by consolidating the number of servers or nodes you
have deployed in your server clusters. You can still provide high availability of your mission-
critical applications after server consolidation, but be aware that combining clusters to reduce the
total number of nodes also reduces availability. This is a tradeoff that your organization should
evaluate.
Figure shows two clusters before consolidation, where two separate two-node clusters, each with
one active and one passive node, provide service for a group of clients.
Figure Two Server Clusters Before Consolidation

Two clusters, each dedicated to a separate application, generally have higher availability than
one cluster hosting both applications. This depends on, among other factors, the available
capacity of each server, other programs that may be running on the servers, and the hosted
applications themselves. However, if a potential loss in availability is acceptable to your
organization, you can consolidate servers.
Figure shows what the clusters in Figure 6.7 would look like if they were consolidated into a
single three-node cluster. There is a potential loss in availability because in the event of a failure,
both active clusters will fail over to the same passive node, whereas before consolidation, each
active node had a dedicated passive node. In this example, if both active nodes were to fail at the
same time, the single passive node might not have the capacity to take on the workloads of both
nodes simultaneously. Your organization must consider such factors as the likelihood of multiple
failures in the server cluster, the importance of keeping the applications in this server cluster
available, and whether the potential loss in services is worth the money saved by server consolidation.
Figure Consolidated Server Cluster
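To make the tradeoff concrete, the sketch below estimates the chance that the shared passive node is overcommitted after consolidation. The per-node failure probability is invented, and the model assumes independent node failures, which is a simplification of real availability analysis.

```python
# Simplified model with an invented per-node failure probability; it assumes
# node failures are independent, which understates real-world correlation.
def p_simultaneous_failure(p_node, actives):
    """Probability that all active nodes are down at the same time."""
    return p_node ** actives

p_node = 0.01  # assumed per-node failure probability

# Before consolidation, each active node has its own dedicated passive node,
# so any single active-node failure is absorbed by an idle standby. After
# consolidation, one passive node backs both actives and can be overcommitted
# only when both actives fail together:
print(p_simultaneous_failure(p_node, 2))  # about 0.0001
```

The overcommit scenario is rare but nonzero, which is exactly the availability cost the text says your organization must weigh against the hardware savings.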

3.6 Overview of the Server Clusters Design Process


Mission-critical applications, such as corporate databases and e-mail, must reside on systems
that are designed for high availability and scalability. Deploying server clusters with the
Microsoft Windows Server 2003, Enterprise Edition or the Windows Server 2003, Datacenter
Edition operating system minimizes the amount of planned and unplanned server downtime.
Server clusters can benefit your organization if:
Your users depend on regular access to mission-critical data and applications to do their
jobs.
Your organization has established a limit on the amount of planned or unplanned service
downtime that you can sustain.
The cost of the additional hardware that server clusters require is less than the cost of
having mission-critical data and applications offline during a failure.
Windows Server 2003 provides two different clustering technologies: server clusters and Network
Load Balancing.

3.6.1 Server Cluster Design and Deployment Process


You begin the server cluster design and deployment process by defining your high-availability
needs. After you determine the applications or services you want to host on a server cluster, you
need to understand the clustering requirements of those applications or services. Next, design
your server cluster support network, making sure that you protect your data from failure, disaster,
or security risks. After you evaluate and account for all relevant hardware, software, network, and
security factors, you are ready to deploy a server cluster. Figure 7.1 illustrates the server cluster
design process.
Figure Designing and Deploying Server Clusters

3.6.2 Server Cluster Fundamentals


The following sections provide an overview of key clustering concepts and summarize new
clustering features that have been added to Windows Server 2003.
A cluster is a group of individual computer systems working together cooperatively to provide
increased computing power and to ensure continuous availability of mission-critical applications
or services. From the client viewpoint, an application that runs on a server cluster is no different
than an application that runs on any other server, except that availability is higher.
Clustering Terms
The following terms are fundamental clustering concepts that are common to almost every
clustering decision.
Node A computer system that is a member of a server cluster. Windows Server 2003 supports
up to eight nodes in a server cluster.
Resource A physical or logical entity that is capable of being managed by a cluster, brought
online, taken offline, and moved between nodes. A resource can be owned only by a single node
at any point in time.
Resource groups A collection of one or more resources that are managed and monitored as a
single unit. Resource groups can be started and stopped independently of other groups (when a
resource group is stopped, all resources within the group are stopped). In a server cluster,
resource groups are indivisible units that are hosted on one node at any point in time. During
failover, resource groups are transferred from one node to another.
Virtual server A collection of services that appear to clients as a physical Windows-based
server but are not associated with a specific server. A virtual server is typically a resource group
that contains all of the resources needed to run a particular application and can be failed over like
any other resource group.
Failover The process of taking resource groups offline on one node and bringing them back
online on another node. When a resource group goes offline, all resources belonging to that
group go offline. The offline and online transitions occur in a predefined order. Resources that are
dependent on other resources are taken offline before and brought online after the resources
upon which they depend.
Failback The process of moving resources, either individually or in a group, back to their
original node after a failed node rejoins a cluster and comes back online.
Quorum resource The quorum-capable resource selected to maintain the configuration data
necessary for recovery of the cluster. This data contains details of all of the changes that have
been applied to the cluster database. The quorum resource is generally accessible to other
cluster resources so that any cluster node has access to the most recent database changes. By
default there is only one quorum resource per cluster.
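The dependency-ordered offline and online transitions described under Failover can be modeled with a small topological sort. The resource names and dependencies below are invented for illustration.

```python
# Invented resources and dependencies. Resources that depend on others are
# brought online after, and taken offline before, the resources they need.
def online_order(deps):
    """Order resources so each comes online after all of its dependencies.

    deps maps each resource to the list of resources it depends on.
    """
    order, seen = [], set()

    def visit(resource):
        if resource in seen:
            return
        seen.add(resource)
        for dep in deps.get(resource, []):
            visit(dep)
        order.append(resource)

    for resource in deps:
        visit(resource)
    return order

# Example: a file share needs a network name, which needs an IP address.
deps = {
    "FileShare": ["NetworkName"],
    "NetworkName": ["IPAddress"],
    "IPAddress": [],
}
up = online_order(deps)
down = list(reversed(up))  # offline order: dependents are stopped first

print(up)    # ['IPAddress', 'NetworkName', 'FileShare']
print(down)  # ['FileShare', 'NetworkName', 'IPAddress']
```

Reversing the online order gives the offline order, matching the rule that dependents are taken offline before, and brought online after, the resources they depend on.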

3.6.3 Cluster Hardware Requirements


The following are general hardware requirements for building Windows Server 2003 clusters.
More detailed information is provided throughout this chapter where applicable. In order for server
clusters to be supported by Microsoft, all hardware must be selected from a list of qualified
clustering solutions.
Use a minimum of two computers running Windows Server 2003, Enterprise Edition or
Windows Server 2003, Datacenter Edition (computer hardware must also be listed in the
Windows Server Catalog). You cannot mix x86-based and Itanium architecture-based
computers in the same server cluster.
Meet minimum storage requirements. Minimum storage requirements depend on a
number of factors, such as whether or not your server cluster uses storage area network
(SAN) technology, or the type of quorum resource used in the server cluster.
Use a minimum of two network adapters for each node. In recommended configurations,
one network adapter connects the node to the other nodes in the cluster for
communication and configuration purposes (private network). The second adapter
connects the cluster to both an external network (public network) and the private network.

3.6.4 New in Windows Server 2003


Windows Server 2003 introduces a number of new clustering features that are highlighted below.
More nodes in a cluster Windows Server 2003 supports up to eight nodes per cluster.
Support for 64-bit versions of Windows Server 2003 operating system The 64-bit versions
of Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition
support the Cluster service. Note that you cannot use GUID partition table (GPT) disks for shared
cluster storage. A GPT disk is an Itanium-based disk partition style in the 64-bit versions of
Windows Server 2003. For 64-bit versions of Windows Server 2003, you must partition cluster
disks on a shared bus as master boot record (MBR) disks and not as GPT disks.
Simplified cluster configuration and setup Server cluster system files are installed by default
with Windows Server 2003. New features include the ability to create new server clusters or
add nodes to an existing cluster remotely. Another new feature, the New Server Cluster
Wizard, analyzes hardware and software configurations to identify potential problems before
installation.
Security enhancements You can reset the Cluster service account password without stopping
the Cluster service, allowing you to maintain corporate password policies without compromising
availability. In addition, server clusters now support the Kerberos version 5 authentication
protocol.
Scripting An application can be made server cluster-aware through scripting (both VBScript and
JScript are supported), rather than through resource dynamic-link library (DLL) files. Unlike DLLs,
scripting does not require knowledge of the C or C++ programming languages, which means
scripts are easier for developers and administrators to create and implement. Scripts are also
easier to customize for your applications.
Majority node set clusters In every cluster, the quorum resource maintains all configuration
data necessary for the recovery of the cluster. In majority node set clusters, the quorum data is
stored on each node, allowing for, among other things, geographically dispersed clusters.

3.6.5 Deploying Server Clusters


After your application has been evaluated for server cluster deployment, and your hardware and
network are in place, you can deploy your server cluster on Windows Server 2003, Enterprise
Edition or Windows Server 2003, Datacenter Edition. Figure 7.25 illustrates the process for
installing or upgrading a server cluster.
Figure Deploying a Server Cluster

Installing a New Server Cluster


Before installing Windows Server 2003, Enterprise Edition or Windows Server 2003, Datacenter
Edition on your cluster nodes, consult "Planning for Deployments" in Planning, Testing, and
Piloting Deployment Projects in this kit. Information in that chapter includes methods for taking
inventory of your current IT environment and how to create a functional specification that you can
use to create a clear and thorough cluster deployment plan. In addition, gather the following
materials and have them available for reference during installation:
A list of all services and applications to be deployed on server clusters.
A plan that defines which applications are to be installed on which nodes.
Failover policies for each service or application, including resource group planning.
A selected quorum model.
A physical and logical security plan for the cluster.
Specifications for capacity requirements.
Documentation for your storage system.
The Windows Server Catalog approved device drivers for network hardware and
storage systems
Documentation supplied with all cluster hardware and all applications or services that will
be deployed on the server cluster.
An IP addressing scheme for the cluster networks, both private and public.
A selected cluster name, its length limited by NetBIOS parameters.
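Because the cluster name must fit NetBIOS naming limits, a quick check can catch an invalid name before installation. The sketch below enforces the 15-character NetBIOS length limit; the disallowed-character set is a conservative assumption, and the sample names are invented.

```python
# Minimal sketch: check a candidate cluster name against the 15-character
# NetBIOS length limit. The disallowed-character set is a conservative
# assumption; the sample names are invented.
ILLEGAL = set(r'\/:*?"<>|')

def valid_cluster_name(name):
    """True if the name fits the NetBIOS length limit and avoids illegal characters."""
    return 0 < len(name) <= 15 and not any(c in ILLEGAL for c in name)

print(valid_cluster_name("WEBCLUSTER-01"))         # True: 13 characters
print(valid_cluster_name("ACCOUNTINGCLUSTER-01"))  # False: 20 characters
```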

3.6.6 Designing Network Load Balancing


Many of your deployments will include mission-critical applications and services. The servers that
host your applications and services must be able to support projected increases in the number of
users, and they must ensure that users can access mission-critical applications. To fulfill these
requirements, your solution must be highly available and scalable. Network Load Balancing (NLB)
improves availability and scalability in your solutions by distributing application load across
multiple servers.

3.6.7 Overview of the NLB Design Process


Improving availability and scalability in your solution depends on the applications and services in
your organization. A computer running the Microsoft Windows Server 2003 operating system
can provide a high level of reliability and scalable performance. However, a Network Load
Balancing cluster can achieve the higher levels of availability and performance required by
mission-critical servers.
A Network Load Balancing cluster comprises multiple servers running any version of the
Windows Server 2003 family, including the Microsoft Windows Server 2003, Standard Edition;
Windows Server 2003, Enterprise Edition; Windows Server 2003, Datacenter Edition; and
Windows Server 2003, Web Edition operating systems. The servers are combined to provide
greater scalability and availability than is possible with an individual server. Network Load
Balancing distributes client requests across the servers to improve scalability. If a server fails,
client requests are redistributed to the remaining servers to improve availability. Network Load
Balancing can improve scalability and availability for applications and services that communicate
with clients that use Transmission Control Protocol (TCP) or User Datagram Protocol (UDP).
The Network Load Balancing design process assumes that you are creating new clusters or that
you are redesigning existing Windows Load Balancing Service (WLBS) or Network Load
Balancing clusters. Upon completion of the Network Load Balancing design process, you will
have a solution that meets or exceeds your scalability and availability requirements.
NLB Design Process
Creating a Network Load Balancing design involves more than documenting Windows
Server 2003 and Network Load Balancing settings on the individual application servers. You must
identify the applications and services that can benefit from Network Load Balancing, determine
the core set of specifications in your design, ensure that your solution is secure, and provide for
scalability and availability. The process for creating your Network Load Balancing design is
shown in Figure 8.1.
Figure Designing a Network Load Balancing Solution

As you create your Network Load Balancing design, document your decisions and use that
information to deploy your Network Load Balancing solution.
NLB Fundamentals
To create a successful Network Load Balancing design and to ensure that Network Load
Balancing is correct for your solution, you need to know the fundamentals of how Network Load
Balancing provides improved scalability and availability, and how Network Load Balancing
compares with other strategies for providing scalability and availability.
3.6.8 How NLB Provides Improved Scalability and Availability
Network Load Balancing improves scalability and availability by distributing client traffic across
the servers that you include in the Network Load Balancing cluster. Each cluster host (a server
running on a cluster) runs an instance of the applications supported by your cluster. Network
Load Balancing transparently distributes client requests among the cluster hosts. Clients access
your cluster by using one or more virtual IP addresses. From the perspective of the client, the
cluster appears to be a single server that answers the client request.
As the scalability and availability requirements of your solution change, you can add or remove
servers from the cluster as necessary. Network Load Balancing automatically distributes client
traffic to take advantage of any servers that you add to the cluster. In addition, when you remove
a server from the cluster, Network Load Balancing redistributes the client traffic among the
remaining servers in the cluster.
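The add and remove behavior can be illustrated with a toy model. The host names below are invented, and the modulo mapping merely stands in for Network Load Balancing's actual distribution algorithm.

```python
# Toy model of client distribution across cluster hosts; host names are
# invented and the modulo mapping stands in for NLB's real algorithm.
def assign(clients, hosts):
    """Map each client address to a host by hashing it over the host list."""
    return {c: hosts[hash(c) % len(hosts)] for c in clients}

clients = ["10.0.0.%d" % i for i in range(1, 7)]
hosts = ["WEB-01", "WEB-02", "WEB-03"]

before = assign(clients, hosts)   # traffic spread across all three hosts
hosts.remove("WEB-02")            # a host is removed from the cluster
after = assign(clients, hosts)    # traffic redistributed automatically

# Every client is still served by one of the remaining hosts:
assert all(h in hosts for h in after.values())
```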
As an example, assume that your organization has a Web application farm running Microsoft
Internet Information Services (IIS) version 6.0 that hosts your organization's Internet presence. As
seen in Figure 8.2, Network Load Balancing allows your individual Web application servers to
service client requests from the Internet by distributing them across the cluster. On each of the
servers, you install IIS 6.0 and Network Load Balancing. By combining the individual Web
application servers into a Network Load Balancing cluster, you can load balance the requests to
improve client response times and to provide improved fault tolerance in the event that one of the
Web application servers fails.
Figure Network Load Balancing Cluster in a Web Farm

Network Load Balancing automatically detects and recovers when the entire server fails or is
manually disconnected from the network. However, Network Load Balancing is unaware of the
applications and services running on the cluster, and it does not detect failed applications or
services. To provide awareness of application or service failures, you need to add management
software, such as Microsoft Operations Manager (MOM) 2000, Microsoft Application Center
2000, a third-party application, or software developed by your organization.
When your design requires fault tolerance for servers that support your Network Load Balancing
cluster, such as servers running Microsoft SQL Server 2000, include Microsoft server
clusters. For example, you can improve the availability of the network database (SQLCLSTR-01
in the figure) by creating a two-node server cluster.
Network Load Balancing runs as an intermediate network driver in the Windows Server 2003
network architecture. Network Load Balancing is logically situated beneath higher-level
application protocols, such as Hypertext Transfer Protocol (HTTP) and File Transfer Protocol
(FTP), and above the network adapter drivers. Figure 8.3 illustrates the relationship of Network
Load Balancing in the Windows Server 2003 network architecture.
Figure Network Load Balancing in the Windows Server 2003 Network Architecture

To maximize throughput and to provide high availability in your solution, Network Load Balancing
uses a distributed software architecture. A copy of the Network Load Balancing driver runs on
each host in the cluster. The Network Load Balancing drivers allow all hosts in the cluster to
concurrently receive incoming network traffic for the cluster.
On each host in the cluster, the driver acts as an intermediary between the network adapter driver
and the TCP/IP stack. This allows a subset of the incoming network traffic to be received by the
host. Network Load Balancing uses this filtering mechanism to distribute incoming client requests
among the servers in the cluster.
Network Load Balancing architecture maximizes throughput by using a common media access
control (MAC) address to deliver incoming network traffic to all hosts in the cluster. As a result,
there is no need to route incoming packets to the individual hosts in the cluster. Because filtering
unwanted network traffic is faster than routing packets (which involves receiving, examining,
rewriting, and resending), Network Load Balancing delivers higher network throughput than
dispatcher-based software load balancing solutions. Also, as you add hosts to your Network Load
Balancing cluster, the scalability grows proportionally, and any dependence on a particular host
diminishes.
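The filtering idea can be sketched as follows: every host sees every incoming frame via the shared MAC address, and each independently applies the same deterministic rule, so exactly one host accepts each client. The CRC-modulo rule below is a stand-in for NLB's actual hashing algorithm, and the addresses are invented.

```python
import zlib

# Sketch of NLB's filtering idea: every host sees every incoming frame (shared
# MAC address) and each independently applies the same deterministic rule, so
# exactly one host accepts each client. The CRC-modulo rule is a stand-in for
# NLB's actual hashing algorithm.
def accepts(host_index, host_count, client_ip):
    """True if the host at host_index should handle traffic from client_ip."""
    return zlib.crc32(client_ip.encode()) % host_count == host_index

host_count = 4
for client_ip in ("192.0.2.10", "192.0.2.77"):
    takers = [i for i in range(host_count) if accepts(i, host_count, client_ip)]
    assert len(takers) == 1  # exactly one host accepts; the rest filter it out
```

Because each host only discards unwanted traffic rather than receiving, rewriting, and resending packets, no central dispatcher is needed, which is the source of the throughput advantage described above.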
Because Network Load Balancing load balances client traffic across multiple servers, it provides
higher availability in your solution. One or more cluster hosts can fail, but the cluster continues to
service client requests as long as any cluster hosts are running.
NLB and Round Robin DNS
Round robin Domain Name System (DNS) is a software method for distributing workload among
multiple servers, but does not prevent clients from detecting server outages. If one of the servers
fails, round robin DNS continues sending client requests to the failed server until a network
administrator detects the failure and removes the server from the DNS address list. This results in
service disruption for clients.
In contrast, Network Load Balancing automatically detects servers that have been disconnected
from the cluster and redistributes client requests to the remaining servers. Unlike round robin
DNS, this prevents clients from sending requests to the failed servers.
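The contrast can be sketched with a toy model. The host names are invented; the point is only that round robin DNS keeps handing out a failed server's address until it is manually removed, while an NLB-style cluster excludes hosts it detects as down.

```python
from itertools import cycle

# Invented host names. Round robin DNS hands out addresses in rotation and
# has no knowledge of server health; an NLB-style cluster stops directing
# traffic to hosts it detects as down.
dns_list = ["WEB-01", "WEB-02", "WEB-03"]
failed = {"WEB-02"}

rr = cycle(dns_list)
dns_answers = [next(rr) for _ in range(6)]
# Round robin DNS keeps returning the failed server until an administrator
# removes it from the address list:
assert "WEB-02" in dns_answers

# NLB automatically excludes detected-failed hosts from the distribution:
nlb_hosts = [h for h in dns_list if h not in failed]
assert "WEB-02" not in nlb_hosts
```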

3.6.9 Deploying Network Load Balancing


After completing the design for the applications and services in your Network Load Balancing
cluster, you are ready to deploy the cluster running the Microsoft Windows Server 2003
operating system in your pilot and production network environments. A successful deployment
ensures that your Network Load Balancing cluster meets or exceeds the specifications in the
design. In addition, you must ensure that the deployment of your Network Load Balancing cluster
does not disrupt the operation of any existing applications or services.

3.6.10 Overview of the NLB Deployment Process


A Network Load Balancing cluster comprises multiple servers running any version of the
Windows Server 2003 family of operating systems, including Microsoft Windows Server 2003,
Standard Edition; Windows Server 2003, Enterprise Edition; Windows Server 2003,
Datacenter Edition; and Windows Server 2003, Web Edition.
Clustering allows you to combine application servers to provide a level of scaling and availability
that is not possible with an individual server. Network Load Balancing distributes incoming client
requests among the servers in the cluster to more evenly balance the workload of each server
and prevent overload on any one server. To client computers, the Network Load Balancing
cluster appears as a single server that is highly scalable and fault tolerant.
The Network Load Balancing deployment process assumes that your design team has completed
the design of the Network Load Balancing solution for your organization and has performed
limited testing in a lab. After the design team tests the design in the lab, your deployment team
implements the Network Load Balancing solution first in a pilot environment and then in your
production environment.

3.6.11 Selecting the Automated Deployment Method


Begin the deployment of your Network Load Balancing solution by determining how to automate
your Network Load Balancing deployment. Automate the deployment of your Network Load
Balancing solution to ensure the consistency of installation, to reduce the time required to deploy
your solution, and to assist in restoring failed Network Load Balancing cluster hosts. Manually
deploy your Network Load Balancing solution only if creating and testing the automation files and
scripts would take more time than manually configuring the cluster. Figure 9.2 shows the process
for determining the best method to automate the deployment of your Network Load Balancing
solution.
Figure Selecting the Automated Deployment Method

You can automate the deployment of Windows Server 2003 and Network Load Balancing cluster
hosts by using one of the following methods:
Unattended installation
Sysprep
Remote Installation Services (RIS)
Table compares the characteristics of the various automated deployment methods.
Table Comparing Automated Deployment Methods
Deployment characteristics compared for Unattended Installation, Sysprep, and RIS:
Uses images to deploy the installation
Uses scripts to customize the installation
Supports post-installation scripts to install applications
Deployed by using local drives on the target server
Can be initiated on headless servers
Deployed from a network share
Depending on the requirements of each cluster, more than one method might be required to
deploy all your Network Load Balancing clusters.
For Network Load Balancing, you need to perform specific tasks when creating the unattended
installation and Sysprep script files:
Review the content in the Microsoft Windows Preinstallation Reference that relates to the
[MS_WLBS parameters] section.
WLBS stands for "Windows NT Load Balancing Service," the name of the load balancing
service used in Microsoft Windows NT Server version 4.0. For reasons of backward
compatibility, WLBS continues to be used in certain instances.
Ensure that the IP addresses for the cluster and all virtual clusters are entered in the
IPAddress parameter under the [MS_TCPIP parameters] section.
Typically, Network Load Balancing Manager automatically adds the cluster and virtual cluster
IP addresses to the list of IP addresses. Both unattended installation and Sysprep require
that you add the addresses to the IPAddress parameter under the [MS_TCPIP parameters]
section of the script.
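For orientation, an answer-file fragment covering these sections might look like the sketch below. All adapter addresses are placeholders, and every parameter name other than the section headings and IPAddress named above is illustrative; verify the exact syntax against the Microsoft Windows Preinstallation Reference.

```
; Hypothetical answer-file fragment; addresses and most parameter names are
; placeholders for illustration only
[MS_TCPIP parameters]
    ; Dedicated host IP first, then the cluster IP and any virtual cluster IPs
    IPAddress = 10.0.0.11,10.0.0.100
    SubnetMask = 255.255.255.0,255.255.255.0

[MS_WLBS parameters]
    ; Network Load Balancing (historically WLBS) settings for this host
    ClusterIPAddress = 10.0.0.100
    HostPriority = 1
```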

3.6.12 Implementing a New Cluster


Many of the mission-critical applications deployed within your organization will be new
applications, which require you to deploy new Network Load Balancing clusters. The process for
implementing a new cluster involves more than installing Windows Server 2003 and Network
Load Balancing on the individual application servers. Implementing a new cluster might require
additional network infrastructure, network services, file services, database services, and security
services. Figure 9.3 shows the steps that you must complete before and after the implementation
of the new cluster.
Figure Implementing a New Cluster

3.6.13 Preparing to Implement the Cluster


Your new cluster is dependent upon the network infrastructure and other network services in your
total solution. Ensure that these network infrastructure and other network services are deployed
prior to implementing your cluster.
Prepare for the implementation of the new cluster by using the information documented in the
"NLB Cluster Host Worksheet" and other documentation (such as Visio drawings of the network
environment) that your design team completed for a specific cluster host during the design
process. Coordinate with the operations team during this step in the process to review the
changes that will occur in your organization's network environment.
To prepare for the implementation of the new cluster, complete the following tasks:
1. Implement the network infrastructure required by the cluster and by the applications and
services running on the cluster.
2. Implement any networking services required by the applications and services running on
the cluster.
3. Select the method for automating any additional Network Load Balancing configuration.

3.7 Implementing the Network Infrastructure


Before you implement the cluster, you must implement the network infrastructure that connects
the cluster to client computers, to other servers within your organization, and to management
consoles. The network infrastructure components include:
Network cables
Hubs
Switches
Routers
Firewalls

Example: Preparing to Implement the Cluster


An organization is deploying a new Web application that will be accessed by a large volume of
Internet users. Because of scaling and availability considerations, the organization will deploy the
new, high-volume Web application on a Network Load Balancing cluster. Figure 9.4 illustrates the
organization's network environment prior to the implementation of the new cluster.
Figure Network Environment Before Implementing New Cluster

In addition to deciding that the new Web farm will run on a Network Load Balancing cluster, the
design team also made the following configuration decisions:
No single router, switch, or Internet Information Services (IIS) Web farm server failure will
prevent users from running the Web application.
The Web application will store data in a database running Microsoft
SQL Server 2000 on a server cluster.
The Web application executables, including Active Server Pages (ASP) files, Hypertext Markup
Language (HTML) pages, and other executable code, will be stored on a file server running on a
server cluster.
Accounts used for authenticating Internet users will be stored in the Active Directory
directory service.
As the first step in the organization's deployment of the Web application, the IIS 6.0 Web farm,
and Network Load Balancing, the organization must restructure the network infrastructure to
support the new Web farm and cluster. Figure illustrates the organization's network environment
after preparing for the implementation.
Figure Network Environment After Preparing to Implement a New Cluster

Table lists the deployment steps that were performed prior to the implementation of the new
cluster and the reasons for performing those steps.
Table Deployment Steps Prior to Implementation of the New Cluster
Add Firewall-02 and Firewall-03: Provide redundancy and load balancing.
Add Switch-01 and Switch-02: Provide redundancy and load balancing.
Add network segments on Switch-01 and Switch-02: Connect the IIS 6.0 Web farm to the
network.
Configure Switch-01 and Switch-02 to belong to the same VLAN: Provide load balancing of
client requests by using Network Load Balancing.
Add SQLCLUSTR-01: Provide database support for the Web application on a Microsoft server
cluster.
Add FILECLUSTR-01: Provide secured storage for the Web application executables and content
on a Microsoft server cluster.
Add DC-01 and DC-02: Provide storage and management of user accounts used in
authenticating Internet users.

3.7.1 Implementing the Cluster


To implement the new cluster, complete the following tasks:
1. Install and configure the hardware and Windows Server 2003 for each cluster host.
2. Install and configure the first Network Load Balancing cluster host.


3. Install and configure additional Network Load Balancing cluster hosts.

3.7.2 Installing and Configuring the Hardware and Windows Server 2003
The first step in implementing the new cluster is to install and configure the hardware and Windows Server 2003 for each cluster host. Install and configure all cluster host hardware at the same time so that any configuration errors are eliminated before you install and configure the Network Load Balancing cluster.
To install and configure Windows Server 2003 on the cluster host hardware, you must be logged on with a user account that is a member of the local Administrators group on all cluster hosts.
Install and configure each cluster host by using the information documented in the "NLB Cluster Host Worksheet" that your design team completed for that host during the design process.
To install and configure the hardware and Windows Server 2003 on each cluster host in the new
cluster, complete the following tasks:
1. Install the cluster host hardware in accordance with the manufacturer's recommendations.
2. Connect the cluster host hardware to the network infrastructure.
3. Install Windows Server 2003 with the default options and specifications from the
worksheet for the cluster host.
4. Install any additional services (such as IIS 6.0 or Routing and Remote Access) by using
the design specifications for the service.
5. Configure the TCP/IP property settings and verify connectivity for the cluster adapters.
6. If a separate management network is used, configure the TCP/IP property settings and
verify connectivity for the management adapter.
7. Configure each server to be a member server in a domain created specifically for
managing the cluster and other related servers.
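Before configuring TCP/IP settings and domain membership, it can help to sanity-check that every host's worksheet entry is complete. The sketch below is illustrative only: the field names (computer_name, cluster_ip, dedicated_ip, subnet_mask) are assumptions, not the actual layout of the "NLB Cluster Host Worksheet".

```python
# Hypothetical sketch: check per-host worksheet entries before configuring
# TCP/IP (step 5) and domain membership (step 7). Field names are
# assumptions for illustration, not the real worksheet layout.
REQUIRED_FIELDS = {"computer_name", "cluster_ip", "dedicated_ip", "subnet_mask"}

def validate_worksheet(hosts):
    """Return (host name, missing fields) for every incomplete entry."""
    problems = []
    for host in hosts:
        missing = REQUIRED_FIELDS - host.keys()
        if missing:
            problems.append((host.get("computer_name", "<unnamed>"), sorted(missing)))
    return problems

hosts = [
    {"computer_name": "IIS-01", "cluster_ip": "10.0.0.50",
     "dedicated_ip": "10.0.0.51", "subnet_mask": "255.255.255.0"},
    {"computer_name": "IIS-02", "cluster_ip": "10.0.0.50"},   # incomplete entry
]
print(validate_worksheet(hosts))   # [('IIS-02', ['dedicated_ip', 'subnet_mask'])]
```

A fuller check would also verify that every host shares the same cluster IP address and that each dedicated IP address is unique.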

3.7.3 Installing and Configuring the First Cluster Host


After you have installed and configured the hardware for the cluster, you are ready to install and
configure the first cluster host. The first cluster host acts as a master copy when you use an
image-based deployment method (such as RIS, Sysprep, or a third-party product) to deploy the
remaining cluster hosts.
An image-based deployment is faster and ensures consistency when implementing the remaining
cluster hosts, by reducing or eliminating manual configuration. In addition, the same image-based
deployment method can be reused after the deployment to restore failed cluster hosts.
Perform the following task on the first cluster host by using the "NLB Cluster Host Worksheet"
that your design team completed for the first cluster host:
1. If you did not use the automated installation process to create the new cluster, start
Network Load Balancing Manager and create a new cluster.
Tip
o You can start Network Load Balancing Manager by running Nlbmgr.exe.
2. Install the applications and services on the first cluster host.
Examples of Windows Server 2003 services to be installed at this time include IIS or
Terminal Services. For more information about installing Windows Server 2003 services, see
the chapters that discuss those services in the Microsoft Windows Server 2003
Deployment Kit.
Examples of applications to be installed at this time include Web applications or Windows
applications that run on Terminal Services. For more information about installing the
applications running on your cluster, see the documentation that accompanies your
application.
3. Enable monitoring and health checking on the first cluster host.
A Microsoft Operations Manager (MOM) Management Pack exists for Network Load Balancing. If your organization uses MOM to monitor and manage its servers, include the MOM Management Pack for Network Load Balancing on the cluster hosts.

4. Verify that the first Network Load Balancing cluster host responds to client queries by
directing requests to the cluster IP address.
Test the first cluster host by specifying the cluster IP address or a virtual cluster IP address in
the client software that is used to access the application or service running on the cluster. For
example, a client accessing an IIS application would put the cluster IP address or virtual
cluster IP address in the Web browser address line.

3.7.4 Installing and Configuring Additional Cluster Hosts


After you have installed and configured the first cluster host, you are ready to install and configure
the remaining cluster hosts in the cluster. The first cluster host acts as a master copy when you
use an image-based deployment method (such as RIS, Sysprep, or a third-party product) to
deploy the remaining cluster hosts.
Perform the following tasks on the remaining cluster hosts by using the "NLB Cluster Host
Worksheet" that your design team completed for each cluster host:
1. Create an image of the first cluster host that has just been deployed (discussed in the
previous section) as required by one of the following image-based automated installation
methods:
o Sysprep
o RIS
o Third-party products
2. Restore the image of the first cluster host (created in step 1) to one of the remaining cluster hosts, following the directions provided in the documentation for the image-based installation method you used.
3. Configure any computer-specific information (such as computer name and IP address) on the newly deployed cluster host.
4. Enable monitoring and health checking for the additional cluster host. Use the same methods
as described for the first cluster host.
5. Verify that the additional cluster host responds to client requests. Use the same methods as
described for the first cluster host.
6. Complete steps 2 through 5 for each remaining cluster host in the Network Load Balancing
cluster.
7. Ensure that the cluster is load balancing requests across all cluster hosts (based on the port
rules of the cluster).
The time required to create and test the images used in an image-based deployment can be prohibitive. For a small cluster, for example one that consists of only three cluster hosts, it might take less time to install and configure the remaining cluster hosts in the same way that you installed and configured the first Network Load Balancing cluster host. If you decide to deploy the cluster hosts using a method other than image-based deployment, you must still ensure that you can restore a failed cluster host.
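To see why the port rules must match on every host, it helps to picture how Network Load Balancing decides which host answers a given client: each host independently runs the same hash over the client address, and only the host that owns the resulting bucket responds. The sketch below is a simplified stand-in for NLB's internal hashing, not the real algorithm.

```python
# Conceptual model of NLB request distribution: every cluster host computes
# the same hash over the client address, and only the host that owns the
# resulting bucket answers. NLB's actual hash is internal to the driver;
# md5 here is just an illustrative stand-in.
import hashlib

def owning_host(client_ip, host_count):
    """Map a client IP to one of host_count cluster hosts (0-based index)."""
    digest = hashlib.md5(client_ip.encode("ascii")).digest()
    return digest[0] % host_count

for ip in ["192.0.2.10", "192.0.2.11", "192.0.2.12", "192.0.2.13"]:
    print(ip, "-> host", owning_host(ip, 3))
```

Because every host computes the same mapping independently, no central coordinator is needed; this is also why mismatched port rules across hosts break load balancing.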
Example: Implementing the New Cluster
The organization is now ready to implement the new IIS 6.0 Web farm that uses Network Load Balancing for load balancing and fault tolerance. The network infrastructure and additional networking services have been deployed in preparation for the implementation.
In this step, the organization installed and configured the first cluster host as a model for the
remaining cluster hosts. Then the organization deployed the remaining cluster hosts by using an
image-based deployment method. The figure illustrates the network environment after the implementation of the new IIS 6.0 Web farm and Network Load Balancing.
Figure: Network Environment After Installing the New Cluster


The following table lists the deployment steps that were performed to implement the new cluster and the reasons for performing those steps.

Table: Deployment Steps for Implementing the New NLB Cluster
- Add IIS-01 through IIS-08 server hardware: Server hardware needs to be connected to the network infrastructure in preparation for Network Load Balancing deployment.
- Install Windows Server 2003 and Network Load Balancing on IIS-01 by using unattended installation: Unattended setup is chosen because of the limited number of hosts to be deployed.
- Create an image of IIS-01 to use as a model for RIS deployment: RIS allows the servers to be reimaged in the event of a server failure.
- Deploy the image on IIS-02 through IIS-08: Image deployment ensures a consistent configuration on all servers in the Network Load Balancing cluster.
- Verify that the Web farm responds to client requests: Verification ensures that the Web farm is properly configured and that Network Load Balancing is load balancing requests.

3.8 What Is Backup?


The Backup utility in the Microsoft Windows Server 2003 operating systems helps you protect
your data if your hard disk fails or files are accidentally erased due to hardware or storage media
failure. By using Backup, you can create a duplicate copy of the data on your hard disk and then
archive it on another storage device, such as a hard disk or a tape.
If the original data on your hard disk is accidentally erased or overwritten, or becomes
inaccessible because of a hard-disk malfunction, you can easily restore it from the disk or
archived copy.
Backup uses the Volume Shadow Copy service to create an accurate copy of the contents of your hard drive, including any open files or files that are being used by the system. Users can continue to access the system while Backup is running, without risking loss of data.
Using Backup, you can:
- Archive selected files and folders on your hard disk.
- Restore the archived files and folders to your hard disk or any other disk you can access.
- Make a copy of your computer's System State data.
- Use Automated System Recovery (ASR) to create a backup set that contains the System State data, system services, and all disks associated with the operating system components. ASR also creates a floppy boot disk that contains information about the backup, the disk configurations (including basic and dynamic volumes), and how to restore your system.
- Make a copy of any Remote Storage data and any data stored in mounted drives.
- Create a log of what files were backed up and when the backup was performed.
- Make a copy of your computer's system partition, boot partition, and the files needed to start up your system in case of a computer or network failure.
- Schedule regular backups to keep your archived data up-to-date.
Backup also performs simple media management functions such as formatting. You can perform
more advanced management tasks such as mounting and dismounting a tape or disk by using
Removable Storage, which is a feature in Windows Server 2003.

3.8.1 Types of Backup


The Backup utility supports five methods of backing up data on your computer: a copy backup,
daily backup, differential backup, incremental backup, and normal backup.
Types of Backup
- Copy backup: Copies all the files that you select, but does not mark each file as having been backed up (in other words, the archive attribute is not cleared). Copying is useful if you want to back up files between normal and incremental backups, because copying does not affect these other backup operations.
- Daily backup: Copies all the files that you select that have been modified on the day that the daily backup is performed. The backed-up files are not marked as having been backed up (in other words, the archive attribute is not cleared).
- Differential backup: Copies files that have been created or changed since the last normal or incremental backup. It does not mark files as having been backed up (in other words, the archive attribute is not cleared). If you are performing a combination of normal and differential backups, you must have the last normal backup as well as the last differential backup to restore files and folders.
- Incremental backup: Backs up only those files that have been created or changed since the last normal or incremental backup. It marks files as having been backed up (in other words, the archive attribute is cleared). If you use a combination of normal and incremental backups, you will need the last normal backup set as well as all incremental backup sets to restore your data.
- Normal backup: Copies all the files that you select and marks each file as having been backed up (in other words, the archive attribute is cleared). With normal backups, you only need the most recent copy of the backup file or tape to restore all of the files. You usually perform a normal backup the first time you create a backup set.

If you want the quickest backup method that requires the least amount of storage space, you
should back up your data using a combination of normal backups and incremental backups.
However, recovering files from this combination of backups can be time-consuming and difficult
because the backup set might be stored on several disks or tapes.
If you want to restore your data more easily, you should back up your data using a combination of
normal backups and differential backups. This backup set is usually stored on only a few disks or
tapes. However, this combination of backups is more time-consuming.
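The trade-off above can be made concrete by computing which backup sets a full restore needs under each strategy. This is a sketch of the rules just described, not a real restore tool:

```python
# Sketch: given an ordered backup history, which sets does a full restore
# need? Models the rules above: incrementals require the last normal plus
# every incremental since; differentials require only the last normal plus
# the most recent differential.
def restore_set(history):
    """history: list of 'normal', 'incremental', or 'differential'.
    Returns indices of the backup sets needed for a full restore."""
    last_normal = max(i for i, t in enumerate(history) if t == "normal")
    tail = list(enumerate(history))[last_normal + 1:]
    incrementals = [i for i, t in tail if t == "incremental"]
    differentials = [i for i, t in tail if t == "differential"]
    if incrementals:
        return [last_normal] + incrementals          # normal + all incrementals
    if differentials:
        return [last_normal, differentials[-1]]      # normal + latest differential
    return [last_normal]

print(restore_set(["normal", "incremental", "incremental", "incremental"]))    # [0, 1, 2, 3]
print(restore_set(["normal", "differential", "differential", "differential"])) # [0, 3]
```

The incremental strategy needs every set since the last normal backup (four media in the first example), while the differential strategy needs only two, which is exactly why restores from differentials are simpler.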
Volume Shadow Copy Service
Backup uses the Volume Shadow Copy service to create a volume shadow copy, which is an
accurate copy of the contents of your hard drive, including any open files, files that are being
used by the system, and databases that are held open exclusively.
Backup uses the Volume Shadow Copy service to ensure that:
Applications can continue to write data to the volume during a backup.
Files that are open are no longer omitted during a backup.
Backups can be performed at any time, without locking out users.
If you choose to disable the volume shadow copy using advanced options or if the service fails,
Backup will revert to creating a backup without the Volume Shadow Copy service technology. If
this occurs, Backup skips files that are open or in use by other applications at the time of the
backup.
Important
Some applications manage storage consistency differently while files are open, which
can affect the consistency of the files in a backup. For critical applications, consult the
application documentation or your provider for information about the recommended backup
method. When in doubt, close the application before performing a backup.

Files Skipped During Backup


Backup skips certain files by default, including files in the following categories:
- Files that the person performing the backup does not have permission to read. Only users with backup rights can copy files that they do not own. Members of the Backup Operators and Administrators groups have these permissions by default.
- Temporary files, for example Pagefile.sys, Hiberfil.sys, Win386.swp, 386spart.par, Backup.log, and Restore.log. These files are not backed up or restored by Backup. The list of skipped temporary files is embedded into Backup and cannot be changed.
- Registry files on remote computers. Windows Server 2003 backs up only local registry files.
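The default skip rules can be summarized in a few lines of code. The temporary-file list below is the one quoted in the text; the permission check is a simplified stand-in for the real ACL and backup-right evaluation:

```python
# Sketch of Backup's default skip rules. The temporary-file list is the one
# quoted above (embedded in Backup, not changeable); the permission flag is
# a simplified stand-in for the real ACL/backup-right check.
SKIPPED_TEMP_FILES = {
    "pagefile.sys", "hiberfil.sys", "win386.swp",
    "386spart.par", "backup.log", "restore.log",
}

def is_skipped(filename, can_read=True):
    """Return True if Backup would skip this file by default."""
    if filename.lower() in SKIPPED_TEMP_FILES:
        return True       # embedded temporary-file list
    if not can_read:
        return True       # no read permission and no backup right
    return False

print(is_skipped("Pagefile.sys"))                # True
print(is_skipped("report.doc"))                  # False
print(is_skipped("report.doc", can_read=False))  # True
```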

System State Data


With Backup, you can back up the System State data for your computer. System State data
includes the registry, the COM+ Class Registration database, files under Windows File
Protection, and system boot files. Depending on the configuration of the server, other data might
be included in the System State data. For example, if the server is a certificate server, the System
State data also contains the Certificate Services database. If the server is a domain controller, the
Active Directory directory service database and the SYSVOL directory are included in the
System State data.
With Backup, the following system components might be included in a backup of the System State:

Component: When it is included in System State
- Registry: Always
- COM+ Class Registration database: Always
- Boot files, including the system files: Always
- Certificate Services database: If it is a Certificate Services server
- Active Directory database: Only if it is a domain controller
- SYSVOL directory: Only if it is a domain controller
- Cluster service information: If it is within a cluster
- IIS Metadirectory: If it is installed
- System files that are under Windows File Protection: Always

When you choose to back up or restore the System State data, all of the System State data that
is relevant to your computer is backed up or restored. You cannot choose to back up or restore
individual components of the System State data because of dependencies among the System
State components. However, you can restore the System State data to an alternate location. If
you do this, only the registry files, SYSVOL directory files, cluster database information files, and
system boot files are restored to the alternate location. The Active Directory database, Certificate
Services database, and COM+ Class Registration database are not restored if you designate an
alternate location when you restore the System State data.
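The all-or-nothing rule, and the alternate-location exception, can be sketched as a simple filter over the component list (the component names here are informal labels for illustration, not API identifiers):

```python
# Sketch of the restore rules above: System State restores all components
# together, but an alternate-location restore keeps only a subset.
# Component names are informal labels, not API identifiers.
ALTERNATE_LOCATION_COMPONENTS = {
    "registry", "SYSVOL", "cluster database information", "system boot files",
}

def restored_components(system_state, alternate_location=False):
    """Return the set of System State components actually restored."""
    if not alternate_location:
        return set(system_state)     # all components, never a partial pick
    return set(system_state) & ALTERNATE_LOCATION_COMPONENTS

state = {"registry", "SYSVOL", "system boot files",
         "Active Directory database", "COM+ Class Registration database"}
print(sorted(restored_components(state, alternate_location=True)))
# -> ['SYSVOL', 'registry', 'system boot files']
```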
Files Under Windows File Protection
Backup works together with the catalog file for the Windows File Protection service when backing
up and restoring boot and system files. System files are backed up and restored as a single
entity. The Windows File Protection service catalog file, located in the folder
systemroot\system32\catroot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}, is backed up with
the system files.
In Windows NT 4.0 and earlier, backup programs could selectively back up and restore
operating system files. However, Windows 2000 Server and Windows Server 2003 do not allow
incremental restores of operating system files.
There is an Advanced Backup option that automatically backs up protected system files with the
System State data. All of the system files that are in the systemroot\ directory and the startup files
that are included with the System State data are backed up when you use this option.

3.8.2 Permissions and User Rights Required to Back Up


You must have certain permissions and user rights to back up files and folders using Backup. If
you are an administrator or a backup operator in a local group, you can back up any file and
folder on the local computer to which the local group applies. Similarly, if you are an administrator
or backup operator on a domain controller, you can back up any file and folder locally on any
computer in the domain or any computer on a domain with which you have a two-way trust
relationship. However, if you are not an administrator or a backup operator and you want to back
up files, then you must be the owner of the files and folders that you want to back up, or you must
have one or more of the following permissions for the files and folders you want to back up: Read,
Read and Execute, Modify, and Full Control.
You must also be certain that there are no disk-quota restrictions that might restrict your access
to a hard disk. These restrictions make it impossible for you to back up data. You can check
whether you have any disk-quota restrictions by right-clicking the disk you want to save data to,
clicking Properties, and then clicking the Quota tab.
You can also restrict access to a backup file by selecting Allow only the owner and the
Administrator access to the backup data in the Backup Job Information dialog box. If you
select this option, only an administrator or the person who created the backup file will be able to
restore the files and folders.

3.8.3 Automated System Recovery


Automated System Recovery (ASR) is a part of Backup that you can use to recover a system that
will not start. With ASR, you can create ASR sets on a regular basis as part of an overall plan for
system recovery in case of system failure. You should use ASR as a last resort in system
recovery, only after you have exhausted other options such as the startup options Safe Mode and
Last Known Good Configuration.
ASR is a recovery option that has two parts: ASR backup and ASR restore. You can access the
backup portion through the Automated System Recovery Preparation Wizard located in
Backup. The Automated System Recovery Preparation Wizard creates an ASR set, which is a
backup of the System State data, system services, and all disks associated with the operating
system components. It also creates a floppy disk, which contains information about the backup,
the disk configurations (including basic and dynamic volumes), and how to restore your system.
You can access the restore part of ASR by pressing F2 when prompted in the text mode portion
of Setup. ASR reads the disk configurations from the floppy disk and restores all of the disk
signatures, volumes and partitions on the disks that are required to start your computer (at a
minimum). It will attempt to restore all of the disk configurations, but under some circumstances it
might not be able to. ASR then installs a simple installation of Windows and automatically starts
to restore from backup using the backup ASR set.
Note
- ASR does not include data files. You should back up data files separately on a regular basis and restore them after the system is working.
- ASR supports FAT16 volumes only up to 2.1 gigabytes (GB). ASR does not support 4-GB FAT16 partitions that use a cluster size of 64 KB. If your system contains 4-GB FAT16 partitions, convert them from FAT16 to NTFS before using ASR.

3.8.4 Restoring File Security Settings


Backup preserves permissions, ownership, and audit flags on files restored to NTFS volumes, but
not on files restored to FAT volumes. It is not possible to secure data on FAT volumes.
When you restore files to a new computer or hard disk, you do not have to restore security
information. The files inherit the permissions of the directory in which they are placed. If the
directory has no permissions, the file retains its previous permissions, including ownership.

3.8.5 Restoring Distributed Services


In Backup, you can restore distributed services data that is part of the System State data, such as
the Active Directory database, using one of three restore methods:
Primary restore
Normal (nonauthoritative) restore
Authoritative restore
To understand how each restore method works, it is important to understand how the Backup
utility backs up data for distributed services. When you back up the System State data on a
domain controller, you are backing up all Active Directory data that exists on that server (along
with other system components such as the SYSVOL directory and the registry). To restore these
distributed services to that server, you must restore the System State data. However, the number
and configuration of domain controllers in your system will dictate the type of restore method you
choose.
For example, if you need to roll back replicated Active Directory changes, but have more than one
domain controller in your organization, you will need to perform an authoritative restore to ensure
that your restored data gets replicated to all of your servers. However, if you need to restore
Active Directory data on a stand-alone domain controller or on the first of several domain
controllers, you will need to perform a primary restore. If you need to restore Active Directory data
on just one domain controller in a system where Active Directory data is replicated across several
domain controllers, you can use a normal restore if your restored data does not have to be
replicated to all your servers.
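The decision logic just described can be condensed into a small helper. This is a mnemonic sketch of the rules in the text, with invented parameter names, not a real administrative tool:

```python
# Mnemonic sketch of the restore-method decision described above.
# Parameter names are invented for illustration.
def choose_restore_method(only_or_first_dc, must_win_replication):
    """Pick the restore method for Active Directory System State data."""
    if only_or_first_dc:
        return "primary"         # stand-alone DC, or first of several DCs
    if must_win_replication:
        return "authoritative"   # restored objects must replicate outward
    return "normal"              # nonauthoritative; newer replicas prevail

print(choose_restore_method(only_or_first_dc=True,  must_win_replication=False))  # primary
print(choose_restore_method(only_or_first_dc=False, must_win_replication=True))   # authoritative
print(choose_restore_method(only_or_first_dc=False, must_win_replication=False))  # normal
```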


Primary restore Use this type of restore when the server you are trying to restore is the only
running server of a replicated data set (for example, the SYSVOL and File Replication service are
replicated data sets). Select primary restore only when restoring the first replica set to the
network. Do not use a primary restore if one or more replica sets have already been restored.
Typically, perform a primary restore only when all the domain controllers in the domain have
failed, and you are trying to rebuild the domain from backup.
Distributed Data: Reason for Using Primary Restore of System State Data
- Active Directory: Restoring a stand-alone domain controller, or restoring the first of several domain controllers.
- SYSVOL: Restoring a stand-alone domain controller, or restoring the first of several domain controllers.
- Replica sets: Restoring the first replica set.

Normal restore During a normal restore operation, Backup operates in nonauthoritative restore
mode. That is, any data that you restore, including Active Directory objects, will have their original
update sequence number. The Active Directory replication system uses this number to detect and
propagate Active Directory changes among the servers in your organization. Because of this, any
data that is restored nonauthoritatively will appear to the Active Directory replication system as
though it is old, which means the data will never get replicated to your other servers. Instead, if
newer data is available from your other servers, the Active Directory replication system will use
this to update the restored data. To replicate the restored data to the other servers, you must use
an authoritative restore.
Distributed Data: Reason for Using Normal Restore of System State Data
- Active Directory: Restoring a single domain controller in a replicated environment.
- SYSVOL: Restoring a single domain controller in a replicated environment.
- Replica sets: Restoring all but the first replica set (that is, sets 2 through n, for n replica sets).

Authoritative restore To authoritatively restore Active Directory data, you need to run the
Ntdsutil utility after you have restored the System State data but before you restart the server.
The Ntdsutil utility lets you mark Active Directory objects for authoritative restore. When an object
is marked for authoritative restore its update sequence number is changed so that it is higher
than any other update sequence number in the Active Directory replication system. This will
ensure that any replicated or distributed data that you restore is properly replicated or distributed
throughout your organization.
For example, if you inadvertently delete or modify objects stored in Active Directory, and those
objects are replicated or distributed to other servers, you will need to authoritatively restore those
objects so they are replicated or distributed to the other servers. If you do not authoritatively
restore the objects, they will never get replicated or distributed to your other servers because they
will appear to be older than the objects currently on your other servers. Using the Ntdsutil utility to
mark objects for authoritative restore ensures that the data you want to restore gets replicated or
distributed throughout your organization. On the other hand, if your system disk has failed or the
Active Directory database is corrupted, then you can simply restore the data nonauthoritatively
without using the Ntdsutil utility.
You can run the Ntdsutil command-line utility from the command prompt. Help for the Ntdsutil
utility is available through the command prompt by typing ntdsutil /?.
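The update sequence number mechanics can be illustrated with a toy model. The object layout below is invented for illustration (real replication compares richer per-attribute version data, and Ntdsutil's version increase is configurable); it only shows why a nonauthoritative restore loses replication and an authoritative one wins:

```python
# Toy model of why authoritative restore works: replication keeps the copy
# with the higher update sequence number (USN). Marking restored data
# authoritative bumps its USN above the live copy. The bump value and the
# object layout are illustrative, not Active Directory's real format.
def replicate(live, restored, authoritative=False):
    """Return the version of the object that wins replication."""
    if authoritative:
        restored = dict(restored, usn=live["usn"] + 100_000)  # Ntdsutil-style bump
    return restored if restored["usn"] > live["usn"] else live

live_copy = {"name": "jsmith", "usn": 5000, "deleted": True}    # bad change, replicated
backup    = {"name": "jsmith", "usn": 4200, "deleted": False}   # restored from tape

print(replicate(live_copy, backup)["deleted"])                      # True: backup looks old and loses
print(replicate(live_copy, backup, authoritative=True)["deleted"])  # False: the restore wins
```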
Distributed Data: Reason for Using Authoritative Restore of System State Data
- Active Directory: Rolling back or undoing changes.
- SYSVOL: Resetting data.
- Replica sets: Rolling back or undoing changes.

3.8.6 What Is Shadow Copies for Shared Folders?


Shadow Copies for Shared Folders is a new file-storage technology in the Microsoft Windows
Server 2003 operating systems. Shadow Copies for Shared Folders uses the Volume Shadow
Copy service to provide point-in-time copies of files that are located on a shared network
resource, such as a file server. With the Previous Versions client for Shadow Copies for Shared
Folders, users can view shared files and folders as they existed at points of time in the past,
without administrator assistance. Accessing previous versions of files, or shadow copies, is useful
because users can:
- Recover files that were accidentally deleted. If users accidentally delete a file, they can open a previous version and copy it to a safe location.
- Recover from accidentally overwriting a file. If users accidentally overwrite a file, they can recover a previous version of the file.
- Compare different versions of a file while working. Users can use previous versions when they want to check what has changed between two versions of a file.
A common scenario for recovering lost or corrupted files occurs when an end user submits an
urgent request to the IT help desk to find an archived version of a file. If the organization has an
archiving system in place, this request usually requires a costly and time-intensive search of
archived media, which in many instances is a tape backup. This situation creates several
problems:
- Potential loss of business if the lost document is time-sensitive or labor-intensive to replace
- Decreased productivity for the end user
- Increased cost to help-desk and IT-support services
Because end users can access previous versions of files by themselves, using Shadow Copies for Shared Folders for routine file recovery scenarios can help to:
- Reduce demand on busy administrators, for example, by reducing requests to restore files from tape.
- Reduce the cost of recovering single or multiple files.
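The point-in-time idea behind Shadow Copies for Shared Folders can be modeled in a few lines. The snapshot store below is purely illustrative; the real feature snapshots volume blocks through the Volume Shadow Copy service rather than copying file contents like this:

```python
# Toy model of point-in-time copies on a share: snapshots taken on a
# schedule let a user pull back a previous version of a file without
# administrator help. Purely illustrative; the real feature snapshots
# volume blocks via the Volume Shadow Copy service.
snapshots = []   # list of (timestamp, {path: contents}) pairs

def take_snapshot(timestamp, share):
    snapshots.append((timestamp, dict(share)))   # point-in-time copy

def previous_versions(path):
    """All saved versions of a file, oldest first."""
    return [(ts, files[path]) for ts, files in snapshots if path in files]

share = {"report.doc": "draft 1"}
take_snapshot("07:00", share)
share["report.doc"] = "draft 2"
take_snapshot("12:00", share)
del share["report.doc"]                      # accidental deletion

print(previous_versions("report.doc")[-1])   # ('12:00', 'draft 2')
```

After the accidental deletion, the user can still open either saved version and copy it back, which is exactly the self-service recovery scenario described above.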

3.8.7 How Shadow Copies for Shared Folders Works


In this section
Shadow Copies for Shared Folders Architecture
Source Volume
Storage Volume
Previous Versions Clients
File Permissions
As described above, Shadow Copies for Shared Folders uses the Volume Shadow Copy service to provide point-in-time copies, or shadow copies, of files located on shared network resources, such as a file server. The following sections describe the components that make this possible and how they interact.

3.8.8 Shadow Copies for Shared Folders Architecture

Page 110 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

There are three main components that make up Shadow Copies for Shared Folders: the source
volume, the storage volume, and the Previous Versions Client.
Shadow Copies for Shared Folders Components

Source volume: The volume that is being copied. This volume is located on a server running
Windows Server 2003 and has Shadow Copies for Shared Folders enabled. This volume contains
several folders that are shared on the network. This is typically a volume on a file server.

Storage volume: The volume where shadow copies are stored.

Previous Versions Client: The interface that is used to view shadow copies. The client is
available for the Windows Server 2003, Windows XP, and Windows 2000 operating systems.

The following diagram shows the main components of Shadow Copies for Shared Folders and
how they interact.
Shadow Copies for Shared Folders Architecture

Source Volume
The source volume is the volume that is being copied, typically a volume on a file server. Shadow
Copies for Shared Folders is enabled on a per-volume basis. That is, you can only make shadow
copies of an entire volume. You cannot select specific shared folders and files on a volume to be
copied.
The volume must reside on a server running Windows Server 2003 and must be formatted using
the NTFS file system. Shadow Copies for Shared Folders is built upon the Volume Shadow Copy
service technology, which provides a way to make copies of open files.
If the source volume contains mounted drives, the mounted drives are not included when shadow
copies are taken. You should enable shadow copies only on volumes without mount points, or only
when you do not want the shared resources on the mounted volume to be copied.
You can access the server portion of Shadow Copies for Shared Folders through the Shadow
Copies tab of the Local Disk Properties dialog box.
Note
When you enable Shadow Copies for Shared Folders on a volume, a default scheduled
task is also created. The default schedule for copies is twice a day at 7:00 A.M. and 12:00
noon, Monday through Friday.

Storage Volume


The storage volume is where shadow copies are stored. Shadow copies can be stored on any
NTFS-formatted volume that is available to the server. Because of the high I/O involved in
creating the copies, we recommend that you store the shadow copies on a volume on a separate
disk from the disk that contains the source volume.
Shadow Copies for Shared Folders works by making a block-level copy of any changes that have
occurred to a file since the last shadow copy was created. The file changes are copied and stored
as blocks, or units of data. Generally, the entire file is not copied. Only the previous values of the
changed blocks are copied to the storage area. As a result, previous versions of files do not
usually take up as much disk space as the current file.
However, the amount of disk space that is used for changes can vary, depending on the
application that changed the file. For example, some applications rewrite the entire file when a
change is made, but other applications add changes to the existing file. If the entire file is
rewritten to disk when a change is made, then the shadow copy contains the entire file.
The minimum amount of storage space that you can specify to be used for storing shadow copies
on the storage volume is 400 megabytes (MB). The default storage size is 10% of the source
volume (the volume being copied). When the storage limit is reached, older versions of the
shadow copies will be deleted and cannot be restored. There is also a limit of 64 shadow copies
per volume that can be stored. When this limit is reached, the oldest shadow copy will be deleted
and cannot be retrieved.
Important
Shadow copies are read-only. You cannot edit the contents of a shadow copy.
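The copy-on-write behavior and the 64-copy retention limit described above can be illustrated with a short simulation. Python is used purely for illustration: the class and method names below are invented, and the real mechanism is implemented inside the Volume Shadow Copy service, not exposed as an API like this.

```python
# Minimal simulation of the copy-on-write storage described above.
# All names are invented for illustration; this is not a Windows API.

MAX_SHADOW_COPIES = 64  # per-volume limit; the oldest copy is deleted first

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # current contents, one item per block
        self.shadows = []            # oldest first; each maps block index -> old value

    def create_shadow(self):
        # A new shadow copy starts empty; nothing is copied until a block changes.
        if len(self.shadows) == MAX_SHADOW_COPIES:
            self.shadows.pop(0)      # oldest shadow copy is lost and cannot be retrieved
        self.shadows.append({})
        return len(self.shadows) - 1

    def write(self, index, value):
        # Copy-on-write: before overwriting, save the previous value for every
        # shadow copy that has not yet recorded this block.
        for diff in self.shadows:
            diff.setdefault(index, self.blocks[index])
        self.blocks[index] = value

    def read_shadow(self, shadow_id, index):
        # A shadow copy is the live volume overlaid with its saved old blocks.
        return self.shadows[shadow_id].get(index, self.blocks[index])

vol = Volume(["alpha", "beta", "gamma"])
snap = vol.create_shadow()
vol.write(1, "BETA-v2")
print(vol.read_shadow(snap, 1))   # -> beta  (the previous version)
print(vol.read_shadow(snap, 0))   # -> alpha (unchanged block, read from the live volume)
print(len(vol.shadows[snap]))     # -> 1     (only the changed block was stored)
```

Note that only one block is stored for the snapshot even though the volume has three, which is why previous versions usually consume far less space than the live files, except when an application rewrites the entire file on every save.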

Previous Versions Clients


There are two clients for Shadow Copies for Shared Folders:
Previous Versions Client for Windows XP and Windows Server 2003. This client can
be installed on Windows XP from the Windows Server 2003 installation media or it can be
installed using the .msi package that is available at the Microsoft Download Center. Previous
Versions Client is installed by default on Windows Server 2003.
Previous Versions Client for Windows 2000. This client is only available from the .msi
package available at the Microsoft Download Center. In order for Windows 2000 clients to
view shadow copies, a registry key must be set on the computer running Windows
Server 2003 that enables access by Windows 2000 clients. The .msi package that contains
the Windows 2000 client also contains an installation package for the server that will set this
registry key.
You can access the client view of shadow copies through the Previous Versions tab of the
Properties dialog box of the shared file or folder.
You cannot view previous versions of files on a local shared resource. You must have a working
network connection and be viewing the volume as a shared network resource. On a computer
running Windows Server 2003, in order to view shadow copies of a local shared folder, you must
create a connection to the shared resource on your local computer as if it were in another location
on the network. For example: To view previous versions of files in the shared folder \Documents
on the local computer \\Server1, you need to create a network connection to
\\Server1\Documents and access the previous versions as if they were on a remote computer.

File Permissions
When you restore a file to a previous version, the file permissions will not be changed. File
permissions remain the same as they were before the file was restored. When you recover a file
that was accidentally deleted, the file permissions are set to the default permissions for the
directory the file is in. This directory might have different permissions than the file.
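These two recovery cases can be summarized in a toy model. Python is used for illustration only; the function names and the dictionary-based ACL representation are invented, not a Windows API.

```python
# Toy model of the permission rules above; names and the dict-based ACL
# representation are invented for illustration, not a Windows API.

def restore_previous_version(current_acl, shadow_acl):
    # Restoring over an existing file keeps the file's current permissions;
    # the permissions recorded with the shadow copy are not applied.
    return current_acl

def recover_deleted_file(directory_default_acl):
    # A recovered (previously deleted) file is created anew, so it picks up
    # the default permissions of the directory it is restored into.
    return dict(directory_default_acl)

file_acl = {"Payroll": "Full Control"}
dir_acl = {"Everyone": "Read"}

print(restore_previous_version(file_acl, {"Everyone": "Full Control"}))
# -> {'Payroll': 'Full Control'}  (unchanged)
print(recover_deleted_file(dir_acl))
# -> {'Everyone': 'Read'}  (may be broader or narrower than the file's old ACL)
```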


Section 4

4. Planning and Maintaining Network Security


4.1 What Is IPSec
4.1.1 IPSec Scenarios
4.1.2 Recommended Scenarios for IPSec
4.1.3 Securing Communication Between Domain Members and their
Domain Controllers
4.1.4 Securing All Traffic in a Network
4.1.5 Securing Traffic for Remote Access VPN Connections by
Using IPSec Tunnel Mode
4.1.6 Securing Traffic Sent over 802.11 Networks
4.1.7 Securing Traffic in Home Networking Scenarios
4.1.8 Securing Traffic in Environments That Use Dynamic IP Addresses
4.2 IPSec Dependencies
4.2.1 Active Directory
4.2.2 Successful Mutual Authentication
4.2.3 IPSec and ICF
4.3 How IPSec Works
4.4 IPSec Architecture
4.4.1 Logical Architecture
4.4.2 IPSec Protocols and Algorithms for Authentication and Encryption
4.5 Windows Server 2003 IPSec Architecture
4.5.1 IPSec Components
4.5.2 Policy Agent Architecture
4.5.3 Policy Agent Architecture
4.5.4 Policy Agent Components
4.5.5 Policy store
4.5.6 Policy Agent
4.5.7 Policy Agent Service Retrieving and Delivering IPSec Policy
Information
4.5.8 Local registry
4.5.9 Local cache
4.5.10 Interface Manager
4.6 IKE Module Architecture
4.6.1 IKE Module Architecture
4.6.2 IKE Module Components
4.6.3 IPSec Driver Architecture
4.6.4 IPSec Driver Architecture
4.6.5 IPSec Driver Components
4.6.6 Policy Data Structure
4.6.7 IPSec Policy Structure
4.6.8 IPSec Policy Components
4.6.9 IPSec Rule Components
4.6.10 Default response rule
4.6.11 Default Security Methods for the Default Response Rule
4.7 IPSec Protocols
4.7.1 IPSec Protocol Architecture
4.7.2 IPSec AH and ESP Protocols
4.7.3 IPSec AH and ESP Protocols in IPSec Transport Mode
4.7.4 AH transport mode
4.7.5 AH Transport Mode Packet Structure
4.7.6 ESP transport mode
4.7.7 ESP Transport Mode Packet Structure

4.7.8 IPSec AH and ESP Protocols in IPSec Tunnel Mode


4.7.9 AH tunnel mode
4.8 IPSec Processes and Interactions
4.8.1 Policy Agent Initialization
4.8.2 Policy Data Retrieval and Distribution
4.8.3 IKE Main Mode and Quick Mode Negotiation
4.8.4 Main Mode Negotiation
4.8.5 IKE certificate acceptance process
4.8.6 IPSec CRL checking
4.8.7 Certificate-to-account mapping
4.8.8 Quick Mode Negotiation
4.8.9 Quick Mode SA negotiation
4.8.10 Generating and regenerating session key material
4.9 IPSec Driver Processes
4.9.1 IPSec Driver Responsibilities
4.9.2 IPSec Driver Communication
4.9.3 Default exemptions to IPSec filtering
4.9.4 Hardware acceleration (offloading)
4.10 Network Ports and Protocols Used by IPSec
4.10.1 IPSec Port and Protocol Assignments
4.10.2 Firewall Filters
4.10.3 IPSec NAT-T
4.10.4 Configuring Wireless Network Policies
4.10.5 Network authentication services
4.10.6 Defining Wireless Configuration Options for Preferred Networks
4.10.7 Securing Network Traffic
4.10.8 Securing Servers


4.1 What Is IPSec


Internet Protocol security (IPSec) is a framework of open standards for helping to ensure private,
secure communications over Internet Protocol (IP) networks through the use of cryptographic
security services. IPSec supports network-level data integrity, data confidentiality, data origin
authentication, and replay protection. Because IPSec is integrated at the Internet layer (layer 3), it
provides security for almost all protocols in the TCP/IP suite, and because IPSec is applied
transparently to applications, there is no need to configure separate security for each application
that uses TCP/IP.
IPSec helps provide defense-in-depth against:
Network-based attacks from untrusted computers, which can result in denial of service
for applications, services, or the network
Data corruption
Data theft
User-credential theft
Administrative control of servers, other computers, and the network.
You can use IPSec to defend against network-based attacks through a combination of host-
based IPSec packet filtering and the enforcement of trusted communications.
IPSec is integrated with the Windows Server 2003 operating system and it can use the Active
Directory directory service as a trust model. You can use Group Policy to configure Active
Directory domains, sites, and organizational units (OUs), and then assign IPSec policies as
required to Group Policy objects (GPOs). In this way, IPSec policies can be implemented to meet
the security requirements of many different types of organizations.
This section describes the solution that IPSec is intended to provide by providing information
about core IPSec scenarios, IPSec dependencies, and related technologies.
The following figure shows an Active Directory-based IPSec policy being distributed to two IPSec
peers and IPSec-protected communications being established between those two peers.
Two IPSec Peers Using Active Directory-based IPSec Policy

The Microsoft Windows implementation of IPSec is based on standards developed by the Internet
Engineering Task Force (IETF) IPSec working group. For a list of relevant IPSec RFCs, see the
Related Information section later in this guide.

4.1.1 IPSec Scenarios


IPSec is a general-purpose security technology that can be used to help secure network traffic in
many scenarios. However, you must balance the need for security with the complexity of
configuring IPSec policies. Additionally, due to a lack of suitable standards, IPSec is not
appropriate for some types of connectivity. This section describes IPSec scenarios that are
recommended, IPSec scenarios that are not recommended, and IPSec scenarios that require
special consideration.

4.1.2 Recommended Scenarios for IPSec


IPSec is recommended for the following scenarios:
Packet filtering
End-to-end security between specific hosts
End-to-end traffic through a Microsoft Internet Security and Acceleration (ISA) Server-
secured network address translator
Secure server
Layer Two Tunneling Protocol (L2TP) over IPSec (L2TP/IPSec) for remote access and
site-to-site virtual private network (VPN) connections
Site-to-site IPSec tunneling with non-Microsoft IPSec gateways
Packet Filtering
IPSec can perform host-based packet filtering to provide limited firewall capabilities for end
systems. You can configure IPSec to permit or block specific types of unicast IP traffic based on
source and destination address combinations and specific protocols and specific ports. For
example, nearly all the systems illustrated in the following figure can benefit from packet filtering
to restrict communication to only specific addresses and ports. You can strengthen security by
using IPSec packet filtering to control exactly the type of communication that is allowed between
systems.
Filtering Packets by Using IPSec

As illustrated in this figure:


The internal network domain administrator can assign an Active Directory-based IPSec
policy (a collection of security settings that determines IPSec behavior) to block all traffic from
the perimeter network (also known as a demilitarized zone [DMZ] or screened subnet).
The perimeter network domain administrator can assign an Active Directory-based IPSec
policy to block all traffic to the internal network.
The administrator of the computer running Microsoft SQL Server on the internal network
can create an exception in the Active Directory-based IPSec policy to permit structured query
language (SQL) protocol traffic to the Web application server on the perimeter network.
The administrator of the Web application server on the perimeter network can create an
exception in the Active Directory-based policy to permit SQL traffic to the computer running
SQL Server on the internal network.
The administrator of the Web application server on the perimeter network can also block
all traffic from the Internet, except requests to TCP port 80 for the HyperText Transfer
Protocol (HTTP) and TCP port 443 for HTTPS (HTTP over Secure Sockets Layer/Transport
Layer Protocol [SSL/TLS]), which are used by Web services. This provides additional security
for traffic allowed from the Internet in case the firewall was misconfigured or compromised by
an attacker.
The domain administrator can block all traffic to the management computer, but allow
traffic to the perimeter network.
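The permit/block decisions above amount to matching each packet's source, destination, protocol, and port against a set of filters, with the most specific matching filter winning. The following is a simplified sketch of that decision, assuming a plain dictionary representation of filters; real IPSec policy is configured through IP Security Policy Management or Group Policy, not code like this.

```python
# Simplified model of IPSec packet filtering (illustrative only; real IPSec
# policy is configured with IP Security Policy Management, not in code).

ANY = None  # wildcard field

def specificity(f):
    # More concrete fields -> more specific filter. Windows IPSec applies the
    # most specific matching filter, approximated here by sorting.
    return sum(1 for v in f["match"].values() if v is not ANY)

def decide(packet, filters, default="permit"):
    for f in sorted(filters, key=specificity, reverse=True):
        if all(v is ANY or packet[k] == v for k, v in f["match"].items()):
            return f["action"]
    return default

filters = [
    # Block everything sent to the Web application server...
    {"match": {"src": ANY, "dst": "10.0.1.5", "proto": ANY, "dport": ANY},
     "action": "block"},
    # ...except the Web traffic the server is meant to receive.
    {"match": {"src": ANY, "dst": "10.0.1.5", "proto": "tcp", "dport": 80},
     "action": "permit"},
    {"match": {"src": ANY, "dst": "10.0.1.5", "proto": "tcp", "dport": 443},
     "action": "permit"},
]

print(decide({"src": "1.2.3.4", "dst": "10.0.1.5", "proto": "tcp", "dport": 80}, filters))
# -> permit
print(decide({"src": "1.2.3.4", "dst": "10.0.1.5", "proto": "tcp", "dport": 25}, filters))
# -> block
```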


You can also use IPSec with the IP packet-filtering capability or NAT/Basic Firewall component of
the Routing and Remote Access service to permit or block inbound or outbound traffic, or you can
use IPSec with the Internet Connection Firewall (ICF) component of Network Connections, which
provides stateful packet filtering. However, to ensure proper Internet Key Exchange (IKE)
management of IPSec security associations (SAs), you must configure ICF to permit UDP port
500 and port 4500 traffic needed for IKE messages.
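The required exemptions reduce to two UDP ports. A minimal sketch of the resulting filtering decision follows (illustrative only; ICF itself is configured through the Network Connections user interface, not through an API like this).

```python
# Illustration of the firewall exemptions IKE needs: UDP 500 for IKE and
# UDP 4500 for IKE/IPSec NAT-T. Purely a sketch, not an ICF API.

IKE_EXEMPTIONS = {("udp", 500), ("udp", 4500)}

def firewall_allows(proto, port, stateful_ok=False):
    # Inbound traffic passes if it matches an existing outbound flow
    # (stateful filtering) or one of the explicit IKE exemptions.
    return stateful_ok or (proto, port) in IKE_EXEMPTIONS

print(firewall_allows("udp", 500))    # -> True  (IKE negotiation can start)
print(firewall_allows("udp", 4500))   # -> True  (NAT-T)
print(firewall_allows("tcp", 445))    # -> False (blocked unless a stateful match exists)
```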
End-to-End Security Between Specific Hosts
IPSec establishes trust and security from a unicast source IP address to a unicast destination IP
address (end-to-end). For example, IPSec can help secure traffic between Web servers and
database servers or domain controllers in different sites. As shown in the following figure, only the
sending and receiving computers need to be aware of IPSec. Each computer handles security at
its respective end and assumes that the medium over which the communication takes place is not
secure. The two computers can be located near each other, as on a single network segment, or
across the Internet. Computers or network elements that route data from source to destination
are not required to support IPSec.
Securing Communications Between a Client and a Server by Using IPSec

The following figure shows domain controllers in two forests that are deployed on opposite sides
of a firewall. In addition to using IPSec to help secure all traffic between domain controllers in
separate forests, as shown in the figure, you can use IPSec to help secure all traffic between two
domain controllers in the same domain and between domain controllers in parent and child
domains.
Securing Communications Between Two Domain Controllers in Different Forests by Using
IPSec


End-to-End Traffic Through an ISA-Secured Network Address Translator


Windows Server 2003 supports IPSec NAT Traversal (NAT-T). IPSec NAT-T allows traffic to be
secured by IPSec and also to be translated by a network address translator. For example, you
can use IPSec transport mode to help secure host-to-host traffic through a computer that is
running ISA Server and that is functioning as a network address translator if ISA (or any other
NAT device) does not need to inspect the traffic between the two hosts. IPSec transport mode is
used to protect traffic between hosts and it can provide security between computers that are on
the same local area network (LAN) or connected by private wide area network (WAN) links. In the
following figure, a computer running Windows Server 2003 and Microsoft Internet Security and
Acceleration (ISA) Server is functioning as a network address translator. The IPSec policy on
Server A is configured to secure traffic to the IP address of Server B, while the IPSec policy on
Server B is configured to secure traffic to the external IP address of the computer running ISA
Server.
Securing Communications Through an ISA-Secured NAT by Using IPSec NAT-T

Secure Server
You can require IPSec protection for all client computers that access a server. In addition, you
can set restrictions on which computers are allowed to connect to a server running Windows
Server 2003. The following figure shows IPSec in transport mode securing a line of business
(LOB) application server.
Securing an Application Server by Using IPSec


In this scenario, an application server in an internal corporate network must communicate with
clients running Windows 2000 or Windows XP Professional; a Windows Internet Name Service
(WINS) server, Domain Name System (DNS) server, and Dynamic Host Configuration Protocol
(DHCP) server; Active Directory domain controllers; and a non-Microsoft data backup server. The
users on the client computers are company employees who access the application server to view
their personal payroll information and performance review scores. Because the traffic between
the clients and the application server involves highly sensitive data, and because the server
should only communicate with other domain members, the network administrator uses an IPSec
policy that requires ESP encryption and communication only with trusted computers in the Active
Directory domain.
Other traffic is permitted as follows:
Traffic between the WINS server, DNS server, DHCP server, and the application server
is permitted because WINS servers, DNS servers, and DHCP servers must typically
communicate with computers that run on a wide range of operating systems, some of which
might not support IPSec.
Traffic between Active Directory domain controllers and the application server is
permitted, because using IPSec to secure communication between domain members and
their domain controllers is not a recommended usage.
Traffic between the non-Microsoft data backup server and the application server is
permitted because the non-Microsoft backup server does not support IPSec.
L2TP/IPSec for Remote Access and Site-to-Site VPN Connections
You can use L2TP/IPSec for all VPN scenarios. This does not require the configuration and
deployment of IPSec policies. Two common scenarios for L2TP/IPSec are securing
communications between remote access clients and the corporate network across the Internet
and securing communications between branch offices.
Note
Windows IPSec supports both IPSec transport mode and tunnel mode. Although VPN
connections are commonly referred to as tunnels, IPSec transport mode is used for
L2TP/IPSec VPN connections. IPSec tunnel mode is most commonly used to help protect
site-to-site traffic between networks, such as site-to-site networking through the Internet.
L2TP/IPSec for remote access connections
A common requirement for organizations is to secure communications between remote access
clients and the corporate network across the Internet. Such a client might be a sales consultant
who spends most of the time traveling, or an employee working from a home office. In the
following figure, the remote gateway is a server that provides edge security for the corporate
intranet. The remote client represents a roaming user who requires regular access to network
resources and information. An ISP is used as an example to demonstrate the path of
communication when the client uses an ISP to access the Internet. L2TP/IPSec provides a
simple, efficient way to build a VPN tunnel and help protect the data across the Internet.
Securing Remote Access Clients by Using L2TP/IPSec

L2TP/IPSec for site-to-site VPN connections


A large corporation often has multiple sites that require communication with each other; for example, a
corporate office in New York and a sales office in Washington. In this case, L2TP/IPSec provides
the VPN connection and helps protect the data between the sites. In the following figure, the
router running Windows Server 2003 provides edge security. The routers might have a leased
line, dial-up, or other type of Internet connection. The L2TP/IPSec VPN tunnel runs between the
routers only and provides protected communication across the Internet.
Establishing an L2TP/IPSec VPN Tunnel Between Sites

Site-to-Site IPSec Tunneling with Non-Microsoft Gateways


For interoperability with gateways or end systems that do not support L2TP/IPSec or Point-to-
Point Tunneling Protocol (PPTP) VPN site-to-site connections, you can use IPSec in tunnel
mode. When IPSec tunnel mode is used, the sending gateway encapsulates the entire IP
datagram by creating a new IP packet that is then protected by one of the IPSec protocols. The
following figure illustrates site-to-site IPSec tunneling.
Establishing an IPSec Gateway-to-Gateway Tunnel Between Sites


In this figure, traffic is being sent between a client computer in a vendor site (Site A) and a File
Transfer Protocol (FTP) server at the corporate headquarters site (Site B). Although an FTP
server is used for this scenario, the traffic can be any unicast IP traffic. The vendor uses a non-
Microsoft IPSec-enabled gateway, while corporate headquarters uses a gateway running
Windows Server 2003. An IPSec tunnel is used to secure traffic between the non-Microsoft
gateway and the gateway running Windows Server 2003.
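The encapsulation step can be sketched structurally as follows. This is a deliberately simplified model (real ESP tunnel-mode packets, per RFC 2401 and RFC 2406, carry SPIs, sequence numbers, padding, and integrity check values that are omitted here), and the addresses are invented examples.

```python
# Structural sketch of IPSec tunnel-mode encapsulation (greatly simplified;
# real packets follow RFC 2401/2406 with ESP trailers, SPIs, ICVs, etc.).

def esp_tunnel_encapsulate(inner_packet, gateway_src, gateway_dst, encrypt):
    # The entire original IP datagram (header + payload) becomes the
    # protected payload of a brand-new outer IP packet between gateways.
    return {
        "src": gateway_src,            # outer header names the tunnel endpoints,
        "dst": gateway_dst,            # not the original client and server
        "proto": "ESP",
        "payload": encrypt(inner_packet),
    }

inner = {"src": "192.168.1.10",        # client at Site A
         "dst": "172.16.0.20",         # FTP server at Site B
         "proto": "tcp", "payload": b"RETR report.txt"}

outer = esp_tunnel_encapsulate(inner, "131.107.0.1", "131.107.0.2",
                               encrypt=lambda p: ("ciphertext", p))

print(outer["src"], "->", outer["dst"])  # only the gateway addresses are visible
print(outer["payload"][0])               # -> ciphertext
```

Because the whole inner datagram is hidden inside the outer packet, intermediate routers on the Internet see only traffic between the two gateways.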
Scenarios for Which IPSec Is Not Recommended
IPSec policies can be quite complex to configure and manage. Additionally, IPSec can incur
performance overhead to establish and maintain secure connections, and it can incur network
latency. In some deployment scenarios, the lack of standard methods for user authentication and
address assignment make IPSec an unsuitable choice. Because IPSec depends on IP addresses
for establishing secure connections, you cannot specify dynamic IP addresses. It is often
necessary for a server to have a static IP address in IPSec policy filters. In large network
deployments and in some mobile user cases, using dynamic IP addresses at both ends of the
connection can increase the complexity of IPSec policy design. For these reasons, IPSec is not
recommended for the following scenarios:
Securing communication between domain members and their domain controllers
Securing all traffic in a network
Securing traffic for remote access VPN connections using IPSec tunnel mode

4.1.3 Securing Communication Between Domain Members and their Domain Controllers
Using IPSec to help secure traffic between domain members (either clients or servers) and their
domain controllers is not recommended because:
If domain members were to use IPSec-secured communication with domain controllers,
increased latency might occur, causing authentication and the process of locating a domain
controller to fail.
Complex IPSec policy configuration and management is required.
Increased load is placed on the domain controller CPU to maintain SAs with all domain
members. Depending on the number of domain members in the domain controller's domain,
such a load might overburden the domain controller.

4.1.4 Securing All Traffic in a Network


In addition to reduced network performance, using IPSec to help secure all traffic in a network is
not recommended because:
IPSec cannot secure multicast and broadcast traffic.
Traffic from real-time communications, applications that require Internet Control Message
Protocol (ICMP), and peer-to-peer applications might be incompatible with IPSec.
Network management functions that must inspect the TCP, UDP, and other protocol
headers are less effective, or cannot function at all, due to IPSec encapsulation or encryption
of IP payloads.

4.1.5 Securing Traffic for Remote Access VPN Connections by Using IPSec Tunnel Mode
IPSec tunnel mode is not a recommended technology for remote access VPN connections,
because there are no standard methods for user authentication, IP address assignment, and
name server address assignment. Using IPSec tunnel mode for gateway-to-gateway VPN
connections is possible using computers running Windows Server 2003. But because the IPSec
tunnel is not represented as a logical interface over which packets can be forwarded and
received, routes cannot be assigned to use the IPSec tunnel and routing protocols do not operate
over IPSec tunnels. Therefore, the use of IPSec tunnel mode is only recommended as a VPN
solution for site-to-site VPN connections in which one end of the tunnel is a non-Microsoft VPN
server or security gateway that does not support L2TP/IPSec. Instead, use L2TP/IPSec or PPTP
for remote access VPN connections.
IPSec Uses That Require Special Considerations
The following scenarios merit special consideration, because they introduce an additional level of
complexity for IPSec policy configuration and management:
Securing traffic over IEEE 802.11 wireless networks
Securing traffic in home networking scenarios
Securing traffic in environments that use dynamic IP addresses

4.1.6 Securing Traffic Sent over 802.11 Networks


You can use IPSec transport mode to protect traffic sent over 802.11 wireless networks.
However, IPSec is not the recommended solution for providing security for corporate 802.11
wireless LAN networks. Instead, it is recommended that you use either 802.11 Wired Equivalent
Privacy (WEP) encryption or Wi-Fi Protected Access (WPA) and IEEE 802.1X authentication.
To use IPSec to help secure traffic sent over 802.11 networks, you must ensure that client
computers and servers support IPSec. Configuration management and trust are also required on
client computers and servers when IPSec is used. Because many computers on a network do not
support IPSec or are not managed, it is not appropriate to use IPSec alone to protect all 802.11
corporate wireless LAN traffic.

4.1.7 Securing Traffic in Home Networking Scenarios


Although IPSec is not optimized for use in general home networking scenarios, when network
security administrators deploy IPSec with appropriate scripts and support tools, it can be used
effectively on home computers for specific scenarios.
IPSec can be used to connect home computers to a corporate intranet for remote access.
Network security administrators can use scripts and support tools to deploy IPSec on the home
computers of employees who require secure connectivity to the corporate network. For example,
an administrator can use a Connection Manager profile to deploy an L2TP/IPSec-based VPN
connection on home computers. Employees can then establish IPSec-secured connections
across the Internet to the corporate network by using the VPN client built in to Network
Connections.
Note
In some cases, non-Microsoft VPN or firewall clients might disable the IPSec service,
which is required for IPSec to function. If you encounter this problem, it is recommended that
you contact the VPN or firewall vendor.


IPSec is not recommended for end users in general home networking scenarios for the following
reasons:
The IPSec policy configuration user interface (IP Security Policy Management) is
intended for professional network security administrators, rather than for end users. Improper
policy configuration can result in blocked communications, and if problems occur, built-in
support tools are not yet available to aid end users in troubleshooting.
Some home networking applications use broadcast and multicast traffic, for which IPSec
cannot negotiate security.
Many home networking scenarios use a wide range of dynamic IP addresses.
Many home networking scenarios involve the use of a network address translator. To use
IPSec across a NAT, both IPSec peers must support IPSec NAT-T.

4.1.8 Securing Traffic in Environments That Use Dynamic IP Addresses


IPSec depends on IP addresses for establishing secure connections, and it is often necessary for
a server to have a static IP address in IPSec policy filters. In large network deployments and in
some mobile user cases, using dynamic IP addresses at both ends of the connection can
increase the complexity of IPSec policy design.

4.2 IPSec Dependencies


There is no single optimal environment for IPSec. However, there are dependencies that are
critical to the successful deployment of IPSec. This section describes how the following two
IPSec dependencies affect the deployment of IPSec:
- Active Directory (if your deployment requires the use of Active Directory-based IPSec policies, rather than local IPSec policies)
- Successful mutual authentication

4.2.1 Active Directory


For organizations with large numbers of computers that must be managed in a consistent way, it
is best to distribute IPSec policies by using Group Policy to configure Active Directory domains,
sites, and organizational units (OUs), and then assigning IPSec policies as required to Group
Policy objects (GPOs). Although you can assign local IPSec policies to computers that are not
members of a trusted domain, distributing IPSec policies and managing IPSec policy
configuration and trust relationships is much more time-consuming for computers that are not
members of a trusted domain.
If you do use Active Directory-based IPSec policies, IPSec policy design and management must
take into account the delays that result from the replication of Group Policy data from domain
controllers to domain members. Often, the first step in troubleshooting a problem with IPSec
connectivity is to determine whether the computer in question has the most current Group Policy
assignment. To do this, you must be a member of the local Administrators group on the computer
for which troubleshooting is being performed.

4.2.2 Successful Mutual Authentication


For IPSec-secured communications to be established, there must be successful mutual
authentication between IPSec peers. IPSec requires the use of one of the following authentication
methods: Kerberos version 5, an X.509 version 3 computer certificate issued by a public key
infrastructure (PKI), or a preshared key. The two IPSec peers must use at least one common
authentication method or communication will fail. Make sure that you choose an authentication
method that is appropriate for your environment.
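To make the shared-method requirement concrete, here is a minimal sketch (plain Python with invented method labels, not the Windows IKE implementation) of a peer choosing the first mutually supported authentication method:

```python
# Illustrative sketch: an IPSec negotiation succeeds only if the peers share
# at least one authentication method. Method names are informal labels.

def select_auth_method(initiator_prefs, responder_methods):
    """Return the initiator's most-preferred method that the responder
    also supports, or None if the peers have no method in common."""
    for method in initiator_prefs:          # ordered by initiator preference
        if method in responder_methods:
            return method
    return None                             # no common method: negotiation fails

peer_a = ["kerberos-v5", "x509-certificate", "preshared-key"]
peer_b = {"x509-certificate", "preshared-key"}

print(select_auth_method(peer_a, peer_b))           # x509-certificate
print(select_auth_method(["kerberos-v5"], peer_b))  # None -> communication fails
```

If the call returns None, the two peers cannot establish IPSec-secured communication, which is why the policy must specify at least one method common to all intended peers.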
When you deploy IPSec to negotiate security for upper-layer protocols such as TCP connections,
failures to communicate are often caused by the failure of IPSec to mutually authenticate the two
communication endpoints. Authentication might succeed for some computers and fail for others
due to issues within the authentication system itself (typically, Kerberos or public key certificates,
rather than preshared keys, because preshared keys are not recommended). For these reasons,
it is important to evaluate how the dependency of IPSec connectivity on authentication affects
your environment and support practices. Additional training is recommended so that
administrators can quickly determine whether a connectivity problem is caused by an IPSec
authentication failure and understand how to investigate the authentication system.

4.2.3 IPSec and ICF


IPSec is similar to the ICF feature of Network Connections. However, there are important
differences between these two technologies as well. It is important to understand the similarities
and differences, so that you can deploy IPSec where it is truly needed and obtain the maximum
benefits from the security that IPSec provides. This section describes similarities and differences
between IPSec and ICF.
Microsoft ICF is a locally managed, stateful host firewall that, by default, discards all incoming
packets except those sent in response to packets sent by the host. One primary difference
between IPSec and ICF is that IPSec provides complex static filtering based on IP addresses,
while ICF provides stateful filtering for all addresses on a network interface. For example, when
you use IPSec, you can configure a filter to block all inbound traffic that is sent to a specific IP
address over a specific protocol and port (for example, Block all inbound traffic from the Internet
to TCP port 135). You can also configure exemptions to permit specific types of traffic from
specific source IP addresses. When you use ICF, you can configure exemptions to permit traffic
based solely on the port, regardless of the source IP address.
It is recommended that you use ICF when you want a firewall for a network interface that can be
accessed through the Internet. It is recommended that you use IPSec when you want to secure
traffic over upper-layer protocols or when you need to allow access only to a group of trusted
computers. Note that it is easier to configure ICF to permit traffic over a certain port than it is to
configure an IPSec policy.
IPSec is not a full-featured host firewall. However, it does provide the ability to centrally manage
policies that can permit, block, or secure unicast IP traffic based on specific addresses, protocols,
and ports. Some of the functions found in standard firewalls that IPSec does not provide include
stateful inspection, application protocol awareness, intrusion detection, and packet logging.
Although IPSec lacks some features of firewalls, the packet blocking and filtering it provides can
be effective in helping to limit the spread of viruses and thwart specific attacks known to use
specific ports. You can also use IPSec to prevent specific applications and services from being
used on the network.
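The difference in filtering models can be sketched as follows. This is a hypothetical rule format in Python, not the actual IPSec policy schema; it only illustrates static filters that combine source address, protocol, and port, with a more specific exemption overriding a general block:

```python
# Sketch of IPSec-style static filtering (hypothetical rule format).
# Rules are evaluated in order, most specific first: an exemption that
# names a source address overrides the general block that follows it.

def filter_packet(rules, src_ip, protocol, dst_port):
    """Return the action ('permit' or 'block') of the first matching rule."""
    for rule in rules:
        if ((rule.get("src_ip") in (None, src_ip)) and
                rule["protocol"] == protocol and
                rule["dst_port"] == dst_port):
            return rule["action"]
    return "permit"   # default for traffic no rule describes

rules = [
    # Exemption: permit TCP 135 from one trusted management host...
    {"src_ip": "192.168.1.10", "protocol": "tcp", "dst_port": 135, "action": "permit"},
    # ...then block TCP 135 from any other source (src_ip None = any).
    {"src_ip": None, "protocol": "tcp", "dst_port": 135, "action": "block"},
]

print(filter_packet(rules, "192.168.1.10", "tcp", 135))  # permit
print(filter_packet(rules, "10.0.0.99", "tcp", 135))     # block
```

An ICF-style exemption, by contrast, would name only the port, so it could not distinguish the trusted management host from any other source address.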

4.3 How IPSec Works


In the Microsoft Windows Server 2003 operating system, Internet Protocol security (IPSec)
helps provide defense-in-depth against network-based attacks from untrusted computers. IPSec
provides protection from attack in host-to-host, virtual private network (VPN), site-to-site (also
known as gateway-to-gateway or router-to-router), and secure server environments. You can
configure IPSec policies to meet the security requirements of a computer, an organizational unit,
a domain, a site, or a global organization.
IPSec uses packet filtering and cryptography. Cryptography provides user authentication,
ensures data confidentiality and integrity, and enforces trusted communication. The strong
cryptographic-based authentication and encryption support that IPSec provides is especially
effective for securing traffic that must traverse untrusted network paths, such as those on a large
corporate intranet or the Internet. IPSec also is especially effective for securing traffic that uses
protocols and applications that do not provide sufficient security for communications.
To successfully deploy IPSec for Windows Server 2003, you must ensure the following:
- If your scenario requires Active Directory-based IPSec policy (a collection of IPSec rules that determine IPSec behavior), the Active Directory directory service and Group Policy must be configured correctly on the corporate network, appropriate trusts must be defined, and appropriate permissions must be applied. Although Group Policy applies to both users and computers, IPSec policy is a computer configuration Group Policy setting that applies only to computers.
- Each computer that will establish IPSec-secured communications must have an IPSec policy assigned. This policy must be compatible with the IPSec policy that is assigned to other computers with which that computer must communicate.
- Authentication must be configured correctly and an appropriate authentication method must be specified in the IPSec policy so that mutual authentication can occur between IPSec peers.
- Routers, firewalls, or other filtering devices must be configured correctly to permit IPSec protocol traffic on all parts of the corporate network, if IPSec negotiation messages and IPSec-secured traffic must pass through these devices.
- Computers must run operating systems that automatically support IPSec or must have appropriate client software installed.
- If computers are running different versions of the Microsoft Windows operating system (for example, Windows Server 2003, the Microsoft Windows XP operating system, and the Microsoft Windows 2000 operating system), you must address the compatibility of the IPSec policies.
- If clients must establish IPSec-secured connections with servers, those servers must be adequately sized to support those connections. If necessary, you can use IPSec hardware offload network adapters.
- The number of IPSec policies is kept to a minimum, and the IPSec policies are made as simple as possible.
- Systems administrators who will configure and support IPSec must be properly trained and must be members of the appropriate administrative groups.

4.4 IPSec Architecture


Several Requests for Comments (RFCs) define the architecture and components of IPSec. These
components and their interrelationship comprise the logical architecture of IPSec. This section
briefly describes the fundamental components of the IPSec logical architecture and then explains
how these components are implemented in Windows Server 2003.
For comprehensive descriptions of IPSec architecture and components, see the RFCs that are
listed in Related Information later in this section.

4.4.1 Logical Architecture


The IPSec architecture can be categorized into four main areas:
- Security Associations
- SA and key management support
- IPSec protocols
- Algorithms and methods
Security Associations
Security Associations (SAs) are a combination of a mutually agreeable policy and keys that
defines the security services, mechanisms, and keys used to protect communications between
IPSec peers. Each SA is a one-way or simplex connection that provides security services to the
traffic that it carries.
Because SAs are defined only for one-way communication, each IPSec session requires two
SAs. For example, if both IPSec protocols, Authentication Header (AH) and Encapsulating
Security Payload (ESP), are used for an IPSec session between two peers, then four SAs would
be required.
SAs for IPSec-secured communications require two databases: a security policy database (SPD)
and security association database (SAD). The SPD stores the security requirements or policy
requisites for an SA to be established. It is used during both inbound and outbound packet
processing. IPSec checks inbound packets to ensure that they have been secured according to
policy. Outbound packets are secured according to policy.


The SAD contains the parameters of each active SA. The Internet Key Exchange (IKE) protocol
automatically populates the SAD. After an SA is established, the information for each SA is stored
in the SAD. The following figure shows the relationship between SAs, the SPD, and the SAD.
SA, SPD, and SAD Architecture
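The roles of the two databases can be modeled in a few lines. The data shapes below are assumptions for illustration (the real SPD and SAD are kernel structures, and negotiation is performed by IKE, which is faked here):

```python
# Toy model of the two databases: the SPD answers "how must this traffic
# be protected?" and the SAD holds the parameters of SAs that have
# already been negotiated. Data shapes are invented for illustration.

SPD = {
    # (destination prefix, protocol) -> required treatment
    ("10.1.0.", "tcp"): "negotiate-esp",
    ("10.2.0.", "tcp"): "block",
}

SAD = {}   # populated after a successful negotiation (by IKE in reality)

def outbound(dst_ip, protocol):
    """Process an outbound packet against the SPD."""
    for (prefix, proto), action in SPD.items():
        if dst_ip.startswith(prefix) and proto == protocol:
            if action == "negotiate-esp" and dst_ip not in SAD:
                # The real system would run an IKE negotiation here.
                SAD[dst_ip] = {"protocol": "ESP", "keys": "negotiated-keys"}
            return action
    return "permit"   # traffic not described by policy passes through

print(outbound("10.1.0.5", "tcp"))   # negotiate-esp (and an SA now exists)
print("10.1.0.5" in SAD)             # True
```

The inbound path uses the same SPD to verify that arriving packets were secured as the policy requires, rejecting traffic that should have been protected but was not.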

SA and Key Management


IPSec requires SA and key management support. The Internet Security Association and Key
Management Protocol (ISAKMP) defines the framework for authentication and key exchange by
providing procedures for negotiating, establishing, changing, and deleting SAs. It does not define
the actual key exchange: it merely provides the framework.
IPSec requires support for both manual and automatic management of SAs and keys. IKE is the
default automated key management protocol for IPSec. IKE is a hybrid protocol that incorporates
parts of the Oakley key exchange protocol and the SKEME keying techniques protocol. The
following figure shows the relationship between the ISAKMP, IKE, Oakley, and SKEME protocols.
ISAKMP, IKE, Oakley, and SKEME Protocol Architecture

The Oakley protocol uses the Diffie-Hellman key exchange or key agreement algorithm to create
a unique, shared, secret key, which is then used to generate keying material for authentication or
encryption. For example, such a shared secret key could be used by the DES encryption
algorithm for the required keying material. A Diffie-Hellman exchange can use one of a number of
groups that define the length of the base prime numbers (key size) which are created for use
during the key exchange process. The longer the number, the greater the key strength. Well-
known groups include Groups 1, 2, and 14.
The following figure shows the relationship between the Oakley protocol, the Diffie-Hellman
algorithm, and well-known Diffie-Hellman key exchange groups.
Oakley Protocol, Diffie-Hellman Key Exchange Algorithm, and Well-Known Diffie-Hellman
Groups Architecture
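The key-agreement idea behind a Diffie-Hellman group can be shown with a deliberately tiny prime; real groups (for example, Groups 1, 2, and 14) use far larger primes, which is where the key strength comes from:

```python
# Toy Diffie-Hellman exchange with a deliberately tiny prime so the numbers
# stay readable. This only illustrates the key-agreement idea; it is not
# cryptographically meaningful at this size.

p = 23   # public prime modulus (toy size)
g = 5    # public generator

a_private = 6                      # chosen secretly by peer A
b_private = 15                     # chosen secretly by peer B

a_public = pow(g, a_private, p)    # A sends this value in the clear
b_public = pow(g, b_private, p)    # B sends this value in the clear

# Each peer combines its own private value with the other's public value.
a_shared = pow(b_public, a_private, p)
b_shared = pow(a_public, b_private, p)

assert a_shared == b_shared        # both arrive at the same shared secret
print(a_shared)                    # 2 with these toy values
```

The shared secret is never transmitted; each peer derives it independently, and the derived value then seeds the keying material for authentication or encryption, as described above.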


The Oakley protocol defines several modes for the key exchange process. These modes
correspond to the two negotiation phases defined in the ISAKMP protocol. For phase 1, the
Oakley protocol defines two principal modes: main and aggressive. IPSec for Windows does not
implement aggressive mode. For phase 2, the Oakley protocol defines a single mode, quick
mode.
IPSec Protocols
To provide security for the IP layer, IPSec defines two protocols: Authentication Header (AH) and
Encapsulating Security Payload (ESP). These protocols provide security services for the SA.
Each SA is identified by the Security Parameters Index (SPI), IP destination address, and security
protocol (AH or ESP) identifier.
The SPI is a unique, identifying value in an SA that is used to distinguish among multiple SAs on
the receiving computer. For example, IPSec communication between two computers requires two
SAs on each computer. One SA services inbound traffic and the other services outbound traffic.
Because the addresses of the IPSec peers for the two SAs are the same, the SPI is used to
distinguish between the inbound and outbound SA. Because the encryption keys differ for each
SA, each SA must be uniquely identified.
The following figure shows the relationship between the SA, SPI, IP destination address, and
security protocol.
IPSec Protocols and SA Architecture
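A sketch of the identifying triple (field names are invented for illustration): because the SAD is keyed by SPI as well as destination address and protocol, the inbound and outbound SAs between the same pair of peers remain distinct even though the addresses match:

```python
# Illustration of SA identification: each SA is looked up by the triple
# (SPI, destination address, security protocol). Field names and key
# values are invented; this is not the kernel data structure.

SAD = {
    # (spi, dst_ip, protocol) -> per-SA parameters (placeholder values)
    (0x1001, "10.0.0.2", "ESP"): {"direction": "outbound", "key": "k-out"},
    (0x2002, "10.0.0.1", "ESP"): {"direction": "inbound",  "key": "k-in"},
}

def lookup_sa(spi, dst_ip, protocol):
    """Return the parameters of the SA identified by the triple."""
    return SAD[(spi, dst_ip, protocol)]

# The two SAs use different keys, so the SPI keeps them distinguishable.
print(lookup_sa(0x1001, "10.0.0.2", "ESP")["key"])  # k-out
print(lookup_sa(0x2002, "10.0.0.1", "ESP")["key"])  # k-in
```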

Algorithms and Methods


The IPSec protocols use authentication, encryption, and key exchange algorithms. Two
authentication or keyed hash algorithms, HMAC-MD5 (Hash Message Authentication Code -
MD5) and HMAC-SHA-1, are used with both the AH and ESP protocols. The DES and 3DES
encryption algorithms are used with ESP. The following figure shows the relationship between the
authentication and encryption algorithms and the AH and ESP protocols.
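The keyed-hash side of this can be demonstrated with Python's standard library, which implements the same HMAC construction (a general illustration of HMAC with SHA-1, not Windows IPSec code; DES/3DES encryption has no standard-library counterpart, so only the integrity check is shown):

```python
# General illustration of a keyed hash (HMAC-SHA-1): the sender computes an
# integrity check value over the packet, and the receiver recomputes it.
# A mismatch means the packet was altered or the keys differ.

import hashlib
import hmac

key = b"shared-secret-key"
packet = b"payload bytes to protect"

# Sender computes an integrity check value (ICV) over the packet...
icv = hmac.new(key, packet, hashlib.sha1).hexdigest()

# ...and the receiver verifies it with a constant-time comparison.
received_ok = hmac.compare_digest(
    icv, hmac.new(key, packet, hashlib.sha1).hexdigest())
tampered_ok = hmac.compare_digest(
    icv, hmac.new(key, packet + b"!", hashlib.sha1).hexdigest())

print(received_ok, tampered_ok)  # True False
```

In AH and ESP the ICV is carried in the protocol header rather than passed separately, but the underlying keyed-hash check is the same idea.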

4.4.2 IPSec Protocols and Algorithms for Authentication and Encryption


The authentication methods for IPSec, as defined by the IKE protocol, are grouped into three
categories: digital signature, public-key, and pre-shared key. The following figure shows the
relationship between the IKE protocol and the authentication methods.
IKE Protocol and Authentication Methods Architecture

Windows Server 2003 IPSec Architecture and Components


The components and architecture of Windows Server 2003 IPSec are based on the IPSec RFCs.
The basic IPSec architecture for Windows Server 2003 has the following components: Active
Directory, a Policy Agent, the IKE protocol, an IPSec driver, and a TCP/IP driver.
The following figure illustrates how these components interact.

4.5 Windows Server 2003 IPSec Architecture

The following table describes each of these components.

4.5.1 IPSec Components


Active Directory: Windows Server 2003 Active Directory stores domain-wide IPSec policies for computers that are members of the domain. Active Directory-based IPSec policies are polled and retrieved by the Policy Agent.

Policy Agent: The Policy Agent retrieves IPSec policy from an Active Directory domain, a configured set of local policies, or a local cache. The Policy Agent then distributes authentication and security settings to the IKE component and the IP filters to the IPSec driver.

IKE: IKE receives authentication and security settings from the Policy Agent and waits for requests to negotiate IPSec SAs. When requested by the IPSec driver, IKE negotiates both kinds of SAs (main mode and quick mode) with the appropriate endpoint, based on the policy settings obtained from the Policy Agent. After negotiating an IPSec SA, IKE sends the SA settings to the IPSec driver.

IPSec driver: The IPSec driver monitors and secures outbound unicast IP traffic and monitors, decrypts, and validates inbound unicast IP traffic. After the IPSec driver receives the filters from the Policy Agent, it determines which packets are permitted, which are blocked, and which are secured. For secure traffic, the IPSec driver either uses active SA settings to secure the traffic or requests that new SAs be created. The IPSec driver is bound to the TCP/IP driver to provide IPSec processing for IP packets that pass through the TCP/IP driver.

TCP/IP driver: The TCP/IP driver is the Windows Server 2003 implementation of the TCP/IP protocol. It is a kernel-mode component that is loaded from the tcpip.sys file during startup.

The architecture of the Policy Agent, IKE protocol, and IPSec driver are described in more detail
in the following sections.

4.5.2 Policy Agent Architecture


The Policy Agent retrieves IPSec policy information, handles the internal interpretation and
processing of the policy, and sends it to the other IPSec components that require the information
to perform security services. The Policy Agent has the following components: policy store, Policy
Agent service, local registry, local cache, and Interface Manager.
The following figure shows the architecture of the Policy Agent.

4.5.3 Policy Agent Architecture


The following table briefly describes the Policy Agent components.

4.5.4 Policy Agent Components


Policy store: The IPSec policy store maintains both IPSec policy descriptions and interfaces to applications and other tools that provide policy data management. The policy store accesses IPSec policy data that is stored in either the local registry or in Active Directory.

Policy Agent: The IPSec Policy Agent controls the retrieval and distribution of IPSec policy and maintains the data about the configured policy for the IPSec driver and IKE.

Local registry: The local registry stores the locally configured IPSec policies, the local cache, and other IPSec settings.

Local cache: The local cache stores IPSec policies after they are downloaded from an Active Directory domain controller by the Policy Agent.

Interface Manager: Interface Manager manages a list that contains items that correspond to each physical and logical network adapter on the system.

The following sections provide additional detail about each of these components.

4.5.5 Policy store


The policy store organizes IPSec policy data and stores it in a format that the Policy Agent can
use. In Windows Server 2003, policy data can be stored in the following:
- Active Directory
- Local and remote registry
- A file (for exporting and importing only)

In addition to providing an interface that UI services can use to store policy in each of these media, the policy store does the following:
- Provides policy data for default IPSec policies
- Checks policy information for consistency
- Retrieves policy version information


The policy store reads and writes policy information both to and from persistent storage and is
aware of shared policy-setting dependencies. This ensures that all policies using shared settings
are marked as changed when they are modified and that Windows Server 2003 IPSec
components download the modified policies.

4.5.6 Policy Agent


The Policy Agent retrieves IPSec policy information and delivers it to other IPSec components
that require this information to perform security services, as shown in the following illustration.

4.5.7 Policy Agent Service Retrieving and Delivering IPSec Policy Information

The Policy Agent performs the following tasks:
- Retrieves the appropriate IPSec policy (if one has been assigned) from Active Directory if the computer is a domain member or from the local registry if the computer is not a member of a domain
- Determines filter list order
- Delivers the assigned IPSec policy information (IP filters) to the IPSec driver
- Delivers both main mode and quick mode settings to IKE
- Polls for changes in policy configuration. If the computer is a member of a domain, policy retrieval occurs when the computer starts, at the interval specified in the IPSec policy, and at the default Winlogon polling interval. You can also manually poll Active Directory for policy by using the gpupdate /target:computer command.

If there are no IPSec policies in Active Directory or the registry, or if the IPSec Policy Agent cannot connect to Active Directory, the IPSec Policy Agent waits for policy to be assigned or activated.
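The retrieval order can be summarized schematically. This simplified Python sketch (an invented function, not an actual API) also includes the cached-copy fallback that applies when a domain member cannot reach Active Directory:

```python
# Schematic of the Policy Agent's retrieval order (simplified; real policy
# selection also involves precedence between GPOs and other details).

def resolve_policy_source(is_domain_member, ad_reachable, has_local_policy):
    """Return which store the Policy Agent would read policy from."""
    if is_domain_member:
        if ad_reachable:
            return "active-directory"
        return "local-cache"        # cached copy of the AD policy is applied
    if has_local_policy:
        return "local-registry"
    return "wait"                   # no policy assigned: the agent waits

print(resolve_policy_source(True, True, False))    # active-directory
print(resolve_policy_source(True, False, False))   # local-cache
print(resolve_policy_source(False, False, True))   # local-registry
print(resolve_policy_source(False, False, False))  # wait
```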
The Policy Agent appears in the list of computer services in the Services snap-in under the name
IPSEC Services and starts automatically as part of the initialization of the Local Security Authority
(LSA) service.

4.5.8 Local registry


Each computer running Windows XP or Windows Server 2003 has only one local Group Policy
object, often called the local computer policy. Using this local Group Policy object allows Group
Policy settings to be stored on individual computers regardless of whether they are members of
an Active Directory domain.


The local registry maintains the IPSec policy configuration in the following registry key and its
subkeys: HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\IPSec.
If you assign local IPSec policies and you do not assign Active Directory-based IPSec policies,
the local policies are stored in the following registry key:
HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\IPSec\Policy\Local.
If you assign Active Directory-based IPSec policies, the policies are read from Active Directory
and stored in the local cache.

4.5.9 Local cache


The local cache is part of the registry. If you assign an IPSec policy in Active Directory, the policy
is stored in and read from Active Directory. A copy of the current policy in Active Directory is
maintained in a cache in the local registry at: HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\IPSec\Policy\Cache.
If a computer to which an IPSec policy in Active Directory is assigned cannot connect to the
domain, the cached copy of the policy in Active Directory is applied. When the computer
reconnects to the domain, new policy information for that computer replaces old, cached
information. You cannot configure or manage the cached copy of an IPSec policy in Active
Directory.

4.5.10 Interface Manager


Interface Manager maintains a list of physical and logical network adapters on the computer and
notifies the Policy Agent when interface and address changes occur. Interface Manager also
maintains a complete list of generic filters. Generic filters are filters that are configured to use My
IP Address either as a source address or as a destination address. Generic filters are saved in
the appropriate IPSec policy storage location with either a source address or a destination
address of 0.0.0.0 and a corresponding subnet mask of 255.255.255.255.
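What expanding a generic filter involves can be sketched as follows (data shapes are assumed for illustration): the My IP Address placeholder is replaced with each local address that Interface Manager reports:

```python
# Sketch of expanding a generic filter: "My IP Address" is stored as
# 0.0.0.0 with mask 255.255.255.255 (per the text above) and is replaced
# with each local address. Filter fields are invented for illustration.

MY_IP_PLACEHOLDER = "0.0.0.0"

def expand_generic_filter(filter_spec, local_addresses):
    """Return one concrete filter per local address for a generic filter."""
    if filter_spec["src"] != MY_IP_PLACEHOLDER:
        return [filter_spec]                     # already a specific filter
    return [dict(filter_spec, src=addr) for addr in local_addresses]

generic = {"src": MY_IP_PLACEHOLDER, "dst": "10.0.0.0/8", "action": "negotiate"}
concrete = expand_generic_filter(generic, ["192.168.1.20", "10.1.2.3"])

for f in concrete:
    print(f["src"], "->", f["dst"])
```

Because Interface Manager notifies the Policy Agent when addresses change, the concrete filter set can be rebuilt whenever an adapter gains or loses an address.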

4.6 IKE Module Architecture


The IKE module receives authentication and security settings from the Policy Agent and waits for
requests to negotiate SAs. When the IKE module receives a request to negotiate an SA from the
IPSec driver, the IKE module negotiates both kinds of SAs (the main mode SA and the quick
mode SA) with the appropriate endpoint, based on the request from the IPSec driver and the
policy settings obtained from the Policy Agent. After it has negotiated an SA, the IKE module sends the
SA settings to the IPSec driver.
The IKE module has the following components:
- CryptoAPI
- Diffie-Hellman Cryptographic Service Provider (CSP)
- RSA CSP
- Certificate store
- Security Support Provider Interface (SSPI)
- Kerberos Security Support Provider (SSP)
The following figure shows the architecture of the IKE module.

4.6.1 IKE Module Architecture


A cryptographic service provider (CSP) is an independent software module that provides implementations of cryptographic standards and algorithms. The CSP carries out the cryptographic functions of CryptoAPI, creating keys, destroying them, and using them to perform a variety of cryptographic operations.
The following table briefly describes the IKE module components.

4.6.2 IKE Module Components


CryptoAPI: CryptoAPI provides a set of functions that allows applications based on Windows to encrypt or digitally sign data in a flexible manner while providing protection for the user's sensitive private key data. Actual cryptographic operations are performed by independent modules known as CSPs. The IKE negotiation must be encrypted, and this encryption is limited by what can be configured in IPSec policy. The standard CryptoAPI functions for keyed hashing (using HMAC-MD5 and HMAC-SHA1) and data encryption (using DES and 3DES) are used.

Diffie-Hellman CSP: The Diffie-Hellman CSP contains the implementation of the Diffie-Hellman key exchange and determination algorithm. IKE uses only the Microsoft Base or Enhanced CSP for Diffie-Hellman. However, the Diffie-Hellman calculation can be accelerated using the CryptoAPI exponentiation offload interface (OffloadModExpo), as documented in the CryptoAPI SDK.

RSA CSP: The RSA CSP contains the implementation of the Rivest-Shamir-Adleman (RSA) cryptographic algorithms. When certificate authentication is selected, IKE checks the CryptoAPI default provider to see if it is capable of performing RSA 512-bit digital signatures. If so, IKE uses this default CSP. If not, IKE enumerates the RSA providers, selects hardware-based providers first, and ensures that they can provide 512-bit signatures. IKE performs these actions to open the certificate store. The CSP that is used for signature operations during IKE negotiation is specified by the certificate selected during the negotiation; the CSP associated with the certificate's private key is used for signing. As long as the RSA CSP supports the NOHASHID flag for the CryptSignHash() API call, IKE can use that CSP for private key signing of IKE payloads. Because the enumeration process does not permit IKE to know whether the CSP supports the NOHASHID option, it is possible to choose a certificate that appears valid, but whose CSP does not allow IKE to construct the proper signature.

Certificate store: The certificate store is a permanent storage location where certificates, certificate revocation lists, and certificate trust lists are stored. The certificate store is a physical store on the Windows Server 2003 computer, but it is viewed logically as belonging to the user account of the currently logged-on user, a service account, or the computer account. IKE can use only the computer account, usually referred to as the computer store. You can view the computer store by using the Certificates snap-in.

SSPI: The SSPI enables network applications to access one of several security providers to establish authenticated connections and exchange data securely over those connections.

Kerberos SSP: The Kerberos SSP contains an implementation of the Kerberos security protocol. The Kerberos SSP is an SSPI provider.

4.6.3 IPSec Driver Architecture


The IPSec driver is a kernel-mode component that monitors and secures IP packets. In addition
to the Policy Agent and IKE, the IPSec driver uses the following components: the Security
Association Database (SAD), the Security Policy Database (SPD), the TCP/IP driver, TCP/IP
applications, and the network interface.
The IPSec driver matches IP packet information with the IP filters that are configured in the active
SPD. If traffic must be secured, the IPSec driver either uses the appropriate SA to determine how
to provide packet security or requests that the IKE module negotiate SAs to be used to provide
packet security. After the IPSec driver determines which SA to use, it performs the encryption,
decryption, and hashing operations needed to create or validate the AH and ESP headers on an
IPSec-protected packet.
The following figure shows the IPSec driver architecture and how the driver interacts with other
components in Windows Server 2003.

4.6.4 IPSec Driver Architecture

The following table briefly describes the IPSec driver components.

4.6.5 IPSec Driver Components

SAD: A database in the IPSec driver that contains the parameters associated with each active SA. This database is populated automatically from the IKE module.

SPD: A database in the IPSec driver that specifies the filter lists and associated settings that determine the status of all inbound or outbound IP traffic. Inbound packets are checked to ensure that they have been secured according to policy. Outbound packets are permitted, blocked, or secured, according to policy. For secured traffic, the security policy that is used is the negotiated SA, which is stored in the SAD.

TCP/IP driver: The Windows Server 2003 implementation of the TCP/IP protocol. It is a kernel-mode component that is loaded from the Tcpip.sys file during startup.

TCP/IP applications: Applications that use TCP/IP and access TCP/IP network services through an appropriate network API, such as Windows Sockets, NetBIOS, or Remote Procedure Call (RPC).

Network interface: The logical or physical interface over which IP packets are sent and received. The details of the Network Driver Interface Specification (NDIS) interface, the network adapter driver, and the physical media over which the IP packets are sent and received are beyond the scope of this subject.
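The interaction between the SPD and the SAD can also be sketched in code. The following Python sketch is purely illustrative; the data structures, field names, and matching logic are invented for the example and bear no relation to the actual kernel-mode driver interfaces:

```python
# Hypothetical sketch of SPD/SAD interaction for an outbound packet.
# All structures here are illustrative stand-ins, not the Windows driver API.

spd = [  # Security Policy Database: ordered filters -> action
    {"dst_port": 135, "action": "block"},
    {"dst_ip": "10.0.0.0/8", "action": "secure"},
    {"dst_ip": "any", "action": "permit"},
]

sad = {  # Security Association Database: (dst_ip, protocol) -> SA parameters
    ("10.0.0.5", "ESP"): {"spi": 0x1000, "alg": "3DES/SHA1"},
}

def ip_in(dst_ip: str, pattern: str) -> bool:
    """Crude first-octet match standing in for real prefix matching."""
    if pattern == "any":
        return True
    return dst_ip.split(".")[0] == pattern.split("/")[0].split(".")[0]

def process_outbound(dst_ip, dst_port):
    """Walk the SPD in order; on a 'secure' match, consult the SAD.

    If no SA exists yet, the driver would ask IKE to negotiate one,
    modeled here by the "negotiate" result.
    """
    for f in spd:
        if f.get("dst_port") == dst_port:
            return f["action"], None
        if "dst_ip" in f and ip_in(dst_ip, f["dst_ip"]):
            if f["action"] != "secure":
                return f["action"], None
            sa = sad.get((dst_ip, "ESP"))
            return ("secure", sa) if sa else ("negotiate", None)
    return "block", None
```

Note how a "secure" filter match only yields a usable result when the SAD already holds a negotiated SA; otherwise key negotiation must be triggered first.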

4.6.6 Policy Data Structure


The data in a policy indicates the desired protection for the traffic between computers on a
network. The data is made up of various computer-related attributes (for example, IP address and
port number), the communication methods allowed (for example, algorithms and key lengths),
and the IKE key negotiation and management. The policy store updates and stores the policy
data. The Policy Agent retrieves the stored policy data and makes it available to all IPSec
components.
As shown in the following figure, an IPSec policy contains several subsets of information.

4.6.7 IPSec Policy Structure


The following table describes the components of an IPSec policy.

4.6.8 IPSec Policy Components


Policy-wide parameters: Specify the polling interval used to detect changes in policy. The policy-wide parameters are configured on the General tab in the properties of an IPSec policy.

ISAKMP policy: Contains IKE parameters, such as encryption key lifetimes, and other settings. The ISAKMP policy settings are configured in the Key Exchange Settings dialog box, which is available from the Advanced button on the General tab in the properties of an IPSec policy. The ISAKMP policy also contains a list of security methods, listed in order of preference, for protecting the identity of IPSec peers during authentication; these are configured in the Key Exchange Security Methods dialog box, which is available from the Methods button in the Key Exchange Settings dialog box.

IPSec rules: Contain a statement that associates a filter list with a filter action, an authentication method, an IPSec mode, and other settings. Typically, an IPSec rule is configured for a specific purpose (for example, block all inbound traffic from the Internet to TCP port 135). You can define many IPSec rules in a single IPSec policy. IPSec rules are configured on the Rules tab in the properties of an IPSec policy.

IPSec rules associate IKE negotiation parameters with one or more IP filters. The following table describes the components of an IPSec rule.

4.6.9 IPSec Rule Components


Filter list: Contains one or more predefined filters that describe the types of traffic to which an action (permit, block, or secure) is applied. The filter list is configured on the IP Filter List tab in the properties of an IPSec rule within an IPSec policy.

Filter action: Defines the security requirements for the data transmission. A filter action can be configured to permit traffic, block traffic, or negotiate secure communications with IPSec for packets matching the filter list. If security negotiation is selected, you must also configure the security methods and their order, whether initial incoming unsecured traffic should be accepted, whether unsecured communication with computers that do not support IPSec should be allowed, and whether to use perfect forward secrecy (PFS). PFS is a mechanism that determines whether the existing keying material for a master key can be used to derive a new session key. Session key PFS performs a new Diffie-Hellman key exchange to generate new master key keying material instead of using master key keying material to derive more than one session key. The negotiation settings are configured on the Filter Action tab in the properties of an IPSec rule within an IPSec policy.

Authentication method(s): An IPSec rule contains one or more authentication methods, listed in order of preference, that are used for protection during IKE negotiations. The available authentication methods are the Kerberos v5 protocol, a certificate issued from a specified certification authority (CA), or a preshared key. The negotiation data is configured on the Authentication Methods tab in the properties of an IPSec rule within an IPSec policy.

Tunnel endpoint: A setting that specifies whether traffic is tunneled and, if it is, specifies the tunnel endpoint, which is the tunneling computer that is closest to the IP traffic destination, as specified by the associated IP filter list. Two rules are required to describe an IPSec tunnel. For the outbound traffic rule, the tunnel endpoint is the IP address or subnet of the IPSec peer on the other end of the tunnel. For the inbound traffic rule, the tunnel endpoint is an IP address or subnet configured on the local computer. The tunnel endpoint is configured on the Tunnel Setting tab in the properties of an IPSec rule within an IPSec policy.

Connection type: Specifies whether the rule applies to only local area network (LAN) connections, to only dial-up connections, or to both types of connections. The interface applicability is configured on the Connection Type tab in the properties of an IPSec rule within an IPSec policy.
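One way to see how these components fit together is to model a rule as a data structure. The following Python sketch is illustrative only; the field names and example values are hypothetical and do not match any real Windows policy schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IPSecRule:
    """Illustrative model of the rule components described above.

    Field names are hypothetical, chosen to mirror the table entries.
    """
    filter_list: List[str]                 # traffic descriptions the rule matches
    filter_action: str                     # "permit" | "block" | "negotiate"
    authentication_methods: List[str]      # ordered by preference
    tunnel_endpoint: Optional[str] = None  # None => transport mode
    connection_type: str = "all"           # "lan" | "dialup" | "all"

# A rule for a specific purpose, as in the example above:
block_rpc = IPSecRule(
    filter_list=["inbound tcp dst-port 135"],
    filter_action="block",
    authentication_methods=["kerberos-v5"],
)

# A broader rule that negotiates security, with fallback auth methods:
secure_all = IPSecRule(
    filter_list=["any <-> any"],
    filter_action="negotiate",
    authentication_methods=["kerberos-v5", "certificate", "preshared-key"],
)
```

A single policy would hold many such rules, matching the note above that you can define many IPSec rules in one IPSec policy.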

Note: The Kerberos v5 authentication method is not supported on computers running the Microsoft Windows XP Home Edition operating system.

4.6.10 Default response rule


Each policy has a default response rule. The default response rule has the IP filter list of
<Dynamic> and the filter action of Default Response when the list of rules is viewed with the IP
Security Policies snap-in. The default response rule cannot be deleted, but it can be deactivated.
It is activated for all of the default policies and in the IP Security Policy wizard.
IPSec uses the default response rule to ensure that the computer responds to requests for secure
communication. If an active policy does not include a rule for a computer requesting secure
communication, IPSec applies the default response rule and negotiates security. For example, if
Host A intends to communicate securely with Host B, but Host B does not have an inbound filter
defined for Host A, then IPSec uses the default response.
For the default response rule, you can configure only the security methods for secure traffic and
the authentication methods. The following table lists the default security methods for the default
response rule.

4.6.11 Default Security Methods for the Default Response Rule


Type AH Integrity ESP Confidentiality ESP Integrity
Encryption and Integrity <None> 3DES SHA1
Custom <None> 3DES MD5
Custom <None> DES SHA1
Custom <None> DES MD5
Custom SHA1 <None> <None>
Custom MD5 <None> <None>

You can configure the security methods and their preference order on the Security Methods tab
in the properties of the default response rule in the IP Security Policies snap-in.
The default response rule works in the following way:
1. If the IKE module receives a request to negotiate security, it queries the Policy Agent for
a matching filter for traffic to and from the source and destination address of the ISAKMP
message. If a matching filter is explicitly configured, the IKE negotiation is based on the
settings of the associated rule.
2. If no matching filter is found and the default response rule is not activated, IKE
negotiation fails.
3. If no matching filter is found and the default response rule is activated, then IKE
dynamically creates an IP filter within the Policy Agent that corresponds to the traffic
specification of the incoming ISAKMP message. IKE authenticates and negotiates security
based on the settings on the Authentication Methods and Security Methods tabs for the
default response rule.
You configure the default authentication method for the default response rule by using the IP
Security Policy wizard.
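The three-step behavior above can be summarized in a short function. This is an illustrative model; the filter table and traffic structures are invented for the example:

```python
def on_negotiation_request(filters, default_response_active, traffic):
    """Sketch of the default response rule decision (steps 1-3 above).

    `filters` maps (src, dst) address pairs to rule settings. All
    structures here are hypothetical stand-ins for the Policy Agent's
    internal data, not an actual API.
    """
    match = filters.get((traffic["src"], traffic["dst"]))
    if match is not None:
        return ("negotiate", match)        # step 1: an explicit filter wins
    if not default_response_active:
        return ("fail", None)              # step 2: no rule, no default response
    # Step 3: create a dynamic filter for this traffic and negotiate
    # using the default response rule's settings.
    dynamic = {"filter": (traffic["src"], traffic["dst"]),
               "rule": "default-response"}
    filters[dynamic["filter"]] = dynamic
    return ("negotiate", dynamic)
```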
Example: Default response rule
Typically, the default response rule is used when a group of servers are configured with policy to
secure communications between themselves and any IP address and to accept unsecured
communication, but respond using secured communications. The client computers are configured
with the default response rule. When the clients communicate with each other, the traffic is not
secured. When the clients communicate with the server, the traffic is secured (with the exception
of the initial packet sent by the client to the server).
For example, a client computer can reliably exchange data with a server by using Transmission
Control Protocol (TCP). A TCP connection is established through the exchange of three TCP segments: SYN (synchronize), SYN-ACK (synchronize-acknowledgement), and ACK (acknowledgement).
Because the client computer does not have an explicit rule to secure traffic to the server, the
client computer sends an unsecured SYN segment to the server. The server receives the SYN
segment and checks its filters. It finds the filter that requires secure traffic to and from any IP
address. However, the filter action settings also allow the server to accept unsecured
communication, responding with secured communication. Consequently, the SYN segment sent
by the client and received by the server is sent to the TCP/IP protocol on the server.
The TCP/IP protocol on the server responds by sending a SYN-ACK segment back to the client.
This IP packet is sent to the IPSec driver on the server. Checking the filter settings to require
secured communications, the IPSec driver notes that there are no active SAs between itself and
the client. The IPSec driver requests that the IKE module negotiate SAs for secure
communication.
The IKE module on the server sends an ISAKMP message to the client to begin the process of
negotiating SAs for secure communications. When the ISAKMP message is received on the client
computer, the IKE module on the client computer queries the filter lists in the Policy Agent.
Because it finds no explicit filter match and the default response rule is activated, the IKE module
creates a dynamic filter for traffic to and from the client and server computer.
IKE negotiation continues based on the policy settings of the server and the client. After IKE
negotiation is finished, the server sends the secured TCP SYN-ACK segment to the client. The
client responds with a secured TCP ACK segment and secured TCP data can be exchanged
between the client and the server.

4.7 IPSec Protocols


IPSec is integrated at the IP layer (layer 3) of the TCP/IP stack, so it provides security for almost
all protocols in the TCP/IP suite. Because IPSec is applied to all applications, you do not need to
configure separate security for each application that uses TCP/IP.

4.7.1 IPSec Protocol Architecture

4.7.2 IPSec AH and ESP Protocols


The IPSec protocols Authentication Header (AH) and Encapsulating Security Payload (ESP)
provide data and identity protection for each IP packet. The AH protocol is an IPSec protocol
that provides data origin authentication, data integrity, and anti-replay protection for the entire
packet (the IP header and the data payload carried in the packet, except fields in the IP header
that are allowed to change in transit). AH can be used alone, in combination with the ESP


protocol, or in IPSec tunnel mode. IPSec tunnel mode is used to protect site-to-site (gateway-to-
gateway) traffic between networks, such as site-to-site networking through the Internet. The
sending gateway encapsulates the entire IP datagram by adding a new IP header and then
protects the new packet using one of the IPSec protocols. Windows Server 2003 supports IPSec
tunnel mode for configurations where Layer Two Tunneling Protocol (L2TP) cannot be used.
The ESP protocol is an IPSec protocol that provides data confidentiality, data origin
authentication, data integrity, and anti-replay protection for the ESP payload. The ESP protocol
can be used alone, in combination with the AH protocol, or in IPSec tunnel mode.

4.7.3 IPSec AH and ESP Protocols in IPSec Transport Mode


IPSec protocols provide data and identity protection for each IP packet by adding their own
security protocol header to each packet. This section describes how AH and ESP protect IP
packets when IPSec is used in transport mode.
You use IPSec in transport mode to protect traffic in end-to-end communications scenarios (for
example, for communications between a client and a server). You can also use IPSec in transport
mode for basic packet filtering (that is, to statically permit or block traffic based on source and
destination address combinations and on the IP protocol and TCP and UDP ports).
IPSec transport mode encapsulates the original IP payload with an IPSec header (AH or ESP).

4.7.4 AH transport mode


The AH protocol provides data origin authentication, data integrity, and anti-replay protection for
the entire packet (both the IP header and the data payload carried in the packet), except for the
fields in the IP header that are allowed to change in transit. AH does not provide data
confidentiality, which means that it does not encrypt the data. The data is readable, but protected
from modification and spoofing.

4.7.5 AH Transport Mode Packet Structure

As shown in the figure, data integrity and authentication are provided by the placement of the AH
header between the IP header and the IP packet payload. The AH protocol uses keyed hash
algorithms to sign the packet for integrity. The AH protocol is identified in the IP header with an IP
protocol ID of 51. This protocol can be used alone or with the ESP protocol.
The following table describes the AH header fields.

AH Header Fields

Next header: Identifies the IP payload by using the IP protocol ID. For example, a value of 6 represents TCP.

Length: Indicates the length of the AH header.

SPI: Used in combination with the destination address and the security protocol (AH or ESP) to identify the correct SA for the communication. The receiver uses this value to determine with which SA the packet is identified.

Sequence number: Provides anti-replay protection for the packet. The sequence number is a 32-bit, incrementally increasing number (starting from 1) that indicates the packet number sent over the SA for the communication. The sequence number cannot repeat for the life of the quick mode security association. The receiver checks this field to verify that a packet for a security association with this number has not already been received. If one has been received, the packet is rejected.

Authentication data: Contains the integrity check value (ICV), also known as the message authentication code, which is used to verify both data origin authentication and data integrity. The receiver calculates the ICV value and checks it against this value (which is calculated by the sender) to verify integrity. The ICV is calculated over the IP header, the AH header, and the IP payload.
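The keyed-hash signing that produces the ICV can be illustrated with Python's standard hmac module. This is a simplified sketch, not the actual AH computation: real AH treats mutable IP header fields (such as TTL and checksum) and the ICV field itself as zero before hashing, which is omitted here:

```python
import hmac
import hashlib

def compute_icv(key: bytes, ip_header: bytes, ah_header: bytes,
                payload: bytes) -> bytes:
    """Illustrative ICV: an HMAC-SHA1 over IP header + AH header + payload.

    Simplification: mutable header fields are not zeroed out as they
    would be in a real AH implementation.
    """
    return hmac.new(key, ip_header + ah_header + payload, hashlib.sha1).digest()

def verify_icv(key, ip_header, ah_header, payload, received_icv) -> bool:
    """Receiver side: recompute the ICV and compare in constant time."""
    expected = compute_icv(key, ip_header, ah_header, payload)
    return hmac.compare_digest(expected, received_icv)

key = b"shared-sa-key"   # keying material would come from the negotiated SA
icv = compute_icv(key, b"IPHDR", b"AHHDR", b"payload")
```

Any modification to the signed portion changes the recomputed ICV, which is how AH detects tampering and spoofing without encrypting the data.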

4.7.6 ESP transport mode


The ESP protocol provides data origin authentication, data integrity, anti-replay protection, and
the option of confidentiality for the IP payload only. ESP in transport mode does not protect the
entire packet with a cryptographic checksum nor does it protect the IP header.

4.7.7 ESP Transport Mode Packet Structure

As shown in the figure, the ESP header is placed before the IP payload, and an ESP trailer and an ESP authentication data field are placed after the IP payload. The ESP protocol is identified in the IP header with the IP protocol ID of 50.
The following table describes the ESP header fields.

ESP Header Fields

SPI: When used in combination with the destination address and the security protocol (AH or ESP), the SPI identifies the SA for the communication. The receiver uses this value to determine the SA with which this packet should be identified.

Sequence number: Provides anti-replay protection for the packet. The sequence number is a 32-bit, incrementally increasing number (starting from 1) that indicates the packet number sent over the quick mode SA for the communication. The sequence number cannot repeat for the life of the quick mode SA. The receiver checks this field to verify that a packet for an SA with this number has not already been received. If one has been received, the packet is rejected.
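The anti-replay check described for the Sequence number field can be sketched as follows. This is a simplified model that remembers every number seen; real implementations use a fixed-size sliding window instead of an unbounded set:

```python
class ReplayChecker:
    """Simplified anti-replay check for one quick mode SA.

    Illustrative only: real IPSec uses a sliding window of recent
    sequence numbers; this sketch just shows the accept/reject rule.
    """
    def __init__(self):
        self.seen = set()

    def accept(self, sequence_number: int) -> bool:
        if sequence_number in self.seen:
            return False           # duplicate: the packet is rejected
        self.seen.add(sequence_number)
        return True

sa = ReplayChecker()
```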

The following table describes the ESP trailer fields.

ESP Trailer Fields

Padding: Padding of 0 to 255 bytes is used to ensure that the encrypted payload falls on the byte boundaries required by encryption algorithms.

Padding length: Indicates the length of the Padding field in bytes. The receiver uses this field to remove the padding bytes after the encrypted payload with the padding bytes has been decrypted.

Next header: Identifies the type of data in the payload, such as TCP or UDP.
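Assuming an 8-byte cipher block (as with DES and 3DES), the amount of padding can be computed as in this illustrative sketch, which accounts for the two fixed trailer bytes (Padding length and Next header):

```python
def esp_padding(payload_len: int, block_size: int = 8) -> int:
    """Bytes of padding needed so that payload + padding + the two fixed
    trailer bytes (Padding length and Next header) fill whole cipher
    blocks.

    block_size=8 matches DES/3DES; this is an illustrative calculation,
    not driver code.
    """
    return (-(payload_len + 2)) % block_size
```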

The following table describes the ESP authentication trailer field.

ESP Authentication Trailer Field

Authentication data: Contains the ICV, also known as the message authentication code, which is used to verify both message authentication and integrity. The receiver calculates the ICV value and checks it against this value (which is calculated by the sender) to verify integrity. The ICV is calculated over the ESP header, the payload data, and the ESP trailer.

4.7.8 IPSec AH and ESP Protocols in IPSec Tunnel Mode


You use IPSec tunnel mode primarily to protect traffic between sites that must traverse an
untrusted path. For example, you can use tunnel mode to do the following:
Establish gateway-to-gateway tunnels between sites, when the gateways or end systems
do not support L2TP/IPSec Virtual Private Network (VPN) connections.
Protect traffic end-to-end when one endpoint of the communication does not support
IPSec. You can send protected traffic to a computer that supports tunnel mode and that is
placed immediately in front of the computer that does not support IPSec.
With tunnel mode, an entire IP packet is encapsulated with an AH or ESP header and an
additional IP header. The IP addresses of the outer IP header are the tunnel endpoints, and the
IP addresses of the encapsulated IP header are the ultimate source and destination addresses.
You can use both the ESP and AH protocols in combination when tunneling to provide both
confidentiality for the tunneled IP packet and integrity and authentication for the entire packet.

4.7.9 AH tunnel mode


AH tunnel mode encapsulates an IP packet with an AH and IP header and signs the entire packet
for integrity and authentication.
AH Tunnel Mode Packet Structure

4.7.10 ESP tunnel mode


ESP tunnel mode encapsulates an IP packet with both an ESP and IP header and an ESP
authentication trailer.
ESP Tunnel Mode Packet Structure

The signed portion of the packet indicates where the packet has been signed for integrity and
authentication. The encrypted portion of the packet indicates what information is protected with
confidentiality.
Because a new header for tunneling is added to the packet, everything that comes after the ESP
header is signed (except for the ESP authentication trailer) because it is now encapsulated in the
tunneled packet. The original header is placed after the ESP header. The entire packet is
appended with an ESP trailer before encryption occurs. All data that follows the ESP header,
except for the ESP authentication trailer, is encrypted. This includes the original header, which is
now considered to be part of the data portion of the packet.
The entire ESP payload is then encapsulated within the new tunnel header. The tunnel header is
not encrypted because it is used only to route the packet from origin to tunnel endpoint.
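The encapsulation order described above can be made concrete with a byte-layout sketch. Everything here is illustrative: the "headers" are placeholder strings and encrypt() is a toy stand-in for a real cipher; only the ordering of the fields is the point:

```python
def esp_tunnel_encapsulate(original_packet: bytes,
                           new_ip_header: bytes,
                           esp_header: bytes,
                           esp_trailer: bytes,
                           auth_data: bytes) -> bytes:
    """Illustrative ESP tunnel-mode layout; not a real implementation."""
    def encrypt(data: bytes) -> bytes:
        # Toy "cipher" (XOR with a constant) purely so the original
        # bytes are no longer visible in the output.
        return bytes(b ^ 0x5A for b in data)

    # Everything after the ESP header, except the authentication data,
    # is encrypted -- including the original packet and its IP header.
    encrypted = encrypt(original_packet + esp_trailer)
    # The new tunnel header and ESP header stay in the clear so the
    # packet can be routed from origin to tunnel endpoint.
    return new_ip_header + esp_header + encrypted + auth_data

pkt = esp_tunnel_encapsulate(b"ORIG-IP|DATA", b"NEWIP|", b"ESP|", b"|TRL", b"|ICV")
```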


If the packet is being sent across a public network, it is routed to the IP address of the gateway
for the receiving intranet. The gateway decrypts the packet, discards the ESP header, and uses
the original IP header to route the packet to the intranet computer.

4.8 IPSec Processes and Interactions


This section describes the following IPSec processes and interactions:
Policy Agent initialization
Policy data retrieval and distribution
IKE main mode and quick mode negotiation
Key protection
IPSec traffic processing

4.8.1 Policy Agent Initialization


When the Policy Agent is started by the Local Security Authority (LSA) service, it performs a
number of initialization steps and then begins a cycle during which it waits for a number of events
to be signaled.
At startup, the Policy Agent checks the registry for the value that indicates the location of the computer's IPSec policy in Active Directory. To find the value, it checks the following registry key:
HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\IPSec\GPTIPSECPolicy\DSIPSECPolicyPath
If the computer is a member of a domain and an IPSec policy has been configured in Active
Directory, this value was set during computer initialization. If the computer is not a member of a
domain or there is no IPSec policy in Active Directory, the Policy Agent finds no value.
After initializing interval values for event processing, the Policy Agent starts the Interface
Manager (IM). The IM establishes the event processing that needs to be signaled whenever a
network interface configuration or status changes. The Policy Agent then starts the IPSec driver.
Next, IKE starts. At this point, additional initialization occurs. This includes determining through
the IPSec driver whether strong cryptography is available.
In preparation for loading the IPSec policy from either Active Directory or the local registry, the
Policy Agent initializes the policy state and polling interval. It then loads the IPSec policy from the
appropriate store, as described in the following section. After the policy is loaded, the Policy
Agent begins a service loop.
If, during the service wait cycle, the Policy Agent encounters either an unexpected event or the
Policy Agent service stop event, the Policy Agent shuts down IKE, stops the IPSec driver, cleans
up local data, and then stops itself.

4.8.2 Policy Data Retrieval and Distribution


The Policy Agent loads the static IPSec policy from one of the stores if one of the following
situations occurs:
The Policy Agent has initialized and started IKE and the IPSec driver.
An event has signaled the Policy Agent to reload the policy.
A polling interval timeout occurs when the Policy Agent unsuccessfully accesses the
current policy location to check for updates.
The Policy Agent first attempts to retrieve the policy from Active Directory. If it successfully does
so, the Policy Agent supplies the retrieved policy data to IKE and the IPSec driver. The Policy
Agent then copies the IPSec policy to the registry cache.
If the Policy Agent fails to retrieve the policy from Active Directory, it checks the local cache for
the IPSec policy data. If the local cache contains policy data, the Policy Agent supplies the data
to IKE and the IPSec driver.
If the Policy Agent fails to retrieve policy from Active Directory and the local cache, it checks the
registry for local policy data. If the policy is successfully retrieved and the data is successfully
provided to IKE and the IPSec driver, the Policy Agent is placed in the Local Downloaded policy
state.
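The retrieval order described above (Active Directory first, then the cached copy, then local registry policy) is a simple fallback chain, sketched here with hypothetical store interfaces:

```python
def load_policy(active_directory, local_cache, local_registry, cache):
    """Sketch of the Policy Agent's fallback order.

    The first three arguments are callables returning policy data or
    None; `cache` is a dict standing in for the registry cache. All
    names here are illustrative, not actual service interfaces.
    """
    policy = active_directory()
    if policy is not None:
        cache["policy"] = policy            # AD policy is copied to the cache
        return policy, "domain"
    policy = local_cache()
    if policy is not None:
        return policy, "cached"
    policy = local_registry()
    if policy is not None:
        return policy, "local-downloaded"   # Local Downloaded policy state
    return None, "initial"                  # Initial state, default polling interval
```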


If the Policy Agent successfully loads a policy, the polling interval for policy checks is set from the
value in the policy data. If the Policy Agent fails to load a policy, it goes into the Initial policy state
and sets the polling interval to the default value.
While loading the static IPSec policy, the Policy Agent notes the state of the static policy data. If a
service loop timeout occurs, the Policy Agent uses the state information to determine activity.

4.8.3 IKE Main Mode and Quick Mode Negotiation


The IKE component is a Policy Agent service. IKE is started by the Policy Agent, restarted by the
Policy Agent as needed, and shut down when the Policy Agent shuts down. All policy data that
IKE requires for operation is provided by the Policy Agent.
IKE establishes a combination of mutually agreeable policy and keys that defines the SA: the security services, protection mechanisms, and cryptographic keys between communicating peers.
To ensure successful and secure communication, IKE performs a two-phase negotiation
operation. Phase 1 negotiation is known as main mode negotiation and Phase 2 is known as
quick mode negotiation. The IKE main mode SA (also known as the ISAKMP SA) protects the
IKE negotiation itself. The SAs created during the second IKE negotiation phase are known as
the quick mode SAs (also known as IPSec SAs). The quick mode SAs protect application traffic.
Two quick mode SAs are negotiated, one for inbound and one for outbound traffic.

4.8.4 Main Mode Negotiation


IKE performs main mode negotiation with an IPSec peer to establish protection mechanisms and
keys for subsequent use in protecting quick mode IKE communications. IKE main mode
negotiation occurs in three parts:
Part one: Negotiation of protection mechanisms
Part two: Diffie-Hellman exchange
Part three: Authentication
Main mode negotiation consists of the exchange of a series of six ISAKMP messages. An ISAKMP message is the payload of a User Datagram Protocol (UDP) message with the source and destination UDP ports set to 500 (or 4500). An ISAKMP message has an ISAKMP header and one or more ISAKMP payloads as defined in RFC 2408.
Both the initiator and the responder in the exchange send three messages. The initiator is the
IPSec peer that initiates secure communications by sending the first message. The responder,
which sends the second message, is the IPSec peer with which the initiator is requesting secure
communications.
The following table shows the first four main mode messages, which are not encrypted.
Main Mode Messages 1 Through 4

Message 1 (sent by the initiator): ISAKMP header, Security Association (contains proposals)

Message 2 (sent by the responder): ISAKMP header, Security Association (contains a selected proposal)

Message 3 (sent by the initiator): ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce, additional payloads (depending on authentication method)

Message 4 (sent by the responder): ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce, additional payloads (depending on authentication method)

The first four main mode messages contain the following ISAKMP payloads:
Security Association. The Security Association payload sent in message 1 is a list of
proposed protection mechanisms for the main mode SA. The Security Association payload
sent in message 2 is a specific protection suite for the main mode SA that is common to both
IPSec peers. It is selected by the responder.


Key Exchange. The Key Exchange payload is sent in message 3 by the initiator and in
message 4 by the responder and contains Diffie-Hellman key determination information for
the Diffie-Hellman key exchange process.
Nonce. The Nonce payload is sent in messages 3 and 4 and contains a nonce, which is
a pseudorandom number that is used only once. The initiator and responder each send their
own unique nonces. Nonces are used to provide replay protection.
Depending on the authentication method that is selected in the IPSec policy, messages 3 and 4
might contain additional payloads. The payloads of all messages beyond the first four messages
are encrypted and vary based on the authentication method selected.
Note
It is important to understand the differences in negotiation behavior between initiating an
IKE main mode negotiation and quick mode negotiation and responding to one, and rekeying
an existing one. The IKE RFC 2409 requires that rekeys can be performed by either peer (in
either direction) at any time, regardless of the security association lifetimes negotiated.
Therefore, the computer that initiates a negotiation might become the responder and these
roles might alternate many times. Some of these differences are due to behavior required for
interoperability and some are caused by enforcement of policy settings.
Part One: Negotiation of protection mechanisms
When initiating an IKE exchange, IKE proposes protection mechanisms based on the applied
security policy. Each proposed protection mechanism includes attributes for encryption
algorithms, hash algorithms, authentication methods, and Diffie-Hellman groups. The first part of
the main mode is contained in main mode messages 1 and 2.
The following table lists the protection mechanism attribute values that are supported by Windows
Server 2003 IKE. These values are described in more detail in later sections.
Main Mode Attribute Values Supported by IKE

Encryption algorithm: DES, 3DES
Integrity algorithm: HMAC-MD5, HMAC-SHA1
Authentication method: Kerberos v5, public key certificate, preshared key
Diffie-Hellman group: Group 1, Group 2, Group 14 (2048)

The encryption algorithm, integrity algorithm, and Diffie-Hellman group are configured as one of
multiple key exchange security methods.
The initiating IKE peer proposes one or more protection suites in the same order as they appear
in the applied security policy. If one of the protection suites is acceptable to the responding IKE
peer, the responder selects it for use and responds to the initiator with its choice. Because the
responding IKE peer might not be running Windows Server 2003 or Windows 2000 and is
selecting the first proposed protection suite that is acceptable, the protection suites in the applied
security policy should be configured in the order of most secure to least secure.
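The responder's selection rule (take the first acceptable entry from the initiator's ordered proposal list) can be sketched as follows; the proposal tuples are illustrative:

```python
def select_proposal(initiator_proposals, responder_acceptable):
    """Responder picks the first proposal from the initiator's ordered
    list that it also supports. Proposal encoding is illustrative:
    (encryption algorithm, integrity algorithm, Diffie-Hellman group).
    """
    for proposal in initiator_proposals:   # initiator's preference order
        if proposal in responder_acceptable:
            return proposal
    return None                            # no common suite: negotiation fails

# Because the responder takes the first acceptable entry, the policy
# should list suites from most secure to least secure.
offered = [("3DES", "SHA1", "group2"), ("DES", "MD5", "group1")]
```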
Part Two: Diffie-Hellman exchange
After a protection suite is negotiated, IKE queries a Diffie-Hellman CSP through CryptoAPI to
generate a Diffie-Hellman public and private key pair based on the negotiated Diffie-Hellman
group. The Diffie-Hellman public key is sent to the IKE peer in an ISAKMP Key Exchange
payload. Main mode negotiation part 2 is contained in main mode messages 3 and 4.
The cryptographic strength of a Diffie-Hellman key pair is related to its prime number length (key
size). Windows Server 2003 IKE supports the following Diffie-Hellman groups:
Group 1 (768 bits)
Group 2 (1024 bits)
Group 14 (2048 bits)
Note

Page 145 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

For enhanced security, Windows Server 2003 IPSec includes Diffie-Hellman Group 14,
which provides 2048 bits of keying strength. However, Diffie-Hellman Group 14 is not
currently supported in Windows 2000 or Windows XP for general IPSec policy use.
After the Diffie-Hellman public keys are exchanged, IKE accesses CryptoAPI to compute
the shared key from the exchanged Diffie-Hellman public values.
The following figure shows the Diffie-Hellman exchange between the IPSec peers and the
relationship between IKE, CryptoAPI, and the Diffie-Hellman CSP.
IKE Diffie-Hellman Key Exchange

The Diffie-Hellman exchange occurs in the following steps:


1. On each IPSec peer, IKE requests that the first Diffie-Hellman CSP generate a Diffie-
Hellman public and private key pair based on the Diffie-Hellman group selected during main
mode messages 1 and 2.
2. The public portion of the Diffie-Hellman public and private key pair is returned to IKE.
3. The initiator sends the Diffie-Hellman public key to the responder in a Key Exchange
payload (main mode message 3).
4. The responder sends its Diffie-Hellman public key to the initiator in a Key Exchange
payload (main mode message 4).
5. On each IPSec peer, the IKE component sends the other IPSec peer's Diffie-Hellman
public key to the Diffie-Hellman CSP.
6. The Diffie-Hellman CSP computes the shared secret and returns the value to the IKE
component.
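The exchange in steps 1 through 6 can be illustrated with a toy Diffie-Hellman computation. The tiny prime and fixed private keys below are for readability only; real IKE negotiations use the 768-, 1024-, or 2048-bit group primes and random private keys:

```python
# Toy Diffie-Hellman exchange mirroring steps 1-6 above, with a tiny
# prime for readability. Real IKE groups use 768-, 1024-, or 2048-bit
# primes (Groups 1, 2, and 14); these numbers are illustrative only.

p, g = 23, 5                      # group parameters: prime modulus, generator

# Steps 1-2: each peer generates a private key and derives its public key.
init_priv, resp_priv = 6, 15      # private keys (random in practice)
init_pub = pow(g, init_priv, p)   # initiator public key, sent in message 3
resp_pub = pow(g, resp_priv, p)   # responder public key, sent in message 4

# Steps 5-6: each peer combines its own private key with the other peer's
# public key; both arrive at the same shared secret without ever
# transmitting it on the network.
init_shared = pow(resp_pub, init_priv, p)
resp_shared = pow(init_pub, resp_priv, p)

print(init_shared == resp_shared)   # → True
```

The security of the real exchange rests on the difficulty of recovering a private key from the public values modulo a large prime, which is why the longer Group 14 prime yields stronger keys.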
Part Three: Authentication
IKE supports three methods of authentication:
Kerberos v5
Public key certificate
Preshared key
The authentication that occurs is a computer-based authentication, also known as machine-
based authentication. The authentication process verifies only the identity of the computers, not
the individual using the computer when the authentication process occurs.
Kerberos v5 authentication Kerberos v5 authentication is the default authentication standard
in Windows Server 2003 and Windows 2000 domains. Any computer in the domain or a trusted
domain can use this method of authentication.
Note
In Windows Server 2003, the Kerberos protocol is no longer a default exemption to IPSec
filtering. Therefore, if you want to use Kerberos authentication, you must create filters in the
IPSec policy that explicitly allow Kerberos traffic.
Windows IKE Kerberos authentication is based on the Generic Security Service (GSS) API IKE
authentication method, which is described in the Internet draft entitled A GSS-API Authentication
Method for IKE.
The following table lists the ISAKMP messages exchanged during a Kerberos authentication
main mode negotiation.
Kerberos Authentication Method Main Mode Messages
Main Mode Message   Sender      Payload
1                   Initiator   ISAKMP header, Security Association (contains proposals)
2                   Responder   ISAKMP header, Security Association (contains a selected proposal)
3                   Initiator   ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce, Initiator Kerberos Token
4                   Responder   ISAKMP header, Key Exchange, Nonce, Responder Kerberos Token
5*                  Initiator   ISAKMP header, Identification, Initiator Hash
6*                  Responder   ISAKMP header, Identification, Responder Hash

* ISAKMP payloads of message are encrypted.


Authentication occurs when:
Each peer authenticates the other peer's Kerberos token: The responder verifies the
initiator's Kerberos token, and the initiator verifies the responder's Kerberos token.
Each peer's hash is calculated and verified: The responder verifies the initiator's hash, and
the initiator verifies the responder's hash.
The hash calculation is performed over the following:
The Diffie-Hellman public values of the initiator and responder
The initiator and responder cookies
The ISAKMP payloads of message 2
The identity name string for the IPSec peer
The IPSec peer's Kerberos token
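As a rough sketch, the hash inputs listed above can be concatenated and digested as follows. SHA-1 stands in for the negotiated integrity algorithm, and all byte strings are hypothetical placeholders rather than real ISAKMP payloads:

```python
import hashlib

# Sketch of the main mode hash computation over the inputs listed above.
# The real computation uses the negotiated integrity algorithm keyed with
# material from the Diffie-Hellman exchange; plain SHA-1 and the literal
# byte values below are illustrative placeholders only.

def mainmode_hash(dh_pub_i, dh_pub_r, cookie_i, cookie_r,
                  msg2_payloads, identity, kerberos_token):
    h = hashlib.sha1()
    # Concatenate the fields in a fixed order so both peers can
    # recompute and verify the same digest.
    for part in (dh_pub_i, dh_pub_r, cookie_i, cookie_r,
                 msg2_payloads, identity, kerberos_token):
        h.update(part)
    return h.hexdigest()

digest = mainmode_hash(b"dh-pub-initiator", b"dh-pub-responder",
                       b"cookie-i", b"cookie-r",
                       b"msg2-sa-payload", b"host.example.com",
                       b"krb-token")
print(len(digest))   # → 40  (SHA-1 hex digest length)
```

Because every input is covered by the digest, a peer that tampers with any of them (for example, substituting its own Diffie-Hellman public value) produces a hash that fails verification.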
To perform Kerberos authentication, Windows Server 2003 uses a Kerberos SSP that is
accessible through the SSPI. The following figure shows the exchange of Kerberos tokens
between the IKE peers and the relationship between the IKE, SSPI, and the Kerberos SSP.
IKE Authentication with Kerberos

Kerberos authentication is performed in the following steps:


1. IKE on the initiator accesses the SSPI to initialize a security context.
2. The initiator's Kerberos SSP creates a Kerberos token and returns it to the IKE
component.
3. The initiator sends the initiator's Kerberos token and computer identity to the responder in
main mode message 3.
4. The responder IKE component accesses the SSPI to acquire a security context.
5. The responder's Kerberos SSP creates a Kerberos token and returns it to IKE.
6. The responder sends the responder's Kerberos token and computer identity to the sender
in main mode message 4.
7. On each IPSec peer, IKE creates the hash for the next main mode message to be sent
and then requests that SSPI sign the hash with the Kerberos session key.
8. The Kerberos SSP returns the signature to the IKE component.
9. The initiator sends the signed initiator hash to the responder in main mode message 5.
10. The responder sends the signed responder hash to the initiator in main mode message 6.
11. On each IPSec peer, IKE accesses the SSPI to compute the hash for the other peer. The
initiator accesses the SSPI to compute the responder hash. The responder accesses the
SSPI to compute the initiator hash.
12. The Kerberos SSP returns the computed hash to the IKE component, where the hash
value is verified to complete IKE authentication. The initiator compares its calculated
responder hash with the responder hash received in main mode message 6. The responder
compares its calculated initiator hash with the initiator hash received in main mode message
5.
The following sections describe the IKE certificate selection and acceptance process. If you
decide to use certificates for IKE authentication, understanding this process and its requirements
is integral to ensuring proper deployment.
Public key certificate authentication Windows IKE performs public key certificate
authentication during main mode in compliance with RFC 2409. IKE uses CryptoAPI to retrieve
the computer certificate, verify peer certificates and certificate chains, check certificate
revocation, and create and verify digital signatures. All certificate, certificate chain, and signature
information is exchanged in main mode messages, as shown in the following table.
Certificate-based IKE Authentication Main Mode Messages
Main Mode Message   Sender      Payload
1                   Initiator   ISAKMP header, Security Association (contains proposals)
2                   Responder   ISAKMP header, Security Association (contains a selected proposal)
3                   Initiator   ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce
4                   Responder   ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce, Certificate Request
5*                  Initiator   ISAKMP header, Identification, Certificate, Certificate Request, Signature
6*                  Responder   ISAKMP header, Identification, Certificate, Signature

* ISAKMP payloads of message are encrypted.


IKE certificate selection process
When IKE negotiates to use certificates for authentication, the following process is used to select
a computer certificate:
1. The list of trusted roots is prepared. This is the list of the CA root names provided by the
peer in the Certificate Request Payloads (CRPs), and it matches the CA root names
configured in the list of trusted roots in the appropriate authentication method of the IPSec
policy. If there are no matching CA root names, all trusted CA root names from the
appropriate authentication method are used.
2. IKE searches the computer store for an IPSec certificate that chains to any of the trusted
CA roots identified in step 1. An IPSec certificate contains an Enhanced Key Usage (EKU)
attribute with a value equal to the IP security IKE intermediate object identifier (OID)
1.3.6.1.5.5.8.2.2.
3. For each certificate chain found, checks are performed to verify the following:
o The certificate chain does not have any trust errors.
o The certificate chain is not a root-only chain.
o The computer certificate has a private key.
o The computer certificate has an RSA type public/private key pair.
o The computer certificate has a public key length that is greater than 512 bits.
o The computer certificate has a Digital Signature key usage.
o The computer certificate is not a CA signing certificate that is used to issue
certificates.
o The certificate chain passes certificate revocation list (CRL) checking, which is
performed by default or if the value of the StrongCRLCheck registry subkey is set to 1
or 2. For more information about CRL checking, see IPSec CRL checking later in this
section.
If all of these checks succeed, IKE selects the certificate chain to be sent to the IPSec peer. If
any of these checks fails, IKE continues to search for another IPSec type certificate, using
the same list of root CA names.
4. If a valid computer certificate chain is not located, IKE retries the process, from step 2.
Although it uses the same list of root CA names, IKE does not search for an IPSec type
certificate.
5. If a valid computer certificate chain is still not found and if the list of root CA names in
step 1 is a subset of the names allowed by the local IPSec policy, IKE retries, from step 2.
This time, IKE uses the entire list of root CA names allowed by the local authentication
method.
This step is required for successful authentication when cross-certificates are used to
establish trust relationships.
6. After IKE selects a computer certificate, it includes all intermediate certificates in the
chain up to the root, except for the root CA certificate. A certificate chain in PKCS#7 format is
then sent to the IPSec peer. If there are no intermediate CAs, only the computer certificate is
sent.
If a computer certificate cannot be selected, the authentication fails.
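The per-chain checks in the selection process can be summarized as a single predicate. The dictionary fields below are illustrative stand-ins for the CryptoAPI certificate properties, not actual API names:

```python
# Sketch of the per-chain certificate checks described above as one
# predicate. A certificate is modeled as a plain dict; the field names
# are illustrative, not CryptoAPI structures.

def chain_acceptable(cert):
    checks = (
        not cert["trust_errors"],             # chain has no trust errors
        not cert["root_only_chain"],          # not a root-only chain
        cert["has_private_key"],              # private key is present
        cert["key_type"] == "RSA",            # RSA type key pair
        cert["key_bits"] > 512,               # public key longer than 512 bits
        "digitalSignature" in cert["key_usage"],
        not cert["is_ca_signing_cert"],       # not a CA signing certificate
        cert["crl_check_passed"],             # CRL checking succeeded
    )
    return all(checks)

cert = {
    "trust_errors": False, "root_only_chain": False,
    "has_private_key": True, "key_type": "RSA", "key_bits": 1024,
    "key_usage": ["digitalSignature"], "is_ca_signing_cert": False,
    "crl_check_passed": True,
}
print(chain_acceptable(cert))                       # → True
print(chain_acceptable({**cert, "key_bits": 512}))  # → False
```

If the predicate fails for one chain, IKE moves on to the next candidate certificate rather than failing immediately, which matches the retry behavior in steps that follow.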
Note
o If IKE negotiates with another computer running Windows Server 2003, or with
other Microsoft IKE implementations that use IPSec NAT-T (such as Microsoft
L2TP/IPSec VPN Client), a special method to avoid fragmentation of ISAKMP UDP
packets might be implemented. Otherwise, the ISAKMP message that contains the
certificate chain will likely be fragmented as the packet is transmitted.

4.8.5 IKE certificate acceptance process


1. IKE receives the peer's certificates or certificate chains and verifies that the peer's
certificates chain up to any of the root CAs in the appropriate authentication method of the
local IPSec policy.
2. For each peer certificate chain, checks are performed to verify that:
o The computer certificate Subject Name or Subject AltName is consistent with the
peer's ID field passed in the IKE negotiation.
o The computer certificate chain does not have any trust errors. If there is a trust
error, the peer authentication fails.
3. If the two checks in step 2 succeed, checks are performed for the peer certificate chain to
verify that:
o The certificate chain passes CRL checking, which is performed by default or if
the value of the StrongCRLCheck registry subkey is set to 1 or 2.
o The computer certificate has an RSA type public/private key pair.
o The computer certificate has a public key length that is greater than 512 bits.
o The computer certificate has a Digital Signature key usage.
If any of these checks fails, the peer authentication fails.
4. If certificate-to-account mapping is enabled in the IPSec policy for the certificate root CA
of the peer, IKE calls the Windows secure channel (Schannel) APIs to perform the mapping.
Schannel completes the mapping and builds an access token for the computer account. This
access token is automatically evaluated against the Access this computer from the
network or the Deny this computer access from the network logon right defined in Group
Policy Security settings.
If the logon right evaluation fails, the peer authentication fails.

4.8.6 IPSec CRL checking


If you use certificate-based authentication, you can also enable IPSec certificate revocation list
(CRL) checking. By default, in Windows XP and Windows Server 2003, IPSec CRLs are
automatically checked during IKE certificate authentication, but a fully successful CRL check is
not required for the certificate to be accepted. However, if enhanced security is required, a fully
successful CRL check is also required. CRL checking can cause delays in authentication or
unnecessary failures, and some non-Microsoft PKI systems might not support it. You can disable
IPSec CRL checking or specify a stronger level of IPSec CRL checking by using the Netsh IPSec
context or by modifying the registry.
To disable IPSec CRL checking or specify a different level of IPSec CRL checking, use the
following command:
netsh ipsec dynamic set config strongcrlcheck value={0 | 1 | 2}
To enable IPSec CRL checking through the registry
Caution
Incorrectly editing the registry might severely damage your system. Before making
changes to the registry, you should back up any valued data on the computer.
1. Under the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\PolicyAgent\ key, add a
new Oakley subkey, with a DWORD entry named StrongCRLCheck.
2. Assign this entry any value from 0 through 2, where:
o A value of 0 disables CRL checking (this is the default for Windows 2000).
o A value of 1 causes CRL checking to be attempted and certificate validation to
fail only if the certificate is revoked (this is the default for Windows XP and Windows
Server 2003). Other failures that are encountered during CRL checking (such as the
revocation URL not being reachable) do not cause certificate validation to fail.
o A value of 2 enables strong CRL checking, which means that CRL checking is
required and that certificate validation fails if any error is encountered during CRL
processing. Set this registry value for enhanced security.
3. Do one of the following:
o Restart the computer.
o Stop and then restart the IPSec service by running the net stop policyagent and
net start policyagent commands at the command prompt.
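The effect of the three StrongCRLCheck values can be sketched as a decision function. The numeric levels come from the registry procedure above; the "crl_result" states are illustrative labels, not Windows API values:

```python
# Sketch of how the StrongCRLCheck registry value affects certificate
# validation. crl_result is one of "ok", "revoked", or "unavailable"
# (e.g., the revocation URL could not be reached); these labels are
# illustrative, not Windows API values.

def cert_validates(strong_crl_check, crl_result):
    if strong_crl_check == 0:          # CRL checking disabled (Windows 2000 default)
        return True
    if strong_crl_check == 1:          # Windows XP / Server 2003 default:
        return crl_result != "revoked" # only a revoked certificate fails
    if strong_crl_check == 2:          # strong checking: any CRL error fails
        return crl_result == "ok"
    raise ValueError("StrongCRLCheck must be 0, 1, or 2")

print(cert_validates(1, "unavailable"))  # → True  (retrieval error ignored)
print(cert_validates(2, "unavailable"))  # → False (strict mode)
```

Level 1 is the pragmatic default because an unreachable CRL distribution point is common; level 2 trades availability for the guarantee that an unverifiable certificate is never accepted.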
Note that IPSec CRL checking does not guarantee that certificate validation fails immediately
when a certificate is revoked. There is a delay between the time that the revoked certificate is
placed on an updated and published CRL and the time when the computer that performs the
IPSec CRL checking retrieves this CRL. The computer does not retrieve a new CRL until the
current CRL has expired or until the next time the CRL is published. By default, IKE requests that
CryptoAPI wait 15 seconds to complete the CRL retrieval. If the CRL cannot be retrieved at that
time, IKE either ignores the error (if the value of the StrongCRLCheck registry subkey is set to 1)
or causes authentication to fail (if the value of StrongCRLCheck is set to 2). CRLs are cached
in memory and in \Documents and Settings\UserName\Local Settings\Temporary Internet
Files by CryptoAPI. Because CRLs persist across computer restarts, if a CRL cache problem
occurs, restarting the computer does not resolve the problem.

Excluding the CA name from certificate requests
If you use certificate authentication to establish trust between IPSec peers, you can also use
Windows Server 2003 to exclude CA names from certificate requests. Excluding the CA name
prevents a malicious user from learning sensitive information about the trust relationships of a

computer, such as the name of the company that owns the computer and the domain
membership of the computer (if an internal PKI is being used). Although excluding the CA name
from certificate requests enhances security, computers with multiple certificates from different
roots might require the CA root names to select the correct certificate. Also, some non-Microsoft
IKE implementations might not respond to a certificate request that does not include a CA name.
For these reasons, excluding the CA name from certificate requests might cause IKE certificate
authentication to fail in certain cases.

4.8.7 Certificate-to-account mapping


In Windows Server 2003, a specific group of computers can be authorized to use IPSec when
either Kerberos v5 or certificates are used for IKE authentication. This capability enables much
stronger peer authentication and allows IPSec to be used to restrict network access to a server.
When you enable IPSec certificate-to-account mapping, the IKE protocol associates (that is,
maps) a computer certificate to a computer account in an Active Directory domain or forest, and
then retrieves an access token, which includes the list of computer security groups. This process
ensures that the certificate offered by the IPSec peer corresponds to an active computer account
in the domain, and that the certificate is one that should be used by that computer.
Certificate-to-account mapping can be used only for computer accounts that are in the same
forest as the computer performing the mapping. This provides much stronger authentication than
simply accepting any valid certificate chain. For example, you can use this capability to restrict
access to computers that are within the same forest. Certificate-to-account mapping, however,
does not ensure that a specific trusted computer is allowed IPSec access.
If the certificate-to-account mapping process is not completed properly, authentication will fail and
IPSec-protected connections will be blocked.
Preshared key authentication Preshared key authentication requires that each IKE peer use a
predefined and shared key to authenticate the IKE exchange.
The following table describes the main mode messages for preshared key authentication.
Preshared Key Authentication Main Mode Messages
Main Mode Message   Sender      Payload
1                   Initiator   ISAKMP header, Security Association (contains proposals)
2                   Responder   ISAKMP header, Security Association (contains a selected proposal)
3                   Initiator   ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce
4                   Responder   ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce
5*                  Initiator   ISAKMP header, Identification, Initiator Hash
6*                  Responder   ISAKMP header, Identification, Responder Hash

* ISAKMP payloads of message are encrypted.


Messages 5 and 6 contain an initiator and responder hash calculated with the preshared key.
Each IPSec peer authenticates the other peer's packet by decrypting and verifying the hash
inside the packet (which is a hash of the preshared key).
Important
Preshared keys are easily implemented but can be compromised if they are not used
correctly. Microsoft does not recommend the use of preshared key authentication because
the key value is not securely stored, and it is, therefore, difficult to keep secret. Preshared
key authentication is provided for interoperability purposes and compliance with RFC
standards.

4.8.8 Quick Mode Negotiation


When main mode negotiation completes or an existing quick mode SA expires, IKE begins quick
mode negotiation. During quick mode negotiation, IKE queries the Policy Agent for information
required to perform the appropriate filter actions, including whether the IPSec mode is tunnel or
transport, whether the protocol is ESP or AH or both, and which encryption and hashing
algorithms are proposed or accepted.

4.8.9 Quick Mode SA negotiation


The quick mode negotiation process is implemented as defined in RFC 2409. All quick mode
negotiation messages are protected with the main mode SA that was established during the main
mode negotiation. Each successful quick mode negotiation establishes two quick mode SAs. One
SA is inbound and the other SA is outbound.
The following table lists the quick mode messages exchanged by two IPSec peers running
Windows IPSec.
Quick Mode Messages
Quick Mode Message   Sender      Payload
1*                   Initiator   ISAKMP header, Security Association (contains proposals and secure traffic description)
2*                   Responder   ISAKMP header, Security Association (contains a selected proposal)
3*                   Initiator   ISAKMP header, Hash
4*                   Responder   ISAKMP header, Notification

* ISAKMP payloads of message are encrypted.


The four quick mode messages contain the following payloads:
Security Association. This payload contains a list of proposals and encryption and
hashing algorithms (AH or ESP, DES or 3DES, and HMAC-MD5 or HMAC-SHA1) for
securing the traffic and a description of the traffic that is protected. This description might
include IP addresses, IP protocols, TCP ports, or UDP ports, and is based on the matching
filter of the initiator. The Security Association in the second quick-mode message includes a
Security Association payload that contains the chosen method of securing the traffic.
Hash. This payload provides verification and replay protection.
Notification. This payload has a connected notify message. This message is requested
and sent between two IPSec peers running Windows Server 2003. Quick mode message 4
with the Notification payload is not required by the IKE standard and is used to prevent the
initiator from sending IPSec-protected packets to the responder before the responder is
ready to receive them.
Windows Server 2003 IPSec supports the filter action choices listed in the following table.
Filter Action Choices
Filter Action Choice       ESP Encryption/Integrity Algorithm                 AH
Encryption and integrity   3DES/HMAC-SHA1                                     None
Integrity only             None/HMAC-SHA1                                     None
Custom                     DES, 3DES, or none/HMAC-MD5, HMAC-SHA1, or none    HMAC-MD5 or HMAC-SHA1

Note
Computers running Windows Server 2003 and Windows XP support the 3DES and DES
algorithms and do not require installation of additional components. However, computers
running Windows 2000 must have the High Encryption Pack or Service Pack 2 (or later)
installed in order to use 3DES. If a computer running Windows 2000 is assigned a policy that
uses 3DES encryption, but does not have the High Encryption Pack or Service Pack 2 (or
later) installed, the security method defaults to the weaker DES algorithm.

4.8.10 Generating and regenerating session key material


IKE generates session keys for both the inbound and outbound quick mode SAs based on the
main mode shared master key and nonce material exchanged during the quick mode negotiation.
Additionally, Diffie-Hellman key exchange material can also be exchanged and used to enhance
the cryptographic strength of the IPSec session key.
Key Protection
The following features enhance the base prime numbers (keying material) and the strength of the
keys for master and session keys.
Key Lifetimes
Key lifetimes control when a new key is generated. A key lifetime allows you to force automatic
key regeneration after a specific interval of either kilobytes (KB) or seconds, whichever occurs
first. This ensures that even if an attacker is able to decipher part of a communication protected
by one key, new keys protect the remainder of the communication. Whenever a key lifetime is
reached, the SA is also renegotiated and the key is refreshed or regenerated. For this reason,
key lifetimes are also referred to as SA lifetimes. You can specify key lifetimes for the master key
and for session keys.
The master key lifetime corresponds to the main mode SA lifetime created by the IKE main mode
negotiation. It is configured in terms of time and number of quick mode negotiations and it applies
to all security rules in the IPSec policy. The main mode SA and master key typically have a long
lifetime (the default value is eight hours). Master keys are much more resource-intensive to
generate than session keys because they require reauthentication and additional Diffie-Hellman
exchanges.
The session keys correspond to the quick mode SAs that are used to protect program traffic. The
session keys and quick mode SAs are quickly derived from the master key by IKE quick mode
negotiation. Session keys are used to protect data and they have lifetimes based on the amount
of data sent and the amount of time elapsed since the key started being used. Typically, session
keys have shorter lifetimes than master keys. The default values are 100,000 KB (approximately
100 megabytes) or one hour.
When session keys are refreshed, new quick mode SAs replace the old ones. Quick mode SA
session keys must be refreshed before either the data or time lifetime expires; otherwise, traffic is
discarded. A session key will be deleted if the quick mode SAs become idle (by default, SAs
become idle after five minutes). Because the master key lives longer than the session key, it
allows new quick mode SAs and session keys to be established quickly. To prevent data loss, the
quick mode SA session key is generated shortly before the main mode SA expires. The main
mode SA and corresponding master key are not deleted when session keys and quick mode SAs
are deleted, unless the specified number of quick mode SA negotiations has elapsed.
For example, if a communication takes one hour (3,600 seconds) and if you specify the minimum
session key lifetime of five minutes (300 seconds), more than 12 keys are generated to complete
the communication. If a 100 MB file is transferred over a fast corporate LAN using an IPSec
security association with a 100 MB and one hour lifetime, at least one, if not two, session rekeys
occur.
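The rekey arithmetic in the example above can be sketched as follows, treating a session key as exhausted when either its data or time lifetime runs out. This gives a lower bound: real IKE rekeys shortly before expiry, so the actual count can be slightly higher:

```python
import math

# Sketch of the session key rekey arithmetic described above: a new
# session key is needed whenever either the data lifetime (KB) or the
# time lifetime (seconds) is exhausted, whichever comes first.

def session_keys_needed(total_kb, total_seconds, life_kb, life_seconds):
    by_data = math.ceil(total_kb / life_kb)          # rekeys forced by data volume
    by_time = math.ceil(total_seconds / life_seconds)  # rekeys forced by elapsed time
    return max(by_data, by_time)                     # the tighter limit dominates

# One-hour communication with a 300-second session key lifetime and a
# data limit that is effectively never reached: at least 12 keys.
print(session_keys_needed(1, 3600, 100_000, 300))      # → 12

# 100 MB transfer over a fast LAN, 100,000 KB / one-hour lifetime:
# at least one key (a second if the transfer spills past either limit).
print(session_keys_needed(100_000, 60, 100_000, 3600))  # → 1
```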
Session Key Refresh Limit
Repeated rekeying from a session key can compromise the Diffie-Hellman shared secret.
Consequently, you might want to limit the number of quick mode session keys that can be derived
from a main mode negotiation.
If you have enabled master key PFS, the session key limit is disregarded. Setting a session key
limit to 1 is identical to enabling master key PFS. If both a master key lifetime and a session limit
are specified, whichever limit is reached first causes a new main mode negotiation. By default,
IPSec policy does not specify a session limit.
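The interaction between master key PFS, the session key limit, and the master key lifetime described above can be sketched as follows (a simplified model, not the actual IKE state machine):

```python
# Sketch of when a new main mode negotiation is forced, per the rules
# above: master key PFS overrides the session key limit and allows only
# one quick mode SA per main mode SA; otherwise, whichever of the master
# key lifetime or the session limit is reached first triggers
# renegotiation. This is a simplified model of the described behavior.

def needs_main_mode_renegotiation(master_key_pfs, quick_modes_done,
                                  session_limit, master_lifetime_expired):
    if master_key_pfs:
        return quick_modes_done >= 1      # one quick mode SA per main mode SA
    if master_lifetime_expired:
        return True                       # master key lifetime reached
    if session_limit is not None:         # no session limit is the default
        return quick_modes_done >= session_limit
    return False

print(needs_main_mode_renegotiation(True, 1, None, False))   # → True
print(needs_main_mode_renegotiation(False, 5, None, False))  # → False
print(needs_main_mode_renegotiation(False, 1, 1, False))     # → True
```

The last call illustrates the equivalence noted above: a session limit of 1 behaves the same as enabling master key PFS.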
Diffie-Hellman Groups
Diffie-Hellman groups are used to determine the length of the base prime numbers (key material)
for the Diffie-Hellman exchange. The cryptographic strength of any key derived from a Diffie-
Hellman exchange depends, in part, on the strength of the Diffie-Hellman group on which the
prime numbers are based. When a stronger group is used, the key that is derived from a Diffie-
Hellman exchange is stronger and more difficult for an attacker to break.
IKE negotiates which group to use, ensuring that there are not any negotiation failures that result
from a mismatched Diffie-Hellman group between the two peers.
If session key PFS is enabled, a new Diffie-Hellman key is negotiated during the first quick mode
SA negotiation. This new key removes the dependency of the session key on the Diffie-Hellman
exchange that is performed for the master key.
Both the initiator and responder must have session key PFS enabled, or negotiation fails.
The Diffie-Hellman group is the same for both the main mode and quick mode SA negotiations.
When session key PFS is enabled, even though the Diffie-Hellman group is set as part of the
main mode SA negotiation, it affects any rekeys during session key establishment.
Perfect Forward Secrecy
Unlike key lifetimes, PFS determines how a new key is generated, rather than when it is
generated. Specifically, PFS ensures that the compromise of a single key permits access only to
data that is protected by it, not necessarily to the entire communication. To achieve this, PFS
ensures that a key used to protect a transmission cannot be used to generate additional keys. In
addition, if the key that was used was derived from specific keying material, that material cannot
be used to generate other keys.
Master key PFS
In Windows Server 2003 IPSec, you can configure the number of times quick mode SAs can be
created based on a single main mode SA. If you enable master key PFS, IKE allows only a
single quick mode SA for each main mode SA. By default, master key PFS is disabled, so there is
no limit to the number of quick mode SAs that can be created from one main mode SA. To derive
a new quick mode SA, a new main mode negotiation is performed, which includes a new Diffie-
Hellman exchange and a new authentication process.
Session key PFS
Whenever a quick mode SA requires renegotiation, IKE determines whether a session key PFS is
specified in the corresponding filter rule. If it is, IKE additionally generates a new Diffie-Hellman
key and exchanges it with the IKE peer during quick mode negotiation. By performing another
Diffie-Hellman key exchange, IKE provides additional cryptographic strength to quick mode key
generation beyond that already contributed by the main mode SA. Performing additional Diffie-
Hellman exchanges requires additional computational resources and might affect IPSec
performance.

4.9 IPSec Driver Processes


The IPSec driver does not participate in IP packet processing until the first time the Policy Agent
informs the driver that there is an active IPSec policy. If no IPSec policy is active, the IPSec driver
does not participate in inbound and outbound IP traffic processing.

4.9.1 IPSec Driver Responsibilities


The IPSec driver is responsible for the following:
Maintaining the SAD and SPD.
Checking each IP packet to determine whether it matches a policy filter. When a match is
found and an SA must be created, the IPSec driver invokes the IKE module. After the IKE
module completes its negotiations, an SA is returned to the IPSec driver and stored in the
SAD.
Implementing the IPSec policy as specified in the SA (for example, using a specific
hashing method for outbound packets, verifying the integrity of inbound packets, and using a
specific method for encrypting and decrypting).

Tracking the length of time a specific key is in use and requesting a new key from IKE as
necessary.
Tracking the number of bytes that have been transformed (that is, hashed or encrypted)
for each SA and requesting a new key from the IKE module if the byte count allowed by the
SA is exceeded.
For each secured inbound packet that contains an AH or ESP header, parsing the packet
for the SPI to determine the SA.
For each non-secured inbound packet, checking the filter list in the SPD to determine
whether the packet is permitted or discarded:
The filter list can contain an inbound permit filter if the corresponding filter action is set to
Permit or Negotiate security and either the Accept unsecured communication, but
always respond with IPSec or Allow unsecured communication with non IPSec-aware
computer options are enabled. The IPSec driver sends unmodified permitted packets to the
TCP/IP driver for additional processing.
The packet is discarded either because the filter action is set to Block or the filter action is
set to Negotiate security and unsecured communications are not allowed.
Handling hardware offload of cryptographic functions by skipping cryptographic
processing on packets processed by offload network adapters and managing offloaded SAs.
Handling network layer issues such as path maximum transmission unit (PMTU)
discovery.
Creating the SPI that the responder uses to identify the appropriate SA for an inbound
packet.
Deleting expired SAs.
Providing the implementation of the AH and ESP protocols.
For outbound traffic that must be secured; the IPSec driver, based on the parameters of the
SA, calculates and places the AH or ESP or both headers and trailer on the IP packet before
sending it to the TCP/IP driver. For inbound traffic that contains an AH or ESP header, the
IPSec driver processes the header and, if it is valid, sends the authenticated and decrypted
packet without the AH or ESP headers and trailer back to the TCP/IP driver.
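The driver's work centers on two stores: the SPD of policy filters and the SAD of negotiated SAs keyed by SPI. The following Python sketch is a deliberately simplified illustration of these two structures; all class and field names are invented for this example and are not the actual driver interfaces.

```python
# Hypothetical sketch of the two databases the IPSec driver maintains:
# the Security Policy Database (SPD) of filters and the Security
# Association Database (SAD) of negotiated SAs, keyed by SPI.

class SecurityPolicyDatabase:
    def __init__(self):
        self.filters = []          # list of (match_fn, action) pairs

    def add_filter(self, match_fn, action):
        self.filters.append((match_fn, action))

    def lookup(self, packet):
        """Return the action of the first matching filter, else None."""
        for match_fn, action in self.filters:
            if match_fn(packet):
                return action
        return None                # no match: packet passes through unmodified

class SecurityAssociationDatabase:
    def __init__(self):
        self.sas = {}              # SPI -> SA parameters

    def add_sa(self, spi, sa):
        self.sas[spi] = sa

    def find_by_spi(self, spi):
        return self.sas.get(spi)   # None models the "Bad SPI" case

# Example: block telnet, negotiate security for traffic to one peer.
spd = SecurityPolicyDatabase()
spd.add_filter(lambda p: p["dst_port"] == 23, "Block")
spd.add_filter(lambda p: p["dst"] == "10.0.0.5", "Negotiate")

sad = SecurityAssociationDatabase()
sad.add_sa(0x1A2B3C4D, {"protocol": "ESP", "cipher": "3DES"})

print(spd.lookup({"dst": "10.0.0.5", "dst_port": 23}))   # Block
print(spd.lookup({"dst": "10.0.0.5", "dst_port": 80}))   # Negotiate
print(sad.find_by_spi(0x1A2B3C4D)["protocol"])           # ESP
```

Note how a failed SAD lookup returns None: this is the condition that, in the real driver, produces the Bad SPI event described above.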

4.9.2 IPSec Driver Communication


The following types of communications occur between the Policy Agent, IPSec driver, and IKE module:
- The Policy Agent adds a set of filters to the IPSec driver. Each filter is accompanied by a policy identifier (a GUID) and an index. The index indicates to IPSec what weight to assign to the filter. A lower index indicates higher precedence and higher weight.
- When the Policy Agent deletes a set of filters, it deletes all associated outbound SAs first and then allows the inbound SAs to expire.
- The Policy Agent can receive a set of usage statistics from the IPSec driver. These statistics include the number of packets sent and received for both AH and ESP protocols, the number of SAs, the number of rekeys, and the number of bad packets.
- The IKE module adds SAs as the result of successful negotiation of keys.
- The IKE module can expire a specified SA.
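The index-and-weight rule above (lower index wins) can be sketched as a simple ordered lookup. This is an illustrative model only; the filter tuples and the "PassThrough" result are invented names, not driver behavior guarantees.

```python
# Hypothetical sketch: filters delivered by the Policy Agent carry an
# index, and a LOWER index means HIGHER precedence (more weight), so
# matching tries the lowest-index filter first.

def first_match(filters, packet):
    # filters: list of (index, match_fn, action) tuples
    for index, match_fn, action in sorted(filters, key=lambda f: f[0]):
        if match_fn(packet):
            return action
    return "PassThrough"    # no filter matched

filters = [
    (10, lambda p: True, "Negotiate"),               # catch-all, lower weight
    (1,  lambda p: p["dst_port"] == 500, "Permit"),  # specific, higher weight
]

print(first_match(filters, {"dst_port": 500}))  # Permit (index 1 wins)
print(first_match(filters, {"dst_port": 80}))   # Negotiate (catch-all)
```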
Packet Processing
The IPSec driver receives the active IP filter list from the Policy Agent, as shown in the following
illustration, and then attempts to match every inbound and outbound packet against the filters in
the list.
IPSec Driver Matching an IP Filter List


When a packet matches a filter, the IPSec driver applies the filter action. When a packet does not
match any filters, the IPSec driver passes the packet back without modification to the TCP/IP
driver to be received or transmitted.
If the filter action permits transmission, the packet is received or sent with no modifications. If the
action blocks transmission, the packet is discarded. If the action requires the negotiation of
security, main mode and quick mode SAs are negotiated.
The negotiated quick mode SA and keys are used with both outbound and inbound processing.
The IPSec driver stores all current quick mode SAs in a database. The IPSec driver uses the SPI
field to match the correct SA with the correct packet.
When an outbound IP packet matches the IP filter list with an action to negotiate security, the
IPSec driver queues the packet and then notifies IKE, which begins security negotiations with the
destination IP address of that packet. If several outbound packets are going to the same
destination and match the same filter before IKE has finished the negotiation, then only the last
packet sent is saved.
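The queue-one-packet behavior described above can be modeled as a dictionary keyed by (filter, destination), where each new packet simply overwrites the previous one. The names here are illustrative, not driver internals.

```python
# Hypothetical sketch: while IKE is still negotiating for a destination,
# outbound packets that match the same filter replace one another, so
# only the LAST packet is held for transmission after the SA is ready.

pending = {}   # (filter_id, destination) -> last queued packet

def queue_for_negotiation(filter_id, dest, packet):
    pending[(filter_id, dest)] = packet   # overwrites any earlier packet

queue_for_negotiation("f1", "10.0.0.5", "packet-1")
queue_for_negotiation("f1", "10.0.0.5", "packet-2")
queue_for_negotiation("f1", "10.0.0.5", "packet-3")

print(pending[("f1", "10.0.0.5")])   # packet-3: earlier packets were dropped
```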
The following sections describe the basic inbound packet and outbound packet processing that
the IPSec driver performs in transport mode.
Inbound packet processing
The following figure illustrates this process.
Basic Inbound Packet Process

Note: The inbound packet process applies only to local host unicast traffic (traffic with the
unicast destination address of the host) when there is an active IPSec policy.
Basic inbound packet processing for transport mode occurs in the following sequence:
1. IP packets are sent from the network interface to the TCP/IP driver.
2. The TCP/IP driver sends the IP packet to the IPSec driver.
3. If the inbound packet is IPSec-protected, the IPSec driver looks up the SA in the SAD.
4. If the inbound packet is not IPSec-protected, the IPSec driver checks the packet for a filter match by looking up the filters in the SPD.
5. After the IPSec-protected inbound packet is authenticated and decrypted, the AH or ESP or both headers are removed and the packet is sent to the TCP/IP driver. If a packet that is not IPSec-protected is permitted by policy, that packet is sent to the TCP/IP driver.
6. The TCP/IP driver performs IP packet processing as needed and sends the application data to the TCP/IP application.
Detailed inbound packet processing for transport mode occurs in the following sequence:
1. The TCP/IP driver sends the unicast packet to the IPSec driver.
2. If the packet is ISAKMP, the unmodified packet is sent back to the TCP/IP driver.
   Note: To modify the default filtering behavior for Windows Server 2003 IPSec, you can use the Netsh IPSec context or modify the registry. For more information, see Default exemptions to IPSec filtering later in this section.
3. If hardware offload processing was performed, the IPSec driver checks to determine whether the hardware processing was successful.
4. If the hardware processing was not successful, an event is logged and the packet is discarded.
5. The packet is parsed to determine whether an AH or ESP header or both are present.
6. If the packet does not contain an AH or ESP header, the packet is compared to the filter list for a match.
7. If a filter match is not found, the unmodified packet is sent to the TCP/IP driver.
8. If a filter match is found, the IPSec driver attempts to find an SA based on the packet contents.
9. If an SA is not found, the matching filter is checked to determine whether it is an inbound permit filter.
10. If the matching filter is an inbound permit filter, the unmodified packet is sent to the TCP/IP driver.
11. If the matching filter is not an inbound permit filter, the packet is discarded.
12. If an SA is found, it is checked to determine whether it is a soft SA. A soft SA is one in which the Negotiate security filter action is enabled, but no authentication or encryption is being performed because the computer with which communication occurs is not running IPSec. This process is also known as fallback to clear. Even though the packet is not being protected, an SA without an AH or ESP header is still maintained in the SAD. Soft SAs and fallback to clear are possible only when Allow unsecured communication with non IPSec-aware computer is selected on the Security methods tab in the properties of a filter action.
13. If the SA is a soft SA, the unmodified packet is sent to the TCP/IP driver.
14. If the SA is not a soft SA, the packet is discarded.
15. If the packet contains an AH or ESP header (or both), the header is parsed for the SPI.
16. The SPI is used to look up the SA in the SAD.
17. If the SA corresponding to the SPI is not found in the SAD, a Bad SPI event is logged and the packet is discarded.
18. If the SA corresponding to the SPI is found in the SAD, the current time is used to update the SA's last-used time. The time is used for aging the SA.
19. The SA is checked to determine whether cryptographic processing for the SA was offloaded to hardware. For packets that have been processed by hardware offload, steps 20 and 21 are skipped.
20. The packet is authenticated or decrypted or both. This process involves verifying the HMAC in the AH or ESP header, processing the other fields in the AH and ESP headers and trailer, and decrypting the ESP payload.
21. If cryptographic processing is unsuccessful, an event is logged and the packet is discarded.
22. The AH or ESP headers and ESP trailer are removed.
23. If the SA for this packet is a tunnel SA (using either AH or ESP tunnel mode), the decapsulated packet is reinjected into the TCP/IP driver and the original packet is discarded. By reinjecting the decapsulated packet, the TCP/IP driver treats it as if it were received from the network adapter.
24. If the SA for this packet is not a tunnel SA, the IP packet, with the AH and ESP headers removed, is sent back to the TCP/IP driver for additional processing.
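The inbound sequence condenses into a single decision function. This sketch is illustrative only: the packet dictionary fields and return strings are invented for the example, and the real driver operates on raw IP packets rather than pre-parsed flags.

```python
# Hypothetical condensation of the detailed inbound sequence into one
# decision function. Field names (isakmp, spi, filter_match, sa,
# inbound_permit) are invented conveniences for illustration.

def inbound_decision(packet, sad, soft_sas):
    if packet.get("isakmp"):
        return "pass-to-tcpip"            # ISAKMP passes through unmodified
    spi = packet.get("spi")
    if spi is not None:                   # AH/ESP header is present
        if spi not in sad:
            return "discard-bad-spi"      # Bad SPI event logged, packet dropped
        return "authenticate-decrypt"     # verify HMAC, decrypt, strip headers
    if not packet.get("filter_match"):
        return "pass-to-tcpip"            # no filter match: unmodified
    sa = packet.get("sa")
    if sa is None:                        # filter matched, but no SA exists
        return ("pass-to-tcpip" if packet.get("inbound_permit")
                else "discard")           # only inbound permit filters pass
    return ("pass-to-tcpip" if sa in soft_sas
            else "discard")               # soft SA = fallback to clear

sad = {0x1111: "sa-A"}
print(inbound_decision({"isakmp": True}, sad, set()))   # pass-to-tcpip
print(inbound_decision({"spi": 0x9999}, sad, set()))    # discard-bad-spi
print(inbound_decision({"spi": 0x1111}, sad, set()))    # authenticate-decrypt
```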
Outbound packet processing
Basic outbound packet processing is shown in the following figure.
Basic Outbound Packet Process

Basic outbound packet processing for transport mode occurs in the following sequence:
1. Application data is sent to the TCP/IP driver from the TCP/IP application.
2. The TCP/IP driver sends an IP packet to the IPSec driver.
3. The IPSec driver checks the packet for a filter match by looking up the filters in the SPD.
4. The IPSec driver checks the packet for an active SA by looking up the SAs in the SAD. Based on the SA, the traffic is authenticated or encrypted or both.
5. If the traffic must be protected and there is no SA, the IPSec driver requests that IKE create the appropriate SAs. The IP packet is then held until the SA is established and the packet can be IPSec-framed.
6. The IP packet is sent back to the TCP/IP driver.
7. The TCP/IP driver sends the IP packet to the network interface.
Detailed outbound packet processing for transport mode occurs in the following sequence:
1. The TCP/IP driver sends the unicast outbound packet to the IPSec driver.
2. If the packet is ISAKMP, the unmodified packet is sent back to the TCP/IP driver.
3. The IPSec driver attempts to find a filter that matches the packet. If a filter is not found, the unmodified packet is sent back to the TCP/IP driver.
4. If a filter match is found, the IPSec driver attempts to find an SA that matches the packet.
5. If an SA is not found, the filter action is checked. If the filter action is set to Negotiate security, the IPSec driver requests that the IKE module negotiate the appropriate SAs.
6. If the IKE negotiation is successful, the IKE module informs the IPSec driver of the new SA and the IPSec driver looks up the SA again.
7. If the IKE negotiation is not successful, the packet is discarded.
8. If the filter action is set to Permit, the unmodified packet is sent back to the TCP/IP driver. Otherwise, the packet is discarded.
9. If an SA is found in the SAD, the current time is used to update the SA's last-used time. The time is used for aging the SA.
10. The SA is checked to determine whether it is about to expire. If the SA is about to expire, the IPSec driver informs the IKE module to initiate a quick mode (Phase 2) rekey of the quick mode SA.
11. The SA is checked to determine whether it has expired. If the SA has expired, the packet is discarded.
12. The Don't Fragment (DF) flag in the IP header of the packet is checked. If the DF flag is set to 1, the size of the IP packet with the proposed AH or ESP or both headers and trailer is calculated.
13. If the size of the IP packet with the proposed IPSec overhead is larger than the path maximum transmission unit (PMTU) for the destination IP address, the IPSec driver indicates a packet-too-large condition for the packet and the unmodified packet is sent back to the TCP/IP driver. The packet-too-large condition allows the TCP/IP driver to either adjust the PMTU for the destination or, in the case of transit traffic, inform the sending host with an Internet Control Message Protocol (ICMP) Destination Unreachable-Fragmentation Needed and DF Set message that includes the new PMTU. The packet is eventually discarded by the TCP/IP driver.
14. If the DF flag is not set to 1, or if it is set to 1 and the additional IPSec overhead is not greater than the current PMTU for the destination, blank AH or ESP or both headers and trailer are constructed (based on the settings for the SA).
15. The IPSec driver checks to determine whether the hardware offload is capable of offloading the SA for this packet. If so, the IPSec driver checks to determine whether the SA for the packet was offloaded to the hardware.
16. If the SA was offloaded to the hardware, an offload status is set on the packet and the modified packet with blank AH or ESP or both headers and trailer is sent to the TCP/IP driver.
17. If the SA has not been offloaded to the hardware, the IPSec driver accesses NDIS with instructions to add the SA to the hardware offload network interface.
18. If hardware offload is not enabled or the SA has not been offloaded to the hardware, the IPSec driver performs the cryptographic processing and adds the appropriate values in the fields of the AH or ESP or both headers and trailer.
19. The IPSec driver sends the modified packet to the TCP/IP driver.
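The DF/PMTU check in the outbound path is essentially an arithmetic comparison: packet size plus IPSec overhead against the path MTU. The sketch below illustrates that comparison; the overhead constants are rough illustrative values, not exact header layouts for any particular cipher or HMAC.

```python
# Hypothetical sketch of the DF/PMTU step in outbound processing: if the
# Don't Fragment bit is set and the packet plus proposed IPSec overhead
# would exceed the path MTU, the driver signals packet-too-large instead
# of adding the headers. Overhead sizes are illustrative assumptions.

ESP_OVERHEAD = 36   # assumed: ESP header + trailer + authentication data
AH_OVERHEAD = 24    # assumed AH header size

def outbound_size_check(packet_len, df_flag, pmtu, use_ah, use_esp):
    overhead = (AH_OVERHEAD if use_ah else 0) + (ESP_OVERHEAD if use_esp else 0)
    if df_flag and packet_len + overhead > pmtu:
        # TCP/IP driver then adjusts PMTU or sends ICMP Fragmentation Needed
        return "packet-too-large"
    return "add-ipsec-headers"

print(outbound_size_check(1480, True, 1500, False, True))   # packet-too-large
print(outbound_size_check(1400, True, 1500, False, True))   # add-ipsec-headers
print(outbound_size_check(1480, False, 1500, True, True))   # add-ipsec-headers
```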

4.9.3 Default exemptions to IPSec filtering


In Windows Server 2003, the default filtering exemptions have been removed for Kerberos,
Resource Reservation Setup Protocol (RSVP), and multicast and broadcast traffic, but remain for
ISAKMP traffic, and inbound multicast and broadcast traffic.
To modify the default filtering behavior for Windows Server 2003 IPSec, you can use the Netsh
IPSec context or modify the registry.
To modify the default filtering behavior using Netsh, use the following command:
netsh ipsec dynamic set config ipsecexempt value={0 | 1 | 2 | 3}
Depending on which exemptions you want, specify the appropriate values as follows:
- A value of 0 specifies that multicast, broadcast, RSVP, Kerberos, and ISAKMP traffic are exempt from IPSec filtering. This is the default filtering behavior for Windows 2000 (with Service Pack 3 and earlier service packs) and Windows XP.
  Important: Use this setting only if it is required for compatibility with Windows 2000 and Windows XP. If Kerberos traffic is exempted from filtering, an attacker can bypass other IPSec filters by using either UDP or TCP source port 88 to access any open port. Many port scan tools will not detect this because these tools do not allow setting the source port to 88 when checking for open ports.
- A value of 1 specifies that Kerberos and RSVP traffic are not exempt from IPSec filtering (multicast, broadcast, and ISAKMP traffic are exempt).
- A value of 2 specifies that multicast and broadcast traffic are not exempt from IPSec filtering (RSVP, Kerberos, and ISAKMP traffic are exempt).
- A value of 3 specifies that only ISAKMP traffic is exempt from IPSec filtering. This is the default filtering behavior for Windows Server 2003, Windows 2000 (with Service Pack 4 and later service packs), and Windows XP (with Service Pack 1 and later service packs).
If you change the value for this setting, you must restart the computer for the new value to take
effect.
To modify the default filtering behavior by using the registry:
1. In Regedit, under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IPSEC key, add a new DWORD entry named NoDefaultExempt.
2. Assign this entry any value from 0 through 3.
3. Restart the computer.
The filtering behaviors for each value are equivalent to those noted above for the netsh ipsec
dynamic set config ipsecexempt value=x command.
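The four exemption values form a simple lookup table. The sketch below just encodes that table for quick reference; the set names are informal labels for the traffic classes, not configuration keywords.

```python
# Sketch encoding the NoDefaultExempt / ipsecexempt value table: each
# value maps to the set of traffic types exempt from IPSec filtering.

EXEMPTIONS = {
    0: {"multicast", "broadcast", "RSVP", "Kerberos", "ISAKMP"},
    #  ^ legacy default: Windows 2000 (SP3 and earlier) and Windows XP
    1: {"multicast", "broadcast", "ISAKMP"},   # Kerberos and RSVP filtered
    2: {"RSVP", "Kerberos", "ISAKMP"},         # multicast/broadcast filtered
    3: {"ISAKMP"},                             # Windows Server 2003 default
}

def is_exempt(traffic_type, no_default_exempt=3):
    return traffic_type in EXEMPTIONS[no_default_exempt]

print(is_exempt("Kerberos", 0))   # True  - the risky legacy behavior
print(is_exempt("Kerberos", 3))   # False - filtered under the 2003 default
print(is_exempt("ISAKMP", 3))     # True  - IKE traffic remains exempt
```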
The following table summarizes the equivalent filters that are implemented if all default
exemptions to IPSec filtering are enabled (that is, if NoDefaultExempt is 0). When the IP
address is specified, the subnet mask is 255.255.255.255. When the IP address is Any, the
subnet mask is 0.0.0.0.
Equivalent Filters When NoDefaultExempt=0
Source Address    Destination Address   Protocol    Source Port  Destination Port  Filter Action
My IP Address     Any IP Address        UDP         Any          88                Permit
Any IP Address    My IP Address         UDP         88           Any               Permit
Any IP Address    My IP Address         UDP         Any          88                Permit
My IP Address     Any IP Address        UDP         88           Any               Permit
My IP Address     Any IP Address        TCP         Any          88                Permit
Any IP Address    My IP Address         TCP         88           Any               Permit
Any IP Address    My IP Address         TCP         Any          88                Permit
My IP Address     Any IP Address        TCP         88           Any               Permit
My IP Address     Any IP Address        UDP         500          500               Permit (note 1)
Any IP Address    My IP Address         UDP         500          500               Permit (note 1)
My IP Address     Peer IP Address       UDP         4500         4500              Permit (note 2)
Peer IP Address   My IP Address         UDP         4500         4500              Permit (note 2)
My IP Address     Any                   46 (RSVP)   -            -                 Permit
Any IP Address    My IP Address         46 (RSVP)   -            -                 Permit
Any IP Address    <multicast>           -           -            -                 Permit (note 3)
My IP Address     <multicast>           -           -            -                 Permit (note 3)
Any IP Address    <broadcast>           -           -            -                 Permit (note 4)
My IP Address     <broadcast>           -           -            -                 Permit (note 4)
<All IPv6 protocol traffic>             -           -            -                 Permit (note 5)

Note 1: In order for IPSec transport mode to be negotiated through an IPSec tunnel mode SA, ISAKMP traffic cannot be exempted if it needs to pass through the IPSec tunnel first.
Note 2: When IPSec NAT-T is performed, the filter exemption for UDP port 4500 is automatically generated based on the source and destination IP addresses used during the initial part of the IKE negotiation on UDP port 500. This dynamic permit filter for port 4500 is displayed in the IP Security Monitor snap-in, under Quick Mode\Specific Filters, and in the output for the netsh ipsec dynamic show qmfilter command.


Note 3: Multicast traffic is defined as the class D range, with a destination address range of 224.0.0.0 with a 240.0.0.0 subnet mask, which corresponds to the range of addresses from 224.0.0.0 to 239.255.255.255.
Note 4: Broadcast traffic is defined as a destination address of 255.255.255.255 (the limited broadcast address) or as having the host ID portion of the IP address set to all 1s (the subnet broadcast address).
Note 5: IPSec does not support filtering for IP version 6 (IPv6) packets, except when IPv6 packets are encapsulated with an IPv4 header.
Windows Server 2003 IPSec does not support specific filters for broadcast protocols or ports, nor
does it support multicast groups, protocols, or ports. Because IPSec does not negotiate security
for multicast and broadcast traffic, these types of traffic are dropped if they match a filter with a
corresponding filter action to negotiate security. A filter with a source address of Any IP Address
and a destination address of Any IP Address can block or permit all multicast and broadcast
traffic. By default (and if the NoDefaultExempt registry key is set to a value of 2 or 3), outbound
multicast or broadcast traffic will be matched against a filter with a source address of My IP
Address and a destination address of Any IP Address. More specific unicast IP address filters
that block, permit, or negotiate security for unicast IP traffic should be configured in the same
IPSec policy to achieve appropriate security.
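The multicast and broadcast definitions above (class D range 224.0.0.0/4, limited broadcast, and subnet broadcast with all host bits set to 1) can be checked with Python's standard ipaddress module. The function name and return strings are illustrative; subnet broadcast detection requires knowing the local prefix, so it takes an optional network argument.

```python
# Sketch classifying a destination address per the definitions above:
# multicast is the class D range 224.0.0.0/4; broadcast is either the
# limited broadcast address or the subnet broadcast (host bits all 1s).

import ipaddress

def traffic_class(dst, local_net=None):
    addr = ipaddress.IPv4Address(dst)
    if addr in ipaddress.IPv4Network("224.0.0.0/4"):
        return "multicast"
    if addr == ipaddress.IPv4Address("255.255.255.255"):
        return "broadcast"          # limited broadcast address
    if (local_net is not None
            and addr == ipaddress.IPv4Network(local_net).broadcast_address):
        return "broadcast"          # subnet broadcast (host bits all 1s)
    return "unicast"

print(traffic_class("239.255.255.250"))                  # multicast
print(traffic_class("255.255.255.255"))                  # broadcast
print(traffic_class("192.168.1.255", "192.168.1.0/24"))  # broadcast
print(traffic_class("10.0.0.7"))                         # unicast
```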

4.9.4 Hardware acceleration (offloading)


Hardware acceleration is accomplished by offloading specific processing tasks that are normally
completed by an operating system component to the network adapter. Some network adapters
can perform IPSec cryptographic functions, such as encryption and decryption of data and the
calculation and verification of message authentication codes.
When the NDIS interface binds, the offload capability of the network interface is queried. During
outbound packet processing, after an SA is created a check is made to ensure that the network
interface can offload cryptographic functions, support transport-over-tunnel functionality, and
support IP header options. If not, the packet cannot be offloaded. A check is also made to
determine whether the SA for the packet being offloaded is a soft SA. A soft SA is an SA in which
no authentication or encryption is being performed because the computer with which
communication occurs is not running IPSec. Because no AH or ESP headers need to be
processed, hardware offloading is unnecessary.
If hardware offloading is enabled, a check is made per packet to determine whether the SA for an
outbound packet has already been offloaded to the offload adapter. If so, the existing offloaded
SA is used. If the SA is not yet offloaded and the offload has not previously failed for this SA, an
attempt is made to offload the SA. However, the attempt to offload the SA is made
asynchronously. The IPSec driver does not wait for the SA offload to be successful before
continuing to process the packet. This causes the first packet to always be cryptographically
processed by the IPSec driver, with the cryptographic processing of following packets occurring
on the hardware offload network adapter.
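The asynchronous offload behavior described above can be sketched as a small state machine: the offload request is issued but not awaited, so the first packet for an SA is always processed in software. Class and method names here are invented for illustration.

```python
# Hypothetical sketch of asynchronous SA offload: the driver requests the
# offload but does not wait for it, so the first packet is processed in
# software and later packets use the hardware path once the NIC confirms.

class OffloadState:
    def __init__(self):
        self.offloaded = set()    # SAs the NIC has accepted
        self.requested = set()    # SAs with an offload request in flight

    def process_packet(self, sa_id):
        if sa_id in self.offloaded:
            return "hardware"            # subsequent packets: NIC does crypto
        if sa_id not in self.requested:
            self.requested.add(sa_id)    # fire-and-forget offload request
        return "software"                # driver does not wait for the offload

    def offload_completed(self, sa_id):  # NIC signals completion later
        self.offloaded.add(sa_id)

state = OffloadState()
print(state.process_packet("sa-1"))   # software: offload only just requested
state.offload_completed("sa-1")
print(state.process_packet("sa-1"))   # hardware: SA now lives on the adapter
```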
Typically, IPSec network offload adapters do not accelerate the IKE negotiation. However, some
SSL offload adapters might be capable of processing the IKE Diffie-Hellman calculation in
hardware. To determine whether your SSL offload adapter can do so, see the manufacturer's
documentation.
Windows 2000, Windows XP, and Windows Server 2003 provide hardware acceleration APIs in
the Windows Driver Development Kit (DDK) as part of TCP/IP Task Offload.

4.10 Network Ports and Protocols Used by IPSec


The following table lists the network ports and protocols used by IPSec.

4.10.1 IPSec Port and Protocol Assignments


Protocol   Protocol ID   UDP Port     TCP Port
ESP        50            N/A          N/A
AH         51            N/A          N/A
ISAKMP     N/A           500 (4500)   N/A

The following sections describe how to configure routers, firewalls, or other filtering devices to
ensure that traffic that is sent over IPSec protocols can pass through these devices. Additional
considerations for IPSec NAT traversal are also described.

4.10.2 Firewall Filters


In order for IPSec-secured communications to take place through a firewall or other filtering
device, you must configure the firewall to permit IPSec traffic on UDP source and destination port
500 (ISAKMP) and IP Protocol 50 (ESP). You might also need to configure the firewall to permit
IPSec traffic on IP protocol 51 (AH) to permit troubleshooting by IPSec administrators and to
allow the traffic to be inspected while it is still IPSec-encapsulated.
Ensure that the firewall filter can permit or track fragments for ISAKMP. IKE with certificate or
Kerberos authentication requires ISAKMP packets to be fragmented because the ISAKMP
protocol uses UDP. ISAKMP messages that are larger than the local interface MTU are
automatically fragmented by IP. If only certificate authentication is used, Windows Server 2003
implements a method to avoid IKE message fragmentation. When certificate authentication is
used for communication between computers running Windows Server 2003 IPSec and
Windows XP IPSec or Windows 2000 IPSec, fragmentation is required.
You must also allow ISAKMP to be initiated from either a source or destination IP address.
RFC 2408 specifies that the ISAKMP protocol must be able to negotiate security in either
direction. Stateful filtering that allows only one computer to initiate IKE to a responder typically
times out and deletes the stateful inbound filter in the firewall. As a result, IKE cannot rekey IPSec
security associations, and IPSec connectivity is lost.
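The firewall configuration this section calls for reduces to three permit rules: UDP port 500 (ISAKMP), IP protocol 50 (ESP), and optionally IP protocol 51 (AH). The rule and packet dictionary shapes below are invented for illustration; they are not the syntax of any particular firewall product.

```python
# Sketch of the firewall rules needed for IPSec-secured traffic: permit
# ISAKMP (UDP 500), ESP (IP protocol 50), and optionally AH (protocol 51).
# Rule/packet field names are illustrative assumptions.

RULES = [
    {"protocol": "UDP", "src_port": 500, "dst_port": 500},  # ISAKMP (IKE)
    {"ip_proto": 50},                                       # ESP
    {"ip_proto": 51},                                       # AH (optional)
]

def permitted(packet):
    # A packet is permitted if every field named in some rule matches.
    for rule in RULES:
        if all(packet.get(k) == v for k, v in rule.items()):
            return True
    return False

print(permitted({"protocol": "UDP", "src_port": 500, "dst_port": 500}))  # True
print(permitted({"ip_proto": 50}))                                       # True
print(permitted({"protocol": "TCP", "dst_port": 80}))                    # False
```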

4.10.3 IPSec NAT-T


In Windows 2000 and Windows XP, if traffic between the client and a server must pass through a
network address translator (NAT), then IPSec cannot secure the traffic (the IKE negotiation will
fail when translated by a NAT). Windows Server 2003 provides support for version 2 of a new
IETF Internet draft called IPSec NAT-T. IPSec NAT-T allows IPSec ESP packets in either
transport mode or tunnel mode to pass through NATs that allow UDP traffic. In this design, IKE
automatically detects NATs and uses UDP-ESP encapsulation on UDP port 4500 to enable traffic
to pass through a network address translator. The Windows Server 2003 implementation of
IPSec NAT-T also supports PMTU discovery for UDP-ESP encapsulation. This new functionality
allows you to secure servers running Windows Server 2003, when clients are behind a network
address translator. IPSec NAT-T does not support the use of AH across network address
translators.
If you are using IPSec NAT-T to secure a server, it is recommended that you do not create UDP
port 4500 filters in the IPSec policy that is assigned to the server. The IPSec driver recognizes
UDP port 4500 traffic and detects the associated UDP-ESP quick mode SA. However, if you are
using firewalls or filtering routers to filter traffic for the IPSec-secured server, then you must
configure the firewalls or filtering routers to permit the UDP-ESP traffic.
To configure firewalls or filtering routers to permit traffic on UDP source and destination port
4500, use the following settings to create a filter called Permit IPSec NAT-T ISAKMP traffic on
UDP port 4500:
- Source address = SpecificIPAddress
- Destination address = SpecificIPAddress
- Protocol = UDP
- Source port = Any or 4500 (the network address translator might translate source port 4500 to a different source port)
- Destination port = 4500
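The key detail in the filter above is that the destination port must be 4500 while the source port is left unconstrained, because a NAT may rewrite it. A small illustrative check (function and field names are assumptions for this sketch):

```python
# Sketch of the NAT-T permit filter described above: match on UDP and
# destination port 4500, but deliberately do NOT check the source port,
# since a NAT may have translated it to an arbitrary value.

def natt_filter_matches(packet, src_ip, dst_ip):
    return (packet["src"] == src_ip
            and packet["dst"] == dst_ip
            and packet["protocol"] == "UDP"
            and packet["dst_port"] == 4500)   # src_port intentionally unchecked

pkt = {"src": "1.2.3.4", "dst": "5.6.7.8",
       "protocol": "UDP", "src_port": 61003, "dst_port": 4500}
print(natt_filter_matches(pkt, "1.2.3.4", "5.6.7.8"))
# True even though the NAT translated the source port away from 4500
```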


4.10.4 Configuring Wireless Network Policies


Wireless network settings can be configured locally, by users on client computers, or centrally. To
enhance the deployment and administration of wireless networks, you can use Group Policy to
centrally create, modify, and assign wireless network policies for Active Directory clients. When
you use Group Policy to define wireless network policies, you can configure wireless network
connection settings, enable IEEE 802.1X authentication for wireless network connections, and
specify the preferred wireless networks that clients can connect to. When you create and
configure wireless policies, you have the options that are described in Table 4.4. For more
information about configuring wireless policies, see "Define Active Directory-based wireless
network policies" in Help and Support Center for Windows Server 2003.
Table 4.4: Configuration Settings for General Policy
Name — Name of the policy. Use a unique and descriptive name of up to 255 characters that easily identifies the policy.
Check for policy changes every — Specifies in minutes how often to poll Active Directory for changes to this policy. Applies only to computers that are members of an Active Directory domain. The default is 180 minutes.
Network to access (Any available network (access point preferred); Access point (infrastructure) networks only; Computer-to-computer (ad hoc) networks only) — Specifies the types of IEEE 802.11 wireless networks that are available for clients to try to connect to.
Use Windows to configure my wireless network settings — Specifies whether client settings are automatically configured for IEEE 802.11 wireless network connections.
Automatically connect to non-preferred networks — Specifies whether clients can try to connect to any available IEEE 802.11 wireless networks that are within range.

4.10.5 Network authentication services


The IEEE 802.11-supported network authentication services provide open system and shared
key authentication. Open system authentication permits any wireless device to associate with an
access point. Shared key authentication requires a network key to be used. For security reasons,
shared key authentication is not recommended; instead, use open system authentication in
conjunction with 802.1X authentication.
Network keys
When you enable WEP, you can require that a network key be used for encryption. You can
specify a key by typing it in the Network key text box when you configure the wireless
connection. If you specify a key, you can also provide its location in the Key index text box (on
the Properties page for Wireless Network Connections). The following table includes descriptions
of the configuration settings for requiring network keys.
Table: Configuration Settings for Preferred Networks
Networks — Lists the IEEE 802.11 wireless networks to which clients can try to connect. Use the Move Up and Move Down buttons to prioritize the list. Use the Add button to add a new wireless network. You can also edit properties of a network by using the Edit button, or use the Remove option to remove an entry from the list.
Network name (SSID) — Specifies the name for the specified wireless network. Under the IEEE 802.11 standard, the network name is also known as the Service Set Identifier (SSID).
Description — Provides a description for the specified wireless network. Use a unique description of up to 255 characters.
Wireless network key (WEP) — Data encryption (WEP enabled) specifies that a network key is used to encrypt the data that is sent over the network. Network authentication (Shared mode) specifies that a network key be used for authentication to the wireless network. The key is provided automatically specifies that a network key is automatically provided for clients.
This is a computer-to-computer (ad hoc) network; wireless access points are not used — Specifies whether this preferred network is a computer-to-computer ad hoc network. If this check box is not selected, this network is an access point (infrastructure) network.
IEEE 802.1X authentication
To provide user and computer identification, centralized authentication, and dynamic key
management, you can enable IEEE 802.1X authentication.
You can use Group Policy to create a wireless configuration policy to configure IEEE 802.11 and
IEEE 802.1X values. The following two tables list the wireless network policy settings that you can
specify.
Table Wireless Network (IEEE 802.11) Policy Settings
Enable network access control using IEEE 802.1X: Use 802.1X authentication when you
connect to an 802.11 wireless network.
EAPOL-Start message (Do not transmit, Transmit, or Transmit per IEEE 802.1X): Specifies
how Extensible Authentication Protocol over LAN (EAPOL)-Start messages are transmitted.
Table Wireless Network (IEEE 802.1X) Authentication Settings
Parameters (seconds): The default Max start value is 3, the default Held period is 60 seconds,
the default Start period is 60 seconds, and the default Authentication period is 30 seconds.
EAP type (Smart card or other certificate, or Protected Extensible Authentication Protocol
(PEAP)): Click Settings to specify the options to use when connecting, including: using a smart
card or certificate on the computer; validating the server certificate; specifying which servers to
connect to; Trusted Root Certification Authorities; viewing certificates; and selecting and
configuring an authentication method.
Authenticate as guest when user or computer information is unavailable: Specifies whether
clients attempt authentication to the wireless network as guests when user or computer
information is not available.
Authenticate as computer when computer information is available: Specifies whether client
computers must attempt authentication to the wireless network when a user is not logged on.
The default setting is Enabled.
Computer authentication (With user authentication, With user re-authentication, or
Computer only): It is recommended that you select With user re-authentication. When this
option is selected, authentication is performed by using the computer credentials when users are
not logged on to the computer. After a user logs on to the computer, authentication is performed
by using the user credentials. When a user logs off of the computer, authentication is performed
by using the computer credentials.
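The timer parameters above govern how a supplicant paces its EAPOL-Start messages: it transmits at most Max start messages, one every Start period seconds, and gives up if no authenticator responds. A rough simulation of that pacing, with no real network I/O and a hypothetical `responds_on` parameter standing in for the authenticator's behavior:

```python
def eapol_start_attempts(max_start=3, start_period=60, responds_on=None):
    """Simulate EAPOL-Start pacing: up to max_start transmissions,
    start_period seconds apart, stopping early if the authenticator
    responds on attempt number `responds_on` (1-based; None = never).
    Returns (attempts_sent, seconds_elapsed, authenticated)."""
    elapsed = 0
    for attempt in range(1, max_start + 1):
        # Transmit an EAPOL-Start message (simulated).
        if responds_on is not None and attempt >= responds_on:
            return attempt, elapsed, True
        elapsed += start_period      # wait one Start period before retrying
    return max_start, elapsed, False

# Defaults from the table: Max start 3, Start period 60 seconds.
print(eapol_start_attempts())                 # (3, 180, False): no response
print(eapol_start_attempts(responds_on=2))    # (2, 60, True): answered on the retry
```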

Creating Wireless Network Policies


You can define wireless network policies for your organization by using the Group Policy Object
Editor snap-in.
To access Wireless Network (IEEE 802.11) Policies
1. Open GPMC.
2. Right-click the GPO that you want to edit, and then click Edit.
3. In the Group Policy Object Editor console tree, click Computer Configuration, click
Windows Settings, and then click Security Settings.
4. Right-click Wireless Network (IEEE 802.11) Policies on Active Directory, and then
click Create Wireless Policies. The Wireless Policy Wizard starts.

4.10.6 Defining Wireless Configuration Options for Preferred Networks


By using the Properties page for your wireless configuration policy, you can define a list of
preferred networks to use. You can use the General tab to specify how often to check for policy
changes, which networks to access, whether to disable the Wireless Zero Configuration service,
and whether to automatically connect to non-preferred networks.
To define preferred wireless networks
1. Open GPMC.
2. In the console tree, expand the domain or OU that you want to manage, right-click the
Group Policy object that you want to edit, and then click Edit.
3. In the Group Policy Object Editor console tree, click Computer Configuration, click
Windows Settings, and then click Security Settings.
4. Click Wireless Network (IEEE 802.11) Policies, right-click the wireless network policy
that you want to modify, and then click Properties.
5. Click the Preferred Networks tab, and then click Add.
6. Click the Network Properties tab, and then in the Name box, type a unique name.
7. In the Description box, type a description of the wireless network, such as the type of
network and whether WEP and IEEE 802.1X authentication are enabled.
8. In the Wireless network key (WEP) box, specify whether a network key is used for
encryption and authentication, and whether a network key is provided automatically. The
options are:
o Data encryption (WEP enabled). Select this option to require that a network key
be used for encryption.
o Network authentication (Shared mode). Select this option to require that a
network key be used for authentication. If this option is not selected, a network key is not
required for authentication, and the network is operating in open system mode.
o The key is provided automatically. Select this option to specify whether a
network key is automatically provided for clients (for example, whether a network key is
provided for wireless network adapters).
9. To specify that the network is a computer-to-computer (ad hoc) network, click to select
the This is a computer-to-computer (ad hoc) network; wireless access points are not
used check box.
To define 802.1X authentication
1. Open GPMC.


2. In the console tree, expand the domain or OU that you want to manage, right-click the
Group Policy object that you want to edit, and then click Edit.
3. In the Group Policy Object Editor console tree, click Computer Configuration, click
Windows Settings, and then click Security Settings.
4. Click Wireless Network (IEEE 802.11) Policies, right-click the wireless network policy
that you want to modify, and then click Properties.
5. On the Preferred Networks tab, under Networks, click the wireless network for which
you want to define IEEE 802.1X authentication.
6. On the IEEE 802.1X tab, check the Enable network access control using IEEE 802.1X
check box to enable IEEE 802.1X authentication for this wireless network. This is the default
setting. To disable IEEE 802.1X authentication for this wireless network, clear the Enable
network access control using IEEE 802.1X check box.
7. Specify whether to transmit EAPOL-start message packets and how to transmit them.
8. Specify EAPOL-Start message packet parameters.
9. In the EAP type box, click the EAP type that you want to use with this wireless network.
10. In the Certificate type box, select one of the following options:
o Smart card. Permits clients to use the certificate that resides on their smart card
for authentication.
o Certificate on this computer. Permits clients to use the certificate that resides
in the certificate store on their computer for authentication.
11. To verify that the server certificates that are presented to client computers are still valid,
select the Validate server certificate check box.
12. To specify whether client computers must try authentication to the network, select one of
the following check boxes:
o Authenticate as guest when user or computer information is unavailable.
Specifies that the computer must attempt authentication to the network if user
information or computer information is not available.
o Authenticate as computer when computer information is available. Specifies
that the computer attempts authentication to the network if a user is not logged on. After
you select this check box, specify how the computer attempts authentication.
To use Windows to configure wireless network settings on a client computer
1. Open Network Connections.
2. Right-click Wireless Network Connection, and then click Properties.
3. Click the Wireless Networks tab.
4. On the Wireless Networks tab, do one of the following:
o To use Windows to configure wireless network settings on your computer, select
the Use Windows to configure my wireless network settings check box. This check
box is selected by default. For information about this option, see Notes.
o If you do not want to use Windows to configure wireless network settings on your
computer, clear the Use Windows to configure my wireless network settings check
box.
To add, edit, or remove wireless network connections on a client computer
1. Open Network Connections.
2. Right-click Wireless Network Connection, and then click Properties.
3. Click the Wireless Networks tab.
4. Choose whether to add, modify, or remove a wireless network connection:
To add a new wireless network connection: Select the Use Windows to configure my wireless
network settings check box, and then click Add.
To modify an existing wireless network connection: Select the Use Windows to configure my
wireless network settings check box. Under Preferred networks, click the wireless network
connection that you want to modify, and then click Properties.
To remove a preferred wireless network connection: Under Preferred networks, click the
wireless network connection that you want to remove, and then click Remove.
5. If you are adding or modifying a wireless network connection, click the Association tab,
configure wireless network settings as needed, and then click OK. For more information, see
Related Topics.
6. To define 802.1X authentication for the wireless network connection, click the
Authentication tab, and then configure the settings as needed, and then click OK. For more
information, see Related Topics.
7. To connect to a wireless network after configuring network settings, on the Wireless
Networks tab, under Available networks, click the network name, click Configure, and
then, in Wireless network properties, click OK.
Important
o If a network does not broadcast its network name, it does not appear under
Available networks. To connect to an access point (infrastructure) network that you
know is available but that does not appear under Available networks, click Add. On the
Association tab, type the network name and, if needed, configure additional network
settings.
8. To change the order in which connection attempts to preferred networks are made, under
Preferred networks, click the wireless network that you want to move to a new position in
the list, and then click Move up or Move down until the wireless network is at the required
position.
9. To update the list of available networks that are within range of your computer, click
Refresh.
10. To automatically connect to available networks that do not appear in the Preferred
networks list, click Advanced, and then select the Automatically connect to non-
preferred networks check box.

4.10.7 Securing Network Traffic


Hostile users have become increasingly sophisticated in monitoring and compromising network
traffic. The tools that are available to assist them in their activities have also become more
powerful every year. At the same time, most organizations want to open up their networks to
support new methods of doing business that are enabled by telecommunications. Supporting
these business scenarios without compromising security requires the use of several critical
networking technologies, including the following:
Remote access servers
IPSec
Internet Authentication Service (IAS)
Microsoft Internet Security and Acceleration (ISA) Server
A consistent and integrated plan for using these features along with other components of the
networking infrastructure can enhance the security of your environment.
Remote Access Servers
A growing number of users require access to the network when they are away from the
organization's physical premises. However, opening up your network to the outside world poses
potential security risks.
To support users who require access to your network from remote locations without
compromising security, you can deploy a dial-up network, a virtual private network (VPN), or a
combination of both.
A dial-up network enables remote users to dial in directly to a remote access server on
your network.
A VPN enables remote users who are connected to the Internet to establish a connection
to a VPN server on your network through a VPN connection.

Dial-up connections are inherently more private than a solution that uses a public network such
as the Internet. However, with dial-up networking, your organization faces a large initial
investment and continuing expenses throughout the life cycle of the solution. These expenses
include:
Hardware purchase and installation. Dial-up networking requires an initial investment
in modems or other communication hardware, server hardware, and phone line installation.
Monthly phone costs. Each phone line that is used for remote access increases the
cost of dial-up networking. If you use toll-free numbers or the callback feature to defray long
distance charges for your users, these costs can be substantial. Most businesses can
arrange a bulk rate for long distance, which is preferable to reimbursing users individually at
their more expensive residential rates.
Ongoing support. The number of remote access users and the complexity of your
remote access design significantly affects the ongoing support costs for dial-up networking.
Support costs include network support engineers, testing equipment, training, and help desk
personnel to support and manage the deployment. These costs represent the largest portion
of your organization's investment.
If you use a VPN for remote access, users connect to your corporate network over the Internet.
VPNs use a combination of tunneling, authentication, and encryption technologies to create
secure connections. VPNs reduce remote access expenses by using the existing Internet
infrastructure. You can use a VPN to partially or entirely replace your centralized, in-house, dial-
up remote access infrastructure and legacy services.
VPNs offer two primary benefits:
Reduced costs. Using the Internet as a connection medium saves long-distance phone
expenses and requires less hardware than a dial-up networking solution.
Sufficient security. Authentication prevents unauthorized users from connecting to your
network. Strong encryption methods make it extremely difficult for a hostile party to interpret
the data that is sent across a VPN connection.
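The combination the text describes (tunneling plus authentication plus encryption) can be illustrated with a toy encapsulation routine: encrypt the inner packet with a shared key, append a message authentication code, and wrap the result in an outer frame. This is a conceptual sketch using only Python's standard library, not any real VPN protocol; the key handling, frame layout, and stream cipher here are invented for illustration:

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by counter-hashing (toy stream cipher).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def tunnel_wrap(packet: bytes, enc_key: bytes, auth_key: bytes, nonce: bytes) -> bytes:
    # Encrypt the inner packet, then authenticate the nonce plus ciphertext.
    ct = bytes(p ^ k for p, k in zip(packet, _keystream(enc_key, nonce, len(packet))))
    tag = hmac.new(auth_key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct          # outer "tunnel" frame: 8-byte nonce, 32-byte tag

def tunnel_unwrap(frame: bytes, enc_key: bytes, auth_key: bytes) -> bytes:
    nonce, tag, ct = frame[:8], frame[8:40], frame[40:]
    # Authentication step: reject frames whose MAC does not verify.
    expected = hmac.new(auth_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

frame = tunnel_wrap(b"GET /payroll", b"enc-key", b"auth-key", b"\x00" * 8)
print(tunnel_unwrap(frame, b"enc-key", b"auth-key"))  # b'GET /payroll'
```

A hostile party that flips even one bit of the frame fails the MAC check, and without the encryption key the ciphertext is unreadable, which is the property the "sufficient security" benefit above relies on.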

4.10.8 Securing Servers


In general, servers are only as secure as the configuration options that you enable or disable.
The security of the configuration options depends on who can access the servers. A secure
server configuration for one organization might not be secure for another organization. Whether
security is adequate depends on a variety of factors, such as where the server is on the network,
what data is stored on the server, what role the server plays, and who administers it. One point is
certain, however: securing servers requires painstaking attention to detail based on strict
evaluations of what services and configuration options are needed and not needed, and testing to
ensure that access control mechanisms perform without preventing the server from performing its
legitimate tasks.
The Windows Server 2003 Deployment Kit includes detailed information about how to configure
and administer servers securely for a variety of specific roles. In addition, you can enhance
general server security by doing the following:
Whenever possible, keep critical servers offline or unavailable to the Internet.
Require administrators to use smart cards when they access your servers.
Implement a strategy for auditing all activities on critical servers.
Create standard server configurations and require their use whenever a server is
deployed to fill a similar role, even in remote offices or in subsidiaries.

Section 5

5. Planning, Implementing, and Maintaining Security Infrastructure


5.1 Overview of the PKI Design Process


5.2 Public Key Infrastructure


5.2.1 Process for Designing a PKI
5.2.2 Basic PKI Concepts
5.2.3 Windows Server 2003 PKI
5.2.4 How a Public Key Infrastructure Works
5.2.5 Defining Certificate Requirements
5.2.6 Determining Secure Application Requirements
5.3 Windows Server 2003 PKI can support the following security applications:
5.3.1 Digital Signatures
5.3.2 Secure E-mail
5.3.3 Software Code Signing
5.3.4 Internet Authentication
5.3.5 IP Security
5.3.6 Smart Card Logon
5.3.7 Encrypting File System Use and Recovery
5.3.8 Wireless (802.1x) Authentication
5.4 Determining Certificate Requirements for Users, Computers, and Services
5.5 Designing Your CA Infrastructure
5.5.1 Planning Core CA Options
5.5.2 Designing Root CAs
5.5.3 Selecting Internal CAs vs. Third-Party CAs
5.5.4 Evaluating CA Capacity, Performance, and Scalability
5.6 Integrating the Active Directory Infrastructure
5.7 Configuring Public Key Group Policy
5.8 Defining PKI Management and Delegation
5.9 Defining CA Types and Roles
5.9.1 Enterprise vs. Stand-Alone CAs
5.9.2 Root CAs
5.9.3 Subordinate CAs
5.9.4 Using Hardware CSPs
5.10 Establishing a CA Naming Convention
5.11 Selecting a CA Database Location
5.12 Overview of Smart Card Deployment
5.12.1 Process for Planning a Smart Card Deployment
5.12.2 Smart Card Fundamentals
5.12.3 Components of a Smart Card Infrastructure
5.12.4 Creating a Plan for Smart Card Use
5.12.5 Identifying the Processes That Require Smart Cards
5.12.6 Interactive User Logons
5.12.7 Remote Access Logons
5.12.8 Terminal Services and Shared Clients
5.12.9 Using Smart Cards for Individual Administrative Operations
5.12.10 Defining Smart Card Service Level Requirements
5.12.11 Selecting Smart Card Hardware
5.12.12 Smart Card Roles
5.12.13 Evaluating Smart Cards and Readers
5.12.14 Planning Smart Card Certificate Templates
5.12.15 Establishing Issuance Processes

5.13 Software Update Services Overview


5.13.1 Implementing a SUS Solution
5.13.2 SUS Security Features


5.1 Overview of the PKI Design Process


Organizations use a variety of technology solutions to enable essential business processes, such
as online ordering, exchanges of contracts, and remote access. A public key infrastructure based
on Microsoft Windows Server 2003 Certificate Services provides a means by which organizations
can secure these critical internal and external processes.
Deploying a PKI allows you to perform tasks such as:
Digitally signing files such as documents and applications.
Securing e-mail from unintended viewers.
Enabling secure connections between computers, even if they are connected over the
public Internet or through a wireless network.
Enhancing user authentication through the use of smart cards.
If your organization does not currently have a public key infrastructure, begin the process of
designing a new public key infrastructure by identifying the certificate requirements for your
organization. If your organization already uses a public key infrastructure based on
Microsoft Windows NT version 4.0, Microsoft Windows 2000, or third-party certificate
services, you can improve your PKI capabilities by taking advantage of new and enhanced
features in Microsoft Windows Server 2003, Standard Edition; Windows Server 2003,
Enterprise Edition; and Windows Server 2003, Datacenter Edition. When you have completed
the PKI design process, you can deploy a public key infrastructure that provides solutions for all
of your internal security requirements, as well as security requirements for business exchanges
with external customers or business partners.

5.2 Public Key Infrastructure


The laws, policies, standards, and software that regulate or manipulate certificates and public and
private keys. In practice, it is a system of digital certificates, certification authorities, and other
registration authorities that verify and authenticate the validity of each party involved in an
electronic transaction. Standards for PKI are still evolving, even though they are being widely
implemented as a necessary element of electronic commerce.

5.2.1 Process for Designing a PKI


Designing a PKI for your organization involves defining your certificate requirements, creating a
design for your infrastructure, creating a certificate management plan, and deploying your PKI
solution. The following figure shows the steps that are involved in designing a public key infrastructure.
Figure Designing a PKI


5.2.2 Basic PKI Concepts


Public key infrastructure is the term used to describe the laws, policies, procedures, standards,
and software that regulate or control the operation of certificates and public and private keys.
More specifically, a PKI is a system of digital certificates, certification authorities, and other
registration authorities that verify and authenticate the validity of each party involved in an
electronic transaction.
A PKI consists of the following basic components:
Digital certificates: Electronic credentials, consisting of public keys, which are used to sign and
encrypt data. Digital certificates provide the foundation of a PKI.
One or more certification authorities (CAs): Trusted entities or services that issue digital
certificates. When multiple CAs are used, they are typically arranged in a carefully prescribed
order and perform specialized tasks, such as issuing certificates to subordinate CAs or issuing
certificates to users.
Certificate policy and practice statements: The two documents that outline how the CA and its
certificates are to be used, the degree of trust that can be placed in these certificates, legal
liabilities if the trust is broken, and so on.
Certificate repositories: A directory service or other location where certificates are stored and
published. In a Windows Server 2003 domain environment, the Active Directory directory
service is the most likely publication point for certificates issued by Windows Server 2003-based
CAs.
Certificate revocation lists (CRLs): Lists of certificates that have been revoked before reaching
their scheduled expiration dates.
Certificate trust lists: Signed lists, located on the client, of trusted CA certificates. Certificate
trust means that a certificate is part of a certificate trust list (CTL) or that the CTL contains a
trusted certificate from another CA that is part of the certificate's certificate chain. Windows
Server 2003 domain administrators can use Group Policy objects (GPOs) to publish and
maintain CTLs.


Key archival and recovery: A feature that makes it possible to archive and recover the private
key portion of a public-private key pair, in the event that a user loses his or her private keys, or an
administrator needs to assume the role of a user for data access or data recovery. Private key
recovery does not recover any data or messages; it merely enables the recovery process.
Public key standards: Standards developed to describe the syntax for digital signing and
encrypting of messages and to ensure that a user has an appropriate private key. To maximize
interoperability with third-party applications that use public key technology, the Windows
Server 2003 PKI is based on the standards recommended by the Public-Key Infrastructure
(X.509) (PKIX) working group of the Internet Engineering Task Force (IETF). Other standards that
the IETF has recommended also have a significant impact on public key infrastructure
interoperability, including standards for Transport Layer Security (TLS), Secure/Multipurpose
Internet Mail Extensions (S/MIME), and Internet Protocol security (IPSec).

5.2.3 Windows Server 2003 PKI


You can use PKI-based applications on workstations and servers running Microsoft
Windows XP Professional, Windows Server 2003, Windows 2000, or Windows NT 4.0, as well
as on workstations running Microsoft Windows 95 and Microsoft Windows 98. The ability to
create and manage a PKI is available in Microsoft Windows NT 4.0 Server, Microsoft
Windows 2000 Server, and Windows Server 2003. However, Windows Server 2003 provides
more extensive support for a PKI.
In addition, a growing number of applications and system services that require the secure transfer
of information also rely on the Windows Server 2003 PKI. Applications that are enabled for
certificate-based security include Microsoft Outlook, Internet Explorer, Internet Information
Services, Microsoft Exchange Server, Microsoft Commerce Server 2000 and Commerce
Server 2002, Outlook Express, and Microsoft SQL Server. A number of third-party
applications also take advantage of the Windows Server 2003 PKI.

5.2.4 How a Public Key Infrastructure Works


A Windows Server 2003 PKI makes it possible for an organization to do the following:
Publish certificates. The PKI administrator makes certificate templates available to
clients (users, services, applications, and computers) and enables additional CAs to issue
certificates.
Enroll clients. To participate in a PKI, users, services, or computers must request and
receive certificates from an issuing CA or a Registration Authority (RA). Typically, enrollment
is initiated when a requester provides unique information and a newly generated public key.
The CA administrator or enrollment agent uses the information provided to authenticate the
identity of the requester before issuing a certificate.
Use certificates. Clients use their certificates, which are validated or invalidated in a
timely manner as long as CAs and certificate revocation lists are available to verify or deny
their authenticity. If they are validated, a PKI provides an easy way for users to use keys in
conjunction with applications that perform public key cryptographic operations, making it
possible to provide security for e-mail, e-commerce, and networks.
Renew or revoke certificates. A well-designed PKI makes it easy for you to renew or
revoke existing certificates, and to manage the trust level associated with certificates used by
different clients or for different applications.
The status of a public key certificate is determined by means of the chain building process. Chain
building is the process of building a trust chain, or certification path from the end certificate to a
root CA that is trusted by the security principal. The following figure shows a certification path in a
two-level CA hierarchy.
Figure Certification Path in a Two-Level CA Hierarchy


In this example, the issuing CA issued the User certificate, and the root CA issued the certificate
of the issuing CA. This is considered a trusted chain, because it terminates with a root CA
certificate that has been designed and implemented to meet the highest degree of trust.
The chain building process validates the certification path by checking each certificate in the
certification path from the end certificate to the certificate of the root CA. If the CryptoAPI
discovers a problem with one of the certificates in the path, or if it cannot find a certificate, the
certification path is either considered invalid or is given less weight than a fully validated
certificate.
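The chain-building walk described above can be sketched as a loop from the end certificate through each issuer until a trusted root is reached, rejecting the path when a certificate is missing or revoked. The certificate records below are hypothetical name-to-issuer pairs; real validation (for example, by CryptoAPI) also verifies signatures, validity periods, and certificate extensions:

```python
def build_chain(cert_name, certs, trusted_roots, revoked):
    """Walk issuer links from the end certificate toward a trusted root.

    certs: {name: issuer_name}; a self-issued root points to itself.
    Returns the certification path, or raises ValueError if it is invalid.
    """
    path, current = [], cert_name
    while True:
        if current not in certs:
            raise ValueError("certificate not found: " + current)
        if current in revoked:
            raise ValueError("certificate revoked: " + current)  # CRL check
        path.append(current)
        if current in trusted_roots:
            return path                  # path terminates at a trusted root CA
        issuer = certs[current]
        if issuer == current:
            raise ValueError("untrusted root reached")
        current = issuer

# Two-level hierarchy like the figure: User -> Issuing CA -> Root CA.
certs = {"User": "IssuingCA", "IssuingCA": "RootCA", "RootCA": "RootCA"}
print(build_chain("User", certs, trusted_roots={"RootCA"}, revoked=set()))
# ['User', 'IssuingCA', 'RootCA']
```

Revoking the issuing CA, or removing the root from the trusted set, makes the same path fail, which mirrors how a path is "considered invalid" when any certificate in it cannot be validated.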

5.2.5 Defining Certificate Requirements


You can use a Windows Server 2003 public key infrastructure to provide a wide range of strong,
scalable, cryptography-based solutions for network and information security. The value of the
information that you want to protect, as well as the costs involved with implementing a strong
security system, impact the level of security that you choose for your organization.
The following figure shows the steps that are involved in determining your certificate requirements.
Figure Defining Certificate Requirements

5.2.6 Determining Secure Application Requirements


Before you begin to design your public key infrastructure and configure certificate services, you
need to define the security needs of your organization. For example, does your organization
require electronic purchasing, secure e-mail, secure connections for roaming users, or digital
signing of files? If so, you need to configure CAs to issue and manage certificates for each of
these business solutions.

5.3 Windows Server 2003 PKI can support the following security applications:
Digital signatures
Secure e-mail
Software code signing
Internet authentication
IP security
Smart card logon
Encrypting file system user and recovery certificates
802.1x authentication

5.3.1 Digital Signatures


A digital signature is a means for originators of a message, file, or other digitally encoded
information to bind their identities to the data. This can be extremely useful for important
documents such as legal opinions and contracts. The process of digitally signing information
involves transforming the information, together with some secret information held by the sender,
into a tag called a signature. Digital signatures are used in public key environments to help
secure electronic commerce transactions by providing verification that the individual sending the
message is who he or she claims to be, and by confirming that the message received is identical
to the message sent.
You can use digital signatures even when data is distributed in plaintext, such as with e-mail. In
this case, while the sensitivity of the message itself does not warrant encryption, a digital
signature can be important as a means of ensuring that the data is in its original form and has not
been sent by an impostor.
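The sign-and-verify flow described above can be sketched with textbook RSA: the sender hashes the message and applies the private key to the digest, and the recipient applies the public key and compares digests. The tiny key below is purely illustrative and offers no real security; production code would rely on a vetted cryptography library rather than this toy arithmetic.

```python
import hashlib

# Toy RSA key pair (p = 61, q = 53): modulus n = 3233, public e = 17, private d = 2753.
N, E, D = 3233, 17, 2753

def digest(message: bytes) -> int:
    # Reduce the SHA-256 digest modulo n so the toy key can operate on it.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    # The sender binds identity to the data with the private exponent.
    return pow(digest(message), D, N)

def verify(message: bytes, signature: int) -> bool:
    # The recipient recovers the digest with the public exponent and compares.
    return pow(signature, E, N) == digest(message)

contract = b"We agree to the terms of this contract."
signature = sign(contract)
print(verify(contract, signature))   # True: the message is intact and was signed
```

Altering even one byte of the message changes its digest, so verification against the original signature then fails (barring a digest collision, which this deliberately tiny modulus makes merely improbable rather than infeasible).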
One way that your organization can capitalize on the use of digital signatures is by using
CAPICOM. CAPICOM is an ActiveX control that provides a COM interface to Microsoft
CryptoAPI. It exposes a select set of CryptoAPI functions to enable application developers to
incorporate digital signing and encryption functionality into their applications. Because CAPICOM
uses COM, application developers can access this functionality in a number of programming
environments, such as Microsoft Visual Basic, Microsoft Visual Basic Scripting Edition,
Active Server Pages, Microsoft JScript, C++, and others. CAPICOM is packaged as an
ActiveX control, allowing Web developers to use it in Web-based applications as well.
You can use CAPICOM for:
Digitally signing data with a smart card or software key.
Verifying digitally signed data.
Displaying certificate information.
Inspecting certificate properties such as subject name or expiration date.
Adding and removing certificates from the certificate stores.
Encrypting and decrypting data with a password.
Encrypting and decrypting data by means of public keys and certificates.
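The sign-then-verify flow that CAPICOM exposes can be illustrated with a toy RSA signature. This sketch is purely conceptual: the key sizes are trivially small, and the padding, certificate handling, and key storage that CryptoAPI performs in production are omitted.

```python
import hashlib

# Toy RSA parameters (illustrative only; real signatures use 2048-bit
# or larger keys generated by a cryptographic service provider).
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (the sender's secret)

def sign(message: bytes) -> int:
    # Hash the message, then transform the digest with the private key
    # into a tag called a signature.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key (e, n) can check the tag.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"legal opinion v1")
print(verify(b"legal opinion v1", sig))   # True: message is authentic
print(verify(b"legal opinion v2", sig))   # False: any change breaks it
```

The second check shows why a signature confirms that the message received is identical to the message sent: altering even one character changes the digest, so the tag no longer verifies.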

5.3.2 Secure E-mail


Standard Internet mail is sent as plaintext over open networks with no security. In the increasingly
interconnected network environments of today, intruders can monitor mail servers and network
traffic to obtain proprietary or sensitive information. You also risk exposure of proprietary and
confidential business information when you send mail over the Internet from within your
organization.

Page 174 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

Another form of intrusion is impersonation. On IP networks, anyone can impersonate mail
senders by using readily available tools to counterfeit the originating IP address and mail
headers. When you use standard Internet mail, you can never be sure who really sent a message
or whether the contents of the message are valid. Moreover, malicious attackers can use mail to
cause harm to the recipient computers and networks (for example, by sending attachments that
contain viruses).
For these reasons, many organizations have placed a high priority on implementing secure mail
services that provide confidential communication, data integrity, and non-repudiation. A Windows
Server 2003 public key infrastructure allows you to enhance e-mail security by using certificates
to prove the identity of the sender, the point of origin of the mail, and the authenticity of the
message. It also makes it possible to encrypt mail. To provide message authentication, data
integrity, and non-repudiation, secure mail clients can sign messages with the private key of the
sender before sending the messages. The recipients then use the public key of the sender to
verify the message by checking the digital signature.
S/MIME clients that run on any platform or operating system can exchange secure mail because
all cryptographic functions are performed on the clients, not on the servers.

5.3.3 Software Code Signing


A growing number of applications, ActiveX controls, and Java applets are being downloaded
and installed on computers with little or no user notification.
In response to this problem, Microsoft introduced Authenticode digital signature technology in
1996, and in 1997 added significant enhancements to this technology. Authenticode technology
allows software publishers to digitally sign any form of active content, including multiple-file
archives. These signatures can be used to verify both the publishers of the content and the
content integrity at time of download. Many software vendors already sign their applications and
you can use these signatures to manage the software applications used on your network.
Authenticode relies on a certification authority structure in which a small number of commercial
CAs issue software-publishing certificates. If you want to expand the use of software-publishing
certificates in your own organization, the Windows 2000 and Windows Server 2003 PKI allows
you to issue your own Authenticode certificates to internal developers or contractors and allows
any employee to verify the origin and integrity of downloaded applications.

5.3.4 Internet Authentication


The Internet has become a key element in the growth of electronic commerce. However, for many
users, security considerations impact how much and what kind of information they are willing to
share across the Internet. The major concerns are:
Confidentiality. Data that is transferred between clients and servers needs to be
encrypted to prevent its exposure over public Internet links.
Server authentication. Clients need a way to verify the identity of the servers they are
communicating with.
Client authentication. Servers need a way to verify the identity of clients.
Client authentication of the server takes place when the client verifies the cryptographic
signatures on the certificate of the server, and any intermediate CA certificates, to a root CA
certificate located in the trusted root store on the client. Server authentication of the client is
accomplished when the server verifies the cryptographic signatures on the certificate of the client,
and any intermediate CA certificates, to a root CA installed in the trusted root store on the server.
When the identity of the client is verified, the server can establish a security context to determine
what resources the client is allowed or not allowed to use on the server.
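The chain-walking logic described above can be sketched as follows. The certificate structure, names, and toy key sizes are illustrative assumptions, not the actual X.509 processing that Windows performs.

```python
import hashlib

def make_key(p, q, e=17):
    # Toy RSA key pair (illustrative key sizes only).
    n = p * q
    return {"n": n, "e": e, "d": pow(e, -1, (p - 1) * (q - 1))}

def sign(data: bytes, key) -> int:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % key["n"]
    return pow(h, key["d"], key["n"])

def sig_ok(data: bytes, sig: int, pub) -> bool:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % pub["n"]
    return pow(sig, pub["e"], pub["n"]) == h

def issue(subject, subject_pub, issuer, issuer_key):
    # A certificate binds a subject name to a public key, signed by its issuer.
    body = f"{subject}|{subject_pub['n']}|{subject_pub['e']}|{issuer}".encode()
    return {"subject": subject, "pub": subject_pub, "issuer": issuer,
            "body": body, "sig": sign(body, issuer_key)}

root_key, inter_key, server_key = make_key(1009, 1013), make_key(997, 991), make_key(983, 977)
root_cert = issue("Root CA", root_key, "Root CA", root_key)           # self-signed
inter_cert = issue("Intermediate CA", inter_key, "Root CA", root_key)
server_cert = issue("www.example.com", server_key, "Intermediate CA", inter_key)

def chain_valid(chain, trusted_roots):
    # Walk from the end-entity certificate toward the root: each signature
    # must verify against the issuer's public key, and the walk must end
    # at a certificate in the trusted root store.
    for cert, issuer in zip(chain, chain[1:]):
        if not sig_ok(cert["body"], cert["sig"], issuer["pub"]):
            return False
    return chain[-1]["subject"] in trusted_roots

print(chain_valid([server_cert, inter_cert, root_cert], {"Root CA"}))  # True
```

The same walk fails if any intermediate signature is invalid or if the final certificate is not in the trusted root store, which is why the trusted root store on each endpoint anchors the whole scheme.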

5.3.5 IP Security
Windows 2000 and Windows Server 2003 incorporate Internet Protocol security (IPSec) to
protect data moving across the network. IPSec is a suite of protocols that allows encrypted and
digitally signed communication between two computers or between a computer and a router over
an insecure network. The encryption is applied at the IP network layer, which means that it is
transparent to most applications that use specific protocols for network communication. IPSec
provides end-to-end security, meaning that the IP packets are encrypted or signed by the sending
entity, are unreadable en route, and can be decrypted only by the recipient entity. Due to a
special algorithm for generating the same shared encryption key at both ends of the connection,
the key does not need to be passed over the network.
You do not need to use public key technology to use IPSec; instead you can use the
Kerberos version 5 authentication protocol or shared secret keys that are communicated securely
by means of an out-of-band mechanism at the network end points for encryption. However, if you
use public key technology in conjunction with IPSec, you can create a scalable distributed trust
architecture in which IPSec devices can mutually authenticate each other and agree upon
encryption keys without relying on prearranged shared secrets, either out-of-band or in-band.
This, in turn, yields a higher level of security than IPSec without a PKI.
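The "special algorithm" referred to above is Diffie-Hellman key agreement, which the Internet Key Exchange (IKE) protocol uses when negotiating IPSec keys. A minimal sketch (the prime and generator here are illustrative, not a standardized IKE group):

```python
import secrets

# Toy Diffie-Hellman key agreement. Both endpoints derive the same key
# without the key itself ever crossing the network.
p = 2**127 - 1   # small prime for illustration; real groups are far larger
g = 5

a = secrets.randbelow(p - 2) + 2   # endpoint A's private value (never sent)
b = secrets.randbelow(p - 2) + 2   # endpoint B's private value (never sent)

A = pow(g, a, p)   # A -> B: only this public value crosses the wire
B = pow(g, b, p)   # B -> A

key_a = pow(B, a, p)   # (g^b)^a mod p
key_b = pow(A, b, p)   # (g^a)^b mod p

print(key_a == key_b)  # True: both ends computed the same shared secret
```

An eavesdropper sees only p, g, A, and B; recovering the shared key from those requires solving the discrete logarithm problem, which is what makes the exchange safe over an insecure network.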

5.3.6 Smart Card Logon


Smart card logon is integrated with the Kerberos version 5 authentication protocol implemented in
Windows Server 2003. When smart card logon is enabled, the system recognizes a smart-card
insertion event as an alternative to the standard Ctrl + Alt + Del secure attention sequence to
initiate a logon. The user is then prompted for the smart card PIN code, which controls access to
operations performed by using the private key stored on the smart card. In this system, the smart
card also contains a copy of the certificate of the user (issued by an enterprise CA). This allows
the user to roam within the domain.
Smart cards enhance the security of your organization by allowing you to store extremely strong
credentials in an easy-to-use form. Requiring a physical smart card for authentication virtually
eliminates the potential for spoofing the identities of your users across a network. In addition, you
can also use smart card applications in conjunction with virtual private networks and certificate
mapping, and in e-commerce. For many organizations, the potential to use smart cards for logon
is one of the most compelling reasons for implementing a public key infrastructure.

5.3.7 Encrypting File System Use and Recovery


The Windows Server 2003 Encrypting File System (EFS) allows users and services to encrypt
their data to prevent others who authenticate to the system from viewing the information.
However, EFS also provides for data recovery if another means is needed to access this data
(for example, if the user who encrypted the data leaves the organization, or if the original
encryption key is lost). To support this requirement, EFS allows recovery agents to configure
public keys that are used to enable file recovery. The recovery key only makes available the
randomly generated file encryption key, not a private key of the user. This ensures that no other
private information is accidentally revealed to the recovery agent.
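This separation works because EFS uses an envelope structure: a randomly generated file encryption key (FEK) protects the file, and the FEK is wrapped separately for the user and for each recovery agent. The sketch below models that structure with a toy XOR cipher; the secrets and field names are illustrative stand-ins for the public key operations EFS actually performs.

```python
import hashlib, secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256-derived keystream. The envelope
    # structure, not the cipher, is the point of this sketch.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def wrap(fek: bytes, party_secret: bytes) -> bytes:
    # Stand-in for encrypting the FEK with one party's public key.
    return keystream_xor(fek, party_secret)

plaintext = b"quarterly figures"
fek = secrets.token_bytes(32)              # random file encryption key
ciphertext = keystream_xor(plaintext, fek)

# The FEK is stored twice: once wrapped for the user, once for the agent.
user_copy = wrap(fek, b"user-private-secret")
agent_copy = wrap(fek, b"recovery-agent-secret")

# The recovery agent unwraps only the FEK; the user's own private key
# is never revealed to the agent.
recovered_fek = wrap(agent_copy, b"recovery-agent-secret")
print(keystream_xor(ciphertext, recovered_fek) == plaintext)  # True
```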

5.3.8 Wireless (802.1x) Authentication


A growing number of organizations and facilities such as airports and hotels are implementing
wireless network access. This creates the challenge of ensuring that:
Only authenticated users can access the wireless network.
Data transmitted across the wireless network cannot be intercepted.
Public key infrastructures, in conjunction with the IEEE 802.1x standard for port-based network
access control, support both of these goals by providing centralized user identification,
authentication, dynamic key management, and accounting to provide authenticated network
access to 802.11 wireless networks and to wired Ethernet networks.

5.4 Determining Certificate Requirements for Users, Computers, and Services


After you have identified the security technologies that you need to implement to meet the
business needs of your organization, you need to identify the categories of users, computers, and
services that will use these technologies and for which you need to provide certificate services.

For example, certificate use might be based on job function, location, organizational structure, or
a combination of these three, or all computers or users in the organization might use certain
certificate applications.
For each of the groups that you have identified, you need to determine:
The types of certificates to be issued. This is based on the security application
requirements of your organization and the design of your PKI infrastructure.
The number of users, computers, and applications that need certificates. This
number can include as few as one or as many users, computers, or applications as are in an
entire organization.
The physical location of the users, computers, and applications that need
certificates. Users in remote offices or users who travel frequently might require
different certificate solutions than users in the headquarters office of an
organization. Also, requirements can differ based on geography. For example, you might
want to restrict users in one country/region from using their certificates to access data in an
organizational business unit in another country/region.
The level of security that is required to support the users, computers, and
applications that need certificates. Users who work with sensitive information typically
require higher levels of security than other members of the organization.
The number of certificates required for each user, computer, and application. In
some cases, one certificate can meet all requirements. Other times, you need multiple
certificates to enable specific applications and meet specific security requirements.
The enrollment requirements for each certificate that you plan to issue. For
example, do users have to present one or more pieces of physical identification, such as a
driver's license, or can they simply request a certificate electronically?

5.5 Designing Your CA Infrastructure


To support the certificate-based applications of your organization, you must establish a
framework of linked CAs that are responsible for issuing, validating, renewing, and revoking
certificates as needed. The goal in establishing a CA infrastructure is to provide reliable service to
users, manageability for administrators, and flexibility to meet both current and future needs,
while maintaining an optimum level of security for the organization.
The figure below shows the steps involved in designing your CA infrastructure.
Figure Designing Your CA Infrastructure

5.5.1 Planning Core CA Options


Before you can establish a CA infrastructure that meets the security needs and certificate
requirements for your organization, you need to make decisions about a number of core CA
options that are available. Planning the CA infrastructure for your organization involves making
decisions about the following:
Location of the root certification authorities.
Internal versus third-party CAs.
Requirements for CA capacity, performance, and scalability.
Your Active Directory structure.
Your PKI management model.
CA types and roles.
Use of hardware cryptographic service providers.
Number of CAs required.

5.5.2 Designing Root CAs


A CA infrastructure consists of a hierarchy of CAs that trust one another and authenticate
certificates belonging to one another. Within this infrastructure, a final authority, called a root CA,
serves as the ultimate point of trust for all the CAs beneath it.
Before you establish a CA hierarchy, you must determine the following:
Who designates the root certification authority in the organization. For example,
determine whether this is the responsibility of central IT, divisional IT departments, or a third-
party organization.
Where the root certification authority is to be located.
Who manages the root certification authority.
Whether the role of the root CA is only to certify other certification authorities, or also to
serve certificate requests from users.
After you have made these determinations, you can define the roles for any additional certification
authorities, including who manages them and what trust relationships they have with other CAs.

5.5.3 Selecting Internal CAs vs. Third-Party CAs


Depending on the functionality that you require, the capabilities of your IT infrastructure and IT
administrators, and the costs that your organization can support, you might choose to base your
certification authority infrastructure on internal CAs, third-party CAs, or a combination of internal
and third-party CAs.
Internal CAs
If your organization conducts most of its business with partner organizations and wants to
maintain control of how certificates are issued, internal CAs are the best choice. Internal CAs:
Allow an organization to maintain direct control over its security policies.
Allow an organization to align its certificate policy with its overall security policy.
Can be integrated with the Active Directory infrastructure of the organization.
Can be expanded to include additional functionality and users at relatively little extra cost.
The disadvantages associated with using internal CAs include:
The organization must manage its own certificates.
The deployment schedule for internal CAs might be longer than that for CAs available
from third-party service providers.
The organization must accept liability for problems with the PKI.
External CAs
If your organization conducts most of its business with external customers and clients and wants
to outsource certificate issuing and management processes, you might choose to use third-party
CAs. Third-party CAs:
Allow customers a greater degree of confidence when conducting secure transactions
with the organization.
Allow the organization to take advantage of the expertise of a professional service
provider.
Allow the organization to use certificate-based security technology while developing an
internally managed PKI.
Allow the organization to take advantage of the provider's understanding of the technical,
legal, and business issues associated with certificate use.
The disadvantages associated with use of third-party CAs include:
They typically involve a high per-certificate cost.
They might require the development of two different management standards, one for
internally issued certificates and one for commercially issued certificates.
They allow less flexibility in configuring and managing certificates.
The organization must have access to the third-party CAs in order to access the CRLs.
Autoenrollment is not possible.
Third-party CAs allow only limited integration with the internal directories, applications,
and infrastructure of the organization.

5.5.4 Evaluating CA Capacity, Performance, and Scalability


Organizations must agree upon a definition of acceptable CA performance. To determine the
appropriate number of CAs and the best configuration for your CA infrastructure, you need to
evaluate and address the factors in your organization that impact CA capacity, performance, and
scalability. These include:
The number of certificates that you need to issue and renew.
The key lengths of the issuing CA certificates.
The type of hardware that is used for your CAs.
The number and configuration of the client computers that you need to support.
The quality of your network connections.
A stand-alone Windows Server 2003 CA supports more than 35 million certificates per physical
CA without any degradation of performance.

An individual departmental certification authority running on a server with a dual processor and
512 megabytes (MB) of RAM can issue more than 2 million standard-key-length certificates per
day. Even with an unusually large CA key, a single stand-alone CA with the appropriate hardware
is capable of issuing more than 750,000 user certificates per day.
Using a greater number of small CAs with strategically located CRL distribution points reduces
the risk that your organization might be forced to revoke and reissue all its certificates if a large
CA is compromised. However, using a greater number of CAs might increase your administrative
overhead.
For many organizations, the primary limitations to CA performance are the amount of physical
storage available and the quality of the clients' network connectivity to the CA. If too many clients
attempt to access your CA over slow network connections, autoenrollment requests can be
delayed.
Another significant factor is the number of roles that a CA server performs on the network. If a CA
server is operating in more than one capacity in the network (for example, if it also functions as
a domain controller), it can negatively impact the capacity and performance of the CA. It can
also complicate the delegation of administration for the CA server. For this reason, unless your
organization is extremely small, use your CAs only to issue certificates.
Some hardware components impact PKI capacity and performance more than others. When you
are selecting the server hardware for your CAs, consider the following:
Number of CPUs. Large CA key sizes require more CPU resources. The greater the
number of CPUs, the better the performance of the CA. CPU power is the most critical
resource for a Windows Server 2003 certification authority.
Note
Because of the architecture of their databases, Windows Server 2003
certification authorities are CPU-intensive and use a substantial amount of the disk
subsystem. However, other hardware resources can also impact the performance of a
CA when the system is put under stress.
Disk performance. In general, a high-performance disk subsystem allows for a faster
rate of certificate enrollment. However, key length impacts disk performance. With a shorter
CA key length, the CPU has fewer calculations to perform and, therefore, it can complete a
large number of operations. With longer CA keys, the CPU needs more time to issue a
certificate and this results in a smaller number of disk input/output (IO) operations per time
interval.
Number of disks. You can improve performance slightly by using separate physical
disks for the database and log files. You can improve performance significantly by placing the
database and log files on RAID or striped disk sets. In general, the drive that contains the
certification authority database is used more than the drive hosting the log file.
Note
Using separate logical disks does not provide any performance advantages.
Amount of memory. The amount of memory that you use does not have a significant
impact on CA performance, but it must meet general system requirements.
Hard disk capacity. Certificate key length does not affect the size of an individual
database record. Therefore, the size of the CA database increases linearly as more records
are added. In addition, the higher the capacity of the hard disk, the greater the number of
certificates that a CA can issue.
Tip
Plan for your hard disk requirements to grow over time. In general, every
certificate that you issue requires 17 kilobytes (KB) in the database and 15 KB in the log
file.
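Applying this rule of thumb, a rough storage estimate is easy to compute (the certificate count here is an assumed example, not a figure from the text):

```python
# Rough CA storage sizing from the rule of thumb above:
# ~17 KB per certificate in the database plus ~15 KB in the log file.
certs_issued = 2_000_000            # assumed total certificates issued
kb_per_cert = 17 + 15               # database + log file
total_gb = certs_issued * kb_per_cert / (1024 * 1024)
print(round(total_gb, 1))           # about 61.0 GB
```

So a CA expected to issue two million certificates should plan on roughly 61 GB of disk for the database and log files combined, before accounting for revocation growth.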
The type of hardware that your clients use can also impact performance. When you are selecting
or evaluating the capabilities of the hardware for your CA clients, consider the following:
Key length. The greater the key length of a requested certificate, the greater the impact
on the CPU of the server hosting the CA.

Network bandwidth. Assuming that the CA is not serving in more than one capacity, a
100-megabit network connection is sufficient to prevent performance bottlenecks.
As you plan your CA infrastructure, you also need to ensure that your design is flexible enough to
accommodate changes to your organization. For example, you need to be able to accommodate:
Changes in the functionality that you require from your public key infrastructure.
Growth or decline in demand for certificates.
The addition or removal of locations that CAs need to serve.
The effect of revocation. Revoking large numbers of certificates can take several minutes
and increase the size of the database.
Using multiple CAs is an excellent way to ensure that your infrastructure can support enterprise
scalability. The use of multiple CAs, even for organizations with minimal certificate requirements,
provides the following advantages:
Greater reliability. If you need to take an individual CA offline for maintenance or
backup, another CA can service its requests.
Scalability. Increases in demand, either from new users or from new applications, can
be accommodated more easily.
Distributed administration. Many organizations distribute security administration across
a number of IT administrators to prevent one individual or team from controlling the entire
security technology infrastructure of the organization.
Improved availability. Users in remote offices can access a CA that is local to them
rather than accessing a CA across slow Wide Area Network (WAN) links.

5.6 Integrating the Active Directory Infrastructure


Your CA infrastructure is independent of the domain structure of your Windows environment. For
example, one CA can service requests from multiple domains, or multiple CAs can serve a single
domain. CA hierarchies with stand-alone CAs can even span multiple Active Directory forests.
If possible, take your PKI requirements into account when you design your Active Directory
infrastructure. Active Directory and PKI technology impact each other in the following ways:
Enterprise CAs are bound to the forest. As a result, enterprise CAs can only issue
certificates to computers and users in the forest. In addition, you cannot change the name of
the CA or the computer after it is deployed. Moreover, the computer cannot be removed from
the domain or forest. Because much of the security of an organization is established at the
forest level, the security of an enterprise CA is connected to the forest in which it is located.
For this reason, each forest requires its own enterprise CAs.
Note
If certificates from stand-alone CAs are published to Active Directory, these
stand-alone CAs cannot be renamed or removed from the forest without their certificates
becoming invalid. However, you can rename stand-alone CAs that belong to workgroups
without impacting the status of their certificates.
Certificate storage affects the size of your directory. If you store certificates in user
objects, the size of the directory increases and replication time might increase. Because the
userCertificate attribute contains data about all the user certificates, the addition of a
certificate to that multivalued attribute causes Active Directory to replicate attribute data for all
certificates.
Complications such as failure to recognize the user or the certificate can occur.
This happens if you do not apply a consistent naming structure for both your distinguished
names (also known as DNs) and your user principal names (UPNs).
Enterprise CAs rely on the existence of an Active Directory schema (the set of
definitions for the universe of objects that can be stored in a directory; for each
object class, the schema defines which attributes an instance of the class must have,
which additional attributes it can have, and which other object classes can be its
parent object class). If your schema is based on Windows 2000 Active Directory, you
might need to extend it to support Windows
Server 2003 Certificate Services functionality, such as version 2 certificate templates.
For certificates with a long life, the availability of the CA services themselves is much less
important than the availability of the directory that holds the certificates and the certificate
revocation lists. If you integrate your CAs with Active Directory, your certificates and CRLs are
automatically published to the directory and replicated throughout the forest as part of the global
catalog.
Note
If you use Active Directory to publish and replicate information about CRLs throughout
your organization, be sure to review Active Directory replication schedules and policies in
order to ensure that this data is distributed in a timely manner.
Windows Server 2003 Certificate Services functions whether Active Directory in your organization
is based on Windows 2000 or Windows Server 2003. It also functions if your organization is
operating in mixed mode.

5.7 Configuring Public Key Group Policy


If you have an Active Directory environment, Group Policy allows you to link certificate services to
groups of users or computers based on their domain or organizational unit membership. You
must configure public key Group Policy in order to perform the following tasks:
Add trusted root certificates for groups of computers. You can define the following:
Which root CAs users can trust when verifying certificates.
Whether users are allowed to trust additional CAs of their own choosing.
The purposes for which certificates issued by each CA can be used.
Enterprise root CAs within your domain forest are automatically added to these policies.
Designate EFS recovery agent accounts. You can define an EFS recovery policy
within the scope of the policy object. If a recovery policy is defined, it is populated with the
certificates of the recovery agents.
In many organizations, users and computers are already organized into domains and
organizational units that are based on the organization structure, location, and job function. If your
organization has not already created an Active Directory domain structure, the best way for you to
take advantage of Public Key Group Policy is to define the groups of users and computers that
will use your Certificate Services and communicate this information to the Active Directory and
Group Policy administrators, so that they can address your public key requirements in their
planning.

5.8 Defining PKI Management and Delegation


It is important to define a PKI management model early in the process of designing your CA
infrastructure. This PKI management model must complement your existing security management
delegation plan and help you to meet Common Criteria requirements for role separation. To
ensure that a single individual cannot compromise PKI services, it is best to distribute
management roles across different individuals in your organization. This involves deciding which
individuals are to perform each of the following tasks:
Creating or modifying existing CAs
Managing certificate templates
Issuing cross certificates
Issuing or revoking user certificates
Configuring and viewing audit logs
You can use discretionary access control lists (DACLs) to manage CA permissions and delegate
CA management tasks.
Windows Server 2003 includes the following CA management roles:
Service Manager. Configures and manages Certificate Services for local users, assigns
certificate managers, and renews CA certificates.
Certificate Manager. Issues and revokes certificates.

Auditor. Audits the actions of local administrators, service managers, and certificate
managers.
The extent to which you separate roles depends on the level of security that you require for a
particular service. Assign the fewest possible rights to users in order to achieve the greatest level
of security. For example, you can adopt the following rules:
No user can assume the roles of both CA Administrator and Certificate Manager.
No user can assume the roles of both User Manager and Certificate Manager.
If you need stricter guidelines, you can include the following:
No user can assume the roles of both Auditor and Certificate Manager.
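One way to apply these rules is an automated check over role assignments. The forbidden pairs below come from the guidelines above; the user names and their assignments are invented for illustration:

```python
# Role-separation check: flag any user who holds a forbidden role pair.
FORBIDDEN_PAIRS = [
    {"CA Administrator", "Certificate Manager"},
    {"User Manager", "Certificate Manager"},
    {"Auditor", "Certificate Manager"},       # the stricter, optional rule
]

def separation_violations(assignments):
    violations = []
    for user, roles in assignments.items():
        for pair in FORBIDDEN_PAIRS:
            if pair <= set(roles):            # user holds both roles in the pair
                violations.append((user, tuple(sorted(pair))))
    return violations

assignments = {
    "alice": ["CA Administrator", "Auditor"],          # allowed combination
    "bob": ["Certificate Manager", "User Manager"],    # violates a rule
}
print(separation_violations(assignments))
# [('bob', ('Certificate Manager', 'User Manager'))]
```

A check like this could run whenever CA permissions (DACLs) are changed, so a conflicting assignment is caught before it takes effect.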
To facilitate this delegation process, you need to understand how various PKI administrative roles
align with Windows Server 2003 administrative roles. Table 16.1 lists the Windows Server 2003
administrative roles that correspond to each PKI administrative role.
Table 16.1 PKI Administrative Roles and Their Corresponding Windows Server 2003
Administrative Roles

PKI Administrative Role | Description | Windows Server 2003 Administrative Role
PKI Administrator | Configures, maintains, and renews the CA. | User
Backup Operator | Performs system backup and recovery. | Backup Operator on the server on which the CA is running
Audit Manager | Configures, views, and maintains audit logs. | Local Administrator on the server on which the CA is running
Key Recovery Manager | Requests retrieval of a private key stored by the service. | User
Certificate Manager | Approves certificate enrollment and revocation requests. | User
User Manager | Manages users and their associated information. | Account Operators (or person delegated to create user accounts in Active Directory)
Enrollee | Requests certificates from the CA. | Authenticated Users
The following table lists the actions that each PKI administrative role can perform.
Table Actions Performed by PKI Administrative Roles
(Each action is marked in the original table against the roles CA Admin, Certificate
Manager, Audit Manager, Backup Operator, Enrollee, and Local Server Admin.)
Install a CA
Configure a CA
Policy and exit module configuration
Stop/start service
Change configuration
Assign user roles
Establish user accounts
Maintain user accounts
Configure profiles
Renew CA keys
Define key recovery agent(s)
Define officer roles
Enable role separation
Issue/approve certificates
Deny certificates
Revoke certificates
Unrevoke certificates
Renew certificates
Enable, publish, or configure CRL schedule
Configure audit parameters
Audit logs
Back up system
Restore system
Read CA properties, CRL
Request certificate
Read CA database
Read CA configuration information
Read issued, revoked, and pending certificates

5.9 Defining CA Types and Roles

To plan your CA infrastructure, you need to understand the different types of CAs available with
Windows Server 2003 and the roles that they can play. Windows Server 2003 Certificate Services
supports the following two types of CAs:
Enterprise
Stand-alone
Enterprise and stand-alone CAs can be configured as either Root CAs or Subordinate CAs.
Subordinate CAs can further be configured as either Intermediate CAs (also referred to as a
policy CA) or Issuing CAs.
Before you create your CA infrastructure, you need to determine the type or types of CAs that you
plan to use, and define the specialized roles that you plan to have each CA assume.

5.9.1 Enterprise vs. Stand-Alone CAs

Enterprise CAs are integrated with Active Directory. They publish certificates and CRLs to Active
Directory. Enterprise CAs use information stored in Active Directory, including user accounts and
security groups, to approve or deny certificate requests. Enterprise CAs use certificate templates.
When a certificate is issued, the enterprise CA uses information in the certificate template to
generate a certificate with the appropriate attributes for that certificate type.
If you want to enable automated certificate approval and automatic user certificate enrollment,
use enterprise CAs to issue certificates. These features are only available when the CA
infrastructure is integrated with Active Directory. Additionally, only enterprise CAs can issue
certificates that enable smart card logon, because this process requires that smart card
certificates be mapped automatically to the user accounts in Active Directory.

Page 184 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

Stand-alone CAs do not require Active Directory and do not use certificate templates. If you use
stand-alone CAs, all information about the requested certificate type must be included in the
certificate request. By default, all certificate requests submitted to stand-alone CAs are held in a
pending queue until a CA administrator approves them. You can configure stand-alone CAs to
issue certificates automatically upon request, but this is less secure and is usually not
recommended, because the requests are not authenticated.
From a performance perspective, using stand-alone CAs with automatic issuance enables you to
issue certificates at a faster rate than you can by using enterprise CAs. However, unless you are
using autoissuance, using stand-alone CAs to issue large volumes of certificates usually comes
at a high administrative cost because an administrator must manually review and then approve or
deny each certificate request. For this reason, stand-alone CAs are best used with public key
security applications on extranets and the Internet, when users do not have Windows 2000 or
Windows Server 2003 accounts, and when the volume of certificates to be issued and managed
is relatively low.
You must use stand-alone CAs to issue certificates when you are using a third-party directory
service or when Active Directory is not available.
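As a rough illustration of the behavior just described, a stand-alone CA holds requests in a pending queue by default and treats automatic issuance as an explicit, less secure opt-in. The class below is a hypothetical Python model, not a Certificate Services API:

```python
# Hypothetical model of a stand-alone CA's request handling: requests are
# held pending by default; auto-issuance is an explicit, less secure opt-in,
# because stand-alone requests are not authenticated.

class StandAloneCA:
    def __init__(self, auto_issue=False):
        self.auto_issue = auto_issue
        self.pending = []      # requests awaiting administrator review
        self.issued = []

    def submit(self, request):
        if self.auto_issue:
            self.issued.append(request)   # issued immediately, unauthenticated
            return "issued"
        self.pending.append(request)      # default: wait for approval
        return "pending"

    def approve(self, request):
        """An administrator manually approves a pending request."""
        self.pending.remove(request)
        self.issued.append(request)
```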
Note
You can use both enterprise and stand-alone certification authorities in your organization.
Table 16.3 lists the options that each type of CA supports.

Table 16.3 Options for Enterprise vs. Stand-Alone CAs

Option | Enterprise CA | Stand-alone CA
Publish certificates in Active Directory and use Active Directory to validate certificate requests. | Yes | No
Take the CA offline. | No | Yes
Configure the CA to issue certificates automatically. | Yes | Yes
Allow administrators to approve certificate requests manually. | Yes | Yes
Use certificate templates. | Yes | No
Authenticate requests to Active Directory. | Yes | No

5.9.2 Root CAs

A root CA is the CA that is at the top of a certification hierarchy and must be trusted
unconditionally by clients in your organization. All certificate chains terminate at a root CA.
Whether you use enterprise or stand-alone CAs, you need to designate a root CA.
Because there is no higher certifying authority in the certification hierarchy, the subject of the
certificate issued by a root CA is also the issuer of the certificate. Likewise, because the
certificate chain terminates when it reaches a self-signed CA, all self-signed CAs are root CAs.
Windows Server 2003 only allows you to designate a self-signed CA as a root CA. The decision
to designate a CA as a trusted root CA can be made at either the enterprise level or locally, by
the individual IT administrator.
A root CA serves as the foundation upon which you base your certification authority trust model. It
guarantees that the subject public key belongs to the subject identity information that is contained
in the certificates it issues. Different CAs might also verify this relationship by using different
standards; therefore it is important to understand the policies and procedures of the root
certification authority before choosing to trust that authority to verify public keys.
The root CA is the most important CA in your hierarchy. If your root CA is compromised, every
other CA and certificate in your hierarchy might have been compromised. You can maximize the
security of the root CA by keeping it disconnected from the network and using subordinate CAs to
issue certificates to other subordinate CAs or to end users.
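Because the issuer and subject of a root certificate are the same entity, code that walks a certificate chain can recognize the root with a simple comparison. The sketch below uses plain dictionaries and hypothetical CA names rather than a real cryptographic API:

```python
# Minimal sketch: scan a chain of {"subject", "issuer"} records; the
# self-signed certificate (issuer == subject) marks the root CA.

def find_root(chain):
    """Return the subject of the chain's root certificate, or None if absent."""
    for cert in chain:
        if cert["issuer"] == cert["subject"]:   # self-signed => root CA
            return cert["subject"]
    return None

# Hypothetical three-level hierarchy: issuing CA -> policy CA -> root CA.
chain = [
    {"subject": "IsCA01",  "issuer": "PolCA01"},
    {"subject": "PolCA01", "issuer": "RtCA01"},
    {"subject": "RtCA01",  "issuer": "RtCA01"},  # terminates the chain
]
```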

5.9.3 Subordinate CAs

CAs that are not root CAs are considered subordinate. The first subordinate CA in a hierarchy
obtains its CA certificate from the root CA. This first subordinate CA can, in turn, use this key to
issue certificates that verify the integrity of another subordinate CA. These higher subordinate
CAs are referred to as intermediate CAs. An intermediate CA is subordinate to a root CA, but also
serves as a higher certifying authority to one or more subordinate CAs.
An intermediate CA is often referred to as a policy CA because it is typically used to separate
classes of certificates that can be distinguished by policy. For example, policy separation includes
the level of assurance that a CA provides or the geographical location of the CA to distinguish
different end-entity populations. A policy CA can be online or offline.
Note
Most organizations use one root CA and two policy CAs: one to support internal users
and one to support external users.
The next level in the CA hierarchy usually contains the issuing CA. The issuing CA issues
certificates to users and computers and is almost always online. In many CA hierarchies, the
lowest level of subordinate CAs is replaced by RAs, which can act as an intermediary for a CA by
authenticating the identity of a user who is applying for a certificate, initiating revocation requests,
and assisting in key recovery. Unlike a CA, however, an RA does not issue certificates or CRLs; it
merely processes transactions on behalf of the CA.

5.9.4 Using Hardware CSPs

Hardware CSPs can support a wide range of cryptographic operations and technologies. Keys
stored in tamper-resistant hardware crypto-devices are more secure than keys stored on local
computer hard disks. Therefore, keys stored in hardware cryptographic devices can have key
lifetimes that are longer than keys stored by software CSPs on hard disks.
Note
Another advantage to using hardware CSPs is that the key material is kept outside the
memory of the computer and within the hardware device. This makes it impossible to access
the key of the CA by means of a memory dump.
If you determine that a hardware CSP is too costly, consider using smart cards for key storage.
When you store cryptographic keys on a smart card, no one in your organization can issue or
revoke certificates without the appropriate smart card together with the correct personal
identification number (PIN).
If you choose to use hardware cryptographic service providers for CA private key storage, you
must ensure that the hardware device is physically secured, or at least back up the operator
cards or tokens. You might, for example, keep it in a highly secured area in the computer room of
your company, or lock it in a safe.

5.10 Establishing a CA Naming Convention

Before you configure CAs in your organization, you must establish a CA naming convention.
Names for CAs cannot be more than 64 characters in length. You can create a name using any
Unicode character, but you might want to use the ANSI character set if interoperability is a
concern. The CA name does not have to be identical to the name of the computer.
The name that you specify when you configure a server to be a CA becomes, in Active Directory,
the common name of the CA, and is reflected in every certificate that the CA issues. For this
reason, it is important that you do not use the fully qualified domain name (FQDN) for the
common name of the CA. This way, malicious users who obtain a copy of a certificate cannot
identify and use the fully qualified domain name of the CA to create a potential security
vulnerability.
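The naming constraints above can be checked mechanically before you configure the CA. This validator is an illustration only (ASCII stands in for the ANSI character set, and the function is not part of Certificate Services):

```python
# Illustrative validator for the CA naming constraints described above:
# at most 64 characters, and ANSI-only characters when interoperability
# matters. ASCII is used here as a stand-in for the ANSI character set.

def check_ca_name(name):
    """Return a list of problems found with a proposed CA common name."""
    problems = []
    if len(name) > 64:
        problems.append("name exceeds the 64-character limit")
    try:
        name.encode("ascii")
    except UnicodeEncodeError:
        problems.append("non-ANSI characters may reduce interoperability")
    return problems
```

A name such as "CorpIssuingCA01" passes both checks; a 65-character name or a name containing accented characters is flagged.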

5.11 Selecting a CA Database Location

When you install a CA in your organization, you must specify a location for the database and log
files of the CA. You must also indicate whether you want to store the configuration information for
the CA. Storing the CA configuration information is helpful for backing up and, if necessary,
restoring your CA.
You can choose to copy the naming information and the certificate for the CA to the file system
(the configuration directory is automatically shared by means of a share named certconfig).
The CA database consists of the files listed in the following table.

Table CA Database Files

Database file | Purpose
<CA name>.edb | The CA store
edb.log | The transaction log file for the CA store
res1.log | Reservation log file to store transactions if disk space is exhausted
res2.log | Reservation log file to store transactions if disk space is exhausted
edb.chk | Database checkpoint file

Example: Designing a CA Infrastructure

After an organization defines its certificate requirements, it creates a linked hierarchy of
certification authorities to enable it to distribute certificates as needed, and to validate or reject
certificates as appropriate.
In creating this CA infrastructure, the organization takes the following elements into account:
The security administration model of the organization. For example, security
administration is managed centrally from the headquarters of the organization, but individual
business units create and support their own security requirements as needed for individual
projects and business relationships. Some units operate autonomously, but report back to
corporate IT.
The Active Directory infrastructure of the organization. Because the organization has
a single-forest logical structure, the CA infrastructure design is simple. The existing single-
forest structure allows them to set up CAs, based on geography and bandwidth, to serve
clients in multiple domains. For example, one or more common CAs support clients in offices
on opposite coasts.
Potential use of a third-party CA. The organization is concerned about IT costs and
also prefers to manage its own security infrastructure. It addresses both concerns by creating
and administering its own CA infrastructure. When joint venture business partners deploy
PKIs, it is possible to integrate the two CA infrastructures without having to rely on a third-
party CA.
Although the organization deploys Active Directory, it places a stand-alone root CA in a
workgroup, rather than in the domain, for increased security. Also, it keeps this root CA offline
and in a secure location that can only be accessed by an administrator who is authenticated by
means of a smart card.
Directly below the root CA, the organization adds three policy CAs. One CA signs all certificates
that have been issued to meet the high security standards of the organization, including software
code signing, smart card logon, and Internet authentication certificates. The second CA signs all
certificates that have been issued to meet the medium security standards of the organization,
such as e-mail and EFS certificates. The third signs certificates for the CAs that issue certificates
to external partners. These are also offline.
Figure shows the CA infrastructure for the organization.
Figure Example of a CA Infrastructure of an Organization
The following table summarizes the configuration of these CAs.

Table CA Configuration

CA | Name | State | Role | Domain
Root CA | RtCA01 | Offline | Stand-alone | None
Internal medium security policy | PolCA01 | Offline | Stand-alone | None
Internal high security policy | PolCA02 | Offline | Stand-alone | None
External high security policy | PolCA03 | Offline | Stand-alone | None
Internal medium security issuing 1 | IsCA01 | Online | Member server | Corp
Internal medium security issuing 2 | CA06 | Online | Member server | Corp
Internal medium security issuing 3 | CA07 | Online | Member server | Corp
Internal high security issuing 2 | CA08 | Online | Member server | Corp
Internal high security issuing 3 | CA09 | Online | Member server | Corp
External high security issuing 1 | CA01 | Online | Member server | Corp

5.12 Overview of Smart Card Deployment

Most organizations use passwords to manage access to computer networks and resources.
However, some users set weak passwords, write passwords down in insecure locations, or forget
their passwords and require help desk assistance for password reset. For this reason, passwords
alone might not provide the level of security and manageability that your organization requires.
Smart card support in Microsoft Windows Server 2003, Standard Edition; Windows
Server 2003, Enterprise Edition; and Windows Server 2003, Datacenter Edition operating
systems provides users with stronger credentials than even the most complex passwords. If you
use, manage, and deploy smart cards properly, you can enhance the security of your
organization and reduce your support costs.
Smart cards offer the following benefits:
Protection. Smart cards provide tamper-resistant storage for private keys and other
data. If a smart card is lost or stolen, it is difficult for anyone except the intended user to use
the credentials that it stores.
Isolation. Cryptographic operations are performed on the smart card itself rather than on
the client or on a network server. This isolates security-sensitive data and processes from
other parts of the system.
Portability. Credentials and other private information stored on smart cards can easily be
transported between computers at work, home, or other remote locations.
The number and variety of smart card-enabled applications are growing to meet the needs of
organizations that want to rely on smart cards to enable secure authentication and to facilitate
services.
Before you can deploy smart cards in your organization, you must have a public key infrastructure
(PKI) in place. Next, you need to identify applications to enable for use with smart cards, and plan
how to implement and support a smart card infrastructure before you can take advantage of the
security benefits of smart cards.

5.12.1 Process for Planning a Smart Card Deployment

Planning a smart card deployment involves making decisions about technical standards,
hardware purchases, smart card management, and the logistics of smart card distribution.
Figure shows the process for planning a smart card deployment.
Figure Planning a Smart Card Deployment

5.12.2 Smart Card Fundamentals

Windows Server 2003 supports a variety of secure smart card applications and business
scenarios. Before you begin to plan your smart card deployment, it is important to understand the
basic components of smart card technology.

5.12.3 Components of a Smart Card Infrastructure

A number of hardware and software components are required in order to support a smart card
infrastructure.
Certificates: Digital data that securely bind a public key to the entity that holds the corresponding private key.
Certification authorities: Trusted entities or services that issue digital certificates.
Active Directory: The Windows Server 2003 directory service that serves as a repository for account information, primarily user credentials, security group memberships, and certificate templates. In addition, you can also use the Active Directory directory service to store certificates, certificate revocation lists, and delta certificate revocation lists, and to publish root certification authorities (CAs) and cross-certificates.
Smart cards: Hardware tokens containing integrated processors and memory chips that can be used to store certificates and private keys and to perform public key cryptography operations, such as authentication, digital signing, and key exchange.
Smart card readers: Devices that connect a smart card to a computer. Smart card readers can also be used to write certificates to the smart card.
Smart card software: The software provided by the smart card vendor to manage smart cards. In some cases, organizations might choose to create their own software tools if customized functionality is required.

5.12.4 Creating a Plan for Smart Card Use

Before deploying smart cards in your organization, you must determine which processes, users,
and groups of users require smart cards.
Figure shows the process for creating a plan for smart card use in your organization.
Figure Creating a Plan for Smart Card Use

5.12.5 Identifying the Processes That Require Smart Cards

A smart card deployment can help your organization meet numerous sensitive business
requirements. You can use smart cards for any or all of the following processes:
Interactive user logons, including remote access connections to the network
Administrator logons
Third-party authentication across the Internet
Signing and encrypting e-mail
Evaluate additional equipment and administrative costs, procedures, and changes to user work
patterns that each smart card-enabled process requires. Ensure that the benefits of deploying
smart cards for each process outweigh the costs from hardware, administration, and potential
user difficulties.

5.12.6 Interactive User Logons

Use smart cards for interactive user logons if you want to enforce the use of secure encrypted
logon credentials. If you require users to log on by using smart cards, you do not have to worry
about the quality and security of user passwords.
Requiring smart cards for interactive user logons requires additional network administration for
smart card distribution and support. This is problematic for organizations that are spread across
different geographic locations and that do not have network or physical security personnel in each
location to administer and support smart cards.
You can also use smart cards for remote access logons, and for Terminal Services and shared
client logons.

5.12.7 Remote Access Logons

Local interactive logons require that users have both physical access to a computer that is a
logical member of the organization and a network password. Remote users, however, can log on
from any computer outside of the organization. If a malicious user obtains the password of a
remote user, he or she can use it to access network resources from any computer. For this
reason, conventional password-based remote access logons are more vulnerable to attack than
local interactive logons.
You can secure the remote access process by requiring users to use smart cards when they
connect to the corporate network by means of remote access logon. This solution prevents
hackers from using the remote access dial-up or Internet connections to compromise the network,
even if they have physical access to laptops or home computers.
One problem with requiring the use of smart cards for remote access logons is the fact that
remote users often own computer hardware and software that does not conform to minimum
corporate standards and, therefore, might not support smart card use. This complicates the
process of administering and supporting smart cards for remote access logons. Also, users might
experience longer logon times when they use smart cards, especially over slow dial-up
connections.

5.12.8 Terminal Services and Shared Clients

If your organization is deploying Terminal Services, consider using smart cards for kiosk
computers that are shared by multiple users. This can improve security in environments in which
multiple users share a single computer terminal, relocate frequently, and do not use the
conventional logoff procedure every time they move away from the terminal. This is often the
case in hospitals, factories, or other businesses.
Administrator Logons
There is greater potential for harm to the network when administrator credentials, as opposed to
user credentials, are misused. As a result, preventing unauthorized users from using
administrative credentials to access their network is an important security priority for most
organizations. Another vulnerability is introduced when you allow people to perform network
administration tasks by using generic administrator accounts that are shared by multiple users;
this limits the ability of the organization to track which user performs a specific action. Allowing
administrators to log on by using administrative credentials when they are not performing
administrative tasks also creates a significant security risk because attackers who compromise an
administrator account can do a greater amount of damage to the system.
By requiring individuals to use smart cards to perform administrative tasks, you can significantly
reduce the possibility that unauthorized users can gain administrative access to your network.
You can use smart cards for administrator logons in the following two ways:
By using smart cards for individual administrative operations.
By using smart cards for an administrative shell.
In most cases, the best solution is to use a combination of these two strategies. For example, you
can require that all administrators use smart cards to access data center servers. If the
administrator is using a Windows 2000 or Windows XP client, he or she can use a smart card and
administrative credentials to open a Terminal Services client session in order to log on to the data
center servers.
Important
It is not possible to utilize multiple credentials stored on a single smart card. Therefore,
administrators who have more than one domain account require a smart card for each
account.

5.12.9 Using Smart Cards for Individual Administrative Operations

When you use smart cards for individual administrative operations, administrators log on by using
their standard user credentials, and then use administrative credentials when they need to
perform specific administrative operations. For example, you might require an administrator to log
on by using a smart card in order to install Active Directory on a member server. Administrative
credentials apply only to the specific operation, which helps to protect the security of the system.
An administrator can also use smart cards to perform individual administrative operations on
target computers running versions of the Windows operating system earlier than Windows XP or
Windows Server 2003, as long as they use a smart card to log on to a computer running
Windows XP or Windows Server 2003.
Not all administrative tools work with smart cards. Therefore, before you implement this solution,
test it to ensure that you can perform the required administrative tasks and use the necessary
administrative tools. If some of your required tools and tasks are incompatible with using smart
cards, you must communicate to your administrators which tasks require smart cards and which
must be completed by using administrative credentials.
Using Smart Cards for an Administrative Shell
When you use smart cards for an administrative shell, administrators log on by using user
credentials. Then, when the administrator needs to perform administrative operations, he or she
logs on by using a smart card and administrative credentials to open a Terminal Services client
session. The administrator then performs the required administrative operations within the
administrative shell.
This approach simplifies the process of performing multiple sequential administrative operations
during a single session. However, the server that has Terminal Server enabled must be running
Windows Server 2003. Although the Windows XP Terminal Services client can run on
Windows 2000, the server-side support is only provided by Windows Server 2003.
Authenticating Third Parties
Use smart cards for third-party authentication if you want to verify that queries, orders, or other
communications originate from the appropriate individual or organization and that they conform to
preestablished standards, such as purchase order limits. For example, banks that allow users to
check their transaction histories or pay bills online, and distributors that accept purchase orders
over the Internet can benefit from using smart cards for third-party authentication.
Deploying smart cards to third parties, however, requires careful administration. For example, you
must ensure that attackers cannot obtain smart cards and guess the PIN to gain unauthorized
access to the system. Also, if the customer services that are based on smart card authentication
are an important part of your business, you need to ensure that the services are always available.
If you do not administer your third-party smart card authentication process effectively, it can have
a negative impact on your Internet business transactions.
Signing and Encrypting E-mail
You can use smart cards to enable digital signing and the encryption of electronic
communications such as e-mails or contracts.
If you choose to deploy smart cards for digital signing, you need to determine the types of e-mail
messages that require smart card-validated digital signatures. Use smart cards for the digital
signing of e-mail messages where it is important to verify the identity of the sender and that the
message has not been tampered with while in transit. Digitally signing routine e-mails creates
unnecessary network traffic and can slow down ordinary communication between users. Note
that when you use smart cards for the digital signing of sensitive documents, such as legal
contracts or purchase orders, you must configure the certificate policies and extensions that
control smart card certificate use.
Depending on the types of documents that you want users to sign digitally, you also need to
make additional decisions about smart card-enabled digital signatures, such as whether
assistants are allowed to sign documents on behalf of their superiors, whether send and read
receipts are required, and how the receipts are to be stored.

5.12.10 Defining Smart Card Service Level Requirements

Before you deploy smart cards, establish service level agreements to help your IT organization
align smart card performance with the objectives of the organization in areas such as reliability,
response times, and support procedures.
For example, you need to define smart card service level standards for:
The types of identification required to obtain a smart card. You might choose to
require a specific type of personal identification, such as a driver's license or other photo ID,
in order for a user to obtain a smart card.
Unique service guarantees for special classes of employees, such as executives or
roaming employees. Define whether certain classes of employees are permitted to operate
under support agreements that differ from those of other users.
Acceptable time needed for users to log on. It is best to ensure that the different steps
and time needed for smart card logon time are comparable to the steps and time needed for
conventional password logons.
Acceptable logon times for remote access users. Remote access logon times are
more vulnerable to slowdowns than local network connections, especially if users have slow
dial-up access connections. You might need to upgrade your remote access configuration in
order to ensure acceptable logon times for remote users.
Remote access exceptions. The computer configurations of some users might not be
compatible with smart cards, and remote users might lose or forget their smart cards. Identify
the circumstances, if any, in which remote users are allowed to use remote access without
using a smart card.
Number of unsuccessful PIN entries allowed. Do not allow an unlimited number of
attempts to enter a PIN. Allowing three or four attempts is generally adequate.
PIN reset requirements. Decide whether users are allowed to reset their own PINs, or
whether they need to provide personal identification to security or help desk personnel to
have their PINs reset. If you decide that users need to provide positive identification, decide
whether the user must present the identification in person, such as a photo ID, or
demonstrate knowledge of a predefined secret, such as a mother's maiden name.
Service guarantees to users who cannot use their smart cards because of loss,
damage, or blocking. This includes:
Establishing when and how users can regain access to the network.
Determining whether to restrict these users' access to the network to certain
areas, or to allow them access to any areas of the network that were previously
accessible to them.
Defining these limits helps you to establish user expectations and support procedures.
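The PIN-attempt guideline above amounts to a small retry counter that blocks the card after a fixed number of failures. The class below is a hypothetical Python sketch of that policy, not real smart card firmware:

```python
# Sketch of the recommended PIN policy: a small, fixed number of attempts
# (three by default), after which the card blocks itself and must go
# through the PIN-reset procedure defined in the service level standards.

class PinGuard:
    def __init__(self, correct_pin, max_attempts=3):
        self.correct_pin = correct_pin
        self.max_attempts = max_attempts
        self.remaining = max_attempts
        self.blocked = False

    def verify(self, pin):
        if self.blocked:
            return "blocked"
        if pin == self.correct_pin:
            self.remaining = self.max_attempts  # successful entry resets the counter
            return "ok"
        self.remaining -= 1
        if self.remaining == 0:
            self.blocked = True                 # card now requires a PIN reset
        return "wrong"
```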
Document your service level standards. You will need to apply these standards in your smart card
operations plan, test them in your lab and pilot deployments, communicate them to help desk
personnel and to your users, and include them in your support and maintenance plan.

5.12.11 Selecting Smart Card Hardware


Single smart cards and smart card readers are relatively inexpensive. However, when you deploy
smart cards and smart card readers to hundreds or even thousands of users, equipment cost
becomes an important consideration. You must evaluate smart card hardware in order to select
the devices that best meet the needs of your organization at the best price.
Figure shows the process for selecting smart card hardware.
Figure Selecting Smart Card Hardware

Page 193 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

Creating a Smart Card Specification


A wide variety of smart cards and smart card readers are available to choose from. Windows
Server 2003 is designed to work with any cryptographic smart card that has an associated
CryptoAPI cryptographic service provider. The physical characteristics of smart cards and
readers are governed by published standards. Cards from any manufacturer that adheres to the
ISO 7816 standard will likely be compatible with the reader you select. Be sure, however, to test
smart cards and smart card readers to verify compatibility before deploying them in your
production environment.
Because smart cards both store and process data, it is important to create a specification for your
smart cards. Creating a smart card specification involves making decisions about the following:
Smart card hardware type
Amount of memory required
Intended useful smart card lifetime
Intended smart card roles
Smart card reader hardware
Smart card management software
Smart Card Type
Two types of smart cards are available for use with Windows Server 2003 and Windows XP:
conventional credit card-shaped contact cards and smaller token-style cards that plug directly
into the USB port of a computer.
Note
Another type of smart card, called a contactless smart card, is not supported by
Windows XP or Windows Server 2003.
Credit card-shaped contact cards
Credit card-shaped smart cards are available in three-volt and five-volt versions. They are the
most common smart card solution, in part because they resemble the corporate card keys or
badges that many organizations use.
Note
You can specify that your smart cards be screen-printed with your corporate logo and a
picture of the user. If you plan to add graphics to smart cards, ask your vendor about the
methods available for bulk printing and customizing cards.
If your organization uses card keys or badges, you can apply smart card chips to the existing card
key or badge as a sticker or "skin." However, your card keys or badges need to fit into your smart
card readers with a minimal amount of friction; therefore, be sure to include the physical thickness


of the smart card in your specifications. This is an important factor to consider when you select a
vendor to manufacture the stickers, as the material thickness for smart card chips can vary.
Token-style smart cards
Token-style smart cards are typically the size of a house key or automobile key. They plug
directly into a USB port, providing a more compact solution than separate cards and readers.
Token-style smart cards are ideal for laptop users who want to carry a minimum number of
peripherals, or for workers who use a number of different computers. However, you cannot use
token-style smart cards if your computers do not have USB connections, or if the USB
connections are full or difficult to access.
Memory
Your smart card requires enough memory to store the certificate of the user, the smart card
operating system, and additional applications. Smart cards run embedded operating systems,
and in many cases, a form of file system in which data can be stored. To enable Windows smart
card logon, you must be able to program the card to store a user's key pair, retrieve and store an
associated public key certificate, and perform public and private key operations on behalf of the
user.
To calculate the amount of memory that you need, determine the space requirements for:
User certificates. A certificate typically requires about 1.5 kilobytes (KB). A smart card
logon certificate with a 1,024-bit key typically requires 2.5 KB of space.
The smart card operating system. The Windows for Smart Cards operating system
requires about 15 KB.
Applications required by the smart card vendor. A small application requires between
2 KB and 5 KB.
Your custom applications.
Future applications.
Figure shows the additional space requirements of a typical 32 KB smart card. The smart card
operating system requires about 15 KB, leaving 17 KB for the file system, which includes space
for the card management software, the certificate, and any other custom applications.
Figure Memory Use on a 32 KB Smart Card
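The arithmetic behind this figure can be sketched as a simple budget, using the approximate sizes quoted above (all values are the text's estimates, not vendor data):

```python
# Memory budget for a hypothetical 32 KB smart card, using the approximate
# component sizes quoted in the text above.
CARD_CAPACITY_KB = 32
requirements_kb = {
    "smart card operating system": 15.0,       # Windows for Smart Cards, ~15 KB
    "logon certificate (1,024-bit key)": 2.5,  # typical smart card logon certificate
    "vendor application": 5.0,                 # a small application: 2-5 KB
}

used_kb = sum(requirements_kb.values())
free_kb = CARD_CAPACITY_KB - used_kb
print(f"Used: {used_kb} KB; free for future applications: {free_kb} KB")
```

With these estimates, 22.5 KB is committed and 9.5 KB remains for custom and future applications, which is why multipurpose deployments usually specify larger cards.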

It is possible to configure smart card file systems into public and private spaces. For example,
you can define segregated areas for protected information, such as certificates, e-purses, and
entire operating systems, and mark this data as Read Only to ensure the security of the smart
card and restrict the amount of data that can be modified. In addition, some vendors provide
cards with sub-states, such as Add Only, which is useful for organizations that want to restrict the
ability of a user to revise an existing credential, and Update Only, which is useful for
organizations that want to restrict the ability of a user to add new credentials to a card.


The data capacity available on smart cards is increasing as smart card technology improves.
However, storage space on smart cards is expensive. Card vendors often restrict the amount of
storage available to individual applications so that multiple applications or services can be stored
on the card. Therefore, in your vendor specification, define all of your anticipated present and
future card usage requirements and the memory requirements for each certificate and application
that you require. If you plan to use your smart cards for multiple purposes, such as physical
access to facilities and user logon, or to store additional data, you must increase your memory
requirements. Also, when planning storage space on the chip, allocate space for applications that
you are planning for future implementation.
Note
Windows Server 2003 and Windows XP do not support the use of multiple certificates on
a smart card.
Life Expectancy
You must define the length of time for which you will use a smart card before you replace or
upgrade it. Contact your vendor for information about smart card life expectancy based on normal
wear and tear.
In addition, you must take into account your current and future space requirements, including the
anticipated need for additional applications and certificates with larger keys. Anticipate adding
new applications, and potentially issuing new smart cards, over an 18-24 month card lifecycle. In
the future, vendors are likely to introduce smart cards with more memory and other
enhancements for a lower cost.
Also, determine whether you want your smart cards to be reusable in the event that users leave
the organization. Reusing smart cards reduces the costs associated with issuing new ones.
However, the cost associated with removing existing data and writing new data and applications
is often equal to or more than the cost of preparing and issuing new smart cards.

5.12.12 Smart Card Roles


You can use smart cards for one of three roles. Determine how many smart cards you need to
issue for each of the following roles:
Enrollment card. Issue enrollment cards to individuals who enroll smart cards on behalf
of other users. Enrollment cards have a special enrollment agent certificate. Issue the
smallest possible number of enrollment cards that will enable you to enroll all required smart
card users. This protects the security of your system.
User cards. These are the standard cards that you issue to each user. Two types of user
cards are available:
Permanent. Permanent user cards are cards that employees carry with them.
They contain the cardholder's credentials, certificates, data, and applications. They might
also have a photograph or a decal applied to the card. In a Windows Server 2003
environment, the permanent card points to a permanent certificate server.
Temporary. Temporary cards are limited-use cards that are issued to guests,
temporary employees, and users who have forgotten their permanent cards. They point
to a temporary certificate server and can have a limited lifetime.

5.12.13 Evaluating Smart Cards and Readers


You need to evaluate your prospective smart cards and readers throughout your smart card
deployment process. Initially, obtain and evaluate a variety of smart cards and smart card readers
to determine which vendors provide the best balance of specifications, performance, and price.
As you deploy your smart card infrastructure, continue to evaluate your hardware to make sure
that it performs as expected.
The smart cards and smart card readers that you deploy and the smart card production
processes that you develop are likely to be used many times every day. Therefore, you must ensure
that your hardware is reliable. The service level agreements that you created when you defined


your smart card requirements provide objective standards for measuring and documenting
satisfactory performance.
To minimize user dissatisfaction and maximize manageability, be sure to test the following:
Installation and removal of the smart card software. Make sure the smart cards work
after you install the software. If the installation is faulty, use the Windows Event Viewer to
access error messages that might explain the cause of the failure.
Fit of smart cards in readers. Smart card dimensions, such as thickness, are governed
by international standards. However, some organizations have found that, if the card-to-
reader interface is too tight or abrasive, the cards deteriorate more rapidly.
Reader reliability. To test reliability, create an environment that includes systems that
have slower CPUs and less memory than computers in your organization. Test how well your
smart card readers operate in this environment, as well as in other configurations. You can,
for example, run a number of memory-intensive applications or use the smart cards and
readers over slow connections to evaluate how each combination of smart cards and readers
functions in these conditions. Your smart card service level agreements provide objective
criteria for acceptable and unacceptable performance.
Card production. Slow card production processes can impede your deployment. If your
organization is unable to produce cards efficiently, use a third-party vendor to produce smart
cards.
Ability to deploy multiple types of cards and readers. If you are unable to efficiently
deploy the types of cards, readers, and servers that you require, your service might be
inconsistent and inefficient.
Establishing Certification Authorities
It is important to ensure that your public key infrastructure can support the issuance and
verification of smart card certificates for the users and applications that you have identified. To
ensure that your PKI can support a smart card infrastructure, you must do the following:
Configure your certification authorities (CAs) as enterprise CAs. Windows Server 2003
smart card certificates require enterprise CAs.
Important
CAs that issue smart card certificates need to be trusted in the CA hierarchy and
must be continuously online while users enroll.
Make sure that your issuing CAs are installed on servers that have enough storage and
central processing power to support the smart card users in your organization.

5.12.14 Planning Smart Card Certificate Templates


You can use any of the following types of Windows Server 2003 certificate templates to enable
smart card use in the Windows Server 2003 PKI:
Enrollment Agent. Allows an authorized user to serve as a certificate request agent on
behalf of other users.
Smart Card User. Enables a user to log on and sign e-mail.
Smart Card Logon. Enables a user to log on by using a smart card.
You can also create your own certificate templates to serve multiple purposes. For example, the
smart card logon certificate template is designed for smart card logon only. If you intend to use
your smart card infrastructure to support multiple applications, you can choose multipurpose
templates instead. Multipurpose templates generate certificates that you can use for multiple
applications, such as smart card logon and e-mail signing.
As part of your planning for smart card certificate templates, you need to establish values for
public keys, certificate lifetimes, and certificate renewal policies. These values are interrelated.
For example, if you select a larger key value, you can implement a longer certificate lifetime. Or,
you can use a small public key value if a certificate has a relatively short lifetime. Note, however,
that the amount of memory that is available on the smart cards that you select also limits the size
of the public keys that you can use.
Important


Many organizations pre-enroll users for smart card certificates several weeks before they
distribute smart cards to users. The certificate lifetime is determined by the date that you
issue the certificate, not the date that you distribute the card to the user. Therefore, factor any
distribution delays into your certificate lifetime and renewal strategy.
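The note above can be illustrated with dates; the one-year lifetime and four-week distribution delay below are example values, not recommendations:

```python
from datetime import date, timedelta

# The certificate lifetime runs from the issuance date, not from the date the
# card reaches the user, so any distribution delay shortens the usable lifetime.
issued = date(2003, 1, 1)
lifetime = timedelta(days=365)           # example: a one-year certificate
distribution_delay = timedelta(weeks=4)  # example: cards handed out a month later

expires = issued + lifetime
usable_days = (expires - (issued + distribution_delay)).days
print(f"Expires {expires}; the user holds a valid certificate for {usable_days} days")
```

Here a four-week delay costs the user 28 of the 365 days, which is the kind of loss to budget for when you set lifetimes and renewal cycles.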
A Windows Server 2003 CA allows you to select a certificate public key length from 384 bits for
minimal security to 16,384 bits for maximum security. For typical logon applications, a 1,024-bit
key is adequate.
You can establish certificate lifetimes that are as long or as short as you need, and you can
configure certificates to be nonrenewable, renewable a finite number of times, or renewable
indefinitely.
To define public key values and certificate lifetimes and renewal policies, take into account:
The physical capacity of your smart cards. Most of the smart cards that are available
today have adequate space for all but the largest certificates.
How you define acceptable logon times. Public key-based authentication often takes
longer than authentication without certificates.
The nature of the business relationship. Smart card certificates issued to permanent
employees usually warrant a longer lifetime and renewal cycle than certificates issued to
short-term workers or to nonemployees.
The level of security that you want to enforce. Highly sensitive operations warrant
larger public key values and, typically, shorter certificate lifetimes.

5.12.15 Establishing Issuance Processes


You must establish a plan for the issuance of the smart cards and for the writing of smart card
certificates to the cards. This involves making decisions about the following:
Smart card distribution requirements
Certificate enrollment options
Physical distribution of smart cards
A user preparation plan

5.13 Software Update Services Overview


Prior to Software Update Services (SUS), administrators had to continually check the Windows Update web site for operating
system patches, and then download, test, and distribute patches manually. SUS streamlines and
automates these processes.
By using SUS, you can download the latest patches to an intranet server, test the patches in your
operating environment, select the patches you want to deploy to specific computers, and then
deploy the patches in a timely and efficient manner. SUS provides dynamic notification of critical
updates to Windows-based computers, whether or not they have Internet access, and it provides
a simple, automatic solution for distributing critical updates to networked clients and servers.

5.13.1 Implementing a SUS Solution


Deploying a software update solution involves determining your security and scalability needs
and deciding how to stage content before distribution. You can then deploy and configure the
server and client components of SUS to keep the computers in your organization updated and
secure. Figure 5.1 illustrates the process of deploying SUS.
Figure Deploying SUS


5.13.2 SUS Security Features


The server running SUS contains all the synchronization service and administrative tools for
managing updates. Using Hypertext Transfer Protocol (HTTP), it responds to
requests for approved updates made by the client computers connected to it. SUS can download
packages from either the public Microsoft Windows Update servers or from another intranet
server running SUS. During these downloads, no server-to-server authentication is carried out. All
content is checked to verify that it has been correctly signed by Microsoft. Any content that is not
correctly signed is not trusted and not applied.
The administration of servers running SUS is completely Web-based. You can administer the
server by using either a standard HTTP connection or a Secure Sockets Layer (SSL) enabled
HTTPS connection.
Additional SUS security provisions follow:
SUS benefits from the inherent security of NTFS because SUS must be installed on a
hard disk that is formatted with NTFS.
If a proxy password is configured, SUS stores it securely as an LSA Secret.
Automatic Updates checks the cyclical redundancy check (CRC) on each update to confirm that
it was not tampered with en route.
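The integrity check described above can be sketched conceptually. This illustrates the idea of validating a downloaded payload against a published checksum; it is not the actual Automatic Updates implementation:

```python
import zlib

# Compare the CRC of a downloaded payload against the expected value from the
# update metadata; a mismatch means the payload changed en route.
def crc_matches(payload: bytes, expected_crc: int) -> bool:
    # Mask to 32 bits so the result is consistent across Python versions.
    return (zlib.crc32(payload) & 0xFFFFFFFF) == expected_crc

update = b"example update payload"
expected = zlib.crc32(update) & 0xFFFFFFFF

print(crc_matches(update, expected))             # payload intact
print(crc_matches(update + b"extra", expected))  # payload altered in transit
```

Note that a CRC detects accidental or casual corruption; the signature check mentioned above is what provides cryptographic assurance that content actually came from Microsoft.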
After you run SUS Setup, you must install and configure the IIS Lockdown tool 1.0 and the
URLScan security tool 2.0 on servers running Windows 2000 Server. For servers running Windows
Server 2003, these tools are automatically installed and run.

Section 6

6. Planning and Implementing an Active Directory Infrastructure

6.1 Introduction to Active Directory


6.2 Windows 2000 Domain Upgrade to Windows Server 2003
6.2.1 The role of the global catalog
6.2.2 Global catalog replication
6.2.3 Adding attributes
6.2.4 Customizing the global catalog
6.2.5 Global catalogs and sites
6.2.6 Universal group membership caching
6.2.7 To cache universal group memberships
6.3 Creating a new domain tree


6.3.1 Creating a new child domain


6.3.2 Creating a new forest
6.3.3 When to create a new forest
6.4 Operations master roles in a new forest
6.5 Adding new domains to your forest
6.6 Trust
6.6.1 Trusts in Windows NT
6.6.2 Trusts in Windows Server 2003 and Windows 2000 server operating systems
6.6.3 Trust protocols
6.6.4 Trust types
6.6.5 Trust direction
6.6.7 Trust transitivity

6.7 Organizational units


6.7.1 Organizational Unit Design Concepts
6.7.2 Organizational Unit Owner Role
6.7.3 Delegating Administration by Using OU Objects
6.7.4 Administration of Default Containers and OUs
6.7.5 Delegating Administration of Account and Resource OUs
6.7.6 Administrative Group Types
6.7.7 Creating Account OUs
6.7.8 Creating Resource OUs

6. Planning and Implementing an Active Directory Infrastructure

6.1 Introduction to Active Directory


The Active Directory directory service can be installed on servers running
Microsoft Windows Server 2003, Standard Edition; Windows Server 2003, Enterprise Edition;
and Windows Server 2003, Datacenter Edition. Active Directory stores information about objects
on the network and makes this information easy for administrators and users to find and use.
Active Directory uses a structured data store as the basis for a logical, hierarchical organization of
directory information.
This data store, also known as the directory, contains information about Active Directory objects.
These objects typically include shared resources such as servers, volumes, printers, and the
network user and computer accounts.
Security is integrated with Active Directory through logon authentication and access control to
objects in the directory. With a single network logon, administrators can manage directory data
and organization throughout their network, and authorized network users can access resources
anywhere on the network. Policy-based administration eases the management of even the most
complex network.

Active Directory also includes:


A set of rules, the schema, that defines the classes of objects and attributes contained in
the directory, the constraints and limits on instances of these objects, and the format of their
names.
A global catalog that contains information about every object in the directory. This allows
users and administrators to find directory information regardless of which domain in the directory
actually contains the data.
A query and index mechanism, so that objects and their properties can be published and
found by network users or applications.
A replication service that distributes directory data across a network. All domain
controllers in a domain participate in replication and contain a complete copy of all directory
information for their domain. Any change to directory data is replicated to all domain controllers in
the domain.

Schema
The set of definitions for the universe of objects that can be stored in a directory. For each object
class, the schema defines which attributes an instance of the class must have, which additional
attributes it can have, and which other object classes can be its parent object class.

Global catalog
A directory database that applications and clients can query to locate any object in a forest. The
global catalog is hosted on one or more domain controllers in the forest. It contains a partial
replica of every domain directory partition in the forest. These partial replicas include replicas of
every object in the forest, as follows: the attributes most frequently used in search operations and
the attributes required to locate a full replica of the object.
In Microsoft Provisioning System, the Exchange server maintains a list of global catalogs, and it
maintains a load balance across global catalogs.

Replication
The process of copying updated data from a data store or file system on a source computer to a
matching data store or file system on one or more destination computers to synchronize the data.
In Active Directory, replication synchronizes schema, configuration, application, and domain
directory partitions between domain controllers. In Distributed File System (DFS), replication
synchronizes files and folders between DFS roots and root targets.

Determining Your Active Directory Design Requirements


If your network environment is currently operating without a directory service, or if you need to
modify your current Active Directory infrastructure, complete the design process for your Active
Directory infrastructure. You must complete a comprehensive design of your Active Directory
logical structure before you deploy Active Directory. Thoroughly preparing your Active Directory
design is essential to a cost-effective deployment.
Logical Structure Design
Before you deploy Windows Server 2003 Active Directory, you must plan for and design the
Active Directory logical structure for your environment. The Active Directory logical structure
determines how your directory objects are organized, and provides an effective method for
managing your network accounts and shared resources. When you design your Active Directory
logical structure, you define a significant part of the network infrastructure of your organization.
To design the Active Directory logical structure, determine the number of forests that your
organization requires, and then create designs for domains, DNS, and organizational units.
Site Topology Design
After you design the logical structure for your Active Directory infrastructure, you must design the
site topology for your network. The site topology is a logical representation of your physical
network. It contains information about the location of Active Directory sites, the Active Directory


domain controllers within each site, and the site links that support Active Directory replication
between sites.
Domain Controller Capacity Planning
To ensure efficient Active Directory performance, you must determine the appropriate number of
domain controllers for each site and verify that they meet the hardware requirements for Windows
Server 2003. Careful capacity planning for your domain controllers ensures that you do not
underestimate hardware requirements, which can cause poor domain controller performance and
application response time.
Advanced Active Directory Features
Functional levels in Windows Server 2003 Active Directory allow you to enable new features,
such as improved group membership replication, deactivation and redefinition of attributes and
classes in the schema, and forest trust relationships. These features require that all domain
controllers within the participating domain or forest run Windows Server 2003. Part of the Active Directory design
process involves identifying the domain and forest functional levels that your organization
requires. To implement these Windows Server 2003 Active Directory features in your
organization, you must first deploy Windows Server 2003 Active Directory and then raise the
forest and domain to the appropriate functional level.
Determining Your Active Directory Deployment Requirements
The structure of your existing environment determines your strategy for deploying Windows
Server 2003 Active Directory. If you are creating an Active Directory environment and you do not
have an existing domain structure, you must complete your Active Directory design before you
begin creating your Active Directory environment. Then you can deploy a new forest root domain
and deploy the rest of your domain structure according to your design.
Windows Server 2003 Forest Root
To deploy Active Directory, you must first deploy a Windows Server 2003 forest root domain. To
do this, you must configure DNS, deploy forest root domain controllers, configure the site
topology for the forest root domain, and configure operations master roles.
Windows Server 2003 Regional Domains
If you are creating one or more new regional domains in a Windows Server 2003 forest, you must
deploy each regional domain after you deploy your forest root domain. To do this, you must
delegate a DNS zone and deploy domain controllers for each regional domain.
Windows NT 4.0 Domain Upgrade to Windows Server 2003
When you perform an in-place domain upgrade of Windows NT 4.0 domains, you can begin to
use Active Directory without making any modifications to your existing domain structure.
Alternatively, if you do not want to retain your existing domain structure, you can restructure your
Windows NT 4.0 domains to a Windows Server 2003 forest.

6.2 Windows 2000 Domain Upgrade to Windows Server 2003
Upgrading your Windows 2000 domains to Windows Server 2003 domains is an efficient,
straightforward way to take advantage of additional Windows Server 2003 features and
functionality. Upgrading from Windows 2000 to Windows Server 2003 requires minimal network
configuration and has little impact on user operations.

6.2.1 The role of the global catalog


A global catalog is a domain controller that stores a copy of all Active Directory objects in a forest.
The global catalog stores a full copy of all objects in the directory for its host domain and a partial
copy of all objects for all other domains in the forest, as shown in the following figure.


The partial copies of domain objects included in the global catalog carry only those attributes most
commonly used in user search operations. These attributes are marked for inclusion in the global
catalog as part of their schema definition. Storing the most commonly searched attributes of all domain
objects in the global catalog provides users with efficient searches without affecting network
performance with unnecessary referrals to domain controllers.
You can manually add or remove other object attributes to the global catalog by using the Active
Directory Schema snap-in.
A global catalog is created automatically on the initial domain controller in the forest. You can add
global catalog functionality to other domain controllers or change the default location of the global
catalog to another domain controller.
A global catalog performs the following directory roles:
Finds objects
A global catalog enables user searches for directory information throughout all domains in a
forest, regardless of where the data is stored. Searches within a forest are performed with
maximum speed and minimum network traffic.
When you search for people or printers from the Start menu or choose the Entire Directory
option within a query, you are searching a global catalog. Once you enter your search
request, it is routed to the default global catalog port 3268 and sent to a global catalog for
resolution.
Supplies user principal name authentication
A global catalog resolves user principal names (UPNs) when the authenticating domain
controller does not have knowledge of the account. For example, if a user's account is
located in example1.microsoft.com and the user decides to log on with a user principal name
of [email protected] from a computer located in example2.microsoft.com, the
domain controller in example2.microsoft.com will be unable to find the user's account, and
will then contact a global catalog to complete the logon process.
Supplies universal group membership information in a multiple domain
environment
Unlike global group memberships, which are stored in each domain, universal group
memberships are only stored in a global catalog. For example, when a user who belongs to a
universal group logs on to a domain that is set to the Windows 2000 native domain functional
level or higher, the global catalog provides universal group membership information for the
user's account at the time the user logs on to the domain.
If a global catalog is not available when a user logs on to a domain set to the functional level
of Windows 2000 native or higher, the computer will use cached credentials to log on the
user if the user has logged on to the domain previously. If the user has not logged on to the
domain previously, the user can only log on to the local computer. However, if a user is a
member of the Domain Admins group, the user can always log on to the domain, even when
a global catalog is not available.
Validates object references within a forest
A global catalog is used by domain controllers to validate references to objects of other
domains in the forest. When a domain controller holds a directory object with an attribute
containing a reference to an object in another domain, this reference is validated using a
global catalog.
6.2.2 Global catalog replication
Replication of the global catalog ensures that users throughout the forest have fast access to
information about every object in the forest. The default attributes that make up the global catalog
provide a baseline of the most commonly searched attributes. These attributes are replicated to
the global catalog as part of normal Active Directory replication.
The replication topology for the GC is generated automatically by the KCC. However, the global
catalog is replicated only to other domain controllers that have been designated as global
catalogs. Global catalog replication is affected both by the attributes marked for inclusion in the
global catalog, and by universal group memberships.
6.2.3 Adding attributes
Active Directory defines a base set of attributes for each object in the directory. Each object and
some of its attributes (such as universal group memberships) are stored in the global catalog.
Using the Active Directory Schema snap-in, you can specify additional attributes to be kept in the
global catalog.
In Windows 2000 forests, extending the partial attribute set causes a full synchronization of all
object attributes stored in the global catalog (for all domains in the forest). In a large, multi-
domain forest, this synchronization can cause significant network traffic. Between domain
controllers enabled as global catalogs that are running Windows Server 2003, only the newly
added attribute is replicated.
6.2.4 Customizing the global catalog
There may be instances where you will need to customize the global catalog to include additional
attributes. However, consider your options carefully, because changes to attributes can impact
network traffic. By default, the global catalog contains the most common attributes of every object
in the entire forest, which applications and users can query. For example, you can find a user by
first name, last name, e-mail address, or other common properties of a user account.
When determining whether to add an attribute to the global catalog, add only attributes that are
frequently queried and referenced by users or applications across the enterprise. Also consider
how frequently an attribute is updated during replication.
Attributes that are stored in the global catalog are replicated to every global catalog in the forest.
The smaller the attribute, the lower the impact of that replication. If the attribute is large, but very
seldom changes, it will have a smaller replication impact than a small attribute that changes often.
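The size-versus-churn trade-off above can be put into rough numbers. This is a back-of-the-envelope model with made-up figures, not a formula from the product documentation:

```python
# Rough model: the replication load of a global catalog attribute scales with
# how big each change is, how often it changes, and how many GCs receive it.
def replication_load(attr_bytes, changes_per_day, gc_count):
    return attr_bytes * changes_per_day * gc_count   # bytes/day, order of magnitude only

# A large but nearly static attribute vs. a small but busy one, 20 GCs
# (all numbers are hypothetical):
large_static = replication_load(attr_bytes=10_000, changes_per_day=0.01, gc_count=20)
small_busy   = replication_load(attr_bytes=100,    changes_per_day=50,   gc_count=20)

print(large_static)  # 2000.0
print(small_busy)    # 100000 -- the small, frequently changed attribute costs more
```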
6.2.5 Global catalogs and sites
To optimize network performance in a multiple site environment, consider adding global catalogs
for select sites. In a single site environment, a single global catalog is usually sufficient to cover
common Active Directory queries. The following list will help you determine whether your
multiple site environment will benefit from additional global catalogs.

Use a global catalog when:

A commonly used application in the site utilizes port 3268 to resolve global catalog queries.
Advantage: performance improvement. Disadvantage: additional network traffic due to replication.

A slow or unreliable WAN connection is used to connect to other sites. Use the same failure and
load distribution rules that you used for individual domain controllers to determine whether
additional global catalog servers are necessary in each site.
Advantage: fault tolerance. Disadvantage: additional network traffic due to replication.

Users in the site belong to a Windows 2000 domain running in native mode. In this case, all users
must obtain universal group membership information from a global catalog server. If a global
catalog is not located within the same site, all logon requests must be routed over your WAN
connection to a global catalog located in another site. If a domain controller running Windows
Server 2003 in the site has universal group membership caching enabled, then all users will
obtain a current cached listing of their universal group memberships.
Advantage: fast user logons. Disadvantage: additional network traffic due to replication.

Note
Network traffic related to global catalog queries generally uses more network resources
than normal directory replication traffic.
6.2.6 Universal group membership caching
Due to available network bandwidth and server hardware limitations, it may not be practical to
have a global catalog in smaller branch office locations. For these sites, you can deploy domain
controllers running Windows Server 2003, which can store universal group membership
information locally.
Information is stored locally once this option is enabled and a user attempts to log on for the first
time. The domain controller obtains the universal group membership for that user from a global
catalog. Once the universal group membership information is obtained, it is cached on the
domain controller for that site indefinitely and is periodically refreshed. The next time that user
attempts to log on, the authenticating domain controller running Windows Server 2003 will obtain
the universal group membership information from its local cache without the need to contact a
global catalog.
By default, the universal group membership information contained in the cache of each domain
controller will be refreshed every 8 hours. To refresh the cache, domain controllers running
Windows Server 2003 will send a universal group membership confirmation request to a
designated global catalog. Up to 500 universal group memberships can be updated at once.
Universal group membership caching can be enabled using Active Directory Sites and Services.
Universal group membership caching is site specific and requires that all domain controllers
running Windows Server 2003 be located in that site to participate.
The following list summarizes potential benefits for caching universal group memberships in
branch office locations:
Faster logon times since authenticating domain controllers no longer need to access a
global catalog to obtain universal group membership information.
No need to upgrade hardware of existing domain controllers to handle the extra system
requirements necessary for hosting a global catalog.
Minimized network bandwidth usage since a domain controller will not have to handle
replication for all of the objects located in the forest.
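The caching behavior described above can be sketched as a toy cache; the class, names, and GC callback are illustrative assumptions, not a real directory interface:

```python
# Toy sketch of universal group membership caching at a branch-office DC:
# the first logon fetches from a global catalog, later logons hit the local
# cache, and a stale entry (older than the 8-hour interval) is refreshed.
REFRESH_SECONDS = 8 * 60 * 60

class BranchDC:
    def __init__(self, gc_lookup):
        self.gc_lookup = gc_lookup      # callable: user -> set of universal groups
        self.cache = {}                 # user -> (groups, fetched_at)

    def universal_groups(self, user, now):
        entry = self.cache.get(user)
        if entry and now - entry[1] < REFRESH_SECONDS:
            return entry[0]             # served from cache, no GC contact
        groups = self.gc_lookup(user)   # first logon, or stale entry: ask a GC
        self.cache[user] = (groups, now)
        return groups

gc_calls = []
def gc_lookup(user):
    gc_calls.append(user)               # each call models one GC contact
    return {"All-Staff"}                # hypothetical universal group

dc = BranchDC(gc_lookup)
dc.universal_groups("amy", now=0)              # first logon: contacts the GC
dc.universal_groups("amy", now=3600)           # within 8 h: served from cache
assert gc_calls == ["amy"]                     # only one GC contact so far
dc.universal_groups("amy", now=9 * 60 * 60)    # stale: refreshed from the GC
assert gc_calls == ["amy", "amy"]
```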
6.2.7 To cache universal group memberships
1. Open Active Directory Sites and Services.
2. In the console tree, click the site in which you want to enable universal group
membership caching.
Where?
o Active Directory Sites and Services
o Sites
o the site in which you want to enable universal group membership caching
3. In the details pane, right-click NTDS Site Settings, and then click Properties.
4. Select the Enable Universal Group Membership Caching check box.
5. In Refresh cache from, click a site from which this site will refresh its cache, or accept
<Default> to refresh the cache from the nearest site that has a global catalog.
6.3 Creating a new domain tree
Create a new domain tree only when you need to create a domain whose DNS namespace is not
related to the other domains in the forest. This means that the name of the tree root domain (and
all of its children) does not have to contain the full name of the parent domain. A forest can
contain one or more domain trees. Before creating a new domain tree, consider creating another
forest when you want a different DNS namespace. Multiple forests provide administrative
autonomy, isolation of the schema and configuration directory partitions, separate security
boundaries, and the flexibility to use an independent namespace design for each forest.
6.3.1 Creating a new child domain
Create a new child domain when you want to create a domain that shares a contiguous
namespace with one or more domains. This means that the name of the new domain contains the
full name of the parent domain. For example, sales.microsoft.com would be a child domain of
microsoft.com. As a best practice, create new domains as children of the forest root domain.
You can create a new child domain by creating a new domain under a parent domain using the
Active Directory Installation Wizard.
After you create the child domain, you can create additional domain controllers in the child
domain for fault tolerance and high availability of Active Directory.
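The contiguous-namespace rule above can be expressed as a one-line check; this helper is an illustration, not part of any Microsoft tooling:

```python
# A child domain's DNS name is contiguous with its parent when the parent's
# full name is a suffix of the child's name.
def is_child_of(child, parent):
    return child.endswith("." + parent)

assert is_child_of("sales.microsoft.com", "microsoft.com")
assert not is_child_of("sales.example.com", "microsoft.com")  # new tree, not a child
```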
6.3.2 Creating a new forest
When you create the first domain controller in your organization, you are creating the first domain
(also called the forest root domain) and the first forest.
The top-level Active Directory container is called a forest. A forest consists of one or more
domains that share a common schema and global catalog. An organization can have multiple
forests.
A forest is the security and administrative boundary for all objects that reside within the forest. In
contrast, a domain is the administrative boundary for managing objects, such as users, groups,
and computers. In addition, each domain has individual security policies and trust relationships
with other domains.
Multiple domain trees within a single forest do not form a contiguous namespace; that is, they
have noncontiguous DNS domain names. Although trees in a forest do not share a namespace, a
forest does have a single root domain, called the forest root domain. The forest root domain is, by
definition, the first domain created in the forest. The Enterprise Admins and Schema Admins
groups are located in this domain. By default, members of these two groups have forest-wide
administrative credentials.
6.3.3 When to create a new forest
A first step in the Active Directory design process is to determine how many forests your
organization needs. For most organizations, a single forest design is the preferred model and the
simplest to administer. However, a single forest may not be practical for every organization.
With a single forest, users do not need to be aware of directory structure because all users see a
single directory through the global catalog. When adding a new domain to a forest, no additional
trust configuration is required because all domains in a forest are connected by two-way,
transitive trusts. In a forest with multiple domains, configuration changes need to be applied only
once to update all domains.
However, there are scenarios in which you might want to create more than one forest:
When upgrading a Windows NT domain to a Windows Server 2003 forest. You can
upgrade a Windows NT domain to become the first domain in a new Windows Server 2003
forest. To do this, you must first upgrade the primary domain controller in that domain. Then,
you can upgrade backup domain controllers, member servers, and client computers at any
time.
You can also keep a Windows NT domain and create a new Windows Server 2003 forest by
installing Active Directory on a member server running Windows Server 2003
To provide administrative autonomy. You can create a new forest when you need to
segment your network for purposes of administrative autonomy. Administrators who
currently manage the IT infrastructure for autonomous divisions within the organization
may want to assume the role of forest owner and proceed with their own forest design.
However, in other situations, potential forest owners may choose to merge their
autonomous divisions into a single forest to reduce the cost of designing and operating
their own Active Directory or to facilitate resource sharing.
To create a different Domain Name System (DNS) namespace than an existing
forest. You can create a new forest when you need to use a noncontiguous DNS
namespace that is different from an existing forest on your network. It is recommended
that you create a new forest when you want a different DNS namespace rather than
creating additional domain trees with noncontiguous DNS namespaces within an existing
forest.
6.4 Operations master roles in a new forest
When you create the first forest in your organization, all five operations master roles are
automatically assigned to the first domain controller in the forest. As new child domains are
added to the forest, the first domain controller in each of the new child domains is automatically
assigned the following roles:
Relative identifier master
Primary domain controller (PDC) emulator
Infrastructure master
Because there can be only one schema master and one domain naming master in a forest, these
roles remain in the forest root domain. In an Active Directory forest with only one domain and one
domain controller, that domain controller owns all the operations master roles.
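The default role placement described above can be sketched as a small data model; the domain names are hypothetical and the role labels use common shorthand (RID master for relative identifier master):

```python
# Sketch of default operations master placement: the two forest-wide roles
# stay in the forest root domain, while the three domain-wide roles exist in
# every domain, held by its first domain controller.
FOREST_ROLES = ["schema master", "domain naming master"]
DOMAIN_ROLES = ["RID master", "PDC emulator", "infrastructure master"]

def default_role_holders(forest_root, child_domains):
    holders = {forest_root: FOREST_ROLES + DOMAIN_ROLES}  # first DC gets all five
    for child in child_domains:
        holders[child] = list(DOMAIN_ROLES)               # only domain-wide roles
    return holders

roles = default_role_holders("corp.example", ["sales.corp.example"])
assert len(roles["corp.example"]) == 5
assert "schema master" not in roles["sales.corp.example"]
```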
6.5 Adding new domains to your forest
A domain stores only the information about objects located in that domain, so by creating multiple
domains within a new forest, you are partitioning or segmenting Active Directory to better serve a
disparate user base.
The easiest domain structure to administer is a single domain within a single forest. When
planning, you should start with a single domain and only add additional domains when the single
domain model no longer meets your needs.
Before creating a new forest
Active Directory requires DNS to function and both share the same hierarchical domain structure.
For example, microsoft.com is a DNS domain and an Active Directory domain. Because of the
reliance that Active Directory has on DNS, you must thoroughly understand Active Directory and
DNS concepts before creating a new forest.
6.6 Trusts
A trust is a relationship established between domains that enables users in one domain to be
authenticated by a domain controller in the other domain. Trust relationships in Windows NT are
different than in Windows 2000 and Windows Server 2003 operating systems.
6.6.1 Trusts in Windows NT
In Windows NT 4.0 and earlier, trusts are limited to two domains, and the trust relationship is
one-way and nontransitive. In the following figure, the nontransitive, one-way trust is shown by
the straight arrow pointing to the trusted domain.
6.6.2 Trusts in Windows Server 2003 and Windows 2000 server operating systems
All trusts in a Windows 2000 and Windows Server 2003 forest are transitive and two-way.
Therefore, both domains in a trust relationship are trusted. As shown in the following figure, this
means that if Domain A trusts Domain B and Domain B trusts Domain C, then users from Domain
C can access resources in Domain A (when assigned the proper permissions). Only members of
the Domain Admins group can manage trust relationships.
6.6.3 Trust protocols
A domain controller running Windows Server 2003 authenticates users and applications using
one of two protocols: Kerberos V5 or NTLM. The Kerberos V5 protocol is the default protocol for
computers running Windows 2000, Windows XP Professional, or Windows Server 2003. If any
computer involved in a transaction does not support Kerberos V5, the NTLM protocol will be
used.
With the Kerberos V5 protocol, the client requests a ticket from a domain controller in its account
domain to the server in the trusting domain. This ticket is issued by an intermediary trusted by the
client and the server. The client presents this trusted ticket to the server in the trusting domain for
authentication.
Trusted domain objects
Trusted domain objects (TDOs) are objects that represent each trust relationship within a
particular domain. Each time a trust is established a unique TDO is created and stored (in the
System container) in its domain. Attributes such as trust transitivity, type, and the reciprocal
domain names are represented in a TDO.
Forest trust TDOs store additional attributes to identify all of the trusted namespaces from the
partner forest. These attributes include domain tree names, UPN suffixes, SPN suffixes, and SID
namespaces.
6.6.4 Trust types
Communication between domains occurs through trusts. Trusts are authentication pipelines that
must be present in order for users in one domain to access resources in another domain. Two
default trusts are created when using the Active Directory Installation Wizard. There are four
other types of trusts that can be created using the New Trust Wizard or the Netdom command-
line tool.
Default trusts
By default, two-way, transitive trusts are automatically created when a new domain is added to a
domain tree or forest root domain using the Active Directory Installation Wizard. The two default
trust types are defined below.

Parent and child (transitive, two-way). By default, when a new child domain is added to an
existing domain tree, a new parent and child trust is established. Authentication requests made
from subordinate domains flow upward through their parent to the trusting domain.

Tree-root (transitive, two-way). By default, when a new domain tree is created in an existing
forest, a new tree-root trust is established.
Other trusts
Four other types of trusts can be created using the New Trust Wizard or the Netdom command-
line tool: external, realm, forest, and shortcut trusts. These trusts are defined below.

External (nontransitive, one-way or two-way). Use external trusts to provide access to
resources located on a Windows NT 4.0 domain or a domain located in a separate forest that is
not joined by a forest trust.

Realm (transitive or nontransitive, one-way or two-way). Use realm trusts to form a trust
relationship between a non-Windows Kerberos realm and a Windows Server 2003 domain.

Forest (transitive, one-way or two-way). Use forest trusts to share resources between forests.
If a forest trust is a two-way trust, authentication requests made in either forest can reach the
other forest. (A two-way trust is a trust relationship between two domains in which both domains
trust each other: domain A trusts domain B, and domain B trusts domain A. All parent-child trusts
are two-way.)

Shortcut (transitive, one-way or two-way). Use shortcut trusts to improve user logon times
between two domains within a Windows Server 2003 forest. This is useful when two domains are
separated by two domain trees.
When creating external, shortcut, realm, or forest trusts, you have the option to create each side
of the trust separately or both sides of a trust simultaneously.
If you choose to create each side of the trust separately, you will need to run the New Trust
Wizard twice, once for each domain. When creating trusts using this method, you will need to
supply the same trust password for each domain. As a security best practice, all trust passwords
should be strong passwords.
If you choose to create both sides of the trust simultaneously, you will need to run the New Trust
Wizard once. When you choose this option, a strong trust password is automatically generated
for you.
You will need the appropriate administrative credentials for each domain between which you will
be creating a trust. Netdom.exe can also be used to create trusts.
6.6.5 Trust direction
The trust type and its assigned direction will impact the trust path used for authentication. A trust
path is a series of trust relationships that authentication requests must follow between domains.
Before a user can access a resource in another domain, the security system on domain
controllers running Windows Server 2003 must determine whether the trusting domain (the
domain containing the resource the user is trying to access) has a trust relationship with the
trusted domain (the user's logon domain). To determine this, the security system computes the
trust path between a domain controller in the trusting domain and a domain controller in the
trusted domain. In the following figure, trust paths are indicated by arrows showing the direction
of the trust:
All domain trust relationships have only two domains in the relationship: the trusting domain and
the trusted domain.
One-way trust
A one-way trust is a unidirectional authentication path created between two domains. This means
that in a one-way trust between Domain A and Domain B, users in Domain A can access
resources in Domain B. However, users in Domain B cannot access resources in Domain A.
A one-way trust can be either nontransitive or transitive, depending on the type of trust being created.
Two-way trust
All domain trusts in a Windows Server 2003 forest are two-way, transitive trusts. When a new child
domain is created, a two-way, transitive trust is automatically created between the new child
domain and the parent domain. In a two-way trust, Domain A trusts Domain B and Domain B
trusts Domain A. This means that authentication requests can be passed between the two
domains in both directions. Some two-way relationships can be nontransitive or transitive,
depending on the type of trust being created.
A Windows Server 2003 domain can establish a one-way or two-way trust with:
Windows Server 2003 domains in the same forest
Windows Server 2003 domains in a different forest
Windows NT 4.0 domains
Kerberos V5 realms
6.6.7 Trust transitivity
Transitivity determines whether a trust can be extended outside of the two domains with which it
was formed. A transitive trust can be used to extend trust relationships with other domains; a
nontransitive trust can be used to deny trust relationships with other domains.
Transitive trusts
Each time you create a new domain in a forest, a two-way, transitive trust relationship is
automatically created between the new domain and its parent domain. If child domains are added
to the new domain, the trust path flows upward through the domain hierarchy extending the initial
trust path created between the new domain and its parent domain. Transitive trust relationships
flow upward through a domain tree as it is formed, creating transitive trusts between all domains
in the domain tree.
Authentication requests follow these trust paths, so accounts from any domain in the forest can
be authenticated at any other domain in the forest. With a single logon process, accounts with the
proper permissions can access resources in any domain in the forest.
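The trust-path behavior above can be modeled as reachability over one-way trust edges; the domain names are hypothetical and the simplification (a nontransitive trust works only as a direct, single-hop relationship) is this sketch's assumption:

```python
# Toy reachability check over one-way trust edges: access flows from the
# trusted (account) domain to the trusting (resource) domain. A chain of
# trusts works only if every hop is transitive; a nontransitive trust is
# usable only as a direct relationship between its two domains.
from collections import deque

def trust_path_exists(account_domain, resource_domain, trusts):
    """trusts: iterable of (trusted, trusting, transitive) one-way edges."""
    if any(s == account_domain and d == resource_domain for s, d, _ in trusts):
        return True                                   # direct trust of any kind
    transitive = [(s, d) for s, d, t in trusts if t]  # longer paths: transitive only
    seen, queue = {account_domain}, deque([account_domain])
    while queue:
        cur = queue.popleft()
        if cur == resource_domain:
            return True
        for s, d in transitive:
            if s == cur and d not in seen:
                seen.add(d)
                queue.append(d)
    return False

# Two-way, transitive parent-child trusts in a hypothetical forest:
edges = []
for child, parent in [("a.example", "example"), ("b.example", "example")]:
    edges += [(child, parent, True), (parent, child, True)]
assert trust_path_exists("a.example", "b.example", edges)        # chained via the root

# An external, nontransitive trust does not extend through the forest:
edges.append(("nt4.example.net", "a.example", False))
assert trust_path_exists("nt4.example.net", "a.example", edges)      # direct hop only
assert not trust_path_exists("nt4.example.net", "b.example", edges)  # cannot chain
```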
The diagram displays that all domains in the Domain A tree and all domains in the Domain 1 tree
have transitive trust relationships by default. As a result, users in the Domain A tree can access
resources in domains in the Domain 1 tree and users in the Domain 1 tree can access resources
in the Domain A tree, when the proper permissions are assigned at the resource.
In addition to the default transitive trusts established in a Windows Server 2003 forest, using the
New Trust Wizard, you can manually create the following transitive trusts.
Shortcut trust. A transitive trust between domains in the same domain tree or forest, used
to shorten the trust path in a large and complex domain tree or forest.
Forest trust. A transitive trust between a forest root domain and a second forest root
domain.
Realm trust. A transitive trust between an Active Directory domain and a Kerberos V5
realm.
Nontransitive trust
A nontransitive trust is restricted to the two domains in the trust relationship and does not flow to
any other domains in the forest. A nontransitive trust can be one-way or two-way.
Nontransitive trusts are one-way by default, although you can also create a two-way relationship
by creating two one-way trusts. In summary, nontransitive domain trusts are the only form of trust
relationship possible between:
A Windows Server 2003 domain and a Windows NT domain
A Windows Server 2003 domain in one forest and a domain in another forest (when not
joined by a forest trust)
Using the New Trust Wizard, you can manually create the following nontransitive trusts:
External trust. A nontransitive trust created between a Windows Server 2003 domain
and a Windows NT domain, or a Windows 2000 or Windows Server 2003 domain in
another forest.
When you upgrade a Windows NT domain to a Windows Server 2003 domain, all existing
Windows NT trusts are preserved intact. All trust relationships between Windows
Server 2003 domains and Windows NT domains are nontransitive.
Realm trust. A nontransitive trust between an Active Directory domain and a
Kerberos V5 realm.
When to create an external trust
You can create an external trust to form a one-way or two-way nontransitive trust with domains
outside of your forest. External trusts are sometimes necessary when users need access to
resources located in a Windows NT 4.0 domain or in a domain located within a separate forest
that is not joined by a forest trust, as shown in the figure.
When a trust is established between a domain in a particular forest and a domain outside of that
forest, security principals from the external domain can access resources in the internal domain.
Active Directory creates a foreign security principal object in the internal domain to represent
each security principal from the trusted external domain. These foreign security principals can
become members of domain local groups in the internal domain. Domain local groups can have
members from domains outside of the forest.
Directory objects for foreign security principals are created by Active Directory and should not be
manually modified. You can view foreign security principal objects from Active Directory Users
and Computers by enabling advanced features.
In domains with the functional level set to Windows 2000 mixed, it is recommended that you
delete external trusts from a domain controller running Windows Server 2003. External trusts to
Windows NT 4.0 or 3.51 domains can be deleted by authorized administrators on the domain
controllers running Windows NT 4.0 or 3.51. However, only the trusted side of the relationship
can be deleted on the domain controllers running Windows NT 4.0 or 3.51. The trusting side of
the relationship (created in the Windows Server 2003 domain) is not deleted, and although it will
not be operational, the trust will continue to display in Active Directory Domains and Trusts. To
remove the trust completely, you will need to delete the trust from a domain controller running
Windows Server 2003 in the trusting domain. If an external trust is inadvertently deleted from a
domain controller running Windows NT 4.0 or 3.51, you will need to recreate the trust from any
domain controller running Windows Server 2003 in the trusting domain.
Securing external trusts
To improve the security of Active Directory forests, domain controllers running Windows
Server 2003 and Windows 2000 Service Pack 4 (or higher) enable SID filtering on all new
outgoing external trusts by default. By applying SID filtering to outgoing external trusts, you
prevent malicious users who have domain administrator level access in the trusted domain from
granting, to themselves or other user accounts in their domain, elevated user rights to the trusting
domain.
When a malicious user can grant unauthorized user rights to another user, it is known as an
elevation of privilege attack.
6.7 Organizational units
An organizational unit (OU) is an Active Directory container object used within domains. It is a
logical container into which users, groups, computers, and other organizational units are placed.
It can contain objects only from its parent domain. An organizational unit is the smallest scope to
which a Group Policy object (GPO) can be linked, or over which administrative authority can be
delegated.
6.7.1 Organizational Unit Design Concepts
The OU structure for a domain includes the following:
A diagram of the OU hierarchy.
A list of OUs.
For each OU:
The purpose for the OU.
A list of users or groups that have control over the OU or the objects in the OU.
The type of control that users and groups have over the objects in the OU.
The OU hierarchy does not need to reflect the departmental hierarchy of the organization or
group. OUs are created for a specific purpose, such as the delegation of administration, the
application of Group Policy, or to limit the visibility of objects.
You can design your OU structure to delegate administration to individuals or groups within your
organization that require the autonomy to manage their own resources and data. OUs represent
administrative boundaries and enable you to control the scope of authority of data administrators.
For example, you can create an OU called ResourceOU and use it to store all the computer
accounts that belong to the file and print servers managed by a group. Then you can configure
security on the OU such that only data administrators in the group have access to the OU. This
prevents data administrators in other groups from tampering with the file and print server
accounts. Figure shows an organizational unit that is created to enable delegation of
administration.
Figure Creating an Organizational Unit to Delegate Administration

You can further refine your OU structure by creating subtrees of OUs for specific purposes, such
as the application of Group Policy, or to limit the visibility of protected objects so that only certain
users can see them. For example, if you need to apply Group Policy to a select group of users or
resources, you can add those users or resources to an OU and then apply the Group Policy to
that OU. You can also use the OU hierarchy to enable further delegation of administrative control.
Figure shows a subtree that was created inside the ResourceOU. The subtree includes two
additional OUs: The Print Servers OU includes all the computer accounts of the print servers; the
File Servers OU includes the computer accounts for all of the file servers. This subtree enables
the application of separate Group Policies to each type of server managed in the ResourceOU.
Figure Creating an Organizational Unit Subtree for Application of Group Policy


Note
While there is no technical limit to the number of levels in your OU structure, for the
purpose of manageability, it is recommended that you limit your OU structure to a depth of no
more than 10 levels. There is no technical limit to the number of OUs on each level. Note that
Active Directory-enabled applications might have restrictions on the number of characters
used in the distinguished name (the full LDAP path to the object in the directory), or on the
OU depth within the hierarchy.
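As a quick illustration of the depth guideline, the following Python sketch (the function names and the sample distinguished name are hypothetical, for illustration only) counts the OU components in an LDAP distinguished name and checks them against the recommended 10-level limit:

```python
def ou_depth(distinguished_name):
    """Count the OU components in an LDAP distinguished name.
    Naive split on commas; escaped commas in names are not handled."""
    parts = [p.strip() for p in distinguished_name.split(",")]
    return sum(1 for p in parts if p.upper().startswith("OU="))

def within_recommended_depth(distinguished_name, limit=10):
    """Apply the manageability guideline of at most 10 OU levels."""
    return ou_depth(distinguished_name) <= limit

dn = "CN=PrintSrv01,OU=Print Servers,OU=ResourceOU,DC=example,DC=com"
print(ou_depth(dn))                  # 2
print(within_recommended_depth(dn))  # True
```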
The Active Directory organizational unit structure is not intended to be visible to end users. The
organizational unit structure is an administrative tool for service and data administrators and is
easy to change. Continue to review and update your OU structure design to reflect changes in
your administrative structure and to support policy-based administration.

6.7.2 Organizational Unit Owner Role


The forest owner designates an OU owner for each OU that you design for the domain. OU
owners are data managers who control a subtree of objects in Active Directory. OU owners can
control how administration is delegated, and how policy is applied to objects within their OU. They
can also create new subtrees and delegate administration of OUs within that subtree.
Because OU owners do not own or control the operation of the directory service, you can
separate ownership and administration of the directory service from ownership and administration
of objects, thereby reducing the number of service administrators who have high levels of access.
OUs provide administrative autonomy and the means to control visibility of objects in the
directory. OUs provide isolation from other data administrators but they do not provide isolation
from service administrators. Although OU owners have control over a subtree of objects, the
forest owner retains full control over all subtrees. This enables the forest owner to correct
mistakes, such as an error in an access control list (ACL), or to reclaim delegated subtrees when
data administrators are terminated.

6.7.3 Delegating Administration by Using OU Objects


You can use organizational units to delegate the administration of objects, such as users or
computers, within the OU to a designated individual or group. To delegate administration by using
an OU, place the individual or group to which you are delegating administrative rights into a
group, place the set of objects to be controlled into an OU, and then delegate administrative tasks
for the OU to that group.
Active Directory enables you to control the administrative tasks that can be delegated at a very
granular level; for example, you can assign one group full control of all objects in an OU; assign
another group the rights only to create, delete, and manage user accounts in the OU; and assign
a third group the right only to reset user account passwords. You can make these permissions
inheritable so that they apply to not only a single OU, but also any OUs that are placed in
subtrees of the OU.
Default OUs and containers are created during the installation of Active Directory and are
controlled by service administrators. It is best if service administrators continue to control these
containers. If you need to delegate control over objects in the directory, create additional OUs
and place the objects in these OUs. Delegate control over these OUs to the appropriate data
administrators. This makes it possible to delegate control over objects in the directory without
changing the default control given to the service administrators.
The forest owner determines the level of authority that is delegated to an OU owner. This can
range from the ability to create and manipulate objects within the OU to only being allowed to
control a single attribute of a single type of object in the OU. Granting a user the ability to create
an object in the OU implicitly grants that user the ability to manipulate any attribute of any object
that the user creates. In addition, if the object that is created is a container, then the user implicitly
has the ability to create and manipulate any objects that are placed in the container.
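The granular, inheritable delegation described above can be sketched as a small model. The OU names, group names, and rights strings below are hypothetical; in a real domain, delegation is carried out through ACLs on the OU objects themselves:

```python
# Hypothetical delegation entries: (group, object class, right) per OU.
# Entries are inheritable, so they also apply to child OUs in the subtree.
delegations = {
    "OU=Accounts": [
        ("FullControlAdmins", "*", "full-control"),
        ("UserAdmins", "user", "create-delete-manage"),
        ("HelpDesk", "user", "reset-password"),
    ],
}

def effective_rights(ou_path, group):
    """Collect the rights a group holds on an OU, including entries
    inherited from every ancestor OU along the path."""
    rights = []
    parts = ou_path.split(",")
    for i in range(len(parts)):
        ancestor = ",".join(parts[i:])
        for g, obj_class, right in delegations.get(ancestor, []):
            if g == group:
                rights.append((obj_class, right))
    return rights

print(effective_rights("OU=Users,OU=Accounts", "HelpDesk"))
# [('user', 'reset-password')]
```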

6.7.4 Administration of Default Containers and OUs


Every Active Directory domain contains a standard set of OUs and containers that are created
during the installation of Active Directory. These include the following:
Domain container, which serves as the root container to the hierarchy.
Builtin container, which holds the default service administrator accounts.
Users container, which is the default location for new user accounts and groups created
in the domain.
Computers container, which is the default location for new computer accounts created in
the domain.
Domain Controllers OU, which is the default location for domain controller computer
accounts.
The forest owner controls these default OUs and containers.

6.7.5 Delegating Administration of Account and Resource OUs


Account OUs contain user, group, and computer objects. Forest owners must create an OU
structure to manage these objects and then delegate control of the structure to the OU owner.
Resource OUs contain resources and the accounts that are responsible for managing those
resources. The forest owner is also responsible for creating an OU structure to manage these
resources and delegating control of that structure to the OU owner.

Delegating Administration of Account OUs


Delegate an account OU structure to data administrators if they need to be able to create and
modify user, group, and computer objects. The administrative capabilities of an account OU
structure are similar to those of a Windows NT 4.0 Master User Domain. The account OU
structure is a subtree of OUs for each account type that must be independently controlled. For
example, the OU owner can delegate specific control to various data administrators over child
OUs in an account OU for users, computers, groups, and service accounts, as shown in Figure
Figure Account OU Structure


Table lists and describes the child OUs that you can create in an account OU structure.
Table Child OUs in the Account OU Structure
Users: Contains user accounts for non-administrative personnel.
Service Accounts: Some services that require access to network resources run as user accounts.
This OU is created to separate service user accounts from the user accounts contained in the
Users OU. Also, placing the different types of user accounts in separate OUs enables you to
manage them according to their specific administrative requirements.
Computers: Contains accounts for computers other than domain controllers.
Groups: Contains groups of all types, except for administrative groups, which are managed
separately.
Admins: Contains user and group accounts for data administrators in the account OU structure,
to allow them to be managed separately from regular users. Enable auditing for this OU so that
you can track changes to administrative users and groups.
Figure shows the administrative group design for an Account OU.
Figure Delegation Model for Account OUs


The owner of the Account OU is the acct_ou_OU_admins group, which is comprised of data
administrators. This group has full control of the acct_ou_OU subtree, and is responsible for
creating the standard set of child OUs and the groups to manage them.
Groups that manage the child OUs are granted full control only over the specific class of objects
that they are responsible for managing. For example, acct_ou_group_admins has control only
over group objects.
Note that no separate administrative group manages the Admins OU; rather, it inherits ownership
from its parent OU, so it is managed by acct_ou_OU_admins. The OU owner might choose to
create additional administrative groups, however. For example, the OU owner might create the
optional group acct_ou_helpdesk_admins in the Admins OU to control password resets.

6.7.6 Administrative Group Types


The types of groups that you use to delegate control within an OU structure are based on where
the accounts are located relative to the OU structure that is to be managed. If the admin user
accounts and the OU structure all exist within a single domain, then the groups that you create to
use for delegation must be global groups. Figure 2.41 shows the delegation of administration
within a single domain.
Figure Group Type for Delegation Within a Single Domain


If your organization has a department that manages its own user accounts and exists in more
than one region, you might have a group of data administrators who are responsible for managing
account OUs in more than one domain. If the accounts of the data administrators all exist in a
single domain and you have OU structures in multiple domains to which you need to delegate
control, make those administrative accounts members of global groups and delegate control of
the OU structures in each domain to those global groups, as shown in Figure 2.42
Figure Using Global Groups to Manage OUs in Multiple Domains


If the data administrator accounts to which you delegate control of an OU structure come from
multiple domains, you must use a universal group. Universal groups can contain users from
different domains and can therefore be used to delegate control in multiple domains.
Figure shows a configuration in which universal groups are used to manage OUs in multiple
domains.
Figure Using Universal Groups to Manage OUs in Multiple Domains
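The group-scope rule described above can be expressed as a small helper. This is a sketch only; the function name and domain names are hypothetical:

```python
def delegation_group_type(admin_account_domains):
    """Choose the scope of the delegation group: a global group works
    when all administrator accounts come from a single domain, even if
    the OU structures they manage span multiple domains; accounts from
    several domains require a universal group."""
    if len(set(admin_account_domains)) == 1:
        return "global"
    return "universal"

print(delegation_group_type(["corp.example.com"]))
# global
print(delegation_group_type(["corp.example.com", "emea.example.com"]))
# universal
```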


Delegating Administration of Resource OUs


Resource OUs are used to manage access to resources. The resource OU owner creates
computer accounts for servers that are joined to the domain that include resources such as file
shares, databases, and printers. The resource OU owner also creates groups to control access to
those resources.
Figure shows the two possible locations for the resource OU.
Figure Resource OU Placement in a Domain

The resource OU can be located under the domain root or as a child OU of the corresponding
account OU in the OU administrative hierarchy. If the domain spans different countries or
regions, or the domain owner is responsible for a large number of OUs, Windows NT 4.0
resource domain owners might prefer to control resource OUs that are subordinate to account
OUs, to ensure that they have direct access to support from the account OU owner. Resource
OUs do not have any standard child OUs. Computers and groups are placed directly in the
resource OU.
The resource OU owner owns the objects within the OU, but does not own the OU container
itself. Resource OU owners manage only computer and group objects; they cannot create other
classes of objects within the OU, and they cannot create child OUs.
Note
The creator or owner of an object has the ability to set the ACL on the object, regardless
of the permissions that are inherited from the parent container. If a resource OU owner can
reset the ACL on an OU, he or she can create any class of object in the OU, including users.
For this reason, resource OU owners are not permitted to create OUs.
For each resource OU in the domain, create a global group to represent the data administrators
who are responsible for managing the contents of the OU. This group has full control over the
group and computer objects in the OU, but not over the OU container itself.
Figure shows the administrative group design for a resource OU. The res_ou_OU_admin group
manages its own membership and is located in the resource OU.
Figure Resource OU Administrative Group Design


Placing the computer accounts into a resource OU gives the OU owner control over the account
objects but does not make the OU owner an administrator of the computers. In an Active
Directory domain, the Domain Admins group is, by default, placed in the local Administrators
group on all computers. This means that service administrators have control over those
computers. If resource OU owners require administrative control over the computers in their OU,
the forest owner can apply a Restricted Groups Group Policy to make the resource OU owner a
member of the Administrators group on the computers in that OU.

6.7.7 Creating Account OUs


The process that you use to create account OUs varies according to whether you are upgrading a
Windows NT 4.0 domain in place or creating a new Windows Server 2003 Active Directory
domain. Use one of the following approaches to create account OUs for your domain:
1. If you are upgrading a Windows NT 4.0 MUD in place to an Active Directory domain,
create an account OU in the domain for the accounts that are to be upgraded. Leave the
service administrator groups and user accounts in the default structure and move the
remaining user and group accounts to the newly created account OU.
2. If you are restructuring Windows NT 4.0 MUDs into an Active Directory domain, create an
account OU for each independently managed source MUD that is to be migrated into this
domain. If the same IT group manages multiple source MUDs, you can migrate accounts
from those domains into a single account OU.
3. If you are deploying a new Active Directory domain, create an account OU for the domain
so that you can delegate control of the accounts in the domain.

6.7.8 Creating Resource OUs


Use one of the following approaches to create resource OUs for your domain:
If the domain is the target domain in a Windows NT 4.0 resource domain restructure:


1. Create an OU for each source resource domain that is managed independently and that
is to be migrated into this domain.
o If you are restructuring multiple resource domains that are managed by the same
IT group, create a single resource OU and migrate the objects from all domains into that
OU.
o If the source resource domains are managed by former Master User Domain
owners, you can migrate objects into the Computers and Groups child OUs of the
corresponding account OU subtree, instead of creating a separate resource OU under
the account OU.
2. Place the resource OU under the domain root or under an account OU, depending on
whether the resource OU owner reports to the account OU owner or directly to the forest
owner. The source resource domain owner becomes the resource OU owner.
If the domain is not the target of a resource domain restructure, create resource OUs as needed
based on the requirements of each group for autonomy in the management of data and
equipment.

Section 7

Managing and Maintaining an Active Directory Infrastructure

7. Understanding Domains and Forests


7.1 Domain controllers
7.1.1 Determining the number of domain controllers you need
7.1.2 Physical security
7.1.3 Backing up domain controllers
7.1.4 Upgrading domain controllers
7.2 Creating a domain
7.2.1 Planning for multiple domains
7.2.2 Removing a domain
7.2.3 Trust relationships between domains
7.2.4 Renaming domains
7.2.5 Forest restructuring
7.2.6 Domain and forest functionality
7.3 Domain functionality
7.4 Forest functionality
7.5 Raising domain and forest functional levels
7.6 Application directory partitions
7.6.1 Application directory partition naming
7.6.2 Application directory partition replication
7.6.3 Security descriptor reference domain
7.6.4 Application directory partitions and domain controller demotion
7.6.5 Identify the applications that use the application directory partition
7.6.6 Determine if it is safe to delete the last replica
7.6.7 Identify the partition deletion tool provided by the application
7.6.8 Remove the application directory partition using the tool provided
7.7 Managing application directory partitions
7.7.1 Creating an application directory partition
7.7.2 Deleting an application directory partition
7.7.3 Adding and removing a replica of an application directory partition
7.7.4 Setting application directory partition reference domain
7.7.5 Setting replication notification delays


7.7.6 Displaying application directory partition information


7.7.7 Delegating the creation of application directory partitions
7.8 Operations master roles
7.8.1 Forest-wide operations master roles
7.8.2 Transferring operations master roles
7.8.3 Responding to operations master failures
7.9 Sites overview
7.9.1 Using sites
7.9.2 Defining sites using subnets
7.9.3 Assigning computers to sites
7.9.4 Understanding sites and domains
7.9.5 Replication overview
7.9.6 How replication works
7.10 Managing replication
7.10.1 Configuring site links
7.10.2 Site link cost
7.10.3 Replication frequency
7.10.4 Site link availability
7.10.5 Configuring site link bridges
7.10.6 Configuring preferred bridgehead servers


7. Understanding Domains and Forests

7.1 Domain controllers


When you create the first domain controller in your organization, you are also creating the first
domain, the first forest, and the first site, and installing Active Directory. Domain controllers
running Windows Server 2003 store directory data and manage user and domain interactions,
including user logon processes, authentication, and directory searches. Domain controllers are
created by using the Active Directory Installation Wizard.

7.1.1 Determining the number of domain controllers you need


A small organization using a single local area network (LAN) might need only one domain with
two domain controllers for high availability and fault tolerance. A larger organization with many
network locations will need one or more domain controllers in each site to provide high availability
and fault tolerance.
If your network is divided into sites, it is often good practice to put at least one domain controller
in each site to enhance network performance. When users log on to the network, a domain
controller must be contacted as part of the logon process. If clients must connect to a domain
controller located in a different site, the logon process can take a long time.
By creating a domain controller in each site, user logons are processed more efficiently within the
site. To optimize network traffic, you can also configure domain controllers to receive directory
replication updates only during off-peak hours.
The best network performance is available when the domain controller at a site is also a global
catalog. This way, the server can fulfill queries about objects in the entire forest. However,
enabling many domain controllers as global catalogs can increase the replication traffic on your
network.
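The sizing guidance above (at least one domain controller per site, and at least two per domain for fault tolerance) can be sketched as follows; the site names are hypothetical:

```python
def domain_controllers_needed(sites, min_per_domain=2):
    """Plan one domain controller per site, topping up the first site
    so the domain as a whole has at least min_per_domain controllers."""
    per_site = {site: 1 for site in sites}
    total = sum(per_site.values())
    if sites and total < min_per_domain:
        per_site[sites[0]] += min_per_domain - total
    return per_site

print(domain_controllers_needed(["Headquarters"]))
# {'Headquarters': 2}
print(domain_controllers_needed(["HQ", "Branch1", "Branch2"]))
# {'HQ': 1, 'Branch1': 1, 'Branch2': 1}
```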

7.1.2 Physical security


Physical access to a domain controller can provide a malicious user with unauthorized access to
encrypted passwords. For this reason, it is recommended that all domain controllers in your
organization be locked in a secured room with limited public access. You can use additional
security measures, such as syskey, for extra protection on domain controllers.

7.1.3 Backing up domain controllers


You can back up domain directory partition data and data from other directory partitions by using
Backup, which is included with the Windows Server 2003 family, from any domain controller in a
domain. By using the backup tool on a domain controller, you can:
Back up Active Directory while the domain controller is online.
Back up Active Directory using batch file commands.
Back up Active Directory to removable media, an available network drive, or a file.
Back up other system and data files.
When you use the backup tool on a domain controller, it will automatically back up all of the
system components and all of the distributed services upon which Active Directory is dependent.
This dependent data, which includes Active Directory, is known collectively as the System State
data.
On a domain controller running Windows Server 2003, the System State data consists of the
system startup files; the system registry; the class registration database of COM+ (an extension
to the Component Object Model); the SYSVOL directory; Certificate Services database (if
installed); Domain Name System (if installed); Cluster service (if installed); and Active Directory. It
is recommended that you regularly back up System State data.
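The System State contents listed above vary with what is installed on the domain controller. The sketch below models that list (it is a model only; the Backup tool selects these components automatically):

```python
def system_state_components(installed_services=()):
    """Return the System State components backed up on a Windows
    Server 2003 domain controller, per the list above."""
    components = [
        "system startup files",
        "system registry",
        "COM+ class registration database",
        "SYSVOL directory",
        "Active Directory",
    ]
    # Optional components are captured only when the service is installed.
    for service in ("Certificate Services database",
                    "Domain Name System", "Cluster service"):
        if service in installed_services:
            components.append(service)
    return components

print(system_state_components(("Domain Name System",)))
```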


7.1.4 Upgrading domain controllers


On domain controllers running Windows NT 4.0, you will first need to upgrade the primary domain
controller (PDC) to successfully upgrade the domain. Once the PDC has been upgraded, you can
upgrade the backup domain controllers (BDCs).
One or more domains that share a common schema and global catalog are referred to as a
forest. The first domain in a forest is referred to as the forest root domain.
A single domain can span multiple physical locations or sites and can contain millions of objects.
Site structure and domain structure are separate and flexible. A single domain can span multiple
geographical sites, and a single site can include users and computers belonging to multiple
domains.
A domain provides several benefits:
Organizing objects.
You do not need to create separate domains merely to reflect your company's organization of
divisions and departments. Within a domain, you can use organizational units for this
purpose. Using organizational units helps you manage the accounts and resources in the
domain. You can then assign Group Policy settings and place users, groups, and computers
into the organizational units. Using a single domain greatly simplifies administrative overhead.
Publishing resources and information about domain objects.
A domain stores only the information about objects located in that domain, so by creating
multiple domains, you are partitioning or segmenting the directory to better serve a disparate
user base. When using multiple domains, you can scale the Active Directory directory service
to accommodate your administrative and directory publishing requirements.
Applying a Group Policy object to the domain consolidates resource and security
management.
A domain defines a scope or unit of policy. A Group Policy object (GPO) establishes how
domain resources can be accessed, configured, and used. These policies are applied only
within the domain and not across domains.
Delegating authority eliminates the need for a number of administrators with broad
administrative authority.
Using delegated authority in conjunction with Group Policy objects and group memberships
enables you to assign an administrator rights and permissions to manage objects in an entire
domain or in one or more organizational units within the domain.
Security policies and settings (such as user rights and password policies) do not
cross from one domain to another.
Each domain has its own security policies and trust relationships with other domains.
However, the forest is the final security boundary.
Each domain stores only the information about the objects located in that domain.
By partitioning the directory this way, Active Directory can scale to very large numbers of
objects.

7.2 Creating a domain


You create a domain by creating the first domain controller for a domain. To do this, install Active
Directory on a member server running Windows Server 2003 by using the Active Directory
Installation Wizard. The wizard uses the information that you provide to create the domain
controller and create the domain within the existing domain structure of your organization.
Depending on the existing domain structure, the new domain could be the first domain in a new
forest, the first domain in a new domain tree, or a child domain of an existing domain tree.
A domain controller provides the Active Directory directory service to network users and
computers, stores directory data, and manages user and domain interactions, including user
logon processes, authentication, and directory searches. Every domain must contain at least one
domain controller.
After you create the first domain controller for a domain, you can create additional domain
controllers in an existing domain for fault tolerance and high availability of the directory.


7.2.1 Planning for multiple domains


Some reasons to create more than one domain are:
Different password requirements between departments or divisions
Massive numbers of objects
Decentralized network administration
More control of replication
Although using a single domain for an entire network has several advantages, to meet additional
scalability, security, or replication requirements you may consider creating one or more domains
for your organization. Understanding how directory data is replicated between domain controllers
will help you plan the number of domains needed by your organization.

7.2.2 Removing a domain


In order to remove a domain, you must first remove Active Directory from all of the domain
controllers associated with that domain. Once Active Directory has been removed from the last
domain controller the domain will be removed from the forest and all of the information in that
domain will be deleted. A domain can only be removed from the forest if it has no child domains.
If this is the last domain in the forest, removing this domain will also delete the forest.
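The precondition for removing a domain (no child domains) can be checked against a simple model of the forest; the domain names below are hypothetical:

```python
def can_remove_domain(domain, child_map):
    """A domain may be removed from the forest only if it has no
    child domains."""
    return not child_map.get(domain)

# Hypothetical forest: root -> sales -> west.sales.
children = {
    "example.com": ["sales.example.com"],
    "sales.example.com": ["west.sales.example.com"],
    "west.sales.example.com": [],
}
print(can_remove_domain("west.sales.example.com", children))  # True
print(can_remove_domain("sales.example.com", children))       # False
```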

7.2.3 Trust relationships between domains


Trust relationships are automatically created between adjacent domains (parent and child
domains) when a domain is created in Active Directory. In a forest, a trust relationship is
automatically created between the forest root domain and any tree root domains or child domains
that are subordinate to the forest root domain. Because these trust relationships are transitive,
users and computers can be authenticated between any domains in the forest.
When upgrading a Windows NT domain to a Windows Server 2003 domain, the existing one-way
trust relationship between that domain and any other domains remains intact. This includes all
trusts with other Windows NT domains. If you are creating a new Windows Server 2003 domain
and want trust relationships with any Windows NT domains, you must create external trusts with
those domains.
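Because the automatic trusts are transitive, authentication between any two domains in the forest reduces to reachability in the trust graph. A sketch with hypothetical domain names (real trust evaluation is performed by the security subsystem, not by code like this):

```python
def can_authenticate(domain_a, domain_b, trusts):
    """Walk the transitive trust graph to decide whether users in
    domain_a can be authenticated in domain_b."""
    seen, stack = set(), [domain_a]
    while stack:
        current = stack.pop()
        if current == domain_b:
            return True
        if current in seen:
            continue
        seen.add(current)
        stack.extend(trusts.get(current, []))
    return False

# Hypothetical forest: two-way trusts between the root and each child.
trusts = {
    "example.com": ["sales.example.com", "hr.example.com"],
    "sales.example.com": ["example.com"],
    "hr.example.com": ["example.com"],
}
print(can_authenticate("sales.example.com", "hr.example.com", trusts))  # True
```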

7.2.4 Renaming domains

The ability to rename a domain provides you with the flexibility to make important changes to your
forest structure and namespace as the needs of your organization change. Renaming domains
can accommodate acquisitions, mergers, name changes, or reorganizations. Domain rename
allows you to:
Restructure the position of any domain in the forest (except the forest root domain).
Change the DNS and NetBIOS names of any domain in the forest.

7.2.5 Forest restructuring


Using domain rename, you can also restructure the hierarchy of domains in your forest so that a
domain residing in one domain tree can be moved to another domain tree. Restructuring a forest
allows you to move a domain anywhere within the forest in which it resides (except the forest root
domain). This includes the ability to move a domain so that it becomes the root of its own domain
tree.
You can use the domain rename utility (Rendom.exe) to rename or restructure a domain. The
Rendom.exe utility can be found in the Valueadd\Msft\Mgmt\Domren directory on the operating
system installation CD. A domain rename will affect every domain controller in your forest and is
a multistep process that requires a detailed understanding of the operation.

7.2.6 Domain and forest functionality


Domain and forest functionality, introduced in Windows Server 2003 Active Directory, provides a
way to enable domain- or forest-wide Active Directory features within your network environment.


Different levels of domain functionality and forest functionality are available depending on your
environment.
If all domain controllers in your domain or forest are running Windows Server 2003 and the
functional level is set to Windows Server 2003, all domain- and forest-wide features are available.
When Windows NT 4.0 or Windows 2000 domain controllers are included in your domain or forest
with domain controllers running Windows Server 2003, Active Directory features are limited.
The concept of enabling additional functionality in Active Directory exists in Windows 2000 with
mixed and native modes. Mixed-mode domains can contain Windows NT 4.0 backup domain
controllers and cannot use Universal security groups, group nesting, and security ID (SID) history
capabilities. When the domain is set to native mode, Universal security groups, group nesting,
and SID history capabilities are available. Domain controllers running Windows 2000 Server are
not aware of domain and forest functionality.

7.3 Domain functionality


Domain functionality enables features that will affect the entire domain and that domain only. Four
domain functional levels are available: Windows 2000 mixed (default), Windows 2000 native,
Windows Server 2003 interim, and Windows Server 2003. By default, domains operate at the
Windows 2000 mixed functional level.
The following table lists the domain functional levels and their corresponding supported domain
controllers.
Domain functional level         Domain controllers supported
Windows 2000 mixed (default)    Windows NT 4.0, Windows 2000, Windows Server 2003 family
Windows 2000 native             Windows 2000, Windows Server 2003 family
Windows Server 2003 interim     Windows NT 4.0, Windows Server 2003 family
Windows Server 2003             Windows Server 2003 family
Once the domain functional level has been raised, domain controllers running earlier operating
systems cannot be introduced into the domain. For example, if you raise the domain functional
level to Windows Server 2003, domain controllers running Windows 2000 Server cannot be
added to that domain.
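The compatibility rule above can be sketched in Python (an illustrative model only, not an administration tool; the level and operating system names are plain strings chosen for this example):

```python
# Illustrative model of the functional-level table above; not a Microsoft tool.
# Maps each domain functional level to the domain controller operating
# systems it supports.
SUPPORTED_DCS = {
    "Windows 2000 mixed": {"Windows NT 4.0", "Windows 2000", "Windows Server 2003"},
    "Windows 2000 native": {"Windows 2000", "Windows Server 2003"},
    "Windows Server 2003 interim": {"Windows NT 4.0", "Windows Server 2003"},
    "Windows Server 2003": {"Windows Server 2003"},
}

def can_add_dc(functional_level, dc_os):
    # A domain controller can be introduced only if its operating system is
    # supported at the domain's current functional level.
    return dc_os in SUPPORTED_DCS[functional_level]
```

For example, can_add_dc("Windows Server 2003", "Windows 2000") returns False, matching the rule that Windows 2000 domain controllers cannot be added once the level is raised.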
Domain controller rename tool
  Windows 2000 mixed: Disabled
  Windows 2000 native: Disabled
  Windows Server 2003: Enabled

Update logon timestamp
  Windows 2000 mixed: Disabled
  Windows 2000 native: Disabled
  Windows Server 2003: Enabled

User password on InetOrgPerson object
  Windows 2000 mixed: Disabled
  Windows 2000 native: Disabled
  Windows Server 2003: Enabled

Universal groups
  Windows 2000 mixed: Enabled for distribution groups; disabled for security groups
  Windows 2000 native: Enabled; allows both security and distribution groups
  Windows Server 2003: Enabled; allows both security and distribution groups

Group nesting
  Windows 2000 mixed: Enabled for distribution groups; disabled for security groups, except for domain local security groups that can have global groups as members
  Windows 2000 native: Enabled; allows full group nesting
  Windows Server 2003: Enabled; allows full group nesting

Converting groups
  Windows 2000 mixed: Disabled; no group conversions allowed
  Windows 2000 native: Enabled; allows conversion between security groups and distribution groups
  Windows Server 2003: Enabled; allows conversion between security groups and distribution groups

SID history
  Windows 2000 mixed: Disabled
  Windows 2000 native: Enabled; allows migration of security principals from one domain to another
  Windows Server 2003: Enabled; allows migration of security principals from one domain to another
7.4 Forest functionality
Forest functionality enables features across all the domains within your forest. Three forest
functional levels are available: Windows 2000 (default), Windows Server 2003 interim, and
Windows Server 2003. By default, forests operate at the Windows 2000 functional level. You can
raise the forest functional level to Windows Server 2003.
The following table lists the forest functional levels and their corresponding supported domain
controllers:
Forest functional level         Domain controllers supported
Windows 2000 (default)          Windows NT 4.0, Windows 2000, Windows Server 2003 family
Windows Server 2003 interim     Windows NT 4.0, Windows Server 2003 family
Windows Server 2003             Windows Server 2003 family
Once the forest functional level has been raised, domain controllers running earlier operating
systems cannot be introduced into the forest. For example, if you raise the forest functional level
to Windows Server 2003, domain controllers running Windows 2000 Server cannot be added to
the forest.
If you are upgrading your first Windows NT 4.0 domain so that it becomes the first domain in a
new Windows Server 2003 forest, you can set the domain functional level to Windows
Server 2003 interim.
The following table describes the forest-wide features that are enabled for the Windows 2000 and
Windows Server 2003 forest functional levels.
Global catalog replication improvements
  Windows 2000: Enabled if both replication partners are running Windows Server 2003; otherwise, disabled
  Windows Server 2003: Enabled

Defunct schema objects
  Windows 2000: Disabled
  Windows Server 2003: Enabled

Forest trusts
  Windows 2000: Disabled
  Windows Server 2003: Enabled

Linked value replication
  Windows 2000: Disabled
  Windows Server 2003: Enabled

Domain rename
  Windows 2000: Disabled
  Windows Server 2003: Enabled

Improved Active Directory replication algorithms
  Windows 2000: Disabled
  Windows Server 2003: Enabled

Dynamic auxiliary classes
  Windows 2000: Disabled
  Windows Server 2003: Enabled

InetOrgPerson objectClass change
  Windows 2000: Disabled
  Windows Server 2003: Enabled

7.5 Raising domain and forest functional levels


When Active Directory is installed on a server running Windows Server 2003, a set of basic
Active Directory features is enabled by default. In addition to the basic Active Directory features on
individual domain controllers, new domain- and forest-wide Active Directory features become
available when all domain controllers in a domain or forest are running Windows Server 2003.
To enable the new domain-wide features, all domain controllers in the domain must be running
Windows Server 2003, and the domain functional level must be raised to Windows Server 2003.
To enable new forest-wide features, all domain controllers in the forest must be running Windows
Server 2003, and the forest functional level must be raised to Windows Server 2003. Before
raising the forest functional level to Windows Server 2003, verify that all domains in the forest are
set to the domain functional level of Windows 2000 native or Windows Server 2003. Note that
domains that are set to the domain functional level of Windows 2000 native will automatically be
raised to Windows Server 2003 at the same time the forest functional level is raised to Windows
Server 2003.
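The prerequisite check and the automatic raise of Windows 2000 native domains described above can be modeled as a short Python sketch (the function name and dictionary shape are hypothetical, chosen for illustration):

```python
# Hypothetical model of raising the forest functional level to
# Windows Server 2003; not an administration tool.
def raise_forest_level(domain_levels):
    """domain_levels maps each domain name to its domain functional level."""
    allowed = {"Windows 2000 native", "Windows Server 2003"}
    if not all(level in allowed for level in domain_levels.values()):
        # Windows 2000 mixed or interim domains block the forest-level raise.
        raise ValueError("every domain must be at Windows 2000 native or higher")
    # Windows 2000 native domains are raised automatically with the forest.
    return {name: "Windows Server 2003" for name in domain_levels}
```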

7.6 Application directory partitions


An application directory partition is a directory partition that is replicated only to specific domain
controllers. A domain controller that participates in the replication of a particular application
directory partition hosts a replica of that partition. Only domain controllers running Windows
Server 2003 can host a replica of an application directory partition.
Applications and services can use application directory partitions to store application-specific
data. Application directory partitions can contain any type of object except security principals.
TAPI is an example of a service that stores its application-specific data in an application directory
partition.
Application directory partitions are usually created by the applications that will use them to store
and replicate data. For testing and troubleshooting purposes, members of the Enterprise Admins
group can manually create or manage application directory partitions using the Ntdsutil
command-line tool.
One of the benefits of an application directory partition is that, for redundancy, availability, or fault
tolerance, the data in it can be replicated to different domain controllers in a forest. The data can
be replicated to a specific domain controller or any set of domain controllers anywhere in the
forest. This differs from a domain directory partition in which data is replicated to all domain
controllers in that domain. Storing application data in an application directory partition instead of
in a domain directory partition may reduce replication traffic because the application data is only
replicated to specific domain controllers. Some applications may use application directory
partitions to replicate data only to servers where the data will be locally useful.


7.6.1 Application directory partition naming


An application directory partition is part of the overall forest namespace just like a domain
directory partition. It follows the same DNS and distinguished names naming conventions as a
domain directory partition. An application directory partition can appear anywhere in the forest
namespace that a domain directory partition can appear.
There are three possible application directory partition placements within your forest namespace:
A child of a domain directory partition.
A child of an application directory partition.
A new tree in the forest.
If you created an application directory partition called example1 as a child of the microsoft.com
domain, the DNS name of the application directory partition would be example1.microsoft.com.
The distinguished name of the application directory partition would be dc=example1,
dc=microsoft, dc=com.
If you then created an application directory partition called example2 as a child of
example1.microsoft.com, the DNS name of the application directory partition would be
example2.example1.microsoft.com and the distinguished name would be dc=example2,
dc=example1, dc=microsoft, dc=com.
If the domain microsoft.com was the root of the only domain tree in your forest, and you created
an application directory partition with the DNS name of example1 and the distinguished name of
dc=example1, this application directory partition is not in the same tree as the microsoft.com
domain. This application directory partition would be the root of a new tree in the forest.
Domain directory partitions cannot be children of an application directory partition. For example, if
you created an application directory partition with the DNS name of example1.microsoft.com, you
could not create a domain with the DNS name of domain.example1.microsoft.com.
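The naming convention above is mechanical: each DNS label becomes one dc= component of the distinguished name. A minimal sketch:

```python
# Convert a partition's DNS name to its distinguished name, following the
# convention shown above (each DNS label becomes one dc= component).
def dns_to_dn(dns_name):
    return ",".join("dc=" + label for label in dns_name.split("."))
```

For example, dns_to_dn("example1.microsoft.com") returns "dc=example1,dc=microsoft,dc=com".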

7.6.2 Application directory partition replication


The Knowledge Consistency Checker (KCC) automatically generates and maintains the
replication topology for all application directory partitions in the enterprise. When an application
directory partition has replicas in more than one site, those replicas follow the same intersite
replication schedule as the domain directory partition.

7.6.3 Security descriptor reference domain


Every container and object on the network has a set of access control information attached to it.
Known as a security descriptor, this information controls the type of access allowed by users,
groups, and computers. If the object or container is not assigned a security descriptor by the
application or service that created it, then it is assigned the default security descriptor for that
object class as defined in the schema. This default security descriptor is ambiguous in that it may
assign members of the Domain Admins group read permissions to the object, but it does not
specify to what domain the domain administrators belong. When this object is created in a
domain naming partition, that domain naming partition is used to specify which Domain Admins
group actually is assigned read permission. For example, if the object is created in
mydomain.microsoft.com then members of the mydomain Domain Admins group would be
assigned read permission.
When an object is created in an application directory partition, the definition of the default security
descriptor is difficult because an application directory partition can have replicas on different
domain controllers belonging to different domains. Because of this potential ambiguity, a default
security descriptor reference domain is assigned when the application directory partition is
created.
The default security descriptor reference domain defines what domain name to use when an
object in the application directory partition needs to define a domain value for the default security
descriptor. The default security descriptor reference domain is assigned at the time of creation.


If the application directory partition is a child of a domain directory partition, by default, the parent
domain directory partition becomes the security descriptor reference domain. If the application
directory partition is a child object of another application directory partition, the security descriptor
reference domain of the parent application directory partition becomes the reference domain of
the new, child, application directory partition. If the new application directory partition is created
as the root of a new tree, then the forest root domain is used as the default security descriptor
reference domain.
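The default selection rules above can be summarized in a small Python sketch (the parent_kind values, parameter names, and forest root default are made up for illustration):

```python
# Hypothetical model of how the default security descriptor reference domain
# is chosen when an application directory partition is created.
def default_reference_domain(parent_kind, parent_name=None,
                             parent_ref=None, forest_root="microsoft.com"):
    if parent_kind == "domain":
        # Child of a domain directory partition: the parent domain is used.
        return parent_name
    if parent_kind == "application":
        # Child of another application directory partition: inherit the
        # parent partition's reference domain.
        return parent_ref
    # Root of a new tree: the forest root domain is used.
    return forest_root
```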
You can manually specify a security descriptor reference domain using Ntdsutil. However, if you
plan to change the default security descriptor reference domain of a particular application directory
partition, you should do so before creating the first instance of that partition. To do this, you must
prepare the cross-reference object and change the default security descriptor reference domain
before completing the application directory partition creation process.

7.6.4 Application directory partitions and domain controller demotion


If a domain controller holds a replica of an application directory partition, then you must remove
the domain controller from the replica set of the application directory partition or delete the
application directory partition before you can demote the domain controller.
If a domain controller holds the last replica of a particular application directory partition, then you
must delete the application directory partition before you can demote the domain controller.
The Active Directory Installation Wizard will not remove a replica or delete an application directory
partition programmatically. You must decide when it is safe to delete the last replica of a
particular partition.
Before deleting the last replica of an application directory partition, identify the applications that
use the application directory partition, determine if it is safe to delete the last replica, identify the
partition deletion tool provided by the application, and then remove the application directory
partition by using the tool provided or by using the Ntdsutil command-line tool.
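The pre-demotion decision described above can be sketched as follows (function and parameter names are hypothetical):

```python
# Hypothetical sketch: for each application directory partition hosted on a
# domain controller being demoted, decide what must happen first.
def prepare_for_demotion(hosted_partitions, replica_counts):
    actions = {}
    for partition in hosted_partitions:
        if replica_counts[partition] == 1:
            # This DC holds the last replica: the partition itself must be
            # deleted (its data is permanently lost).
            actions[partition] = "delete partition"
        else:
            # Other replicas exist: removing this DC's replica is enough.
            actions[partition] = "remove replica"
    return actions
```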

7.6.5 Identify the applications that use the application directory partition
To determine what application directory partitions are hosted on a computer, refer to the list on
the first page of the Active Directory Installation Wizard. If the list does not provide enough
information to identify the programs using a particular application directory partition, you may be
able to identify them in one of the following ways:
Speak to a member of the Enterprise Admins group.
Consult the network change control records for your organization.
Use LDP or ADSI Edit to view the data contained in the partition.

7.6.6 Determine if it is safe to delete the last replica


Removing the last replica of an application directory partition will result in the permanent loss of
any data contained in the partition. If you have identified the applications using the application
directory partition, consult the documentation provided with those applications to determine if
there is any reason to keep the data. If the applications that use the application directory partition
are out of service, it is probably safe to remove the partition.
If it is not safe to delete the last replica, or if you cannot determine whether or not it is safe, and
you must demote the domain controller holding the last replica of a particular application directory
partition, follow these steps: Add a replica of the partition on another domain controller, force the
replication of the contents of the application directory partition to the domain controller holding the
new replica, and then remove the replica of the partition on the domain controller to be demoted

7.6.7 Identify the partition deletion tool provided by the application


Most applications that create application directory partitions provide a utility to remove the
partitions. When possible, always delete an application directory partition using the utility
provided. For example, to delete a TAPI partition, use the Tapicfg.exe command-line tool. For
more information about TAPI and removing TAPI application directory partitions


7.6.8 Remove the application directory partition using the tool provided or use Ntdsutil
Refer to the application's documentation for information about removing application directory
partitions that were created and used by that application.

7.7 Managing application directory partitions


You can use the following tools to create, delete, or manage application directory partitions:
Application-specific tools from the application vendor
Ntdsutil command-line tool
LDP
Active Directory Service Interfaces (ADSI)

7.7.1 Creating an application directory partition


When you create an application directory partition, you are creating the first instance of this
partition. You can create an application directory partition by using the create nc option in the
domain management menu of Ntdsutil. When creating an application directory partition using
LDP or ADSI, provide a description in the description attribute of the domain DNS object that
indicates the specific application that will use the partition. For example, if the application
directory partition will be used to store data for a Microsoft accounting program, the description
could be Microsoft accounting application. Ntdsutil does not facilitate the creation of a description.

7.7.2 Deleting an application directory partition


When you delete an application directory partition, you are removing all replicas of that partition
from your forest. You can delete an application directory partition by using the delete nc
command in the domain management menu of Ntdsutil. The deletion process will need to
replicate to all domain controllers that contain a replica of the application directory partition before
the deletion process is complete. Any data that is contained in the application directory partition
will be lost.

7.7.3 Adding and removing a replica of an application directory partition


An application directory partition replica is an instance of the partition on another domain
controller. The information in the application directory partition is replicated between the domain
controllers. Application directory partition replicas are created for either redundancy or data
access purposes. You can add a replica of an application directory partition by using the add nc
replica command in the domain management menu of Ntdsutil. You can remove an application
directory partition replica by using the delete nc replica command in the domain management
menu of Ntdsutil.

7.7.4 Setting application directory partition reference domain


The security descriptor reference domain defines a domain name for the default security
descriptor for objects in the application directory partition. By default, the security descriptor
reference domain is the parent domain of the application directory partition. If the application
directory partition is a child of another application directory partition, the default security
descriptor reference domain is the security descriptor reference domain of the parent application
directory partition. If the application directory partition has no parent, the forest root domain
becomes the default security descriptor reference domain. You can use Ntdsutil to change the
default security descriptor reference domain.

7.7.5 Setting replication notification delays


Changes made to a particular directory partition on a particular domain controller are replicated to
the other domain controllers that contain that directory partition. The domain controller on which
the change was made notifies its replication partners that it has a change. You can configure how
long the domain controller will wait to send the change notification to its first replication partner.


You can also configure how long it waits to send the subsequent change notification to its
remaining replication partners. These delays can be set for any directory partition (including
domain directory partitions) on a particular domain controller.

7.7.6 Displaying application directory partition information


Any domain controller that holds a replica of a particular directory partition (including application
directory partitions) is said to be a member of the replica set for that directory partition. You can
use Ntdsutil to list the domain controllers that are members of a particular replica set for an
application directory partition. Adding a domain controller to the replica set attribute on the
cross-reference object does not create the replica, but the domain controller will be displayed
when the list nc replica command is used in Ntdsutil. The creation of the instance must replicate
before the creation of the replica is complete.

7.7.7 Delegating the creation of application directory partitions


There are two things that happen when creating an application directory partition:
Creation of the cross-reference object.
Creation of the application directory partition root node.
Normally only members of the Enterprise Admins group can create an application directory
partition. However, it is possible for a member of the Enterprise Admins group to prepare a cross-
reference object for the application directory partition and to delegate the rest of the process to
someone with more limited permissions.
The cross-reference object for an application directory partition holds several valuable pieces of
information, including the domain controllers that are to have a replica of this partition and the
security descriptor reference domain. The partition root node is the Active Directory object at the
root of the partition
An Enterprise Admin can create the cross-reference object and then delegate the right to create
the application directory partition root node to a person or group with fewer permissions. Both creation
of the cross-reference object and the application directory partition root node can be
accomplished using Ntdsutil.
After using Ntdsutil to create the cross-reference object, the enterprise administrator must modify
the cross-reference object's access control list to allow the delegated administrator to modify this
cross-reference. This will allow the delegated administrator to create the application directory
partition and modify the list of domain controllers that holds replicas of this application directory
partition. The delegated administrator must use the names of the application directory partition
and the domain controller that were specified during the precreation process.

7.8 Operations master roles


Active Directory supports multimaster replication of the directory data store between all domain
controllers in the domain, so all domain controllers in a domain are essentially peers. However,
some changes are impractical to perform using multimaster replication, so, for each of these
types of changes, one domain controller, called the operations master, accepts requests for such
changes.
In every forest, there are at least five operations master roles that are assigned to one or more
domain controllers. Forest-wide operations master roles must appear only once in every forest.
Domain-wide operations master roles must appear once in every domain in the forest.

7.8.1 Forest-wide operations master roles


Every forest must have the following roles:
Schema master
Domain naming master
These roles must be unique in the forest. This means that throughout the entire forest there can
be only one schema master and one domain naming master.


Schema master
The Schema master domain controller controls all updates and modifications to the schema. To
update the schema of a forest, you must have access to the schema master. There can be only
one schema master in the entire forest.
Domain naming master
The domain controller holding the domain naming master role controls the addition or removal of
domains in the forest. There can be only one domain naming master in the entire forest.
Domain-wide operations master roles
Every domain in the forest must have the following roles:
Relative ID (RID) master
Primary domain controller (PDC) emulator master
Infrastructure master
These roles must be unique in each domain. This means that each domain in the forest can have
only one RID master, PDC emulator master, and infrastructure master.
RID master
The RID master allocates sequences of RIDs to each of the various domain controllers in its
domain. At any time, there can be only one domain controller acting as the RID master in each
domain in the forest.
Whenever a domain controller creates a user, group, or computer object, it assigns the object a
unique security ID (SID).
The SID consists of a domain SID, which is the same for all SIDs created in the domain, and a
RID, which is unique for each SID created in the domain.
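The SID structure can be illustrated with a short sketch (the domain SID value below is invented for the example):

```python
# Illustrative only: a SID is the domain SID (shared by every SID created in
# the domain) followed by a RID that is unique within the domain.
def make_sid(domain_sid, rid):
    return domain_sid + "-" + str(rid)

domain_sid = "S-1-5-21-1004336348-1177238915-682003330"  # example value
user_sid = make_sid(domain_sid, 1105)  # RID drawn from the DC's allocated pool
```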
To move an object between domains (using Movetree.exe), you must initiate the move on the
domain controller acting as the RID master of the domain that currently contains the object.
PDC emulator master
If the domain contains computers operating without Windows 2000 or Windows XP Professional
client software or if it contains Windows NT backup domain controllers (BDCs), the PDC emulator
master acts as a Windows NT primary domain controller. It processes password changes from
clients and replicates updates to the BDCs. At any time, there can be only one domain controller
acting as the PDC emulator master in each domain in the forest.
By default, the PDC emulator master is also responsible for synchronizing the time on all domain
controllers throughout the domain. The PDC emulator of a domain gets its clock set to the clock
on an arbitrary domain controller in the parent domain. The PDC emulator in the parent domain
should be configured to synchronize with an external time source. You can synchronize the time
on the PDC emulator with an external server by executing the "net time" command with the
following syntax:
net time \\ServerName /setsntp:TimeSource
The end result is that the clocks of all computers running Windows Server 2003 or Windows 2000
in the entire forest are within seconds of each other.
The PDC emulator receives preferential replication of password changes performed by other
domain controllers in the domain. If a password was recently changed, that change takes time to
replicate to every domain controller in the domain. If a logon authentication fails at another
domain controller due to a bad password, that domain controller will forward the authentication
request to the PDC emulator before rejecting the logon attempt.
The domain controller configured with the PDC emulator role supports two authentication
protocols:
the Kerberos V5 protocol
the NTLM protocol
Infrastructure master
At any time, there can be only one domain controller acting as the infrastructure master in each
domain. The infrastructure master is responsible for updating references from objects in its
domain to objects in other domains. The infrastructure master compares its data with that of a
global catalog. Global catalogs receive regular updates for objects in all domains through


replication, so the global catalog data will always be up to date. If the infrastructure master finds
data that is out of date, it requests the updated data from a global catalog. The infrastructure
master then replicates that updated data to the other domain controllers in the domain.
The infrastructure master is also responsible for updating the group-to-user references whenever
the members of groups are renamed or changed. When you rename or move a member of a
group (and that member resides in a different domain from the group), the group may temporarily
appear not to contain that member. The infrastructure master of the group's domain is
responsible for updating the group so it knows the new name or location of the member. This
prevents the loss of group memberships associated with a user account when the user account is
renamed or moved. The infrastructure master distributes the update via multimaster replication.
There is no compromise to security during the time between the member rename and the group
update. Only an administrator looking at that particular group membership would notice the
temporary inconsistency.

7.8.2 Transferring operations master roles


Transferring an operations master role means moving it from one domain controller to another with
the cooperation of the original role holder. Depending upon the operations master role to be
transferred, you perform the role transfer using one of the three Active Directory consoles in MMC:

Role                     Console in MMC
Schema master            Active Directory Schema
Domain naming master     Active Directory Domains and Trusts
RID master               Active Directory Users and Computers
PDC emulator master      Active Directory Users and Computers
Infrastructure master    Active Directory Users and Computers
To transfer the schema master role

Using the Windows interface


1. Open the Active Directory Schema snap-in.
2. In the console tree, right-click Active Directory Schema and then click Change Domain
Controller.
3. Click Specify Name and type the name of the domain controller that you want to hold the
schema master role.
4. In the console tree, right-click Active Directory Schema, and then click Operations
Master.
5. Click Change.

Using a command line


1. Open Command Prompt.
2. Type:
ntdsutil
3. At the ntdsutil command prompt, type:
roles
4. At the fsmo maintenance command prompt, type:
connection
5. At the server connections command prompt, type:
connect to server DomainController
6. At the server connections command prompt, type:
quit
7. At the fsmo maintenance command prompt, type:
transfer schema master
To transfer the domain naming master role


Using the Windows interface


1. Open Active Directory Domains and Trusts.
2. In the console tree, right-click Active Directory Domains and Trusts, and then click
Connect to Domain Controller.
3. In Enter the name of another domain controller, type the name of the domain
controller you want to hold the domain naming master role.
Or, click the domain controller in the list of available domain controllers.
4. In the console tree, right-click Active Directory Domains and Trusts, and then click
Operations Master.
5. Click Change.
To transfer the infrastructure master role
Using the Windows interface
1. Open Active Directory Users and Computers.
2. In the console tree, right-click Active Directory Users and Computers, and then click
Connect to Domain Controller.
3. In Enter the name of another domain controller, type the name of the domain
controller you want to hold the infrastructure master role.
Or, click the domain controller in the list of available domain controllers.
4. In the console tree, right-click Active Directory Users and Computers, point to All
Tasks, and then click Operations Masters.
5. On the Infrastructure tab, click Change.

7.8.3 Responding to operations master failures


Some of the operations master roles are crucial to the operation of your network. Others can be
unavailable for quite some time before their absence becomes a problem. Generally, you will
notice that a single master operations role holder is unavailable when you try to perform some
function controlled by the particular operations master.
If an operations master is not available due to computer failure or network problems, you can
seize the operations master role. This is also referred to as forcing the transfer of the operations
master role. Do not seize the operations master role if you can transfer it instead.
Schema master failure
Temporary loss of the schema master is not visible to network users. It will not be visible to
network administrators either, unless they are trying to modify the schema or install an application
that modifies the schema during installation.
If the schema master will be unavailable for an unacceptable length of time, you can seize the
role to the standby operations master. However, seizing this role is a drastic step that you should
take only when the failure of the schema master is permanent.
Domain naming master failure
Temporary loss of the domain naming master is not visible to network users. It will not be visible
to network administrators either, unless they are trying to add a domain to the forest or remove a
domain from the forest.
If the domain naming master will be unavailable for an unacceptable length of time, you can seize
the role to the standby operations master. However, seizing this role is a drastic step that you
should take only when the failure of the domain naming master is permanent.
RID master failure
Temporary loss of the RID master is not visible to network users. It will not be visible to network
administrators either, unless they are creating objects and the domain in which they are creating
the objects runs out of RIDs.
If the RID master will be unavailable for an unacceptable length of time, you can seize the role to
the standby operations master. However, seizing this role is a drastic step that you should take
only when the failure of the RID master is permanent.


PDC emulator master failure


The loss of the primary domain controller (PDC) emulator master affects network users.
Therefore, when the PDC emulator master is not available, you may need to immediately seize
the role.
If the current PDC emulator master will be unavailable for an unacceptable length of time and its
domain has clients without Windows 2000 client software, or if it contains Windows NT backup
domain controllers, seize the PDC emulator master role to the standby operations master. When
the original PDC emulator master is returned to service, you can return the role to the original
domain controller.
Infrastructure master failure
Temporary loss of the infrastructure master is not visible to network users. It will not be visible to
network administrators either, unless they have recently moved or renamed a large number of
accounts.
If the infrastructure master will be unavailable for an unacceptable length of time, you can seize
the role to a domain controller that is not a global catalog but is well connected to a global catalog
(from any domain), ideally in the same site as the current global catalog. When the original
infrastructure master is returned to service, you can transfer the role back to the original domain
controller.

7.9 Sites overview


Sites in Active Directory represent the physical structure, or topology, of your network. Active
Directory uses topology information, stored as site and site link objects in the directory, to build
the most efficient replication topology. You use Active Directory Sites and Services to define sites
and site links. A site is a set of well-connected subnets. Sites differ from domains: sites represent
the physical structure of your network, while domains represent the logical structure of your
organization.

7.9.1 Using sites


Sites help facilitate several activities within Active Directory, including:
Replication. Active Directory balances the need for up-to-date directory information with
the need for bandwidth optimization by replicating information within a site more frequently
than between sites. You can also configure the relative cost of connectivity between sites to
further optimize replication.
Authentication. Site information helps make authentication faster and more efficient.
When a client logs on to a domain, it first searches its local site for a domain controller to
authenticate against. By establishing multiple sites, you can ensure that clients
authenticate against domain controllers nearest to them, reducing authentication latency
and keeping traffic off WAN connections.
Active Directory-enabled services. Active Directory-enabled services can leverage site
and subnet information to enable clients to locate the nearest service providers more
easily.

7.9.2 Defining sites using subnets


In Active Directory, a site is a set of computers well connected by a high-speed network, such as
a LAN. All computers within the same site typically reside in the same building, or on the same
campus network. A single site consists of one or more Internet Protocol (IP) subnets. Subnets are
subdivisions of an IP network, with each subnet possessing its own unique network address. A
subnet address groups neighboring computers in much the same way that postal codes group
neighboring postal addresses. The following figure shows several clients within a subnet that
defines an Active Directory site.


Sites and subnets are represented in Active Directory by site and subnet objects, which you
create through Active Directory Sites and Services. Each site object is associated with one or
more subnet objects.
(An object is an entity, such as a file, folder, shared folder, printer, or Active Directory object,
described by a distinct, named set of attributes. For example, the attributes of a File object
include its name, location, and size; the attributes of an Active Directory User object might
include the user's first name, last name, and e-mail address. For OLE and ActiveX, an object can
also be any piece of information that can be linked to, or embedded into, another object.)

7.9.3 Assigning computers to sites


Computers are assigned to sites based on their Internet Protocol (IP) address and subnet mask.
Site assignment is handled differently for clients and member servers than for domain controllers.
For a client, site assignment is dynamically determined by its IP address and subnet mask during
logon. For a domain controller, site membership is determined by the location of its associated
server object in Active Directory.
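The IP-to-site mapping described above can be sketched with Python's standard `ipaddress` module. The subnet prefixes and site names below are hypothetical examples, not values from any real directory:

```python
import ipaddress

# Hypothetical subnet-to-site associations, as would be configured in
# Active Directory Sites and Services (subnet objects linked to site objects).
SUBNET_TO_SITE = {
    ipaddress.ip_network("10.1.0.0/16"): "Seattle",
    ipaddress.ip_network("10.2.0.0/16"): "London",
}

def site_for_client(ip):
    """Return the site whose subnet contains the client's IP, if any."""
    addr = ipaddress.ip_address(ip)
    # Prefer the most specific (longest-prefix) matching subnet.
    matches = [net for net in SUBNET_TO_SITE if addr in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)
    return SUBNET_TO_SITE[best]

print(site_for_client("10.2.34.7"))  # -> London
```

A client in a subnet with no matching subnet object would get no site assignment, which mirrors why every subnet in use should be associated with a site.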

7.9.4 Understanding sites and domains


In Active Directory, sites map the physical structure of your network, while domains map the
logical or administrative structure of your organization. This separation of physical and logical
structure provides the following benefits:
You can design and maintain the logical and physical structures of your network
independently.
You do not have to base domain namespaces on your physical network.
You can deploy domain controllers for multiple domains within the same site. You can
also deploy domain controllers for the same domain in multiple sites.


7.9.5 Replication overview


Except for very small networks, directory data must reside in more than one place on the network
to be equally useful to all users. Through replication, the Active Directory directory service
maintains replicas of directory data on multiple domain controllers, ensuring directory availability
and performance for all users. Active Directory uses a multimaster replication model, allowing you
to make directory changes at any domain controller, not just at a designated primary domain
controller. Active Directory relies on the concept of sites to help keep replication efficient, and on
the Knowledge Consistency Checker (KCC) to automatically determine the best replication
topology for the network.
Replication enhancements in the Windows Server 2003 family
The Microsoft Windows Server 2003 family includes enhancements that make replication both
more efficient and more scalable across a larger number of domains and sites. These
include refinements in memory usage, enhancements to the Windows 2000 spanning tree
algorithm, a completely new spanning tree algorithm for Windows Server 2003 forests, and a new
load balancing tool (included in the Windows Resource Kit tools).
In a forest set to the Windows 2000 functional level, the replication enhancements provide gains
in replication efficiency and scalability, even when sites and domains contain domain controllers
running Windows 2000. If a site contains at least one domain controller running Windows
Server 2003, then a domain controller running Windows Server 2003 assumes the intersite
topology generator role for the site, allowing the enhancements to take effect.
In a forest set to the Windows Server 2003 functional level, the new Windows Server 2003
spanning tree algorithm goes into effect for larger gains in both efficiency and scalability. For
example, using the original spanning tree algorithm from Windows 2000, one domain can contain
up to 300 sites. With the new Windows Server 2003 algorithm, one domain can contain at least
3,000 sites. In the new algorithm, the intersite topology generator in each site uses a
randomized selection process to determine the bridgehead server for the site.

7.9.6 How replication works


To keep directory data on all domain controllers consistent and up to date, Active Directory
replicates directory changes on a regular basis. (In Active Directory, replication synchronizes the
schema, configuration, application, and domain directory partitions between domain controllers.)
Replication occurs over standard network protocols, uses change tracking information to prevent
unnecessary replication, and uses linked value replication to improve efficiency.
Transferring replication data
Active Directory uses RPC over IP to transfer replication data between domain controllers. RPC
over IP is used for both intersite and intrasite replication. To keep data secure while in transit, RPC
over IP replication uses both authentication (using the Kerberos V5 authentication protocol) and
data encryption.
When a direct or reliable IP connection is not available, replication between sites can be
configured to use Simple Mail Transfer Protocol (SMTP). However, SMTP replication functionality
is limited and requires a certification authority (CA). SMTP can be used only to replicate the
configuration, schema, and application directory partitions; it does not support the replication of
domain directory partitions.
Preventing unnecessary replication
Once a domain controller has processed a directory change from another domain controller
successfully, it should not try to replicate those changes back to the domain controller that sent
the change. In addition, a domain controller should avoid sending updates to another domain
controller if the target domain controller has already received that same update from a different
replication partner. To prevent such unnecessary replication, Active Directory uses change
tracking information stored in the directory.
Resolving conflicting changes
It is possible for two different users to make changes to the exact same object property and to
have these changes applied at two different domain controllers in the same domain before
replication of either change occurs. In such a case, both changes are replicated as new changes,
creating a conflict. To resolve this conflict, domain controllers that receive these conflicting
changes examine the attribute data contained within the changes, each of which holds a version
and a timestamp. Domain controllers will accept the change with the higher version and discard
the other change. If the versions are identical, domain controllers will accept the change with the
more recent timestamp.
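The version-then-timestamp rule described above can be sketched as follows. The tuple layout is an illustration only; the real per-attribute stamps Active Directory keeps also carry additional data, such as the originating server's identity:

```python
from collections import namedtuple

# Simplified per-attribute change stamp (illustrative, not the real format).
Change = namedtuple("Change", ["value", "version", "timestamp"])

def resolve(a, b):
    """Pick the winning change: the higher version wins; when versions
    are identical, the more recent timestamp wins."""
    if a.version != b.version:
        return a if a.version > b.version else b
    return a if a.timestamp >= b.timestamp else b

# Two conflicting writes to the same attribute at the same version:
older = Change(value="555-0100", version=3, timestamp=1700000000)
newer = Change(value="555-0199", version=3, timestamp=1700000060)
print(resolve(older, newer).value)  # equal versions, later timestamp wins
```

Because every domain controller applies the same deterministic rule, all replicas converge on the same winning value regardless of the order in which they receive the conflicting changes.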
Improving replication efficiency
Introduced in the Windows Server 2003 family, linked value replication allows individual values of
a multivalued attribute to be replicated separately. In Windows 2000, when a change was made
to a member of a group (one example of a multivalued attribute with linked values) the entire
group had to be replicated. With linked value replication, only the group member that has
changed is replicated, and not the entire group. To enable linked value replication, you must raise
the forest functional level to Windows Server 2003.
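The efficiency gain from linked value replication can be illustrated by comparing what must be sent for a one-member change; the member names and group sizes here are arbitrary:

```python
def whole_object_update(group_members):
    """Windows 2000 behavior: the entire multivalued attribute replicates."""
    return list(group_members)

def linked_value_update(old_members, new_members):
    """Windows Server 2003 behavior: only added or removed values replicate."""
    old, new = set(old_members), set(new_members)
    return {"added": sorted(new - old), "removed": sorted(old - new)}

before = ["alice", "bob", "carol"]
after = ["alice", "bob", "carol", "dave"]

print(len(whole_object_update(after)))     # 4 values on the wire
print(linked_value_update(before, after))  # only the one changed member
```

For a group with thousands of members, replicating a single-member delta instead of the whole membership list is the difference that linked value replication provides.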
Replication within a site
Active Directory handles replication within a site, or intrasite replication, differently than replication
between sites because bandwidth within a site is more readily available. The Active Directory KCC
builds the intrasite replication topology using a bidirectional ring design. Intrasite replication is
optimized for speed, and directory updates within a site occur automatically on the basis of
change notification. Unlike replication data traveling between sites, directory updates replicated
within a site are not compressed.
Building the intrasite replication topology
The Knowledge Consistency Checker (KCC) on each domain controller automatically builds the
most efficient replication topology for intrasite replication, using a bidirectional ring design.
(A replication topology is the set of physical connections that domain controllers use to replicate
directory updates among domain controllers within sites and between sites. In the File Replication
service (FRS), it refers to the interconnections between replica set members; these
interconnections determine the path that data takes as it replicates to all replica set members.)
This bidirectional ring topology attempts to create at least two connections to each domain
controller (for fault tolerance) and no more than three hops between any two domain controllers
(to reduce replication latency). To prevent connections of more than three hops, the topology can
include shortcut connections across the ring. The KCC updates the replication topology regularly.
Determining when intrasite replication occurs
Directory updates made within a site are likely to have the most direct impact on local clients, so
intrasite replication is optimized for speed. Replication within a site occurs automatically on the
basis of change notification. Intrasite replication begins when you make a directory update on a
domain controller. By default, the source domain controller waits 15 seconds and then sends an
update notification to its closest replication partner. If the source domain controller has more than
one replication partner, subsequent notifications go out by default at 3 second intervals to each
partner. After receiving notification of a change, a partner domain controller sends a directory
update request to the source domain controller. The source domain controller responds to the
request with a replication operation. The 3 second notification interval prevents the source
domain controller from being overwhelmed with simultaneous update requests from its replication
partners.
For some directory updates in a site, the 15 second waiting time does not apply and replication
occurs immediately. Known as urgent replication, this immediate replication applies to critical
directory updates, including the assigning of account lockouts and changes in the account lockout
policy, the domain password policy, or the password on a domain controller account.
Replication between sites
Active Directory handles replication between sites, or intersite replication, differently than
replication within sites because bandwidth between sites is usually limited. The Active Directory
KCC builds the intersite replication topology using a least-cost spanning tree design. Intersite
replication is optimized for bandwidth efficiency, and directory updates between sites occur
automatically based on a configurable schedule. Directory updates replicated between sites are
compressed to preserve bandwidth.
Building the intersite replication topology
Active Directory automatically builds the most efficient intersite replication topology using
information you provide (through Active Directory Sites and Services) about your site
connections. The directory stores this information as site link objects. One domain controller per
site, called the intersite topology generator, is assigned to build the topology. (The intersite
topology generator is an Active Directory process that runs on one domain controller in a site; it
considers the cost of intersite connections, checks whether previously available domain
controllers are no longer available, and checks whether new domain controllers have been
added. The KCC process then updates the intersite replication topology accordingly.) A least-cost
spanning tree algorithm is used to eliminate redundant replication paths between sites. The
intersite replication topology is updated regularly to respond to any changes that occur in the
network. You can control intersite replication through the information you provide when you
create your site links.
Determining when intersite replication occurs
Active Directory preserves bandwidth between sites by minimizing the frequency of replication
and by allowing you to schedule the availability of site links for replication. By default, intersite
replication across each site link occurs every 180 minutes (3 hours). You can adjust this
frequency to match your specific needs. Be aware that increasing this frequency increases the
amount of bandwidth used by replication. In addition, you can schedule the availability of site links
for use by replication. By default, a site link is available to carry replication traffic 24 hours a day,
7 days a week. You can limit this schedule to specific days of the week and times of day. You
can, for example, schedule intersite replication so that it only occurs after normal business hours.

7.10 Managing replication


Active Directory relies on site configuration information to manage and optimize the process of
replication. Active Directory provides automatic configuration of these settings in some cases. In
addition, you can configure site-related information for your network using Active Directory Sites
and Services. Configurable information includes settings for site links, site link bridges, and
bridgehead servers.

7.10.1 Configuring site links


You can use site link settings to control replication between sites. Configurable settings include
the relative cost of each site link, the frequency of replication on each site link, and the scheduled
availability of each site link for replication.

7.10.2 Site link cost


The cost of a site link determines the relative preference of the Active Directory KCC for using a
site link in the replication topology. The higher the cost of the site link, the lower the KCC's
preference for using the site link. For example, if you have two site links, site link A and site link
B, and you set the cost of site link A to 150 and the cost of site link B to 200, the KCC will prefer
to use site link A in the replication topology. By default, the cost of a newly created site link is 100.
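The lower-cost-wins preference can be expressed as a small selection helper; this is a sketch of the preference rule only, not of the full least-cost spanning tree computation the KCC performs:

```python
def preferred_site_link(links):
    """Return the site link the KCC would prefer among candidates:
    the one with the lowest cost (ties broken alphabetically here,
    purely for determinism in this sketch)."""
    return min(links, key=lambda name: (links[name], name))

# The example from the text: cost 150 beats cost 200.
links = {"site link A": 150, "site link B": 200}
print(preferred_site_link(links))  # -> site link A
```

Assigning lower costs to faster or cheaper WAN connections is how you steer replication traffic onto the links you want it to use.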

7.10.3 Replication frequency


The replication frequency of a site link determines how often replication occurs over that site link.
By default, the replication frequency for a site link is 180 minutes, meaning that replication occurs
over that site link every 180 minutes, or three hours. Using Active Directory Sites and Services,
you can set the replication frequency from 15 minutes to 10,080 minutes (one week). A site link
must be available for any replication to occur. If a site link is not available when the number of
minutes between replication updates has passed, no replication will occur.
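The supported frequency range can be captured in a small validation helper; this illustrates the documented limits, and is not code that Active Directory itself runs:

```python
MIN_FREQUENCY = 15     # minutes: lowest value Active Directory Sites and Services accepts
MAX_FREQUENCY = 10080  # minutes: one week, the highest accepted value

def validate_frequency(minutes):
    """Reject replication frequencies outside the supported range."""
    if not MIN_FREQUENCY <= minutes <= MAX_FREQUENCY:
        raise ValueError(
            f"frequency must be between {MIN_FREQUENCY} and {MAX_FREQUENCY} minutes"
        )
    return minutes

print(validate_frequency(180))  # the default: every 3 hours
```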

7.10.4 Site link availability


The availability schedule for a site link determines during which hours or days of the week a site
link can be used for replication. By default, a site link is always available for replication, 24 hours
a day and 7 days a week. You can change this schedule, for example, to exclude business hours
during which your network is busy handling other types of traffic. Or, you can exclude particular
days on which you do not want replication to occur. Scheduling information is ignored by site links
that use the Simple Mail Transfer Protocol (SMTP) for replication.

7.10.5 Configuring site link bridges


By default, all site links are bridged, or transitive. This allows any two sites that are not connected
by an explicit site link to communicate directly, through a chain of intermediary site links and sites.
One advantage to bridging all site links is that your network is easier to maintain because you do
not need to create a site link to describe every possible path between pairs of sites.
Generally, you can leave automatic site link bridging enabled. However, you might want to
disable automatic site link bridging and create site link bridges manually just for specific site links,
in the following cases:
Your network is not fully routed (not every domain controller can directly communicate
with every other domain controller).
You have a network routing or security policy in place that prevents every domain
controller from being able to directly communicate with every other domain controller.
Your Active Directory design includes a large number of sites.

7.10.6 Configuring preferred bridgehead servers


When the KCC constructs the intersite replication topology, it automatically assigns one or more
bridgehead servers for each site to ensure that directory changes only need to be replicated
across a site link one time. It is recommended that you allow the KCC to make the bridgehead
server assignments. You can make the bridgehead server assignments manually through Active
Directory Sites and Services. However, doing so can potentially disrupt replication if one of your
manually assigned bridgehead servers becomes unavailable.


Section 8

8. Planning and Implementing User, Computer, and Group Strategies


8.1 User and computer accounts
8.1.1 User accounts
8.1.2 Computer accounts
8.1.3 Access control in Active Directory
8.1.4 Security descriptors
8.1.5 Object inheritance
8.1.6 User authentication
8.1.7 Organizational units

8.2 Manage Groups in Windows Server 2003


8.2.1 Manage Groups
8.2.2 Summary
8.2.3 Add a Group
8.2.4 Convert a Group to another Group Type
8.2.5 Change Group Scope
8.2.6 Delete a Group
8.2.7 Find a Group
8.2.8 Find Groups where a User Is a Member
8.3 Modify Group Properties
8.3.1 Remove a Member from a Group
8.3.2 Rename a Group

8.4 Nesting groups


8.5 Special identities
8.6 Authentication
8.6.1 Authentication types
8.6.2 Introduction to authentication
8.6.3 Interactive logon
8.6.4 Network authentication

8.7 Smart card


8.7.1 Understanding smart cards
8.7.2 Stored User Names and Passwords overview
8.7.3 How Stored User Names and Passwords works


8. Planning and Implementing User, Computer, and Group Strategies

8.1 User and computer accounts


Active Directory user accounts and computer accounts represent a physical entity such as a
computer or person. User accounts can also be used as dedicated service accounts for some
applications. User accounts and computer accounts (as well as groups) are also referred to as
security principals. Security principals are directory objects that are automatically assigned
security IDs (SIDs), which can be used to access domain resources. A user or computer account
is used to:
Authenticate the identity of a user or computer.
A user account enables a user to log on to computers and domains with an identity that
can be authenticated by the domain. Each user who logs on to the network should have
his or her own unique user account and password. To maximize security, you should
avoid multiple users sharing one account.
Authorize or deny access to domain resources.
Administer other security principals.
Audit actions performed using the user or computer account.

8.1.1 User accounts


The Users container located in Active Directory Users and Computers displays the three built-in
user accounts: Administrator, Guest, and HelpAssistant. These built-in user accounts are created
automatically when you create the domain.
Each built-in account has a different combination of rights and permissions. The Administrator
account has the most extensive rights and permissions over the domain, while the Guest account
has limited rights and permissions. The table below describes each default user account on
domain controllers running Windows Server 2003.
Default user account and description:

Administrator account
On a local computer, the Administrator account is the first account that is created when you
install an operating system on a new workstation, stand-alone server, or member server. By
default, this account has the highest level of administrative access to the local computer, and it is
a member of the Administrators group. In an Active Directory domain, it is the first account that is
created when you set up a new domain by using the Active Directory Installation Wizard. By
default, this account has the highest level of administrative access in the domain.
The Administrator account has full control of the domain and can assign user rights and access
control permissions to domain users as necessary. (Access control is a security mechanism that
determines which operations a user, group, service, or computer is authorized to perform on a
computer or on a particular object, such as a file, printer, registry subkey, or directory service
object.) This account must be used only for tasks that require administrative credentials, that is,
logon information that identifies a member of an administrative group such as Administrators,
Domain Admins, or DNS Admins; most system-wide or domain-wide tasks require administrative
credentials. It is recommended that you set up this account with a strong password.
The Administrator account is a default member of the Administrators, Domain Admins, Domain
Users, Enterprise Admins, Group Policy Creator Owners, and Schema Admins groups in Active
Directory. The Administrator account can never be deleted or removed from the Administrators
group, but it can be renamed or disabled. Because the Administrator account is known to exist on
many versions of Windows, renaming or disabling this account will make it more difficult for
malicious users to try to gain access to it.
Important: When the Administrator account is disabled, it can still be used to gain access to a
domain controller using Safe Mode (a method of starting Windows using basic files and drivers
only, without networking; Safe Mode is available by pressing the F8 key when prompted during
startup, and allows you to start your computer when a problem prevents it from starting normally).

Guest account
The Guest account is a built-in account used to log on to a computer running Windows when a
user does not have an account on the computer or domain, or in any of the domains trusted by
the computer's domain. It is used by people who do not have an actual account in the domain. A
user whose account is disabled (but not deleted) can also use the Guest account. The Guest
account does not require a password.
You can set rights and permissions for the Guest account just like any user account. (A
permission is a rule associated with an object to regulate which users can gain access to the
object and in what manner; permissions are granted or denied by the object's owner.) By default,
the Guest account is a member of the built-in Guests group and the Domain Guests global group,
which allows a user to log on to a domain. The Guest account is disabled by default, and it is
recommended that it stay disabled.

HelpAssistant account (installed with a Remote Assistance session)
The HelpAssistant account is the primary account used to establish a Remote Assistance
session. This account is created automatically when you request a Remote Assistance session
and has limited access to the computer. The HelpAssistant account is managed by the Remote
Desktop Help Session Manager service and will be automatically deleted if no Remote
Assistance requests are pending.

8.1.2 Computer accounts


Every computer running Windows NT, Windows 2000, Windows XP, or Windows Server 2003 that
joins a domain has a computer account. Similar to user accounts, computer
accounts provide a means for authenticating and auditing computer access to the network and to
domain resources. Each computer account must be unique.

8.1.3 Access control in Active Directory


Page 245 of 312 Testking
Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

Administrators can use access control to manage user access to shared resources for security
purposes. In Active Directory, access control is administered at the object level by setting
different levels of access, or permissions, to objects, such as Full Control, Write, Read, or No
Access. Access control in Active Directory defines how different users can use Active Directory
objects. By default, permissions on objects in Active Directory are set to the most secure setting.
The elements that define access control permissions on Active Directory objects include security
descriptors, object inheritance, and user authentication.

8.1.4 Security descriptors


Access control permissions are assigned to shared objects and Active Directory objects to control
how different users can use each object. A shared object, or shared resource, is an object that is
intended to be used over a network by one or more users and includes files, printers, folders, and
services. Both shared objects and Active Directory objects store access control permissions in
security descriptors.

8.1.5 Object inheritance


By default, Active Directory objects inherit ACEs from the security descriptor located in their
parent container object. Inheritance enables the access control information defined at a container
object in Active Directory to apply to the security descriptors of any subordinate objects, including
other containers and their objects. This eliminates the need to apply permissions each time a
child object is created. If necessary, you can change the inherited permissions. However, as a
best practice, avoid changing the default permissions or inheritance settings on Active Directory
objects.

8.1.6 User authentication


Active Directory also authenticates and authorizes users, groups, and computers to access
objects on the network. The Local Security Authority (LSA) is the security subsystem responsible
for all interactive user authentication and authorization services on a local computer. The LSA is
also used to process authentication requests made through the Kerberos V5 protocol or NTLM.
Once the identity of a user has been confirmed in Active Directory, the LSA on the authenticating
domain controller generates a user access token and associates a security ID (SID) with the user.
(A SID is a data structure of variable length that identifies user, group, and computer accounts.
Every account on a network is issued a unique SID when the account is first created; internal
processes in Windows refer to an account's SID rather than the account's user or group name.)
Access token. When a user is authenticated, LSA creates a security access token for
that user. An access token contains the user's name, the groups to which that user belongs,
a SID for the user, and all of the SIDs for the groups to which the user belongs. If you add a
user to a group after the user access token has been issued, the user must log off and log on
again before the access token will be updated.
Security ID (SID). Active Directory automatically assigns SIDs to security principal
objects at the time they are created. Security principals are accounts in Active Directory that
can be assigned permissions such as computer, group, or user accounts. Once a SID is
issued to the authenticated user, it is attached to the access token of the user.
The information in the access token is used to determine a user's level of access to objects
whenever the user attempts to access them. The SIDs in the access token are compared with the
list of SIDs that make up the DACL for the object to ensure that the user has sufficient permission
to access the object. This is because the access control process identifies user accounts by SID
rather than by name.
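The comparison described above can be sketched in a few lines: the SIDs in the access token are checked against the ACEs in the object's DACL, and an applicable deny entry blocks access. This is an illustrative sketch with simplified SID strings and ACE structures, not the actual Windows access-check implementation.

```python
# Illustrative sketch of the access check described above: the SIDs in a
# user's access token are compared against the object's DACL. An applicable
# deny entry blocks access; allow entries accumulate granted permissions.
# SID strings and the ACE layout here are simplified, not real formats.

def access_check(token_sids, dacl, requested):
    """Return True if the requested permissions are granted by the DACL."""
    granted = set()
    for ace_type, sid, perms in dacl:      # ACE: (type, sid, permissions)
        if sid not in token_sids:
            continue                       # ACE does not apply to this user
        if ace_type == "deny" and perms & requested:
            return False                   # an applicable deny blocks access
        if ace_type == "allow":
            granted |= perms
    return requested <= granted

READ, WRITE = {"read"}, {"write"}

token = {"S-1-5-21-1000", "S-1-5-21-512"}       # user SID plus a group SID
dacl = [
    ("allow", "S-1-5-21-512", READ | WRITE),    # the group is allowed read/write
    ("deny",  "S-1-5-21-2000", WRITE),          # deny for an unrelated SID
]

print(access_check(token, dacl, READ))              # True
print(access_check({"S-1-5-21-3000"}, dacl, READ))  # False: no matching ACE
```

This also illustrates why a user must log off and back on after being added to a group: the check only sees the SIDs already in the token.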

8.1.7 Organizational units


A particularly useful type of directory object contained within domains is the organizational unit.
Organizational units are Active Directory containers into which you can place users, groups,
computers, and other organizational units. An organizational unit cannot contain objects from
other domains.
An organizational unit is the smallest scope or unit to which you can assign Group Policy settings
or delegate administrative authority. Using organizational units, you can create containers within
a domain that represent the hierarchical, logical structures within your organization. You can then
manage the configuration and use of accounts and resources based on your organizational
model.

As shown in the figure, organizational units can contain other organizational units. A hierarchy of
containers can be extended as necessary to model your organization's hierarchy within a domain.
Using organizational units will help you minimize the number of domains required for your
network.
You can use organizational units to create an administrative model that can be scaled to any size.
A user can have administrative authority for all organizational units in a domain or for a single
organizational unit. An administrator of an organizational unit does not need to have
administrative authority for any other organizational units in the domain.

8.2 Manage Groups in Windows Server 2003


Groups are Active Directory or local computer objects that can contain users, contacts,
computers, and other groups. You can use groups to do the following:
Manage user and computer access to shared resources such as Active Directory objects and
their properties, network shares, files, directories, and printer queues.
Filter Group Policy settings.
Create e-mail distribution lists.

The default groups that are put in the Builtin container of Active Directory Users and Computers
are:
Account Operators
Administrators
Backup Operators
Guests
Incoming Forest Trust Builders (only appears in the forest root domain)
Network Configuration Operators
Performance Monitor Users

Performance Log Users


Pre-Windows 2000 Compatible Access
Print Operators
Remote Desktop Users
Replicator
Server Operators
Users
The predefined groups that are put in the Users container of Active Directory Users and
Computers are:
Cert Publishers
DnsAdmins (installed with DNS)
DNSUpdateProxy (installed with DNS)
Domain Admins
Domain Computers
Domain Controllers
Domain Guests
Domain Users
Enterprise Admins (only appears in the forest root domain)
Group Policy Creator Owners
IIS_WPG (installed with Internet Information Services)
RAS and IAS Servers
Schema Admins (only appears in the forest root domain)
Unlike groups, organizational units are used to create collections of objects in a single domain,
but do not confer membership. Organizational units are logical containers where you can put
users, groups, computers, and other organizational units. An organizational unit can contain objects only from its
parent domain. An organizational unit is the smallest scope to which you can apply a Group
Policy or delegate authority. The administration of an organizational unit and the objects it
contains can be delegated to an individual administrator or a group. Group Policy objects can be
applied to sites, domains or organizational units, but never to groups. A Group Policy object is a
collection of settings that affects users or computers. Group membership is used to filter which
Group Policy objects affect the users and computers in the site, domain, or organizational unit.
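That filtering behavior can be sketched as an intersection test between a user's group membership and the groups to which each linked GPO applies. The GPO names and group names below are hypothetical, and real filtering is done through the Apply Group Policy permission on the GPO rather than a plain set lookup.

```python
# Sketch of Group Policy security filtering: a GPO linked to a site, domain,
# or OU affects only users whose group membership intersects the groups the
# GPO is filtered to. Names here are hypothetical examples.

def applicable_gpos(linked_gpos, user_groups):
    """Return the linked GPOs whose security filtering matches the user."""
    return [gpo for gpo, apply_to in linked_gpos
            if apply_to & user_groups]          # non-empty intersection

linked = [
    ("Desktop Lockdown", {"Domain Users"}),
    ("Admin Tools",      {"Domain Admins"}),
]

print(applicable_gpos(linked, {"Domain Users"}))
# ['Desktop Lockdown']
```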

8.2.1 Manage Groups

8.2.2 Summary
To manage groups in Windows Server 2003, follow these steps.

8.2.3 Add a Group


To add a group, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName , where DomainName is the name of your
domain.
3. Right-click the folder where you want to add the group, point to New, and then click
Group.
4. In the Group name box, type a name for the new group.

By default, the name that you type is also entered as the pre-Windows 2000
name of the new group.
5. Under Group scope, click the option that you want, and then under Group type, click
the option that you want.
6. Click OK.
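The same operation can also be scripted with the dsadd command-line tool included with Windows Server 2003. The sketch below only assembles the command string, so it can be inspected anywhere; the distinguished name shown is a made-up example.

```python
# Builds a "dsadd group" command line equivalent to the GUI steps above.
# dsadd ships with Windows Server 2003; this sketch only assembles the
# command string, and the distinguished name below is hypothetical.

def dsadd_group(group_dn, scope="g", security=True):
    """scope: l = domain local, g = global, u = universal."""
    if scope not in ("l", "g", "u"):
        raise ValueError("scope must be l, g, or u")
    return 'dsadd group "{}" -secgrp {} -scope {}'.format(
        group_dn, "yes" if security else "no", scope)

cmd = dsadd_group("CN=Sales Admins,OU=Sales,DC=example,DC=com")
print(cmd)
# dsadd group "CN=Sales Admins,OU=Sales,DC=example,DC=com" -secgrp yes -scope g
```

Members can likewise be added from the command line with dsmod group GroupDN -addmbr MemberDN.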

Add a Member to a Group


To add a member to a group, follow these steps:

1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group where you want to add a member.
4. In the right pane, right-click the group where you want to add a member, and then click
Properties.
5. Click the Members tab, and then click Add.
6. In the Select Users, Contacts, or Computers dialog box, type the names of the users
and computers that you want to add, and then click OK.
7. Click OK.

Note: In addition to users and computers, membership in a particular group can include
contacts and other groups.

8.2.4 Convert a Group to another Group Type


To convert a group to another group type, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group.
4. In the right pane, right-click the group, and then click Properties.
5. Click the General tab, under Group type, click the group type that you want, and then
click OK.

8.2.5 Change Group Scope


To change group scope, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group.
4. In the right pane, right-click the group, and then click Properties.
5. Click the General tab, under Group scope, click the group scope that you want, and
then click OK.

8.2.6 Delete a Group


To delete a group, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group.
4. In the right pane, right-click the group that you want to delete, and then click Delete.
5. Click Yes when you are prompted to confirm the deletion.

8.2.7 Find a Group


To find a group, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, right-click DomainName, where DomainName is the name of your
domain, and then click Find.
3. Click the Users, Contacts, and Groups tab.

4. In the Name box, type the name of the group that you want to find, and then click Find
Now.

Note For more powerful search options, click the Advanced tab, and then specify the
search conditions that you want.

8.2.8 Find Groups where a User Is a Member


To find a group where a user is a member, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your domain, and
then click Users.

Or, click the folder that contains the user account.


3. In the right pane, right-click the user account, and then click Properties.
4. Click the Member Of tab.

Note The Member Of tab for a user displays a list of groups in the domain where the
user account is located. Active Directory does not display groups that are located
in trusted domains where the user is a member.

8.3 Modify Group Properties


To modify the properties of a group, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group.
4. In the right pane, right-click the group, and then click Properties.
5. Make the changes that you want, and then click OK.

8.3.1 Remove a Member from a Group


To remove a member from a group, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group.
4. In the right pane, right-click the group, and then click Properties.
5. Click the Members tab.
6. Click the members who you want to remove from the group, and then click Remove.
7. Click OK.

8.3.2 Rename a Group


To rename a group, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group.
4. In the right pane, right-click the group, and then click Rename.
5. Type the new name for the group, and then press ENTER.

8.4 Nesting groups



Using nesting, you can add a group as a member of another group. You nest groups to
consolidate member accounts and reduce replication traffic.
Nesting options depend on whether the domain functionality of your Windows Server 2003
domain is set to Windows 2000 native or Windows 2000 mixed.
Groups in domains set to the Windows 2000 native functional level or distribution groups in
domains set to the Windows 2000 mixed functional level can have the following members:
Groups with universal scope can have the following members: accounts, computer
accounts, other groups with universal scope, and groups with global scope from any domain.
Groups with global scope can have the following members: accounts from the same
domain and other groups with global scope from the same domain.
Groups with domain local scope can have the following members: accounts, groups with
universal scope, and groups with global scope, all from any domain. This group can also
have as members other groups with domain local scope from within the same domain.
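The membership rules above can be condensed into a small validity check. This is an illustrative sketch of the rules as stated for the Windows 2000 native functional level, not an Active Directory API; members are modeled as (kind, scope, domain) tuples.

```python
# Sketch of the nesting rules above for a Windows 2000 native-level domain.
# A member is (kind, scope, domain); kind is "account" or "group", and
# scope is None for accounts.

def can_contain(group_scope, group_domain, member):
    kind, member_scope, member_domain = member
    if group_scope == "universal":
        # accounts, universal groups, and global groups from any domain
        return kind == "account" or member_scope in ("universal", "global")
    if group_scope == "global":
        # only accounts and global groups from the same domain
        return (member_domain == group_domain and
                (kind == "account" or member_scope == "global"))
    if group_scope == "domain_local":
        # accounts, universal, and global groups from any domain;
        # domain local groups only from the same domain
        if kind == "account" or member_scope in ("universal", "global"):
            return True
        return member_scope == "domain_local" and member_domain == group_domain
    raise ValueError("unknown group scope")

print(can_contain("global", "corp", ("group", "global", "corp")))           # True
print(can_contain("global", "corp", ("group", "global", "emea")))           # False
print(can_contain("domain_local", "corp", ("group", "universal", "emea")))  # True
```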

8.5 Special identities


In addition to the groups in the Users and Builtin containers, servers running Windows
Server 2003 include several special identities. For convenience, these identities are generally
referred to as groups. These special groups do not have specific memberships that can be
modified, but they can represent different users at different times, depending on the
circumstances. The special groups are:
Anonymous Logon
Represents users and services that access a computer and its resources through the
network without using an account name, password, or domain name. On computers running
Windows NT and earlier, the Anonymous Logon group is a default member of the Everyone
group. On computers running a member of the Windows Server 2003 family, the Anonymous
Logon group is not a member of the Everyone group by default.
Everyone
Represents all current network users, including guests and users from other domains.
Whenever a user logs on to the network, the user is automatically added to the Everyone
group.
Network
Represents users currently accessing a given resource over the network (as opposed to
users who access a resource by logging on locally at the computer where the resource is
located). Whenever a user accesses a given resource over the network, the user is
automatically added to the Network group.
Interactive
Represents all users currently logged on to a particular computer and accessing a given
resource located on that computer (as opposed to users who access the resource over the
network). Whenever a user accesses a given resource on the computer to which they are
currently logged on, the user is automatically added to the Interactive group.
Although the special identities can be assigned rights and permissions to resources, the
memberships cannot be modified or viewed. Group scopes do not apply to special identities.
Users are automatically assigned to these special identities whenever they log on or access a
particular resource.
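Because these identities depend on how a resource is accessed rather than on stored membership, they can be sketched as a function of the access context. This is a simplification for illustration; in practice Windows builds these identities into the access token when the user logs on or connects.

```python
# Sketch of how the special identities above derive from the access context
# rather than stored membership. Simplified for illustration.

def special_identities(authenticated, via_network):
    ids = set()
    if not authenticated:
        return {"Anonymous Logon"}          # no account name or password used
    ids.add("Everyone")                     # all current network users
    # Network vs. Interactive depends on how the resource is reached:
    ids.add("Network" if via_network else "Interactive")
    return ids

print(sorted(special_identities(True, via_network=True)))
# ['Everyone', 'Network']
print(sorted(special_identities(True, via_network=False)))
# ['Everyone', 'Interactive']
```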

8.6 Authentication
The process for verifying that an entity or object is who or what it claims to be. Examples include
confirming the source and integrity of information, such as verifying a digital signature or verifying
the identity of a user or computer.
Authentication protocols overview
Authentication is a fundamental aspect of system security. It confirms the identity of any user
trying to log on to a domain or access network resources. Windows Server 2003 family
authentication enables single sign-on to all network resources. With single sign-on, a user can log


on to the domain once, using a single password or smart card, and authenticate to any computer
in the domain.

8.6.1 Authentication types


When attempting to authenticate a user, several industry-standard types of authentication may be
used, depending on a variety of factors. The following table lists the types of authentication that
the Windows Server 2003 family supports.
Kerberos V5 authentication: a protocol that is used with either a password or a smart
card for interactive logon. It is also the default method of network authentication for
services.
Secure Sockets Layer/Transport Layer Security (SSL/TLS) authentication: a protocol that
is used when a user attempts to access a secure Web server.
NTLM authentication: a protocol that is used when either the client or the server uses a
previous version of Windows.
Digest authentication: a protocol that transmits credentials across the network as an
MD5 hash, or message digest.
Passport authentication: a user-authentication service that offers single sign-in.

8.6.2 Introduction to authentication


A key feature of authentication in the Windows Server 2003 family is its support of single sign-on.
Single sign-on provides two main security benefits:
For a user, the use of a single password or smart card reduces confusion and improves
work efficiency.
For administrators, the amount of administrative support required for domain users is
reduced, because the administrator only needs to manage one account per user.
Authentication, including single sign-on, is implemented as a two-part process: interactive logon
and network authentication. Successful user authentication depends on both of these processes.

8.6.3 Interactive logon


Interactive logon confirms the user's identification to either a domain account or a local computer.
This process is different, depending on the type of user account:
With a domain account, a user logs on to the network with a password or smart card,
using single sign-on credentials stored in the Active Directory directory service. By logging on with a
domain account, an authorized user can access resources in the domain and any trusting
domains. If a password is used to log on to a domain account, Kerberos V5 is used for
authentication. If a smart card is used instead, Kerberos V5 authentication is used with
certificates.
With a local computer account, a user logs on to a local computer, using credentials
stored in Security Account Manager (SAM), which is the local security account database. Any
workstation or member server can store local user accounts, but those accounts can only be
used for access to that local computer.
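The two logon paths can be summarized as a small decision function: domain accounts authenticate with Kerberos V5 (with certificates when a smart card is used), while local accounts are validated against the local SAM database. A minimal sketch:

```python
# Sketch of the interactive logon paths described above.

def logon_path(account_type, credential):
    if account_type == "domain":
        if credential == "password":
            return "Kerberos V5"
        if credential == "smart card":
            return "Kerberos V5 with certificates"
        raise ValueError("unsupported credential")
    if account_type == "local":
        # local accounts are validated against the local security
        # account database and grant access only to that computer
        return "SAM database on the local computer"
    raise ValueError("unknown account type")

print(logon_path("domain", "password"))      # Kerberos V5
print(logon_path("domain", "smart card"))    # Kerberos V5 with certificates
print(logon_path("local", "password"))       # SAM database on the local computer
```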

8.6.4 Network authentication


Network authentication confirms the user's identification to any network service that the user is
attempting to access. To provide this type of authentication, the security system supports many
different authentication mechanisms, including Kerberos V5, Secure Socket Layer/Transport
Layer Security (SSL/TLS), and, for compatibility with Windows NT 4.0, NTLM.


Users who use a domain account do not see network authentication. Users who use a local
computer account must provide credentials (such as a user name and password) every time they
access a network resource. By using the domain account, the user has credentials that can be
used for single sign-on.

8.7 Smart card


A credit card-sized device that is used with an access code to enable certificate-based
authentication and single sign-on to the enterprise. Smart cards securely store certificates, public
and private keys, passwords, and other types of personal information. A smart card reader
attached to the computer reads the smart card.

8.7.1 Understanding smart cards


Logging on to a network with a smart card provides a strong form of authentication because it
uses cryptography-based identification and proof of possession when authenticating a user to a
domain.
For example, if a malicious person obtains a user's password, that person can assume the user's
identity on the network simply through use of the password. Many people choose passwords they
can remember easily, which makes passwords inherently weak and open to attack.
In the case of smart cards, that same malicious person would have to obtain both the user's
smart card and the personal identification number (PIN) to impersonate the user. This
combination is obviously more difficult to attack because an additional layer of information is
needed to impersonate a user. An additional benefit is that, after a small number of unsuccessful
PIN inputs occur consecutively, a smart card is locked, making a dictionary attack against a smart
card extremely difficult. (Note that a PIN does not have to be a series of numbers; it can also use
other alphanumeric characters.) Smart cards are also resistant to undetected attacks, because a
malicious person must physically obtain the card, and a user is likely to notice that the card is
missing.
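The lockout behavior can be sketched as a simple retry counter: after a few consecutive wrong PINs the card refuses all further attempts, which is what makes a dictionary attack impractical. The threshold of three attempts is an assumption; real cards vary.

```python
# Sketch of smart card PIN lockout: a few consecutive wrong PINs lock the
# card permanently, defeating dictionary attacks. The threshold (3) is an
# assumption for illustration; actual cards differ.

class SmartCard:
    def __init__(self, pin, max_attempts=3):
        self._pin = pin
        self._max = max_attempts
        self._remaining = max_attempts
        self.locked = False

    def verify(self, pin):
        if self.locked:
            return False                 # locked cards reject every PIN
        if pin == self._pin:
            self._remaining = self._max  # a correct PIN resets the counter
            return True
        self._remaining -= 1
        if self._remaining == 0:
            self.locked = True
        return False

card = SmartCard("24g7")        # a PIN may be alphanumeric, not just digits
print(card.verify("0000"))      # False
print(card.verify("1111"))      # False
print(card.verify("2222"))      # False, and the card is now locked
print(card.verify("24g7"))      # False: even the right PIN is rejected
print(card.locked)              # True
```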

8.7.2 Stored User Names and Passwords overview


When you log on to a computer running an operating system in the Microsoft Windows
Server 2003 family, you can supply a user name and password. This becomes your default
security context for connecting to other computers on networks and over the Internet. However,
this user name and password may not provide access to all of your resources. Stored User
Names and Passwords provides a way to store these credentials as a part of your profile.
There may be cases where you want to use different names and passwords for connecting to
different resources. Examples could include:
You want to log on to your computer with a standard account, but connect to certain
computers as an administrator for maintenance and troubleshooting reasons.
You work at home and want to use your work user name and password to connect to
work-related servers.
Your account is in a domain and you need access to computers in an untrusted domain.
You want to access Web sites with user names and passwords that are specific to each
of those sites.
For example, administrators may log on to the network using their standard user name and
password but need to connect to a remote server with administrative access to perform specific
functions. In this case, the user must be able to supply a different user name and password for
this connection. The user may also want to store this user name and password for reuse at a later
date. This is the function of Stored User Names and Passwords.
A user may also need to connect to secure Web servers using a specific user name and
password. Stored User Names and Passwords allows users to connect to different Web servers
using supplied user names and passwords and store them for future reuse. The user names and
passwords can be either specific to a unique Web server or they can be generic so that they will
be supplied when the user attempts to log on to a secured Web server.


Stored User Names and Passwords also stores saved information as part of a user's profile. This
means that these user names and passwords will travel with the user from computer to computer
anywhere on the network.

8.7.3 How Stored User Names and Passwords works


Stored User Names and Passwords obtains its information in two ways: explicit creation and
learning from the user. When users enter a user name and password for a target computer or
domain, that information is stored and used when the users attempt to log on to an appropriate
computer. If no stored information is available and users supply a user name and password, they
have the ability to save the information. If the user decides to save the information, Stored User
Names and Passwords then receives and stores it.
When Windows XP or a member of the Windows Server 2003 family attempts to connect to a
new computer on the network, it supplies the current user name and password to the computer. If
this is not sufficient to provide access, Stored User Names and Passwords will attempt to supply
the necessary user name and password. All stored user names and passwords will be examined,
from most specific to least specific as appropriate to the resource, and the connection will be
attempted with those user names and passwords in that order. Because user names and
passwords are read and applied in order from most to least specific, no more than one user name
and password can be stored for each individual target or domain.
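The most-specific-to-least-specific ordering can be sketched by ranking stored entries according to how precisely their target matches the requested resource. The exact-match, domain-wildcard, and generic ranking below is a simplification for illustration, not the actual credential matching algorithm.

```python
# Sketch of the most-specific-to-least-specific lookup described above.
# The store maps a target pattern to credentials; "*.domain" entries are
# domain-wide and "*" is generic. The ranking rule is a simplification.

def find_credentials(store, target):
    """Return stored credentials ordered from most to least specific."""
    matches = []
    for pattern, creds in store.items():
        if pattern == target:
            matches.append((2, pattern, creds))      # exact match
        elif pattern.startswith("*.") and target.endswith(pattern[1:]):
            matches.append((1, pattern, creds))      # domain-wide match
        elif pattern == "*":
            matches.append((0, pattern, creds))      # generic fallback
    matches.sort(reverse=True)                       # most specific first
    return [creds for _, _, creds in matches]

store = {
    "server1.corp.example.com": ("corp\\alice", "pw1"),
    "*.corp.example.com":       ("corp\\alice-admin", "pw2"),
    "*":                        ("alice", "pw3"),
}

print(find_credentials(store, "server1.corp.example.com")[0])
# ('corp\\alice', 'pw1')
print(find_credentials(store, "server2.corp.example.com")[0])
# ('corp\\alice-admin', 'pw2')
```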
Stored User Names and Passwords also allows the user to save the supplied user name and
password for reuse. This information is stored in a secure part of the user's profile and cannot be
accessed by other users. If the user is configured to use a single profile across the enterprise, the
stored user names and passwords will be retained wherever the user logs on to the network.

Section 9

9. Planning and Implementing Group Policy


9.1 What Is Core Group Policy
9.1.2 Change and Configuration Management
9.1.3 Change and Configuration Management Process
9.1.4 Core Group Policy Infrastructure
9.1.5 Group Policy and Active Directory
9.1.6 Sample Active Directory Organizational Structure
9.1.7 Group Policy Inheritance
9.1.8 Viewing and Reporting of Policy Settings
9.1.9 Delegating Administration of Group Policy
9.1.10 Core Group Policy Scenarios

9.2 Group Policy Dependencies


9.2.1 Core Group Policy Architecture
9.2.2 Group Policy Engine Architecture
9.2.3 RSoP Architecture
9.2.4 Planning Mode (Group Policy Modeling)
9.2.5 Core Group Policy Physical Structure

9.3 How a Group Policy Container is Named


9.3.1 Default Group Policy Container Permissions


9.3.2 GroupPolicyContainer Subcontainers


9.3.3 Group Policy Container-Related Attributes of Domain, Site, and OU
9.3.4 Managing Group Policy Links for a Site, Domain, or OU
9.3.5 How WMIPolicy Objects are Stored and Associated with Group Policy
9.3.6 Group Policy Template
9.3.7 Default Group Policy Template Permissions
9.3.8 Core Group Policy Processes and Interactions
9.3.9 Group Policy Processing Rules
9.3.10 Targeting GPOs

9.4 How Security Filtering is Processed


9.4.1 WMI Filtering
9.4.2 How WMI Filtering is Processed
9.4.3 WMI Filtering Scenarios
9.4.4 Application of Group Policy
9.4.5 Group Policy Loopback Support
9.4.6 How the Group Policy Engine Processes Client-Side Extensions
9.4.7 How Group Policy Processing History Is Maintained on the Client Computer

9.5 Group Policy Replication


9.5.1 Network Ports Used by Group Policy

9.6 What Is Resultant Set of Policy?


9.6.1 Resultant Set of Policy Snap-in Core Scenario
9.6.2 Similar Technologies for Viewing Resultant Set of Policy Data
9.6.3 Resultant Set of Policy Snap-in Dependencies
9.6.4 How Resultant Set of Policy Works
9.6.5 Resultant Set of Policy Snap-in Architecture
9.7 Group Policy Tools

9.1 What Is Core Group Policy


Group Policy is an infrastructure used to deliver and apply one or more desired configurations or
policy settings to a set of targeted users and computers within an Active Directory environment.
This infrastructure consists of a Group Policy engine and multiple client-side extensions (CSEs)
responsible for writing specific policy settings on target client computers.
Group Policy settings are contained in Group Policy objects (GPOs), which live in the domain and
can be linked to the following Active Directory containers: sites, domains, or organizational units
(OUs). The settings within GPOs are then evaluated by the affected targets, using the
hierarchical nature of Active Directory. Consequently, Group Policy is one of the top reasons to
deploy Active Directory.

Group Policy is one of a group of management technologies, collectively known as IntelliMirror,
that provide users with consistent access to their applications, application settings, roaming user
profiles, and user data, from any managed computer, even when they are disconnected from
the network. IntelliMirror is implemented through a set of Windows features, including Active
Directory, Group Policy, Group Policy-based Software Installation, Windows Installer, Folder
Redirection, Offline Folders, and Roaming User Profiles.


Core Group Policy, or the Group Policy engine, is the framework that handles common
functionalities across Administrative Template settings and other client-side extensions. The
following figure shows how the Group Policy engine interacts with other components as part of
processing policy settings. You use Group Policy Management Console (GPMC) to create, view,
and manage GPOs and use Group Policy Object Editor to set and configure the policy settings in
GPOs.
Group Policy Components

9.1.2 Change and Configuration Management


Administrators face increasingly complex challenges in managing their IT infrastructures. You
must deliver and maintain customized desktop configurations for many types of workers,
including mobile users, information workers, or others assigned to strictly defined tasks, such as
data entry. Changes to standard operating system images might be required on an ongoing
basis. Security settings and updates must be delivered efficiently to all the computers and
devices in the organization. New users need to be productive quickly without costly training. In
the event of a computer failure or disaster, service must be restored with a minimum of data loss
and interruption.
Specifically, an IT department must respond to various factors that require change in an IT
environment including:
New operating systems and applications.
Updates to operating systems and applications.
New hardware.
New business requirements that require configuration changes.
Security influences that require configuration changes.
New users.
Managing this change can be viewed as a continuous cycle, in which new business requirements
demand changes that must first be tested before they can be deployed as a standard
configuration. This cycle is shown in the following figure.


9.1.3 Change and Configuration Management Process

These management tasks, known collectively as Change and Configuration Management, enable


administrators to implement change quickly and affect large numbers of users and computers at
the lowest possible cost. You can use Group Policy to maintain standard operating environments
for specific groups of users, such as developers or information workers. As software changes and
policies change over time, Group Policy can be used to update the already-deployed standard
operating environment until the image can be updated. Group Policy can also enforce rules, if
necessary, by restricting the programs that can be run on company computers. For example, it
can prevent access to games or other programs unrelated to the workplace.
Group Policy is a key enabling technology that allows you to implement Change and
Configuration Management along with other technologies in IntelliMirror. For example, you can
deploy new operating systems with Remote Installation Services or other imaging technology.
You can deliver updates to computers throughout the network using Software Update Services
(SUS). Although you can deploy software using Group Policy, larger organizations might want to
use Microsoft Systems Management Server (SMS) to take advantage of the scalability that SMS
provides.
In summary, Group Policy is the delivery mechanism that allows you to implement change and
configuration for users and computers on the object level in Active Directory. Because you can
target Group Policy settings to individual objects throughout the Active Directory hierarchy, Group
Policy is the central enabling technology that allows organizations to effectively use Active
Directory as a management tool. In addition, the Group Policy Management Console simplifies
implementation and management of Group Policy.

9.1.4 Core Group Policy Infrastructure


Group Policy is an infrastructure with pluggable extensions. Extensions that exist on client
computers include Administrative Templates (also known as registry-based policy), Security
Settings, Software Installation, Folder Redirection, Scripts, and Wireless Network Policies.
The policy settings exist in a GPO. A GPO is a virtual object that lives in the domain; part of the
GPO is located in Active Directory and is called the Group Policy container. The other part of a
GPO is located in the Sysvol and is called the Group Policy template. When policy settings need
to be applied, the framework calls each extension and the extension then applies the necessary
settings.
Each Group Policy extension consists of two parts: a client-side extension that is called by the
Group Policy engine to apply policy, and a server-side extension that plugs into Group Policy
Object Editor to define and set the policy settings that need to be applied to client computers.
Although you can configure Local Group Policy objects (Local GPOs) on individual computers,
the full power of Group Policy can only be realized in a Windows 2000 or Windows Server 2003-
based network with Active Directory installed. In addition, some features and policy settings
require client computers running Windows XP.

9.1.5 Group Policy and Active Directory


Active Directory organizes objects by sites, domains, and OUs. Domains and OUs are organized
hierarchically, making the containers and the objects within them easy to manage. The settings
defined in a GPO can only be applied when the GPO is linked to one or more of these containers.


By linking GPOs to sites, domains, and OUs, you can implement Group Policy settings for as
broad or as narrow a portion of the organization as you want. GPO links affect users and
computers in the following ways:
A GPO linked to a site applies to all users and computers in the site.
A GPO linked to a domain applies directly to all users and computers in the domain and
by inheritance to all users and computers in child OUs. Note that policy is not inherited
across domains.
A GPO linked to an OU applies directly to all users and computers in the OU and by
inheritance to all users and computers in child OUs.
When a GPO is created, it is stored in the domain. When the GPO is linked to an Active Directory
container, such as an OU, the link is a component of that container, not a component of the GPO.
An example of how GPOs can be linked to sites, domains, and OUs is shown in the following
figure.

9.1.6 Sample Active Directory Organizational Structure

In this configuration, the Servers OUs have the following GPOs applied: A1, A2, A3, A4, A6. The
Marketing OUs have the following GPOs applied: A1, A2, A3, A5.
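The link-inheritance behavior described above can be sketched in a few lines of Python. The container names and GPO labels A1 through A6 below are hypothetical, mirroring the sample figure; real evaluation is performed by the Group Policy engine, not by script.

```python
# Sketch of GPO aggregation along the Active Directory hierarchy.
# Each container lists the GPOs linked directly to it (hypothetical names).
links = {
    "Site":         ["A1"],
    "Domain":       ["A2", "A3"],
    "OU=Servers":   ["A4", "A6"],
    "OU=Marketing": ["A5"],
}

def applied_gpos(path):
    """Collect GPOs from every container on the path, in processing
    order: site first, then domain, then each nested OU."""
    result = []
    for container in path:
        result.extend(links.get(container, []))
    return result

print(applied_gpos(["Site", "Domain", "OU=Servers"]))
# A1, A2, A3, A4, A6 - matching the Servers OU example above
print(applied_gpos(["Site", "Domain", "OU=Marketing"]))
# A1, A2, A3, A5 - matching the Marketing OU example above
```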

Loopback processing with merge or replace


Loopback is an advanced Group Policy setting that is useful on computers in certain closely
managed environments, such as servers, kiosks, laboratories, classrooms, and reception areas.
Setting loopback causes the User Configuration settings in GPOs that apply to the computer to
be applied to every user logging on to that computer, instead of, or in addition to, the User
Configuration settings of the user. This allows you to ensure that a consistent set of policies is
applied to any user logging on to a particular computer, regardless of their location in Active
Directory. Loopback is controlled by the setting, User Group Policy loopback processing mode,
which is located in Computer Configuration\Administrative Templates\System\Group Policy.
Loopback only works when both the user account and the computer account are in a
Windows 2000 or later domain. Loopback does not work for computers joined to a workgroup.
Loopback is not enabled if the computer or user is not in an Active Directory domain.
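The difference between the two loopback modes can be illustrated with a small Python sketch. The GPO names are hypothetical; the point is the ordering, since later entries in the processing order have higher precedence.

```python
# Sketch of loopback processing order (hypothetical GPO names).
# "Replace": only the User Configuration settings from the computer's
# GPOs apply.  "Merge": the user's own GPOs apply first, then the
# computer's GPOs, so the computer's settings win on conflict.

def loopback(mode, user_gpos, computer_gpos):
    if mode == "replace":
        return list(computer_gpos)
    if mode == "merge":
        # later entries have higher precedence ("last writer wins")
        return list(user_gpos) + list(computer_gpos)
    return list(user_gpos)          # loopback not enabled

user_gpos = ["UserOU-GPO"]          # GPOs scoped to the user object
computer_gpos = ["KioskOU-GPO"]     # GPOs scoped to the computer object
print(loopback("replace", user_gpos, computer_gpos))
print(loopback("merge", user_gpos, computer_gpos))
```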

Filtering the Scope of the Group Policy Object


Group Policy is a powerful tool for managing the Windows Server 2003 environment. The value of
Group Policy can only be realized through properly applying the GPOs to the Active Directory
containers you want to manage. Determining which users and computers will receive the settings
in a GPO is referred to as "scoping the GPO." Scoping a GPO is based on three factors:
The site(s), domain(s), or organizational unit(s) where the GPO is linked.


The security filtering on the GPO.


The WMI filter on the GPO.
You can use the site, domain, and OU links from a GPO as the primary targeting principle for
defining which computers and users should receive a GPO. You can then use security filtering
and WMI filtering to further reduce the set of computers and users to which the GPO will apply.
Scoping or targeting of a GPO allows you to apply or deny an entire GPO; you cannot choose to
filter settings within a GPO.
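The three scoping factors can be sketched as a single predicate in Python. All names here are hypothetical, and the WMI filter is modeled as an ordinary function rather than a real WQL query; a GPO applies only when all three checks pass.

```python
# Sketch of the three scoping factors: (1) the GPO is linked to a
# container in the target's path, (2) security filtering admits the
# target, and (3) the WMI filter, if any, evaluates to true.

def in_scope(gpo, target):
    linked = any(c in target["containers"] for c in gpo["linked_to"])
    filtered = gpo["security_groups"] & target["groups"]
    wmi_ok = gpo["wmi_filter"](target) if gpo["wmi_filter"] else True
    return bool(linked and filtered and wmi_ok)

gpo = {
    "linked_to": {"OU=Laptops"},
    "security_groups": {"Authenticated Users"},
    # hypothetical WMI-style condition: only machines with >1 GB RAM
    "wmi_filter": lambda t: t["ram_mb"] > 1024,
}
laptop = {"containers": {"Domain", "OU=Laptops"},
          "groups": {"Authenticated Users"}, "ram_mb": 2048}
print(in_scope(gpo, laptop))   # True - all three factors admit it
```

Note that, as the text states, the whole GPO is applied or denied; the predicate never admits some settings of a GPO while rejecting others.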

9.1.7 Group Policy Inheritance


In addition to the ability to filter the scope of GPOs, you can change the way GPOs are applied by
managing Group Policy inheritance. In most environments, the actual settings for a given user
and computer are the result of the combination of GPOs that are applied at a site, domain, or OU.
When multiple GPOs apply to these users and computers, the settings in the GPOs are
aggregated. The settings deployed by GPOs linked to higher containers (parent containers) in
Active Directory are inherited by default to child containers and combine with any settings
deployed in GPOs linked to child containers. If multiple GPOs attempt to set a setting to
conflicting values, the GPO with the highest precedence sets the setting. GPO processing is
based on a "last writer wins" model, and GPOs that are processed later have precedence over
GPOs that are processed earlier.
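The "last writer wins" aggregation can be sketched as a dictionary merge in Python, with hypothetical GPOs and settings; GPOs are processed in order of increasing precedence, so a setting's final value comes from the last GPO that defines it.

```python
# Sketch of "last writer wins" conflict resolution (hypothetical GPOs).
processing_order = [            # lowest precedence first
    ("Domain-GPO", {"wallpaper": "corp.bmp", "screensaver": "off"}),
    ("OU-GPO",     {"wallpaper": "team.bmp"}),
]

def resolve(gpos):
    final = {}
    for name, settings in gpos:
        final.update(settings)  # later GPOs overwrite earlier ones
    return final

print(resolve(processing_order))
# wallpaper comes from OU-GPO (higher precedence);
# screensaver comes from Domain-GPO, which is unopposed
```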

9.1.8 Viewing and Reporting of Policy Settings


In order to properly implement, troubleshoot, and plan Group Policy, administrators need to be
able to quickly view the settings in a GPO. When multiple GPOs apply to a given user or
computer, they can contain conflicting policy settings. For most policy settings, the final value of
the policy setting is set only by the highest-precedence GPO that contains that setting. Resultant
Set of Policy (RSoP) helps you understand and identify the final set of policy that is applied as
well as settings that did not apply as a result of policy inheritance.
Specifically, Resultant Set of Policy helps you determine:
The final value of the setting that is applied as a result of all the GPOs.
The final GPO that set the value of this setting (also known as the winning GPO).
Precedence details that show any other GPOs that attempted to set this setting and the
value that each GPO attempted to set for that policy setting.
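The three answers listed above can be illustrated for one hypothetical setting contested by two hypothetical GPOs; this is only a sketch of the kind of report RSoP produces, not its actual output format.

```python
# Sketch of an RSoP-style report for one contested setting
# (GPO names, setting name, and values are all hypothetical).
attempts = [                    # lowest precedence first
    ("Default Domain Policy", "900"),
    ("Servers OU Policy",     "300"),
]
setting = "MaxIdleTime"

winning_gpo, final_value = attempts[-1]   # highest precedence wins
print(f"{setting}: final value {final_value} (winning GPO: {winning_gpo})")
for gpo, value in attempts[:-1]:
    print(f"  also attempted by {gpo} with value {value}")
```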
Group Policy Management Console, available as a separate download from the Microsoft Web
site, addresses some common reporting requirements including the ability to document all the
settings in a GPO to a file for printing or viewing. Users can either print the reports, or save them
to a file as either HTML or XML.

9.1.9 Delegating Administration of Group Policy


Organizations need to be able to delegate administration of Group Policy to other administrators
who can take responsibility for a given OU, domain, or other container. Active Directory is
designed to allow you to delegate control of portions of the directory service in managing aspects
of Group Policy. The following areas can be delegated:
GPO delegation. This includes permission to create GPOs in a domain or permission to
edit an existing GPO. Note that having permission to edit a GPO does not include any
delegated rights on the GPO links.
Link delegation. This includes permission to add, delete, or change links to GPOs. Note
that having link delegation does not include any delegated rights on the GPO itself.
RSOP delegation. This includes permission to run RSoP (in either planning or logging
mode) on objects under a container.
WMI filter delegation. This includes permission to create WMI filters or permission to
edit an existing filter.


In GPMC, delegation is simplified because it manages the various Access Control Entries (ACEs)
required for a task as a single bundle of permissions for the task. You can also use the Access
Control List (ACL) editor to view or manage these permissions manually.
The underlying mechanism for achieving delegation is the application of the appropriate DACLs
to GPOs and other objects in Active Directory. This mechanism is identical to using security
groups to filter the application of GPOs to various users. You can also specify Group Policy to
control who can use MMC snap-ins. For example, you can use Group Policy to manage the rights
to create, configure, and use MMC consoles, and to control access to individual snap-ins.

9.1.10 Core Group Policy Scenarios


The Group Policy engine is designed to apply policy configurations to individual computers and
users through Group Policy objects. Settings within a GPO are configured by individual client-side
extensions and are applied to individual computers and users by the client-side extension. This
section summarizes the core scenarios for the Group Policy engine.
Scheduling of Group Policy application. Group Policy is applied each time the computer
starts or a user logs on, a process called foreground policy application. Group Policy is also
applied in the background at regular refresh intervals. In addition, it can be forced to apply
through command-line tools such as Gpupdate.
Obtaining GPOs from the relevant configuration locations. The Group Policy engine
obtains GPOs from the appropriate site, domain, and OU containers, known collectively as
Scope of Management (SOM).
Handling special cases affecting all CSEs. The Group Policy engine implements
additional changes specified by the administrator such as changing the link order in which
GPOs should be applied. In addition, the Group Policy engine handles any loopback
processing that has been set by an administrator. Loopback processing, typically used for
public workstations or kiosks, specifies that the user settings defined in the computer's GPOs
replace or are merged with the user settings normally applied to the user.
Filtering and ordering of GPOs. The Group Policy engine checks for any conditions set
by the administrator to filter GPOs or specify the order in which GPOs should be applied.
Configuring CSEs. CSEs can be configured to run only in some conditions, as specified
in the registry.
Maintaining version numbers and histories for all CSEs. A history list is maintained
for each CSE in the registry, showing when the CSE last applied policy. A status is also
maintained to indicate whether the CSE applied policy successfully the last time it ran.
Calling into the CSE. When the Group Policy engine has determined that a particular
CSE needs to be executed, it loads the dynamic-link library (DLL) associated with the CSE
and calls its entry point.
Notifying various components of any changes made by Group Policy. After all the
extensions have been called, the Group Policy engine updates the registry information that
determines the next foreground policy application, and schedules the next background
refresh.
Processing RSoP data. The Group Policy engine periodically refreshes the RSoP data,
ensuring that actual policy application is updated and stored in the WMI namespace on each
computer.
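The per-CSE history check above can be sketched in Python. The CSE name, GPO list format, and history record are hypothetical simplifications; the idea is that a CSE is called again only when its GPO list or versions have changed, or when its previous application failed.

```python
# Sketch of the per-CSE history check (hypothetical record format).
history = {"Security": {"gpos": [("GPO-A", 3)], "succeeded": True}}

def should_invoke(cse, current_gpos):
    record = history.get(cse)
    if record is None or not record["succeeded"]:
        return True                         # never ran, or failed last time
    return record["gpos"] != current_gpos   # GPO list/versions changed

print(should_invoke("Security", [("GPO-A", 3)]))   # False - nothing changed
print(should_invoke("Security", [("GPO-A", 4)]))   # True - version bumped
print(should_invoke("Scripts", []))                # True - no history yet
```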

9.2 Group Policy Dependencies


Group Policy has several key dependencies. Domain-based Group Policy requires an Active
Directory environment with DNS properly configured.

Active Directory
Active Directory is the Windows 2000 Server and Windows Server 2003 directory service that
stores information about all objects on the computer network and makes this information easy for


administrators and users to find and apply. With Active Directory, users can gain access to
resources anywhere on the network with a single logon. Similarly, administrators have a single
point of administration for all objects on the network, which can be viewed in a hierarchical
structure. In a network environment, Group Policy depends on Active Directory as the targeting
framework that allows you to link GPOs to specific Active Directory containers such as sites,
domains, or OUs.
In a stand-alone environment without Active Directory, you can use Local Group Policy objects to
configure settings on individual computers.

Domain Name System (DNS)


DNS is a hierarchical, distributed database that contains mappings of DNS domain names to
various types of data, such as IP addresses. DNS enables the location of computers and services
by user-friendly names, and it also enables the discovery of other information stored in the
database.
Group Policy application requires clients to access specified servers, including domain controllers
and other servers such as share points and install points. Group Policy management also
requires access to domain controllers. DNS is used to locate and identify these servers. In
Windows 2000 Server and later, Active Directory requires DNS support. If the network is
functioning, but clients or consoles such as the Group Policy Object Editor or GPMC are unable
to locate the servers, there might be a problem with your network's DNS system.

Replication
Group Policy depends on other technologies in order to properly replicate between domain
controllers in a network environment. A GPO is a virtual object stored in both Active Directory and
the Sysvol of a domain controller. Property settings, stored in the Group Policy container, are
replicated through Active Directory replication. Replication automatically copies the changes that
originate on a writable directory partition replica to all other domain controllers that hold the same
directory partition replica. More specifically, a destination domain controller pulls these changes
from the source domain controller. Data settings, stored in the Sysvol as the Group Policy
template, are replicated through the File Replication Service (FRS), which provides multi-master
file replication for designated directory trees between designated servers running Windows
Server 2003. The Group Policy container stores GPO properties, including information about
version, GPO status, and a list of components that have settings in the GPO. The Group Policy
template is a directory structure within the file system that stores Administrative Template-based
policy settings, security settings, script files, and information regarding applications that are
available for software installation. The Group Policy template is located in Sysvol in the \Policies
sub-directory for its domain. GPOs are identified by their globally unique identifiers (GUIDs) and
stored at the domain level. The settings from a GPO are only applied when the Group Policy
container and Group Policy template are synchronized.

DFS publishing
The Sysvol folder is shared on each domain controller and is accessible through the UNC path
\\dcname.domainname\sysvol.
The Sysvol is also published as a domain-based Distributed File System (DFS) share. This allows
clients to access the Sysvol by using the generic path \\domainname\sysvol. A request for a DFS
referral for \\domainname\sysvol will always return a replica in the same Active Directory site as
the client if one is available. This is the mechanism that the Group Policy client-side extensions
use to retrieve a local copy of the Group Policy template information.
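The two Sysvol paths described above, and the Policies subdirectory where a Group Policy template lives, can be illustrated with simple string building. The domain name here is hypothetical; the GUID shown is the well-known Default Domain Policy GUID, used only as an example.

```python
# Sketch of Sysvol path construction (hypothetical domain name).
domain = "example.com"
gpo_guid = "{31B2F340-016D-11D2-945F-00C04FB984F9}"  # Default Domain Policy

dc_path  = rf"\\dc1.{domain}\sysvol"   # a specific domain controller's share
dfs_path = rf"\\{domain}\sysvol"       # DFS path; referral prefers same-site replica
gpt_path = rf"{dfs_path}\{domain}\Policies\{gpo_guid}"
print(gpt_path)
```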
How Core Group Policy Works

Core Group Policy, or the Group Policy engine, is the infrastructure that processes Group Policy
components including server-side snap-in extensions and client-side extensions. You use
administrative tools such as Group Policy Object Editor and Group Policy Management Console
to configure and manage policy settings.

At a minimum, Group Policy requires Windows 2000 Server with Active Directory installed and
Windows 2000 clients. Fully implementing Group Policy to take advantage of all available
functionality and the latest policy settings depends on a number of factors including:
Windows Server 2003 with Active Directory installed and with DNS properly configured.
Windows XP client computers.
Group Policy Management Console (GPMC) for administration.

9.2.1 Core Group Policy Architecture


The Group Policy engine is a framework that handles client-side extension (CSE) processing and
interacts with other elements of Group Policy, as shown in the following figure:
Core Group Policy Architecture

The following table describes the components that interact with the Group Policy engine.
Core Group Policy Components
Server (domain controller): In an Active Directory forest, the domain controller is a server that
contains a writable copy of the Active Directory database, participates in Active Directory
replication, and controls access to network resources.

Active Directory: Active Directory, the Windows-based directory service, stores information
about objects on a network and makes this information available to users and network
administrators. Administrators link Group Policy objects (GPOs) to Active Directory containers
such as sites, domains, and organizational units (OUs) that include user and computer objects.
In this way, policy settings can be targeted to users and computers throughout the organization.

Sysvol: The Sysvol is a set of folders containing important domain information that is stored in
the file system rather than in the directory. The Sysvol folder is, by default, stored in a subfolder
of the systemroot folder (%systemroot%\sysvol\sysvol) and is automatically created when a
server is promoted to a domain controller. The Sysvol contains the largest part of a GPO: the
Group Policy template, which includes Administrative Template-based policy settings, security
settings, script files, and information regarding applications that are available for software
installation. It is replicated through the File Replication Service (FRS) between all domain
controllers in a domain.

Group Policy object (GPO): A GPO is a collection of Group Policy settings, stored at the domain
level as a virtual object consisting of a Group Policy container and a Group Policy template. The
Group Policy container, which contains information about the properties of a GPO, is stored in
Active Directory on each domain controller in the domain. The Group Policy template contains
the data in a GPO and is stored in the Sysvol in the \Policies subdirectory. GPOs affect users
and computers that are contained in sites, domains, and OUs.

Local Group Policy object: The Local Group Policy object (Local GPO) is stored on each
individual computer, in the hidden Windows\System32\GroupPolicy directory. Each computer
running Windows 2000, Windows XP Professional, Windows XP 64-Bit Edition, Windows XP
Media Center Edition, or Windows Server 2003 has exactly one Local GPO, regardless of
whether the computer is part of an Active Directory environment. Local GPOs are always
processed, but are the least influential GPOs in an Active Directory environment, because
Active Directory-based GPOs have precedence.

Winlogon: A component of the Windows operating system that provides interactive logon
support, Winlogon is the service in which the Group Policy engine runs.

Group Policy engine: The Group Policy engine is the framework that handles common
functionalities across registry-based settings and client-side extensions (CSEs).

Client-side extensions: CSEs run within dynamic-link libraries (DLLs) and are responsible for
implementing Group Policy at the client computer. The CSEs are loaded on an as-needed basis
when a client computer is processing policy.

File system: The NTFS file system on client computers.

Registry: A database repository for information about a computer's configuration, the registry
contains information that Windows continually references during operation, such as:
Profiles for each user.
The programs installed on the computer and the types of documents that each can create.
Property settings for folders and program icons.
The hardware on the system.
Which ports are being used.
The registry is organized hierarchically as a tree, and it is made up of keys and their subkeys,
hives, and entries. Registry settings can be controlled through Group Policy, specifically through
Administrative Templates (.adm files). Windows Server 2003 comes with a predefined set of
Administrative Template files, implemented as text files (with an .adm extension), that define the
registry settings that can be configured in a GPO. These .adm files are stored in two locations by
default: inside GPOs in the Sysvol folder and in the Windows\inf directory on the local computer.

Event log: The Event log is a service, located in Event Viewer, that records events in the
system, security, and application logs.

Help and Support Center: The Help and Support Center is a component on each computer that
provides HTML reports on the policy settings currently in effect on the computer.

Resultant Set of Policy (RSoP) infrastructure: All Group Policy processing information is
collected and stored in a Common Information Model Object Management (CIMOM) database
on the local computer. This information, such as the list, content, and logging of processing
details for each GPO, can then be accessed by tools using Windows Management
Instrumentation (WMI).

WMI: WMI is a management infrastructure that supports monitoring and controlling of system
resources through a common set of interfaces and provides a logically organized, consistent
model of Windows operation, configuration, and status. WMI makes data about a target
computer available for administrative use. Such data can include hardware and software
inventory, settings, and configuration information. For example, WMI exposes hardware
configuration data such as CPU, memory, disk space, and manufacturer, as well as software
configuration data from the registry, drivers, file system, Active Directory, the Windows Installer
service, networking configuration, and application data. WMI filtering in Windows Server 2003
allows you to create queries based on this data. These queries (WMI filters) determine which
users and computers receive all of the policy configured in the GPO where you create the filter.

9.2.2 Group Policy Engine Architecture


The primary purpose of Group Policy is to apply policy settings to computers and users in an
Active Directory domain. GPOs can be targeted through Active Directory containers, such as
sites, domains, and OUs, containing user or computer objects. The Group Policy engine is in
userenv.dll, which runs inside the Winlogon service. This is shown in the following figure.
Group Policy Engine Architecture and CSE Components

Group Policy Engine Architecture and CSE Components


Group Policy engine: The framework that handles functionalities across CSEs; the Group Policy
engine runs inside userenv.dll.

Winlogon.exe: A component of the Windows operating system that provides interactive logon
support, Winlogon is the service in which the Group Policy engine runs. Winlogon is the only
system component that actively interacts with the Group Policy engine.

Userenv.dll: Runs inside Winlogon and contains the Group Policy engine and the Administrative
Templates extension.

Gptext.dll: Used to configure Scripts, IP Security, QoS Packet Scheduler, and Wireless settings.

fdeploy.dll: Used to configure Folder Redirection settings.

Scecli.dll: Used to configure security settings.

iedkcs32.dll: Used to manage various Internet Explorer settings.

appmgmts.dll: Used to configure software installation settings.

dskquota.dll: Used for setting disk quotas.

9.2.3 RSoP Architecture


Resultant Set of Policy (RSoP) uses WMI to determine how policy settings are applied to users
and computers. RSoP has two modes: logging mode and planning mode. Logging mode
determines the resultant effect of policy settings that have been applied to an existing user and
computer based on a site, domain, and OU. Logging mode is available on Windows XP and later
operating systems. Planning mode simulates the resultant effect of policy settings that are applied
to a user and computer. Planning mode requires a Windows Server 2003 domain controller.
For RSoP functionality, using GPMC is recommended; its RSoP features are integrated with
the rest of the console. In GPMC, RSoP logging mode is referred to as Group Policy
Results; planning mode is referred to as Group Policy Modeling.
The following figure shows the high-level architecture of RSoP for Group Policy Results and
Group Policy Modeling:
RSoP Architecture

Windows Server 2003 collects Group Policy processing information and stores it in a WMI
database on the local computer. (The WMI database is also known as the CIMOM database.) This
information, such as the list, content, and logging of processing details for each GPO, can then be
accessed by tools using WMI.


In Group Policy Results, RSoP queries the WMI database on the target computer, receives
information about the policies and displays it in GPMC. In Group Policy Modeling, RSoP
simulates the application of policy using the Resultant Set of Policy Provider on a domain
controller. Resultant Set of Policy Provider simulates the application of GPOs and passes them to
virtual CSEs on the domain controller. The results of this simulation are stored to a local WMI
database on the domain controller before the information is passed back and displayed in GPMC
(or the RSoP snap-in). This is explained in greater detail in the following section.

WMI and CIMOM


WMI provides a common scriptable interface to retrieve, and in some cases set, a wide variety of
system and application information. WMI is implemented through the winmgmt.exe service. The
WMI information hierarchy is modeled as a hierarchy of objects following the Common
Information Model (CIM) standards.
This information hierarchy is extensible, which allows different applications and services to
expose configuration information by supplying a WMI provider. WMI providers are the interface
between the WMI service and an application's data in its native format.
WMI data can be dynamic (generated on demand when required by a management application)
or static. Static data is stored in the CIMOM database. This data can be accessed at any time
(security controls permitting) by management applications. RSoP uses WMI and the CIMOM to
write, store and query RSoP settings information.
Resultant Set of Policy Provider
RSoP in planning mode has the special requirement that no settings information is actually
applied to the client system during the RSoP data generation. In fact, in many planning scenarios,
there might not be a computer or user object to apply the settings to. To meet this requirement,
the RSoP provider runs on domain controllers and performs some of the functions of a client
system for GPO application.
The RSoP provider is actually a WMI provider that performs the role of Winlogon in invoking
CSEs to log RSoP information to the CIM repository. It takes parameters supplied by the RSoP
wizard to select GPOs from the directory. It uses the following parameters:
- Scope of management (SOM). The combination of user object OU, computer object OU, and (optionally) site. Either the user or computer SOM can be omitted, but one must be specified. It can be given as an existing computer, user, or both, or as existing OUs for the computer, user, or both.
- Security group memberships for the computer and user objects. By default, these are the existing security group memberships if actual user or computer objects are chosen as the SOM. Security filtering can be ignored entirely, or a new or modified set of groups can be chosen.
- WMI filter. New or modified WMI filters can be applied to the GPOs during the RSoP generation.
The RSoP provider is a service on a domain controller and runs in the system context. This
design has two ramifications. First, the service manually evaluates the security descriptor of each
GPO in the SOM against the user object and computer object security identifiers (SIDs) and their
security group memberships; if Active Directory is locked down, some security group membership
analyses might fail until the user is granted the correct access.
Second, because the RSoP provider has Domain Admin-equivalent access rights, some
control needs to be placed on who can generate RSoP information in the directory. This control is
achieved by an extended access right, GenerateRSoPData. To execute an RSoP session for a
particular container, the user must have the Generate RSoP (planning) access right for that
container.

9.2.4 Planning Mode (Group Policy Modeling)


In planning mode, RSoP provider performs the RSoP data generation in the following steps:
1. The RSoP tool gets the user, computer, and domain controller names from the wizard.


2. RSoP connects to the WMI database on the specified domain controller.


3. The WMI service in turn calls the RSoP provider (an out-of-process WMI provider) on the
same computer to create the RSoP data.
4. The RSoP provider gets the list of GPOs for the user and computer from Active Directory.
5. The RSoP provider populates the WMI database with instances of the GPOs for the user and
computer.
6. The list of registered CSEs is retrieved. Each of the policy extensions is dynamically
loaded in succession by the RSoP provider and the list of computer and user GPOs is
passed to each extension. Each policy extension takes the list of GPOs and instead of
applying the policy, it populates the WMI database with instances of policy objects that
describe the effective policy.
7. After the WMI database population is complete, the RSoP provider returns the namespace
under which the RSoP data was created to the WMI service, which in turn returns the
namespace to the RSoP tool.
8. The RSoP tool connects to the namespace on the WMI database on the domain
controller. RSoP navigates or iterates through the populated data in the WMI database using
WMI enumeration APIs to retrieve the policy data.
9. The RSoP data is displayed to the user. When the user is done looking at the RSoP data,
the RSoP tool calls RsopDeleteSession on the WMI service to delete the previously created
data in the WMI database.

Logging Mode (Group Policy Results)


In logging mode, the RSoP data generation is controlled by Winlogon and is part of the normal
GPO processing operation:
- The Winlogon process retrieves the list of GPOs from Active Directory using the security context of the user or computer.
- The Winlogon process populates the WMI database with instances of the GPOs.
- The list of registered policy extensions is retrieved. Each of the policy extensions is dynamically loaded in succession by the Winlogon process, and the list of GPOs is passed to each policy extension. Each extension takes the list of GPOs and, in addition to applying the policy, populates the WMI database with the policies set and the GPOs that applied them.

9.2.5 Core Group Policy Physical Structure


Understanding where GPOs are stored and how they are structured can help you troubleshoot
problems you might encounter when you implement Group Policy. Although GPOs can be linked
to sites, domains, and OUs, they are stored only in the domain. As explained earlier, a GPO is a
virtual object that stores its data in two locations: a Group Policy container and a Group Policy
template.
Group Policy Container
A Group Policy container is a location in Active Directory that stores GPOs and their properties.
The properties of a GPO include both computer and user Group Policy information. The Policies
container is the default location of GPOs. The path to the Policies container, in Lightweight
Directory Access Protocol (LDAP) syntax, is
CN=Policies,CN=System,DC=Domain_Name,DC=Domain_Name, where the Domain_Name
values specify a fully qualified domain name (FQDN).
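The LDAP path follows directly from the domain's FQDN, one DC= component per DNS label. A minimal sketch in Python (the helper name is ours for illustration, not a Windows API):

```python
def policies_container_dn(fqdn: str) -> str:
    """Build the LDAP distinguished name of the Policies container
    from a fully qualified domain name such as 'contoso.com'."""
    dc_parts = ",".join(f"DC={label}" for label in fqdn.split("."))
    return f"CN=Policies,CN=System,{dc_parts}"

print(policies_container_dn("contoso.com"))
# CN=Policies,CN=System,DC=contoso,DC=com
```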
The Active Directory store contains the Group Policy container of each GPO in the domain. The
Group Policy container contains attributes that are used to deploy GPOs to the domain, to OUs,
and to sites within the domain. The Group Policy container also contains a link to the file system
component of a GPO: the Group Policy template. The information in a Group Policy
container includes:
- Version information. Ensures that the information is synchronized with the Group Policy template information.


- Status information. Indicates whether the user or computer portion of the GPO is enabled or disabled.
- List of components. Lists the extensions that have settings in the GPO. These attributes are gPCMachineExtensionNames and gPCUserExtensionNames.
- File system path. Specifies the Universal Naming Convention (UNC) path to the Sysvol folder. This attribute is gPCFileSysPath.
- Functionality version. Gives the version of the tool that created the GPO. Currently, this is version 2. This attribute is gPCFunctionalityVersion.
- WMI filter. Contains the distinguished name of the WMI filter. This attribute is gPCWQLFilter.

System Container
Each Windows Server 2003 domain contains a System container. The System container stores
per-domain configuration settings, including GPO property settings, Group Policy container
settings, IP Security settings, and WMI policy settings. IP Security and WMI policy are deployed
to client computers through the GPO infrastructure.
The following subcontainers of the System container hold GPO-related settings:
- Policies. Contains groupPolicyContainer objects listed by their unique names. Each groupPolicyContainer object holds subcontainers for selected computer and user policy settings.
- Domain, OUs, and Sites. These objects contain two GPO property settings, gPLink and gPOptions.
- Default Domain Policy. Contains the AppCategories container, which is part of the Group Policy Software installation extension.
- IP Security. Contains IP Security policy settings that are linked to a GPO. The linked IP Security policy is applied to the recipients (user or computer) of the GPO.
- WMIPolicy. Contains WMI filters that can be applied to GPOs. WMI filters contain one or more WMI Query Language (WQL) statements.

System\Policies Container
The System container is a top level container found in each domain naming context. It is normally
hidden from view in the Active Directory Users and Computers snap-in but can be made visible
by selecting "Advanced Features" from the snap-in View menu inside MMC. (Objects appear
hidden in the Active Directory Users and Computers snap-in when they have the property
showInAdvancedViewOnly = TRUE.) Group Policy information is stored in the Policies
subcontainer of this container. Each GPO is identified by a GroupPolicyContainer object stored
within the Policies container.
The Group Policy container is located in the Domain_Name/System/Policies container. Each
Group Policy container is given a common name (CN), and this name is also assigned as the
container name. For example, the name attribute of a Group Policy container might be
{923B9E2F-9757-4DCF-B88A-1136720B5AF2}, which is also assigned to the Group Policy
container's CN attribute.
The default GPOs are assigned the same Group Policy container CN on all domains. All other
GPOs are assigned a unique CN. The default GPOs and their Group Policy container common
names are:
Default Domain Policy: {31B2F340-016D-11D2-945F-00C04FB984F9}.
Default Domain Controllers Policy: {6AC1786C-016F-11D2-945F-00C04fB984F9}.
Knowing the common names of the default GPOs will help you distinguish them from non-default
GPOs.
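Because the two default CNs are fixed across all domains, distinguishing default from non-default GPOs can be reduced to a lookup. A small sketch using the GUIDs from the text (the function name is illustrative):

```python
# The well-known Group Policy container CNs of the two default GPOs,
# taken from the text above; they are identical in every AD domain.
DEFAULT_GPO_NAMES = {
    "{31B2F340-016D-11D2-945F-00C04FB984F9}": "Default Domain Policy",
    "{6AC1786C-016F-11D2-945F-00C04FB984F9}": "Default Domain Controllers Policy",
}

def classify_gpo(cn: str) -> str:
    """Return the default GPO's friendly name, or 'non-default'.
    GUID comparison is case-insensitive, so normalize first."""
    return DEFAULT_GPO_NAMES.get(cn.upper(), "non-default")

print(classify_gpo("{31B2F340-016D-11D2-945F-00C04FB984F9}"))
# Default Domain Policy
```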


9.3 How a Group Policy Container is Named


Group Policy containers are named automatically when they are created. The CN of each Group
Policy container is a GUID (Globally Unique Identifier). This is distinct from and unrelated to the
Object GUID given to each Active Directory object. The CN is the name of the Group Policy
container used to ensure uniqueness of Group Policy container names within the Policies
container. There is no requirement for these GUIDs to be unique between domains (the Default
Domain Policy and the Default Domain Controllers Policy GPOs each have identical GUIDs in all
Active Directory installations). However, an Object GUID is always unique across all installations
of the Active Directory store.
The following table shows the default permissions on a Group Policy container:

9.3.1 Default Group Policy Container Permissions


Trustee               Access
Authenticated Users   Read, Apply Group Policy
Domain Admins         Read, Write
System                Read, Write

GPO Attributes in the Policies CN


GPOs are created by instantiating the groupPolicyContainer class in the Active Directory schema
and storing the resulting GPO in the System/Policies container of the Active Directory store. After
creating a GPO, you can review its CN from the Active Directory Users and Computers snap-in
by enabling the Advanced view and then expanding the Policies CN. You can review all GPO
attributes and their values from the Active Directory Services Interface Editor snap-in, ADSI Edit.
Object attributes are either mandatory or optional, as defined in the Active Directory schema. The
CN attribute is mandatory for the class Container, the Group Policy container's parent class. Three
attributes (instanceType, objectCategory, and objectClass) are mandatory for the class Top, the
Container class's parent. Thus the Group Policy container class inherits all four mandatory attributes.
The following table describes the mandatory attributes:

Mandatory Attributes of the groupPolicyContainer Class


Name            Description
CN              The common name of the GPO. This is in the form of a GUID to avoid GPO
                naming conflicts within the Policies container.
instanceType    An attribute that dictates how an object is instantiated from its class on a
                particular server. In this case, it describes how the groupPolicyContainer class
                is created into a GPO in Active Directory. A GPO is assigned the instanceType
                value of 4.
objectCategory  An object class name, including the object's path, used to group objects of the
                instantiated class. For example, the objectCategory of a GPO in the contoso.com
                domain is CN=Group-Policy-Container,CN=Schema,CN=Configuration,DC=contoso,DC=com.
objectClass     The list of classes from which this class is derived. For a GPO, the objectClass
                is container, groupPolicyContainer, and top.

There are also a number of optional attributes inherited from the top class, and others that are
assigned directly to the Group Policy container. Many optional attributes are required in order for
the Group Policy container to function properly. For example, the gPCFileSysPath optional
attribute must be present or the Group Policy container will not be linked to its corresponding
Group Policy template.


9.3.2 GroupPolicyContainer Subcontainers


Within the GroupPolicyContainer there is a series of subcontainers. The first level of
subcontainers, User and Machine, belongs to the class Container. These two containers are
used to separate user-specific and computer-specific Group Policy components.

9.3.3 Group Policy Container-Related Attributes of Domain, Site, and OU Containers


Windows Server 2003 uses the domainDNS, site, and organizationalUnit classes to create domain,
site, and OU container objects, respectively. These objects contain two optional Group Policy
container-related attributes, gPLink and gPOptions. The gPLink property contains the prioritized
list of GPOs, and the gPOptions property contains the Block Policy Inheritance setting.
The gPLink attribute holds a list of all Group Policy containers linked to the container, plus a
number for each listed Group Policy container that represents the Enforced (previously known
as No Override) and Disabled option settings. The list appears in priority order, from lowest to
highest priority GPO.
The gPOptions attribute holds an integer value that indicates whether the Block Policy
Inheritance option of a domain or OU is enabled (1) or disabled (0).
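The gPLink value is conventionally stored as a concatenation of [LDAP://<container DN>;<options>] entries, where bit 0 of the options integer marks the link disabled and bit 1 marks it Enforced. A parser sketch under that assumption (not production code):

```python
import re

# One "[LDAP://<dn>;<options>]" entry per linked GPO (assumed on-disk
# format); options bit 0 = link disabled, bit 1 = Enforced (No Override).
_LINK = re.compile(r"\[LDAP://([^;]+);(\d+)\]", re.IGNORECASE)

def parse_gplink(gplink: str):
    """Return a list of linked GPO DNs with their decoded option flags."""
    return [{"dn": dn,
             "disabled": bool(int(opts) & 1),
             "enforced": bool(int(opts) & 2)}
            for dn, opts in _LINK.findall(gplink)]
```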

9.3.4 Managing Group Policy Links for a Site, Domain, or OU


To manage GPO links to a site, domain, or OU, you must have read and write access to the
gPLink and gPOptions properties. By default, Domain Admins have this permission for domains
and organizational units, and only Enterprise Admins and Domain Admins of the forest root
domain can manage links to sites. Active Directory supports security settings on a per-property
basis, which means that a non-administrator can be delegated read and write access to specific
properties. If non-administrators have read and write access to the gPLink and gPOptions
properties of a container, they can manage the list of GPOs linked to that site, domain, or OU.

9.3.5 How WMIPolicy Objects are Stored and Associated with Group Policy Container
Objects
A single WMI filter can be assigned to a Group Policy container. The Group Policy container
stores the distinguished name of the filter in the gPCWQLFilter attribute. The Group Policy container
locates the assigned filter in the System/WMIPolicy/SOM container. Each Windows Server 2003
domain stores its WMI filters in this Active Directory container. Each WMI filter stored in the SOM
container lists the rules that define the WMI filter. Each rule is listed separately. For example,
consider a WMI filter containing the following three WQL queries:
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{5E076CF2-EFED-43A2-A623-13E0D62EC7E0}"
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{242365CD-80F2-11D2-989A-00C04F7978A9}"
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{00000409-78E1-11D2-B60F-006097C998E7}"
Three WMI rules are defined in the details of the filter. Each rule contains a number of attributes,
including the query language (WQL) and the WMI namespace queried by the rule.

9.3.6 Group Policy Template


The majority of Group Policy settings are stored in the file system of the domain controllers. This
part of each GPO is known as the Group Policy template. The GroupPolicyContainer object for
each GPO has a property, gPCFileSysPath, which contains the UNC path to its related Group
Policy template.
All Group Policy templates in a domain are stored in the
\\domain_name\Sysvol\domain_name\Policies folder, where domain_name is the FQDN of the
domain. The Group Policy template for the most part stores the actual data for the policy
extensions: for example, the Security Settings .inf file, Administrative Template-based policy
settings (.adm and .pol files), the applications available through the Group Policy Software
installation extension, and potentially scripts.
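Combining the Sysvol layout above with a GPO's container CN gives the template location; a sketch in Python (the function name is illustrative, not a Windows API):

```python
def gpt_unc_path(domain_fqdn: str, gpo_cn: str) -> str:
    """UNC path of a GPO's Group Policy template under the domain's
    Sysvol share, per the layout described above."""
    return "\\\\{0}\\Sysvol\\{0}\\Policies\\{1}".format(domain_fqdn, gpo_cn)

print(gpt_unc_path("contoso.com", "{31B2F340-016D-11D2-945F-00C04FB984F9}"))
# \\contoso.com\Sysvol\contoso.com\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}
```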

The Gpt.ini File


The Gpt.ini file is located at the root of each Group Policy template. Each Gpt.ini file contains
the GPO version number of the Group Policy template; except in the files created for the default
GPOs, a display name value is also written. For example:
[General]
Version=65539
Normally, this is identical to the versionNumber property of the corresponding
GroupPolicyContainer object. It is the decimal representation of a 4-byte number whose upper
two bytes contain the GPO user settings version and whose lower two bytes contain the
computer settings version. In this example, the version is equal to 10003 hexadecimal, giving a
user settings version of 1 and a computer settings version of 3.
Storing this version number in Gpt.ini allows the CSEs to check whether the currently applied
(cached) policy settings are up to date. If the cached version is different from the version in the
Group Policy template or Group Policy container, the policy settings are reprocessed.
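The split described above is a simple bit operation; a small sketch confirming the worked example (65539 = 0x10003):

```python
def split_gpo_version(version: int) -> dict:
    """Decode the Gpt.ini version number: the upper two bytes hold the
    user settings version, the lower two bytes the computer version."""
    return {"user": (version >> 16) & 0xFFFF,
            "computer": version & 0xFFFF}

print(hex(65539), split_gpo_version(65539))
# 0x10003 {'user': 1, 'computer': 3}
```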

Group Policy Template Subfolders


The Group Policy template folder contains the following subfolders:
- Machine. Includes a Registry.pol file that contains the registry settings to be applied to computers. When a computer initializes, this Registry.pol file is downloaded and applied to the HKEY_LOCAL_MACHINE portion of the registry. The Machine folder can contain the following subfolders (depending on the contents of the GPO):
  - Scripts\Startup. Contains the scripts that are to run when the computer starts up.
  - Scripts\Shutdown. Contains the scripts that are to run when the computer shuts down.
  - Applications. Contains the advertisement files (.aas files) used by the Windows Installer. These are applied to computers.
  - Microsoft\Windows NT\Secedit. Contains the Gpttmpl.inf file, which includes the default security configuration settings for a Windows Server 2003 domain controller.
- Adm. Contains all of the .adm files for the GPO.
- User. Includes a Registry.pol file that contains the registry settings to be applied to users. When a user logs on to a computer, this Registry.pol file is downloaded and applied to the HKEY_CURRENT_USER portion of the registry. The User folder can contain the following subfolders (depending on the contents of the GPO):
  - Applications. Contains the advertisement files (.aas files) used by the Windows Installer. These are applied to users.
  - Documents and Settings. Contains the Fdeploy.ini file, which includes status information about the Folder Redirection options for the current user's special folders.
  - Microsoft\RemoteInstall. Contains the OSCfilter.ini file, which holds user options for operating system installation through Remote Installation Services.
  - Microsoft\IEAK. Contains settings for the Internet Explorer Maintenance snap-in.
  - Scripts\Logon. Contains all the user logon scripts and related files for this GPO.
  - Scripts\Logoff. Contains all the user logoff scripts and related files for this GPO.


The User and Machine folders are created at install time, and the other folders are created as
needed when policy is set.
The permissions of each Group Policy template reflect the read and write permissions applied to
the GroupPolicyContainer through the Group Policy Object Editor. These permissions are
automatically maintained and are shown in the following table.

9.3.7 Default Group Policy Template Permissions


Trustee                       Access
Authenticated Users           Read and Execute
Administrators                Full Control
Group Policy Creator Owners   Read and Execute
Creator Owner                 Full Control (subfolders and files only)
System                        Full Control

Group Policy Object Editor use of Sysvol


Each policy setting changed in a GPO causes at least two files to be rewritten: Gpt.ini and
the file holding the changed setting. Making many changes to a GPO can therefore cause a lot of
network traffic as Sysvol replicates those changes. This congestion should only occur on a local
area network, where Sysvol replication occurs frequently. Across wide area network links, the
inter-site replication schedule causes these changes to be amalgamated into a smaller amount of
traffic (for example, four changes to the Registry.pol file result in only a single file replication).
The Local Group Policy Object
The Local GPO has no Active Directory component. Information that would be stored in the Group
Policy container of an Active Directory GPO is instead stored in the Group Policy template of a
Local GPO. The Group Policy template of a Local GPO is located in the
Windows\system32\GroupPolicy folder. The Gpt.ini file in this GroupPolicy folder must hold
more management information than its counterpart in a domain-based GPO because there is no
Active Directory component to hold this information. The following table shows the attributes
of the Local GPO's Gpt.ini file.
Local GPO Gpt.ini Attributes
Attribute                  Description
gPCUserExtensionNames      Includes a list of GUIDs that tells the client-side engine which
                           CSEs have user data in the GPO. The format is: [{GUID of CSE}{GUID
                           of MMC extension}{GUID of second MMC extension if appropriate}]
                           [repeat first section as appropriate].
gPCMachineExtensionNames   Includes a list of GUIDs that tells the client-side engine which
                           CSEs have computer data in the GPO.
Options                    Refers to GPO options such as the user portion or the computer
                           portion being disabled.

The following extensions are disabled in a Local GPO:
- Group Policy Software installation extension
- Folder Redirection
The following extensions have reduced functionality in a Local GPO:
- Public Key policies: EFS only; there are no options for trust lists or autoenrollment.
- Security Settings: there are no options for Restricted Groups or for File System, Registry, or Service access control lists (ACLs).

9.3.8 Core Group Policy Processes and Interactions


Application of GPOs to targeted users and computers relies on many interactive processes. This
section explains how GPOs are applied and filtered to Active Directory containers such as sites,
domains, and OUs. It includes information about how the Group Policy engine processes GPOs
in conjunction with CSEs. In addition, it explains how Group Policy is replicated among domain
controllers.

9.3.9 Group Policy Processing Rules


GPOs that apply to a user or computer do not all have the same precedence. Settings that are
applied later can override settings that are applied earlier. Group Policy settings are processed in
the following order:
1. Local Group Policy object. Each computer has exactly one GPO that is stored locally. It is processed for both computer and user Group Policy processing.
2. Site. Any GPOs that have been linked to the site that the computer belongs to are processed next. Processing is in the order specified by the administrator on the Linked Group Policy Objects tab for the site in GPMC. The GPO with the lowest link order is processed last, and therefore has the highest precedence.
3. Domain. Processing of multiple domain-linked GPOs is in the order specified by the administrator on the Linked Group Policy Objects tab for the domain in GPMC. The GPO with the lowest link order is processed last, and therefore has the highest precedence.
4. Organizational units. GPOs that are linked to the organizational unit that is highest in the Active Directory hierarchy are processed first, then GPOs that are linked to its child organizational unit, and so on. Finally, the GPOs that are linked to the organizational unit that contains the user or computer are processed.
To summarize, the Local GPO is processed first, and the GPOs linked to the organizational unit to
which the computer or user directly belongs are processed last. All of this processing is subject to
the following conditions:
- WMI or security filtering that has been applied to GPOs.
- Any domain-based GPO (not Local GPO) can be enforced by using the Enforce option so that its policies cannot be overwritten. Because an Enforced GPO is processed last, no other settings can write over the settings in that GPO. If you have more than one Enforced GPO, it is possible to set the same setting in each GPO to a different value, in which case the link order of the GPOs determines which one contains the final settings.
- At any domain or organizational unit, Group Policy inheritance can be selectively blocked by using Block Inheritance. However, because Enforced GPOs are always applied and cannot be blocked, blocking inheritance does not prevent policy from Enforced GPOs from applying.
Every computer has a single Local GPO that is always processed, regardless of whether the
computer is part of a domain or is a stand-alone computer. The Local GPO cannot be blocked by
domain-based GPOs; however, settings in domain GPOs always take precedence because they
are processed after the Local GPO.
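One simple reading of these rules can be sketched as a merge: apply GPOs in processing order with later writes winning, then reapply Enforced GPOs so that nothing overrides them. This is a model for reasoning, not the actual engine; here an Enforced GPO linked earlier (higher in the hierarchy) wins over a later one.

```python
def resultant_settings(gpos):
    """gpos: list of (settings_dict, enforced) tuples, already in
    processing order (local, site, domain, OUs)."""
    result = {}
    for settings, _ in gpos:              # normal pass: later overrides earlier
        result.update(settings)
    for settings, enforced in reversed(gpos):
        if enforced:                      # Enforced pass: these cannot be overwritten
            result.update(settings)
    return result

lsdou = [({"wallpaper": "local"}, False),
         ({"wallpaper": "site", "proxy": "site"}, True),   # Enforced at the site
         ({"wallpaper": "ou"}, False)]
print(resultant_settings(lsdou))  # {'wallpaper': 'site', 'proxy': 'site'}
```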

9.3.10 Targeting GPOs


The site, domain, and OU links from a GPO are used as the primary targeting principle for
defining which computers and users should receive a GPO. Security filtering and WMI filtering
can be used to further reduce the set of computers and users to which the GPO will apply. The
Group Policy engine uses the following logic in processing GPOs: If a GPO is linked to a domain,
site, or OU that applies to the user or computer, the Group Policy engine must then determine
whether the GPO should be added to its GPO list for processing. A GPO is blocked from
processing in the following circumstances:
- The GPO is disabled. You can disable either or both of the computer and user components of a GPO from its Policy Properties dialog box.


- The computer or user does not have permission to read and apply the GPO. You control permission to a GPO through security filtering, as explained in the following section.
- A WMI filter applied to the GPO evaluates to false on the client computer. A WMI filter must evaluate to true before the Group Policy engine will allow the GPO to be processed, as explained in the following section.

Security Filtering
Security filtering is a way of refining which users and computers will receive and apply the
settings in a GPO. By using security filtering to specify that only certain security principals within a
container where the GPO is linked apply the GPO, you can narrow the scope of a GPO so that it
applies only to a single group, user, or computer. Security filtering determines whether the GPO
as a whole applies to groups, users, or computers; it cannot be used selectively on different
settings within a GPO.
In order for the GPO to apply to a given user or computer, that user or computer must have both
Read and Apply Group Policy (AGP) permissions on the GPO, either explicitly or effectively
through group membership.
By default, all GPOs have Read and AGP both Allowed for the Authenticated Users group. The
Authenticated Users group includes both users and computers. This is how all authenticated
users receive the settings of a new GPO when it is applied to an organizational unit, domain or
site. Therefore, the default behavior is for every GPO to apply to every Authenticated User. By
default, Domain Admins, Enterprise Admins, and the local system have full control permissions,
without the Apply Group Policy access-control entry (ACE). However, administrators are
members of Authenticated Users, which means that they will receive the settings in the GPO by
default.
These permissions can be changed to limit the scope to a specific set of users, groups, or
computers within the organizational unit, domain, or site. The Group Policy Management Console
manages these permissions as a single unit, and displays the security filtering for the GPO on the
GPO Scope tab. In GPMC, groups, users, and computers can be added or removed as security
filters for each GPO.

9.4 How Security Filtering is Processed


Before processing a GPO, the Group Policy engine checks the access control list (ACL)
associated with the GPO. If an ACE on the GPO denies either the Apply Group Policy or the Read
permission to a security principal to which the computer or user belongs, the Group Policy engine
does not add the GPO to its list of GPOs to process. Conversely, an ACE on the GPO must allow
the appropriate security principal both the Apply Group Policy and Read permissions in order for
the Group Policy engine to add the GPO to the processing list.
If appropriate permissions are granted to the GPO, it is added to the list of GPOs to download.
In general, Deny ACEs should be avoided because you can achieve the same results by granting
or not granting Allow permissions.
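The decision just described (both permissions required, any relevant deny wins) can be sketched as a small predicate. The tuple shape and names are ours for illustration; a real ACL carries SIDs and access masks.

```python
def gpo_applies(aces, principals):
    """aces: iterable of (trustee, permission, is_allow) entries;
    principals: set of group/user names the computer or user matches.
    Per the rules above, the GPO applies only if both Read and
    Apply Group Policy are allowed and neither is denied."""
    needed = {"Read", "Apply Group Policy"}
    relevant = [(perm, allow) for trustee, perm, allow in aces
                if trustee in principals]
    denied = {perm for perm, allow in relevant if not allow}
    allowed = {perm for perm, allow in relevant if allow}
    return needed <= allowed and not (needed & denied)

# The default security filtering on a new GPO.
DEFAULT_ACL = [("Authenticated Users", "Read", True),
               ("Authenticated Users", "Apply Group Policy", True)]
print(gpo_applies(DEFAULT_ACL, {"Authenticated Users"}))  # True
```

As the text notes, an explicit Deny overrides group-granted Allows, which is why removing the Allow is usually preferable to adding a Deny.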

9.4.1 WMI Filtering


WMI makes data about a target computer available for administrative use. Such data can include
hardware and software inventory, settings, and configuration information. For example, WMI
exposes hardware configuration data such as CPU, memory, disk space, and manufacturer, as
well as software configuration data from the registry, drivers, file system, Active Directory, the
Windows Installer service, networking configuration, and application data.
WMI filtering allows you to filter the application of a GPO by attaching a WMI Query
Language (WQL) query to the GPO. The queries can be written to query WMI for multiple items. If the
query returns true for all queried items, the GPO is applied to the target user or
computer.

Page 274 of 312 Testking


Exam Name: Planning, Implementing, and Maintaining a Microsoft Windows Server
2003 Environment for an MCSE Certified on Windows 2000
Exam Type: Microsoft Exam Code: 70-296
Section Name Guide Contents Total Sections: 10

When a GPO that is linked to a WMI filter is applied on the target computer, the filter is evaluated
on the target computer. If the WMI filter evaluates to false, the GPO is not applied (except if the
client computer is running Windows 2000, in which case the filter is ignored and the GPO is
always applied). If the WMI filter evaluates to true, the GPO is applied.
The WMI filter is a separate object from the GPO in the directory. A WMI filter must be linked to a
GPO in order to apply. Each GPO can have only one WMI filter; however, the same WMI filter can
be linked to multiple GPOs. WMI filters, like GPOs, are stored only in domains. A WMI filter and
the GPO it is linked to must be in the same domain.

9.4.2 How WMI Filtering is Processed


If, after security filtering, appropriate permissions are granted to the GPO, it is added to the list of
GPOs to download. Upon download, the Group Policy engine reads the gPCWQLFilter attribute
in the Group Policy container to determine if a WMI filter is applied to the GPO. If so, the WMI
filter, which contains one or more WQL statements, is evaluated. If the statement evaluates to
true, then the GPO is processed. There are tradeoffs in using WMI filters: they can increase the
amount of time it takes to process policy, especially if the filter takes a long time to evaluate.
9.4.3 WMI Filtering Scenarios
Sample uses of WMI filters include:
Services. Computers where DHCP is turned on.
Registry. Computers that have this registry key populated.
Hardware inventory. Computers with a Pentium III processor.
Software inventory. Computers with Visual Studio .NET installed.
Hardware configuration. Computers with network interface cards (NICs) on interrupt level
3.
Software configuration. Computers with multi-casting turned on.
Associations. Computers that have any services dependent on Systems Network
Architecture (SNA) service.
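The scenarios above translate into WQL statements attached to a filter, and the text's rule is that every statement must return true for the GPO to apply. The sketch below is hypothetical: the WQL strings are typical WMI class/property examples, and `evaluate()` is a crude stand-in for a real WMI query engine, kept only to demonstrate the all-queries-must-pass rule.

```python
# Illustrative WQL queries (real WMI classes, but evaluation is simulated).
FILTER_QUERIES = [
    'SELECT * FROM Win32_Processor WHERE Name LIKE "%Pentium III%"',
    'SELECT * FROM Win32_Product WHERE Name LIKE "%Visual Studio%"',
]

def evaluate(query, inventory):
    """Stand-in for a WMI query: true if any inventory term appears in it."""
    return any(term in query for term in inventory)

def wmi_filter_passes(queries, inventory):
    # The GPO applies only if EVERY query in the filter evaluates to true.
    return all(evaluate(q, inventory) for q in queries)

print(wmi_filter_passes(FILTER_QUERIES, {"Pentium III", "Visual Studio"}))  # True
print(wmi_filter_passes(FILTER_QUERIES, {"Pentium III"}))                   # False
```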
Client support for WMI filters exists only on Windows XP, Windows Server 2003, and later
operating systems. Windows 2000 clients ignore any WMI filter, and the GPO is always applied.
WMI filters are available only in domains that have at least one Windows Server 2003 domain
controller.

9.4.4 Application of Group Policy


Application of Group Policy involves a series of processes, beginning with user and computer
logon.
Initial Processing of Group Policy
Group Policy for computers is applied at computer startup. For users, Group Policy is applied
when they log on. In Windows 2000, the processing of Group Policy is synchronous, which
means that computer Group Policy is completed before the logon dialog box is presented, and
user Group Policy is completed before the shell is active and available for the user to interact with
it. As explained in the following section, Windows XP with Fast Logon Optimization enabled
(the default setting) allows users to log on while Group Policy is processed in the background.
Synchronous and Asynchronous Processing
Synchronous processes can be described as a series of processes where one process must
finish running before the next one begins. Asynchronous processes, on the other hand, can run
on different threads simultaneously because their outcome is independent of other processes.
You can change the default processing behavior by using a policy setting for each GPO so that
processing is asynchronous instead of synchronous. For example, if the policy has been set to
remove the Run command from the Start menu, it is possible under asynchronous processing
that a user could log on before this policy takes effect, so the user would initially have access to
this functionality.


Fast Logon in Windows XP Professional


By default in Windows XP Professional, the Fast Logon Optimization feature is enabled for both
domain and workgroup members. This means that policy settings apply asynchronously when the
computer starts and when the user logs on. This process of applying policies is similar to a
background refresh process. As a result, users can log on and begin using the Windows shell
faster than they would with synchronous processing. Fast Logon Optimization is always off during
logon under the following conditions:
When a user first logs on to a computer.
When a user has a roaming user profile or a home directory for logon purposes.
When a user has synchronous logon scripts.
Note that under the preceding conditions, computer startup can still be asynchronous. However,
because logon is synchronous under these conditions, logon does not exhibit optimization. The
following table compares policy processing of Windows 2000 and Windows XP client computers.
Default Policy Processing for Client Computers
Client Application at startup/log on Application at refresh
Windows 2000 Synchronous Asynchronous
Windows XP Professional Asynchronous Asynchronous

Windows XP clients support Fast Logon Optimization in any domain environment. Fast Logon
Optimization can be disabled with the following policy setting:
Computer Configuration\Administrative Templates\System\Logon\Always wait for the
network at computer startup and logon.
Note that Fast Logon Optimization is not a feature of Windows Server 2003.
Folder Redirection and Software Installation Policies
Note that when Fast Logon Optimization is on, a user might need to log on to a computer twice
before folder redirection policies and software installation policies are applied. This occurs
because the application of these types of policies requires the synchronous policy application.
During a policy refresh (which is asynchronous), the system sets a flag indicating that the
application of folder redirection or a software installation policy is required. The flag forces
synchronous application of the policy at the user's next logon.
Time Limit for Processing of Group Policy
Under synchronous processing, there is a time limit of 60 minutes for all of Group Policy to finish
processing on the client computer. Any client-side extensions (CSEs) that are not finished after
60 minutes are signaled to stop, in which case the associated policy settings might not be fully
applied.
Background Refresh of Group Policy
In addition to the initial processing of Group Policy at startup and logon, Group Policy is applied
subsequently in the background on a periodic basis. During a background refresh, a CSE will only
reapply the settings if it detects that a change was made on the server in any of its GPOs or its
list of GPOs.
In addition, software installation and folder redirection processing occurs only during computer
startup or user logon. This is because background processing could cause undesirable results.
For example, in software installation, if an application is no longer assigned, it is removed. If a
user is using the application while Group Policy tries to uninstall it or if an assigned application
upgrade takes place while someone is using it, errors would occur. Although the Scripts CSE is
processed during background refresh, the scripts themselves only run at startup, shutdown,
logon, and logoff, as appropriate.
Periodic Refresh Processing
By default, Group Policy is processed every 90 minutes with a randomized delay of up to 30
minutes for a total maximum refresh interval of up to 120 minutes.
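The refresh arithmetic above can be stated as a small sketch: a 90-minute base interval plus a random offset of up to 30 minutes, for a maximum of 120 minutes between refreshes. The function name and use of `random.uniform` are illustrative, not how Windows implements the timer.

```python
# Minimal sketch of the background-refresh schedule described above.
import random

BASE_MINUTES = 90
MAX_OFFSET_MINUTES = 30

def next_refresh_delay():
    # Base interval plus a randomized delay of up to 30 minutes.
    return BASE_MINUTES + random.uniform(0, MAX_OFFSET_MINUTES)

delay = next_refresh_delay()
print(90 <= delay <= 120)  # True
```

The randomized offset spreads refresh traffic so that many clients do not query their domain controllers at the same instant.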
Group Policy can be configured on a per-extension basis so that a particular extension is always
processed during the processing of policy, even if the GPOs haven't changed. Policy settings for
each extension are located in Computer Configuration\Administrative Templates\System\Group
Policy.
On-Demand Processing
You can also trigger a background refresh of Group Policy on demand from the client. However,
the application of Group Policy cannot be pushed to clients on demand from the server.
Messages and Events
When Group Policy is applied, a WM_SETTINGCHANGE message is sent, and an event is
signaled. Applications that can receive window messages can use them to respond to a Group
Policy change. Those applications that do not have a window to receive the message (as with
most services) can wait for the event.
Refreshing Policy from the Command Line
You can update or refresh Group Policy settings manually through a command line tool. On
Windows 2000, you can use Secedit with the /refreshpolicy option; on Windows XP and Windows
Server 2003, you can use Gpupdate.
Group Policy and Slow Links
When Group Policy detects a slow link, it sets a flag to indicate to CSEs that a policy setting is
being applied across a slow link. Individual CSEs can determine whether or not to apply a policy
setting over the slow link. The default settings are as follows:
Default Slow Link Settings
Extension Default Setting
Security Settings On (and cannot be turned off)
Administrative Templates On (and cannot be turned off)
Software Installation Off
Scripts Off
Folder Redirection Off
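The defaults in the table above, and the fact that two of the extensions cannot be turned off over a slow link, can be sketched as follows. The dictionary keys mirror the table; the function name, the `locked` flag, and the `override` parameter are hypothetical conveniences, not a real Windows API.

```python
# Sketch of the per-extension slow-link decision from the table above.
# "locked" marks extensions that always apply and cannot be turned off.
SLOW_LINK_DEFAULTS = {
    "Security Settings":        {"apply": True,  "locked": True},
    "Administrative Templates": {"apply": True,  "locked": True},
    "Software Installation":    {"apply": False, "locked": False},
    "Scripts":                  {"apply": False, "locked": False},
    "Folder Redirection":       {"apply": False, "locked": False},
}

def applies_over_slow_link(extension, override=None):
    entry = SLOW_LINK_DEFAULTS[extension]
    if entry["locked"] or override is None:
        return entry["apply"]        # locked extensions ignore any override
    return override                  # otherwise an admin-set policy wins

print(applies_over_slow_link("Security Settings", override=False))  # True
print(applies_over_slow_link("Scripts"))                            # False
print(applies_over_slow_link("Scripts", override=True))             # True
```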

9.4.5 Group Policy Loopback Support


Group Policy is applied to the user or computer, based on where the user or computer object is
located in Active Directory. However, in some cases, users might need policy applied to them,
based on the location of the computer object, not the location of the user object. The Group
Policy loopback feature gives you the ability to apply User Group Policy, based on the computer
that the user is logging onto. The following figure shows a sample site, domain, and OU structure
and is followed by a description of the changes that can occur with loopback processing.
Sample Active Directory Structure


Normal user Group Policy processing specifies that computers located in the Servers
organizational unit have the GPOs A3, A1, A2, A4, and A6 applied (in that order) during computer
startup. Users of the Marketing organizational unit have GPOs A3, A1, A2, and A5 applied (in that
order), regardless of which computer they log on to.
In some cases this processing order might not be what you want. An example is when you do not
want applications that have been assigned or published to the users of the Marketing
organizational unit to be installed while they are logged on to the computers in the Servers
organizational unit. With the Group Policy loopback feature, you can specify two other ways to
retrieve the list of GPOs for any user of the computers in the Servers organizational unit:
Merge mode. In this mode, the computer's GPOs have higher precedence than the user's
GPOs. In this example, the list of GPOs for the computer is A3, A1, A2, A4, and A6, which is
appended to the user's list of A3, A1, A2, A5, resulting in A3, A1, A2, A5, A3, A1, A2, A4, and
A6 (listed in lowest to highest priority).
Replace mode. In this mode, the user's list of GPOs is not gathered. Only the list of
GPOs based on the computer object is used. In this example, the list is A3, A1, A2, A4,
and A6.
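The list arithmetic in the example above can be reproduced directly. GPO lists are ordered lowest to highest priority, so appending the computer's list after the user's list in merge mode is exactly what gives the computer's GPOs precedence. The function name and mode strings are illustrative.

```python
# The merge/replace computation from the loopback example above.
user_gpos     = ["A3", "A1", "A2", "A5"]
computer_gpos = ["A3", "A1", "A2", "A4", "A6"]

def loopback(user_list, computer_list, mode):
    if mode == "merge":
        return user_list + computer_list   # computer GPOs gain precedence
    if mode == "replace":
        return list(computer_list)         # user list is not gathered at all
    return list(user_list)                 # loopback disabled

print(loopback(user_gpos, computer_gpos, "merge"))
# ['A3', 'A1', 'A2', 'A5', 'A3', 'A1', 'A2', 'A4', 'A6']
print(loopback(user_gpos, computer_gpos, "replace"))
# ['A3', 'A1', 'A2', 'A4', 'A6']
```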
The loopback feature can be enabled by using the User Group Policy loopback processing
mode policy under Computer Configuration\Administrative Templates\System\Group Policy.
The processing of the loopback feature is implemented in the Group Policy engine. When the
Group Policy engine is about to apply user policy, it looks in the registry for a computer policy,
which specifies which mode user policy should be applied in.

9.4.6 How the Group Policy Engine Processes Client-Side Extensions


Client-side extensions are the components running on the client system that process and apply
the Group Policy settings to that system. There are a number of extensions that are pre-installed
in Windows Server 2003. Other Microsoft applications and third party application vendors can
also write and install additional extensions to implement Group Policy management of these
applications.
The default Windows Server 2003 CSEs are listed in the following table:
Default Windows Server 2003 CSEs
Client-Side Extension           Active Directory Component    Sysvol Component
Software Installation           PackageRegistration objects   .aas files
Security Settings               -                             GptTmpl.inf
Folder Redirection              -                             fdeploy.ini
Scripts                         -                             Scripts.ini
IP Security                     IPSec Policy objects          -
Internet Explorer Maintenance   -                             .ins + branding .inf files
Administrative Templates        -                             Registry.pol and .adm files
Disk Quota                      -                             Registry.pol
EFS Recovery                    -                             Registry.pol
Remote Installation             -                             Oscfilter.ini
Wireless Network Policies       -                             Registry.pol and .adm files
QoS Packet Scheduler            -                             Registry.pol and .adm files

Client-Side Extension Operation


CSEs are called by the Winlogon process at computer startup, at user logon, and at the Group
Policy refresh interval. CSEs are registered with Winlogon in the registry. This registration
information includes a DLL and a DLL entry point (function call) by which the CSE processing can
be initiated. The Winlogon process uses these to trigger Group Policy processing.
Each extension can opt not to perform processing at any of these points (for example, avoid
processing during background refresh).
Client-Side Extensions Registered with WinLogon
Each of the CSEs is registered under the following key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions
Each extension is identified by a key named after the GUID of the extension. These extensions
are shown in the following table.
CSE Extensions
Extension GUID Extension Name
25537BA6-77A8-11D2-9B6C-0000F8080861 Folder Redirection
35378EAC-683F-11D2-A89A-00C04FBBCFA2 Administrative Templates Extension
3610EDA5-77EF-11D2-8DC5-00C04FA31A66 Disk Quotas
426031c0-0b47-4852-b0ca-ac3d37bfcb39 QoS Packet Scheduler
42B5FAAE-6536-11D2-AE5A-0000F87571E3 Scripts
827D319E-6EAC-11D2-A4EA-00C04F79F83A Security
A2E30F80-D7DE-11d2-BBDE-00C04F86AE3B Internet Explorer Maintenance
B1BE8D72-6EAC-11D2-A4EA-00C04F79F83A EFS Recovery
C6DC5466-785A-11D2-84D0-00C04FB169F7 Software Installation
E437BC1C-AA7D-11D2-A382-00C04F991E27 IP Security

9.4.7 How Group Policy Processing History Is Maintained on the Client Computer
Each time GPOs are processed, a record of all of the GPOs applied to the user or computer is
written to the registry. GPOs applied to the local computer are stored in the following registry
path:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Group Policy\History
GPOs applied to the currently logged on user are stored in the following registry path:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Group Policy\History
Preferences and Policy Configuration
Manipulating these registry values directly is not recommended. Most of the items for which you
might need to change the behavior of an extension (such as forcing a CSE to run over a slow
link) are available as Group Policy settings. These can be found in the Group Policy Object
Editor in the following location:
Computer Configuration\Administrative Templates\System\Group Policy
The behavior can be changed for the following CSEs:
Administrative Templates (Registry-based policy)
Internet Explorer Maintenance
Software Installation
Folder Redirection
Scripts
Security
IP Security
EFS recovery
Disk Quotas
Order of Extension Processing
Administrative Templates policy settings are always processed first. Other extensions are
processed in an indeterminate order.
Policy Application Processes
There are two primary milestones that the Group Policy engine uses for GPO processing:
Creating the list of GPOs targeted at the user or computer.
Invoking the relevant CSEs to process the policy settings relevant to them within the
GPO list.
The following figure shows the steps required to reach the first milestone in GPO processing,
GPO list creation.

GPO List Creation


Creating the GPO list involves the following steps:


1. Query the Active Directory for the gPLink and gPOptions properties in the Site and
Domain hierarchies to which the user or computer object belongs.
2. Query the Active Directory for the GroupPolicyContainer objects referenced in the
gPLink properties.
3. Evaluate security filtering to determine whether the user or computer has the Apply Group
Policy access permission for the GPO.
4. Evaluate the WMI query against the WMI repository on the client computer to determine if
the computer meets the query requirements.
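The four steps above can be tied together in a sketch. Directory queries, ACL evaluation, and WMI evaluation are replaced by plain dictionaries of precomputed results; only the order of the pipeline — gather linked GPOs, then apply security filtering, then WMI filtering — reflects the text. All names are illustrative.

```python
# Hedged sketch of GPO list creation (steps 1-4 above).
def build_gpo_list(gplinks, security_ok, wmi_ok):
    """gplinks: ordered GPO names gathered from gPLink properties (steps 1-2).
    security_ok / wmi_ok: per-GPO results of steps 3 and 4."""
    result = []
    for gpo in gplinks:
        if not security_ok.get(gpo, False):  # step 3: Apply Group Policy ACE
            continue
        if not wmi_ok.get(gpo, True):        # step 4: no WMI filter -> true
            continue
        result.append(gpo)
    return result

links = ["A3", "A1", "A2"]
print(build_gpo_list(links,
                     security_ok={"A3": True, "A1": True, "A2": False},
                     wmi_ok={"A1": False}))  # ['A3']
```

Note that a GPO with no WMI filter defaults to passing step 4, while a GPO with no matching Allow ACE fails step 3.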
Once the GPO list is created, the Group Policy engine and the CSEs work together to process
Group Policy template components. The following figure shows the steps required to determine
which CSEs to call.
Determining CSEs to Call


Determining which CSEs to call involves the following steps:


1. Retrieve the list of CSEs registered with Winlogon.
2. Check to see whether it is appropriate to run a particular CSE (for example, whether
background processing or slow link processing is enabled for the extension).
3. Check the CSE history against list of Applied GPOs. GPOs with new version numbers
and GPOs that have settings relevant to the CSE (that is, they have the CSE extension GUID
in the Group Policy container gpcUserExtension or gpcMachineExtension properties) are
added to the Changed GPO List. GPOs no longer in the Applied GPO List are added to the
Deleted GPO List.
4. Check to see whether the appropriate CSE should be processing policy settings for the
user or the computer.
5. Check the version number listed in the GPO against its recorded version history in the
registry to determine whether the GPO needs reprocessing.
If all of the version numbers are unchanged, the MaxNoGPOListChanges interval might have
expired; if so, the CSE processes policy settings even though the version numbers are unchanged.
Steps 3 through 5 are repeated by each CSE for all GPOs in the GPO list. After one CSE is done,
the next CSE that needs to run repeats the entire process.
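The history comparison in steps 3 through 5 can be sketched as a diff between the CSE's recorded history and the current GPO list. Version numbers and history are plain dictionaries here, not the registry entries the text describes, and the function name is hypothetical.

```python
# Sketch of the per-CSE change detection: GPOs with a new (or never-recorded)
# version that are relevant to this CSE go on the Changed list; GPOs that
# dropped out of the applied list go on the Deleted list.
def diff_gpo_lists(current, history, relevant):
    """current/history: {gpo: version}; relevant: GPOs carrying this CSE's GUID."""
    changed = [g for g, v in current.items()
               if g in relevant and history.get(g) != v]
    deleted = [g for g in history if g not in current]
    return changed, deleted

current = {"A1": 7, "A2": 3, "A4": 1}
history = {"A1": 6, "A2": 3, "A5": 2}
changed, deleted = diff_gpo_lists(current, history, relevant={"A1", "A2", "A4"})
print(changed)  # ['A1', 'A4']  (new version, or never processed before)
print(deleted)  # ['A5']        (no longer in the applied GPO list)
```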


Group Policy updates are dynamic and occur at specific intervals; if no changes are discovered,
GPOs are not processed. Security policy settings are an exception: a value sets a maximum limit
on how long a client can run without reapplying unchanged GPOs. By default, even when the
GPOs that contain security policy settings have not changed, security policy is forcibly reapplied
every 16 hours, plus a randomized delay of up to 30 minutes.

9.5 Group Policy Replication


In a domain that contains more than one domain controller, Group Policy information takes time
to propagate, or replicate, from one domain controller to another. Low bandwidth network
connections between domain controllers slow replication. The Group Policy infrastructure has
mechanisms to manage these issues.
Each GPO is stored partly in the Sysvol on the domain controller and partly in Active Directory.
GPMC and Group Policy Object Editor present and manage the GPO as a single unit. For
example, when you set permissions on a GPO in GPMC, GPMC is actually setting permissions
on objects in both Active Directory and the Sysvol. It is not recommended that you manipulate
these separate objects independently outside of GPMC and the Group Policy Object Editor. As
shown in the following figure, it is important to understand that these two separate components of
a GPO rely on different replication mechanisms. The file system portion is replicated through
FRS, independently of the replication handled by Active Directory. Only the Sysvol subfolder
(%systemroot%\SYSVOL\sysvol) is shared and replicated. Sysvol was designed to allow multiple
domains' Sysvols to be replicated in the same tree; each domain's Sysvol is contained under a
subfolder of the Sysvol share. For the current domain, a copy of the domain's Sysvol subtree is
also stored directly under the %systemroot%\SYSVOL\domain folder.
Group Policy Replication

FRS is a multi-master replication service that synchronizes folders between two or more Windows
Server 2003 or Windows 2000 systems. Modified files are queued for replication at the point the
file is closed. In the case of conflicting modifications between two copies of an FRS replica, the
file with the latest modification time will overwrite any other copies. This is referred to as a "last-
writer-wins" model.
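The last-writer-wins rule described above amounts to selecting the copy with the latest modification time when replicas conflict. The sketch below uses hypothetical replica tuples; FRS itself of course operates on real file metadata.

```python
# Minimal sketch of FRS "last-writer-wins" conflict resolution: when two
# replicas modify the same file, the copy with the latest modification
# time overwrites the others.
def resolve_conflict(copies):
    """copies: list of (replica_name, modification_time); latest wins."""
    return max(copies, key=lambda c: c[1])

copies = [("DC1", 1700000000), ("DC2", 1700000350)]
print(resolve_conflict(copies)[0])  # DC2
```

This is also why editing the same GPO from different domain controllers is risky: a slightly later save on another replica silently overwrites earlier changes.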
FRS replication topology configuration is stored as a combination of child objects of each FRS
replica partner (in the FRS Subscriptions subcontainer) and objects within another hidden
subcontainer of the domain System container. Replication links between systems are maintained


as FRS subscription objects. These objects specify the replica partner and the replication
schedule. It is possible to view the schedule by browsing to an FRS subscription object and
viewing the properties. The replica partner is stored as the object GUID of the computer account
of that partner.
The Sysvol folder is a special case of FRS replication. Active Directory automatically maintains
the subscription objects and their schedules as the directory replication is built and maintained. It
is possible, but not recommended, to modify the properties (for example, the schedule) of the
Sysvol subscription objects manually.
The FRS replication schedule only approximates the directory replication schedule, so it is
possible for the directory-based Group Policy information and the file-based information to get
temporarily out of sync. Since GPO version information is stored in both the Group Policy
container object and in the Group Policy template, any discrepancy can be viewed with tools such
as Gpotool.exe and Repadmin.exe.
For those Group Policy extensions that store data in only one data store (either Active Directory
or Sysvol), this is not an issue, and Group Policy is applied as it can be read. Such extensions
include Administrative Templates, Scripts, Folder Redirection, and most of the Security Settings.
For any Group Policy extension that stores data in both storage places (Active Directory and
Sysvol), the extension must properly handle the possibility that the data is unsynchronized. This
is also true for extensions that need multiple objects in a single store to be atomic in nature, since
neither storage location handles transactions.
An example of an extension that stores data in Active Directory and Sysvol is Group Policy
Software installation extension. The .aas files are stored on Sysvol and the Windows Installer
package definition is in Active Directory. If the .aas file exists, but the corresponding Active
Directory components are not present, the software is not installed. If the .aas file is missing, but
the package is known in Active Directory, application installation fails gracefully and will be retried
on the next processing of Group Policy.
The tools used to manage Active Directory and Group Policy, such as GPMC, the Group Policy
Object Editor, and Active Directory Users and Computers all communicate with domain
controllers. If there are several domain controllers available, changes made to objects like users,
computers, organizational units, and GPOs might take time to appear on other domain
controllers. The administrator might see different data depending on the last domain controller on
which changes were made and which domain controller they are currently viewing the data from.
If multiple administrators manage a common GPO, it is recommended that all administrators use
the same domain controller when editing a particular GPO, to avoid collisions in FRS. Domain
Admins can use a policy to specify how Group Policy chooses a domain controller; that is, they
can specify which domain controller option should be used. The Group Policy domain controller
selection policy setting is available in the Administrative Templates node for User Configuration,
in the System\Group Policy subcontainer.

9.5.1 Network Ports Used by Group Policy


Port Assignments for Group Policy
Service Name                            UDP                   TCP
Lightweight Directory Access Protocol   n/a                   389
SMB                                     n/a                   445
DCOM                                    Dynamically assigned  Dynamically assigned
RPC                                     Dynamically assigned  Dynamically assigned

9.6 What Is Resultant Set of Policy?


One challenge of Group Policy administration is to understand the cumulative effect of a number
of Group Policy objects (GPOs) on any given computer or user, or how changes to Group Policy,
such as reordering the precedence of GPOs or moving a computer or user to a different
organizational unit (OU) in the directory, might affect the network. The Resultant Set of Policy


(RSoP) snap-in offers administrators one solution. Administrators use the RSoP snap-in to see
how multiple Group Policy objects affect various combinations of users and computers, or to
predict the effect of Group Policy settings on the network

9.6.1 Resultant Set of Policy Snap-in Core Scenario


There are two core scenarios for the Resultant Set of Policy (RSoP) snap-in: reporting the effect
of policy on various combinations of users and computers, and predicting the effect of policy. The
reporting functionality of RSoP snap-in is known as logging mode. The predictive functionality is
known as planning mode. Both of these scenarios are described in detail in the following
respective sections.
Whether in logging or planning mode, an administrator typically accesses the RSoP snap-in by
opening an empty Microsoft Management Console (MMC), adding the RSoP snap-in to the
console, and then using the Resultant Set of Policy Wizard to collect data from various computers
on the network for either logging or planning mode.
Logging Mode
An administrator uses logging mode to report on the current state of Group Policy settings. The
scope of reports can include Group Policy settings for various targets, including a computer, a
user, or both. The administrator selects combinations of targets in the Resultant Set of Policy
Wizard.
When the administrator finishes with the wizard, the RSoP snap-in queries the Windows
Management Instrumentation (WMI) repository on the client computer for information about the
Group Policy settings for the targets. RSoP snap-in displays these settings.
Logging mode in the RSoP snap-in is useful for troubleshooting Group Policy. The RSoP snap-in
lists each policy setting, and from which GPO the displayed setting came. GPO precedence is
also available, as is any error information that was logged by either the Group Policy engine or
any of the Group Policy client side extensions. Using this information, an administrator can
determine which GPOs are applying a Group Policy setting and which GPOs are not.
In addition to using an empty MMC to access RSoP snap-in, there is another scenario an
administrator is just as likely to use for troubleshooting Group Policy settings. An administrator
can run RSOP.msc from the command prompt on a client computer to report on the current
computer with the current user logged on. In this manner, the administrator avoids having to
select targets in the Resultant Set of Policy Wizard.
Planning Mode
An administrator uses planning mode to perform "what if" scenarios with Group Policy. Like
logging mode, the end result is a report of Group Policy settings. However, unlike logging mode,
this report contains simulated data.
The administrator uses RSoP snap-in and a Windows Server 2003 domain controller. First, the
administrator must use the Resultant Set of Policy Wizard to define the scope of the report. This
is done by selecting various targets. In planning mode these targets must include both user
information and computer information; however, the administrator can select either a particular
user or computer or an Active Directory container for users or computers.
After specifying the scope of the plan, the administrator uses the wizard to specify a simulated
Group Policy settings environment. Because these Group Policy environment settings are
simulated, an actual client computer might behave differently in live tests. It is not possible for a
domain controller to accurately determine how a particular client computer will actually behave.
The following simulated environment settings are available:
A slow network connection
Loopback processing (merge mode and replace mode)
Site-based policy implementation
Security group memberships for the user, the computer, or both
WMI Filters that are linked to the user, the computer, or both
The administrator can elect to simulate any, all, or none of the environment variables. Whatever
the choices, when the wizard finishes, the RSoP snap-in requests a simulation of policy from the


domain controller. The domain controller uses Resultant Set of Policy Provider to simulate the
application of GPOs. This service passes the GPO settings to virtual client-side extensions on the
domain controller. The results of the simulation are stored in the WMI repository on the domain
controller before the information is passed back to the RSoP snap-in for analysis. It is important
to remember that the results displayed in the RSoP snap-in are not actual Group Policy settings,
but simulated Group Policy based on the settings created using the wizard. If a custom client side
extension exists on a client but does not exist on the domain controller, then any Group Policy
settings this custom client side extension might create would not appear in the simulation results.
Planning mode in the RSoP snap-in is useful for planning Group Policy. The RSoP snap-in lists
each GPO from which the displayed setting came as well as any other lower priority GPOs that
attempted to configure settings. Using this information, an administrator can determine which
GPOs are applying a policy setting and which GPOs are not.
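The winner-and-losers behavior described above can be sketched as follows. This is a conceptual Python illustration of how a "winning" GPO and the lower-priority GPOs that also attempted to configure a setting might be tracked; it is not the RSoP provider's actual interface, and all GPO and setting names below are invented.

```python
# Conceptual sketch (not the real RSoP provider API): given GPOs listed from
# lowest to highest precedence, compute the "resultant set" the snap-in would
# display: the winning value per setting, plus the lower-priority GPOs that
# also attempted to configure it.
def resultant_set(gpos):
    """gpos: list of (gpo_name, {setting: value}), lowest to highest precedence."""
    result = {}
    for name, settings in gpos:
        for setting, value in settings.items():
            entry = result.setdefault(
                setting, {"winning_gpo": None, "value": None, "also_attempted": []}
            )
            if entry["winning_gpo"] is not None:
                # A higher-precedence GPO is overriding an earlier attempt.
                entry["also_attempted"].append(entry["winning_gpo"])
            entry["winning_gpo"] = name
            entry["value"] = value
    return result

gpos = [
    ("Default Domain Policy", {"MinimumPasswordLength": 7}),
    ("Sales OU Policy",       {"MinimumPasswordLength": 10, "RemoveRunMenu": True}),
]
rsop = resultant_set(gpos)
# "Sales OU Policy" wins MinimumPasswordLength; "Default Domain Policy" attempted it.
```

This mirrors what the snap-in reports: for each setting, the GPO the displayed value came from, plus any lower-priority GPOs whose attempts were overridden.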

9.6.2 Similar Technologies for Viewing Resultant Set of Policy Data


Although administrators can use the RSoP snap-in for reporting and planning the effects of Group
Policy, much of its functionality has been subsumed into Group Policy Management Console
(GPMC), which provides a much better experience for the network administrator.
The following table gives the equivalent names of GPMC and RSoP snap-in features that utilize
the RSoP infrastructure.
Equivalent Names and Descriptions of GPMC and RSoP Snap-in Features
GPMC feature RSoP snap-in feature Description
Group Policy Results Logging Mode Reports the effect of policy on a computer or user
Group Policy Modeling Planning Mode Predicts the effect of policy on the network

Although GPMC provides functionality that subsumes most of the reporting features of RSoP
snap-in, there is some Group Policy information that can only be reported on using RSoP snap-in.
For example, RSoP snap-in lists each GPO from which the displayed setting came as well as any
other lower priority GPOs that attempted to configure settings. Using this information, an
administrator can determine which GPOs are applying a policy setting and which GPOs are not.
In these cases, an administrator can use GPMC to open the RSoP snap-in by electing to view
advanced information about a Group Policy Results or Group Policy Modeling report.

9.6.3 Resultant Set of Policy Snap-in Dependencies


Resultant Set of Policy snap-in has the following dependencies.
Resultant Set of Policy snap-in is an MMC console; as such, it requires MMC
infrastructure to function.
Resultant Set of Policy infrastructure was introduced in Windows XP. Therefore, you
cannot gather RSoP data for a computer running Windows 2000 or earlier.
To run RSoP on a remote computer, you must be logged on as a member of the local
Administrators group, or be delegated Generate Resultant Set of Policy (logging) rights for
logging mode, or Generate Resultant Set of Policy (planning) rights for planning mode.
To run RSoP planning mode, you must have a Windows Server 2003 domain controller.
For planning mode involving security group membership, the administrator needs access
to see membership of security groups.

9.6.4 How Resultant Set of Policy Works


Administrators can use the Resultant Set of Policy (RSoP) snap-in for two purposes: to predict
the cumulative effect of Group Policy objects (GPOs), or to determine the actual result of Group
Policy settings on a particular computer, user, or user on a computer.
Although administrators can use the RSoP snap-in for reporting and planning the effects of Group
Policy, much of its functionality has been subsumed into Group Policy Management Console
(GPMC), which provides a much better experience for the network administrator.


In an ideal environment, administrators are encouraged to use the GPMC features for simulating
Group Policy or determining the effect of Group Policy on a particular user or computer.

9.6.5 Resultant Set of Policy Snap-in Architecture


The RSoP snap-in is one of three administrative tools used to manage Group Policy. The
following diagram shows all three of the tools, as well as the domain controller and a client
computer. In addition, the diagram describes the different communication protocols being used by
each tool (LDAP, SMB, RPC/COM); the interactions between RSoP, the domain controller, and
the client; and whether those interactions are READ or READ/WRITE.
Resultant Set of Policy Snap-in Architectural Diagram

Component Description of Resultant Set of Policy Snap-in Architectural Diagram


Component Description

Resultant Set of Policy snap-in
The RSoP snap-in is an MMC used to determine which policy settings are in effect for a given
computer, user, or user on a computer, or to predict the effect of applied policy.
The RSoP snap-in itself is contained within the same binary as the Group Policy Object Editor.
Thus, the user interface is a read-only view of the same information available in Group Policy
Object Editor. However, there is one important difference: while Group Policy Object Editor can
show all settings from a single GPO at a time, the RSoP snap-in can show the cumulative effect
of many GPOs.
For RSoP snap-in functionality, administrators can use GPMC, which includes its own integrated
RSoP infrastructure reporting features.
RSoP snap-in is capable of read access to Active Directory, Sysvol, the Event Log, the RSoP
infrastructure, and the local GPO on the target computer. Although RSoP is capable of read-only
access to Active Directory and Sysvol, most of the work of predicting or reporting Group Policy
is done using RPC/COM communication with the RSoP provider, either on the client or the
domain controller.

Domain Controller (Server)
In an Active Directory forest, the domain controller is a server that contains a writable copy of
the Active Directory database, participates in Active Directory replication, and controls access
to network resources. GPOs are stored in two parts of domain controllers: the Active Directory
database (sometimes called the Group Policy container) and the Sysvol (known as the Group
Policy template).

Active Directory
Active Directory, the Windows-based directory service, stores information about objects on a
network and makes this information available to users and network administrators.
Administrators link GPOs to Active Directory containers such as sites, domains, and OUs that
include user and computer objects. In this way, policy settings can be targeted to users and
computers throughout the organization.

Sysvol
Sysvol is a shared directory that stores the server copy of the domain's public files, which are
replicated among all domain controllers in the domain. The Sysvol contains the largest part of a
GPO: the Group Policy template, which includes Administrative Template-based policy settings,
security settings, script files, and information regarding applications that are available for
software installation. File Replication Service (FRS) replicates this information throughout the
network.

LDAP Protocol
LDAP (Lightweight Directory Access Protocol) is the protocol used by the Active Directory
directory service. RSoP snap-in uses LDAP for authentication and delegation checks. The client
also uses LDAP to read the directory store on the domain controller.

SMB Protocol
SMB (Server Message Block) protocol is the primary method of file and print sharing. SMB can
also be used for abstractions such as named pipes and mailslots. RSoP snap-in and the client
use SMB to access the Sysvol on the domain controller.

RPC/COM
RPC (Remote Procedure Call), DCOM (Distributed Component Object Model), and COM
(Component Object Model) enable data exchange between different processes. The different
processes can be on the same computer, on the local area network, or across the Internet.
COM and RPC are used by the RSoP snap-in for communication with the RSoP provider on the
client or domain controller.

WMI
WMI is a management infrastructure that supports the monitoring and controlling of system
resources through a common set of interfaces and provides a logically organized, consistent
model of Windows operation, configuration, and status. WMI makes available data about a
target computer for administrative use. Such data can include hardware and software inventory,
settings, and configuration information. For example, WMI exposes hardware configuration data
such as CPU, memory, disk space, and manufacturer, as well as software configuration data
from the registry, drivers, file system, Active Directory, the Windows Installer service, networking
configuration, and application data. WMI Filtering in Windows Server 2003 allows you to create
queries based on this data. These queries (also called WMI filters) determine which users and
computers receive all of the policy settings configured in the GPO where you create the filter.

RSoP infrastructure
All Group Policy processing information is collected and stored in a namespace in WMI. This
information, such as the list, content, and logging of processing details for each GPO, can then
be accessed by tools using WMI.
In logging mode, RSoP snap-in queries the database on the target computer, receives
information about the policies, and displays it in the RSoP snap-in.
In planning mode, RSoP snap-in simulates the application of policy using the Resultant Set of
Policy Provider on a domain controller. This simulates the application of GPOs and passes them
to Group Policy client-side extensions on the domain controller. The results of this simulation
are stored in a local WMI database on the domain controller before the information is passed
back and displayed in the RSoP snap-in.

Event Log
The Event Log is a service that records events in various logs. The RSoP snap-in reads the
Event Log on client computers and domain controllers in order to provide information about
error events.

Local Group Policy object
The local Group Policy object (local GPO) is stored on each individual computer, in the hidden
%systemroot%\System32\GroupPolicy directory. Each computer running Windows 2000,
Windows XP Professional, Windows XP 64-Bit Edition, or Windows Server 2003 has exactly one
local GPO, regardless of whether the computers are part of an Active Directory environment.
Local GPOs do not support certain extensions, such as Folder Redirection or Group Policy
Software Installation. Local GPOs do support many security settings, but the Security Settings
extension of the Group Policy Object Editor does not support remote management of local
GPOs. Local GPOs are always processed, but can be overridden by domain policy in an Active
Directory environment, because GPOs linked to Active Directory containers have precedence.
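Because GPOs linked to Active Directory containers take precedence over the local GPO, the processing order can be sketched as follows. This is a simplified Python illustration assuming the usual local, site, domain, OU ordering; it is not actual Group Policy engine code, and the setting names are invented.

```python
# Illustrative sketch (simplified): Group Policy is applied in order, local GPO
# first, then site, domain, and OU GPOs, so a setting configured in a
# domain-linked GPO overrides the same setting in the local GPO.
def apply_in_lsdou_order(local_gpo, site_gpos, domain_gpos, ou_gpos):
    effective = {}
    for gpo in [local_gpo, *site_gpos, *domain_gpos, *ou_gpos]:
        effective.update(gpo)  # later (higher-precedence) GPOs win
    return effective

local = {"DisableTaskMgr": True, "Wallpaper": "local.bmp"}
domain = [{"Wallpaper": "corp.bmp"}]
settings = apply_in_lsdou_order(local, [], domain, [])
# The domain GPO overrides the local wallpaper; the unchallenged local setting survives.
```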

9.7 Group Policy Tools


GPUpdate.exe
This tool is used for refreshing local and Active Directory policy settings on the computer from
which you run the GPUpdate command.
Category This command-line tool is included in Windows XP and Windows Server 2003.
Version compatibility You can use GPUpdate locally on Windows XP and higher computers to
refresh policy immediately. On computers running Windows 2000, this behavior is provided by
using the secedit.exe command-line tool with a specific parameter.
GPUpdate refreshes local Group Policy settings and Group Policy settings that are stored in
Active Directory, including security settings, on the computer from which it is run. This command
supersedes the now obsolete /refreshpolicy option for the secedit command line tool. For more
information about GPUpdate, type GPUpdate /? at the command line.
GPResult.exe
GPResult.exe is a Group Policy tool for examining the settings applied during Group Policy
refresh.
Category There are two versions of GPResult: one shipped with the Windows 2000 Resource
Kit; the other is included with Windows XP and Windows Server 2003. The Windows 2000
version runs only locally on Windows 2000. The Windows Server 2003 version runs locally or
remotely on Windows XP or Windows Server 2003.
The different versions are not compatible.
Version compatibility GPResult utilizes Resultant Set of Policy (RSoP) data. You can use
the GPResult that shipped with the Windows Server 2003 family on Windows XP and higher
computers, and the GPResult that shipped with the Windows 2000 Resource Kit on Windows 2000.
GPResult for Windows Server 2003 displays Group Policy settings and Resultant Set of Policy
(RSoP) for a user or a computer. Because you can apply overlapping levels of policies to any
computer or user, the Group Policy feature generates a resulting set of policies at logon.
GPResult displays the resulting set of policies that were enforced on the computer for the
specified user at logon.

GPResult for Windows 2000 estimates the Group Policy settings that would be applied at a
specific computer. Full documentation for this version of GPResult is available in the readme file
distributed with the tool.
Dcgpofix.exe: Dcgpofix
Category Dcgpofix ships with Windows Server 2003.
Version compatibility You can run Dcgpofix only on servers running Windows Server 2003
family. This tool can restore default domain policy and default domain controllers policy to their
original state after installation, except for some security-related settings that are impossible to
return to their exact original state. When you run Dcgpofix, you will lose any changes made to
these Group Policy objects. For more information about Dcgpofix, type Dcgpofix /? at the
command line.
This tool should be used as a last-resort disaster-recovery tool. A better solution is to use GPMC
to back up and restore these GPOs.
GPMonitor.exe: Group Policy Monitor Tool
Category Group Policy Monitor tool is included in the Windows Server 2003 Deployment Kit.
Version compatibility The Group Policy Monitor tool works on Windows XP and higher
computers. Group Policy Monitor tool collects Group Policy information at every Group Policy
refresh and sends that information to a centralized location that you specify. You can then use the
Group Policy Monitor user interface (UI) to view the data. The Group Policy Monitor UI can
provide a historical view of policy changes. The UI is also designed to make it easy to navigate
through historical snapshots of data and trace changes. For more information about the Group
Policy Monitor tool, type GPMonitor /? at the command line. You can find full documentation for
the Group Policy Monitor tool in the Windows Server 2003 Deployment Kit Tools.
GPOTool.exe: Group Policy Verification Tool
Category Group Policy Verification tool is included in the Windows Server 2003 Deployment Kit.
Version compatibility The Group Policy Verification tool works on Windows 2000 and higher
computers. You use Group Policy Verification tool to check the health of the Group Policy objects
on domain controllers. The tool checks GPOs for consistency on each domain controller in your
domain. The tool also determines whether the policies are valid and displays detailed information
about replicated Group Policy objects (GPOs).
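The kind of per-domain-controller consistency check GPOTool performs might be sketched like this. This is a hypothetical Python illustration: the real tool compares GPO version numbers and other properties across domain controllers, and all server, GPO, and version data below are invented.

```python
# Hypothetical sketch of a GPOTool-style consistency check: compare each GPO's
# version number as replicated to every domain controller, and flag GPOs whose
# copies disagree (for example, due to replication lag or failure).
def find_inconsistent_gpos(replicas):
    """replicas: {gpo_name: {dc_name: version}} -> sorted list of inconsistent GPOs."""
    return sorted(
        gpo for gpo, versions in replicas.items()
        if len(set(versions.values())) > 1
    )

replicas = {
    "Default Domain Policy": {"DC1": 12, "DC2": 12},   # consistent
    "Sales OU Policy":       {"DC1": 5,  "DC2": 4},    # replication lag
}
bad = find_inconsistent_gpos(replicas)
# Only "Sales OU Policy" is flagged.
```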

Section 10

10. Managing and Maintaining Group Policy


10.1 Group Policy Administrative Tools
10.1.1 Group Policy Administrative Tools Architecture
10.1.2 Group Policy Administrative Tools Components
10.1.3 Group Policy Management Console
10.1.4 Group Policy Object Editor
10.1.5 Resultant Set of Policy Snap-in

10.2 Group Policy Administrative Tools Deployment Scenarios


10.2.1 Group Policy Management Console
10.2.2 Group Policy Object Editor
10.2.3 Resultant Set of Policy snap-in
10.2.4 Group Policy Management Console Core Scenarios
10.2.5 Creating and Editing GPOs
10.2.6 Manipulating Inheritance
10.2.7 Reporting of GPO Settings


10.3 Group Policy Modeling


10.4 Group Policy Results

10.5 Group Policy Management Console Dependencies


10.5.1 GPMC System Installation Requirements
10.5.2 GPMC Feature Requirements

10.6 How Group Policy Management Console Works


10.7 Group Policy Management Console Architecture

10.8 Group Policy Management Console Interfaces


10.8.1 Group Policy Management Console Processes and Interactions
10.8.2 GPO Operations in Group Policy Management Console
10.8.3 Specifying the discretionary access control list (DACL) on the new GPO
10.8.4 Copying within a domain compared with copying to another domain
10.8.5 Migration Tables
10.8.6 Settings impacted by migration tables
10.8.7 Options for specifying migration tables
10.8.8 Contents of Migration tables

10.9 Administrative Templates in GPMC and Group Policy Object Editor


10.9.1 Administrative Templates
10.9.2 Handling Administrative Template Files in GPMC
10.9.3 Handling Administrative Template Files in Group Policy Object Editor

10.1 Group Policy Administrative Tools


There are three primary tools used to administer Group Policy: Microsoft Group Policy
Management Console (GPMC), Group Policy Object Editor, and Resultant Set of Policy (RSoP)
snap-in. Each of these tools is a Microsoft Management Console (MMC). Administrators use
GPMC for the bulk of Group Policy management tasks. Group Policy Object Editor is for editing
Group Policy objects. Although administrators can still use the RSoP snap-in for reporting and
planning the effects of Group Policy, much of its functionality has been subsumed into GPMC.

10.1.1 Group Policy Administrative Tools Architecture


The diagram below illustrates the relationship between the domain controller, client, and the three
primary tools used to administer Group Policy.

Group Policy Administrative Tools Architectural Diagram


Components of Group Policy Administrative Tools Architectural Diagram


Component Description

Group Policy Object Editor
The Group Policy Object Editor is used to edit GPOs. It was previously known as the Group
Policy snap-in, Group Policy Editor, or Gpedit. A notable feature of the Group Policy Object
Editor is its extensibility. Developers can extend the server-side snap-ins that ship with Group
Policy Object Editor, or they can develop completely new extensions for implementing Group
Policy.
The Group Policy Object Editor is capable of read and write access to Active Directory, Sysvol,
and the local GPO.

Server-Side Snap-Ins
The nodes of the Group Policy Object Editor are MMC snap-ins. These snap-ins include
Administrative Templates, Scripts, Security Settings, Software Installation, Folder Redirection,
Remote Installation Services, and Internet Explorer Maintenance. Snap-ins may in turn be
extended. For example, the Security Settings snap-in includes several extension snap-ins.
Developers can also create their own MMC extension snap-ins to the Group Policy Object
Editor to provide additional policy settings. Extensions are capable of read and write access to
the local GPO.

Group Policy Management Console (GPMC)
GPMC makes Group Policy much easier to manage by providing a view of GPOs, sites,
domains, and organizational units (OUs) across an enterprise. GPMC can be used to manage
either Windows Server 2003 or Windows 2000 domains.
GPMC simplifies the management of Group Policy by providing a single place for managing
core aspects of Group Policy, such as scoping, delegating, filtering, and manipulating
inheritance of GPOs. You can also back up GPOs to the file system as well as restore GPOs
from backups. GPMC includes features that enable an administrator to predict how GPOs are
going to affect the network as well as to determine how GPOs have actually changed settings
on any particular computer or user.
GPMC is capable of read and write access to the Sysvol using the SMB protocol. It is also
capable of read and write access to Active Directory via the LDAP protocol. In addition, GPMC
is capable of read access to the Event Log and RSoP infrastructure.

Resultant Set of Policy (RSoP) snap-in
The Resultant Set of Policy snap-in is an MMC used to determine which policy settings are in
effect for a given user or computer, or to predict the effect of applied policy.
The snap-in itself is contained within the same binary as the Group Policy Object Editor snap-in
(gpedit.dll). The user interface is mostly a read-only view of the same information available in
the Group Policy Object Editor. However, there is one important difference: while the Group
Policy Object Editor can show the settings of only a single GPO at a time, the RSoP snap-in
shows the cumulative effect of many GPOs.
For RSoP functionality, it is recommended to use GPMC, which includes its own integrated
RSoP features.
The RSoP snap-in is capable of read access to Active Directory, Sysvol, the Event Log, the
RSoP infrastructure, and the local GPO. Although the RSoP snap-in is capable of read-only
access to Active Directory and Sysvol, most of the work of predicting or reporting Group Policy
is done using RPC/COM communication with the RSoP provider, either on the client or the
domain controller.

Domain Controller (Server)
In an Active Directory forest, the domain controller is a server that contains a writable copy of
the Active Directory database, participates in Active Directory replication, and controls access
to network resources. GPOs are stored in two parts of domain controllers: the Active Directory
database and the Sysvol share.

Client
In an Active Directory forest, settings from GPOs are applied to clients. GPMC and the RSoP
snap-in query the client to determine how policy has been applied to a particular user or
computer.

Active Directory
Active Directory, the Windows-based directory service, stores information about objects on a
network and makes this information available to users and network administrators.
Administrators link GPOs to Active Directory containers such as sites, domains, and OUs that
include user and computer objects. In this way, policy settings can be targeted to users and
computers throughout the organization.

Sysvol
Sysvol is a shared directory that stores the server copy of the domain's public files, which are
replicated among all domain controllers in the domain. The Sysvol contains the largest part of a
GPO: the Group Policy template (GPT), which includes Administrative Template-based policy
settings, security settings, and script files. File Replication Service (FRS) replicates this
information throughout the network.

RSoP infrastructure
All Group Policy processing information is collected and stored in a Common Information Model
Object Manager (CIMOM) database on the local computer. This information, such as the list of
GPOs that have been processed, as well as content and logging of processing details for each
GPO, can then be accessed by tools using Windows Management Instrumentation (WMI).
With Group Policy Results in GPMC, or logging mode for the RSoP snap-in, the RSoP service
is used to query the CIMOM database on the target computer; it receives information about the
policies that were applied and displays the resulting information in GPMC or the RSoP snap-in.
With Group Policy Modeling in GPMC, or planning mode for the RSoP snap-in, the RSoP
service simulates the application of policy using the Group Policy Directory Access Service
(GPDAS) on a domain controller. GPDAS simulates the application of GPOs and passes them
to virtual client-side extensions on the domain controller. The results of this simulation are
stored in a local CIMOM database on the domain controller before the information is passed
back and displayed in either GPMC or the RSoP snap-in.

10.1.2 Group Policy Administrative Tools Components


The following is a list of Group Policy administrative tools.

10.1.3 Group Policy Management Console


In the past, administrators have been required to use several Microsoft tools to manage Group
Policy, such as the Active Directory Users and Computers, Active Directory Sites and Services,
Group Policy Object Editor and Resultant Set of Policy snap-ins. With the introduction of Group
Policy Management Console (GPMC), most administrative tasks have been integrated into a
single, unified console that also offers several new capabilities.

10.1.4 Group Policy Object Editor


Group Policy Object Editor is an MMC snap-in used to edit the policy settings in Group Policy
objects (GPOs). Like all MMC snap-ins, its functionality can be customized or extended by means
of MMC snap-in extensions. When you use Group Policy Object Editor, various extensions are
included by default, including Administrative Templates, Scripts, Security Settings, Software
Installation, Folder Redirection, Remote Installation Services, and Internet Explorer Maintenance.
Snap-ins may in turn be extended. For example, the Security Settings snap-in includes several
extension snap-ins. Developers can also create their own MMC extension snap-ins to the Group
Policy Object Editor to provide additional policy settings.

10.1.5 Resultant Set of Policy Snap-in


Resultant Set of Policy (RSoP) snap-in is an MMC used to predict the effect of GPOs on the
network as a whole, or to determine the effect Group Policy has had on a specific user or
computer. Although administrators can use the RSoP snap-in, much of its functionality has been
subsumed into GPMC.

10.2 Group Policy Administrative Tools Deployment Scenarios


The following is a list of common scenarios supported by each of the Group Policy Administrative
Tools.

10.2.1 Group Policy Management Console


An administrator uses GPMC to manage Group Policy in an Active Directory environment.
Although the majority of computers on the network might be running Windows 2000 Server or
Windows 2000 Professional, the administrator might download and install GPMC on a machine
running Windows XP SP1 or later. GPMC requires Windows XP SP1 or Windows Server 2003.
GPMC offers the administrator a persistent view of the Group Policy environment on the network,
including icons that represent GPOs, GPO links, sites, domains, and organizational units (OU) in
the selected forest. With GPMC, an administrator can do any of the administrative tasks
previously only available from the Group Policy tab of the Active Directory administrative tools.


GPMC can also be used to generate RSoP data that either predicts the cumulative effect of
GPOs on the network, or reports the cumulative effect of GPOs on a particular user or computer.
In addition, the administrator can use GPMC to perform GPO operations never possible before,
like backing up and restoring a GPO, copying a GPO, or even migrating a GPO to another forest.
Reading or generating HTML or XML reports of GPO settings is also possible.

10.2.2 Group Policy Object Editor


An administrator uses Group Policy Object Editor to manipulate settings in a GPO. Typically an
administrator accesses Group Policy Object Editor by electing to edit a GPO from within GPMC.
The Group Policy Object Editor opens, allowing the administrator to change settings for that
GPO. If the administrator had two GPOs linked to an OU and wanted to manage settings in both,
he would have to open them one at a time, because Group Policy Object Editor can display the
settings of only one GPO at a time. To see how settings from multiple GPOs might affect an
OU, the administrator would use a different tool: either GPMC or the RSoP snap-in.

10.2.3 Resultant Set of Policy snap-in


An administrator uses Resultant Set of Policy snap-in to predict the cumulative effect of GPOs on
the network, or report the cumulative effect of GPOs on a particular user or computer. With the
advent of GPMC, the administrator can generate reports with most of the information formerly
only available through the RSoP snap-in. The RSoP snap-in does provide some information that
GPMC does not. For example, when multiple GPOs attempt to set the same Group Policy setting
differently, both GPMC and RSoP snap-in report how the setting is ultimately set and which GPO
is responsible for the setting, but only RSoP snap-in can report all the GPO(s) that attempted and
failed to manipulate the setting.
What Is Group Policy Management Console?
The Group Policy Management Console (GPMC) is a new and comprehensive administrative tool
for Group Policy management.
Prior to GPMC, administrators used property pages in various Active Directory administrative
tools to manage Group Policy. For example, an administrator who wanted to implement policy for
users might open the Active Directory Users and Computers snap-in, find an appropriate
Organizational Unit (OU) and open its property page to access the Group Policy tab. On the
Group Policy tab, the administrator might do any of a dozen or so administrative tasks, like
creating Group Policy object links or manipulating their order to achieve the desired results.
Whatever the tasks, when the administrator leaves the Group Policy tab, access to a visual
representation of Group Policy ends and a view that focuses on Active Directory's user and
computer objects appears.
GPMC integrates the existing Group Policy functionality of the property pages on the Active
Directory administrative tools into a single, unified console dedicated to Group Policy
management tasks; GPMC also expands management capabilities with new features.
Administrators still use Active Directory administrative tools to manage Active Directory, but
GPMC replaces the Group Policy management functionality of those tools with its own.

10.2.4 Group Policy Management Console Core Scenarios


There are many core scenarios for GPMC. Administrators use GPMC to perform all Group Policy
management tasks, with the exception of configuring individual policy settings in Group Policy
Objects themselves, which is done with Group Policy Object Editor. The scenarios below
describe how an administrator uses GPMC to manage Group Policy.

10.2.5 Creating and Editing GPOs


Administrators use GPMC to create a GPO with no initial settings. An administrator can also
create a GPO and link it to an Active Directory container at the same time. To configure
individual settings within a GPO, an administrator edits the GPO from within GPMC and Group
Policy Object Editor appears with the GPO loaded. An administrator can use either GPMC or


Group Policy Object Editor to disable or enable computer, user, or both computer and user nodes
within a GPO.

Scoping GPOs
An administrator can use GPMC to link GPOs to sites, domains, or OUs in Active Directory.
Administrators must link GPOs to apply their settings to users and computers in Active Directory
containers. Linking GPOs is the primary mechanism by which administrators apply Group Policy
settings.
In addition to linking, an administrator can manipulate permissions on GPOs to manage how
Group Policy applies. Prior to GPMC, an administrator had to manually manipulate
access control entries (ACEs) to modify the scope of a GPO. For example, the administrator
might remove Read and Apply Group Policy from the Authenticated Users group for GPO1.
This effectively disables GPO1, since users in the Authenticated Users group require both Read
and Apply Group Policy permissions to process Group Policy. To apply the settings in GPO1 to
select network users or computers, the administrator would add a new security principal (typically
a security group containing the target users or computers) to the ACL on the GPO and set Read
and Apply Group Policy permissions. This is known as security filtering.
With GPMC, security filtering has been simplified. The administrator adds the security principal to
the GPO, and GPMC automatically sets the Read and Apply Group Policy permissions.
Administrators can also use GPMC to link WMI Filters to GPOs. WMI Filters allow an
administrator to dynamically determine the scope of GPOs based on attributes (available through
WMI) of the target computer. A WMI filter consists of one or more queries that are evaluated to be
either true or false against the WMI repository of the target computer. The WMI filter is a separate
object from the GPO in the directory.
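The scoping rules above can be modeled in a few lines. The sketch below is an illustrative Python model, not the GPMC API; the function and attribute names are invented for the example. It captures the two checks described: security filtering (the target needs both Read and Apply Group Policy) and an optional WMI filter, which applies the GPO only when all of its queries evaluate to true on the target computer.

```python
# Illustrative model of GPO scope evaluation (not the GPMC API).
# A GPO applies to a user or computer only if the security principal has BOTH
# Read and Apply Group Policy permissions, and every query in any linked WMI
# filter evaluates to true against the target computer's attributes.

def gpo_applies(principal_perms, wmi_filter=None, wmi_facts=None):
    """principal_perms: set of permission names granted to the target.
    wmi_filter: optional list of predicate functions over wmi_facts."""
    if not {"Read", "Apply Group Policy"} <= principal_perms:
        return False  # security filtering blocks the GPO
    if wmi_filter is not None:
        # The WMI filter is true only if all of its queries are true.
        return all(query(wmi_facts) for query in wmi_filter)
    return True

# Removing Apply Group Policy effectively disables the GPO for that principal,
# exactly as in the GPO1 example above.
assert gpo_applies({"Read"}) is False
assert gpo_applies({"Read", "Apply Group Policy"}) is True

# WMI filtering: target only laptops with free disk space (hypothetical facts).
laptop_filter = [lambda f: f["chassis"] == "laptop",
                 lambda f: f["free_disk_gb"] >= 1]
assert gpo_applies({"Read", "Apply Group Policy"}, laptop_filter,
                   {"chassis": "laptop", "free_disk_gb": 20}) is True
assert gpo_applies({"Read", "Apply Group Policy"}, laptop_filter,
                   {"chassis": "desktop", "free_disk_gb": 20}) is False
```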

10.2.6 Manipulating Inheritance


Group Policy can be applied to users and computers at the site, domain, or OU level. GPOs from
parent containers are inherited by default. When multiple GPOs apply to these users and
computers, the settings in the GPOs are aggregated. For most policy settings, the final value of a
given policy setting is set only by the highest precedent GPO that contains that setting. (However,
the final value for a few settings will actually be the combination of values across GPOs.)
Group Policy determines precedence of GPOs by the order of processing for the GPOs. GPOs
processed last have highest precedence. GPOs follow the SDOU rule for processing: site first,
then domain, then OU, including nested OUs. A nested OU is one that has
another OU as its parent. In the case of nested OUs, GPOs associated with parent OUs are
processed prior to GPOs associated with child OUs. In this processing order, sites are applied
first but have the least precedence. OUs are processed last and have the highest precedence.
When a container has multiple GPO links, administrators can use GPMC to manipulate the link
order for every container. GPMC assigns each link a Link Order number; the GPO link with Link
Order of 1 has highest precedence on that container.
Administrators can use GPMC to Block Inheritance. This is the ability to prevent an OU or domain
from inheriting GPOs from any of its parent containers. Note that Enforced GPO links (see below)
will always be inherited.
Administrators can use GPMC to set GPO links to Enforced (previously known as "No Override").
This is the ability to specify that a GPO should take precedence over any GPOs that are linked to
child containers. Enforcing a GPO link works by moving that GPO to the end of the processing
order.
An administrator can also use GPMC to enable or disable a GPO link. If an administrator enables
a GPO link, Group Policy processes the linked GPO. If the link is not enabled, Group Policy does
not process the linked GPO.
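The precedence rules above (SDOU order, link order, Enforced, Block Inheritance) can be sketched as a small ordering function. This is an illustrative Python model with invented names, not the Group Policy engine; it simplifies edge cases but reproduces the behavior described: later in the processing order means higher precedence, Link Order 1 is processed last within a container, enforced links survive blocking and win over child containers.

```python
# Illustrative model of GPO processing order (not the Group Policy engine).
# Containers are given outermost-first: site, domain, parent OU ... child OU.
# Each container lists its GPO links in Link Order (1..n); Link Order 1 has
# highest precedence on that container, so it must be processed LAST.

def processing_order(containers):
    """containers: list of dicts with 'links' (GPO names in Link Order),
    'enforced' (set of enforced GPO names), 'block_inheritance' (bool).
    Returns GPO names in processing order; later = higher precedence."""
    normal, enforced = [], []
    for c in containers:
        if c.get("block_inheritance"):
            normal = []  # drop inherited, non-enforced GPOs from parents
        # Reverse so Link Order 1 is processed last within the container.
        for gpo in reversed(c["links"]):
            (enforced if gpo in c.get("enforced", set()) else normal).append(gpo)
    # Enforced links always apply and take precedence, so they go to the end of
    # the processing order; reversing makes an enforced parent link beat an
    # enforced child link.
    return normal + list(reversed(enforced))

# Plain SDOU: the OU's Link Order 1 GPO (O1) ends up with highest precedence.
assert processing_order([
    {"links": ["S1"]}, {"links": ["D1"]}, {"links": ["O1", "O2"]},
]) == ["S1", "D1", "O2", "O1"]

# Block Inheritance drops D2, but the enforced D1 still applies and wins.
assert processing_order([
    {"links": ["S1"]},
    {"links": ["D1", "D2"], "enforced": {"D1"}},
    {"links": ["O1"], "block_inheritance": True},
]) == ["O1", "D1"]
```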
GPO Operations
GPO operations refer to the ability to back up (export), restore, import, and copy GPOs. Backing
up a GPO consists of making a copy of GPO data to the file system. Note that the Backup
function also serves as the export function for GPOs. Backed-up GPOs can be used in
conjunction with either Restore or Import operations.
Restoring a GPO takes an existing GPO backup and re-instantiates it back in the domain. The
purpose of a restore is to reset a specific GPO back to the identical state it was in when it was
backed up. This restoration does not include GPO links. This is because the links are a property
of the container the GPO is linked to, not the GPO itself. Since a restore operation is specific to a
particular GPO, it is based on the GUID and domain of the GPO. Therefore, a restore operation
cannot be used to transfer GPOs across domains.
Importing a GPO allows you to transfer settings from a backed up GPO to an existing GPO. You
can perform this operation within the same domain, across domains, or across forests. This
allows for many interesting capabilities such as staging of a test GPO environment in a lab before
importing into a production environment.
Restoring and Importing a GPO will remove any existing settings already in the target GPO. Only
the settings in the backup will be in the GPO when these operations are complete.
Copying a GPO is similar to an export/import operation, except that the GPO is not saved to a file
system location first. In addition, a copy operation creates a new GPO as part of the operation,
whereas an import uses an existing GPO as its destination.
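The four operations can be contrasted with a small model. The sketch below is illustrative Python with invented names, not the GPMC API; it only captures the identity semantics described above: restore preserves the GUID and domain, import overwrites an existing GPO's settings, and copy mints a new GPO.

```python
# Illustrative model of the four GPO operations (not the GPMC API).
import copy
import uuid

def backup(gpo):
    # Backup copies GPO data to the file system; it doubles as export.
    return {"guid": gpo["guid"], "domain": gpo["domain"],
            "settings": copy.deepcopy(gpo["settings"])}

def restore(bak):
    # Restore re-creates the exact GPO (same GUID and domain) from the backup,
    # which is why it cannot transfer a GPO to another domain.
    return {"guid": bak["guid"], "domain": bak["domain"],
            "settings": copy.deepcopy(bak["settings"])}

def import_settings(bak, target):
    # Import replaces the target GPO's existing settings; it works within a
    # domain, across domains, or across forests.
    target["settings"] = copy.deepcopy(bak["settings"])
    return target

def copy_gpo(gpo, dest_domain):
    # Copy creates a brand-new GPO (new GUID) without a file system stop.
    return {"guid": str(uuid.uuid4()), "domain": dest_domain,
            "settings": copy.deepcopy(gpo["settings"])}

src = {"guid": "g-1", "domain": "test.local", "settings": {"lockout": 5}}
bak = backup(src)
assert restore(bak) == src                            # identity preserved
prod = {"guid": "g-2", "domain": "corp.local", "settings": {"old": True}}
import_settings(bak, prod)
assert prod["settings"] == {"lockout": 5}             # old settings replaced
assert copy_gpo(src, "corp.local")["guid"] != "g-1"   # copy gets a new GPO
```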

10.2.7 Reporting of GPO Settings


GPMC can display a report of the defined settings in a given GPO. This report can be generated
by any user with read access to the GPO. Without GPMC, users who did not have write access to
a GPO could not read and view the settings in that GPO, because Group Policy Object
Editor requires the user to have both read and write permissions to the GPO to open it. Some
examples of users who might need to read and view but not edit a GPO include security audit
teams, helpdesk personnel troubleshooting a
Group Policy issue, and OU administrators who may need to read and view the settings from
inherited GPOs. With GPMC, these users need only read access to view the settings.
The HTML reports also make it easy for the administrator to view the settings contained in a
GPO at a glance. Administrators can expand and collapse individual sections within
the report by clicking the heading for each section.
GPMC also solves some common reporting requirements including the ability to document all the
settings in a GPO to a file for printing or viewing. Using a context menu, users can either print the
reports, or save them to a file in either HTML or XML format.

Search for GPO


GPMC provides extensive capabilities to search for GPOs within a domain or across all domains
in a forest. This search feature allows an administrator to search for GPOs based on the following
criteria:
Display name of the GPO.
Whether or not a specific domain has containers that link to the GPO.
The permissions set on the GPO.
The WMI filter that is linked to the GPO.
The type of policy settings that have been set in the User Configuration or Computer
Configuration in the GPO, such as folder redirection or security settings. Note that you cannot
search based on the individual settings configured in a GPO.
GUID of the GPO.
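The criteria above map naturally onto simple predicates. The following is an illustrative Python sketch with invented record attributes, not the GPMC search API; it mirrors the rule that you can search on GPO-level properties but not on individual configured settings.

```python
# Illustrative model of GPO search criteria (not the GPMC search API).
def search_gpos(gpos, name_contains=None, guid=None, linked_in_domain=None,
                trustee_with_permission=None):
    """Each criterion narrows the result; omitted criteria match everything."""
    hits = []
    for g in gpos:
        if name_contains and name_contains.lower() not in g["name"].lower():
            continue
        if guid and g["guid"] != guid:
            continue
        if linked_in_domain and linked_in_domain not in g["linked_domains"]:
            continue
        if trustee_with_permission and trustee_with_permission not in g["permissions"]:
            continue
        hits.append(g["name"])
    return hits

gpos = [
    {"name": "Default Domain Policy", "guid": "g-1",
     "linked_domains": {"corp.local"}, "permissions": {"Authenticated Users"}},
    {"name": "Laptop Lockdown", "guid": "g-2",
     "linked_domains": set(), "permissions": {"Laptops"}},
]
assert search_gpos(gpos, name_contains="laptop") == ["Laptop Lockdown"]
assert search_gpos(gpos, linked_in_domain="corp.local") == ["Default Domain Policy"]
```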

10.3 Group Policy Modeling


Windows Server 2003 has a powerful new Group Policy management feature that allows the user
to simulate a policy deployment that would be applied to users and computers before actually
applying the policies. This feature, known in Windows Server 2003 as Resultant Set of Policy
(RSoP) Planning Mode, is integrated into GPMC as Group Policy Modeling. This feature requires
a domain controller that is running Windows Server 2003 in the forest, because the simulation is
performed by a service that is only present on Windows Server 2003 domain controllers.
However, with this feature, you can simulate the resultant set of policy for any computer in the
forest, including those running Windows 2000.

10.4 Group Policy Results


This feature allows administrators to determine the resultant set of policy that was applied to a
given computer and (optionally) the user that logged on to that computer. The data that is
presented is similar to Group Policy Modeling data, however, unlike Group Policy Modeling, this
data is not a simulation. It is the actual resultant set of policy data obtained from the target
computer. Unlike Group Policy Modeling, the data from Group Policy Results is obtained from the
client, and is not simulated on the DC. The client must be running Windows XP, Windows
Server 2003 or later. It is not possible to get Group Policy Results data for a Windows 2000
computer. (However, with Group Policy Modeling, you can simulate the RSoP data).

GPMC Scripting
The GPMC user interface is based on a set of COM interfaces that accomplish all of the
operations performed by GPMC. These interfaces are available to Windows scripting
technologies like JScript and VBScript, as well as programming languages such as Visual Basic
and VC++. An administrator can use these interfaces to automate many Group Policy
management tasks.
These interfaces are discussed in detail in the GPMC software development kit (SDK) located in
the %programfiles%\gpmc\scripts\gpmc.chm Help file on systems where GPMC has been
installed. The contents of the GPMC SDK are also available in the Platform SDK.

10.5 Group Policy Management Console Dependencies


Group Policy Management Console requires the MMC infrastructure to function. In addition,
GPMC has its own set of system installation requirements, as well as requirements for certain
GPMC features to function.

10.5.1 GPMC System Installation Requirements


Although GPMC can manage both Windows 2000 and Windows Server 2003 domains with Active
Directory, the tool itself must be installed on a computer running Windows Server 2003 or
Windows XP Professional (with Windows XP Service Pack 1 (or later) and the Microsoft .NET
Framework). Note that when installing GPMC on Windows XP Professional with SP1, a post-SP1
hotfix is required. This hotfix (Q326469) is included with GPMC. GPMC Setup prompts you to
install Windows XP QFE Q326469 if it is not already present.

10.5.2 GPMC Feature Requirements


GPMC exposes features that are available in the underlying operating system. Because new
features have been added to Group Policy since Windows 2000, certain features will only be
available in GPMC depending on the operating system that has been deployed on the domain
controllers and clients. This section describes these dependencies. In general, there are four key
issues that determine whether a feature is available in GPMC:
Windows Server 2003 Active Directory schema must be available to delegate Group
Policy Modeling or Group Policy Results.
Windows Server 2003 domain controller must be available to run Group Policy Modeling.
Windows Server 2003 domain configuration (ADPrep /DomainPrep) must be available to
use WMI Filters.
Clients must be running Windows XP or Windows Server 2003 in order to generate
Group Policy Results data.
GPMC dependencies upon Windows and Active Directory platform appear below:

GPMC Dependencies on Windows and Active Directory


Dependency: Windows Server 2003 Active Directory schema
Feature: Delegation of Group Policy Modeling and Group Policy Results
Reason: The Generate Resultant Set of Policy (Logging) and Generate Resultant Set of Policy
(Planning) permissions needed for this operation are only available with the Windows Server 2003
Active Directory schema.

Dependency: Windows Server 2003 domain controller in the forest
Feature: Group Policy Modeling
Reason: The simulation is performed by the Resultant Set of Policy service, which is only
available on domain controllers running Windows Server 2003.

Dependency: Windows Server 2003 domain configuration (DomainPrep)
Feature: WMI Filters
Reason: ADPREP /DomainPrep configures the domain for Windows Server 2003 Active Directory,
including configuration for WMI Filters.

Dependency: Clients running Windows XP or Windows Server 2003
Feature: Group Policy Results
Reason: Clients must be instrumented to log Group Policy Results data when policy is processed.
This capability is only available on the listed systems.

There is no dependency from the Group Policy perspective on whether a domain is in native
mode or mixed mode.
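The four dependencies can be summarized as a small availability check. This is an illustrative Python sketch; the environment keys and feature labels are invented for the example, not part of any GPMC interface.

```python
# Illustrative model of the four GPMC feature dependencies listed above.
def available_features(env):
    """env: dict with 'schema_2003', 'dc_2003_in_forest', 'domainprep_done'
    (booleans) and 'client_os' (e.g. 'xp', '2003', '2000')."""
    feats = set()
    if env["schema_2003"]:
        feats.add("delegate modeling/results")   # needs 2003 AD schema
    if env["dc_2003_in_forest"]:
        feats.add("group policy modeling")       # needs a 2003 DC in the forest
    if env["domainprep_done"]:
        feats.add("wmi filters")                 # needs ADPrep /DomainPrep
    if env["client_os"] in {"xp", "2003"}:
        feats.add("group policy results")        # needs an XP/2003 client
    return feats

# A Windows 2000 client can still be simulated with Group Policy Modeling,
# but cannot produce Group Policy Results data, matching the text above.
env = {"schema_2003": True, "dc_2003_in_forest": True,
       "domainprep_done": True, "client_os": "2000"}
assert "group policy modeling" in available_features(env)
assert "group policy results" not in available_features(env)
```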

10.6 How Group Policy Management Console Works


Administrators use Group Policy Management Console (GPMC) to manage Group Policy. An
administrator must have an Active Directory environment to use GPMC to manage Group Policy.
In an ideal environment, the clients are running Windows XP and the servers are running
Windows Server 2003. However, GPMC can also be used to manage Windows 2000 domains.

10.7 Group Policy Management Console Architecture


GPMC is one of three administrative tools used to manage Group Policy. The following diagram
shows all three of those tools, as well as the domain controller and a client. In addition, the
diagram describes the different communication protocols being used by each tool (LDAP, SMB,
RPC/COM, DNS), the interactions between the tool, the domain controller and the client, and
whether those interactions are READ (lines with a single arrow head) or READ/WRITE (lines with
double arrow heads).

Group Policy Management Console Architectural Diagram


Components of Group Policy Management Console Architectural Diagram


Group Policy Management Console (GPMC): GPMC makes Group Policy much easier to manage
by providing a view of GPOs, sites, domains, and OUs across an enterprise. GPMC can be used
to manage either Windows Server 2003 or Windows 2000 domains. GPMC simplifies the
management of Group Policy by providing a single place for managing core aspects of Group
Policy, such as scoping, delegating, filtering, and manipulating inheritance of GPOs. You can also
back up GPOs to the file system and restore GPOs from backups. GPMC includes features that
enable an administrator to predict how GPOs will affect the network, as well as to determine how
GPOs have actually changed settings on any particular computer or user. GPMC is capable of
read and write access to the Sysvol using the SMB protocol, and of read and write access to
Active Directory via the LDAP protocol. In addition, GPMC is capable of read access to the event
log and the RSoP infrastructure.

Server (Domain Controller): In an Active Directory forest, the domain controller is a server that
contains a writable copy of the Active Directory database, participates in Active Directory
replication, and controls access to network resources. GPOs are stored in two parts of domain
controllers: the Active Directory database and the Sysvol. In addition to storing the GPOs used in
normal policy processing, GPMC and the RSoP snap-in use the domain controller and the RSoP
infrastructure to simulate the application of policy across the network.

Active Directory: Active Directory, the Windows-based directory service, stores information about
objects on a network and makes this information available to users and network administrators.
Administrators link GPOs to Active Directory containers such as sites, domains, and OUs that
include user and computer objects. In this way, policy settings can be targeted to users and
computers throughout the organization. Prior to GPMC, administrators used property pages in
various Active Directory administrative tools to manage Group Policy.

Sysvol: The Sysvol is a shared directory that stores the server copy of the domain's public files,
which are replicated among all domain controllers in the domain. The Sysvol contains the largest
part of a GPO: the Group Policy template (GPT), which includes Administrative Template-based
policy settings, security settings, script files, and information regarding applications that are
available for software installation. File Replication Service (FRS) replicates this information
throughout the network.

DNS/WINS: Domain Name System (DNS) and Windows Internet Name Service (WINS) are
network name services, used to resolve host names to IP addresses. GPMC can use either DNS
or WINS to locate domain controllers.

LDAP protocol: Lightweight Directory Access Protocol (LDAP) is the protocol used by the Active
Directory directory service. GPMC uses LDAP to access the directory store on the domain
controller. The client computer on which Group Policy is processed also uses LDAP to read the
directory store on the domain controller.

SMB protocol: The Server Message Block (SMB) protocol is the primary protocol used for file and
print sharing. SMB can also be used for named pipes and mail slots. GPMC uses SMB to access
the Sysvol, as well as to back up and retrieve files on a remote file system. The client also uses
SMB to read the Sysvol on the domain controller.

DNS protocol/WINS protocol: The DNS and WINS protocols are used for locating computers on a
network by host name instead of IP address. GPMC can use either to locate domain controllers.

RPC/COM: RPC (Remote Procedure Call), DCOM (Distributed Component Object Model), and
COM (Component Object Model) enable data exchange between different processes. The
processes can be on the same machine, on the local area network, or across the Internet. The
various binary files that make up the GPMC components primarily use COM calls to
communicate. The GPMC COM interfaces are also all scriptable, so an administrator can
automate Group Policy management. DCOM and RPC are used by GPMC for communication
with the RSoP provider on the client or domain controller for Group Policy Results and Group
Policy Modeling reports.

WMI: WMI is a management infrastructure that supports monitoring and controlling of system
resources through a common set of interfaces and provides a logically organized, consistent
model of Windows operation, configuration, and status. WMI makes data about a target computer
available for administrative use. Such data can include hardware and software inventory,
settings, and configuration information. For example, WMI exposes hardware configuration data
such as CPU, memory, disk space, and manufacturer, as well as software configuration data from
the registry, drivers, file system, Active Directory, the Windows Installer service, networking
configuration, and application data. WMI Filtering in Windows Server 2003 allows you to create
queries based on this data. These queries (also called WMI filters) determine which users and
computers receive the policy configured in the GPO. Group Policy processing information is
collected and stored in a Common Information Model Object Management (CIMOM) database on
the local computer. This information, such as the list of GPOs, as well as the content and logging
of processing details for each GPO, can then be accessed by tools using WMI.

RSoP infrastructure: With Group Policy Results in GPMC, or logging mode for the RSoP snap-in,
the RSoP service is used to query the CIMOM database on the target computer; it receives
information about the policies and displays it in GPMC or the RSoP snap-in. With Group Policy
Modeling in GPMC, or planning mode for the RSoP snap-in, the RSoP service simulates the
application of policy using the Group Policy Directory Access Service (GPDAS) on a domain
controller. GPDAS simulates the application of GPOs and passes them to virtual client-side
extensions on the domain controller. The results of this simulation are stored in a local CIMOM
database on the domain controller before the information is passed back and displayed in either
GPMC or the RSoP snap-in.

Group Policy Engine: The Group Policy Engine is a framework that handles common functionality
across client-side extensions. GPMC does not communicate directly with the Group Policy
Engine.

Client-Side Extensions: Client-side extensions (CSEs) consist of one or more dynamic-link
libraries (DLLs) that are responsible for implementing Group Policy at the client computer. CSEs
typically correspond to snap-ins: Administrative Templates, Scripts, Security Settings, Software
Installation, Folder Redirection, Remote Installation Services, and Internet Explorer Maintenance.
GPMC does not communicate directly with the client-side extensions.

File System: GPMC can write to the file system of the local machine or any remote machine.
GPMC writes to the file system for GPO operations, such as backup or copy, and for saving
HTML/XML reports.

Event Log: The Event Log is a service that records events in the various logs on the computer.
GPMC is capable of read and write access to the Event Log on client computers and domain
controllers.

Local Group Policy object: The local Group Policy object (local GPO) is stored on each individual
computer, in the hidden %systemroot%\System32\GroupPolicy directory. Each computer running
Windows 2000, Windows XP Professional, Windows XP 64-Bit Edition, or Windows Server 2003
has exactly one local GPO, regardless of whether the computer is part of an Active Directory
environment. GPMC does not offer access to the local GPO.

10.8 Group Policy Management Console Interfaces


GPMC includes a set of programmable interfaces designed for use by administrators writing
scripts, as well as programmers using languages such as Visual Basic or C++.
The following is a comprehensive list of all the Group Policy Management Console (GPMC)
interfaces.
Comprehensive List of GPMC Interfaces
IGPM: Accesses GPMC interfaces to retrieve GPMC information and to create objects. Manages
and queries Group Policy objects (GPOs) and GPO links. Queries scope of management (SOM)
objects.

IGPMAsyncCancel: Cancels an asynchronous GPMC operation, such as a backup, restore,
import, copy, or report generation. The GPMC function called by the client will asynchronously
return a pointer to the IGPMAsyncCancel interface. IGPMAsyncCancel is not available through
scripting.

IGPMAsyncProgress: Enables client notification about the progress of an operation. Implemented
by the client and passed as a parameter to GPMC methods that can run asynchronously.
IGPMAsyncProgress is not available through scripting.

IGPMBackup: Retrieves properties of GPMBackup objects. Deletes GPMBackup objects.

IGPMBackupCollection: Accesses a collection of GPO backups.

IGPMBackupDir: Queries GPO backups and GPO backup collections.

IGPMClientSideExtension: Queries client-side extension properties such as ID and display name.

IGPMConstants: Retrieves the values of GPMC constants.

IGPMCSECollection: Accesses a collection of client-side extension objects.

IGPMDomain: Queries SOM objects. Creates, restores, and queries GPOs. Creates and queries
WMI filters. WMI filter queries use WMI Query Language (WQL).

IGPMGPO: Manages an individual GPO. Deletes, copies, imports, or backs up a GPO. Enables or
disables user and computer configuration options. Sets security information. Retrieves multiple
GPO properties for a given GPO and generates reports of the settings in the GPO.

IGPMGPOCollection: Accesses a collection of GPOs.

IGPMGPOLink: Manages a GPO link from a SOM. Sets and retrieves properties of GPO links,
such as the link order or whether a link is enabled or enforced.

IGPMGPOLinksCollection: Accesses a collection of GPO links.

IGPMPermission: Retrieves permission properties.

IGPMResult: Retrieves status message information while performing GPMC operations on GPOs,
such as restore, import, backup, and copy.

IGPMRSOP: Queries Resultant Set of Policy (RSoP) in logging or planning mode. Generates
RSoP reports.

IGPMSearchCriteria: Defines the criteria for search operations.

IGPMSecurityInfo: Defines the set of permissions that exist on a SOM, GPO, or WMI filter.
Removes or adds permissions.

IGPMSitesContainer: Queries SOM objects for particular sites in a forest.

IGPMSOM: Creates and retrieves GPO links for a SOM. Sets and retrieves security attributes and
properties for a SOM.

IGPMSOMCollection: Accesses a collection of SOMs.

IGPMStatusMessage: Retrieves properties of status messages of GPO operations.

IGPMStatusMsgCollection: Accesses a collection of status messages.

IGPMTrustee: Retrieves information about a trustee that is a user, group, or computer in the
domain.

IGPMWMIFilter: Sets or retrieves security attributes and properties of a WMI filter.

IGPMWMIFilterCollection: Accesses a collection of WMI filters.

IGPMMapEntry: Retrieves map entries.

IGPMMapEntryCollection: Accesses a collection of map entries.

IGPMMigrationTable: Accesses a migration table.

10.8.1 Group Policy Management Console Processes and Interactions


This section defines how GPO operations are handled by GPMC, as well as important differences
between how GPMC and Group Policy Object Editor handle Administrative template files.

10.8.2 GPO Operations in Group Policy Management Console


With Group Policy Management Console (GPMC) administrators can back up, restore, import, or
copy Group Policy objects (GPOs). When copying or importing GPOs across domains, an
administrator can use migration tables to facilitate the copy or import operation.
Backups
Backing up a Group Policy object (GPO) copies the data in the GPO to the file system. The
backup function also serves as the export capability for GPOs. A GPO backup can be used to
restore the GPO to the backed-up state, or to import the settings in the backup to another GPO.
Because each backup is identified by a unique backup ID, multiple backups of the same or
different GPO can be stored in the same file system location. The collection of backups in a given
file system location can be managed using the GPMC or through the scriptable interfaces. When
you open Manage Backups from the Group Policy Objects node in GPMC, the view is
automatically filtered to show only backups of GPOs from that domain. When opened from the
Domains node, all backups are displayed, regardless of which domain they are from.
Information saved in a backup
Backing up a GPO saves all information that is stored inside the GPO to the file system. This
includes the following information:
GPO globally unique identifier (GUID) and domain.
GPO settings.
Discretionary access control list (DACL) on the GPO.
WMI filter link, if there is one, but not the filter itself.
Links to IP Security Policies, if any.
XML report of the GPO settings, which can be viewed as HTML from within GPMC.
Date and time stamp of when the backup was taken.
User-supplied description of the backup.
Information not saved in a backup
Backing up a GPO only saves data that is stored inside the GPO. Data that is stored outside the
GPO is not available when the backup is restored to the original GPO or imported into a new one.
This data that becomes unavailable includes the following information:
Links to a site, domain, or organizational unit.
WMI filter.
IP Security policy.
Restore
Restoring a Group Policy object (GPO) re-creates the GPO from the data in the backup. A restore
operation can be used in both of the following cases: the GPO was backed up but has since been
deleted, or the GPO is live and you want to roll back to a known previous state. The end effect is
the same in either case, except as noted below in some cases for GPOs that contain software
installation settings.


GPMC identifies the GPO by its domain and globally unique identifier (GUID). The purpose of a
restore operation is to return the GPO to its original state, so the restore operation retains the
original GUID even if it is recreating a deleted GPO. This is a key difference between the restore
operation and the import or copy operations. You cannot use restore to transfer GPOs to different
domains or forests. That capability is provided by import and copy.

Information replaced in a restore


A restore operation replaces the following components of a GPO:
GPO settings.
The Discretionary Access Control List (DACL) on the GPO.
WMI filter links (but not the filters themselves).

Information not replaced in a restore


The restore operation does not restore objects that are not part of the GPO. This includes the
following:
Links to a site, domain, or organizational unit. Links are an attribute of the site, domain, or
organizational unit, not the GPO. Any existing links in these containers will continue to be
used, for example, when restoring an existing GPO to a previous state. However, if you have
deleted a GPO and all links to the GPO, you must add these links back after restoring the
GPO.
WMI filters. A GPO only contains a link to the WMI filter. If the WMI filter does not exist at
the time of restore, the link will be dropped, otherwise the link will be restored.
IPSec Policies. A GPO only contains a link to the IPSec policy. If the IPSec policy does
not exist at the time of restore, the link will be dropped, otherwise the link will be restored.

Permissions required to restore a GPO from backup


The permissions necessary to perform a restore of a GPO vary, depending on whether you are
restoring an existing GPO or if you are restoring a GPO that has been deleted since it was
backed up. Version numbers for the GPO are handled differently as well. The following table
summarizes the situation for existing and deleted GPOs:

Permissions Required to Restore GPOs from Backup


Existing GPO:
Permission needed to restore from backup: The user must have Edit settings, delete, and modify
permissions on the GPO, as well as read access to the file system location where the backup is
stored. This does NOT require GPO creation rights.
GPO version number: Incremented by 1, which will trigger client refreshes of settings from the
GPO.

Deleted GPO:
Permission needed to restore from backup: The user must have the right to create GPOs in the
domain, as well as read access to the file system location where the backup is stored.
GPO version number: Retained unchanged from the backed-up GPO.

Restoring GPOs with software installation settings


When restoring a deleted GPO that contains Software Installation settings, some side effects are
possible depending on the circumstances under which the GPO is restored.
When restoring a GPO that contains software installation settings, it is possible that:
Cross-GPO upgrade relationships that upgrade applications in the GPO being restored, if
any, are not preserved after restore. A cross-GPO upgrade is one in which the administrator
has specified that an application should upgrade another application, and the two
applications are not in the same GPO. Note that upgrade relationships are preserved when
applications in the GPO being restored upgrade applications in other GPOs.

If the client computer has not yet detected that the GPO has been deleted (either
because the user has not logged on again or has not rebooted since the deletion of the GPO), and
the application was deployed with the option Uninstall this application when it falls out
of scope of management, then the next time the user logs on:
Published applications that the user has previously installed will be removed.
Assigned applications will be uninstalled before re-installation.
This issue can be avoided if all of the following conditions are met:
You perform the restore on a Windows Server 2003 domain controller instead of
a Windows 2000 domain controller.
The user performing the restore has permissions to re-animate tombstone
objects in the domain.
The time elapsed between deletion and restoration of the GPO does not exceed
the tombstone lifetime specified in Active Directory.
Tombstone re-animation is a new feature of Windows Server 2003 Active Directory. By default,
only Domain Admins and Enterprise Admins have this permission but you can delegate this right
to additional users at the domain level using the ACL editor.
As a general rule, if you deploy software using Group Policy, it is recommended that you perform
the restoration of GPOs that contain application deployments using a domain controller running
Windows Server 2003 and that you grant the tombstone re-animation right to the users who will
be performing restoration of those GPOs.
Finally, when restoring a GPO that contains software installation settings, if you are using
categories to classify applications, the application in the restored GPO will appear in its original
category only if the category exists at the time of restoration. Note that the category definition is
not part of the GPO.
Import
The Import operation transfers settings into an existing GPO in Active Directory using a backed-
up GPO in the file system location as its source. Import operations can be used to transfer
settings from one GPO to another GPO within the same domain, to a GPO in another domain in
the same forest, or to a GPO in a domain in a different forest. The import operation always places
the backed-up settings into an existing GPO. It erases any pre-existing settings in the destination
GPO. Import does not require trust between the source domain and destination domain.
Therefore it is useful for transferring settings across forests and domains that don't have trust.
Importing settings into a GPO does not affect its discretionary access control list (DACL), links on
sites, domains, or organizational units to that GPO, or a link to a WMI filter.
When using import to transfer GPO settings to a GPO in a different domain or different forest, you
might want to use a migration table in conjunction with the import operation. A migration table
allows you to facilitate the transfer of references to security groups, users, computers, and UNC
paths in the source GPO to new values in the destination GPO.
Copy
A copy operation allows you to transfer settings from an existing Group Policy object (GPO) in
Active Directory directly into a new GPO. The new GPO created during the copy operation is
given a new globally unique identifier (GUID) and is unlinked. You can use a copy operation to
transfer settings to a new GPO in the same domain, another domain in the same forest, or a
domain in another forest. Because a copy operation uses an existing GPO in Active Directory as
its source, trust is required between the source and destination domains. Copy operations are
suited for moving Group Policy between production environments, and for migrating Group Policy
that has been tested in a test domain or forest to a production environment, as long as there is
trust between the source and destination domains.
Copying is similar to backing up followed by importing, but there is no intermediate file system
step, and a new GPO is created as part of the copy operation. The import operation, in contrast
with the copy operation, does not require trust.

10.8.3 Specifying the discretionary access control list (DACL) on the new GPO

You have two options for specifying the DACL to use on the new GPO:
Use the default permissions that are used when creating new GPOs.
Preserve the DACL from the source GPO. For this option, you can specify a migration
table, used to facilitate the transfer of references to security groups, users, computers, and
UNC paths in the source GPO to new values in the destination GPO. If you specify a
migration table for the copy operation, and you choose the option to preserve the DACL from
the source GPO, the migration table will apply to any security principals in the DACL.

10.8.4 Copying within a domain compared with copying to another domain


Copying a GPO to another domain is slightly different from copying it to the same domain.
When copying a GPO within the same domain, you have a simple choice of two options, just
described, for choosing the DACL. However, for copy operations to another domain, GPMC
presents a wizard to facilitate the operation. The wizard guides you through the following choices:
Choice of DACL for the new GPO, as described earlier.
Specification of migration table, if applicable. A migration table allows you to facilitate the
transfer of references to security groups, users, computers, and UNC paths in the source
GPO to new values in the destination GPO.
When copying a GPO within the same domain, any link to a WMI filter is preserved. However,
when copying a GPO to a new domain, the link is dropped because WMI filters can only be linked
to GPOs within the same domain.

10.8.5 Migration Tables


When you copy or import a Group Policy object (GPO) from one domain to another, you can use
a migration table to tell Group Policy Management Console (GPMC) how domain-specific data
should be treated.
The key challenge when migrating GPOs from one domain to another is that some information in
the GPO is actually specific to the domain where the GPO is defined. When transferring the GPO
to a new domain, it may not always be desirable, or even possible, to use the exact same
settings. Items that can be specific to a domain include references to security principals such as
users, groups, and computers as well as UNC paths. The solution is to modify these references in
the GPO that are domain-specific, during the import or copy operation, so that the settings in the
destination GPO are written with the appropriate information for the destination domain. GPMC
supports this capability using migration tables.
A migration table is a file that maps references to users, groups, computers, and UNC paths in
the source GPO to new values in the destination GPO. A migration table consists of one or more
mapping entries. Each mapping entry consists of a type, source reference, and destination
reference. If you specify a migration table when performing an import or copy, each reference to
the source entry will be replaced with the destination entry when writing the settings into the
destination GPO.
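The mapping-entry structure (type, source reference, destination reference) can be read with any XML tooling. The following is a rough sketch, not a GPMC interface, of pulling mapping entries out of a migration table with the Python standard library; it only handles explicit <Destination> values, and the element names follow the .migtable format shown under Default Entries later in this section.

```python
# Hypothetical helper for reading .migtable mapping entries; not part of GPMC.
import xml.etree.ElementTree as ET

# .migtable files use this XML namespace for all elements.
NS = "{http://www.microsoft.com/GroupPolicy/GPOOperations/MigrationTable}"

def read_mappings(migtable_xml):
    """Return (type, source, destination) tuples from a migration table."""
    root = ET.fromstring(migtable_xml)
    entries = []
    for m in root.iter(NS + "Mapping"):
        entries.append((m.findtext(NS + "Type"),
                        m.findtext(NS + "Source"),
                        m.findtext(NS + "Destination")))
    return entries
```

A real reader would also have to recognize the alternative destination tags (<DestinationSameAsSource />, <DestinationNone />, <DestinationByRelativeName />) described later in this section.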
The migration table will apply to any references in the settings within a GPO, whether you are
performing an import or copy operation. In addition, during a copy operation, if you choose the
option to preserve the discretionary access control list (DACL) on the GPO, the migration table
will also apply to both the DACL on the GPO and the DACLs on any software installation settings
in the GPO.
Migration tables store the mapping information as XML, and have their own file name extension,
.migtable. You can create migration tables using the Migration Table Editor (MTE). The MTE is a
convenient tool for viewing and editing migration tables without having to work in, or be familiar
with, XML. The MTE is associated with the .migtable extension so that when you double-click a
migration table, it opens in the MTE. The MTE is installed with GPMC. You can also create and
edit migration tables using any XML editor or using the GPMC scripting interfaces.

10.8.6 Settings impacted by migration tables


There are two types of settings that may not transfer well across domains:


Users, groups, and computers (security principals) that are referenced in the following
places: the settings of the GPO, in the ACL for the GPO itself, and the ACL for any software
installation settings in the GPO. Security principals do not transfer well for several reasons.
For instance, domain local groups are invalid in external domains but other groups are valid if
there is trust in place. Even with a trust between domains, it may not always be appropriate
to use the exact same group in the new domain. If there is no trust, then none of the security
principals in the source domain will be available in the destination domain.
UNC paths. When you are migrating GPOs across domains that do not have trust, such
as from test to production environments, users in the destination domain may not have
access to paths in the source domain.
The following items can contain security principals and can be modified using a migration table:
Security policy settings of the following types:
User rights assignment
Restricted groups
Services
File system
Registry
Advanced folder redirection policies.
The GPO discretionary access control list (DACL), if you choose to preserve it during a
copy operation.
The DACL on software installation objects. This is only preserved if the option to copy the
GPO DACL is specified. Otherwise the default DACL is used.
Note
Security Principals referenced in Administrative Templates settings cannot be
migrated using a migration table.
The following items can contain UNC paths and can be modified using a migration table:
Folder redirection policies.
Software installation policies (for software distribution points).
Pointers to scripts deployed through Group Policy (such as startup and shutdown scripts)
that are stored outside the GPO. Note that the script itself is not copied as part of the GPO
copy operation, unless the script is stored inside the source GPO.

10.8.7 Options for specifying migration tables


Migration tables are specified when performing import and copy operations. There are three
options for using migration tables with import and copy:
Do not use a migration table. This option copies the GPO exactly as it is. All references to
security principals and UNC paths are copied identically.
Use a migration table. This option maps any references in the GPO that are in the
migration table. References that are not contained in the migration table are copied as is.
Use a migration table exclusively. This option requires that all references to security
principals and UNC paths that are referenced in the source GPO be specified in the migration
table. If a security principal or UNC path is referenced in the source GPO and is not included
in the migration table, the import or copy operation fails.
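The three options above can be sketched as a single lookup rule. This is a hypothetical helper (not a GPMC API) where "table" maps source references to destination references and "mode" is one of "none", "table", or "exclusive":

```python
# Sketch of the three migration-table modes; illustrative only.
def resolve_reference(ref, table, mode):
    """Return the value written to the destination GPO for one reference."""
    if mode == "none":
        # No migration table: every reference is copied identically.
        return ref
    if ref in table:
        # Mapped entry: replace the source reference with its destination.
        return table[ref]
    if mode == "exclusive":
        # "Use a migration table exclusively": an unmapped reference
        # makes the import or copy operation fail.
        raise ValueError("Unmapped reference: " + ref)
    # Plain "use a migration table": unmapped references are copied as is.
    return ref

table = {r"TESTDOM\GPO Admins": r"PRODDOM\GPO Admins"}
print(resolve_reference(r"TESTDOM\GPO Admins", table, "table"))  # PRODDOM\GPO Admins
print(resolve_reference(r"\\srv1\apps", table, "table"))         # unmapped, copied as is
```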
In addition, cross-domain copy operations will apply the migration table to the DACL on the GPO
(and any software installation settings) if you choose the option Preserve or migrate the
existing permissions.
When performing a copy or import, the wizard scans the source GPO to determine if there are
any references to security principals or UNC paths in the GPO. If there are, you have the
opportunity to specify a migration table. During cross-domain copy operations, if you specify the
option Preserve or migrate the permissions on the GPO, the wizard will always present the
opportunity to specify a migration table, because a DACL by definition contains security
principals.


10.8.8 Contents of Migration tables


A migration table consists of one or more mapping entries. Each mapping entry has three pieces
of information: type, source and destination. The following sections describe these categories in
more detail.
Source Types Source types describe the type of domain-specific information for the source
GPO. Migration tables support the following types of source types:
XML Elements of Source Types
Source Type                 How This Type Appears in the XML Editor
User                        <Type>User</Type>
Computer                    <Type>Computer</Type>
Domain Local Group          <Type>LocalGroup</Type>
Domain Global Group         <Type>GlobalGroup</Type>
Universal Group             <Type>UniversalGroup</Type>
UNC Path                    <Type>UNCPath</Type>
Free Text or SID            <Type>Unknown</Type>
    (This category is only for use with security principals that are
    specified as free text and raw security identifiers (SIDs).)

Note
Built-in groups such as "Administrators" and "Account Operators" have the same SID in
all domains. If references to built-in groups are stored in the GPO using their resolved format
(based on the underlying SID), they cannot be mapped using migration tables. However, if
references to built-in groups are stored as free text, you can map them using a migration
table, and in this case, you must specify source type="Free Text or SID."
Source Reference A source reference is the specific name of the User, Computer, Group or
UNC Path referenced in the source GPO. For example, \\server1\publicshare is a specific
example of a UNC path. The type of the source reference must match whatever source type has
been specified in the migration table.
Source Reference Syntax
Source Reference    Syntax
UPN                 someone@example.com
SAM                 example\someone
DNS                 example.com\someone
Free text           someone (must be specified as the unknown type)
SID                 S-1-11-111111111-111111111-1111111111-1112 (must be
                    specified as the unknown type)

Destination Name The destination name specifies how the name of the User, Computer, Group
or UNC Path in the source GPO should be treated upon transfer to the destination GPO.
Destination Name Options
Same as source
    Copy without changes. Equivalent to not putting the source value in the
    migration table at all.
    MTE value: <Same As Source>
    XML tag: <DestinationSameAsSource />

None
    Removes the user, computer, or group from the GPO. This option cannot be
    used with UNC paths.
    MTE value: <None>
    XML tag: <DestinationNone />

Map by relative name
    For example, map SourceDomain\Group1 to TargetDomain\Group1. This option
    cannot be used with UNC paths.
    MTE value: <Map by Relative name>
    XML tag: <DestinationByRelativeName />

Explicitly specify value
    In the destination GPO, replace the source value with the exact literal
    value you specify.
    MTE value: <exact name to use in target>
    XML tag: <Destination>Exact Name to use in Destination</Destination>

Note
Administrators can specify security principals for destination names using any of the
syntactical formats described in source references, with the exception of using a raw SID. A
raw SID can never be used for the destination name.
Default Entries Valid migration table (.migtable) files must contain the following entries, which
identify the namespace used by migration tables. If you are creating migration tables by hand,
you need to add these lines.
Otherwise MTE does this for you.
<?xml version="1.0" encoding="utf-16"?>
<MigrationTable xmlns="http://www.microsoft.com/GroupPolicy/GPOOperations/MigrationTable">
Each mapping entry appears in the following format:
<Mapping>
<Type>Type</Type>
<Source>Source</Source>
<Destination>Destination</Destination>
</Mapping>
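Putting the pieces together, a minimal complete migration table might look like the following. The domain names, group name, and server names here are hypothetical placeholders:

```xml
<?xml version="1.0" encoding="utf-16"?>
<MigrationTable xmlns="http://www.microsoft.com/GroupPolicy/GPOOperations/MigrationTable">
  <Mapping>
    <Type>GlobalGroup</Type>
    <Source>TestDomain\Marketing Admins</Source>
    <Destination>ProdDomain\Marketing Admins</Destination>
  </Mapping>
  <Mapping>
    <Type>UNCPath</Type>
    <Source>\\testserver\packages</Source>
    <Destination>\\prodserver\packages</Destination>
  </Mapping>
</MigrationTable>
```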
The Migration Table Editor
The Migration Table Editor (MTE) is provided with Group Policy Management Console (GPMC) to
facilitate the editing of migration tables. Migration tables are used for copying or importing Group
Policy objects (GPOs) from one domain to another, in cases where the GPOs include domain-
specific information that must be updated during copy or import.
MTE displays three columns of information:
Migration Table Editor columns
Source Name: The name of the user, computer, group, or UNC path referenced
    in the source GPO.
Source Type: The type of the reference named in the Source Name column, for
    example, Domain Global Group.
Destination Name: The new value, after copy or import to the new GPO, or
    the method used to calculate the new value.

Note that you do not specify Destination Type. The Destination Type is determined during the
import or copy operation by checking the actual reference identified in the Destination Name.
Migration Table Editor features
You can edit text fields using Cut, Copy, and Paste options, for example when you right-
click an item in the Source Name column.
You can add computers, users, and groups, for source and destination names, by using a
Browse dialog box.
You can fill in Source Type fields by using a drop-down menu.
MTE provides automatic syntax checking to make sure the XML file is properly populated
and also ensures that only compatible entries are entered in the table. For example,
\\server1\share1 is not a valid security group name, and server1 is not a valid UNC
path, so MTE prompts the user to correct those entries.
You can validate the overall migration table using MTE, in addition to basic syntax
checking using the Validate Table option. The Validate Table option checks to ensure:
The existence of destination security principals and UNC paths. This is important
to know before you copy or import a GPO, because if there are unresolvable entries in
the migration table, then the copy or import operation might fail.
Source entries with UNC paths do not have destinations of Map by relative
name or None (neither is valid for UNC paths).
The type of each destination entry in the table matches the real type in the
destination domain.
You can auto-populate a migration table by scanning one or more GPOs or backups to
extract all references to security principals and UNC paths, and then enter these items into
the table as source name entries. This capability is provided by the Populate from GPO and
Populate from Backup options available on the Tools menu. To complete the migration
table, you only need to adjust the destination values. By default, the destination name for
each entry will be set to Same as source when you use either of the Populate options.
Note
With either auto-populate option, when the source GPO or backup contains
references to built-in groups such as Administrators or Backup Operators, these
references will not be entered into the migration table if the references are stored in the
source GPO or backup in their resolved format [based on security identifier (SID)]. This is
because built-in groups have the same SID in all domains and therefore cannot be
mapped to new values using a migration table. However, if references to built-in groups
are stored as free text, they will be captured by the auto-populate options. In this case,
you can map them as long as you specify the source type as "Free Text or SID."
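The auto-populate idea can be illustrated with a rough sketch: scan settings text for UNC paths and emit entries whose destination defaults to "same as source". This is illustrative only; GPMC extracts references from the GPO or backup itself, not from report text, and the regular expression here is a simplification.

```python
# Hypothetical sketch of "populate from GPO" for UNC paths; not a GPMC API.
import re

def populate_unc_entries(settings_text):
    """Emit migration-table entries for each UNC path found in the text."""
    entries = []
    # Simplified pattern: \\server\share, with word characters, dots,
    # dollars, and hyphens allowed in the server and share names.
    for path in sorted(set(re.findall(r"\\\\[\w.-]+\\[\w$.-]+", settings_text))):
        entries.append({"Type": "UNCPath",
                        "Source": path,
                        "Destination": "<Same As Source>"})  # the default
    return entries

report = r"Install from \\testserver\packages; logon script at \\testserver\scripts"
for entry in populate_unc_entries(report):
    print(entry["Source"])
```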

10.9 Administrative Templates in GPMC and Group Policy Object Editor


Administrators using Group Policy Management Console need to understand that Administrative
template files are handled differently depending on whether they are working with Group
Policy Management Console or Group Policy Object Editor. How each tool handles
Administrative template files can be controlled using Group Policy, but it is important first to
understand the differences between the two.

10.9.1 Administrative Templates


Administrative templates, or .adm files, enable administrators to control registry settings using
Group Policy. Windows comes with a predefined set of Administrative template files,
implemented as text files (with an .adm extension), that define the registry settings that can be
configured in a GPO. These .adm files are stored in two locations by default: inside GPOs in the
Sysvol folder and in the %windir%\inf directory on the local computer.
Windows Server 2003 includes the following .adm files: System.adm, Inetres.adm, Conf.adm,
Wmplayer.adm, and Wuau.adm, which contain all the settings initially displayed in the
Administrative Templates node.

Default Administrative Templates


System.adm
    Contains: Settings to configure the operating system.
    For use on: Windows 2000 or Windows Server 2003.
    Loaded by default.

Inetres.adm
    Contains: Settings to configure Internet Explorer.
    For use on: Windows 2000 or Windows Server 2003.
    Loaded by default.

Conf.adm
    Contains: Settings to configure NetMeeting v3.
    For use on: Windows 2000 or Windows Server 2003. Note: This tool is not
    available on Windows XP 64-Bit Edition and the 64-bit versions of the
    Windows Server 2003 family.
    Loaded by default.

Wmplayer.adm
    Contains: Settings to configure Windows Media Player.
    For use on: Windows XP, Windows Server 2003. Note: This tool is not
    available on Windows XP 64-Bit Edition and the 64-bit versions of the
    Windows Server 2003 family.
    Loaded by default.

Wuau.adm
    Contains: Settings to configure Windows Update.
    For use on: Windows 2000 SP3, Windows XP SP1, Windows Server 2003.
    Loaded by default.

10.9.2 Handling Administrative Template Files in GPMC


GPMC uses .adm files to display the friendly names of policy settings when generating HTML
reports for GPOs, Group Policy Modeling, and Group Policy Results.
By default, GPMC uses the local .adm file, regardless of time stamp. If the file is not found, then
GPMC will look in the GPO's directory on Sysvol.
The user can specify an alternate path for where to find .adm files. If specified, this takes
precedence over the previous locations.
GPMC never copies the .adm file to the Sysvol.
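The lookup order above can be sketched as a simple precedence chain. This is a hypothetical helper, not GPMC code; the directory names are placeholders for the user-specified alternate path, the local %windir%\inf directory, and the GPO's directory on Sysvol:

```python
# Sketch of GPMC's .adm lookup precedence; illustrative only.
import os

def find_adm(name, alternate_dir, local_dir, sysvol_dir):
    """Return the first .adm path found, in GPMC precedence order."""
    # Alternate path (if configured) wins, then the local copy,
    # then the GPO's directory on Sysvol.
    for directory in (alternate_dir, local_dir, sysvol_dir):
        if directory and os.path.isfile(os.path.join(directory, name)):
            return os.path.join(directory, name)
    return None  # not found anywhere; GPMC reports cannot show friendly names
```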

10.9.3 Handling Administrative Template Files in Group Policy Object Editor


Group Policy Object Editor uses .adm files to display available registry-based policy settings in
the Administrative Templates section of a GPO. This includes Group Policy for the Windows
Server 2003 operating system and its components and for applications.
By default, it attempts to read .adm files from the GPO (from the Sysvol on the domain controller).
Alternatively, the .adm file can be read from the local workstation computer. This behavior can be
controlled by a policy setting.
By default, if the version of the .adm file found on the local computer is newer (based on the time
stamp of the file) than the version on the Sysvol, the local version is copied to the Sysvol and is
then used to display the settings. This behavior can be controlled by a policy setting.
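The default time-stamp comparison can be sketched as follows. This is a hypothetical helper, not the actual Group Policy Object Editor implementation:

```python
# Sketch of the default .adm sync behavior; illustrative only.
import os
import shutil

def sync_adm(local_path, sysvol_path):
    """Copy the local .adm file to the Sysvol if its time stamp is newer."""
    if os.path.getmtime(local_path) > os.path.getmtime(sysvol_path):
        # The local version is newer: copy it up (preserving its time
        # stamp) so the Sysvol copy is used to display settings.
        shutil.copy2(local_path, sysvol_path)
        return True
    # The Sysvol version is current; nothing is copied.
    return False
```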
If the GPO contains registry settings for which there is no corresponding .adm file, these settings
cannot be seen in Group Policy Object Editor. However, the policy settings are still active and will
be applied to users or computers targeted by the GPO.
Policy settings pertaining to a user who logs on to a given workstation or server are written to the
User portion of the registry database under HKEY_CURRENT_USER. Computer-specific
settings are written to the Local Machine portion of the registry under
HKEY_LOCAL_MACHINE.
