Section 1
1. Planning and Implementing Server Roles and Server Security
Windows 2000 and Windows Server 2003 were designed to address the security concerns of
organizations, including the following:
Organizations that have acquired subsidiaries with significantly different administrative
and security requirements.
International organizations that want to divide up administrative and security
responsibilities along national or regional boundaries.
Rapidly growing organizations that want to unify security and administrative
responsibilities across disparate business units.
Active Directory in Microsoft Windows Server 2003 enables organizations to simplify user and
resource management; support directory-enabled programs; and create a scalable, secure, and
manageable infrastructure. A well-designed Active Directory logical structure provides the
following benefits:
Centralized management of Windows networks that contain large numbers of objects
A consolidated domain structure and reduced hardware and administration costs
The supporting framework for Group Policy-based user, data, and software management
The ability to delegate administrative control over resources
Integration with services such as Microsoft Exchange 2000 Server, a PKI, and domain-
based distributed file system (DFS)
Achieving these results requires careful planning of the following elements:
Domains. An administrative unit in a computer network that groups a number of
capabilities for management convenience, including network-wide user identity,
authentication, trust relationships, policy administration, and replication.
Forests. One or more Active Directory domains that share a schema and global catalog.
Each forest is a single instance of the directory and defines a security boundary.
Organizational units. Active Directory containers where you can place users, groups,
computers, and other organizational units. You can use organizational units to create
containers within a domain to represent the hierarchical and logical structures within your
organization.
The way that you plan your domains, forests, and organizational units plays a critical role in
defining your network's security boundaries. The relationship might sometimes be based on
administrative requirements; at other times, the relationship might be defined by operational
requirements such as controlling replication. Additionally, if you have multiple forests, you need to
plan the logical trust relationships between forests that allow pass-through authentication.
Control over an organizational unit and the objects within it is determined by the ACLs on
the organizational unit and on the objects in the organizational unit. Users and groups that
have control over objects in organizational units are data administrators.
To facilitate the management of large numbers of objects, Active Directory supports
administrative delegation at the container level. If administrative control is the priority for your
organization, base your logical structure design on forests and organizational units. Forests and
organizational units are used to control the delegation of authority throughout the directory. Many
organizations consolidate divisions into a single forest to enhance their users' ability to
collaborate and to reduce costs.
If you choose to organize Active Directory according to geographic location, you must apply a
domain model to your logical design. Domains let you control where information is replicated and
let you partition data so that it can be stored where it is used most frequently. A well-designed
domain model prevents unnecessary replication and promotes more efficient use of available
bandwidth between remote locations.
To determine the number of forests that your organization requires, identify the isolation
requirements for each division of the organization that will be using the directory service.
Consider the following elements:
Generally, a single forest deployment isolates data from parties outside the organization.
If your organization includes more than one IT group, the only way to achieve isolation in a
single forest environment is to select one IT group to act as the administrators of the forest,
and then make the other IT groups in the organization relinquish control of the directory.
If divisions of your organization require that you isolate data from the rest of the
organization, you must deploy multiple forests. For example, you might need multiple forests
if legal or contractual obligations require that your organization guarantees the security of
data for a particular project.
If your organization includes multiple divisions with separate IT groups, each IT group
might prefer to manage its own forest; however, your business needs might require resource
sharing between divisions. You can deploy multiple forests, each of which is managed by an
individual IT group, and then establish external trusts between the forests to facilitate
collaboration. In this type of environment, be careful to avoid granting administrative access
to users in other forests.
Trusts
If your organization includes more than one forest, you must enable the forests to allow
authentication and resource sharing. You can do this by establishing trust relationships between
some or all of the domains in the forests. The types of trust relationships that you can establish
depend on the versions of the operating system that are running in each forest:
Authentication between Windows Server 2003 forests. When all domains in two
forests trust each other and must authenticate users, establish a forest trust between the
forests. When only some of the domains in two Windows Server 2003 forests trust each
other, establish one-way or two-way external trusts between the domains that require
interforest authentication.
Authentication between Windows Server 2003 and Windows 2000 forests. It is not
possible to establish transitive forest trusts between Windows Server 2003 and
Windows 2000 forests. To enable authentication with Windows 2000 forests, establish one-
way or two-way external trusts between the domains that need to share resources.
Authentication between Windows Server 2003 and Windows NT 4.0 forests. It is not
possible to establish transitive forest trusts between Windows Server 2003 and
Windows NT 4.0 domains. Establish one-way or two-way external trusts between the
domains that need to share resources.
The Windows Server 2003 security model includes features that administrators can manage
easily and efficiently. The following sections describe these features of the security model.
1.6.1 Authentication
Interactive logon confirms the user's identification to the user's local computer or Active
Directory account.
Network authentication confirms the user's identification to any network service that the
user is attempting to access. To provide this type of authentication, the security system
includes these authentication mechanisms: Kerberos V5, public key certificates, Secure
Sockets Layer/Transport Layer Security (SSL/TLS), Digest, and NTLM (for compatibility with
Windows NT 4.0 systems).
Single sign-on makes it possible for users to access resources over the network without
having to repeatedly supply their credentials. For the Windows Server 2003 family, users
need to authenticate only once to access network resources; subsequent authentication is
transparent to the user.
1.6.4 Auditing
Monitoring the creation or modification of objects gives you a way to track potential security
problems, helps to ensure user accountability, and provides evidence in the event of a security
breach.
Search in files that are in different formats and languages, either through the Search
command on the Start menu or through HTML pages that users view in a browser.
Important
If you plan to include computers on the Internet on your network, use a unique DNS
domain name. DNS domain names consist of a sequence of name labels separated by periods.
After configuring the DNS server role, you can do the following:
Host records of a distributed DNS database and use these records to answer DNS
queries sent by DNS client computers, such as queries for the names of Web sites or
computers in your network or on the Internet.
Name and locate network resources using user-friendly names.
Control name resolution for each network segment and replicate changes to either the
entire network or globally on the Internet.
Reduce DNS administration by dynamically updating DNS information.
Support earlier Windows and NetBIOS-based clients on your network by permitting these
types of clients to browse lists for remote Windows domains without requiring a local domain
controller to be present on each subnet.
Support DNS-based clients by enabling those clients to locate NetBIOS resources when
WINS lookup integration is implemented.
4. If you are creating a new database, in Import Template, click a template, and then click
Open.
5. In the console tree, right-click Security Configuration and Analysis, and then click
Configure Computer Now.
6. Do one of the following:
o To use the default log specified in Error log file path, click OK.
o To specify a different log, in Error log file path, type a valid path and file name.
To import a security template to a Group Policy object
1. Perform one of these steps:
If you are on a workstation or server that is joined to a domain and you would like to import a
security template for a Group Policy object, do this:
o Click Start, click Run, type mmc, and then click OK.
o On the File menu, click Add/Remove Snap-in.
o In Add/Remove Snap-in, click Add, and in Add Standalone Snap-in, double-click
Group Policy Object Editor.
o In Select Group Policy Object, click Browse, select the policy object you would
like to modify, click OK, and then click Finish.
o Click Close, and then click OK.
If you are on a domain controller and you would like to import a security template for a domain
or organizational unit, do this:
o Open Active Directory Users and Computers.
o In the console tree, right-click the domain or organizational unit you want to set
Group Policy for.
o Click Properties, and then click the Group Policy tab.
o Click Edit to open the Group Policy object you want to edit, or click New to
create a new Group Policy object, and then click Edit.
2. In the Group Policy console tree, right-click Security Settings.
Where?
o Group Policy Object
o Computer Configuration
o Windows Settings
o Security Settings
3. Click Import Policy, click the security template you want to import, and then click Open.
4. (Optional) If you want to clear the database of any previously stored security templates,
select the Clear this database before importing check box.
To import a security template
1. Open Security Configuration and Analysis.
2. In the console tree, right-click Security Configuration and Analysis, and then click
Import Template.
Where?
o ConsoleRoot
o Security Configuration and Analysis
3. (Optional) To clear the database of any template that has been previously stored, select
the Clear this database before importing check box.
4. Click a template file, and then click Open.
5. Repeat these steps for each template that you want to merge into the database.
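If you prefer to script this, the secedit command-line tool can import a template into a security
database and apply it in one step, roughly corresponding to the Import Template and Configure
Computer Now console actions above. This is a minimal sketch; the database, template, and log
paths are placeholders, and you should confirm the options against secedit /? on your system:

    secedit /configure /db C:\Security\Mydb.sdb /cfg "C:\Templates\Mytemplate.inf" /overwrite /log C:\Security\Mylog.txt

Here /overwrite plays the role of the Clear this database before importing check box, and /log
corresponds to the Error log file path setting.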
o For other computers, in the console tree, right-click Security Configuration and
Analysis, click Import Template, and then click Setup security.inf.
5. Select the Clear this database before importing check box, and then click Open.
6. In the console tree, right-click Security Configuration and Analysis, and then click
Configure Computer Now.
7. Do one of the following:
o To use the default log specified in Error log file path, click OK.
o To specify a different log, in Error log file path, type a valid path and file name,
and then click OK.
8. When the configuration is done, right-click Security Configuration and Analysis, and
then click View Log File.
Section 2
Enables the most widely used network protocol. Windows Server 2003 TCP/IP is a
complete, standards-based implementation of the most widely accepted networking protocol
in the world. IP is routable, scalable, and efficient. IP forms the basis for the Internet, and it is
also used as the primary network technology on most major enterprise networks in
production today. You can configure computers running Windows Server 2003 with TCP/IP to
perform nearly any role that a networked computer requires.
Connects dissimilar systems. Although all modern networking operating systems offer
TCP/IP support, Windows Server 2003 TCP/IP provides the best platform for connecting
Windows-based systems to earlier Windows systems and to non-Windows systems. Most
standard connectivity utilities are available in Windows Server 2003 TCP/IP, including the
File Transfer Protocol (FTP) program, the Line Printer (LPR) program, and Telnet, a terminal
emulation protocol.
Provides client/server framework. Windows Server 2003 TCP/IP provides a cross-
platform client/server framework that is robust, scalable, and secure. Windows Server 2003
TCP/IP offers the Windows Sockets programming interface, which is ideal for developing
client/server applications that can run on Windows Sockets-compliant TCP/IP protocol
implementations from other vendors.
Provides access to the Internet. Windows Server 2003 TCP/IP can provide users with
a method of gaining access to the Internet. A computer running Windows Server 2003 can be
configured to serve as an Internet Web site, it can function in a variety of other roles as an
Internet client or server, and it can use nearly all of the Internet-related software available
today.
The modular nature of a hierarchical model such as the three-tier model can simplify deployment,
capacity planning, and troubleshooting in a large internetwork. In this design model, the tiers
represent the logical layers of functionality within the network. In some cases, network devices
serve only one function; in other cases, the same device may function within two or more tiers.
The three tiers of this hierarchical model are referred to as the core, distribution, and access tiers.
The figure below illustrates the relationship between network devices operating within each tier.
Figure Three-Tier Network Design Model
After planning your network infrastructure based on your design model, plan how to implement
routing. The next figure shows the tasks involved in developing a unicast routing strategy.
Figure Developing a Routing Strategy
To plan an effective routing solution for your environment, you must understand the differences
between hardware routers and software routers; static routing and dynamic routing; and distance
vector routing protocols and link state routing protocols.
Routing Information Protocol (RIP) is the best known and most widely used of the distance vector
routing protocols. RIP version 1 (RIP v1), which is now outmoded, was the first routing protocol
accepted as a standard for TCP/IP. RIP version 2 (RIP v2) provides authentication support,
multicast announcing, and better support for classless networks. The Windows Server 2003
Routing and Remote Access service supports both RIP v1 and RIP v2 (for IPv4 only).
Using RIP, the maximum hop count from the first router to the destination is 15. Any destination
greater than 15 hops away is considered unreachable. This limits the diameter of a RIP
internetwork to 15 hops. However, if you place your routers in a hierarchical structure, 15 hops
can cover a large number of destinations.
131.107.65.37/21.
Again, "/21" indicates the number of high-order bits set to 1 in binary notation, leaving 11 bits (the
11 zeros) for the host ID portion of the address.
To determine the appropriate number of subnets versus hosts for your organization's network,
consider the following:
More subnets. Allocating more host bits for subnetting supports more subnets but fewer
hosts per subnet.
More hosts. Allocating fewer host bits for subnetting supports more hosts per subnet, but
limits the growth in the number of subnets.
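To make this tradeoff concrete, the following sketch (Python; the prefix lengths are arbitrary
examples) counts subnets and usable hosts as host bits are borrowed from a Class B network,
such as the 131.107.65.37/21 example above:

    # Tradeoff between subnets and hosts when borrowing host bits for subnetting.
    # A Class B network has a /16 default prefix, leaving 16 bits to divide
    # between the subnet ID and the host ID.
    def subnet_tradeoff(default_prefix, new_prefix):
        subnet_bits = new_prefix - default_prefix   # host bits borrowed for subnetting
        host_bits = 32 - new_prefix                 # bits left over for host IDs
        subnets = 2 ** subnet_bits
        hosts_per_subnet = 2 ** host_bits - 2       # minus network and broadcast addresses
        return subnets, hosts_per_subnet

    for prefix in (18, 21, 24):
        subnets, hosts = subnet_tradeoff(16, prefix)
        print(f"/{prefix}: {subnets} subnets, {hosts} hosts per subnet")
    # /18: 4 subnets, 16382 hosts per subnet
    # /21: 32 subnets, 2046 hosts per subnet
    # /24: 256 subnets, 254 hosts per subnet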
Each router in Figure 1.6 must use a subnet mask to look up a match in the routing table.
Because a classful address, by definition, has only its class-based default subnet mask, the
router uses the network mask that corresponds to the class of the subnet ID when advertising the
route for the subnet. With classful routing, each of the routers in Figure 1.6 summarizes and
advertises the class-based network ID of 10.0.0.0/8, resulting in two routes to 10.0.0.0/8, each of
which might have a different metric. Therefore, a packet meant for one subnet could be
incorrectly routed to the other subnet. In the figure, the arrows represent the routes advertised by
the routers.
When using VLSM, do not accidentally overlap blocks of addresses. If possible, start with
equal-size subnets and then subdivide them.
VLSM also can be used when a point-to-point WAN link connects two routers. One way to handle
such a WAN link is to create a small subnet consisting of only two addresses. Without VLSM, you
might divide a Class C network ID into an equal number of two-address subnets. If only one WAN
link is in use, all the subnets but one serve no purpose, wasting 252 addresses.
Alternatively, you can divide the Class C network into 16 workgroup subnets of 14 nodes each by
using a prefix length of 28 bits (or, in subnet mask terms, 255.255.255.240). By using VLSM, you
can then subdivide one of those 16 subnets into 4 smaller subnets, each supporting only 2 nodes.
You can use one of the 4 subnets for your existing WAN link and reserve the remaining 3 subnets
for similar links that you might need in the future. To accomplish this act of sub-subnetting by
using VLSM, use a prefix length of 30 bits (or, in subnet mask terms, 255.255.255.252).
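This scheme can be checked with Python's standard ipaddress module; a sketch using the
131.107.106.0/24 network from the figure (which /28 you subdivide is an arbitrary choice):

    import ipaddress

    network = ipaddress.ip_network("131.107.106.0/24")

    # Divide the Class C network into 16 workgroup subnets of 14 usable hosts each.
    workgroups = list(network.subnets(new_prefix=28))
    print(len(workgroups))                        # 16

    # Subdivide one /28 into /30 sub-subnets of 2 usable hosts each for WAN links.
    wan_links = list(workgroups[-1].subnets(new_prefix=30))
    for wan in wan_links:                         # 4 subnets: 1 for the WAN link, 3 spare
        print(wan, [str(host) for host in wan.hosts()])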
The figure below shows variable length subnetting for two-host WAN subnets.
Figure Variable Length Subnetting of 131.107.106.0
If your network includes numerous WAN links, each with its own subnet, this approach can
require significant administrative overhead. If you do not use route summarization, each subnet
requires another entry in the routing table, increasing the overhead of the routing process.
Some routers support unnumbered connections; a link with unnumbered connections does not
require its own subnet.
The need to quickly replace the IP addresses of all the nodes on a large private network can require
considerable time and interrupt network operation.
Although BOOTP and DHCP hosts can interoperate, DHCP is easier to configure. BOOTP
requires maintenance by a network administrator, whereas DHCP requires minimal maintenance
after the initial installation and configuration.
The DHCP standard, defined in RFC 2131, defines a DHCP server as any computer running the
DHCP service. Compared with static addressing, DHCP simplifies IP address management
because the DHCP server automatically allocates IP addresses and related TCP/IP configuration
settings to DHCP-enabled clients on the network. This is especially useful on a network with
frequent configuration changes, for example, in an organization that has a large number of
mobile users.
The DHCP server dynamically assigns specific addresses from a manually designated range of
addresses called a scope. By using scopes, you can dynamically assign addresses to clients on
the network no matter where the clients are located or how often they move.
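The sketch below models the idea of a scope as a data structure: a fixed pool of addresses from
which leases are handed out. It is purely illustrative, not how the Windows DHCP service is
implemented; the class name, client IDs, and address range are all made up:

    import ipaddress

    class Scope:
        """A simplified DHCP-style scope: leases addresses from a designated range."""
        def __init__(self, first, last):
            start, end = ipaddress.ip_address(first), ipaddress.ip_address(last)
            self.pool = [start + offset for offset in range(int(end) - int(start) + 1)]
            self.leases = {}                               # client ID (e.g., MAC) -> IP address

        def lease(self, client_id):
            if client_id in self.leases:                   # a renewing client keeps its address
                return self.leases[client_id]
            in_use = set(self.leases.values())
            for address in self.pool:
                if address not in in_use:
                    self.leases[client_id] = address
                    return address
            raise RuntimeError("scope exhausted")

    scope = Scope("10.0.0.10", "10.0.0.50")
    print(scope.lease("00-0C-29-AA-BB-01"))                # 10.0.0.10
    print(scope.lease("00-0C-29-AA-BB-02"))                # 10.0.0.11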
When a NetBIOS process is communicating with a specific process on a specific computer, a
unique name is used. When a NetBIOS process is communicating with multiple processes on
multiple computers, a group name is used.
The exact mechanism by which NetBIOS names are resolved to IP addresses depends on the
NetBIOS node type that is configured for the node. RFC 1001, Protocol Standard for a NetBIOS
Service on a TCP/UDP Transport: Concepts and Methods, defines the NetBIOS node types, as
listed in the following table.
NetBIOS Node Types
B-node (broadcast): Uses broadcast NetBIOS name queries for name registration and
resolution. B-node has two major limitations: (1) broadcasts disturb every node on the network,
and (2) routers typically do not forward broadcasts, so only NetBIOS names on the local network
can be resolved.
P-node (peer-peer): Uses a NetBIOS name server (NBNS), such as a WINS server, to resolve
NetBIOS names. P-node does not use broadcasts; instead, it queries the name server directly.
M-node (mixed): A combination of B-node and P-node. By default, an M-node functions as a
B-node. If an M-node is unable to resolve a name by broadcast, it queries an NBNS using P-node.
H-node (hybrid): A combination of P-node and B-node. By default, an H-node functions as a
P-node. If an H-node is unable to resolve a name through the NBNS, it uses broadcast (B-node)
to resolve the name.
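The fallback behavior in the table reduces to an ordered list of resolution methods per node
type, as in this illustrative Python sketch (the lookup function is a stand-in, not a real network
call):

    # NetBIOS name resolution order by node type (illustrative only).
    RESOLUTION_ORDER = {
        "B-node": ["broadcast"],
        "P-node": ["nbns"],
        "M-node": ["broadcast", "nbns"],   # broadcast first, then the name server
        "H-node": ["nbns", "broadcast"],   # name server first, then broadcast
    }

    def resolve(name, node_type, lookup):
        """Try each method in the node type's order; return the first address found."""
        for method in RESOLUTION_ORDER[node_type]:
            address = lookup(name, method)
            if address is not None:
                return address
        return None                        # name could not be resolved

    # Stand-in lookup: pretend only the WINS server (NBNS) knows the name.
    fake_lookup = lambda name, method: "10.0.0.25" if method == "nbns" else None
    print(resolve("FILESERVER1", "H-node", fake_lookup))   # 10.0.0.25 on the first try
    print(resolve("FILESERVER1", "B-node", fake_lookup))   # None: broadcast-only fails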
Computers running Windows Server 2003 operating systems are B-node by default and become
H-node when they are configured with a WINS server. Those computers can also use a local
database file called Lmhosts to resolve remote NetBIOS names. The Lmhosts file is stored in the
systemroot\System32\Drivers\Etc folder.
Authoritative DNS server A DNS server that hosts a primary or secondary copy of zone data.
Each zone has at least one authoritative DNS server.
Conditional forwarding A DNS query setting that enables a DNS server to route a request for
a particular name to another DNS server by specifying a name and IP address. For example, a
DNS server in contoso.com can be configured to forward queries for names in treyresearch.com
to a DNS server hosting the treyresearch.com zone.
Delegation The process of using resource records to provide pointers from parent zones to
child zones in a namespace hierarchy. This enables DNS servers in a parent zone to route queries
to DNS servers in a child zone for names within their branch of the DNS namespace. Each
delegation corresponds to at least one zone.
DNS client resolver A service that runs on client computers and sends DNS queries to a DNS
server. Some resolvers use a cache to improve name resolution performance.
DNS namespace The hierarchical naming structure of the domain tree. Each domain label that
is used in a fully qualified domain name (FQDN) indicates a node or branch in the domain tree.
For example, host1.contoso.com is an FQDN that represents the node host1, under the node
Contoso, under the node com, under the DNS root.
DNS server A computer that hosts DNS zone data, resolves DNS queries, and caches the
query responses.
Domain tree In DNS, the inverted hierarchical tree structure that is used to index domain names
within a namespace. Domain trees are similar in purpose and concept to the directory trees used
by computer filing systems for disk storage.
Public namespace A namespace on the Internet, such as www.microsoft.com, that can be
accessed by any connected device. Beneath the top-level domains, the Internet Corporation for
Assigned Names and Numbers (ICANN), the Internet Assigned Numbers Authority (IANA), and
other Internet naming authorities delegate domains to organizations such as Internet Service
Providers (ISPs), which in turn delegate subdomains to their customers or host zones for their
customers.
Forward lookup zone An authoritative DNS zone that is primarily used to resolve
network resource names to IP addresses.
Fully qualified domain name (FQDN) A DNS name that uniquely identifies a node in a DNS
namespace. The FQDN of a computer is a concatenation of the computer name (for example,
client1), the primary DNS suffix of the computer (for example, contoso.com), and a
terminating dot (for example, client1.contoso.com.).
Internal namespace A namespace that is internal to an organization and to which the
organization can control access. Organizations can use an internal namespace to shield the
names and IP addresses of their internal computers from the Internet. A single organization
might have multiple internal
namespaces. Organizations can create their own root servers and any subdomains as needed.
The internal namespace can coexist with an external namespace.
Iterative query A query made by a client to a DNS server for an authoritative answer that can
be provided by the server without generating additional server-side queries to other DNS servers.
Primary DNS server A DNS server that hosts read-write copies of zone data, has a DNS
database of resource records, and resolves DNS queries.
Secondary DNS server A DNS server that hosts a read-only copy of zone data. A secondary
DNS server periodically checks for changes made to the zone on its configured primary DNS
server, and performs full or incremental zone transfers, as needed.
Recursive query A query made by either a client or a DNS server on behalf of a client, the
response to which can be an authoritative answer or a referral to another server. Recursive
queries continue until the DNS server receives an authoritative answer for the queried name. By
default, recursion is enabled for Windows Server 2003 DNS.
Resource record (RR) A DNS database structure containing name information for a particular
zone. For example, an address (A) resource record can map the IP address 172.16.10.10 to the
name DNSserverone.contoso.com, or a name server (NS) resource record can map the name
contoso.com to the server name DNS1.contoso.com. Most of the basic RR types are defined in
RFC 1035: Domain Names - Implementation and Specification, but additional RR types are
defined in other RFCs.
Reverse lookup zone An authoritative DNS zone that is primarily used to resolve IP addresses
to network resource names.
Stub zone A partial copy of a zone that can be hosted by a DNS server and used to resolve
recursive or iterative queries. Stub zones contain the Start of Authority (SOA) resource records of
the zone, the DNS resource records that list the zone's authoritative servers, and the glue
address (A) resource records that are required for contacting the zone's authoritative servers.
Stub zones are used to reduce the number of DNS queries on a network, and to decrease the
network load on the primary DNS servers hosting a particular name.
Zone In a DNS database, a contiguous portion of the domain tree that is administered as a
single separate entity by a DNS server. The zone contains resource records for all of the names
within the zone.
Zone file A file that consists of the DNS database resource records that define the zone. DNS
data that is Active Directory-integrated is not stored in zone files because the data is stored in
Active Directory. However, DNS data that is not Active Directory-integrated is stored in zone files.
Zone transfer The process of copying the contents of the zone file located on a primary DNS
server to a secondary DNS server. Using zone transfer provides fault tolerance by synchronizing
the zone file in a primary DNS server with the zone file in a secondary DNS server. The
secondary DNS server can continue performing name resolution if the primary DNS server fails.
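The check that drives a zone transfer can be sketched as a comparison of SOA serial numbers.
This is a simplification: real servers also honor the refresh and retry timers in the SOA record,
and Windows Server 2003 DNS can use incremental transfer (IXFR) as well as full transfer
(AXFR). The serial values below are hypothetical:

    def zone_transfer_decision(local_serial, primary_serial, primary_supports_ixfr):
        """What a secondary DNS server does after polling the primary's SOA serial."""
        if primary_serial <= local_serial:
            return "zone is current; no transfer needed"
        if primary_supports_ixfr:
            return "request an incremental zone transfer (IXFR)"
        return "request a full zone transfer (AXFR)"

    print(zone_transfer_decision(2023010101, 2023010102, primary_supports_ixfr=True))
    # request an incremental zone transfer (IXFR)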
If you use an internal DNS root, a private DNS root zone is hosted on a DNS server on your
internal network. This private DNS root zone is not exposed to the Internet. Just as the DNS root
zone contains delegations to all of the top-level domain names on the Internet, such as .com,
.net, and .org, a private root zone contains delegations to all of the top-level domain names on
your network. The DNS server that hosts the private root zone in your namespace is considered
to be authoritative for all of the names in the internal DNS namespace.
Using an internal DNS root provides the following benefits:
Simplicity. If your network spans multiple locations, an internal DNS root might be the
best method for administering DNS data in a network.
Secure name resolution. With an internal DNS root, DNS clients and servers on your
network never contact the Internet to resolve internal names. In this way, the DNS data for
your network is not broadcast over the Internet. You can enable name resolution for any
name in another namespace by adding a delegation from your root zone. For example, if
your computers need access to resources in a partner organization, you can add a
delegation from your root zone to the top level of the DNS namespace of the partner
organization.
Important
Do not reuse names that exist on the Internet in your internal namespace. If you repeat
Internet DNS names on your intranet, it can result in name resolution errors.
If name resolution is required by computers that do not support software proxy, or by computers
that support only LATs, then you cannot use an internal root for your DNS namespace. In this
case, you must configure one or more internal DNS servers to forward queries that cannot be
resolved locally to the Internet.
Table lists the types of client proxy capabilities and whether you can use an internal DNS root for
each type.
Table Client Proxy Capabilities
Proxy Capability: No proxy
Microsoft software with corresponding proxy capabilities: Generic Telnet
Can you use an internal root? No (queries must be forwarded to the Internet)
Proxy Capability: Local Address Table (LAT)
Microsoft software with corresponding proxy capabilities: Winsock Proxy (WSP) 1.x and later;
Microsoft Internet Security and Acceleration (ISA) Server 2000 and later
Can you use an internal root? No (queries must be forwarded to the Internet)
Proxy Capability: Name Exclusion List
Microsoft software with corresponding proxy capabilities: WSP 1.x and later; ISA Server 2000
and later; all versions of Microsoft Internet Explorer
Can you use an internal root? Yes
Proxy Capability: Proxy Auto-configuration (PAC) file
Microsoft software with corresponding proxy capabilities: WSP 2.x; ISA Server 2000 and later;
Internet Explorer 3.01 and later
Can you use an internal root? Yes
This solution requires increased storage space for hosting secondary copies of zones in different
namespaces, and generates increased zone transfer traffic.
If storage capacity on DNS servers is a consideration, configure the DNS servers that
host the DNS zones in one namespace to forward name resolution queries in a second
namespace to the DNS servers that are hosting the DNS zones for the second namespace.
Then configure the DNS servers that host the DNS zones in the second namespace to
forward name resolution queries in the first namespace to the DNS servers that are hosting
the DNS zones for the first namespace. You can use Windows Server 2003 DNS conditional
forwarders for this configuration.
Decide which type of zone to use, based on your domain structure. For each zone type, with the
exception of secondary zones, decide whether to deploy file-based zones or Active Directory-
integrated zones.
With a stub zone, the DNS server that hosts the parent zone does not need to have a complete
copy of the child zone. In addition, when a stub zone is used, the DNS server does not have to
send queries to the DNS root servers. If the stub zone for a child zone is hosted on the same
DNS server as the parent zone, the DNS server that is hosting the stub zone receives a list of all
new authoritative DNS servers for the child zone when it requests an update from the stub
zone's primary server. In this way, the DNS server that is hosting the parent zone
maintains a current list of the authoritative DNS servers for the child zone as the authoritative
DNS servers are added and removed.
Use conditional forwarding if you want DNS servers in one network to perform name resolution
for DNS clients in another network. You can configure DNS servers in separate networks to
forward queries to each other without querying DNS servers on the Internet. If DNS servers in
separate networks forward DNS client names to each other, the DNS servers cache this
information. This enables you to create a direct point of contact between DNS servers in each
network and reduces the need for recursion.
If you are using a stub zone and you have a firewall between DNS servers in the networks, then
DNS servers on the query/resolution path must have port 53 open. However, if you are using
conditional forwarding and you have a firewall between DNS servers in each of the networks, the
requirement to have port 53 open only applies to the two DNS servers on either side of the
firewall.
DNS application directory partitions are present only on the domain controllers that run the DNS
Server service. By storing Active Directory-integrated zones in an application directory partition,
you can reduce the number of objects that are stored in the global catalog, and you can reduce
the amount of replication traffic within a domain.
In contrast, Active Directory-integrated zones that are stored in domain directory partitions are
replicated to all domain controllers in the domain. Storing Active Directory-integrated zones in an
application directory partition allows replication of DNS data to domain controllers anywhere in
the same Active Directory forest.
When you are setting up your Active Directory environment and installing the first Windows
Server 2003 domain controller in the forest, if you install DNS, two Windows Server 2003 DNS
application directory partitions are created by default: a forest-wide DNS application directory
partition called ForestDnsZones and, for each domain in the forest, a domain-wide DNS
application directory partition called DomainDnsZones.
2.12.10.8 Using Forwarding
If a DNS server does not have the data to resolve a query in its cache or in its zone data, it
forwards the query to another DNS server, known as a forwarder. Forwarders are ordinary DNS
servers and require no special configuration; a DNS server is called a forwarder because it is the
recipient of a query forwarded by another DNS server.
Use forwarding for off-site or Internet traffic. For example, a branch office DNS server can forward
all off-site traffic to a forwarder at the company headquarters, and an internal DNS server can
forward all Internet traffic to a forwarder on the external network. To ensure fault tolerance,
forward queries to more than one forwarder.
Forwarders can increase network security by minimizing the list of DNS servers that
communicate across a firewall.
You can use conditional forwarding to more precisely control the name resolution process.
Conditional forwarding enables you to designate specific forwarders for specific DNS names. You
can use conditional forwarding to resolve the following:
Queries for names in off-site internal domains
Queries for names in other namespaces
Using Conditional Forwarding to Query for Names in Off-Site Internal Domains
In Windows Server 2003 DNS, non-root servers resolve names for which they are not
authoritative, for which they do not have a delegation, and which they do not have in their cache
by doing one of the following:
Querying a root server.
Forwarding queries to a forwarder.
Both of these methods generate additional network traffic. For example, a non-root server in Site
A is configured to forward queries to a forwarder in Site B, and it must resolve a name in a zone
hosted by a server in Site C. Because the non-root server can forward queries only to Site B, it
cannot directly query the server in Site C. Instead, it forwards the query to the forwarder in Site B,
and the forwarder queries the server in Site C.
When you use conditional forwarding, you can configure your DNS servers to forward queries to
different servers based on the domain name specified in the query. This eliminates steps in the
forwarding chain and reduces network traffic. When conditional forwarding is applied, the server
in Site A can forward queries to forwarders in Site B or Site C, as appropriate.
For example, the computers in the Seville site need to query computers in the Hong Kong site.
Both sites use a common DNS root server, DNS3.corp.fabrikam.com, located in Seville.
Before the Contoso Corporation upgraded to Windows Server 2003, the server in Seville
forwarded all queries that it could not resolve to its parent server, DNS1.corp.contoso.com, in
Seattle. When the server in Seville queried for names in the Hong Kong site, the server in Seville
first forwarded those queries to Seattle.
After upgrading to Windows Server 2003, administrators configured the DNS server in Seville to
forward queries destined for the Hong Kong site directly to a server in that site, instead of first
detouring to Seattle, as shown in the figure below.
Figure Conditional Forwarding to an Off-Site Server
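Conceptually, conditional forwarding amounts to a longest-suffix match of the queried name
against a table of domain-to-forwarder mappings. The sketch below mirrors the Seville example;
the domain names and IP addresses are hypothetical:

    # Conditional forwarders: domain suffix -> forwarder address ("" is the default).
    FORWARDERS = {
        "hongkong.corp.contoso.com": "10.2.0.1",   # forwarder in the Hong Kong site
        "": "10.1.0.1",                            # default forwarder in Seattle
    }

    def pick_forwarder(query_name):
        """Forward to the server whose domain is the longest suffix of the query."""
        matches = [domain for domain in FORWARDERS
                   if domain == "" or query_name == domain
                   or query_name.endswith("." + domain)]
        return FORWARDERS[max(matches, key=len)]

    print(pick_forwarder("host1.hongkong.corp.contoso.com"))   # 10.2.0.1, straight to Hong Kong
    print(pick_forwarder("www.example.com"))                   # 10.1.0.1, the default path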
External DNS servers in front of your firewall are configured with root hints pointing to the
root servers for the Internet.
All Internet name resolution is performed by using proxy servers and gateways.
Add a secondary server on another subnet or network, or on an ISP. This protects you
against denial-of-service attacks.
Eliminate single points of failure by securing your routers and DNS servers, and
distributing your DNS servers geographically. Add secondary copies of your zones to at least
one offsite DNS server.
Encrypt zone replication traffic by using Internet Protocol security (IPSec) or virtual
private network (VPN) tunnels to hide the names and IP addresses from Internet-based
users.
Configure firewalls to enforce packet filtering for UDP and TCP port 53.
Restrict the list of DNS servers that are allowed to initiate a zone transfer on the DNS
server. Do this for each zone in your network.
Monitor the DNS logs and monitor your external DNS servers by using Event Viewer.
Before configuring replication, carefully design and review your WINS replication topology. For
WANs, this planning can be critical to the success of your deployment and use of WINS.
WINS provides the following choices when you are configuring replication:
You can manually configure WINS replication for a WAN environment.
For larger networks, you can configure WINS to replicate within a LAN environment.
In smaller or bounded LAN installations, consider enabling and using WINS automatic
partner configuration for simplified setup of WINS replication.
In larger or global installations, you might have to configure WINS across untrusted
Windows NT domains.
If your network uses only two WINS servers, configure them as push/pull replication partners to
each other. When configuring replication partners, avoid push-only or pull-only servers except
where necessary to accommodate slow links. In general, push/pull replication is the simplest
and most effective way to ensure full WINS replication between partners. This also ensures that the
primary and secondary WINS servers for any particular WINS client are push/pull partners of
each other, a requirement for proper WINS functioning in the event of a failure of the primary
server of the client.
In most cases, the hub-and-spoke model provides a simple and effective design for organizations
that require complete convergence with minimal administrative intervention. For example, this
model works well for organizations with centralized headquarters or a corporate data center (the
hub) and several branch offices (the spokes). Also, a second or redundant hub (that is, a second
WINS server in the central location) can increase the fault tolerance for WINS.
In some large enterprise WINS networks, limited replication partnering can effectively support
replication over slow network links. However, when you plan limited WINS replication, ensure that
each server has at least one replication partner. Furthermore, balance each slow link that
employs a unidirectional link by a unidirectional link elsewhere in the network that carries updated
entries in the opposite direction.
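One way to sanity-check a planned topology against these rules is to model the replication links
as directed pairs and verify that every server has a partner and that every one-way link has a
return path, as in this sketch (the server names are hypothetical):

    # Replication links as (source, destination): entries flow from source to destination.
    servers = ["HubWINS", "Branch1", "Branch2", "Branch3"]
    links = {("HubWINS", "Branch1"), ("Branch1", "HubWINS"),   # a push/pull pair
             ("HubWINS", "Branch2")}                           # a one-way link

    for server in servers:
        if not any(server in link for link in links):
            print(f"{server} has no replication partner")      # flags Branch3

    for source, destination in links:
        if (destination, source) not in links:
            print(f"one-way link {source} -> {destination}: "
                  f"updates owned by {destination} need a path back to {source}")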
You can select other configurations for replication partner configurations to meet the particular
needs of your site. For example, Figure 4.7 shows replication in a T network topology, in which
Server1 has only Server2 as a partner, but Server2 has three partners. So Server1 gets all the
replicated information from Server2, but Server2 gets information from Server1, Server3, and
Server4.
Figure Replication in a T Network Topology
If Server2 needs to perform pull replications with Server3, make sure it is a push partner of
Server3. If Server2 needs to push replications to Server3, configure it as a pull partner of
Server3. Determine whether to configure WINS servers as either pull or push partners, and set
partner preferences for each server.
After determining the replication strategy that works best for your organization, map the strategy
to your physical network. For example, if you have chosen a hub-and-spoke strategy, indicate on
your network topology map which sites have the "hub" server, and which have the "spoke"
servers. Also indicate whether the replication is push/pull, push-only, or pull-only.
To verify whether a client has basic TCP/IP access to the DNS server, first try pinging the
preferred DNS server by its IP address.
For example, if the client uses a preferred DNS server of 10.0.0.1, type ping 10.0.0.1 at the
command prompt on the client computer. If you are not sure what the IP address is for the
preferred DNS server, you can observe it by using the ipconfig command.
For example, at the client computer, type ipconfig /all | more to pause the display if necessary,
so that you can read and note the IP addresses listed under DNS Servers in the command output.
If no configured DNS servers respond to a direct pinging of their IP address, it indicates that the
source of the problem is more likely a network connectivity problem between the client and the
DNS servers. If that is the case, follow basic TCP/IP network troubleshooting steps to fix the
problem.
Cause: The DNS server is not running or responding to queries.
Solution: If the DNS client can ping the DNS server computer, verify that the DNS server is
started and able to listen for and respond to client requests. Try using the nslookup command to
test whether the server can respond to DNS clients.
Cause: The DNS server the client is using does not have authority for the failed name and
cannot locate the authoritative server for this name.
Solution: Confirm whether the DNS domain name the client is trying to resolve is one for which
its configured DNS servers are authoritative.
For example, if the client is attempting to resolve the name host.example.microsoft.com, verify
that the preferred (or an alternate, if one is being used) DNS server queried by the client loads
the authoritative zone where a host (A) resource record (RR) for the failed name should exist.
If the preferred server is authoritative for the failed name and loads the applicable zone,
determine whether the zone is missing the appropriate RRs. If needed, add the RRs to the zone.
If the preferred server is not authoritative for the failed name, it indicates that configuration errors
at the DNS server are the likely cause. As needed, further troubleshoot the problem at the DNS
server.
I am having a problem related to zone transfers.
Cause: The DNS Server service is stopped or the zone is paused.
Solution: Verify that the master (source) and secondary (destination) DNS servers involved in
completing transfer of the zone are both started and that the zone is not paused at either server.
Cause: The DNS servers used during a transfer do not have network connectivity with each
other.
Solution: Eliminate the possibility of a basic network connectivity problem between the two
servers.
Using the ping command, ping each DNS server by its IP address from its remote counterpart.
For example, at the source server, use the ping command to test IP connectivity with the
destination server. At the destination server, repeat the ping test, substituting the IP address for
the source server.
Both ping tests should succeed. If not, investigate and resolve intermediate network connectivity
issues.
Solution: To help prevent the most common types of problems, review WINS best practices for
deploying and managing your WINS servers. Most WINS-related problems start as failed queries
at a client, so it is best to start there.
Cause: The WINS server might not be able to service the client.
Solution: At the primary or secondary WINS server for the client that cannot locate a name, use
Event Viewer or the WINS console to see if WINS is started and currently running. If WINS is
running on the server, search for the name previously requested by the client to see if it is in the
WINS server database.
The server intermittently loses its ability to resolve names.
Cause: There might be a split registration problem, where the WINS server is registering its
names in WINS at two servers on the network. This is possible when the WINS server settings
configured in TCP/IP properties at the server computer point to the IP addresses of remote
WINS servers and are not configured to use the IP address of the local WINS server.
Solution: Reconfigure the client TCP/IP properties at the WINS server so that its primary and
secondary WINS servers point to the IP address of the local server computer.
I can't locate the source of "duplicate name" error messages.
Cause: You might need to manually remove static records, or enable static mappings to be
overwritten during replication.
Solution: If the duplicate name exists already as a static mapping, you can tombstone or delete
the entry. If possible, this should be done at the WINS server that owns the static mapping for the
duplicate name record. As a preventive measure, in cases where a static mapping needs to be
replaced and updated by a dynamic mapping, you can also enable Overwrite unique static
mappings at this server (migrate on) in Replication Partners Properties.
I need to locate the source of "network path not found" error messages on a WINS client.
Cause: The network path might contain the name of a server computer configured as a p-node,
m-node, or h-node and its IP address is different from the one in the WINS database. In this case,
the IP address for this computer might have changed recently and the new address has not yet
replicated to the local server.
Solution: Check the WINS database for the name and IP address mapping information. If they
are not current, you can start replication at the WINS server that owns the name record requiring
an update at other servers.
Section 3
3. Planning, Implementing, and Maintaining Server Availability
3.1 Planning for High Availability and Scalability
3.1.1 Overview
3.1.2 High Availability and Scalability Planning Process
3.1.3 Basic High Availability and Scalability Concepts
3.1.4 Defining Availability and Scalability Goals
3.1.5 Quantifying Availability and Scalability for Your Organization
3.2 Determining Availability Requirements
3.2.1 Determining Reliability Requirement
3.2.2 Determining Scalability Requirements
3.2.3 Analyzing Risk
3.2.4 Developing Availability and Scalability Goals
3.2.5 Details on Record That Help Define Availability Requirements
3.2.6 Users of IT Services
3.2.7 Requirements and Requests of End Users
3.2.8 Requirements for User Accounts, Networks, or Similar Types of Infrastructure
3.2.9 Time Requirements and Variations
3.3 Using IT Procedures to Increase Availability and Scalability
3.1.1 Overview
A highly available system reliably provides an acceptable level of service with minimal downtime.
Downtime penalizes businesses, which can experience reduced productivity, lost sales, and lost
faith from clients, partners, and customers.
By implementing recommended IT practices, you can increase the availability of key services,
applications, and servers. These practices also help you minimize both planned downtime, such
as for maintenance or service pack installations, and unplanned downtime, such as downtime
caused by a server failure.
Network failures
There are many components to a computer network, and there are many typical network
topologies that provide highly available connectivity. All types of networks need to be considered,
including client access networks and management networks. In storage area networks (SANs),
failures might include the storage fabrics that link the computers to the storage units.
Computer failures
Many enterprise-level server platforms provide redundancy inside the computer itself, such as
through redundant power supplies and fans. Vendors also allow components such as peripheral
component interconnect (PCI) cards and memory to be swapped in and out without removing the
computer from service. In cases where a computer fails or needs to be taken out of service for
routine maintenance or upgrades, clustering provides redundancy to enable applications or
services to continue. This redundancy happens automatically in clustering, either through failover
of the application (transferring client requests from one computer to another) or by having multiple
instances of the same application available for client requests.
Site failures
In extreme cases, a complete site can fail due to a total power loss, a natural disaster, or other
unusual occurrences. More and more businesses are recognizing the value of deploying mission-
critical solutions across multiple geographically dispersed sites. For disaster tolerance, a data
center's hardware, applications, and data can be duplicated at one or more geographically
remote sites. If one site fails, the other sites continue offering service until the failed site is
repaired. Sites can be active-active, where all sites carry some of the load, or active-passive,
where one or more sites are on standby.
You can prevent most of these failures by using the following methods:
Proven IT practices. IT strategies can help your organization avoid some or all of the
above failures. IT practices take on added importance when a clustering solution is not
applicable to, or even possible in, your particular deployment. Whether or not you choose to
deploy a clustering solution, all Windows deployments should at a minimum follow the
guidelines and reference the resources listed in this chapter for fault-tolerant hardware
solutions.
Clustering. This chapter introduces Windows Server 2003 clustering technologies and
provides an overview of how they work. Different kinds of clusters can be used together to
provide a true end-to-end high availability and scalability solution.
Basic Scalability Concepts
In general deployments, scalability is the measure of how well a service or application can grow
to meet increasing performance demands. When applied to clustering, scalability is the ability to
incrementally add systems to an existing cluster when the overall load of the cluster exceeds the
cluster's capabilities.
Scaling up
Scaling up involves adding system resources (such as processors, memory, disks, and
network adapters) to your existing hardware, or replacing existing hardware with hardware that
has greater system resources. Scaling up is appropriate when you want to improve client
response time, such as on a Network Load Balancing cluster. If the required number of users is
not properly supported, adding random access memory (RAM) or central processing units
(CPUs) to the server is one way to meet the demand.
Windows Server 2003 supports single or multiple CPUs that conform to the symmetric
multiprocessing (SMP) standard. Using SMP, the operating system can run threads on any
available processor, which makes it possible for applications to use multiple processors when
additional processing power is required to increase the capability of a system.
Scaling out
Scaling out involves adding servers to meet demand. In a server cluster, this means adding
nodes to the cluster. Scaling out is also appropriate when you want to improve client response
time with your servers, and when you have the hardware budget to purchase additional servers
as needed.
Testing and Pilot Deployments
Before you deploy any new solution, whether it is a fault-tolerant hardware or networking
component, a software monitoring tool, or a Windows clustering solution, you should thoroughly
test the solution before deploying it. After testing in an isolated lab, test the solution in a pilot
deployment in which only a few users are affected, and make any necessary adjustments to the
design. After you are satisfied with the pilot deployment, perform a full-scale deployment in your
production environment. Depending on the number of users you have, you might want to perform
your full-scale deployment in stages. After each stage, verify that your system can accommodate
the increased processing load from the additional users before deploying the next group of users.
Your goal in quantifying availability is to compare the costs of your current IT environment,
including the actual costs of outages, with the cost of implementing high availability solutions.
These solutions include training costs for your staff as well as facilities costs, such as costs for
new hardware. After you have calculated the costs, IT managers can use these numbers to make
business decisions, not just technical decisions, about your high availability solution.
Scalability is more difficult to quantify because it is based on future needs and therefore
requires a certain amount of estimation and prediction. Remember, though, that scalability is tied
to availability: if your system cannot grow to meet increased demand, certain services
will become less available to your users.
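As a rough worked example of that comparison (every figure below is hypothetical), the sketch
converts an availability level into expected annual downtime and weighs the avoided outage cost
against the yearly cost of the high availability solution:

    HOURS_PER_YEAR = 24 * 365

    def annual_downtime_hours(availability_percent):
        """Expected downtime per year at a given availability level."""
        return HOURS_PER_YEAR * (1 - availability_percent / 100)

    # Hypothetical inputs: outage cost per hour, current and target availability,
    # and the yearly cost of the solution (hardware, facilities, staff training).
    outage_cost_per_hour = 10_000
    current, target = 99.0, 99.9
    solution_cost_per_year = 250_000

    avoided = (annual_downtime_hours(current) - annual_downtime_hours(target)) * outage_cost_per_hour
    print(f"current downtime: {annual_downtime_hours(current):.1f} hours/year")   # 87.6
    print(f"target downtime:  {annual_downtime_hours(target):.2f} hours/year")    # 8.76
    print(f"avoided outage cost: ${avoided:,.0f} vs. solution cost ${solution_cost_per_year:,}")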
Recreate your Windows deployment as accurately as possible in a test environment and, either
manually or through a simulation program, put as much workload as possible on different areas of
your deployment. Observing your system under such circumstances can help you formulate
scaling priorities and anticipate where you might need to scale first.
After your system is deployed, software monitoring tools can alert you when certain components
of your system are near or at capacity. Use these tools to monitor performance levels and system
capacity so that you know when a scaling solution is needed.
Do you have data about the cost of outages or the effect of service delays or outages (for
example, information about the cost of an outage at 9 A.M. versus the cost of an outage at
9 P.M.)?
Do you have any data from groups that practice incident management, problem
management, availability management, or similar disciplines?
Figure 6.3 displays the process for deploying your servers and network infrastructure in a
fault-tolerant manner that also provides manageability.
Figure Using IT Procedures to Increase Availability and Scalability
To aid in this planning, Microsoft recommends the Microsoft Operations Framework (MOF). MOF
is a flexible, open-ended set of guidelines and concepts that you can adapt to your specific
operations needs. Adopting MOF practices provides greater organization and contributes to
regular communication between your IT department, your end users, and other departments in
your company that might be affected.
Make sure that pull-out, rack-mounted equipment has enough slack in the cables, and that the cables
do not bind and are not pinched or scraped. Set up good pathways for redundant sets of
cables. If you use multiple sources of power or network communications, try to route the
cables into the cabinets from different points. If one cable is severed, the other can continue
to function. Do not plug dual power supplies into the same power strip. If possible, use
separate power outlets or UPS units (ideally, connected to separate circuits) to avoid a single
point of failure.
Security of the computer room. For servers that must maintain high availability, restrict
physical access for all but designated individuals. In addition, consider the extent to which
you need to restrict physical access to network hardware. The details of how you implement
this depend on your physical facilities and your organization's structure and policies. When
reviewing the security in place for the computer room, also review your methods for
restricting access to remote administration of servers. Make sure that only designated
individuals have remote access to your configuration information and your administration
tools.
Multiple servers can work together in a server cluster configuration. Back-end applications and
services, such as messaging
applications like Microsoft Exchange or database applications like Microsoft SQL Server, are
ideal candidates for server clusters.
In server clusters, nodes share access to data. Nodes can be either active or passive, and the
configuration of each node depends on the operating mode (active or passive) and how you
configure failover in the cluster. A server that is designated to handle failover must be sized to
handle the workload of the failed node in addition to its own workload.
In Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition,
server clusters can contain up to eight nodes. Each node is attached to one or more cluster
storage devices, which allow different servers to share the same data. Because nodes in a server
cluster share access to data, the type and method of storage in the server cluster is very
important.
Changes made to one instance of a cloned stateless application can be replicated to the other
instances, because the dataset of stateless applications is relatively static. Because stateful
applications such as Microsoft Exchange or Microsoft SQL Server are updated with new data
frequently, they cannot be easily cloned and they are not good candidates for hosting on Network
Load Balancing clusters.
Component Load Balancing
CLB clusters address the unique scalability and availability needs of middle-tier (business)
applications that use the COM+ programming model. Organizations can load balance COM+
components over more than one node to dramatically enhance the availability and scalability of
software applications. CLB clusters, however, are a feature of Microsoft Application Center 2000.
For information about CLB clusters, see your Microsoft Application Center 2000 documentation.
3.5.6 Using Clusters to Increase Availability and Scalability
Different types of clusters provide different benefits. Network Load Balancing clusters are
designed to provide scalability because you can add nodes as your workload increases. Server
clusters increase availability of stateful applications, and they can also allow you to consolidate
servers and save on hardware costs. Figure 6.6 shows the steps for planning cluster deployment.
Figure Planning Cluster Deployment
The following table summarizes the number of servers a cluster can contain, by cluster type and
operating system.
Table Maximum Number of Nodes in a Cluster

Operating System                            Network Load Balancing   Component Load Balancing*   Server Cluster
Microsoft Windows 2000 Advanced Server      32                       12                          2
Microsoft Windows 2000 Datacenter Server    32                       12                          4
Windows Server 2003, Standard Edition       32                       12                          N/A
Windows Server 2003, Enterprise Edition     32                       12                          8
Windows Server 2003, Datacenter Edition     32                       12                          8
* Component Load Balancing is not included with the Windows Server 2003 operating system
and runs only on Microsoft Application Center 2000. You can use CLB clusters with Windows
Server 2003, provided that you use Microsoft Application Center 2000 Service Pack 2 or later.
For complete information about Component Load Balancing, see your Microsoft Application
Center 2000 documentation.
Network Load Balancing Clusters also run on Windows Server 2003, Web Edition; the maximum
number of nodes is 32.
Consolidating clusters reduces the total number of nodes, which also reduces availability. This is
a tradeoff that your organization should evaluate.
The following figure shows two clusters before consolidation: two separate two-node clusters,
each with one active and one passive node, providing service for a group of clients.
Figure Two Server Clusters Before Consolidation
Two clusters, each dedicated to a separate application, generally have higher availability than
one cluster hosting both applications. This depends on, among other factors, the available
capacity of each server, other programs that might be running on the servers, and the hosted
applications themselves. However, if a potential loss in availability is acceptable to your
organization, you can consolidate servers.
The following figure shows what these clusters would look like if they were consolidated into a
single three-node cluster. There is a potential loss in availability because, in the event of a failure,
both active nodes fail over to the same passive node, whereas before consolidation, each
active node had a dedicated passive node. In this example, if both active nodes were to fail at the
same time, the single passive node might not have the capacity to take on the workloads of both
nodes simultaneously. Your organization must consider such factors as the likelihood of multiple
failures in the server cluster, the importance of keeping the applications in this server cluster
available, and whether the potential loss in services is worth the money saved by server consolidation.
Figure Consolidated Server Cluster
You begin the server cluster design and deployment process by defining your high-availability
needs. After you determine the applications or services you want to host on a server cluster, you
need to understand the clustering requirements of those applications or services. Next, design
your server cluster support network, making sure that you protect your data from failure, disaster,
or security risks. After you evaluate and account for all relevant hardware, software, network, and
security factors, you are ready to deploy a server cluster. Figure 7.1 illustrates the server cluster
design process.
Figure Designing and Deploying Server Clusters
Resource group A collection of related cluster resources that is managed as a single unit.
Resource groups are indivisible units that are hosted on one node at any point in time. During
failover, resource groups are transferred from one node to another.
Virtual server A collection of services that appear to clients as a physical Windows-based
server but are not associated with a specific server. A virtual server is typically a resource group
that contains all of the resources needed to run a particular application and can be failed over like
any other resource group.
Failover The process of taking resource groups offline on one node and bringing them back
online on another node. When a resource group goes offline, all resources belonging to that
group go offline. The offline and online transitions occur in a predefined order. Resources that are
dependent on other resources are taken offline before and brought online after the resources
upon which they depend.
Failback The process of moving resources, either individually or in a group, back to their
original node after a failed node rejoins a cluster and comes back online.
Quorum resource The quorum-capable resource selected to maintain the configuration data
necessary for recovery of the cluster. This data contains details of all of the changes that have
been applied to the cluster database. The quorum resource is generally accessible to other
cluster resources so that any cluster node has access to the most recent database changes. By
default there is only one quorum resource per cluster.
In addition, server clusters now support the Kerberos version 5 authentication protocol.
Scripting An application can be made server cluster-aware through scripting (both VBScript and
JScript are supported), rather than through resource dynamic-link library (DLL) files. Unlike DLLs,
scripting does not require knowledge of the C or C++ programming languages, which means
scripts are easier for developers and administrators to create and implement. Scripts are also
easier to customize for your applications.
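As an illustration, the following is a minimal sketch of such a script in VBScript. The entry-point
names (Online, Offline, LooksAlive, IsAlive) follow the Generic Script resource model; the logged
messages and the trivial health checks are placeholders that a real resource would replace with
application-specific logic.

    ' Minimal Generic Script resource sketch (VBScript).
    ' The Resource object is supplied by the cluster script host.

    Function Online( )
        Resource.LogInformation "Resource is coming online."
        Online = true
    End Function

    Function Offline( )
        Resource.LogInformation "Resource is going offline."
        Offline = true
    End Function

    Function LooksAlive( )
        ' Quick, inexpensive check; a real script would test the application.
        LooksAlive = true
    End Function

    Function IsAlive( )
        ' Thorough check; returning false triggers failover.
        IsAlive = true
    End Function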
Majority node set clusters In every cluster, the quorum resource maintains all configuration
data necessary for the recovery of the cluster. In majority node set clusters, the quorum data is
stored on each node, allowing for, among other things, geographically dispersed clusters.
Before you deploy a server cluster, take an inventory of your current IT environment and create a
functional specification that you can use to build a clear and thorough cluster deployment plan.
In addition, gather the following materials and have them available for reference during installation:
materials and have them available for reference during installation:
A list of all services and applications to be deployed on server clusters.
A plan that defines which applications are to be installed on which nodes.
Failover policies for each service or application, including resource group planning.
A selected quorum model.
A physical and logical security plan for the cluster.
Specifications for capacity requirements.
Documentation for your storage system.
A list of Windows Server Catalog-approved device drivers for your network hardware and
storage systems.
Documentation supplied with all cluster hardware and all applications or services that will
be deployed on the server cluster.
An IP addressing scheme for the cluster networks, both private and public.
A selected cluster name, its length limited by NetBIOS parameters.
As you create your Network Load Balancing design, document your decisions and use that
information to deploy your Network Load Balancing solution.
NLB Fundamentals
To create a successful Network Load Balancing design and to ensure that Network Load
Balancing is correct for your solution, you need to know the fundamentals of how Network Load
Balancing provides improved scalability and availability, and how Network Load Balancing
compares with other strategies for providing scalability and availability.
3.6.8 How NLB Provides Improved Scalability and Availability
Network Load Balancing improves scalability and availability by distributing client traffic across
the servers that you include in the Network Load Balancing cluster. Each cluster host (a server
that belongs to the cluster) runs an instance of the applications supported by your cluster. Network
Load Balancing transparently distributes client requests among the cluster hosts. Clients access
your cluster by using one or more virtual IP addresses. From the perspective of the client, the
cluster appears to be a single server that answers the client request.
As the scalability and availability requirements of your solution change, you can add or remove
servers from the cluster as necessary. Network Load Balancing automatically distributes client
traffic to take advantage of any servers that you add to the cluster. In addition, when you remove
a server from the cluster, Network Load Balancing redistributes the client traffic among the
remaining servers in the cluster.
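For example, you can take a host out of the cluster for maintenance and later return it, without
interrupting service, by using the cluster control commands on that host. A short sketch (shown
with the legacy wlbs.exe name; output varies by configuration):

    C:\>wlbs drainstop    (completes existing connections, then stops cluster operations on this host)
    C:\>wlbs start        (returns the host to the cluster; client traffic is redistributed automatically)
    C:\>wlbs query        (displays the current state of the cluster and its hosts)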
As an example, assume that your organization has a Web application farm running Microsoft
Internet Information Services (IIS) version 6.0 that hosts your organization's Internet presence. As
seen in Figure 8.2, Network Load Balancing allows your individual Web application servers to
service client requests from the Internet by distributing them across the cluster. On each of the
servers, you install IIS 6.0 and Network Load Balancing. By combining the individual Web
application servers into a Network Load Balancing cluster, you can load balance the requests to
improve client response times and to provide improved fault tolerance in the event that one of the
Web application servers fails.
Figure Network Load Balancing Cluster in a Web Farm
Network Load Balancing automatically detects and recovers when an entire server fails or is
manually disconnected from the network. However, Network Load Balancing is unaware of the
applications and services running on the cluster, and it does not detect failed applications or
services. To provide awareness of application or service failures, you need to add management
software, such as Microsoft Operations Manager (MOM) 2000, Microsoft Application Center
2000, a third-party application, or software developed by your organization.
When your design requires fault tolerance for servers that support your Network Load Balancing
cluster, such as servers running Microsoft SQL Server 2000, include Microsoft server
clusters. For example, you can improve the availability of the network database (SQLCLUSTR-01
in the figure) by creating a two-node server cluster.
Network Load Balancing runs as an intermediate network driver in the Windows Server 2003
network architecture. Network Load Balancing is logically situated beneath higher-level
application protocols, such as Hypertext Transfer Protocol (HTTP) and File Transfer Protocol
(FTP), and above the network adapter drivers. Figure 8.3 illustrates the relationship of Network
Load Balancing in the Windows Server 2003 network architecture.
Figure Network Load Balancing in the Windows Server 2003 Network Architecture
To maximize throughput and to provide high availability in your solution, Network Load Balancing
uses a distributed software architecture. A copy of the Network Load Balancing driver runs on
each host in the cluster. The Network Load Balancing drivers allow all hosts in the cluster to
concurrently receive incoming network traffic for the cluster.
On each host in the cluster, the driver acts as an intermediary between the network adapter driver
and the TCP/IP stack. This allows a subset of the incoming network traffic to be received by the
host. Network Load Balancing uses this filtering mechanism to distribute incoming client requests
among the servers in the cluster.
Network Load Balancing architecture maximizes throughput by using a common media access
control (MAC) address to deliver incoming network traffic to all hosts in the cluster. As a result,
there is no need to route incoming packets to the individual hosts in the cluster. Because filtering
unwanted network traffic is faster than routing packets (which involves receiving, examining,
rewriting, and resending), Network Load Balancing delivers higher network throughput than
dispatcher-based software load balancing solutions. Also, as you add hosts to your Network Load
Balancing cluster, the scalability grows proportionally, and any dependence on a particular host
diminishes.
Because Network Load Balancing load balances client traffic across multiple servers, it provides
higher availability in your solution. One or more cluster hosts can fail, but the cluster continues to
service client requests as long as any cluster hosts are running.
NLB and Round Robin DNS
Round robin Domain Name System (DNS) is a software method for distributing workload among
multiple servers, but it does not detect server outages. If one of the servers
fails, round robin DNS continues sending client requests to the failed server until a network
administrator detects the failure and removes the server from the DNS address list. This results in
service disruption for clients.
In contrast, Network Load Balancing automatically detects servers that have been disconnected
from the cluster and redistributes client requests to the remaining servers. Unlike round robin
DNS, this prevents clients from sending requests to the failed servers.
You can automate the deployment of Windows Server 2003 and Network Load Balancing cluster
hosts by using one of the following methods:
Unattended installation
Sysprep
Remote Installation Services (RIS)
The automated deployment methods differ in several characteristics: whether the method uses
images to deploy the installation, uses scripts to customize the installation, supports
post-installation scripts that install applications, is deployed by using local drives on the target
server, can be initiated on headless servers, and is deployed from a network share. Compare
unattended installation, Sysprep, and RIS against these characteristics when you select a method.
Depending on the requirements of each cluster, more than one method might be required to
deploy all your Network Load Balancing clusters
For Network Load Balancing, you need to perform specific tasks when creating the unattended
installation and Sysprep script files:
Review the content in the Microsoft Windows Preinstallation Reference that relates to the
[MS_WLBS parameters] section.
WLBS stands for "Windows NT Load Balancing Service," the name of the load balancing
service used in Microsoft Windows NT Server version 4.0. For reasons of backward
compatibility, WLBS continues to be used in certain instances.
Ensure that the IP addresses for the cluster and all virtual clusters are entered in the
IPAddress parameter under the [MS_TCPIP parameters] section.
Typically, Network Load Balancing Manager automatically adds the cluster and virtual cluster
IP addresses to the list of IP addresses. Both unattended installation and Sysprep require
that you add the addresses to the IPAddress parameter under the [MS_TCPIP parameters]
section of the script.
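As a minimal sketch, the relevant fragment of such a script might look like the following. The
section and parameter names are those cited above; the addresses (a dedicated IP address
followed by the cluster IP address) and the comma-separated value format are assumptions for
illustration only.

    [MS_TCPIP parameters]
    ; List the dedicated IP address plus the cluster and any virtual cluster IP addresses.
    IPAddress = 10.0.0.11,10.0.0.100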
In addition to deciding that the new Web farm will run on a Network Load Balancing cluster, the
design team also made the following configuration decisions:
No single router, switch, or Internet Information Services (IIS) Web farm server failure will
prevent users from running the Web application.
The Web application will store its data in a database running Microsoft SQL Server 2000
on a server cluster.
The Web application executables (Active Server Pages (ASP) pages, Hypertext Markup
Language (HTML) pages, and other executable code) will be stored on a file server running on a
server cluster.
Accounts used for authenticating Internet users will be stored in the Active Directory
directory service.
As the first step in the organization's deployment of the Web application, the IIS 6.0 Web farm,
and Network Load Balancing, the organization must restructure the network infrastructure to
support the new Web farm and cluster. The following figure illustrates the organization's network
environment after preparing for the implementation.
Figure Network Environment After Preparing to Implement a New Cluster
The following table lists the deployment steps that were performed prior to the implementation of
the new cluster and the reasons for performing those steps.
Table Deployment Steps Prior to Implementation of the New Cluster

Deployment Step: Add Firewall-02 and Firewall-03.
Reason: Provide redundancy and load balancing.

Deployment Step: Add Switch-01 and Switch-02.
Reason: Provide redundancy and load balancing.

Deployment Step: Add network segments on Switch-01 and Switch-02.
Reason: Connect the IIS 6.0 Web farm to the network.

Deployment Step: Configure Switch-01 and Switch-02 to belong to the same VLAN.
Reason: Provide load balancing of client requests by using Network Load Balancing.

Deployment Step: Add SQLCLUSTR-01.
Reason: Provide database support for the Web application on a Microsoft server cluster.

Deployment Step: Add FILECLUSTR-01.
Reason: Provide secured storage for the Web application executables and content on a Microsoft server cluster.

Deployment Step: Add DC-01 and DC-02.
Reason: Provide storage and management of the user accounts used in authenticating Internet users.
3.7.2 Installing and Configuring the Hardware and Windows Server 2003
The first step in performing the implementation of the new cluster is to install and configure the
hardware and Windows Server 2003 for each cluster host. Install and configure all cluster host
hardware at the same time to ensure that you eliminate any configuration errors prior to installing
and configuring the Network Load Balancing cluster.
To install and configure Windows Server 2003 on the cluster host hardware, you must be logged
on as a user account that is a member of the local administrators group on all cluster hosts.
Install and configure the cluster host by using the information documented in the "NLB Cluster
Host Worksheet" that your design team completed for that host during the design process.
To install and configure the hardware and Windows Server 2003 on each cluster host in the new
cluster, complete the following tasks:
1. Install the cluster host hardware in accordance with the manufacturer's
recommendations.
2. Connect the cluster host hardware to the network infrastructure.
3. Install Windows Server 2003 with the default options and specifications from the
worksheet for the cluster host.
4. Install any additional services (such as IIS 6.0 or Routing and Remote Access) by using
the design specifications for the service.
5. Configure the TCP/IP property settings and verify connectivity for the cluster adapters.
6. If a separate management network is used, configure the TCP/IP property settings and
verify connectivity for the management adapter.
7. Configure each server to be a member server in a domain created specifically for
managing the cluster and other related servers.
Verify that the first Network Load Balancing cluster host responds to client queries by
directing requests to the cluster IP address.
Test the first cluster host by specifying the cluster IP address or a virtual cluster IP address in
the client software that is used to access the application or service running on the cluster. For
example, a client accessing an IIS application would put the cluster IP address or virtual
cluster IP address in the Web browser address line.
The following table lists the deployment steps that were performed to implement the new cluster and the
reasons for performing those steps.
Table Deployment Steps for Implementing the New NLB Cluster

Deployment Step: Add IIS-01, IIS-02, IIS-03, IIS-04, IIS-05, IIS-06, IIS-07, and IIS-08 server hardware.
Reason: Server hardware needs to be connected to the network infrastructure in preparation for
Network Load Balancing deployment.

Deployment Step: Install Windows Server 2003 and Network Load Balancing on IIS-01 by using
unattended installation.
Reason: Unattended setup is chosen because of the limited number of hosts to be deployed.

Deployment Step: Create an image of IIS-01 to use as a model for RIS deployment.
Reason: RIS allows the servers to be reimaged in the event of a server failure.

Deployment Step: Deploy the image on IIS-02, IIS-03, IIS-04, IIS-05, IIS-06, IIS-07, and IIS-08.
Reason: Image deployment ensures a consistent configuration on all servers in the Network Load
Balancing cluster.

Deployment Step: Verify that the Web farm responds to client requests.
Reason: Verification ensures that the Web farm is properly configured and that Network Load
Balancing is load balancing client requests.
The Backup utility in the Microsoft Windows Server 2003 operating systems helps you protect
your data if your hard disk fails or files are accidentally erased due to hardware or storage media
failure. By using Backup, you can create a duplicate copy of the data on your hard disk and then
archive it on another storage device, such as a hard disk or a tape.
If the original data on your hard disk is accidentally erased or overwritten, or becomes
inaccessible because of a hard-disk malfunction, you can easily restore it from the disk or
archived copy.
Backup uses the Volume Shadow Copy service to create an accurate copy of the contents of
your hard drive, including any open files or files that are being used by the system. Users can
continue to access the system while Backup is running, without risking loss of data
Using Backup, you can:
Archive selected files and folders on your hard disk.
Restore the archived files and folders to your hard disk or any other disk you can access.
Make a copy of your computer's System State data.
Use Automated System Recovery (ASR) to create a backup set that contains the System
State data, system services, and all disks associated with the operating system components.
ASR also creates a floppy boot disk that contains information about the backup, the disk
configurations (including basic and dynamic volumes), and how to restore your system.
Make a copy of any Remote Storage data and any data stored in mounted drives.
Create a log of what files were backed up and when the backup was performed.
Make a copy of your computer's system partition, boot partition, and the files needed to
start up your system in case of a computer or network failure.
Schedule regular backups to keep your archived data up-to-date.
Backup also performs simple media management functions such as formatting. You can perform
more advanced management tasks such as mounting and dismounting a tape or disk by using
Removable Storage, which is a feature in Windows Server 2003.
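Backup can also be run from the command line as ntbackup.exe, which is convenient for
scheduled jobs. A minimal sketch (the job name and backup file path are hypothetical):

    C:\>ntbackup backup systemstate /J "Nightly System State" /F "D:\Backups\sysstate.bkf"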
With a normal backup, you need only the most recent copy of the backup file or tape to restore all
of the files. You usually perform a normal backup the first time you create a backup set.
If you want the quickest backup method that requires the least amount of storage space, back up
your data by using a combination of normal backups and incremental backups. However,
recovering files from this combination of backups can be time-consuming and difficult because
the backup set might be stored on several disks or tapes. For example, with a normal backup on
Sunday and incremental backups Monday through Saturday, a restore after a failure on Friday
requires the Sunday normal backup plus every incremental backup through Thursday.
If you want to restore your data more easily, back up your data by using a combination of normal
backups and differential backups; the same Friday restore would then require only the Sunday
normal backup and Thursday's differential backup. This backup set is usually stored on only a few
disks or tapes. However, this combination of backups is more time-consuming to create.
Volume Shadow Copy Service
Backup uses the Volume Shadow Copy service to create a volume shadow copy, which is an
accurate copy of the contents of your hard drive, including any open files, files that are being
used by the system, and databases that are held open exclusively.
Backup uses the Volume Shadow Copy service to ensure that:
Applications can continue to write data to the volume during a backup.
Files that are open are no longer omitted during a backup.
Backups can be performed at any time, without locking out users.
If you choose to disable the volume shadow copy using advanced options or if the service fails,
Backup will revert to creating a backup without the Volume Shadow Copy service technology. If
this occurs, Backup skips files that are open or in use by other applications at the time of the
backup.
Important
Some applications manage storage consistency differently while files are open, which
can affect the consistency of the files in a backup. For critical applications, consult the
application documentation or your provider for information about the recommended backup
method. When in doubt, close the application before performing a backup.
When you choose to back up or restore the System State data, all of the System State data that
is relevant to your computer is backed up or restored. You cannot choose to back up or restore
individual components of the System State data because of dependencies among the System
State components. However, you can restore the System State data to an alternate location. If
you do this, only the registry files, SYSVOL directory files, cluster database information files, and
system boot files are restored to the alternate location. The Active Directory database, Certificate
Services database, and COM+ Class Registration database are not restored if you designate an
alternate location when you restore the System State data.
Files Under Windows File Protection
Backup works together with the catalog file for the Windows File Protection service when backing
up and restoring boot and system files. System files are backed up and restored as a single
entity. The Windows File Protection service catalog file, located in the folder
systemroot\system32\catroot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}, is backed up with
the system files.
In Windows NT 4.0 and earlier, backup programs could selectively back up and restore
operating system files. However, Windows 2000 Server and Windows Server 2003 do not allow
incremental restores of operating system files.
There is an Advanced Backup option that automatically backs up protected system files with the
System State data. All of the system files that are in the systemroot\ directory and the startup files
that are included with the System State data are backed up when you use this option.
Automated System Recovery (ASR) is a part of Backup that you can use to recover a system that
will not start. With ASR, you can create ASR sets on a regular basis as part of an overall plan for
system recovery in case of system failure. You should use ASR as a last resort in system
recovery, only after you have exhausted other options such as the startup options Safe Mode and
Last Known Good Configuration.
ASR is a recovery option that has two parts: ASR backup and ASR restore. You can access the
backup portion through the Automated System Recovery Preparation Wizard located in
Backup. The Automated System Recovery Preparation Wizard creates an ASR set, which is a
backup of the System State data, system services, and all disks associated with the operating
system components. It also creates a floppy disk, which contains information about the backup,
the disk configurations (including basic and dynamic volumes), and how to restore your system.
You can access the restore part of ASR by pressing F2 when prompted in the text mode portion
of Setup. ASR reads the disk configurations from the floppy disk and restores all of the disk
signatures, volumes and partitions on the disks that are required to start your computer (at a
minimum). It will attempt to restore all of the disk configurations, but under some circumstances it
might not be able to. ASR then installs a simple installation of Windows and automatically starts
to restore from backup using the backup ASR set.
Note
ASR does not include data files. You should back up data files separately on a regular
basis and restore them after the system is working.
ASR only supports FAT16 volumes up to 2.1 gigabytes (GB). ASR does not support 4-
GB FAT16 partitions that use a cluster size of 64 K. If your system contains 4-GB FAT16
partitions, convert them from FAT16 to NTFS before using ASR.
Primary restore Use this type of restore when the server you are trying to restore is the only
running server of a replicated data set (for example, the SYSVOL and File Replication service are
replicated data sets). Select primary restore only when restoring the first replica set to the
network. Do not use a primary restore if one or more replica sets have already been restored.
Typically, perform a primary restore only when all the domain controllers in the domain have
failed, and you are trying to rebuild the domain from backup.
Distributed Data     Reason for Using Primary Restore of System State Data
Active Directory     Restoring a stand-alone domain controller. Restoring the first of several domain controllers.
SYSVOL               Restoring a stand-alone domain controller. Restoring the first of several domain controllers.
Replica sets         Restoring the first replica set.
Normal restore During a normal restore operation, Backup operates in nonauthoritative restore
mode. That is, any data that you restore, including Active Directory objects, will have their original
update sequence number. The Active Directory replication system uses this number to detect and
propagate Active Directory changes among the servers in your organization. Because of this, any
data that is restored nonauthoritatively will appear to the Active Directory replication system as
though it is old, which means the data will never get replicated to your other servers. Instead, if
newer data is available from your other servers, the Active Directory replication system will use
this to update the restored data. To replicate the restored data to the other servers, you must use
an authoritative restore.
Distributed Data     Reason for Using Normal Restore of System State Data
Active Directory     Restoring a single domain controller in a replicated environment.
SYSVOL               Restoring a single domain controller in a replicated environment.
Replica sets         Restoring all but the first replica set (that is, sets 2 through n, for n replica sets).
Authoritative restore To authoritatively restore Active Directory data, you need to run the
Ntdsutil utility after you have restored the System State data but before you restart the server.
The Ntdsutil utility lets you mark Active Directory objects for authoritative restore. When an object
is marked for authoritative restore its update sequence number is changed so that it is higher
than any other update sequence number in the Active Directory replication system. This will
ensure that any replicated or distributed data that you restore is properly replicated or distributed
throughout your organization.
For example, if you inadvertently delete or modify objects stored in Active Directory, and those
objects are replicated or distributed to other servers, you will need to authoritatively restore those
objects so they are replicated or distributed to the other servers. If you do not authoritatively
restore the objects, they will never get replicated or distributed to your other servers because they
will appear to be older than the objects currently on your other servers. Using the Ntdsutil utility to
mark objects for authoritative restore ensures that the data you want to restore gets replicated or
distributed throughout your organization. On the other hand, if your system disk has failed or the
Active Directory database is corrupted, then you can simply restore the data nonauthoritatively
without using the Ntdsutil utility.
You can run the Ntdsutil command-line utility from the command prompt. Help for the Ntdsutil
utility is available through the command prompt by typing ntdsutil /?.
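A typical authoritative restore session looks like the following sketch. It is run after you restart the
domain controller in Directory Services Restore Mode and restore the System State data, and
before the normal restart; the distinguished name of the subtree is hypothetical:

    C:\>ntdsutil
    ntdsutil: authoritative restore
    authoritative restore: restore subtree "OU=Sales,DC=contoso,DC=com"
    authoritative restore: quit
    ntdsutil: quit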
Distributed Data     Reason for Using Authoritative Restore of System State Data
Active Directory     Rolling back or undoing changes.
SYSVOL               Resetting data.
There are three main components that make up Shadow Copies for Shared Folders: the source
volume, the storage volume, and the Previous Versions Client.
Shadow Copies for Shared Folders Components
Source volume The volume that is being copied. This volume is located on a server running
Windows Server 2003 and has Shadow Copies for Shared Folders enabled. The volume contains
several folders that are shared on the network. This is typically a volume on a file server.
Storage volume The volume where shadow copies are stored.
Previous Versions Client The interface that is used to view shadow copies. The client is available
for Windows Server 2003, Windows XP, and Windows 2000 operating systems.
The following diagram shows the main components of Shadow Copies for Shared Folders and
how they interact.
Shadow Copies for Shared Folders Architecture
Source Volume
The source volume is the volume that is being copied, typically a volume on a file server. Shadow
Copies for Shared Folders is enabled on a per-volume basis. That is, you can only make shadow
copies of an entire volume. You cannot select specific shared folders and files on a volume to be
copied.
The volume must reside on a server running Windows Server 2003 and must be formatted using
the NTFS file system. Shadow Copies for Shared Folders is built upon the Volume Shadow Copy
service technology, which provides a way to make copies of open files.
If a volume that has shadow copies enabled contains mount points, the mounted drives will not
be included when shadow copies are taken. You should enable shadow copies only on volumes
without mount points, or only when you do not want the shared resources on the mounted volume
to be copied.
You can access the server portion of Shadow Copies for Shared Folders through the Shadow
Copies tab of the Local Disk Properties dialog box.
Note
When you enable Shadow Copies for Shared Folders on a volume, a default scheduled
task is also created. The default schedule for copies is twice a day at 7:00 A.M. and 12:00
noon, Monday through Friday.
Storage Volume
The storage volume is where shadow copies are stored. Shadow copies can be stored on any
NTFS-formatted volume that is available to the server. Because of the high I/O involved in
creating the copies, we recommend that you store the shadow copies on a volume on a separate
disk from the disk that contains the source volume.
Shadow Copies for Shared Folders works by making a block-level copy of any changes that have
occurred to a file since the last shadow copy was created. The file changes are copied and stored
as blocks, or units of data. Generally, the entire file is not copied. Only the previous values of the
changed blocks are copied to the storage area. As a result, previous versions of files do not
usually take up as much disk space as the current file.
However, the amount of disk space that is used for changes can vary, depending on the
application that changed the file. For example, some applications rewrite the entire file when a
change is made, but other applications add changes to the existing file. If the entire file is
rewritten to disk when a change is made, then the shadow copy contains the entire file.
The minimum amount of storage space that you can specify to be used for storing shadow copies
on the storage volume is 400 megabytes (MB). The default storage size is 10% of the source
volume (the volume being copied). When the storage limit is reached, older versions of the
shadow copies will be deleted and cannot be restored. There is also a limit of 64 shadow copies
per volume that can be stored. When this limit is reached, the oldest shadow copy will be deleted
and cannot be retrieved.
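You can inspect and adjust these settings from the command line with the vssadmin tool that is
included with Windows Server 2003. A short sketch (the drive letters and size limit are
hypothetical):

    C:\>vssadmin add shadowstorage /for=E: /on=F: /maxsize=800MB    (associate storage on F: for copies of E:)
    C:\>vssadmin create shadow /for=E:                              (create a shadow copy of E: immediately)
    C:\>vssadmin list shadows                                       (list the existing shadow copies)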
Important
Shadow copies are read-only. You cannot edit the contents of a shadow copy.
File Permissions
When you restore a file to a previous version, the file permissions will not be changed. File
permissions remain the same as they were before the file was restored. When you recover a file
that was accidentally deleted, the file permissions are set to the default permissions for the
directory the file is in. This directory might have different permissions than the file.
Section 4
The Microsoft Windows implementation of IPSec is based on standards developed by the Internet
Engineering Task Force (IETF) IPSec working group. For a list of relevant IPSec RFCs, see the
Related Information section later in this subject.
The following sections describe IPSec scenarios that are recommended, IPSec scenarios that are
not recommended, and IPSec scenarios that require special consideration.
You can also use IPSec with the IP packet-filtering capability or NAT/Basic Firewall component of
the Routing and Remote Access service to permit or block inbound or outbound traffic, or you can
use IPSec with the Internet Connection Firewall (ICF) component of Network Connections, which
provides stateful packet filtering. However, to ensure proper Internet Key Exchange (IKE)
management of IPSec security associations (SAs), you must configure ICF to permit the UDP
port 500 and UDP port 4500 traffic that is needed for IKE messages.
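On systems where the netsh firewall context is available (it was added in a service pack after the
original release of Windows Server 2003), the required openings might be scripted as in the
following sketch; the rule names are arbitrary:

    C:\>netsh firewall add portopening protocol=UDP port=500 name=IKE
    C:\>netsh firewall add portopening protocol=UDP port=4500 name=IPSec-NAT-T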
End-to-End Security Between Specific Hosts
IPSec establishes trust and security from a unicast source IP address to a unicast destination IP
address (end-to-end). For example, IPSec can help secure traffic between Web servers and
database servers or domain controllers in different sites. As shown in the following figure, only the
sending and receiving computers need to be aware of IPSec. Each computer handles security at
its respective end and assumes that the medium over which the communication takes place is not
secure. The two computers can be located near each other, as on a single network segment, or
across the Internet. Computers or network elements that route data from source to destination
are not required to support IPSec.
Securing Communications Between a Client and a Server by Using IPSec
The following figure shows domain controllers in two forests that are deployed on opposite sides
of a firewall. In addition to using IPSec to help secure all traffic between domain controllers in
separate forests, as shown in the figure, you can use IPSec to help secure all traffic between two
domain controllers in the same domain and between domain controllers in parent and child
domains.
Securing Communications Between Two Domain Controllers in Different Forests by Using
IPSec
Secure Server
You can require IPSec protection for all client computers that access a server. In addition, you
can set restrictions on which computers are allowed to connect to a server running Windows
Server 2003. The following figure shows IPSec in transport mode securing a line of business
(LOB) application server.
Securing an Application Server by Using IPSec
In this scenario, an application server in an internal corporate network must communicate with
clients running Windows 2000 or Windows XP Professional; a Windows Internet Name Service
(WINS) server, Domain Name System (DNS) server, and Dynamic Host Configuration Protocol
(DHCP) server; Active Directory domain controllers; and a non-Microsoft data backup server. The
users on the client computers are company employees who access the application server to view
their personal payroll information and performance review scores. Because the traffic between
the clients and the application server involves highly sensitive data, and because the server
should only communicate with other domain members, the network administrator uses an IPSec
policy that requires ESP encryption and communication only with trusted computers in the Active
Directory domain.
Other traffic is permitted as follows:
Traffic between the WINS server, DNS server, DHCP server, and the application server
is permitted because WINS servers, DNS servers, and DHCP servers must typically
communicate with computers that run a wide range of operating systems, some of which
might not support IPSec.
Traffic between Active Directory domain controllers and the application server is
permitted, because using IPSec to secure communication between domain members and
their domain controllers is not a recommended usage.
Traffic between the non-Microsoft data backup server and the application server is
permitted because the non-Microsoft backup server does not support IPSec.
L2TP/IPSec for Remote Access and Site-to-Site VPN Connections
You can use L2TP/IPSec for all VPN scenarios. This does not require the configuration and
deployment of IPSec policies. Two common scenarios for L2TP/IPSec are securing
communications between remote access clients and the corporate network across the Internet
and securing communications between branch offices.
Note
Windows IPSec supports both IPSec transport mode and tunnel mode. Although VPN
connections are commonly referred to as tunnels, IPSec transport mode is used for
L2TP/IPSec VPN connections. IPSec tunnel mode is most commonly used to help protect
site-to-site traffic between networks, such as site-to-site networking through the Internet.
L2TP/IPSec for remote access connections
A common requirement for organizations is to secure communications between remote access
clients and the corporate network across the Internet. Such a client might be a sales consultant
who spends most of the time traveling, or an employee working from a home office. In the
following figure, the remote gateway is a server that provides edge security for the corporate
intranet. The remote client represents a roaming user who requires regular access to network
resources and information. An ISP is used as an example to demonstrate the path of
communication when the client uses an ISP to access the Internet. L2TP/IPSec provides a
simple, efficient way to build a VPN tunnel and help protect the data across the Internet.
Securing Remote Access Clients by Using L2TP/IPSec
In this site-to-site scenario, traffic is sent between a client computer in a vendor site (Site A) and
a File Transfer Protocol (FTP) server at the corporate headquarters site (Site B). Although an FTP
server is used for this scenario, the traffic can be any unicast IP traffic. The vendor uses a non-
Microsoft IPSec-enabled gateway, while corporate headquarters uses a gateway running
Windows Server 2003. An IPSec tunnel is used to secure traffic between the non-Microsoft
gateway and the gateway running Windows Server 2003.
Scenarios for Which IPSec Is Not Recommended
IPSec policies can be quite complex to configure and manage. Additionally, IPSec can incur
performance overhead to establish and maintain secure connections, and it can incur network
latency. In some deployment scenarios, the lack of standard methods for user authentication and
address assignment makes IPSec an unsuitable choice. Because IPSec depends on IP addresses
for establishing secure connections, you cannot specify dynamic IP addresses. It is often
necessary for a server to have a static IP address in IPSec policy filters. In large network
deployments and in some mobile user cases, using dynamic IP addresses at both ends of the
connection can increase the complexity of IPSec policy design. For these reasons, IPSec is not
recommended for the following scenarios:
Securing communication between domain members and their domain controllers
Securing all traffic in a network
Securing traffic for remote access VPN connections using IPSec tunnel mode
4.1.3 Securing Communication Between Domain Members and Their Domain Controllers
Using IPSec to help secure traffic between domain members (either clients or servers) and their
domain controllers is not recommended because:
If domain members were to use IPSec-secured communication with domain controllers,
increased latency might occur, causing authentication and the process of locating a domain
controller to fail.
Complex IPSec policy configuration and management is required.
Increased load is placed on the domain controller CPU to maintain SAs with all domain
members. Depending on the number of domain members in the domain controller's domain,
such a load might overburden the domain controller.
4.1.4 Securing All Traffic in a Network
In addition to reducing network performance, using IPSec to help secure all traffic in a network is
not recommended because:
IPSec cannot secure multicast and broadcast traffic.
Traffic from real-time communications, applications that require Internet Control Message
Protocol (ICMP), and peer-to-peer applications might be incompatible with IPSec.
Network management functions that must inspect the TCP, UDP, and other protocol
headers are less effective, or cannot function at all, due to IPSec encapsulation or encryption
of IP payloads.
4.1.5 Securing Traffic for Remote Access VPN Connections by Using IPSec Tunnel Mode
IPSec tunnel mode is not a recommended technology for remote access VPN connections,
because there are no standard methods for user authentication, IP address assignment, and
name server address assignment. Using IPSec tunnel mode for gateway-to-gateway VPN
connections is possible using computers running Windows Server 2003. But because the IPSec
tunnel is not represented as a logical interface over which packets can be forwarded and
received, routes cannot be assigned to use the IPSec tunnel and routing protocols do not operate
over IPSec tunnels. Therefore, the use of IPSec tunnel mode is only recommended as a VPN
solution for site-to-site VPN connections in which one end of the tunnel is a non-Microsoft VPN
server or security gateway that does not support L2TP/IPSec. Instead, use L2TP/IPSec or PPTP
for remote access VPN connections.
IPSec Uses That Require Special Considerations
The following scenarios merit special consideration, because they introduce an additional level of
complexity for IPSec policy configuration and management:
Securing traffic over IEEE 802.11 wireless networks
Securing traffic in home networking scenarios
Securing traffic in environments that use dynamic IP addresses
IPSec is not recommended for end users in general home networking scenarios for the following
reasons:
The IPSec policy configuration user interface (IP Security Policy Management) is
intended for professional network security administrators, rather than for end users. Improper
policy configuration can result in blocked communications, and if problems occur, built-in
support tools are not yet available to aid end users in troubleshooting.
Some home networking applications use broadcast and multicast traffic, for which IPSec
cannot negotiate security.
Many home networking scenarios use a wide range of dynamic IP addresses.
Many home networking scenarios involve the use of a network address translator. To use
IPSec across a NAT, both IPSec peers must support IPSec NAT-T.
Unlike Group Policy settings, which apply to both users and computers, IPSec policy is a
computer configuration Group Policy setting that applies only to computers.
Each computer that will establish IPSec-secured communications must have an IPSec
policy assigned. This policy must be compatible with the IPSec policy that is assigned to
other computers with which that computer must communicate.
Authentication must be configured correctly and an appropriate authentication method
must be specified in the IPSec policy so that mutual authentication can occur between IPSec
peers.
Routers, firewalls, or other filtering devices must be configured correctly to permit IPSec
protocol traffic on all parts of the corporate network, if IPSec negotiation messages and
IPSec-secured traffic must pass through these devices.
Computers must run operating systems that automatically support IPSec or must have
appropriate client software installed.
If computers are running different versions of the Microsoft Windows operating system
(for example, Windows Server 2003, the Microsoft Windows XP operating system, and the
Microsoft Windows 2000 operating system), you must address the compatibility of the IPSec
policies.
If clients must establish IPSec-secured connections with servers, those servers must be
adequately sized to support those connections. If necessary, you can use IPSec hardware
offload network adapters.
The number of IPSec policies is kept to a minimum, and the IPSec policies are made as
simple as possible.
Systems administrators who will configure and support IPSec must be properly trained
and must be members of the appropriate administrative groups.
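To illustrate how several of these requirements come together, the following sketch uses the
netsh ipsec static context in Windows Server 2003 to build and assign a simple policy that
negotiates security for traffic to one server. All names and addresses are hypothetical, and a
production policy would need filters and filter actions matched to your design:

    C:\>netsh ipsec static add policy name="LOB Server Policy"
    C:\>netsh ipsec static add filterlist name="LOB Traffic"
    C:\>netsh ipsec static add filter filterlist="LOB Traffic" srcaddr=any dstaddr=10.0.0.5 protocol=TCP srcport=0 dstport=443
    C:\>netsh ipsec static add filteraction name="Require Security" action=negotiate
    C:\>netsh ipsec static add rule name="Secure LOB" policy="LOB Server Policy" filterlist="LOB Traffic" filteraction="Require Security"
    C:\>netsh ipsec static set policy name="LOB Server Policy" assign=y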
The SAD contains the parameters of each active SA. The Internet Key Exchange (IKE) protocol
automatically populates the SAD. After an SA is established, the information for each SA is stored
in the SAD. The following figure shows the relationship between SAs, the SPD, and the SAD.
SA, SPD, and SAD Architecture
The Oakley protocol uses the Diffie-Hellman key exchange or key agreement algorithm to create
a unique, shared, secret key, which is then used to generate keying material for authentication or
encryption. For example, such a shared secret key could be used by the DES encryption
algorithm for the required keying material. A Diffie-Hellman exchange can use one of a number of
groups that define the length of the base prime numbers (key size) which are created for use
during the key exchange process. The longer the number, the greater the key strength. Well-
known groups include Groups 1, 2, and 14.
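As a toy illustration of the key agreement (with deliberately tiny, insecure numbers): using public
base 5 and prime 23, peer A picks secret 6 and sends 5^6 mod 23 = 8, while peer B picks secret
15 and sends 5^15 mod 23 = 19. Each peer raises the value it received to its own secret power,
and both arrive at the same shared secret: 19^6 mod 23 = 8^15 mod 23 = 2. An eavesdropper
sees only 5, 23, 8, and 19; the well-known groups replace the prime 23 with primes hundreds of
digits long, which makes recovering the secret computationally infeasible.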
The following figure shows the relationship between the Oakley protocol, the Diffie-Hellman
algorithm, and well-known Diffie-Hellman key exchange groups.
Oakley Protocol, Diffie-Hellman Key Exchange Algorithm, and Well-Known Diffie-Hellman
Groups Architecture
The Oakley protocol defines several modes for the key exchange process. These modes
correspond to the two negotiation phases defined in the ISAKMP protocol. For phase 1, the
Oakley protocol defines two principal modes: main and aggressive. IPSec for Windows does not
implement aggressive mode. For phase 2, the Oakley protocol defines a single mode, quick
mode.
IPSec Protocols
To provide security for the IP layer, IPSec defines two protocols: Authentication Header (AH) and
Encapsulating Security Payload (ESP). These protocols provide security services for the SA.
Each SA is identified by the Security Parameters Index (SPI), IP destination address, and security
protocol (AH or ESP) identifier.
The SPI is a unique, identifying value in an SA that is used to distinguish among multiple SAs on
the receiving computer. For example, IPSec communication between two computers requires two
SAs on each computer. One SA services inbound traffic and the other services outbound traffic.
Because the addresses of the IPSec peers for the two SAs are the same, the SPI is used to
distinguish between the inbound and outbound SA. Because the encryption keys differ for each
SA, each SA must be uniquely identified.
The following figure shows the relationship between the SA, SPI, IP destination address, and
security protocol.
IPSec Protocols and SA Architecture
The authentication methods for IPSec, as defined by the IKE protocol, are grouped into three
categories: digital signature, public-key, and pre-shared key. The following figure shows the
relationship between the IKE protocol and the authentication methods.
IKE Protocol and Authentication Methods Architecture
Active Directory Windows Server 2003 Active Directory stores domain-wide IPSec policies for
computers that are members of the domain. Active Directory-based IPSec policies are polled and
retrieved by the Policy Agent.
Policy Agent The Policy Agent retrieves IPSec policy from an Active Directory domain, a
configured set of local policies, or a local cache. The Policy Agent then distributes authentication
and security settings to the IKE component and the IP filters to the IPSec driver.
IKE IKE receives authentication and security settings from the Policy Agent and waits for
requests to negotiate IPSec SAs. When requested by the IPSec driver, IKE negotiates both kinds
of SAs (main mode and quick mode) with the appropriate endpoint, based on the policy settings
obtained from the Policy Agent. After negotiating an IPSec SA, IKE sends the SA settings to the
IPSec driver.
IPSec driver The IPSec driver monitors and secures outbound unicast IP traffic and monitors,
decrypts, and validates inbound unicast IP traffic. After the IPSec driver receives the filters from
the Policy Agent, it determines which packets are permitted, which are blocked, and which are
secured. For secure traffic, the IPSec driver either uses active SA settings to secure the traffic or
requests that new SAs be created. The IPSec driver is bound to the TCP/IP driver to provide
IPSec processing for IP packets that pass through the TCP/IP driver.
TCP/IP driver The TCP/IP driver is the Windows Server 2003 implementation of the TCP/IP
protocol. It is a kernel-mode component that is loaded from the Tcpip.sys file during startup.
The following sections describe the architecture of the Policy Agent, the IKE protocol, and the
IPSec driver in more detail.
The policy store reads and writes policy information both to and from persistent storage and is
aware of shared policy-setting dependencies. This ensures that all policies using shared settings
are marked as changed when they are modified and that Windows Server 2003 IPSec
components download the modified policies.
4.5.7 Policy Agent Service Retrieving and Delivering IPSec Policy Information
The local registry maintains the IPSec policy configuration in the following registry key and its
subkeys: HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\IPSec.
If you assign local IPSec policies and you do not assign Active Directory-based IPSec policies,
the local policies are stored in the following registry key:
HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\IPSec\Policy\Local.
If you assign Active Directory-based IPSec policies, the policies are read from Active Directory
and stored in the local cache.
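You can confirm which policy store is in use by inspecting these keys, for example with the reg
command included with Windows Server 2003 (output omitted here):

    C:\>reg query HKLM\Software\Policies\Microsoft\Windows\IPSec\Policy\Local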
Certificate store The certificate store is a physical store on the Windows Server 2003 computer,
but it is viewed logically as belonging to the user account of the currently logged-on user, a
service account, or the computer account. IKE can use only the computer account, usually
referred to as the computer store. You can view the computer store by using the Certificates
snap-in.
SSPI The SSPI enables network applications to access one of several security providers to
establish authenticated connections and exchange data securely over those connections.
Kerberos SSP The Kerberos SSP contains an implementation of the Kerberos security protocol.
The Kerberos SSP is an SSPI provider.
TCP/IP driver The TCP/IP driver is the Windows Server 2003 implementation of the TCP/IP
protocol. It is a kernel-mode component that is loaded from the Tcpip.sys file during startup.
TCP/IP applications TCP/IP applications use TCP/IP and access TCP/IP network services
through an appropriate network API, such as Windows Sockets, NetBIOS, or Remote Procedure
Call (RPC).
Network interface The network interface is the logical or physical interface over which IP packets
are sent and received. The details of the Network Driver Interface Specification (NDIS) interface,
the network adapter driver, and the physical media over which the IP packets are sent and
received are beyond the scope of this subject.
Each IPSec rule is configured for a specific purpose (for example, block all inbound traffic from
the Internet to TCP port 135). You can define many IPSec rules in a single IPSec policy.
IPSec rules are configured on the Rules tab in the properties of an IPSec policy, and they
associate IKE negotiation parameters with one or more IP filters.
Note
You can configure the security methods and their preference order on the Security Methods tab
in the properties of the default response rule in the IP Security Policies snap-in.
The default response rule works in the following way:
1. If the IKE module receives a request to negotiate security, it queries the Policy Agent for
a matching filter for traffic to and from the source and destination address of the ISAKMP
message. If a matching filter is explicitly configured, the IKE negotiation is based on the
settings of the associated rule.
2. If no matching filter is found and the default response rule is not activated, IKE
negotiation fails.
3. If no matching filter is found and the default response rule is activated, then IKE
dynamically creates an IP filter within the Policy Agent that corresponds to the traffic
specification of the incoming ISAKMP message. IKE authenticates and negotiates security
based on the settings on the Authentication Methods and Security Methods tabs for the
default response rule.
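The decision sequence above can be condensed into a short sketch. The Python below is
illustrative only, with hypothetical structure names; it is not actual IKE or Policy Agent code.

    class NegotiationFailed(Exception):
        pass

    def handle_request(src, dst, explicit_filters, default_rule_active, default_settings):
        """Sketch of the default response rule decision sequence (illustrative names)."""
        # Step 1: an explicitly configured filter wins.
        for flt in explicit_filters:
            if flt["src"] == src and flt["dst"] == dst:
                return flt["settings"]          # negotiate with the rule's settings
        # Step 2: no filter and no active default response rule -> IKE fails.
        if not default_rule_active:
            raise NegotiationFailed("no matching filter")
        # Step 3: dynamically create a filter for this traffic and use the
        # default response rule's authentication/security method settings.
        explicit_filters.append({"src": src, "dst": dst, "settings": default_settings})
        return default_settings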
You configure the default authentication method for the default response rule by using the IP
Security Policy wizard.
Example: Default response rule
Typically, the default response rule is used when a group of servers are configured with policy to
secure communications between themselves and any IP address and to accept unsecured
communication, but respond using secured communications. The client computers are configured
with the default response rule. When the clients communicate with each other, the traffic is not
secured. When the clients communicate with the server, the traffic is secured (with the exception
of the initial packet sent by the client to the server).
For example, a client computer can reliably exchange data with a server by using Transmission
Control Protocol (TCP). A TCP connection is established through the exchange of three TCP
segments, known as the TCP three-way handshake.
An IPSec protocol can be used alone, in combination with the other IPSec protocol, or in IPSec
tunnel mode. IPSec tunnel mode is used to protect site-to-site (gateway-to-
gateway) traffic between networks, such as site-to-site networking through the Internet. The
sending gateway encapsulates the entire IP datagram by adding a new IP header and then
protects the new packet using one of the IPSec protocols. Windows Server 2003 supports IPSec
tunnel mode for configurations where Layer Two Tunneling Protocol (L2TP) cannot be used.
The ESP protocol is an IPSec protocol that provides data confidentiality, data origin
authentication, data integrity, and anti-replay protection for the ESP payload. The ESP protocol
can be used alone, in combination with the AH protocol, or in IPSec tunnel mode.
As shown in the figure, data integrity and authentication are provided by the placement of the AH
header between the IP header and the IP packet payload. The AH protocol uses keyed hash
algorithms to sign the packet for integrity. The AH protocol is identified in the IP header with an IP
protocol ID of 51. This protocol can be used alone or with the ESP protocol.
The following table describes the AH header fields.
AH Header Fields
Next header: Identifies the IP payload by using the IP protocol ID. For example, a value of 6
represents TCP.
Length: Indicates the length of the AH header.
SPI: Used in combination with the destination address and the security protocol (AH or ESP) to
identify the correct SA for the communication. The receiver uses this value to determine with
which SA the packet is identified.
Sequence number: Provides anti-replay protection for the packet. The sequence number is a
32-bit, incrementally increasing number (starting from 1) that indicates the packet number sent
over the SA for the communication. The sequence number cannot repeat for the life of the quick
mode security association. The receiver checks this field to verify that a packet for a security
association with this number has not already been received. If one has been received, the
packet is rejected.
Authentication data: Contains the integrity check value (ICV), also known as the message
authentication code, which is used to verify both data origin authentication and data integrity.
The receiver calculates the ICV value and checks it against this value (which is calculated by
the sender) to verify integrity. The ICV is calculated over the IP header, the AH header, and the
IP payload.
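A simplified model of the receiver's sequence-number check might look like the following. This
is a sketch only; real implementations typically use a sliding window rather than an unbounded
set of seen numbers.

    class ReplayError(Exception):
        pass

    def check_sequence_number(seen, seq):
        """Reject a sequence number that has already been received on this SA."""
        if seq in seen:
            raise ReplayError(f"packet {seq} already received; rejecting")
        seen.add(seq)

    seen_for_sa = set()
    check_sequence_number(seen_for_sa, 1)   # accepted
    check_sequence_number(seen_for_sa, 2)   # accepted
    # check_sequence_number(seen_for_sa, 1) would raise ReplayError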
As shown in the figure, the ESP header is placed before the IP payload, and an ESP trailer and
an ESP authentication data field are placed after the IP payload. The ESP protocol is identified
in the IP header with the IP protocol ID of 50.
The following table describes the ESP header fields.
ESP Header Fields
SPI: When used in combination with the destination address and the security protocol (AH or
ESP), the SPI identifies the SA for the communication. The receiver uses this value to determine
the SA with which this packet should be identified.
Sequence number: Provides anti-replay protection for the packet. The sequence number is a
32-bit, incrementally increasing number (starting from 1) that indicates the packet number sent
over the quick mode SA for the communication. The sequence number cannot repeat for the life
of the quick mode SA. The receiver checks this field to verify that a packet for an SA with this
number has not already been received. If one has been received, the packet is rejected.
Authentication data: Contains the ICV, also known as the message authentication code, which
is used to verify both message authentication and integrity. The receiver calculates the ICV
value and checks it against this value (which is calculated by the sender) to verify integrity. The
ICV is calculated over the ESP header, the payload data, and the ESP trailer.
The signed portion of the packet indicates where the packet has been signed for integrity and
authentication. The encrypted portion of the packet indicates what information is protected with
confidentiality.
Because a new header for tunneling is added to the packet, everything that comes after the ESP
header is signed (except for the ESP authentication trailer) because it is now encapsulated in the
tunneled packet. The original header is placed after the ESP header. The entire packet is
appended with an ESP trailer before encryption occurs. All data that follows the ESP header,
except for the ESP authentication trailer, is encrypted. This includes the original header, which is
now considered to be part of the data portion of the packet.
The entire ESP payload is then encapsulated within the new tunnel header. The tunnel header is
not encrypted because it is used only to route the packet from origin to tunnel endpoint.
If the packet is being sent across a public network, it is routed to the IP address of the gateway
for the receiving intranet. The gateway decrypts the packet, discards the ESP header, and uses
the original IP header to route the packet to the intranet computer.
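The layering described above can be sketched as follows. The field contents and the encrypt
stand-in are placeholders for readability, not real protocol encodings.

    def encrypt(data: bytes) -> bytes:
        # Stand-in for the negotiated cipher (DES or 3DES in Windows Server 2003).
        return bytes(b ^ 0x5A for b in data)

    def esp_tunnel_encapsulate(original_packet: bytes, tunnel_header: bytes) -> bytes:
        esp_header = b"<spi+seq>"        # sent in the clear, but signed
        esp_trailer = b"<pad+next>"      # appended before encryption
        auth_data = b"<icv>"             # ICV; excluded from the signed portion
        # Everything after the ESP header is encrypted, including the original
        # IP header, which is now part of the data portion of the packet.
        encrypted = encrypt(original_packet + esp_trailer)
        # The new tunnel header stays in the clear so the packet can be routed
        # from origin to tunnel endpoint.
        return tunnel_header + esp_header + encrypted + auth_data

    packet = esp_tunnel_encapsulate(b"<orig-ip-header><payload>", b"<new-ip-header>")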
If the Policy Agent successfully loads a policy, the polling interval for policy checks is set from the
value in the policy data. If the Policy Agent fails to load a policy, it goes into the Initial policy state
and sets the polling interval to the default value.
While loading the static IPSec policy, the Policy Agent notes the state of the static policy data. If a
service loop timeout occurs, the Policy Agent uses the state information to determine activity.
The first four main mode messages contain the following ISAKMP payloads:
Security Association. The Security Association payload sent in message 1 is a list of
proposed protection mechanisms for the main mode SA. The Security Association payload
sent in message 2 is a specific protection suite for the main mode SA that is common to both
IPSec peers. It is selected by the responder.
Key Exchange. The Key Exchange payload is sent in message 3 by the initiator and in
message 4 by the responder and contains Diffie-Hellman key determination information for
the Diffie-Hellman key exchange process.
Nonce. The Nonce payload is sent in messages 3 and 4 and contains a nonce, which is
a pseudorandom number that is used only once. The initiator and responder each send their
own unique nonces. Nonces are used to provide replay protection.
Depending on the authentication method that is selected in the IPSec policy, messages 3 and 4
might contain additional payloads. The payloads of all messages beyond the first four messages
are encrypted and vary based on the authentication method selected.
Note
It is important to understand the differences in negotiation behavior between initiating an
IKE main mode or quick mode negotiation, responding to one, and rekeying
an existing one. IKE RFC 2409 requires that rekeys can be performed by either peer (in
either direction) at any time, regardless of the security association lifetimes negotiated.
Therefore, the computer that initiates a negotiation might become the responder and these
roles might alternate many times. Some of these differences are due to behavior required for
interoperability and some are caused by enforcement of policy settings.
Part One: Negotiation of protection mechanisms
When initiating an IKE exchange, IKE proposes protection mechanisms based on the applied
security policy. Each proposed protection mechanism includes attributes for encryption
algorithms, hash algorithms, authentication methods, and Diffie-Hellman groups. The first part of
the main mode is contained in main mode messages 1 and 2.
The following table lists the protection mechanism attribute values that are supported by Windows
Server 2003 IKE. These values are described in more detail in later sections.
Main Mode Attribute Values Supported by IKE
Encryption algorithm: DES, 3DES
Integrity algorithm: HMAC-MD5, HMAC-SHA1
Authentication method: Kerberos v5, public key certificate, preshared key
Diffie-Hellman group: Group 1, Group 2, Group 14 (2048)
The encryption algorithm, integrity algorithm, and Diffie-Hellman group are configured as one of
multiple key exchange security methods.
The initiating IKE peer proposes one or more protection suites in the same order as they appear
in the applied security policy. If one of the protection suites is acceptable to the responding IKE
peer, the responder selects it for use and responds to the initiator with its choice. Because the
responding IKE peer might not be running Windows Server 2003 or Windows 2000 and is
selecting the first proposed protection suite that is acceptable, the protection suites in the applied
security policy should be configured in the order of most secure to least secure.
Part Two: Diffie-Hellman exchange
After a protection suite is negotiated, IKE queries a Diffie-Hellman CSP through CryptoAPI to
generate a Diffie-Hellman public and private key pair based on the negotiated Diffie-Hellman
group. The Diffie-Hellman public key is sent to the IKE peer in an ISAKMP Key Exchange
payload. Main mode negotiation part 2 is contained in main mode messages 3 and 4.
The cryptographic strength of a Diffie-Hellman key pair is related to its prime number length (key
size). Windows Server 2003 IKE supports the following Diffie-Hellman groups:
Group 1 (768 bits)
Group 2 (1024 bits)
Group 14 (2048 bits)
Note
For enhanced security, Windows Server 2003 IPSec includes Diffie-Hellman Group 14,
which provides 2048 bits of keying strength. However, Diffie-Hellman Group 14 is not
currently supported in Windows 2000 or Windows XP for general IPSec policy use.
After the Diffie-Hellman public keys are exchanged, IKE accesses CryptoAPI to compute
the shared key based on the mutually agreeable authentication method.
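The computation itself can be illustrated with a toy exchange in Python. The small prime below
is only for readability; IKE uses the 768-, 1024-, or 2048-bit group primes listed above.

    import secrets

    p = 0xFFFFFFFB          # a small prime (2**32 - 5); real groups are far larger
    g = 5

    a = secrets.randbelow(p - 2) + 1   # initiator's private value
    b = secrets.randbelow(p - 2) + 1   # responder's private value

    A = pow(g, a, p)        # public value carried in the Key Exchange payload
    B = pow(g, b, p)

    # Each peer combines its own private value with the peer's public value;
    # both sides derive the same shared key material.
    assert pow(B, a, p) == pow(A, b, p)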
The following figure shows the Diffie-Hellman exchange between the IPSec peers and the
relationship between IKE, CryptoAPI, and the Diffie-Hellman CSP.
IKE Diffie-Hellman Key Exchange
Message 1 (Initiator): ISAKMP header, Security Association (contains proposals)
Message 2 (Responder): ISAKMP header, Security Association (contains a selected proposal)
Message 3 (Initiator): ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce,
Initiator Kerberos Token
Message 4 (Responder): ISAKMP header, Key Exchange, Nonce, Responder Kerberos Token
Message 5* (Initiator): ISAKMP header, Identification, Initiator Hash
Message 6* (Responder): ISAKMP header, Identification, Responder Hash
12. The responder sends the responder Kerberos token and computer identity to the sender
in main mode message 4.
13. On each IPSec peer, IKE creates the hash for the next main mode message to be sent
and then requests that SSPI sign the hash with the Kerberos session key.
14. The Kerberos SSP returns the signature to the IKE component.
15. The initiator sends the signed initiator hash to the responder in main mode message 5.
16. The responder sends the signed responder hash to the initiator in main mode message 6.
17. On each IPSec peer, IKE accesses the SSPI to compute the hash for the other peer. The
initiator accesses the SSPI to compute the responder hash. The responder accesses the
SSPI to compute the initiator hash.
18. The Kerberos SSP returns the computed hash to the IKE component where the hash
value is verified to complete IKE authentication. The initiator compares its calculated
responder hash with the responder hash received in main mode message 6. The responder
compares its calculated initiator hash with the initiator hash received in main mode message
5.
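Conceptually, the comparison in steps 17 and 18 is a keyed-hash check, which can be sketched
as follows. The key and message bytes here are stand-ins; the actual signing is done by the
Kerberos SSP over the main mode negotiation data.

    import hashlib
    import hmac

    session_key = b"<kerberos-session-key>"      # stand-in
    main_mode_data = b"<identities-and-nonces>"  # stand-in

    received = hmac.new(session_key, main_mode_data, hashlib.sha1).digest()
    computed = hmac.new(session_key, main_mode_data, hashlib.sha1).digest()

    # IKE authentication completes only if the peer's hash matches.
    assert hmac.compare_digest(received, computed)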
The following sections describe the IKE certificate selection and acceptance process. If you
decide to use certificates for IKE authentication, understanding this process and its requirements
is integral to ensuring proper deployment.
Public key certificate authentication Windows IKE performs public key certificate
authentication during main mode in compliance with RFC 2409. IKE uses CryptoAPI to retrieve
the computer certificate, verify peer certificates and certificate chains, check certificate
revocation, and create and verify digital signatures. All certificate, certificate chain, and signature
information is exchanged in main mode messages, as shown in the following table.
Certificate-based IKE Authentication Main Mode Messages
Message 1 (Initiator): ISAKMP header, Security Association (contains proposals)
Message 2 (Responder): ISAKMP header, Security Association (contains a selected proposal)
Message 3 (Initiator): ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce
Message 4 (Responder): ISAKMP header, Key Exchange (contains Diffie-Hellman key), Nonce,
Certificate Request
Message 5* (Initiator): ISAKMP header, Identification, Certificate, Certificate Request, Signature
Message 6* (Responder): ISAKMP header, Identification, Certificate, Signature
Schannel completes the mapping and builds an access token for the computer account. This
access token is automatically evaluated against the Access this computer from the
network or the Deny this computer access from the network logon right defined in Group
Policy Security settings.
If the logon right evaluation fails, the peer authentication fails.
Sending the CA name in a certificate request can reveal information about the
computer, such as the name of the company that owns the computer and the domain
membership of the computer (if an internal PKI is being used). Although excluding the CA name
from certificate requests enhances security, computers with multiple certificates from different
roots might require the CA root names to select the correct certificate. Also, some non-Microsoft
IKE implementations might not respond to a certificate request that does not include a CA name.
For these reasons, excluding the CA name from certificate requests might cause IKE certificate
authentication to fail in certain cases.
When main mode negotiation completes or an existing quick mode SA expires, IKE begins quick
mode negotiation. During quick mode negotiation, IKE queries the Policy Agent for information
required to perform the appropriate filter actions, including whether the IPSec mode is tunnel or
transport, whether the protocol is ESP or AH or both, and which encryption and hashing
algorithms are proposed or accepted.
Note
Computers running Windows Server 2003 and Windows XP support the 3DES and DES
algorithms and do not require installation of additional components. However, computers
running Windows 2000 must have the High Encryption Pack or Service Pack 2 (or later)
installed in order to use 3DES. If a computer running Windows 2000 is assigned a policy that
uses 3DES encryption, but does not have the High Encryption Pack or Service Pack 2 (or
later) installed, the security method defaults to the weaker DES algorithm.
Diffie-Hellman groups are used to determine the length of the base prime numbers (key material)
for the Diffie-Hellman exchange. The cryptographic strength of any key derived from a Diffie-
Hellman exchange depends, in part, on the strength of the Diffie-Hellman group on which the
prime numbers are based. When a stronger group is used, the key that is derived from a Diffie-
Hellman exchange is stronger and more difficult for an attacker to break.
IKE negotiates which group to use, ensuring that there are no negotiation failures as a result
of a mismatched Diffie-Hellman group between the two peers.
If session key PFS is enabled, a new Diffie-Hellman key is negotiated during the first quick mode
SA negotiation. This new key removes the dependency of the session key on the Diffie-Hellman
exchange that is performed for the master key.
Both the initiator and responder must have session key PFS enabled, or negotiation fails.
The Diffie-Hellman group is the same for both the main mode and quick mode SA negotiations.
When session key PFS is enabled, even though the Diffie-Hellman group is set as part of the
main mode SA negotiation, it affects any rekeys during session key establishment.
Perfect Forward Secrecy
Unlike key lifetimes, PFS determines how a new key is generated, rather than when it is
generated. Specifically, PFS ensures that the compromise of a single key permits access only to
data that is protected by it, not necessarily to the entire communication. To achieve this, PFS
ensures that a key used to protect a transmission cannot be used to generate additional keys. In
addition, if the key that was used was derived from specific keying material, that material cannot
be used to generate other keys.
Master key PFS
In Windows Server 2003 IPSec, you can configure the number of times quick mode SAs can be
created based on a single main mode SA. If you enable master key PFS, IKE allows only
single quick mode SA for each main mode SA. By default, master key PFS is disabled, so there is
no limit to the number of quick mode SAs that can be created from one main mode SA. To derive
a new quick mode SA, a new main mode negotiation is performed, which includes a new Diffie-
Hellman exchange and a new authentication process.
Session key PFS
Whenever a quick mode SA requires renegotiation, IKE determines whether a session key PFS is
specified in the corresponding filter rule. If it is, IKE additionally generates a new Diffie-Hellman
key and exchanges it with the IKE peer during quick mode negotiation. By performing another
Diffie-Hellman key exchange, IKE provides additional cryptographic strength to quick mode key
generation beyond that already contributed by the main mode SA. Performing additional Diffie-
Hellman exchanges requires additional computational resources and might affect IPSec
performance.
Tracking the length of time a specific key is in use and requesting a new key from IKE as
necessary.
Tracking the number of bytes that have been transformed (that is, hashed or encrypted)
for each SA and requesting a new key from the IKE module if the byte count allowed by the
SA is exceeded.
For each secured inbound packet that contains an AH or ESP header, parsing the packet
for the SPI to determine the SA.
For each non-secured inbound packet, checking the filter list in the SPD to determine
whether the packet is permitted or discarded:
The filter list can contain an inbound permit filter if the corresponding filter action is set to
Permit or Negotiate security and either the Accept unsecured communication, but
always respond with IPSec or Allow unsecured communication with non IPSec-aware
computer options are enabled. The IPSec driver sends unmodified permitted packets to the
TCP/IP driver for additional processing.
The packet is discarded either because the filter action is set to Block or the filter action is
set to Negotiate security and unsecured communications are not allowed.
Handling hardware offload of cryptographic functions by skipping cryptographic
processing on packets processed by offload network adapters and managing offloaded SAs.
Handling network layer issues such as path maximum transmission unit (PMTU)
discovery.
Creating the SPI that the responder uses to identify the appropriate SA for an inbound
packet.
Deleting expired SAs.
Providing the implementation of the AH and ESP protocols.
For outbound traffic that must be secured, the IPSec driver, based on the parameters of the
SA, calculates and places the AH or ESP or both headers and trailer on the IP packet before
sending it to the TCP/IP driver. For inbound traffic that contains an AH or ESP header, the
IPSec driver processes the header and, if it is valid, sends the authenticated and decrypted
packet without the AH or ESP headers and trailer back to the TCP/IP driver.
When a packet matches a filter, the IPSec driver applies the filter action. When a packet does not
match any filters, the IPSec driver passes the packet back without modification to the TCP/IP
driver to be received or transmitted.
If the filter action permits transmission, the packet is received or sent with no modifications. If the
action blocks transmission, the packet is discarded. If the action requires the negotiation of
security, main mode and quick mode SAs are negotiated.
The negotiated quick mode SA and keys are used with both outbound and inbound processing.
The IPSec driver stores all current quick mode SAs in a database. The IPSec driver uses the SPI
field to match the correct SA with the correct packet.
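The lookup can be pictured as a keyed table. This is a sketch with illustrative structures; the
actual SAD is internal to the IPSec driver.

    sad = {
        # (SPI, destination address, protocol) -> SA state
        (0x1A2B3C4D, "192.168.1.10", "ESP"): {"cipher": "3DES", "bytes_used": 0},
    }

    def find_sa(spi, dst, protocol):
        sa = sad.get((spi, dst, protocol))
        if sa is None:
            # Corresponds to the Bad SPI event: the packet is discarded.
            raise LookupError("no SA for this SPI")
        return sa

    print(find_sa(0x1A2B3C4D, "192.168.1.10", "ESP"))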
When an outbound IP packet matches the IP filter list with an action to negotiate security, the
IPSec driver queues the packet and then notifies IKE, which begins security negotiations with the
destination IP address of that packet. If several outbound packets are going to the same
destination and match the same filter before IKE has finished the negotiation, then only the last
packet sent is saved.
The following sections describe the basic inbound packet and outbound packet processing that
the IPSec driver performs in transport mode.
Inbound packet processing
The following figure illustrates this process.
Basic Inbound Packet Process
Note
The inbound packet process applies only to local host unicast traffic (traffic with the
unicast destination address of the host) when there is an active IPSec policy.
Basic inbound packet processing for transport mode occurs in the following sequence:
32. IP packets are sent from the network interface to the TCP/IP driver.
33. The TCP/IP driver sends the IP packet to the IPSec driver.
34. If the inbound packet is IPSec-protected, the IPSec driver looks up the SA in the SAD.
35. If the inbound packet is not IPSec-protected, the IPSec driver checks the packet for a
filter match by looking up the filters in the SPD.
36. After the IPSec-protected inbound packet is authenticated and decrypted, the AH or ESP
or both headers are removed and the packet is sent to the TCP/IP driver. If a packet that is
not IPSec-protected is permitted by policy, that packet is sent to the TCP/IP driver.
37. The TCP/IP driver performs IP packet processing as needed and sends the application
data to the TCP/IP application.
Detailed inbound packet processing for transport mode occurs in the following sequence:
38. The TCP/IP driver sends the unicast packet to the IPSec driver.
39. If the packet is ISAKMP, the unmodified packet is sent back to the TCP/IP driver.
Note
o To modify the default filtering behavior for Windows Server 2003 IPSec, you can
use the Netsh IPSec context or modify the registry. For more information, see Default
exemptions to IPSec filtering later in this section.
40. If hardware offload processing was performed, the IPSec driver checks to determine
whether the hardware processing was successful.
41. If the hardware processing was not successful, an event is logged and the packet is
discarded.
42. The packet is parsed to determine whether an AH or ESP header or both are present.
43. If the packet does not contain an AH or ESP header, the packet is compared to the filter
list for a match.
44. If a filter match is not found, the unmodified packet is sent to the TCP/IP driver.
45. If a filter match is found, the IPSec driver attempts to find an SA based on the packet
contents.
46. If an SA is not found, the matching filter is checked to determine if it is an inbound permit
filter.
47. If the matching filter is an inbound permit filter, the unmodified packet is sent to the
TCP/IP driver.
48. If the matching filter is not an inbound permit filter, the packet is discarded.
49. If an SA is found, it is checked to determine whether it is a soft SA. A soft SA is one in
which the Negotiate security filter action is enabled, but there is no authentication or
encryption being performed because the computer with which communication occurs is not
running IPSec. This process is also known as fallback to clear. Even though the packet is not
being protected, an SA without an AH or ESP header is still maintained in the SAD. Soft SAs
and fallback to clear are possible only when Allow unsecured communication with non
IPSec-aware computer is selected on the Security methods tab in the properties of a filter
action.
50. If the SA is a soft SA, the unmodified packet is sent to the TCP/IP driver.
51. If the SA is not a soft SA, the packet is discarded.
52. If the packet contains an AH or ESP header (or both), the header is parsed for the SPI.
53. The SPI is used to look up the SA in the SAD.
54. If the SA corresponding to the SPI is not found in the SAD, a Bad SPI event is logged
and the packet is discarded.
55. If the SA corresponding to the SPI is found in the SAD, the current time is used to update
the SA's last used time. The time is used for aging the SA.
56. The SA is checked to determine whether cryptographic processing for the SA was
offloaded to hardware. For packets that have been processed by hardware offload, steps 20
and 21 are skipped.
57. The packet is authenticated or decrypted or both. This process involves verifying the
HMAC in the AH or ESP header, processing the other fields in the AH and ESP headers and
trailer, and decrypting the ESP payload.
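The inbound decision sequence can be compressed into a sketch. This is illustrative Python,
not driver code; returning None stands for discarding the packet.

    def inbound(packet, filters, sad):
        if packet.get("isakmp"):
            return packet                      # ISAKMP passes back unmodified
        if "spi" in packet:                    # AH or ESP header present
            sa = sad.get(packet["spi"])
            if sa is None:
                return None                    # Bad SPI: event logged, discarded
            return packet["payload"]           # stands for verify/decrypt steps
        match = next((f for f in filters if f["match"](packet)), None)
        if match is None:
            return packet                      # no filter: unmodified pass-through
        if match.get("inbound_permit") or match.get("soft_sa"):
            return packet                      # permitted, or fallback to clear
        return None                            # otherwise the packet is discarded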
Basic outbound packet processing for transport mode occurs in the following sequence:
62. Application data is sent to the TCP/IP driver from the TCP/IP application.
63. The TCP/IP driver sends an IP packet to the IPSec driver.
64. The IPSec driver checks the packet for a filter match by looking up the filters in the SPD.
65. The IPSec driver checks the packet for an active SA by looking up the SAs in the SAD.
Based on the SA, the traffic is authenticated or encrypted or both.
66. If the traffic must be protected and there is not an SA, the IPSec driver requests that IKE
create the appropriate SAs. The IP packet is then held until the SA is established and can be
IPSec framed.
67. The IP packet is sent back to the TCP/IP driver.
68. The TCP/IP driver sends the IP packet to the network interface.
Detailed outbound packet processing for transport mode occurs in the following sequence:
69. The TCP/IP driver sends the unicast outbound packet to the IPSec driver.
70. If the packet is ISAKMP, the unmodified packet is sent back to the TCP/IP driver.
71. The IPSec driver attempts to find a filter that matches the packet. If a filter is not found,
the unmodified packet is sent back to the TCP/IP driver.
72. If a filter match is found, the IPSec driver attempts to find an SA that matches the packet.
73. If an SA is not found, the filter action is checked. If the filter action is set to Negotiate
security, the IPSec driver requests that the IKE module negotiate the appropriate SAs.
74. If the IKE negotiation is successful, the IKE module informs the IPSec driver of the new
SA and the IPSec driver looks up the SA again.
75. If the IKE negotiation is not successful, the packet is discarded.
76. If the filter action is set to Permit, the unmodified packet is sent back to the TCP/IP
driver. Otherwise, the packet is discarded.
77. If an SA is found in the SAD, the current time is used to update the SA's last used time.
The time is used for aging the SA.
78. The SA is checked to determine whether it is about to expire. If the SA is about to expire,
the IPSec driver informs the IKE module to initiate a quick mode or Phase 2 rekey of the
quick mode SA.
79. The SA is checked to determine whether it has expired. If the SA has expired, the packet
is discarded.
80. The Don't Fragment (DF) flag in the IP header of the packet is checked. If the DF flag is
set to 1, the size of the IP packet with the proposed AH or ESP or both headers and trailer is
calculated.
81. If the size of the IP packet with the proposed IPSec overhead is larger than the path
maximum transmission unit (PMTU) for the destination IP address, the IPSec driver indicates
a packet-too-large condition for the packet and the unmodified packet is sent back to the
TCP/IP driver. The packet-too-large condition allows the TCP/IP driver to either adjust the
PMTU for the destination or, in the case of transit traffic, inform the sending host with an
Internet Control Message Protocol (ICMP) Destination Unreachable-Fragmentation Needed
and DF Set message that includes the new PMTU. The packet is eventually discarded by the
TCP/IP driver.
82. If the DF flag is not set to 1, or if it is set to 1 and the additional IPSec overhead is not
greater than the current PMTU for the destination, blank AH or ESP or both headers and trailer
are constructed (based on the settings for the SA).
83. The IPSec driver checks to determine whether the hardware offload is capable of
offloading the SA for this packet. If so, the IPSec driver checks to determine whether the SA
for the packet was offloaded to the hardware.
84. If the SA was offloaded to the hardware, an offload status is set on the packet and the
modified packet with blank AH or ESP or both headers and trailer is sent to the TCP/IP
driver.
85. If the SA has not been offloaded to the hardware, the IPSec driver accesses NDIS with
instructions to add the SA to the hardware offload network interface.
86. If hardware offload is not enabled or the SA has not been offloaded to the hardware, the
IPSec driver performs the cryptographic processing and adds the appropriate values in the
fields of the AH or ESP or both headers and trailer.
87. The IPSec driver sends the modified packet to the TCP/IP driver.
A value of 2 specifies that multicast and broadcast traffic are not exempt from IPSec
filtering (RSVP, Kerberos, and ISAKMP traffic are exempt).
A value of 3 specifies that only ISAKMP traffic is exempt from IPSec filtering. This is the
default filtering behavior for Windows Server 2003, Windows 2000 (with Service Pack 4 and
later service packs) and Windows XP (with Service Pack 1 and later service packs).
If you change the value for this setting, you must restart the computer for the new value to take
effect.
To modify the default filtering behavior by using the registry
88. In Regedit, under the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IPSEC key, add a new
DWORD entry named NoDefaultExempt.
89. Assign this entry any value from 0 through 3.
90. Restart the computer.
The filtering behaviors for each value are equivalent to those noted above for the netsh ipsec
dynamic set config ipsecexempt value=x command.
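The same change can be scripted; the following is a sketch using Python's standard winreg
module. Run it with administrative rights and restart the computer afterward, as noted above.

    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\IPSEC"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        # Value 3 keeps only ISAKMP traffic exempt (the default noted above).
        winreg.SetValueEx(key, "NoDefaultExempt", 0, winreg.REG_DWORD, 3)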
The following table summarizes the equivalent filters that are implemented if all default
exemptions to IPSec filtering are enabled (that is, if NoDefaultExempt is 0). When the IP
address is specified, the subnet mask is 255.255.255.255. When the IP address is Any, the
subnet mask is 0.0.0.0.
Equivalent Filters When NoDefaultExempt=0
Source Address | Destination Address | Protocol | Source Port | Destination Port | Filter Action
My IP Address | Any IP Address | UDP | Any | 88 | Permit
Any IP Address | My IP Address | UDP | 88 | Any | Permit
Any IP Address | My IP Address | UDP | Any | 88 | Permit
My IP Address | Any IP Address | UDP | 88 | Any | Permit
My IP Address | Any IP Address | TCP | Any | 88 | Permit
Any IP Address | My IP Address | TCP | 88 | Any | Permit
Any IP Address | My IP Address | TCP | Any | 88 | Permit
My IP Address | Any IP Address | TCP | 88 | Any | Permit
My IP Address | Any IP Address | UDP | 500 | 500 | Permit [1]
Any IP Address | My IP Address | UDP | 500 | 500 | Permit
My IP Address | Peer IP Address | UDP | 4500 | 4500 | Permit [2]
Peer IP Address | My IP Address | UDP | 4500 | 4500 | Permit
My IP Address | Any IP Address | 46 (RSVP) | - | - | Permit
Any IP Address | My IP Address | 46 (RSVP) | - | - | Permit
Any IP Address | <multicast> [3] | - | - | - | Permit
My IP Address | <multicast> | - | - | - | Permit
Any IP Address | <broadcast> [4] | - | - | - | Permit
My IP Address | <broadcast> | - | - | - | Permit
<All IPv6 protocol traffic> [5] | - | - | - | - | Permit
[1] In order for IPSec transport mode to be negotiated through an IPSec tunnel mode SA,
ISAKMP traffic cannot be exempted if it needs to pass through the IPSec tunnel first.
[2] When IPSec NAT-T is performed, the filter exemption for UDP port 4500 is automatically
generated based on the source and destination IP addresses used during the initial part of the
IKE negotiation on UDP port 500. This dynamic permit filter for port 4500 is displayed in the IP
Security Monitor snap-in, under Quick Mode\Specific Filters, and in the output for the netsh
ipsec dynamic show qmfilter command.
[3] Multicast traffic is defined as the class D range, with a destination address range of
224.0.0.0 with a 240.0.0.0 subnet mask, which corresponds to the range of addresses from
224.0.0.0 to 239.255.255.255.
[4] Broadcast traffic is defined as a destination address of 255.255.255.255 (the limited
broadcast address) or as having the host ID portion of the IP address set to all 1s (the subnet
broadcast address).
[5] IPSec does not support filtering for IP version 6 (IPv6) packets, except when IPv6 packets
are encapsulated with an IPv4 header.
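The address ranges in notes 3 and 4 can be checked mechanically with Python's standard
ipaddress module; 224.0.0.0 with a 240.0.0.0 mask is the 224.0.0.0/4 network.

    import ipaddress

    def is_multicast(addr: str) -> bool:
        return ipaddress.ip_address(addr) in ipaddress.ip_network("224.0.0.0/4")

    def is_limited_broadcast(addr: str) -> bool:
        return addr == "255.255.255.255"

    print(is_multicast("239.255.255.255"))  # True: top of the class D range
    print(is_multicast("240.0.0.0"))        # False: outside class D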
Windows Server 2003 IPSec does not support specific filters for broadcast protocols or ports, nor
does it support multicast groups, protocols, or ports. Because IPSec does not negotiate security
for multicast and broadcast traffic, these types of traffic are dropped if they match a filter with a
corresponding filter action to negotiate security. A filter with a source address of Any IP Address
and a destination address of Any IP Address can block or permit all multicast and broadcast
traffic. By default (and if the NoDefaultExempt registry key is set to a value of 2 or 3), outbound
multicast or broadcast traffic will be matched against a filter with a source address of My IP
Address and a destination address of Any IP Address. More specific unicast IP address filters
that block, permit, or negotiate security for unicast IP traffic should be configured in the same
IPSec policy to achieve appropriate security.
AH | 51 | N/A | N/A
ISAKMP | N/A | 500 (4500) | N/A
The following sections describe how to configure routers, firewalls, or other filtering devices to
ensure that traffic that is sent over IPSec protocols can pass through these devices. Additional
considerations for IPSec NAT traversal are also described.
2. In the console tree, expand the domain or OU that you want to manage, right-click the
Group Policy object that you want to edit, and then click Edit.
3. In the Group Policy Object Editor console tree, click Computer Configuration, click
Windows Settings, and then click Security Settings.
4. Click Wireless Network (IEEE 802.11) Policies, right-click the wireless network policy
that you want to modify, and then click Properties.
5. On the Preferred Networks tab, under Networks, click the wireless network for which
you want to define IEEE 802.1X authentication.
6. On the IEEE 802.1X tab, check the Enable network access control using IEEE 802.1X
check box to enable IEEE 802.1X authentication for this wireless network. This is the default
setting. To disable IEEE 802.1X authentication for this wireless network, clear the Enable
network access control using IEEE 802.1X check box.
7. Specify whether to transmit EAPOL-start message packets and how to transmit them.
8. Specify EAPOL-Start message packet parameters.
9. In the EAP type box, click the EAP type that you want to use with this wireless network.
10. In the Certificate type box, select one of the following options:
o Smart card. Permits clients to use the certificate that resides on their smart card
for authentication.
o Certificate on this computer. Permits clients to use the certificate that resides
in the certificate store on their computer for authentication.
11. To verify that the server certificates that are presented to client computers are still valid,
select the Validate server certificate check box.
12. To specify whether client computers must try authentication to the network, select one of
the following check boxes:
o Authenticate as guest when user or computer information is unavailable.
Specifies that the computer must attempt authentication to the network if user
information or computer information is not available.
o Authenticate as computer when computer information is available. Specifies
that the computer attempts authentication to the network if a user is not logged on. After
you select this check box, specify how the computer attempts authentication.
To use Windows to configure wireless network settings on a client computer
1. Open Network Connections.
2. Right-click Wireless Network Connection, and then click Properties.
3. Click the Wireless Networks tab.
4. On the Wireless Networks tab, do one of the following:
o To use Windows to configure wireless network settings on your computer, select
the Use Windows to configure my wireless network settings check box. This check
box is selected by default. For information about this option, see Notes.
o If you do not want to use Windows to configure wireless network settings on your
computer, clear the Use Windows to configure my wireless network settings check
box.
To add, edit, or remove wireless network connections on a client computer
1. Open Network Connections.
2. Right-click Wireless Network Connection, and then click Properties.
3. Click the Wireless Networks tab.
4. Choose whether to add, modify, or remove a wireless network connection:
To add a new wireless network connection: Select the Use Windows to configure my
network settings check box, and then click Add.
To modify an existing wireless network connection: Select the Use Windows to configure
my network settings check box. Under Preferred networks, click the wireless network
connection that you want to modify, and then click Properties.
To remove a preferred wireless network connection: Under Preferred networks, click the
wireless network connection that you want to remove, and then click Remove.
5. If you are adding or modifying a wireless network connection, click the Association tab,
configure wireless network settings as needed, and then click OK. For more information, see
Related Topics.
6. To define 802.1X authentication for the wireless network connection, click the
Authentication tab, configure the settings as needed, and then click OK. For more
information, see Related Topics.
7. To connect to a wireless network after configuring network settings, on the Wireless
Networks tab, under Available networks, click the network name, click Configure, and
then, in Wireless network properties, click OK.
Important
o If a network does not broadcast its network name, it does not appear under
Available networks. To connect to an access point (infrastructure) network that you
know is available but that does not appear under Available networks, click Add. On
Association, type the network name, and if needed, configure additional network
settings.
8. To change the order in which connection attempts to preferred networks are made, under
Preferred networks, click the wireless network that you want to move to a new position in
the list, and then click Move up or Move down until the wireless network is at the required
position.
9. To update the list of available networks that are within range of your computer, click
Refresh.
10. To automatically connect to available networks that do not appear in the Preferred
networks list, click Advanced, and then select the Automatically connect to non-
preferred networks check box.
Dial-up connections are inherently more private than a solution that uses a public network such
as the Internet. However, with dial-up networking, your organization faces a large initial
investment and continuing expenses throughout the life cycle of the solution. These expenses
include:
Hardware purchase and installation. Dial-up networking requires an initial investment
in modems or other communication hardware, server hardware, and phone line installation.
Monthly phone costs. Each phone line that is used for remote access increases the
cost of dial-up networking. If you use toll-free numbers or the callback feature to defray long
distance charges for your users, these costs can be substantial. Most businesses can
arrange a bulk rate for long distance, which is preferable to reimbursing users individually at
their more expensive residential rates.
Ongoing support. The number of remote access users and the complexity of your
remote access design significantly affects the ongoing support costs for dial-up networking.
Support costs include network support engineers, testing equipment, training, and help desk
personnel to support and manage the deployment. These costs represent the largest portion
of your organization's investment.
If you use a VPN for remote access, users connect to your corporate network over the Internet.
VPNs use a combination of tunneling, authentication, and encryption technologies to create
secure connections. VPNs reduce remote access expenses by using the existing Internet
infrastructure. You can use a VPN to partially or entirely replace your centralized, in-house, dial-
up remote access infrastructure and legacy services.
VPNs offer two primary benefits:
Reduced costs. Using the Internet as a connection medium saves long-distance phone
expenses and requires less hardware than a dial-up networking solution.
Sufficient security. Authentication prevents unauthorized users from connecting to your
network. Strong encryption methods make it extremely difficult for a hostile party to interpret
the data that is sent across a VPN connection.
Section 5
Key archival and recovery A feature that makes it possible to archive and recover the private
key portion of a public-private key pair, in the event that a user loses his or her private keys, or an
administrator needs to assume the role of a user for data access or data recovery. Private key
recovery does not recover any data or messages; it merely enables the recovery process.
Public key standards Standards developed to describe the syntax for digital signing and
encrypting of messages and to ensure that a user has an appropriate private key. To maximize
interoperability with third-party applications that use public key technology, the Windows
Server 2003 PKI is based on the standards recommended by the Public-Key Infrastructure
(X.509) (PKIX) working group of the Internet Engineering Task Force (IETF). Other standards that
the IETF has recommended also have a significant impact on public key infrastructure
interoperability, including standards for Transport Layer Security (TLS), Secure/Multipurpose
Internet Mail Extensions (S/MIME), and Internet Protocol security (IPSec).
In this example, the issuing CA issued the User certificate, and the root CA issued the certificate
of the issuing CA. This is considered a trusted chain, because it terminates with a root CA
certificate that has been designed and implemented to meet the highest degree of trust.
The chain building process validates the certification path by checking each certificate in the
certification path from the end certificate to the certificate of the root CA. If the CryptoAPI
discovers a problem with one of the certificates in the path, or if it cannot find a certificate, the
certification path is either considered invalid or is given less weight than a fully validated
certificate.
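The walk from end certificate to root can be sketched as follows. The structures are illustrative
only; CryptoAPI additionally verifies signatures, validity periods, and revocation status at each
step.

    def build_chain(end_cert, issuers, trusted_roots):
        chain = [end_cert]
        cert = end_cert
        while cert["issuer"] != cert["subject"]:         # a root CA is self-issued
            cert = issuers.get(cert["issuer"])
            if cert is None:
                raise ValueError("path invalid: issuer certificate not found")
            chain.append(cert)
        if cert["subject"] not in trusted_roots:
            raise ValueError("path does not terminate at a trusted root CA")
        return chain

    user = {"subject": "User", "issuer": "Issuing CA"}
    issuing = {"subject": "Issuing CA", "issuer": "Root CA"}
    root = {"subject": "Root CA", "issuer": "Root CA"}

    chain = build_chain(user, {"Issuing CA": issuing, "Root CA": root}, {"Root CA"})
    print([c["subject"] for c in chain])   # ['User', 'Issuing CA', 'Root CA']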
Before you begin to design your public key infrastructure and configure certificate services, you
need to define the security needs of your organization. For example, does your organization
require electronic purchasing, secure e-mail, secure connections for roaming users, or digital
signing of files? If so, you need to configure CAs to issue and manage certificates for each of
these business solutions.
5.3 Windows Server 2003 PKI can support the following security applications:
Digital signatures
Secure e-mail
Software code signing
Internet authentication
IP security
Smart card logon
Encrypting file system user and recovery certificates
802.1x authentication
5.3.5 IP Security
Windows 2000 and Windows Server 2003 incorporate Internet Protocol security (IPSec) to
protect data moving across the network. IPSec is a suite of protocols that allows encrypted and
digitally signed communication between two computers or between a computer and a router over
an insecure network. The encryption is applied at the IP network layer, which means that it is
transparent to most applications that use specific protocols for network communication. IPSec
provides end-to-end security, meaning that the IP packets are encrypted or signed by the sending
entity, are unreadable en route, and can be decrypted only by the recipient entity. Due to a
special algorithm for generating the same shared encryption key at both ends of the connection,
the key does not need to be passed over the network.
You do not need to use public key technology to use IPSec; instead you can use the
Kerberos version 5 authentication protocol or shared secret keys that are communicated securely
by means of an out-of-band mechanism at the network end points for encryption. However, if you
use public key technology in conjunction with IPSec, you can create a scalable distributed trust
architecture in which IPSec devices can mutually authenticate each other and agree upon
encryption keys without relying on prearranged shared secrets, either out-of-band or in-band.
This, in turn, yields a higher level of security than IPSec without a PKI.
For example, certificate use might be based on job function, location, organizational structure, or
a combination of these three, or all computers or users in the organization might use certain
certificate applications.
For each of the groups that you have identified, you need to determine:
The types of certificates to be issued. This is based on the security application
requirements of your organization and the design of your PKI infrastructure.
The number of users, computers, and applications that need certificates. This
number can include as few as one or as many users, computers, or applications as are in an
entire organization.
The physical location of the users, computers, and applications that need
certificates. Users in remote offices or users who travel frequently might require different
certificate solutions than users in the headquarters office of an organization. Also,
requirements can differ based on geography. For example, you might want to restrict users
in one country/region from using their certificates to access data in an organizational
business unit in another country/region.
The level of security that is required to support the users, computers, and
applications that need certificates. Users who work with sensitive information typically
require higher levels of security than other members of the organization.
The number of certificates required for each user, computer, and application. In
some cases, one certificate can meet all requirements. Other times, you need multiple
certificates to enable specific applications and meet specific security requirements.
The enrollment requirements for each certificate that you plan to issue. For
example, do users have to present one or more pieces of physical identification, such as a
driver's license, or can they simply request a certificate electronically?
An individual departmental certification authority running on a server with a dual processor and
512 megabytes (MB) of RAM can issue more than 2 million standard-key-length certificates per
day. Even with an unusually large CA key, a single stand-alone CA with the appropriate hardware
is capable of issuing more than 750,000 user certificates per day.
Using a greater number of small CAs with strategically located CRL distribution points reduces
the risk that your organization might be forced to revoke and reissue all its certificates if a large
CA is compromised. However, using a greater number of CAs might increase your administrative
overhead.
For many organizations, the primary limitations to CA performance are the amount of physical
storage available and the quality of the clients' network connectivity to the CA. If too many clients
attempt to access your CA over slow network connections, autoenrollment requests can be
delayed.
Another significant factor is the number of roles that a CA server performs on the network. If a CA
server is operating in more than one capacity in the network (for example, if it also functions as
a domain controller), it can negatively impact the capacity and performance of the CA. It can
also complicate the delegation of administration for the CA server. For this reason, unless your
organization is extremely small, use your CAs only to issue certificates.
Some hardware components impact PKI capacity and performance more than others. When you
are selecting the server hardware for your CAs, consider the following:
Number of CPUs. Large CA key sizes require more CPU resources. The greater the
number of CPUs, the better the performance of the CA. CPU power is the most critical
resource for a Windows Server 2003 certification authority.
Note
Because of the architecture of their databases, Windows Server 2003
certification authorities are CPU-intensive and use a substantial amount of the disk
subsystem. However, other hardware resources can also impact the performance of a
CA when the system is put under stress.
Disk performance. In general, a high-performance disk subsystem allows for a faster
rate of certificate enrollment. However, key length impacts disk performance. With a shorter
CA key length, the CPU has fewer calculations to perform and, therefore, it can complete a
large number of operations. With longer CA keys, the CPU needs more time to issue a
certificate and this results in a smaller number of disk input/output (IO) operations per time
interval.
Number of disks. You can improve performance slightly by using separate physical
disks for the database and log files. You can improve performance significantly by placing the
database and log files on RAID or striped disk sets. In general, the drive that contains the
certification authority database is used more than the drive hosting the log file.
Note
Using separate logical disks does not provide any performance advantages.
Amount of memory. The amount of memory that you use does not have a significant
impact on CA performance, but must meet general system requirements
Hard disk capacity. Certificate key length does not affect the size of an individual
database record. Therefore, the size of the CA database increases linearly as more records
are added. In addition, the higher the capacity of the hard disk, the greater the number of
certificates that a CA can issue.
Tip
Plan for your hard disk requirements to grow over time. In general, every
certificate that you issue requires 17 kilobytes (KB) in the database and 15 KB in the log
file.
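Those per-certificate figures make capacity planning a simple multiplication. The sketch below
uses the 17 KB and 15 KB values given in the tip above.

    DB_KB, LOG_KB = 17, 15   # per issued certificate, from the tip above

    def storage_gb(certificates: int) -> float:
        return certificates * (DB_KB + LOG_KB) / (1024 * 1024)

    print(f"{storage_gb(1_000_000):.1f} GB")   # about 30.5 GB for a million certificates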
The type of hardware that your clients use can also impact performance. When you are selecting
or evaluating the capabilities of the hardware for your CA clients, consider the following:
Key length. The greater the key length of a requested certificate, the greater the impact
on the CPU of the server hosting the CA.
Network bandwidth. Assuming that the CA is not serving in more than one capacity, a
100-megabit network connection is sufficient to prevent performance bottlenecks.
As you plan your CA infrastructure, you also need to ensure that your design is flexible enough to
accommodate changes to your organization. For example, you need to be able to accommodate:
Changes in the functionality that you require from your public key infrastructure.
Growth or decline in demand for certificates.
The addition or removal of locations that CAs need to serve.
The effect of revocation. Revoking large numbers of certificates can take several minutes
and increase the size of the database.
Using multiple CAs is an excellent way to ensure that your infrastructure can support enterprise
scalability. The use of multiple CAs, even for organizations with minimal certificate requirements,
provides the following advantages:
Greater reliability. If you need to take an individual CA offline for maintenance or
backup, another CA can service its requests.
Scalability. Increases in demand, either from new users or from new applications, can
be accommodated more easily.
Distributed administration. Many organizations distribute security administration across
a number of IT administrators to prevent one individual or team from controlling the entire
security technology infrastructure of the organization.
Improved availability. Users in remote offices can access a CA that is local to them
rather than accessing a CA across slow Wide Area Network (WAN) links.
If your Active Directory environment is based on Windows 2000 Active Directory, you might need
to extend it to support Windows Server 2003 Certificate Services functionality, such as version 2
certificate templates.
For certificates with a long life, the availability of the CA services themselves is much less
important than the availability of the directory that holds the certificates and the certificate
revocation lists. If you integrate your CAs with Active Directory, your certificates and CRLs are
automatically published to the directory and replicated throughout the forest as part of the global
catalog.
Note
If you use Active Directory to publish and replicate information about CRLs throughout
your organization, be sure to review Active Directory replication schedules and policies in
order to ensure that this data is distributed in a timely manner.
Windows Server 2003 Certificate Services functions whether Active Directory in your organization
is based on Windows 2000 or Windows Server 2003. It also functions if your organization is
operating in mixed mode.
Auditor. Audits the actions of local administrators, service managers, and certificate
managers.
The extent to which you separate roles depends on the level of security that you require for a
particular service. Assign the fewest possible rights to users in order to achieve the greatest level
of security. For example, you can adopt the following rules:
No user can assume the roles of both CA Administrator and Certificate Manager.
No user can assume the roles of both User Manager and Certificate Manager.
If you need stricter guidelines, you can include the following:
No user can assume the roles of both Auditor and Certificate Manager.
To facilitate this delegation process, you need to understand how various PKI administrative roles
align with Windows Server 2003 administrative roles. Table 16.1 lists the Windows Server 2003
administrative roles that correspond to each PKI administrative role.
Table PKI Administrative Roles and Their Corresponding Windows Server 2003
Administrative Roles
PKI Administrator: Configures, maintains, and renews the CA. Corresponding Windows
Server 2003 role: User.
Backup Operator: Performs system backup and recovery. Corresponding role: Backup
Operator on the server on which the CA is running.
Audit Manager: Configures, views, and maintains audit logs. Corresponding role: Local
Administrator on the server on which the CA is running.
Key Recovery Manager: Requests retrieval of a private key stored by the service.
Corresponding role: User.
Certificate Manager: Approves certificate enrollment and revocation requests. Corresponding
role: User.
User Manager: Manages users and their associated information. Corresponding role: Account
Operators (or person delegated to create user accounts in Active Directory).
Enrollee: Requests certificates from the CA. Corresponding role: Authenticated Users.
Table Actions Performed By PKI Administrative Roles
(The role columns in this table are CA Admin, Certificate Manager, Audit Manager, Backup
Operator, Enrollee, and Local Server Admin; each action below is marked in the table for the
roles permitted to perform it.)
Install a CA
Configure a CA
Policy and exit module configuration
Stop/start service
Change configuration
Assign user roles
Establish user accounts
Maintain user accounts
Configure profiles
Renew CA keys
Define key recovery agent(s)
Define officer roles
Enable role separation
Issue/approve certificates
Deny certificates
Revoke certificates
Unrevoke certificates
Renew certificates
Enable, publish, or configure CRL schedule
Configure audit parameters
Audit logs
Back up system
Restore system
Read CA properties, CRL
Request certificate
Read CA database
Read CA configuration information
Read issued, revoked, and pending certificates
Stand-alone CAs do not require Active Directory and do not use certificate templates. If you use
stand-alone CAs, all information about the requested certificate type must be included in the
certificate request. By default, all certificate requests submitted to stand-alone CAs are held in a
pending queue until a CA administrator approves them. You can configure stand-alone CAs to
issue certificates automatically upon request, but this is less secure and is usually not
recommended, because the requests are not authenticated.
From a performance perspective, using stand-alone CAs with automatic issuance enables you to
issue certificates at a faster rate than you can by using enterprise CAs. However, unless you are
using autoissuance, using stand-alone CAs to issue large volumes of certificates usually comes
at a high administrative cost because an administrator must manually review and then approve or
deny each certificate request. For this reason, stand-alone CAs are best used with public key
security applications on extranets and the Internet, when users do not have Windows 2000 or
Windows Server 2003 accounts, and when the volume of certificates to be issued and managed
is relatively low.
You must use stand-alone CAs to issue certificates when you are using a third-party directory
service or when Active Directory is not available.
Note
You can use both enterprise and stand-alone certification authorities in your organization.
Table 16.3 lists the options that each type of CA supports.
When you install a CA in your organization, you must specify a location for the database and log
files of the CA. You must also indicate whether you want to store the configuration information for
the CA in a shared folder. Storing the CA configuration information is helpful for backing up and, if necessary,
restoring your CA.
You can choose to copy the naming information and the certificate for the CA to the file system
(the configuration directory is automatically shared by means of a share named certconfig).
The CA database consists of the files listed in the following table.
Table CA Database Files
Database file Purpose
<CA name>.edb The CA store
edb.log The transaction log file for the CA store
res1.log Reservation log file to store transactions if disk space is exhausted
res2.log Reservation log file to store transactions if disk space is exhausted
edb.chk Database checkpoint file
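Because these files together constitute the CA store, back them up as a unit. As a hedged
example (the backup path is hypothetical), Certutil can back up the CA certificate, keys, and
database to a local folder:

rem Back up the CA certificate, private key, and database
certutil -backup C:\CABackup

To back up only the database and log files, certutil -backupDB C:\CABackup can be used instead.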
Protection. Smart cards provide tamper-resistant storage for private keys and other
data. If a smart card is lost or stolen, it is difficult for anyone except the intended user to use
the credentials that it stores.
Isolation. Cryptographic operations are performed on the smart card itself rather than on
the client or on a network server. This isolates security-sensitive data and processes from
other parts of the system.
Portability. Credentials and other private information stored on smart cards can easily be
transported between computers at work, home, or other remote locations.
The number and variety of smart card-enabled applications is growing to meet the needs of
organizations that want to rely on smart cards to enable secure authentication and to facilitate
services.
Before you can deploy smart cards in your organization, you must have a public key infrastructure
(PKI) in place. Next, you need to identify applications to enable for use with smart cards, and plan
how to implement and support a smart card infrastructure before you can take advantage of the
security benefits of smart cards.
Smart cards Hardware tokens containing integrated processors and memory chips that can be
used to store certificates and private keys and to perform public key cryptography operations,
such as authentication, digital signing, and key exchange.
Smart card readers Devices that connect a smart card to a computer. Smart card readers can
also be used to write certificates to the smart card.
Smart card software The software provided by the smart card vendor to manage smart cards.
In some cases, organizations might choose to create their own software tools if customized
functionality is required.
You can also use smart cards for remote access logons, and for Terminal Services and shared
client logons.
of the smart card in your specifications. This is an important factor to consider when you select a
vendor to manufacture the stickers, as the material thickness for smart card chips can vary.
Token-style smart cards
Token-style smart cards are typically the size of a house key or automobile key. They plug
directly into a USB port, providing a more compact solution than separate cards and readers.
Token-style smart cards are ideal for laptop users who want to carry a minimum number of
peripherals, or for workers who use a number of different computers. However, you cannot use
token-style smart cards if your computers do not have USB connections, or if the USB
connections are full or difficult to access.
Memory
Your smart card requires enough memory to store the certificate of the user, the smart card
operating system, and additional applications. Smart cards run embedded operating systems,
and in many cases, a form of file system in which data can be stored. To enable Windows smart
card logon, you must be able to program the card to store a user's key pair, retrieve and store an
associated public key certificate, and perform public and private key operations on behalf of the
user.
To calculate the amount of memory that you need, determine the space requirements for:
User certificates. A certificate typically requires about 1.5 kilobytes (KB). A smart card
logon certificate with a 1,024-bit key typically requires 2.5 KB of space.
The smart card operating system. The Windows for Smart Cards operating system
requires about 15 KB.
Applications required by the smart card vendor. A small application requires between
2 KB and 5 KB.
Your custom applications.
Future applications.
The following figure shows how space is used on a typical 32 KB smart card. The smart card
operating system requires about 15 KB, leaving 17 KB for the file system, which includes space
for the card management software, the certificate, and any other custom applications.
Figure Memory Use on a 32 KB Smart Card
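To illustrate the arithmetic with the figures above (the vendor application size is an assumed
mid-range value from the preceding list):

32 KB total - 15 KB operating system = 17 KB available file system
17 KB - 2.5 KB smart card logon certificate - 5 KB vendor application = 9.5 KB

That leaves roughly 9.5 KB for custom applications and future growth on a 32 KB card.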
It is possible to configure smart card file systems into public and private spaces. For example,
you can define segregated areas for protected information, such as certificates, e-purses, and
entire operating systems, and mark this data as Read Only to ensure the security of the smart
card and restrict the amount of data that can be modified. In addition, some vendors provide
cards with sub-states, such as Add Only, which is useful for organizations that want to restrict the
ability of a user to revise an existing credential, and Update Only, which is useful for
organizations that want to restrict the ability of a user to add new credentials to a card.
The data capacity available on smart cards is increasing as smart card technology improves.
However, storage space on smart cards is expensive. Card vendors often restrict the amount of
storage available to individual applications so that multiple applications or services can be stored
on the card. Therefore, in your vendor specification, define all of your anticipated present and
future card usage requirements and the memory requirements for each certificate and application
that you require. If you plan to use your smart cards for multiple purposes, such as physical
access to facilities and user logon, or to store additional data, you must increase your memory
requirements. Also, when planning storage space on the chip, allocate space for applications that
you are planning for future implementation.
Note
Windows Server 2003 and Windows XP do not support the use of multiple certificates on
a smart card.
Life Expectancy
You must define the length of time for which you will use a smart card before you replace or
upgrade it. Contact your vendor for information about smart card life expectancy based on normal
wear and tear.
In addition, you must take into account your current and future space requirements, including the
anticipated need for additional applications and certificates with larger keys. Anticipate adding
new applications, and potentially issuing new smart cards, over an 18-24 month card lifecycle. In
the future, vendors are likely to introduce smart cards with more memory and other
enhancements for a lower cost.
Also, determine whether you want your smart cards to be reusable in the event that users leave
the organization. Reusing smart cards reduces the costs associated with issuing new ones.
However, the cost associated with removing existing data and writing new data and applications
is often equal to or more than the cost of preparing and issuing new smart cards.
your smart card requirements provide objective standards for measuring and documenting
satisfactory performance.
To minimize user dissatisfaction and maximize manageability, be sure to test the following:
Installation and removal of the smart card software. Make sure the smart cards work
after you install the software. If the installation is faulty, use the Windows Event Viewer to
access error messages that might explain the cause of the failure.
Fit of smart cards in readers. Smart card dimensions, such as thickness, are governed
by international standards. However, some organizations have found that, if the card-to-
reader interface is too tight or abrasive, the cards deteriorate more rapidly.
Reader reliability. To test reliability, create an environment that includes systems that
have slower CPUs and less memory than computers in your organization. Test how well your
smart card readers operate in this environment, as well as in other configurations. You can,
for example, run a number of memory-intensive applications or use the smart cards and
readers over slow connections to evaluate how each combination of smart cards and readers
functions in these conditions. Your smart card service level agreements provide objective
criteria for acceptable and unacceptable performance.
Card production. Slow card production processes can impede your deployment. If your
organization is unable to produce cards efficiently, use a third-party vendor to produce smart
cards.
Ability to deploy multiple types of cards and readers. If you are unable to efficiently
deploy the types of cards, readers, and servers that you require, your service might be
inconsistent and inefficient.
Establishing Certification Authorities
It is important to ensure that your public key infrastructure can support the issuance and
verification of smart card certificates for the users and applications that you have identified. To
ensure that your PKI can support a smart card infrastructure, you must do the following:
Configure your certification authorities (CAs) as enterprise CAs. Windows Server 2003
smart card certificates require enterprise CAs.
Important
CAs that issue smart card certificates need to be trusted in the CA hierarchy and
must be online while users are enrolling.
Make sure that your issuing CAs are installed on servers that have enough storage and
central processing power to support the smart card users in your organization.
Many organizations pre-enroll users for smart card certificates several weeks before they
distribute smart cards to users. The certificate lifetime is determined by the date that you
issue the certificate, not the date that you distribute the card to the user. Therefore, factor any
distribution delays into your certificate lifetime and renewal strategy.
A Windows Server 2003 CA allows you to select a certificate public key length from 384 bits for
minimal security to 16,384 bits for maximum security. For typical logon applications, a 1,024-bit
key is adequate.
You can establish certificate lifetimes that are as long or as short as you need, and you can
configure certificates to be nonrenewable, renewable a finite number of times, or renewable
indefinitely.
To define public key values and certificate lifetimes and renewal policies, take into account:
The physical capacity of your smart cards. Most of the smart cards that are available
today have adequate space for all but the largest certificates.
How you define acceptable logon times. Public key-based authentication often takes
longer than authentication without certificates.
The nature of the business relationship. Smart card certificates issued to permanent
employees usually warrant a longer lifetime and renewal cycle than certificates issued to
short-term workers or to nonemployees.
The level of security that you want to enforce. Highly sensitive operations warrant
larger public key values and, typically, shorter certificate lifetimes.
Section 6
Schema
The set of definitions for the universe of objects that can be stored in a directory. For each object
class, the schema defines which attributes an instance of the class must have, which additional
attributes it can have, and which other object classes can be its parent object class.
Global catalog
A directory database that applications and clients can query to locate any object in a forest. The
global catalog is hosted on one or more domain controllers in the forest. It contains a partial
replica of every domain directory partition in the forest. These partial replicas include replicas of
every object in the forest, as follows: the attributes most frequently used in search operations and
the attributes required to locate a full replica of the object.
In Microsoft Provisioning System, the Exchange server maintains a list of global catalogs, and it
maintains a load balance across global catalogs.
Replication
The process of copying updated data from a data store or file system on a source computer to a
matching data store or file system on one or more destination computers to synchronize the data.
In Active Directory, replication synchronizes schema, configuration, application, and domain
directory partitions between domain controllers. In Distributed File System (DFS), replication
synchronizes files and folders between DFS roots and root targets.
domain controllers within each site, and the site links that support Active Directory replication
between sites.
Domain Controller Capacity Planning
To ensure efficient Active Directory performance, you must determine the appropriate number of
domain controllers for each site and verify that they meet the hardware requirements for Windows
Server 2003. Careful capacity planning for your domain controllers ensures that you do not
underestimate hardware requirements, which can cause poor domain controller performance and
application response time.
Advanced Active Directory Features
Functional levels in Windows Server 2003 Active Directory allow you to enable new features,
such as improved group membership replication, deactivation and redefinition of attributes and
classes in the schema, and forest trust relationships. These features require that all domain
controllers within the participating domain or forest run Windows Server 2003. Part of the Active Directory design
process involves identifying the domain and forest functional levels that your organization
requires. To implement these Windows Server 2003 Active Directory features in your
organization, you must first deploy Windows Server 2003 Active Directory and then raise the
forest and domain to the appropriate functional level.
Determining Your Active Directory Deployment Requirements
The structure of your existing environment determines your strategy for deploying Windows
Server 2003 Active Directory. If you are creating an Active Directory environment and you do not
have an existing domain structure, you must complete your Active Directory design before you
begin creating your Active Directory environment. Then you can deploy a new forest root domain
and deploy the rest of your domain structure according to your design.
Windows Server 2003 Forest Root
To deploy Active Directory, you must first deploy a Windows Server 2003 forest root domain. To
do this, you must configure DNS, deploy forest root domain controllers, configure the site
topology for the forest root domain, and configure operations master roles.
Windows Server 2003 Regional Domains
If you are creating one or more new regional domains in a Windows Server 2003 forest, you must
deploy each regional domain after you deploy your forest root domain. To do this, you must
delegate a DNS zone and deploy domain controllers for each regional domain.
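As a sketch of the DNS delegation step (the server, zone, and host names are hypothetical), you
might add an NS record for the regional subdomain, plus a glue A record, on the DNS server that
hosts the parent zone:

rem Delegate the emea subdomain to a regional domain controller
dnscmd dns1.example.com /RecordAdd example.com emea NS dc01.emea.example.com
rem Glue record so the delegated name server can be resolved
dnscmd dns1.example.com /RecordAdd example.com dc01.emea A 10.0.1.10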
Windows NT 4.0 Domain Upgrade to Windows Server 2003
When you perform an in-place domain upgrade of Windows NT 4.0 domains, you can begin to
use Active Directory without making any modifications to your existing domain structure.
Alternatively, if you do not want to retain your existing domain structure, you can restructure your
Windows NT 4.0 domains to a Windows Server 2003 forest.
6.2 Windows 2000 Domain Upgrade to Windows Server 2003
Upgrading your Windows 2000 domains to Windows Server 2003 domains is an efficient,
straightforward way to take advantage of additional Windows Server 2003 features and
functionality. Upgrading from Windows 2000 to Windows Server 2003 requires minimal network
configuration and has little impact on user operations.
The partial copies of all domain objects included in the global catalog are those most commonly
used in user search operations. These attributes are marked for inclusion in the global catalog as
part of their schema definition. Storing the most commonly searched attributes of all domain
objects in the global catalog provides users with efficient searches without affecting network
performance with unnecessary referrals to domain controllers.
You can manually add or remove other object attributes to the global catalog by using the Active
Directory Schema snap-in.
A global catalog is created automatically on the initial domain controller in the forest. You can add
global catalog functionality to other domain controllers or change the default location of the global
catalog to another domain controller.
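To see which domain controllers currently host the global catalog, a quick check (assuming the
Windows Support Tools are installed) is:

rem List every domain controller in the forest that is a global catalog server
dsquery server -forest -isgc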
A global catalog performs the following directory roles:
Finds objects
A global catalog enables user searches for directory information throughout all domains in a
forest, regardless of where the data is stored. Searches within a forest are performed with
maximum speed and minimum network traffic.
When you search for people or printers from the Start menu or choose the Entire Directory
option within a query, you are searching a global catalog. Once you enter your search
request, it is routed through the default global catalog port (3268) to a global catalog for
resolution.
Supplies user principal name authentication
A global catalog resolves user principal names (UPNs) when the authenticating domain
controller does not have knowledge of the account. For example, if a user's account is
located in example1.microsoft.com and the user decides to log on with a user principal name
of [email protected] from a computer located in example2.microsoft.com, the
domain controller in example2.microsoft.com will be unable to find the user's account, and
will then contact a global catalog to complete the logon process.
Supplies universal group membership information in a multiple domain
environment
Unlike global group memberships, which are stored in each domain, universal group
memberships are only stored in a global catalog. For example, when a user who belongs to a
universal group logs on to a domain that is set to the Windows 2000 native domain functional
level or higher, the global catalog provides universal group membership information for the
user's account at the time the user logs on to the domain.
If a global catalog is not available when a user logs on to a domain set to the functional level
of Windows 2000 native or higher, the computer will use cached credentials to log on the
user if the user has logged on to the domain previously. If the user has not logged on to the
domain previously, the user can only log on to the local computer. However, if a user is a
member of the Domain Admins group, the user can always log on to the domain, even when
a global catalog is not available.
Validates object references within a forest
A global catalog is used by domain controllers to validate references to objects of other
domains in the forest. When a domain controller holds a directory object with an attribute
containing a reference to an object in another domain, this reference is validated using a
global catalog.
common Active Directory queries. The following table will help you determine whether your
multiple-site environment will benefit from using additional global catalogs.
Use a global catalog when: A commonly used application in the site uses port 3268 to
resolve global catalog queries. Advantage: performance improvement. Disadvantage:
additional network traffic due to replication.
Use a global catalog when: A slow or unreliable WAN connection is used to connect to other
sites. Use the same failure and load distribution rules that you used for individual domain
controllers to determine whether additional global catalog servers are necessary in each site.
Advantage: fault tolerance. Disadvantage: additional network traffic due to replication.
Use a global catalog when: Users in the site belong to a Windows 2000 domain running in
native mode. In this case, all users must obtain universal group membership information from
a global catalog server. If a global catalog is not located within the same site, all logon
requests must be routed over your WAN connection to a global catalog located in another
site. (If a domain controller running Windows Server 2003 in the site has universal group
membership caching enabled, all users will instead obtain a current cached listing of their
universal group memberships.) Advantage: fast user logons. Disadvantage: additional
network traffic due to replication.
Note
Network traffic related to global catalog queries generally uses more network resources
than normal directory replication traffic.
With a single forest, users do not need to be aware of directory structure because all users see a
single directory through the global catalog. When adding a new domain to a forest, no additional
trust configuration is required because all domains in a forest are connected by two-way,
transitive trusts. In a forest with multiple domains, configuration changes need to be applied only
once to update all domains.
However, there are scenarios in which you might want to create more than one forest:
When upgrading a Windows NT domain to a Windows Server 2003 forest. You can
upgrade a Windows NT domain to become the first domain in a new Windows Server 2003
forest. To do this, you must first upgrade the primary domain controller in that domain. Then,
you can upgrade backup domain controllers, member servers, and client computers at any
time.
You can also keep a Windows NT domain and create a new Windows Server 2003 forest by
installing Active Directory on a member server running Windows Server 2003.
To provide administrative autonomy. You can create a new forest when you need to
segment your network for purposes of administrative autonomy. Administrators who
currently manage the IT infrastructure for autonomous divisions within the organization
may want to assume the role of forest owner and proceed with their own forest design.
However, in other situations, potential forest owners may choose to merge their
autonomous divisions into a single forest to reduce the cost of designing and operating
their own Active Directory or to facilitate resource sharing
To create a different Domain Name System (DNS) namespace than an existing
forest. You can create a new forest when you need to use a noncontiguous DNS
namespace that is different from an existing forest on your network. It is recommended
that you create a new forest when you want a different DNS namespace rather than
creating additional domain trees with noncontiguous DNS namespaces within an existing
forest.
6.6 Trusts
A trust is a relationship established between domains that enables users in one domain to be
authenticated by a domain controller in the other domain. Trust relationships in Windows NT are
different than in Windows 2000 and Windows Server 2003 operating systems.
6.6.2 Trusts in Windows Server 2003 and Windows 2000 server operating systems
All trusts in a Windows 2000 and Windows Server 2003 forest are transitive, two-way trusts.
Therefore, both domains in a trust relationship are trusted. As shown in the following figure, this
means that if Domain A trusts Domain B and Domain B trusts Domain C, then users from Domain
C can access resources in Domain A (when assigned the proper permissions). Only members of
the Domain Admins group can manage trust relationships.
Forest trust trusted domain objects (TDOs) store additional attributes to identify all of the trusted
namespaces from their partner forest. These attributes include domain tree names, UPN suffixes,
SPN suffixes, and SID namespaces.
If you choose to create each side of the trust separately, then you will need to run the New Trust
Wizard twice, once for each domain. When creating trusts using this method, you will need to
supply the same trust password for each domain. As a security best practice, all trust passwords
should be strong passwords.
If you choose to create both sides of the trust simultaneously, you will need to run the New Trust
Wizard once. When you choose this option, a strong trust password is automatically generated
for you.
You will need the appropriate administrative credentials for each domain between which you will
be creating a trust. Netdom.exe can also be used to create trusts.
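As a hedged sketch of the Netdom approach (the domain names and credentials are
hypothetical), the following creates and then verifies a two-way trust in a single operation:

rem Create a two-way trust between example.com (trusting) and fabrikam.com (trusted)
netdom trust example.com /d:fabrikam.com /add /twoway /userD:FABRIKAM\admin /passwordD:* /userO:EXAMPLE\admin /passwordO:*
rem Verify the trust
netdom trust example.com /d:fabrikam.com /verify

The * causes Netdom to prompt for each password rather than exposing it on the command line.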
All domain trust relationships have only two domains in the relationship: the trusting domain and
the trusted domain.
One-way trust
A one way trust is a unidirectional authentication path created between two domains. This means
that in a one-way trust between Domain A and Domain B, users in Domain A can access
resources in Domain B. However, users in Domain B cannot access resources in Domain A.
A one-way trust can be either nontransitive or transitive, depending on the type of trust being
created.
Two-way trust
All domain trusts in a Windows Server 2003 forest are two-way, transitive trusts. When a new child
domain is created, a two-way, transitive trust is automatically created between the new child
domain and the parent domain. In a two-way trust, Domain A trusts Domain B and Domain B
trusts Domain A. This means that authentication requests can be passed between the two
domains in both directions. Some two-way relationships can be nontransitive or transitive,
depending on the type of trust being created.
A Windows Server 2003 domain can establish a one-way or two-way trust with:
Windows Server 2003 domains in the same forest
Windows Server 2003 domains in a different forest
Windows NT 4.0 domains
Kerberos V5 realms
Transitivity determines whether a trust can be extended outside of the two domains with which it
was formed. A transitive trust can be used to extend trust relationships with other domains; a
nontransitive trust can be used to deny trust relationships with other domains.
Transitive trusts
Each time you create a new domain in a forest, a two-way, transitive trust relationship is
automatically created between the new domain and its parent domain. If child domains are added
to the new domain, the trust path flows upward through the domain hierarchy extending the initial
trust path created between the new domain and its parent domain. Transitive trust relationships
flow upward through a domain tree as it is formed, creating transitive trusts between all domains
in the domain tree.
Authentication requests follow these trust paths, so accounts from any domain in the forest can
be authenticated at any other domain in the forest. With a single logon process, accounts with the
proper permissions can access resources in any domain in the forest.
The diagram displays that all domains in the Domain A tree and all domains in the Domain 1 tree
have transitive trust relationships by default. As a result, users in the Domain A tree can access
resources in domains in the Domain 1 tree and users in the Domain 1 tree can access resources
in the Domain A tree, when the proper permissions are assigned at the resource.
In addition to the default transitive trusts established in a Windows Server 2003 forest, using the
New Trust Wizard, you can manually create the following transitive trusts.
Shortcut trust. A transitive trust between domains in the same domain tree or forest that is
used to shorten the trust path in a large and complex domain tree or forest.
Forest trust. A transitive trust between a forest root domain and a second forest root
domain.
Realm trust. A transitive trust between an Active Directory domain and a Kerberos V5
realm.
Nontransitive trust
A nontransitive trust is restricted to the two domains in the trust relationship and does not flow to
any other domains in the forest. A nontransitive trust can be a two-way or a one-way trust.
Nontransitive trusts are one-way by default, although you can also create a two-way relationship
by creating two one-way trusts. In summary, nontransitive domain trusts are the only form of trust
relationship possible between:
A Windows Server 2003 domain and a Windows NT domain
A Windows Server 2003 domain in one forest and a domain in another forest (when not
joined by a forest trust)
Using the New Trust Wizard, you can manually create the following nontransitive trusts:
External trust. A nontransitive trust created between a Windows Server 2003 domain and a
Windows NT domain, or between a Windows Server 2003 domain and a Windows 2000 or
Windows Server 2003 domain in another forest.
When you upgrade a Windows NT domain to a Windows Server 2003 domain, all existing
Windows NT trusts are preserved intact. All trust relationships between Windows
Server 2003 domains and Windows NT domains are nontransitive.
Realm trust. A nontransitive trust between an Active Directory domain and a Kerberos V5
realm.
When to create an external trust
You can create an external trust to form a one-way or two-way nontransitive trust with domains
outside of your forest. External trusts are sometimes necessary when users need access to
resources located in a Windows NT 4.0 domain or in a domain located within a separate forest
that is not joined by a forest trust as shown in the figure.
When a trust is established between a domain in a particular forest and a domain outside of that
forest, security principals from the external domain can access resources in the internal domain.
Active Directory creates a foreign security principal object in the internal domain to represent
each security principal from the trusted external domain. These foreign security principals can
become members of domain local groups in the internal domain. Domain local groups can have
members from domains outside of the forest.
Directory objects for foreign security principals are created by Active Directory and should not be
manually modified. You can view foreign security principal objects in Active Directory Users and
Computers by enabling advanced features.
In domains with the functional level set to Windows 2000 mixed, it is recommended that you
delete external trusts from a domain controller running Windows Server 2003. External trusts to
Windows NT 4.0 or 3.51 domains can be deleted by authorized administrators on the domain
controllers running Windows NT 4.0 or 3.51. However, only the trusted side of the relationship
can be deleted on the domain controllers running Windows NT 4.0 or 3.51. The trusting side of
the relationship (created in the Windows Server 2003 domain) is not deleted, and although it will
not be operational, the trust will continue to be displayed in Active Directory Domains and Trusts. To
remove the trust completely, you will need to delete the trust from a domain controller running
Windows Server 2003 in the trusting domain. If an external trust is inadvertently deleted from a
domain controller running Windows NT 4.0 or 3.51, you will need to recreate the trust from any
domain controller running Windows Server 2003 in the trusting domain.
Securing external trusts
To improve the security of Active Directory forests, domain controllers running Windows
Server 2003 and Windows 2000 Service Pack 4 (or higher) enable SID filtering on all new
outgoing external trusts by default. By applying SID filtering to outgoing external trusts, you
prevent malicious users who have domain administrator level access in the trusted domain from
granting, to themselves or other user accounts in their domain, elevated user rights to the trusting
domain.
When a malicious user can grant unauthorized user rights to another user, it is known as an
elevation of privilege attack. SID filtering helps mitigate this type of attack.
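As an illustrative sketch (the domain names and credentials are hypothetical), Netdom can report
and set the SID filtering state on an external trust:

rem Display whether SID filtering (quarantine) is enabled for the trust
netdom trust example.com /d:fabrikam.com /quarantine
rem Re-enable SID filtering if it has been turned off
netdom trust example.com /d:fabrikam.com /quarantine:Yes /userD:FABRIKAM\admin /passwordD:*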
An Active Directory container object used within domains. An organizational unit is a logical
container into which users, groups, computers, and other organizational units are placed. It can
contain objects only from its parent domain. An organizational unit is the smallest scope to which
a Group Policy object (GPO) can be linked, or over which administrative authority can be
delegated.
You can further refine your OU structure by creating subtrees of OUs for specific purposes, such
as the application of Group Policy, or to limit the visibility of protected objects so that only certain
users can see them. For example, if you need to apply Group Policy to a select group of users or
resources, you can add those users or resources to an OU and then apply the Group Policy to
that OU. You can also use the OU hierarchy to enable further delegation of administrative control.
The following figure shows a subtree that was created inside the ResourceOU. The subtree includes two
additional OUs: The Print Servers OU includes all the computer accounts of the print servers; the
File Servers OU includes the computer accounts for all of the file servers. This subtree enables
the application of separate Group Policies to each type of server managed in the ResourceOU.
Figure Creating an Organizational Unit Subtree for Application of Group Policy
Note
While there is no technical limit to the number of levels in your OU structure, for the
purpose of manageability, it is recommended that you limit your OU structure to a depth of no
more than 10 levels. There is no technical limit to the number of OUs on each level. Note that
Active Directory-enabled applications might have restrictions on the number of characters
used in the distinguished name (the full LDAP path to the object in the directory), or on the
OU depth within the hierarchy.
The Active Directory organizational unit structure is not intended to be visible to end users. The
organizational unit structure is an administrative tool for service and data administrators and is
easy to change. Continue to review and update your OU structure design to reflect changes in
your administrative structure and to support policy-based administration.
group, place the set of objects to be controlled into an OU, and then delegate administrative tasks
for the OU to that group.
Active Directory enables you to control the administrative tasks that can be delegated at a very
granular level; for example, you can assign one group full control of all objects in an OU; assign
another group the rights only to create, delete, and manage user accounts in the OU; and assign
a third group the right only to reset user account passwords. You can make these permissions
inheritable so that they apply to not only a single OU, but also any OUs that are placed in
subtrees of the OU.
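For example (the OU and group names are hypothetical), Dsacls from the Windows Support
Tools can express this kind of granular delegation from the command line:

rem Let the help desk reset passwords on user objects throughout the OU subtree
dsacls "OU=Accounts,DC=example,DC=com" /I:S /G "EXAMPLE\HelpDesk:CA;Reset Password;user"
rem Let a user-administration group create and delete user accounts in the OU
dsacls "OU=Accounts,DC=example,DC=com" /G "EXAMPLE\UserAdmins:CCDC;user"

The same rights can also be granted through the Delegation of Control Wizard in Active
Directory Users and Computers.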
Default OUs and containers are created during the installation of Active Directory and are
controlled by service administrators. It is best if service administrators continue to control these
containers. If you need to delegate control over objects in the directory, create additional OUs
and place the objects in these OUs. Delegate control over these OUs to the appropriate data
administrators. This makes it possible to delegate control over objects in the directory without
changing the default control given to the service administrators.
The forest owner determines the level of authority that is delegated to an OU owner. This can
range from the ability to create and manipulate objects within the OU to only being allowed to
control a single attribute of a single type of object in the OU. Granting a user the ability to create
an object in the OU implicitly grants that user the ability to manipulate any attribute of any object
that the user creates. In addition, if the object that is created is a container, then the user implicitly
has the ability to create and manipulate any objects that are placed in the container.
The following table lists and describes the child OUs that you can create in an account OU
structure.
Table Child OUs in the Account OU Structure
Users. Contains user accounts for non-administrative personnel.
Service Accounts. Some services that require access to network resources run as user
accounts. This OU is created to separate service user accounts from the user accounts
contained in the Users OU. Also, placing the different types of user accounts in separate OUs
enables you to manage them according to their specific administrative requirements.
Computers. Contains accounts for computers other than domain controllers.
Groups. Contains groups of all types, except for administrative groups, which are managed
separately.
Admins. Contains user and group accounts for data administrators in the account OU structure,
to allow them to be managed separately from regular users. Enable auditing for this OU so that
you can track changes to administrative users and groups.
The following figure shows the administrative group design for an Account OU.
Figure Delegation Model for Account OUs
The owner of the Account OU is the acct_ou_OU_admins group, which is composed of data
administrators. This group has full control of the acct_ou_OU subtree, and is responsible for
creating the standard set of child OUs and the groups to manage them.
Groups that manage the child OUs are granted full control only over the specific class of objects
that they are responsible for managing. For example, acct_ou_group_admins has control only
over group objects.
Note that no separate administrative group manages the Admins OU; rather, it inherits ownership
from its parent OU, so it is managed by acct_ou_OU_admins. The OU owner might choose to
create additional administrative groups, however. For example, the OU owner might create the
optional group acct_ou_helpdesk_admins in the Admins OU to control password resets.
If your organization has a department that manages its own user accounts and exists in more
than one region, you might have a group of data administrators who are responsible for managing
account OUs in more than one domain. If the accounts of the data administrators all exist in a
single domain and you have OU structures in multiple domains to which you need to delegate
control, make those administrative accounts members of global groups and delegate control of
the OU structures in each domain to those global groups, as shown in Figure 2.42
Figure Using Global Groups to Manage OUs in Multiple Domains
If the data administrators' accounts to which you delegate control of an OU structure come from
multiple domains, you must use a universal group. Universal groups can contain users from
different domains and can therefore be used to delegate control in multiple domains.
The following figure shows a configuration in which universal groups are used to manage OUs in multiple
domains.
Figure Using Universal Groups to Manage OUs in Multiple Domains
The resource OU can be located under the domain root or as a child OU of the corresponding
account OU in the OU administrative hierarchy. If the domain spans different countries or
regions, or the domain owner is responsible for a large number of OUs, Windows NT 4.0
resource domain owners might prefer to control resource OUs that are subordinate to account
OUs, to ensure that they have direct access to support from the account OU owner. Resource
OUs do not have any standard child OUs. Computers and groups are placed directly in the
resource OU.
The resource OU owner owns the objects within the OU, but does not own the OU container
itself. Resource OU owners manage only computer and group objects; they cannot create other
classes of objects within the OU, and they cannot create child OUs.
Note
The creator or owner of an object has the ability to set the ACL on the object, regardless
of the permissions that are inherited from the parent container. If a resource OU owner can
reset the ACL on an OU, he or she can create any class of object in the OU, including users.
For this reason, resource OU owners are not permitted to create OUs.
For each resource OU in the domain, create a global group to represent the data administrators
who are responsible for managing the contents of the OU. This group has full control over the
group and computer objects in the OU, but not over the OU container itself.
The following figure shows the administrative group design for a resource OU. The res_ou_OU_admin group
manages its own membership and is located in the resource OU.
Figure Resource OU Administrative Group Design
Placing the computer accounts into a resource OU gives the OU owner control over the account
objects but does not make the OU owner an administrator of the computers. In an Active
Directory domain, the Domain Admins group is, by default, placed in the local Administrators
group on all computers. This means that service administrators have control over those
computers. If resource OU owners require administrative control over the computers in their OU,
the forest owner can apply a Restricted Groups Group Policy to make the resource OU owner a
member of the Administrators group on the computers in that OU.
1. Create an OU for each source resource domain that is managed independently and that
is to be migrated into this domain.
o If you are restructuring multiple resource domains that are managed by the same
IT group, create a single resource OU and migrate the objects from all domains into that
OU.
o If the source resource domains are managed by former Master User Domain
owners, you can migrate objects into the Computers and Groups child OUs of the
corresponding account OU subtree, instead of creating a separate resource OU under
the account OU.
2. Place the resource OU under the domain root or under an account OU, depending on
whether the resource OU owner reports to the account OU owner or directly to the forest
owner. The source resource domain owner becomes the resource OU owner.
If the domain is not the target of a resource domain restructure, create resource OUs as needed
based on the requirements of each group for autonomy in the management of data and
equipment.
Section 7
The ability to rename a domain provides you with the flexibility to make important changes to your
forest structure and namespace as the needs of your organization change. Renaming domains
can accommodate acquisitions, mergers, name changes, or reorganizations. Domain rename
allows you to:
Restructure the position of any domain in the forest (except the forest root domain).
Change the DNS and NetBIOS names of any domain in the forest.
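At a high level, the domain rename procedure is driven by the Rendom.exe tool. A hedged
outline of the command sequence follows (the DNS names passed to Gpfixup are hypothetical;
consult the domain rename documentation before attempting this in production):

rem Generate Domainlist.xml describing the current forest, then edit it with the new names
rendom /list
rendom /upload
rendom /prepare
rendom /execute
rem Repair Group Policy references to the old domain name, then clean up
gpfixup /olddns:old.example.com /newdns:new.example.com
rendom /clean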
Different levels of domain functionality and forest functionality are available depending on your
environment.
If all domain controllers in your domain or forest are running Windows Server 2003 and the
functional level is set to Windows Server 2003, all domain- and forest-wide features are available.
When Windows NT 4.0 or Windows 2000 domain controllers are included in your domain or forest
with domain controllers running Windows Server 2003, Active Directory features are limited.
The concept of enabling additional functionality in Active Directory exists in Windows 2000 with
mixed and native modes. Mixed-mode domains can contain Windows NT 4.0 backup domain
controllers and cannot use Universal security groups, group nesting, and security ID (SID) history
capabilities. When the domain is set to native mode, Universal security groups, group nesting,
and SID history capabilities are available. Domain controllers running Windows 2000 Server are
not aware of domain and forest functionality.
User password on InetOrgPerson object: Disabled at the Windows 2000 mixed and
Windows 2000 native domain functional levels; Enabled at the Windows Server 2003 domain
functional level.
Domain rename: Disabled at the Windows 2000 forest functional level; Enabled at the
Windows Server 2003 forest functional level.
InetOrgPerson objectClass change: Disabled at the Windows 2000 forest functional level;
Enabled at the Windows Server 2003 forest functional level.
If the application directory partition is a child of a domain directory partition, by default, the parent
domain directory partition becomes the security descriptor reference domain. If the application
directory partition is a child object of another application directory partition, the security descriptor
reference domain of the parent application directory partition becomes the reference domain of
the new, child, application directory partition. If the new application directory partition is created
as the root of a new tree, then the forest root domain is used as the default security descriptor
reference domain.
You can manually specify a security descriptor reference domain by using Ntdsutil. However, if
you plan to change the default security descriptor reference domain of a particular application
directory partition, you should do so before creating the first instance of that partition. To do this,
you must prepare the cross-reference object and change the default security descriptor reference
domain before completing the application directory partition creation process.
7.6.5 Identify the applications that use the application directory partition
To determine what application directory partitions are hosted on a computer, refer to the list on
the first page of the Active Directory Installation Wizard. If the list does not provide enough
information to identify the programs using a particular application directory partition, you may be
able to identify them in one of the following ways:
Speak to a member of the Enterprise Admins group.
Consult the network change control records for your organization.
Use LDP or ADSI Edit to view the data contained in the partition.
7.6.8 Remove the application directory partition using the tool provided or use Ntdsutil
Refer to the application's documentation for information about removing application directory
partitions that were created and used by that application.
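If the application does not provide a removal tool, a sketch of the Ntdsutil sequence (the server
and partition names are hypothetical) looks like the following, entered at the Ntdsutil interactive
prompts:

ntdsutil
domain management
connections
connect to server dc01.example.com
quit
delete nc dc=appdata,dc=example,dc=com
quit
quit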
You can also configure how long a domain controller waits before sending the subsequent
change notification to its remaining replication partners. These delays can be set for any directory
partition (including domain directory partitions) on a particular domain controller.
Schema master
The Schema master domain controller controls all updates and modifications to the schema. To
update the schema of a forest, you must have access to the schema master. There can be only
one schema master in the entire forest.
Domain naming master
The domain controller holding the domain naming master role controls the addition or removal of
domains in the forest. There can be only one domain naming master in the entire forest.
Domain-wide operations master roles
Every domain in the forest must have the following roles:
Relative ID (RID) master
Primary domain controller (PDC) emulator master
Infrastructure master
These roles must be unique in each domain. This means that each domain in the forest can have
only one RID master, PDC emulator master, and infrastructure master.
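To confirm which domain controllers currently hold the operations master roles, you can run the
following (Netdom is part of the Windows Support Tools):

rem List the holders of all five operations master roles
netdom query fsmo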
RID master
The RID master allocates sequences of RIDs to each of the various domain controllers in its
domain. At any time, there can be only one domain controller acting as the RID master in each
domain in the forest.
Whenever a domain controller creates a user, group, or computer object, it assigns the object a
unique security ID (SID). The SID consists of a domain SID, which is the same for all SIDs created
in the domain, and a RID, which is unique for each SID created in the domain.
To move an object between domains (using Movetree.exe), you must initiate the move on the
domain controller acting as the RID master of the domain that currently contains the object.
PDC emulator master
If the domain contains computers that are not running Windows 2000 or Windows XP
Professional client software, or if it contains Windows NT backup domain controllers (BDCs), the PDC emulator
master acts as a Windows NT primary domain controller. It processes password changes from
clients and replicates updates to the BDCs. At any time, there can be only one domain controller
acting as the PDC emulator master in each domain in the forest.
By default, the PDC emulator master is also responsible for synchronizing the time on all domain
controllers throughout the domain. The PDC emulator of a domain gets its clock set to the clock
on an arbitrary domain controller in the parent domain. The PDC emulator in the parent domain
should be configured to synchronize with an external time source. You can synchronize the time
on the PDC emulator with an external server by executing the "net time" command with the
following syntax:
net time \\ServerName /setsntp:TimeSource
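For example (the server name is a placeholder; time.windows.com is one commonly used SNTP
source):

net time \\DC01 /setsntp:time.windows.com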
The end result is that the time of all computers running Windows Server 2003 or Windows 2000
in the entire forest is within seconds of each other.
The PDC emulator receives preferential replication of password changes performed by other
domain controllers in the domain. If a password was recently changed, that change takes time to
replicate to every domain controller in the domain. If a logon authentication fails at another
domain controller due to a bad password, that domain controller will forward the authentication
request to the PDC emulator before rejecting the logon attempt.
The domain controller configured with the PDC emulator role supports two authentication
protocols:
the Kerberos V5 protocol
the NTLM protocol
Infrastructure master
At any time, there can be only one domain controller acting as the infrastructure master in each
domain. The infrastructure master is responsible for updating references from objects in its
domain to objects in other domains. The infrastructure master compares its data with that of a
global catalog. Global catalogs receive regular updates for objects in all domains through
replication, so the global catalog data will always be up to date. If the infrastructure master finds
data that is out of date, it requests the updated data from a global catalog. The infrastructure
master then replicates that updated data to the other domain controllers in the domain.
The infrastructure master is also responsible for updating the group-to-user references whenever
the members of groups are renamed or changed. When you rename or move a member of a
group (and that member resides in a different domain from the group), the group may temporarily
appear not to contain that member. The infrastructure master of the group's domain is
responsible for updating the group so it knows the new name or location of the member. This
prevents the loss of group memberships associated with a user account when the user account is
renamed or moved. The infrastructure master distributes the update via multimaster replication.
There is no compromise to security during the time between the member rename and the group
update. Only an administrator looking at that particular group membership would notice the
temporary inconsistency.
Sites and subnets are represented in Active Directory by site and subnet objects, which you
create through Active Directory Sites and Services. Each site object is associated with one or
more subnet objects.
Object
An entity, such as a file, folder, shared folder, printer, or Active Directory object, described by a
distinct, named set of attributes. For example, the attributes of a File object include its name,
location, and size; the attributes of an Active Directory User object might include the user's first
name, last name, and e-mail address.
For OLE and ActiveX, an object can also be any piece of information that can be linked to, or
embedded into, another object.
connections of more than three hops, the topology can include shortcut connections across the
ring. The KCC updates the replication topology regularly.
Determining when intrasite replication occurs
Directory updates made within a site are likely to have the most direct impact on local clients, so
intrasite replication is optimized for speed. Replication within a site occurs automatically on the
basis of change notification. Intrasite replication begins when you make a directory update on a
domain controller. By default, the source domain controller waits 15 seconds and then sends an
update notification to its closest replication partner. If the source domain controller has more than
one replication partner, subsequent notifications go out by default at 3-second intervals to each
partner. After receiving notification of a change, a partner domain controller sends a directory
update request to the source domain controller. The source domain controller responds to the
request with a replication operation. The 3-second notification interval prevents the source
domain controller from being overwhelmed with simultaneous update requests from its replication
partners.
For some directory updates in a site, the 15 second waiting time does not apply and replication
occurs immediately. Known as urgent replication, this immediate replication applies to critical
directory updates, including the assigning of account lockouts and changes in the account lockout
policy, the domain password policy, or the password on a domain controller account.
Replication between sites
Active Directory handles replication between sites, or intersite replication, differently than
replication within sites because bandwidth between sites is usually limited. The Active Directory
KCC builds the intersite replication topology using a least-cost spanning tree design. Intersite
replication is optimized for bandwidth efficiency, and directory updates between sites occur
automatically based on a configurable schedule. Directory updates replicated between sites are
compressed to preserve bandwidth.
Building the intersite replication topology
Active Directory automatically builds the most efficient intersite replication topology using
information you provide (through Active Directory Sites and Services) about your site
connections. The directory stores this information as site link objects. One domain controller per
site, called the intersite topology generator, is assigned to build the topology. A least-cost
spanning tree algorithm is used to eliminate redundant replication paths between sites. The
intersite replication topology is updated regularly to respond to any changes that occur in the
network. You can control intersite replication through the information you provide when you
create your site links.
Intersite topology generator
An Active Directory process that runs on one domain controller in a site. It considers the cost of
intersite connections, checks whether previously available domain controllers are no longer
available, and checks whether new domain controllers have been added. The Knowledge
Consistency Checker (KCC) then updates the intersite replication topology accordingly.
Determining when intersite replication occurs
Active Directory preserves bandwidth between sites by minimizing the frequency of replication
and by allowing you to schedule the availability of site links for replication. By default, intersite
replication across each site link occurs every 180 minutes (3 hours). You can adjust this
frequency to match your specific needs. Be aware that increasing this frequency increases the
amount of bandwidth used by replication. In addition, you can schedule the availability of site links
for use by replication. By default, a site link is available to carry replication traffic 24 hours a day,
7 days a week. You can limit this schedule to specific days of the week and times of day. You
can, for example, schedule intersite replication so that it only occurs after normal business hours.
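To observe or trigger replication while testing such a schedule, Repadmin from the Windows
Support Tools can be used; a sketch with hypothetical server and domain names:

rem Show inbound replication status for a domain controller
repadmin /showrepl dc01.example.com
rem Force immediate replication of the domain partition from dc01 to dc02
repadmin /replicate dc02.example.com dc01.example.com dc=example,dc=com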
Section 8
Administrators can use access control to manage user access to shared resources for security
purposes. In Active Directory, access control is administered at the object level by setting
different levels of access, or permissions, to objects, such as Full Control, Write, Read, or No
Access. Access control in Active Directory defines how different users can use Active Directory
objects. By default, permissions on objects in Active Directory are set to the most secure setting.
The elements that define access control permissions on Active Directory objects include security
descriptors, object inheritance, and user authentication.
When a user logs on, the Local Security Authority (LSA) authenticates the user and builds the
security context that travels with the user. Two elements of this context matter most for access
control:
Access token. When a user is authenticated, the LSA creates a security access token for
that user. An access token contains the user's name, the groups to which that user belongs,
a SID for the user, and all of the SIDs for the groups to which the user belongs. If you add a
user to a group after the user's access token has been issued, the user must log off and log on
again before the access token is updated.
Security ID (SID). A SID is a data structure of variable length that identifies user, group,
and computer accounts. Every account on a network is issued a unique SID when the account
is first created, and internal processes in Windows refer to an account's SID rather than the
account's user or group name. Active Directory automatically assigns SIDs to security principal
objects (accounts that can be assigned permissions, such as computer, group, or user
accounts) at the time they are created. Once a SID is issued to the authenticated user, it is
attached to the user's access token.
The information in the access token is used to determine a user's level of access to objects
whenever the user attempts to access them. The SIDs in the access token are compared with the
list of SIDs that make up the DACL for the object to ensure that the user has sufficient permission
to access the object. This is because the access control process identifies user accounts by SID
rather than by name.
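As an illustration of how accounts map to SIDs, the following minimal WMI sketch (VBScript) looks up the SID for a local account; the account name is a hypothetical example:

strUser = "Administrator"
Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
Set colAccounts = objWMI.ExecQuery( _
    "SELECT Name, SID FROM Win32_UserAccount WHERE Name = '" & strUser & "'")
For Each objAccount In colAccounts
    ' Access control decisions compare this SID, not the account name
    WScript.Echo objAccount.Name & " -> " & objAccount.SID
Next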
A particularly useful type of directory object contained within domains is the organizational unit.
Organizational units are Active Directory containers into which you can place users, groups,
computers, and other organizational units. An organizational unit cannot contain objects from
other domains.
An organizational unit is the smallest scope or unit to which you can assign Group Policy settings
or delegate administrative authority. Using organizational units, you can create containers within
a domain that represent the hierarchical, logical structures within your organization. You can then
manage the configuration and use of accounts and resources based on your organizational
model.
Organizational units can contain other organizational units. A hierarchy of containers can be
extended as necessary to model your organization's hierarchy within a domain.
Using organizational units will help you minimize the number of domains required for your
network.
You can use organizational units to create an administrative model that can be scaled to any size.
A user can have administrative authority for all organizational units in a domain or for a single
organizational unit. An administrator of an organizational unit does not need to have
administrative authority for any other organizational unit in the domain.
The default groups that are placed in the Builtin container of Active Directory Users and Computers
are:
Account Operators
Administrators
Backup Operators
Guests
Incoming Forest Trust Builders (only appears in the forest root domain)
Network Configuration Operators
Performance Monitor Users
Performance Log Users
Pre-Windows 2000 Compatible Access
Print Operators
Remote Desktop Users
Replicator
Server Operators
Users
8.2.2 Summary
To create a group in Windows Server 2003, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, right-click the folder in which you want to add the new group, point to
New, and then click Group.
3. Type the name of the new group. By default, the name that you type is also entered as the
pre-Microsoft Windows 2000 name of the new group.
4. Under Group scope, click the option that you want, and then under Group type, click
the option that you want.
5. Click OK.
To add a member to a group, follow these steps:
1. Click Start, point to All Programs, point to Administrative Tools, and then click Active
Directory Users and Computers.
2. In the console tree, expand DomainName, where DomainName is the name of your
domain.
3. Click the folder that contains the group where you want to add a member.
4. In the right pane, right-click the group where you want to add a member, and then click
Properties.
5. Click the Members tab, and then click Add.
6. In the Select User, Contacts, or Computers dialog box, type the names of the users
and computers that you want to add, and then click OK.
7. Click OK.
Note: In addition to users and computers, membership in a particular group can include
contacts and other groups.
To find a group, right-click your domain in Active Directory Users and Computers, click Find,
and then, in the Name box, type the name of the group that you want to find and click Find Now.
Note: For more powerful search options, click the Advanced tab, and then specify the
search conditions that you want.
Note: The Member Of tab for a user displays a list of groups in the domain where the
user account is located. Active Directory does not display groups in trusted domains of
which the user is a member.
Using nesting, you can add a group as a member of another group. You nest groups to
consolidate member accounts and reduce replication traffic.
Nesting options depend on whether the domain functionality of your Windows Server 2003
domain is set to Windows 2000 native or Windows 2000 mixed.
In domains set to the Windows 2000 native functional level (and for distribution groups in
domains set to the Windows 2000 mixed functional level), group nesting follows these rules:
Groups with universal scope can have the following members: accounts, computer
accounts, other groups with universal scope, and groups with global scope from any domain.
Groups with global scope can have the following members: accounts from the same
domain and other groups with global scope from the same domain.
Groups with domain local scope can have the following members: accounts, groups with
universal scope, and groups with global scope, all from any domain. This group can also
have as members other groups with domain local scope from within the same domain.
8.6 Authentication
Authentication is the process of verifying that an entity or object is who or what it claims to be.
Examples include confirming the source and integrity of information, such as verifying a digital
signature, or verifying the identity of a user or computer.
Authentication protocols overview
Authentication is a fundamental aspect of system security. It confirms the identity of any user
trying to log on to a domain or access network resources. Windows Server 2003 family
authentication enables single sign-on to all network resources. With single sign-on, a user can log
on to the domain once, using a single password or smart card, and authenticate to any computer
in the domain.
Users who use a domain account do not see network authentication. Users who use a local
computer account must provide credentials (such as a user name and password) every time they
access a network resource. By using the domain account, the user has credentials that can be
used for single sign-on.
The Stored User Names and Passwords feature also stores saved credentials as part of a user's
profile. This means that these user names and passwords travel with the user from computer to
computer anywhere on the network.
Section 9
Core Group Policy, or the Group Policy engine, is the framework that handles common
functionality across Administrative Template settings and other client-side extensions, interacting
with several other components as part of processing policy settings. You use Group Policy
Management Console (GPMC) to create, view, and manage GPOs and Group Policy Object
Editor to set and configure the policy settings in GPOs.
Group Policy Components
By linking GPOs to sites, domains, and OUs, you can implement Group Policy settings for as
broad or as narrow a portion of the organization as you want. GPO links affect users and
computers in the following ways:
A GPO linked to a site applies to all users and computers in the site.
A GPO linked to a domain applies directly to all users and computers in the domain and
by inheritance to all users and computers in child OUs. Note that policy is not inherited
across domains.
A GPO linked to an OU applies directly to all users and computers in the OU and by
inheritance to all users and computers in child OUs.
When a GPO is created, it is stored in the domain. When the GPO is linked to an Active Directory
container, such as an OU, the link is a component of that container, not a component of the GPO.
As an example of how GPOs can be linked to sites, domains, and OUs, consider a configuration
in which the Servers OUs have the following GPOs applied: A1, A2, A3, A4, and A6, and the
Marketing OUs have the following GPOs applied: A1, A2, A3, and A5.
In GPMC, delegation is simplified because it manages the various Access Control Entries (ACEs)
required for a task as a single bundle of permissions for the task. You can also use the Access
Control List (ACL) editor to view or manage these permissions manually.
The underlying mechanism for achieving delegation is the application of the appropriate DACLs
to GPOs and other objects in Active Directory. This mechanism is identical to using security
groups to filter the application of GPOs to various users. You can also specify Group Policy to
control who can use MMC snap-ins. For example, you can use Group Policy to manage the rights
to create, configure, and use MMC consoles, and to control access to individual snap-ins.
Active Directory
Active Directory is the Windows 2000 Server and Windows Server 2003 directory service that
stores information about all objects on the computer network and makes this information easy for
administrators and users to find and apply. With Active Directory, users can gain access to
resources anywhere on the network with a single logon. Similarly, administrators have a single
point of administration for all objects on the network, which can be viewed in a hierarchical
structure. In a network environment, Group Policy depends on Active Directory as the targeting
framework that allows you to link GPOs to specific Active Directory containers such as sites,
domains, or OUs.
In a stand-alone environment without Active Directory, you can use Local Group Policy objects to
configure settings on individual computers.
Replication
Group Policy depends on other technologies in order to properly replicate between domain
controllers in a network environment. A GPO is a virtual object stored in both Active Directory and
the Sysvol of a domain controller. Property settings, stored in the Group Policy container, are
replicated through Active Directory replication. Replication automatically copies the changes that
originate on a writable directory partition replica to all other domain controllers that hold the same
directory partition replica. More specifically, a destination domain controller pulls these changes
from the source domain controller. Data settings, stored in the Sysvol as the Group Policy
template, are replicated through the File Replication Service (FRS), which provides multi-master
file replication for designated directory trees between designated servers running Windows
Server 2003. The Group Policy container stores GPO properties, including information about
version, GPO status, and a list of components that have settings in the GPO. The Group Policy
template is a directory structure within the file system that stores Administrative Template-based
policy settings, security settings, script files, and information regarding applications that are
available for software installation. The Group Policy template is located in Sysvol in the \Policies
sub-directory for its domain. GPOs are identified by their globally unique identifiers (GUIDs) and
stored at the domain level. The settings from a GPO are only applied when the Group Policy
container and Group Policy template are synchronized.
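For example, assuming a domain named contoso.com, the Group Policy template for the Default
Domain Policy (whose well-known GUID is given later in this section) would be stored at:

\\contoso.com\sysvol\contoso.com\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}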
DFS publishing
The Sysvol folder is shared on each domain controller and is accessible through the UNC path
\\dcname.domainname\sysvol.
The Sysvol is also published as a domain-based Distributed File System (DFS) share. This allows
clients to access the Sysvol by using the generic path \\domainname\sysvol. A request for a DFS
referral for \\domainname\sysvol will always return a replica in the same Active Directory site as
the client if one is available. This is the mechanism that the Group Policy client-side extensions
use to retrieve a local copy of the Group Policy template information.
How Core Group Policy Works
Core Group Policy or the Group Policy engine is the infrastructure that processes Group Policy
components including server-side snap-in extensions and client-side extensions. You use
administrative tools such as Group Policy Object Editor and Group Policy Management Console
to configure and manage policy settings.
At a minimum, Group Policy requires Windows 2000 Server with Active Directory installed and
Windows 2000 clients. Fully implementing Group Policy to take advantage of all available
functionality and the latest policy settings depends on a number of factors including:
Windows Server 2003 with Active Directory installed and with DNS properly configured.
Windows XP client computers.
Group Policy Management Console (GPMC) for administration.
The following table describes the components that interact with the Group Policy engine.
Core Group Policy Components
Server (domain controller): In an Active Directory forest, the domain controller is a server that
contains a writable copy of the Active Directory database, participates in Active Directory
replication, and controls access to network resources.
Active Directory: Active Directory, the Windows-based directory service, stores information
about objects on a network and makes this information available to users and network
administrators. Administrators link Group Policy objects (GPOs) to Active Directory containers
such as sites, domains, and organizational units (OUs) that include user and computer objects.
In this way, policy settings can be targeted to users and computers throughout the organization.
Sysvol: The Sysvol is a set of folders containing important domain information that is stored in
the file system rather than in the directory. The Sysvol folder is, by default, stored in a subfolder
of the systemroot folder (%systemroot%\sysvol\sysvol) and is automatically created when a
server is promoted to a domain controller. The Sysvol contains the largest part of a GPO: the
Group Policy template, which includes Administrative Template-based policy settings, security
settings, script files, and information regarding applications that are available for software
installation.
Resultant Set of Policy (RSoP) infrastructure: Group Policy processing information is collected
and stored in a Common Information Model Object Management (CIMOM) database on the local
computer. This information, such as the list, content, and logging of processing details for each
GPO, can then be accessed by tools using Windows Management Instrumentation (WMI).
WMI: WMI is a management infrastructure that supports monitoring and controlling of system
resources through a common set of interfaces and provides a logically organized, consistent
model of Windows operation, configuration, and status. WMI makes data about a target computer
available for administrative use. Such data can include hardware and software inventory,
settings, and configuration information. For example, WMI exposes hardware configuration data
such as CPU, memory, disk space, and manufacturer, as well as software configuration data
from the registry, drivers, file system, Active Directory, the Windows Installer service, networking
configuration, and application data. WMI filtering in Windows Server 2003 allows you to create
queries based on this data. These queries (WMI filters) determine which users and computers
receive all of the policy configured in the GPO where you create the filter.
Windows Server 2003 collects Group Policy processing information and stores it in a WMI
database on the local computer. (The WMI database is also known as the CIMOM database.)
This information, such as the list, content, and logging of processing details for each GPO, can
then be accessed by tools using WMI.
In Group Policy Results, RSoP queries the WMI database on the target computer, receives
information about the policies and displays it in GPMC. In Group Policy Modeling, RSoP
simulates the application of policy using the Resultant Set of Policy Provider on a domain
controller. Resultant Set of Policy Provider simulates the application of GPOs and passes them to
virtual CSEs on the domain controller. The results of this simulation are stored to a local WMI
database on the domain controller before the information is passed back and displayed in GPMC
(or the RSoP snap-in). This is explained in greater detail in the following section.
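A minimal sketch (VBScript) of reading this data directly, assuming the RSoP logging namespace root\RSOP\Computer has been populated on the local computer:

Set objWMI = GetObject("winmgmts:\\.\root\RSOP\Computer")
Set colGPOs = objWMI.ExecQuery("SELECT * FROM RSOP_GPO")
For Each objGPO In colGPOs
    ' Each RSOP_GPO instance describes one GPO that was processed for this computer
    WScript.Echo objGPO.Name
Next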
The Group Policy container stores GPO attributes, including the following:
Status information. Indicates whether the user or computer portion of the GPO is
enabled or disabled.
List of components. Lists the components (extensions) that have settings in the GPO.
These attributes are gPCMachineExtensionNames and gPCUserExtensionNames.
File system path. Specifies the Universal Naming Convention (UNC) path to the GPO's
folder in the Sysvol. This attribute is gPCFileSysPath.
Functionality version. Gives the version of the tool that created the GPO. Currently, this
is version 2. This attribute is gPCFunctionalityVersion.
WMI filter. Contains the distinguished name of the WMI filter. This attribute is
gPCWQLFilter.
System Container
Each Windows Server 2003 domain contains a System container. The System container stores
per-domain configuration settings, including GPO property settings, Group Policy container
settings, IP Security settings, and WMI policy settings. IP Security and WMI policy are deployed
to client computers through the GPO infrastructure.
The following subcontainers of the System container hold GPO-related settings:
Policies. This object contains groupPolicyContainer objects listed by their unique name.
Each groupPolicyContainer object holds subcontainers for selected computer and user policy
settings.
Domains, OUs, and Sites. These objects contain two GPO property settings, gPLink and
gPOptions.
Default Domain Policy. This object contains the AppCategories container, which is part
of the Group Policy Software installation extension.
IP Security. This object contains IP Security policy settings that are linked to a GPO. The
linked IP Security policy is applied to the recipients (user or computer) of the GPO.
WMIPolicy. This object contains WMI filters that can be applied to GPOs. WMI filters
contain one or more Windows Query Language (WQL) statements.
System\Policies Container
The System container is a top level container found in each domain naming context. It is normally
hidden from view in the Active Directory Users and Computers snap-in but can be made visible
by selecting "Advanced Features" from the snap-in View menu inside MMC. (Objects appear
hidden in the Active Directory Users and Computers snap-in when they have the property
showInAdvancedViewOnly = TRUE.) Group Policy information is stored in the Policies
subcontainer of this container. Each GPO is identified by a GroupPolicyContainer object stored
within the Policies container.
The Group Policy container is located in the Domain_Name/System/Policies container. Each
Group Policy container is given a common name (CN) and this name is also assigned as the
container name. For example, the name attribute of a Group Policy container, might be:
{923B9E2F-9757-4DCF-B88A-1136720B5AF2}, which is also assigned to the Group Policy
containers CN attribute.
The default GPOs are assigned the same Group Policy container CN on all domains. All other
GPOs are assigned a unique CN. The default GPOs and their Group Policy container common
names are:
Default Domain Policy: {31B2F340-016D-11D2-945F-00C04FB984F9}.
Default Domain Controllers Policy: {6AC1786C-016F-11D2-945F-00C04fB984F9}.
Knowing the common names of the default GPOs will help you distinguish them from non-default
GPOs.
There are also a number of optional attributes inherited from the top class, and others that are
assigned directly to the Group Policy container. Many optional attributes are required in order for
the Group Policy container to function properly. For example, the gPCFileSysPath optional
attribute must be present or the Group Policy container will not be linked to its corresponding
Group Policy template.
9.3.5 How WMIPolicy Objects are Stored and Associated with Group Policy Container
Objects
A single WMI filter can be assigned to a Group Policy container. The Group Policy container
stores the distinguished name of the filter in the gPCWQLFilter attribute and locates the assigned
filter in the System/WMIPolicy/SOM container. Each Windows Server 2003
domain stores its WMI filters in this Active Directory container. Each WMI filter stored in the SOM
container lists the rules that define the WMI filter. Each rule is listed separately. For example,
consider a WMI filter containing the following three WQL queries:
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{5E076CF2-EFED-43A2-A623-
13E0D62EC7E0}"
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{242365CD-80F2-11D2-989A-
00C04F7978A9}"
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{00000409-78E1-11D2-B60F-
006097C998E7}"
Three WMI rules are defined in the details of the filter. Each rule contains a number of attributes,
including the query language (WQL) and the WMI namespace queried by the rule.
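To see how such a rule evaluates on a given computer, you can run the same WQL query against the local WMI repository. A minimal sketch (VBScript) using the first query above:

Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
Set colItems = objWMI.ExecQuery("SELECT * FROM Win32_Product WHERE " & _
    "IdentifyingNumber = '{5E076CF2-EFED-43A2-A623-13E0D62EC7E0}'")
If colItems.Count > 0 Then
    WScript.Echo "Filter would evaluate to TRUE on this computer."
Else
    WScript.Echo "Filter would evaluate to FALSE on this computer."
End If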
The Group Policy template is located in the Sysvol, in the \Policies subdirectory for its domain.
The Group Policy template for the most part stores the actual data for the policy extensions: for
example, the Security Settings .inf file, Administrative Template-based policy settings (.adm and
.pol files), applications available for the Group Policy Software installation extension, and
potentially scripts.
The User and Machine folders are created at install time, and the other folders are created as
needed when policy is set.
The permissions of each Group Policy template reflect the read and write permissions applied to
the GroupPolicyContainer object through the Group Policy Object Editor. These permissions are
maintained automatically.
Application of GPOs to targeted users and computers relies on many interactive processes. This
section explains how GPOs are applied and filtered to Active Directory containers such as sites,
domains, and OUs. It includes information about how the Group Policy engine processes GPOs
in conjunction with CSEs. In addition, it explains how Group Policy is replicated among domain
controllers.
A GPO is not applied to a user or computer in the following cases:
The computer or user does not have permission to read and apply the GPO. You
control permission to a GPO through security filtering, as explained in the following section.
A WMI filter applied to the GPO evaluates to false on the client computer. A WMI filter
must evaluate to true before the Group Policy engine will process the GPO, as
explained in the following section.
Security Filtering
Security filtering is a way of refining which users and computers will receive and apply the
settings in a GPO. By using security filtering to specify that only certain security principals within a
container where the GPO is linked apply the GPO, you can narrow the scope of a GPO so that it
applies only to a single group, user, or computer. Security filtering determines whether the GPO
as a whole applies to groups, users, or computers; it cannot be used selectively on different
settings within a GPO.
In order for the GPO to apply to a given user or computer, that user or computer must have both
Read and Apply Group Policy (AGP) permissions on the GPO, either explicitly or effectively
through group membership.
By default, all GPOs have Read and AGP both Allowed for the Authenticated Users group. The
Authenticated Users group includes both users and computers. This is how all authenticated
users receive the settings of a new GPO when it is applied to an organizational unit, domain or
site. Therefore, the default behavior is for every GPO to apply to every Authenticated User. By
default, Domain Admins, Enterprise Admins, and the local system have full control permissions,
without the Apply Group Policy access-control entry (ACE). However, administrators are
members of Authenticated Users, which means that they will receive the settings in the GPO by
default.
These permissions can be changed to limit the scope to a specific set of users, groups, or
computers within the organizational unit, domain, or site. The Group Policy Management Console
manages these permissions as a single unit, and displays the security filtering for the GPO on the
GPO Scope tab. In GPMC, groups, users, and computers can be added or removed as security
filters for each GPO.
When a GPO that is linked to a WMI filter is applied on the target computer, the filter is evaluated
on the target computer. If the WMI filter evaluates to false, the GPO is not applied (except if the
client computer is running Windows 2000, in which case the filter is ignored and the GPO is
always applied). If the WMI filter evaluates to true, the GPO is applied.
The WMI filter is a separate object from the GPO in the directory. A WMI filter must be linked to a
GPO in order to apply. Each GPO can have only one WMI filter; however the same WMI filter can
be linked to multiple GPOs. WMI filters, like GPOs, are stored only in domains. A WMI filter and
the GPO it is linked to must be in the same domain.
Windows XP clients support Fast Logon Optimization in any domain environment. Fast Logon
Optimization can be disabled with the following policy setting:
Computer Configuration\Administrative Templates\System\Logon\ Always wait for the
network at computer startup and logon.
Note that Fast Logon Optimization is not a feature of Windows Server 2003.
Folder Redirection and Software Installation Policies
Note that when Fast Logon Optimization is on, a user might need to log on to a computer twice
before folder redirection policies and software installation policies are applied. This occurs
because the application of these types of policies requires synchronous policy application.
During a policy refresh (which is asynchronous), the system sets a flag indicating that the
application of folder redirection or a software installation policy is required. The flag forces
synchronous application of the policy at the user's next logon.
Time Limit for Processing of Group Policy
Under synchronous processing, there is a time limit of 60 minutes for all of Group Policy to finish
processing on the client computer. Any CSEs that are not finished after 60 minutes are signaled
to stop, in which case the associated policy settings might not be fully applied.
Background Refresh of Group Policy
In addition to the initial processing of Group Policy at startup and logon, Group Policy is applied
subsequently in the background on a periodic basis. During a background refresh, a CSE will only
reapply the settings if it detects that a change was made on the server in any of its GPOs or its
list of GPOs.
In addition, software installation and folder redirection processing occurs only during computer
startup or user logon. This is because background processing could cause undesirable results.
For example, in software installation, if an application is no longer assigned, it is removed. If a
user is using the application while Group Policy tries to uninstall it or if an assigned application
upgrade takes place while someone is using it, errors would occur. Although the Scripts CSE is
processed during background refresh, the scripts themselves only run at startup, shutdown,
logon, and logoff, as appropriate.
Periodic Refresh Processing
By default, Group Policy is processed every 90 minutes with a randomized delay of up to 30
minutes for a total maximum refresh interval of up to 120 minutes.
Group Policy can be configured on a per-extension basis so that a particular extension is always
processed during processing of policy, even if the GPOs haven't changed. Policy settings for each
of these extensions are located under Computer Configuration\Administrative
Templates\System\Group Policy.
Loopback Processing
Normal user Group Policy processing specifies that computers located in the Servers
organizational unit have the GPOs A3, A1, A2, A4, and A6 applied (in that order) during computer
startup. Users of the Marketing organizational unit have GPOs A3, A1, A2, and A5 applied (in that
order), regardless of which computer they log on to.
In some cases this processing order might not be what you want. An example is when you do not
want applications that have been assigned or published to the users of the Marketing
organizational unit to be installed while they are logged on to the computers in the Servers
organizational unit. With the Group Policy loopback feature, you can specify two other ways to
retrieve the list of GPOs for any user of the computers in the Servers organizational unit:
Merge mode. In this mode, the computer's GPOs have higher precedence than the user's
GPOs. In this example, the list of GPOs for the computer is A3, A1, A2, A4, and A6, which is
added to the user's list of A3, A1, A2, and A5, resulting in A3, A1, A2, A5, A3, A1, A2, A4, and
A6 (listed in lowest to highest priority).
Replace mode. In this mode, the user's list of GPOs is not gathered. Only the list of
GPOs based on the computer object is used. In this example, the list is A3, A1, A2, A4,
and A6.
The loopback feature can be enabled by using the User Group Policy loopback processing
mode policy setting under Computer Configuration\Administrative Templates\System\Group Policy.
The processing of the loopback feature is implemented in the Group Policy engine. When the
Group Policy engine is about to apply user policy, it looks in the registry for a computer policy,
which specifies which mode user policy should be applied in.
9.4.7 How Group Policy Processing History Is Maintained on the Client Computer
Each time GPOs are processed, a record of all of the GPOs applied to the user or computer is
written to the registry. GPOs applied to the local computer are stored in the following registry
path:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Group
Policy\History
GPOs applied to the currently logged on user are stored in the following registry path:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Group
Policy\History
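A minimal read-only sketch (VBScript) that lists the per-extension history subkeys for the local computer; each subkey name is the GUID of a client-side extension:

Const HKEY_LOCAL_MACHINE = &H80000002
Set reg = GetObject("winmgmts:\\.\root\default:StdRegProv")
keyPath = "Software\Microsoft\Windows\CurrentVersion\Group Policy\History"
reg.EnumKey HKEY_LOCAL_MACHINE, keyPath, subKeys
If IsArray(subKeys) Then
    For Each subKey In subKeys
        WScript.Echo subKey    ' GUID of a client-side extension with recorded history
    Next
End If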
Preferences and Policy Configuration
Manipulating these registry values directly is not recommended. Most of the cases in which you
might need to change the behavior of an extension (such as forcing a CSE to run over a slow
link) are covered by Group Policy settings. These can be found in the Group Policy Object
Editor in the following location:
Computer Configuration\Administrative Templates\System\Group Policy
The behavior can be changed for the following CSEs:
Administrative Templates (Registry-based policy)
Internet Explorer Maintenance
Software Installation
Folder Redirection
Scripts
Security
IP Security
EFS recovery
Disk Quotas
Order of Extension Processing
Administrative Templates policy settings are always processed first. Other extensions are
processed in an indeterminate order.
Policy Application Processes
There are two primary milestones that the Group Policy engine uses for GPO processing:
Creating the list of GPOs targeted at the user or computer.
Invoking the relevant CSEs to process the policy settings relevant to them within the
GPO list.
Group Policy updates are dynamic and occur at specific intervals. If no changes are discovered,
GPOs are not processed; security policy settings, however, are an exception. The client computer
refreshes security policy settings at regular intervals even when the GPOs that contain them have
not changed. A value sets the maximum time that a client can run without reapplying unchanged
security GPOs; by default, this is every 16 hours plus a randomized delay of up to 30 minutes.
Even when GPOs that contain security policy settings do not change, the policy is therefore
reapplied approximately every 16 hours.
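As a hedged sketch of where this interval lives, the value below (in minutes; 960 equals 16 hours) sits under the security client-side extension's GPExtensions registration key; the GUID shown is the well-known identifier of the security CSE, and the value may be absent when the built-in default is in effect:

Const HKEY_LOCAL_MACHINE = &H80000002
Set reg = GetObject("winmgmts:\\.\root\default:StdRegProv")
keyPath = "Software\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\" & _
          "{827D319E-6EAC-11D2-A4EA-00C04F79F83A}"    ' security CSE registration key
reg.GetDWORDValue HKEY_LOCAL_MACHINE, keyPath, "MaxNoGPOListChangesInterval", interval
If Not IsNull(interval) Then
    WScript.Echo "Security policy force-reapply interval: " & interval & " minutes"
End If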
FRS is a multi-master replication service that synchronizes folders between two or more Windows
Server 2003 or Windows 2000 systems. Modified files are queued for replication at the point the
file is closed. In the case of conflicting modifications between two copies of an FRS replica, the
file with the latest modification time will overwrite any other copies. This is referred to as a "last-
writer-wins" model.
FRS replication topology configuration is stored as a combination of child objects of each FRS
replica partner (in the FRS Subscriptions subcontainer) and objects within another hidden
subcontainer of the domain System container. Replication links between systems are maintained
as FRS subscription objects. These objects specify the replica partner and the replication
schedule. It is possible to view the schedule by browsing to an FRS subscription object and
viewing the properties. The replica partner is stored as the object GUID of the computer account
of that partner.
The Sysvol folder is a special case of FRS replication. Active Directory automatically maintains
the subscription objects and their schedules as the directory replication is built and maintained. It
is possible, but not recommended, to modify the properties (for example, the schedule) of the
Sysvol subscription objects manually.
The FRS replication schedule only approximates the directory replication schedule, so it is
possible for the directory-based Group Policy information and the file-based information to get
temporarily out of sync. Because GPO version information is stored in both the Group Policy
container object and the Group Policy template, any discrepancy can be viewed with tools such
as Gpotool.exe and Repadmin.exe.
For those Group Policy extensions that store data in only one data store (either Active Directory
or Sysvol), this is not an issue, and Group Policy is applied as it can be read. Such extensions
include Administrative Templates, Scripts, Folder Redirection, and most of the Security Settings.
For any Group Policy extension that stores data in both storage places (Active Directory and
Sysvol), the extension must properly handle the possibility that the data is unsynchronized. This
is also true for extensions that need multiple objects in a single store to be atomic in nature, since
neither storage location handles transactions.
An example of an extension that stores data in Active Directory and Sysvol is Group Policy
Software installation extension. The .aas files are stored on Sysvol and the Windows Installer
package definition is in Active Directory. If the .aas file exists, but the corresponding Active
Directory components are not present, the software is not installed. If the .aas file is missing, but
the package is known in Active Directory, application installation fails gracefully and will be retried
on the next processing of Group Policy.
The tools used to manage Active Directory and Group Policy, such as GPMC, the Group Policy
Object Editor, and Active Directory Users and Computers all communicate with domain
controllers. If there are several domain controllers available, changes made to objects like users,
computers, organizational units, and GPOs might take time to appear on other domain
controllers. The administrator might see different data depending on the last domain controller on
which changes were made and which domain controller they are currently viewing the data from.
If multiple administrators manage a common GPO, it is recommended that all administrators use
the same domain controller when editing a particular GPO, to avoid collisions in FRS. Domain
Admins can use a policy to specify how Group Policy chooses a domain controller; that is, they
can specify which domain controller option should be used. The Group Policy domain controller
selection policy setting is available in the Administrative Templates node for User Configuration,
in the System\Group Policy subcontainer.
The Resultant Set of Policy (RSoP) snap-in offers administrators one solution. Administrators use
the RSoP snap-in to see how multiple Group Policy objects affect various combinations of users
and computers, or to predict the effect of Group Policy settings on the network.
In planning mode, the RSoP snap-in relies on a domain controller. The domain controller uses the
Resultant Set of Policy Provider to simulate the application of GPOs. This service passes the
GPO settings to virtual client-side extensions on the domain controller. The results of the
simulation are stored in the WMI repository on the domain controller before the information is
passed back to the RSoP snap-in for analysis. It is important to remember that the results
displayed in the RSoP snap-in are not actual Group Policy settings, but simulated Group Policy
based on the settings created using the wizard. If a custom client-side extension exists on a client
but does not exist on the domain controller, any Group Policy settings that this custom client-side
extension might create would not appear in the simulation results.
Planning mode in the RSoP snap-in is useful for planning Group Policy.
Although GPMC provides functionality that subsumes most of the reporting features of RSoP
snap-in, there is some Group Policy information that can only be reported on using RSoP snap-in.
For example, RSoP snap-in lists each GPO from which the displayed setting came as well as any
other lower priority GPOs that attempted to configure settings. Using this information, an
administrator can determine which GPOs are applying a policy setting and which GPOs are not.
In these cases, an administrator can use GPMC to open the RSoP snap-in by electing to view
advanced information about a Group Policy Results or Group Policy Modeling report.
In an ideal environment, administrators are encouraged to use the GPMC features for simulating
Group Policy or determining the effect of Group Policy on a particular user or computer.
The RSoP snap-in is capable of read access to the Active Directory, Sysvol, Event Log, RSoP
infrastructure, and Local GPO on the target computer. Although RSoP is capable of read-only
access to the Active Directory and Sysvol, most of the work of predicting or reporting Group
Policy is done using RPC/COM communication with the RSoP provider, either on the client or
the domain controller.
Domain Controller (Server): In an Active Directory forest, the domain controller is a server that
contains a writable copy of the Active Directory database, participates in Active Directory
replication, and controls access to network resources. GPOs are stored in two parts of domain
controllers: the Active Directory database (sometimes called the Group Policy container) and the
Sysvol (known as the Group Policy template).
Active Directory: Active Directory, the Windows-based directory service, stores information about
objects on a network and makes this information available to users and network administrators.
Administrators link GPOs to Active Directory containers such as sites, domains, and OUs that
include user and computer objects. In this way, policy settings can be targeted to users and
computers throughout the organization.
Sysvol: Sysvol is a shared directory that stores the server copy of the domain's public files, which
are replicated among all domain controllers in the domain. The Sysvol contains the largest part of
a GPO: the Group Policy template, which includes Administrative Template-based policy settings,
security settings, script files, and information regarding applications that are available for software
installation. File Replication Service (FRS) replicates this information throughout the network.
LDAP Protocol: LDAP (Lightweight Directory Access Protocol) is the protocol used by the Active
Directory directory service. The RSoP snap-in uses LDAP for authentication and delegation
checks. The client also uses LDAP to read the directory store on the domain controller.
SMB Protocol: SMB (Server Message Block) protocol is the primary method of file and print
sharing. SMB can also be used for abstractions such as named pipes and mail slots. The RSoP
snap-in and the client use SMB to access the Sysvol on the domain controller.
RPC/COM: RPC (Remote Procedure Call), DCOM (Distributed Component Object Model), and
COM (Component Object Model) enable data exchange between different processes. The
different processes can be on the same computer, on the local area network, or across the
Internet. COM and RPC are used by the RSoP snap-in for communication with the RSoP provider
on the client or domain controller.
WMI: WMI is a management infrastructure that supports the monitoring and controlling of system
resources through a common set of interfaces and provides a logically organized, consistent
model of Windows operation, configuration, and status. WMI makes available data about a target
computer for administrative use. Such data can include hardware and software inventory,
settings, and configuration information. For example, WMI exposes hardware configuration data
such as CPU, memory, disk space, and manufacturer, as well as software configuration data from
the registry, drivers, file system, Active Directory, the Windows Installer service, networking
configuration, and application data. WMI Filtering in Windows Server 2003 allows you to create
queries based on this data. These queries (also called WMI filters) determine which users and
computers receive all of the policy configured in the GPO where you create the filter.
RSoP infrastructure: All Group Policy processing information is collected and stored in a
namespace in WMI. This information, such as the list, content, and logging of processing details
for each GPO, can then be accessed by tools using WMI. In logging mode, the RSoP snap-in
queries the database on the target computer, receives information about the policies, and
displays it in the RSoP snap-in. In planning mode, the RSoP snap-in simulates the application of
policy using the Resultant Set of Policy Provider on a domain controller. This simulates the
application of GPOs and passes them to Group Policy client-side extensions on the domain
controller. The results of this simulation are stored to a local WMI database on the domain
controller before the information is passed back and displayed in the RSoP snap-in.
Event Log: The Event log is a service that records events in various logs. The RSoP snap-in
reads the Event Log on client computers and domain controllers in order to provide information
about error events.
Local Group Policy object: The local Group Policy object (local GPO) is stored on each individual
computer, in the hidden %systemroot%\System32\GroupPolicy directory. Each computer running
Windows 2000, Windows XP Professional, Windows XP 64-Bit Edition, or Windows Server 2003
has exactly one local GPO, regardless of whether the computers are part of an Active Directory
environment. Local GPOs do not support certain extensions, such as Folder Redirection or Group
Policy Software Installation. Local GPOs do support many security settings, but the Security
Settings extension of the Group Policy Object Editor does not support remote management of
local GPOs. Local GPOs are always processed, but can be overridden by domain policy in an
Active Directory environment, because GPOs linked to Active Directory containers have
precedence.
GPResult for Windows 2000 estimates the Group Policy settings that would be applied at a
specific computer. Full documentation for this version of GPResult is available in the readme file
distributed with the tool.
Dcgpofix.exe: Dcgpofix
Category: Dcgpofix ships with Windows Server 2003.
Version compatibility: You can run Dcgpofix only on servers running the Windows Server 2003
family.
Dcgpofix can restore the Default Domain Policy and Default Domain Controllers Policy to their
original state after installation, except for some security-related settings that are impossible to
return to their exact original state. When you run Dcgpofix, you will lose any changes made to
these Group Policy objects. For more information about Dcgpofix, type Dcgpofix /? at the
command line.
This tool should be used as a last-resort disaster-recovery tool. A better solution is to use GPMC
to back up and restore these GPOs.
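For example, to restore both default GPOs (only as a last resort, as noted above), you would run:

dcgpofix /target:both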
GPMonitor.exe: Group Policy Monitor Tool
Category: The Group Policy Monitor tool is included in the Windows Server 2003 Deployment Kit.
Version compatibility: The Group Policy Monitor tool works on computers running Windows XP
and later.
The Group Policy Monitor tool collects Group Policy information at every Group Policy
refresh and sends that information to a centralized location that you specify. You can then use the
Group Policy Monitor user interface (UI) to view the data. The Group Policy Monitor UI can
provide a historical view of policy changes. The UI is also designed to make it easy to navigate
through historical snapshots of data and trace changes. For more information about the Group
Policy Monitor tool, type GPMonitor /? at the command line. You can find full documentation for
the Group Policy Monitor tool in the Windows Server 2003 Deployment Kit Tools.
GPOTool.exe: Group Policy Verification Tool
Category: The Group Policy Verification tool is included in the Windows Server 2003 Deployment
Kit.
Version compatibility: The Group Policy Verification tool works on computers running Windows
2000 and later.
You use the Group Policy Verification tool to check the health of the Group Policy objects
on domain controllers. The tool checks GPOs for consistency on each domain controller in your
domain. The tool also determines whether the policies are valid and displays detailed information
about replicated Group Policy objects (GPOs).
Section 10
The following components are involved when you manage Group Policy with GPMC and the
RSoP snap-in.
Group Policy Management Console (GPMC): GPMC is used for Group Policy management tasks
such as creating GPOs, linking GPOs, security filtering, and manipulating inheritance of GPOs.
You can also back up GPOs to the file system as well as restore GPOs from backups. GPMC
includes features that enable an administrator to predict how GPOs are going to affect the
network as well as to determine how GPOs have actually changed settings on any particular
computer or user. GPMC is capable of read and write access to the Sysvol using the SMB
protocol. It is also capable of read and write access to Active Directory via the LDAP protocol. In
addition, GPMC is capable of read access to the event log and RSoP infrastructure.
Resultant Set of Policy (RSoP) snap-in: The Resultant Set of Policy snap-in is an MMC snap-in
used to determine which policy settings are in effect for a given user or computer, or to predict
the effect of applied policy. The snap-in itself is contained within the same binary as the Group
Policy Object Editor snap-in (gpedit.dll). The user interface is mostly a read-only view of the same
information available in the Group Policy Object Editor. However, there is one important
difference: while the Group Policy Object Editor can show only a single GPO's settings at a time,
the RSoP snap-in shows the cumulative effect of many GPOs. For RSoP functionality it is
recommended to use GPMC, which includes its own integrated RSoP features. The RSoP snap-in
is capable of read access to the Active Directory, Sysvol, Event Log, RSoP infrastructure, and
Local GPO. Although the RSoP snap-in is capable of read-only access to the Active Directory
and Sysvol, most of the work of predicting or reporting Group Policy is done using RPC/COM
communication with the RSoP provider, either on the client or the domain controller.
Domain Controller (Server): In an Active Directory forest, the domain controller is a server that
contains a writable copy of the Active Directory database, participates in Active Directory
replication, and controls access to network resources. GPOs are stored in two parts of domain
controllers: the Active Directory database and the Sysvol share.
Client: In an Active Directory forest, settings from GPOs are applied to clients. GPMC and the
RSoP snap-in query the client to determine how policy has been applied to a particular user or
computer.
Active Directory: Active Directory, the Windows-based directory service, stores information about
objects on a network and makes this information available to users and network administrators.
Administrators link GPOs to Active Directory containers such as sites, domains, and OUs that
include user and computer objects. In this way, policy settings can be targeted to users and
computers throughout the organization.
Sysvol: Sysvol is a shared directory that stores the server copy of the domain's public files, which
are replicated among all domain controllers in the domain. The Sysvol contains the largest part of
a GPO: the Group Policy template (GPT), which includes Administrative Template-based policy
settings, security settings, and script files. File Replication Service (FRS) replicates this
information throughout the network.
RSoP infrastructure: All Group Policy processing information is collected and stored in a Common
Information Model Object Management (CIMOM) database on the local computer. This
information, such as the list of GPOs that have been processed, as well as content and logging of
processing details for each GPO, can then be accessed by tools using Windows Management
Instrumentation (WMI). With Group Policy Results in GPMC, or logging mode for the RSoP
snap-in, the RSoP service is used to query the CIMOM database on the target computer; it
receives information about the policies that were applied and displays the resulting information in
GPMC or the RSoP snap-in. With Group Policy Modeling in GPMC, or planning mode for the
RSoP snap-in, the RSoP service simulates the application of policy using the Group Policy
Directory Access Service (GPDAS) on a domain controller. GPDAS simulates the application of
GPOs and passes them to virtual client-side extensions on the domain controller. The results of
this simulation are stored in a local CIMOM database on the domain controller before the
information is passed back and displayed in either GPMC or the RSoP snap-in.
GPMC can also be used to generate RSoP data that either predicts the cumulative effect of
GPOs on the network, or reports the cumulative effect of GPOs on a particular user or computer.
In addition, the administrator can use GPMC to perform GPO operations never possible before,
like backing up and restoring a GPO, copying a GPO, or even migrating a GPO to another forest.
Reading or generating HTML or XML reports of GPO settings is also possible.
An administrator can also use the Group Policy Object Editor to disable or enable the computer
node, the user node, or both nodes within a GPO.
Scoping GPOs
An administrator can use GPMC to link GPOs to sites, domains, or OUs in the Active Directory.
Administrators must link GPOs to apply settings to users and computers in Active Directory
Containers. Linking GPOs is the primary mechanism by which administrators apply Group Policy
settings.
In addition to linking, an administrator can manipulate permissions on GPOs to manage how
Group Policy applies. Prior to GPMC, an administrator had to manipulate access control entries
(ACEs) manually to modify the scope of the GPO. For example, the administrator
might remove Read and Apply Group Policy from the Authenticated Users group for GPO1.
This effectively disables GPO1, since users in the Authenticated Users group require both Read
and Apply Group Policy permissions to process Group Policy. To apply the settings in GPO1 to
select network users or computers, the administrator would add a new security principal (typically
a security group containing the target users or computers) to the ACL on the GPO and set Read
and Apply Group Policy permissions. This is known as security filtering.
With GPMC, security filtering has been simplified. The administrator adds the security principal to
the GPO, and GPMC automatically sets the Read and Apply Group Policy permissions.
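Because GPMC's COM interfaces expose the same operation, security filtering can also be scripted. The following minimal sketch (VBScript) assumes GPMC is installed and uses hypothetical domain, GPO, and group names:

Set gpm = CreateObject("GPMgmt.GPM")
Set gpmConstants = gpm.GetConstants()
Set gpmDomain = gpm.GetDomain("contoso.com", "", gpmConstants.UseAnyDC)
Set gpmSearch = gpm.CreateSearchCriteria()
gpmSearch.Add gpmConstants.SearchPropertyGPODisplayName, _
    gpmConstants.SearchOpEquals, "Marketing GPO"
Set gpoList = gpmDomain.SearchGPOs(gpmSearch)
If gpoList.Count > 0 Then
    Set gpo = gpoList.Item(1)
    ' Apply Group Policy implies Read; GPMC manages both as a single unit
    Set perm = gpm.CreatePermission("CONTOSO\Marketing Users", _
        gpmConstants.PermGPOApply, False)
    Set secInfo = gpo.GetSecurityInfo()
    secInfo.Add perm
    gpo.SetSecurityInfo secInfo
End If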
Administrators can also use GPMC to link WMI Filters to GPOs. WMI Filters allow an
administrator to dynamically determine the scope of GPOs based on attributes (available through
WMI) of the target computer. A WMI filter consists of one or more queries that are evaluated to be
either true or false against the WMI repository of the target computer. The WMI filter is a separate
object from the GPO in the directory.
The backup function also serves as the export function for GPOs. Backed-up GPOs can be used
in conjunction with either restore or import operations.
Restoring a GPO takes an existing GPO backup and re-instantiates it in the domain. The
purpose of a restore is to reset a specific GPO back to the identical state it was in when it was
backed up. This restoration does not include GPO links. This is because the links are a property
of the container the GPO is linked to, not the GPO itself. Since a restore operation is specific to a
particular GPO, it is based on the GUID and domain of the GPO. Therefore, a restore operation
cannot be used to transfer GPOs across domains.
Importing a GPO allows you to transfer settings from a backed up GPO to an existing GPO. You
can perform this operation within the same domain, across domains, or across forests. This
allows for many interesting capabilities such as staging of a test GPO environment in a lab before
importing into a production environment.
Restoring and Importing a GPO will remove any existing settings already in the target GPO. Only
the settings in the backup will be in the GPO when these operations are complete.
Copying a GPO is similar to an export/import operation, except that the GPO is not saved to a file
system location first. In addition, a copy operation creates a new GPO as part of the operation,
whereas an import uses an existing GPO as its destination.
Group Policy Modeling requires a domain controller running Windows Server 2003, because the
simulation is performed by a service that is only present on Windows Server 2003 domain
controllers. However, with this feature, you can simulate the resultant set of policy for any
computer in the forest, including those running Windows 2000.
GPMC Scripting
The GPMC user interface is based on a set of COM interfaces that accomplish all of the
operations performed by GPMC. These interfaces are available to Windows scripting
technologies like JScript and VBScript, as well as programming languages such as Visual Basic
and VC++. An administrator can use these interfaces to automate many Group Policy
management tasks.
These interfaces are discussed in detail in the GPMC software development kit (SDK) located in
the %programfiles%\gpmc\scripts\gpmc.chm Help file on systems where GPMC has been
installed. The contents of the GPMC SDK are also available in the Platform SDK.
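As a simple illustration of these interfaces, the following sketch (VBScript) lists every GPO in a domain; the domain name is a hypothetical example, and GPMC must be installed for the GPMgmt.GPM object to be available:

Set gpm = CreateObject("GPMgmt.GPM")
Set gpmConstants = gpm.GetConstants()
Set gpmDomain = gpm.GetDomain("contoso.com", "", gpmConstants.UseAnyDC)
Set emptyCriteria = gpm.CreateSearchCriteria()    ' no criteria: match every GPO
Set allGPOs = gpmDomain.SearchGPOs(emptyCriteria)
For Each gpo In allGPOs
    WScript.Echo gpo.DisplayName & "  " & gpo.ID  ' ID is the GPO's GUID
Next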
There is no dependency from the Group Policy perspective on whether a domain is in native
mode or mixed mode.
WMI
WMI queries (also called WMI filters) determine which users and computers receive the policy
configured in a GPO. Group Policy processing information is collected and stored in a Common
Information Model Object Manager (CIMOM) database on the local computer. This information,
such as the list of GPOs, as well as the content and logging of processing details for each GPO,
can then be accessed by tools using WMI.

RSoP infrastructure
With Group Policy Results in GPMC, or logging mode for the RSoP snap-in, the RSoP service is
used to query the CIMOM database on the target computer; it receives information about the
policies and displays it in GPMC or the RSoP snap-in.
With Group Policy Modeling in GPMC, or planning mode for the RSoP snap-in, the RSoP service
simulates the application of policy using the Group Policy Directory Access Service (GPDAS) on
a domain controller. GPDAS simulates the application of GPOs and passes them to virtual
client-side extensions on the domain controller. The results of this simulation are stored in a local
CIMOM database on the domain controller before the information is passed back and displayed
in either GPMC or the RSoP snap-in.

Group Policy Engine
The Group Policy Engine is a framework that handles common functionality across client-side
extensions. GPMC does not communicate directly with the Group Policy Engine.

Client-side extensions
Client-side extensions (CSEs) consist of one or more dynamic-link libraries (DLLs) that are
responsible for implementing Group Policy at the client computer. CSEs typically correspond to
snap-ins: Administrative Templates, Scripts, Security Settings, Software Installation, Folder
Redirection, Remote Installation Services, and Internet Explorer Maintenance. GPMC does not
communicate directly with the client-side extensions.

File system
GPMC can write to the file system of the local machine or any remote machine. GPMC writes to
the file system for GPO operations, such as backups or copies, and for saving HTML/XML
reports.

Event Log
The Event Log is a service that records events in the various logs on the computer. GPMC is
capable of read and write access to the Event Log on client computers and domain controllers.

Local Group Policy object
The local Group Policy object (local GPO) is stored on each individual computer, in the hidden
%systemroot%\System32\GroupPolicy directory. Each computer running Windows 2000,
Windows XP Professional, Windows XP 64-Bit Edition, or Windows Server 2003 has exactly one
local GPO, regardless of whether the computer is part of an Active Directory environment.
GPMC does not offer access to the local GPO.
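Because the logging-mode data described above lives in the CIMOM database, it can be read by any WMI client. The following VBScript is a rough sketch that lists the GPOs recorded by the most recent computer-policy application on the local machine; the root\rsop\computer namespace and the RSOP_GPO class are part of the RSoP WMI schema, but treat the exact property names as assumptions to verify against the Platform SDK.

' Rough sketch: list GPOs recorded in the local RSoP (CIMOM) database.
' Assumes root\rsop\computer holds data from the last computer-policy
' application (Windows XP / Windows Server 2003).
Option Explicit
Dim RsopNamespace, GPOs, GPO

Set RsopNamespace = GetObject("winmgmts:\\.\root\rsop\computer")
Set GPOs = RsopNamespace.ExecQuery("SELECT * FROM RSOP_GPO")

For Each GPO In GPOs
    WScript.Echo GPO.Name   ' display name of each GPO that was applied
Next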
GPMC identifies the GPO by its domain and globally unique identifier (GUID). The purpose of a
restore operation is to return the GPO to its original state, so the restore operation retains the
original GUID even if it is recreating a deleted GPO. This is a key difference between the restore
operation and the import or copy operations. You cannot use restore to transfer GPOs to different
domains or forests. That capability is provided by import and copy.
A special case arises when you restore a deleted GPO that contained software installation
settings. If the client computer has not yet detected that the GPO has been deleted (either
because the user has not logged on again or the computer has not been rebooted since the
deletion of the GPO), and the application was deployed with the option Uninstall this application
when it falls out of the scope of management, then the next time the user logs on:
Published applications that the user has previously installed will be removed.
Assigned applications will be uninstalled before re-installation.
This issue can be avoided if all of the following conditions are met:
You perform the restore on a Windows Server 2003 domain controller instead of
a Windows 2000 domain controller.
The user performing the restore has permissions to re-animate tombstone
objects in the domain.
The time elapsed between deletion and restoration of the GPO does not exceed
the tombstone lifetime specified in Active Directory.
Tombstone re-animation is a new feature of Windows Server 2003 Active Directory. By default,
only Domain Admins and Enterprise Admins have this permission, but you can delegate this right
to additional users at the domain level using the ACL editor.
As a general rule, if you deploy software using Group Policy, it is recommended that you perform
the restoration of GPOs that contain application deployments using a domain controller running
Windows Server 2003 and that you grant the tombstone re-animation right to the users who will
be performing restoration of those GPOs.
Finally, when restoring a GPO that contains software installation settings, if you are using
categories to classify applications, the application in the restored GPO will appear in its original
category only if the category exists at the time of restoration. Note that the category definition is
not part of the GPO.
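As an illustrative sketch, a restore can also be scripted through the GPMC interfaces described earlier. The backup location, backup ID, and domain name below are placeholders, and the flag value passed to RestoreGPO is an assumption (0 for default behavior); consult the GPMC SDK for the exact options.

' Illustrative sketch: restore a GPO from a backup using the GPMC COM
' interfaces. The backup directory, backup ID (GUID), and domain name
' are placeholders.
Option Explicit
Dim GPM, Constants, GPMDomain, BackupDir, Backup, Result

Set GPM = CreateObject("GPMgmt.GPM")
Set Constants = GPM.GetConstants()
Set GPMDomain = GPM.GetDomain("example.com", "", Constants.UseAnyDC)

' Open the backup directory and select a specific backup by its backup ID
Set BackupDir = GPM.GetBackupDir("C:\GPO-Backups")
Set Backup = BackupDir.GetBackup("{00000000-0000-0000-0000-000000000000}")

' Re-instantiate the GPO in the domain; 0 requests default restore behavior
Set Result = GPMDomain.RestoreGPO(Backup, 0)
WScript.Echo "Restored GPO: " & Backup.GPODisplayName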
Import
The Import operation transfers settings into an existing GPO in Active Directory using a backed-
up GPO in the file system location as its source. Import operations can be used to transfer
settings from one GPO to another GPO within the same domain, to a GPO in another domain in
the same forest, or to a GPO in a domain in a different forest. The import operation always places
the backed-up settings into an existing GPO. It erases any pre-existing settings in the destination
GPO. Import does not require trust between the source domain and destination domain.
Therefore, it is useful for transferring settings across forests and domains that don't have trust.
Importing settings into a GPO does not affect its discretionary access control list (DACL), links on
sites, domains, or organizational units to that GPO, or a link to a WMI filter.
When using import to transfer GPO settings to a GPO in a different domain or different forest, you
might want to use a migration table in conjunction with the import operation. A migration table
allows you to facilitate the transfer of references to security groups, users, computers, and UNC
paths in the source GPO to new values in the destination GPO.
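The following VBScript sketches an import that applies a migration table. All names and paths are placeholders, and the Import signature shown (flags, backup, optional migration table) follows the GPMC SDK, though you should verify the flag constants before relying on it.

' Illustrative sketch: import backed-up settings into an existing GPO,
' applying a migration table. All names and paths are placeholders.
Option Explicit
Dim GPM, Constants, GPMDomain, BackupDir, Backup, TargetGPO, MigTable, Result

Set GPM = CreateObject("GPMgmt.GPM")
Set Constants = GPM.GetConstants()
Set GPMDomain = GPM.GetDomain("target.example.com", "", Constants.UseAnyDC)

Set BackupDir = GPM.GetBackupDir("C:\GPO-Backups")
Set Backup = BackupDir.GetBackup("{00000000-0000-0000-0000-000000000000}")

' The destination GPO must already exist; import erases its current settings
Set TargetGPO = GPMDomain.GetGPO("{11111111-1111-1111-1111-111111111111}")

' Load a migration table to remap security principals and UNC paths
Set MigTable = GPM.GetMigrationTable("C:\tables\test-to-prod.migtable")

Set Result = TargetGPO.Import(0, Backup, MigTable)
WScript.Echo "Imported settings into: " & TargetGPO.DisplayName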
Copy
A copy operation allows you to transfer settings from an existing Group Policy object (GPO) in
Active Directory directly into a new GPO. The new GPO created during the copy operation is
given a new globally unique identifier (GUID) and is unlinked. You can use a copy operation to
transfer settings to a new GPO in the same domain, another domain in the same forest, or a
domain in another forest. Because a copy operation uses an existing GPO in Active Directory as
its source, trust is required between the source and destination domains. Copy operations are
suited for moving Group Policy between production environments, and for migrating Group Policy
that has been tested in a test domain or forest to a production environment, as long as there is
trust between the source and destination domains.
Copying is similar to backing up followed by importing, but there is no intermediate file system
step, and a new GPO is created as part of the copy operation. The import operation, in contrast
with the copy operation, does not require trust.
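A cross-domain copy can be sketched in the same style; CopyTo creates the new GPO in the destination domain. The names below are placeholders, and the exact CopyTo parameters (flags, destination domain, new display name) should be checked against the GPMC SDK.

' Illustrative sketch: copy a GPO from a source domain to a new GPO in a
' trusted destination domain. All names are placeholders.
Option Explicit
Dim GPM, Constants, SourceDomain, DestDomain, SourceGPO, Result

Set GPM = CreateObject("GPMgmt.GPM")
Set Constants = GPM.GetConstants()

Set SourceDomain = GPM.GetDomain("test.example.com", "", Constants.UseAnyDC)
Set DestDomain = GPM.GetDomain("prod.example.com", "", Constants.UseAnyDC)

Set SourceGPO = SourceDomain.GetGPO("{22222222-2222-2222-2222-222222222222}")

' 0 requests default copy behavior (the new GPO gets the default DACL)
Set Result = SourceGPO.CopyTo(0, DestDomain, "Copied Desktop Policy")
WScript.Echo "Copy complete."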
10.8.3 Specifying the discretionary access control list (DACL) on the new GPO
You have two options for specifying the DACL to use on the new GPO:
Use the default permissions that are used when creating new GPOs.
Preserve the DACL from the source GPO. For this option, you can specify a migration
table, used to facilitate the transfer of references to security groups, users, computers, and
UNC paths in the source GPO to new values in the destination GPO. If you specify a
migration table for the copy operation, and you choose the option to preserve the DACL from
the source GPO, the migration table will apply to any security principals in the DACL.
Two types of references in a GPO frequently need to be updated when transferring settings
across domains:
Security principals. Users, groups, and computers (security principals) can be referenced
in the settings of the GPO, in the ACL on the GPO itself, and in the ACL on any software
installation settings in the GPO. Security principals do not transfer well for several reasons.
For instance, domain local groups are invalid in external domains, though other groups are
valid if there is trust in place. Even with a trust between domains, it may not always be
appropriate to use the exact same group in the new domain. If there is no trust, then none of
the security principals in the source domain will be available in the destination domain.
UNC paths. When you are migrating GPOs across domains that do not have trust, such
as from test to production environments, users in the destination domain may not have
access to paths in the source domain.
The following items can contain security principals and can be modified using a migration table:
Security policy settings of the following types:
User rights assignment
Restricted groups
Services
File system
Registry
Advanced folder redirection policies.
The GPO discretionary access control list (DACL), if you choose to preserve it during a
copy operation.
The DACL on software installation objects. This is only preserved if the option to copy the
GPO DACL is specified. Otherwise the default DACL is used.
Note
Security Principals referenced in Administrative Templates settings cannot be
migrated using a migration table.
The following items can contain UNC paths and can be modified using a migration table:
Folder redirection policies.
Software installation policies (for software distribution points).
Pointers to scripts deployed through Group Policy (such as startup and shutdown scripts)
that are stored outside the GPO. Note that the script itself is not copied as part of the GPO
copy operation, unless the script is stored inside the source GPO.
Note
Built-in groups such as "Administrators" and "Account Operators" have the same SID in
all domains. If references to built-in groups are stored in the GPO using their resolved format
(based on the underlying SID), they cannot be mapped using migration tables. However, if
references to built-in groups are stored as free text, you can map them using a migration
table, and in this case, you must specify source type="Free Text or SID."
Source Reference
A source reference is the specific name of the user, computer, group, or UNC path referenced in
the source GPO. For example, \\server1\publicshare is a specific example of a UNC path. The
type of the source reference must match whatever source type has been specified in the
migration table.
Source Reference Syntax
UPN: someone@example.com
SAM: example\someone
DNS: example.com\someone
Free text: someone (must be specified as the unknown type)
SID: S-1-11-111111111-111111111-1111111111-1112 (must be specified as the unknown type)
Destination Name
The destination name specifies how the name of the user, computer, group, or UNC path in the
source GPO should be treated upon transfer to the destination GPO.
Destination Name Options

Same as source
Copies the source value without changes. Equivalent to not putting the source value in the
migration table at all.
MTE value: <Same As Source>
XML tag: <DestinationSameAsSource />

None
Removes the user, computer, or group from the GPO. This option cannot be used with UNC
paths.
MTE value: <None>
XML tag: <DestinationNone />

Map by relative name
Maps the source value to the same relative name in the destination domain; for example,
SourceDomain\Group1 maps to TargetDomain\Group1. This option cannot be used with UNC
paths.
MTE value: <Map by Relative name>
XML tag: <DestinationByRelativeName />

Explicitly specify value
In the destination GPO, replaces the source value with the exact literal value you specify.
MTE value: <exact name to use in target>
XML tag: <Destination>Exact Name to use in Destination</Destination>
Note
Administrators can specify security principals for destination names using any of the
syntactical formats described in source references, with the exception of a raw SID. A raw
SID can never be used for the destination name.
Default Entries
Valid migration table (migtable) files must contain the following entries, which identify the
namespace used by migration tables. If you are creating migration tables by hand, you need to
add these entries yourself; otherwise, MTE adds them for you.
<?xml version="1.0" encoding="utf-16"?>
<MigrationTable xmlns="http://www.microsoft.com/GroupPolicy/GPOOperations/MigrationTable">
Each mapping entry appears in the following format:
<Mapping>
<Type>Type</Type>
<Source>Source</Source>
<Destination>Destination</Destination>
</Mapping>
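Putting the pieces together, a complete hand-written migration table might look like the following. The entries are hypothetical, and the <Type> tokens shown (User, UNCPath) are assumptions about the migtable schema; MTE generates the correct tokens for you if you build the table there.

<?xml version="1.0" encoding="utf-16"?>
<MigrationTable xmlns="http://www.microsoft.com/GroupPolicy/GPOOperations/MigrationTable">
    <Mapping>
        <Type>User</Type>
        <Source>someone@test.example.com</Source>
        <Destination>someone@prod.example.com</Destination>
    </Mapping>
    <Mapping>
        <Type>UNCPath</Type>
        <Source>\\testserver\share</Source>
        <Destination>\\prodserver\share</Destination>
    </Mapping>
</MigrationTable>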
The Migration Table Editor
The Migration Table Editor (MTE) is provided with Group Policy Management Console (GPMC) to
facilitate the editing of migration tables. Migration tables are used for copying or importing Group
Policy objects (GPOs) from one domain to another, in cases where the GPOs include domain-
specific information that must be updated during copy or import.
MTE displays three columns of information:
Migration Table Editor columns
Source Name: The name of the user, computer, group, or UNC path referenced in the source
GPO.
Source Type: The type of the reference named in the Source Name column; for example,
Domain Global Group.
Destination Name: The new value after copy or import to the new GPO, or the method used to
calculate the new value.
Note that you do not specify Destination Type. The Destination Type is determined during the
import or copy operation by checking the actual reference identified in the Destination Name.
Migration Table Editor features
You can edit text fields using Cut, Copy, and Paste options, for example when you right-
click an item in the Source Name column.
You can add computers, users, and groups, for source and destination names, by using a
Browse dialog box.
You can fill in Source Type fields by using a drop-down menu.
MTE provides automatic syntax checking to make sure the XML file is properly populated,
and also ensures that only compatible entries are entered in the table. For example,
\\server1\share1 is not a valid security group name, and server1 is not a valid UNC path, so
MTE prompts the user to correct those entries.
In addition to basic syntax checking, you can validate the overall migration table using the
Validate Table option in MTE. The Validate Table option checks:
That destination security principals and UNC paths exist. This is important to know
before you copy or import a GPO, because if there are unresolvable entries in the
migration table, the copy or import operation might fail.
That source entries with UNC paths do not have destinations of Map by relative name
or None, since those options cannot be used with UNC paths.
That the type of each destination entry in the table matches the actual type of the
corresponding object in the destination domain.
You can auto-populate a migration table by scanning one or more GPOs or backups to
extract all references to security principals and UNC paths, and then enter these items into
the table as source name entries. This capability is provided by the Populate from GPO and
Populate from Backup options available on the Tools menu. To complete the migration
table, you only need to adjust the destination values. By default, the destination name for
each entry will be set to Same as source when you use either of the Populate options.
Note
With either auto-populate option, when the source GPO or backup contains
references to built-in groups such as Administrators or Backup Operators, these
references will not be entered into the migration table if the references are stored in the
source GPO or backup in their resolved format [based on security identifier (SID)]. This is
because built-in groups have the same SID in all domains and therefore cannot be
mapped to new values using a migration table. However, if references to built-in groups
are stored as free text, they will be captured by the auto-populate options. In this case,
you can map them as long as you specify the source type as "Free Text or SID."
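The auto-populate behavior is also reachable from script. The following is a rough sketch under the assumption that the IGPMMigrationTable methods Add, UpdateDestination, and Save behave as described in the GPMC SDK; all names and paths are placeholders.

' Rough sketch: build a migration table from an existing GPO in script,
' mirroring MTE's Populate from GPO option. Names are placeholders.
Option Explicit
Dim GPM, Constants, GPMDomain, GPO, MigTable

Set GPM = CreateObject("GPMgmt.GPM")
Set Constants = GPM.GetConstants()
Set GPMDomain = GPM.GetDomain("test.example.com", "", Constants.UseAnyDC)
Set GPO = GPMDomain.GetGPO("{22222222-2222-2222-2222-222222222222}")

' Scan the GPO for security principals and UNC paths; each reference
' becomes a source entry with the destination set to Same As Source
Set MigTable = GPM.CreateMigrationTable()
MigTable.Add 0, GPO

' Adjust one destination, then save the table for later copy/import use
MigTable.UpdateDestination "\\testserver\share", "\\prodserver\share"
MigTable.Save "C:\tables\test-to-prod.migtable"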
Conf.adm
Settings to configure NetMeeting v3. Applies to Windows 2000 or Windows Server 2003. Loaded
by default. Note: This tool is not available on Windows XP 64-Bit Edition and the 64-bit versions
of the Windows Server 2003 family.

Wmplayer.adm
Settings to configure Windows Media Player. Applies to Windows XP, Windows Server 2003.
Loaded by default. Note: This tool is not available on Windows XP 64-Bit Edition and the 64-bit
versions of the Windows Server 2003 family.

Wuau.adm
Settings to configure Windows Update. Applies to Windows 2000 SP3, Windows XP SP1,
Windows Server 2003. Loaded by default.