part of the operating system. Given the complexity of IPv6, it will undoubtedly take some
time before it is adopted widely, but understanding that the support exists is the first step
toward deploying it widely.
Defining the Structure of IPv6
To say that IPv6 is complicated is an understatement. Attempting to understand IPv4 has
been difficult enough for network engineers; throw in hexadecimal 128-bit addresses, and
life becomes much more interesting. At a minimum, however, the basics of IPv6 must be
understood, because future networks will rely on the protocol more and more as time goes by.
IPv6 was designed to solve many of the problems that persist on the modern Internet.
The most notable areas that IPv6 improved upon are the following:
. Vastly improved address space—The difference between the available address
space in IPv4 and IPv6 is staggering. Without taking into account loss because
of subnetting and other factors, IPv4 can support up to 4,294,967,296 (2^32) nodes.
IPv6, on the other hand, supports up to
340,282,366,920,938,463,463,374,607,431,768,211,456 (2^128) nodes. Even taking
into account IP addresses reserved for overhead, the IPv6 authors were obviously
thinking ahead and wanted to make sure that they wouldn’t run out of space again.
. Improved network headers—The header for IPv6 packets has been streamlined,
standardized in size, and optimized. To illustrate, even though the address is four
times as long as an IPv4 address, the header is only twice the size. In addition, by
having a standardized header size, routers can more efficiently handle IPv6 traffic
than they could with IPv4.
. Native support for automatic address configuration—In environments where
manual addressing of clients is not supported or desired, automatic configuration of
IPv6 addresses on clients is natively built in to the protocol. This technology is the
IPv6 equivalent of the Automatic Private IP Addressing (APIPA) feature added to
Windows for IPv4 addresses.
. Integrated support for IPSec and QoS—IPv6 contains native support for IPSec
encryption technologies and Quality of Service (QoS) network traffic optimization
approaches, improving their functionality and expanding their capabilities.
Understanding IPv6 Addressing
An IPv6 address, as previously mentioned, is 128 bits long, as compared with IPv4’s 32-bit
addresses. The address itself uses hexadecimal format to shorten the nonbinary written
form. Take, for example, the following 128-bit IPv6 address written in binary:
1111111010000000000000000000000000000000000000000000000000000000
0000001000001100001010011111111111111110010001000111111000111111
The first step in creating the nonbinary form of the address is to divide the number into
16-bit values, as follows:
1111111010000000 0000000000000000
0000000000000000 0000000000000000
0000001000001100 0010100111111111
1111111001000100 0111111000111111
Each 16-bit value is then converted to hexadecimal format to produce the IPv6 address:
FE80:0000:0000:0000:020C:29FF:FE44:7E3F
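The conversion above is mechanical enough to script. The following minimal Python sketch walks through the same example address, converting each 16-bit binary group to four hexadecimal digits:

```python
# Convert a 128-bit binary string into colon-separated hexadecimal
# IPv6 notation, one 16-bit group at a time.
bits = (
    "1111111010000000" "0000000000000000"
    "0000000000000000" "0000000000000000"
    "0000001000001100" "0010100111111111"
    "1111111001000100" "0111111000111111"
)

# Split into eight 16-bit groups and format each as 4 hex digits.
groups = [bits[i:i + 16] for i in range(0, 128, 16)]
address = ":".join(f"{int(g, 2):04X}" for g in groups)
print(address)  # FE80:0000:0000:0000:020C:29FF:FE44:7E3F
```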
Luckily, the authors of IPv6 included ways of writing IPv6 addresses in shorthand. Leading
zeros within each 16-bit group can be dropped; for example, in the address listed
previously, the 020C value becomes simply 20C when abbreviated. In addition, one
continuous run of all-zero groups can be replaced with a double colon. The double colon
can appear only once in an address, but it can greatly simplify the overall address. The
example used previously then becomes the following:
FE80::20C:29FF:FE44:7E3F
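These shorthand rules are also codified in Python’s standard ipaddress module, which can serve as a quick sanity check when abbreviating addresses by hand. A sketch using the example address:

```python
import ipaddress

# The ipaddress module applies the same shorthand rules: leading zeros
# are dropped and the longest run of zero groups collapses to "::".
addr = ipaddress.IPv6Address("FE80:0000:0000:0000:020C:29FF:FE44:7E3F")
print(addr.compressed)  # fe80::20c:29ff:fe44:7e3f
print(addr.exploded)    # fe80:0000:0000:0000:020c:29ff:fe44:7e3f
```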
NOTE
It is futile to attempt to memorize IPv6 addresses, and for most people, converting
between hexadecimal and decimal formats is best accomplished with a calculator. This
has proven to be one of the disadvantages of IPv6 addressing for many administrators.
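That said, the conversion itself is trivial to script rather than do by hand; for example, in Python:

```python
# Convert one 16-bit hexadecimal group to decimal and back.
print(int("FE80", 16))  # 65152
print(f"{65152:X}")     # FE80
```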
CHAPTER 7
Active Directory Infrastructure
IPv6 addresses operate in much the same way as IPv4 addresses, with the network portion
indicated by the leftmost values and the individual interface identified by the values on
the right. By following the same principles as IPv4, a better understanding of IPv6 can
be achieved.
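This split between network prefix and interface identifier can be sketched with Python’s standard ipaddress module; the /64 prefix length used here is an assumption, chosen because it is the typical split for IPv6 subnets:

```python
import ipaddress

# For a typical /64, the first 64 bits identify the network and
# the last 64 bits identify the interface (the "host" portion).
iface = ipaddress.IPv6Interface("fe80::20c:29ff:fe44:7e3f/64")
print(iface.network)  # fe80::/64

# Mask out the low 64 bits to recover the interface identifier.
interface_id = int(iface.ip) & ((1 << 64) - 1)
print(f"{interface_id:016x}")  # 020c29fffe447e3f
```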
Migrating to IPv6
The migration to IPv6 has been, and will continue to be, a slow and gradual process. In
addition, support for IPv4 during and after a migration must still be maintained for a
considerable period of time. It is consequently important to understand the tools and
techniques available to keep both IPv4 and IPv6 infrastructures in place during a
migration process.
Even though IPv6 is installed by default on Windows Server 2008 R2, IPv4 support
remains. This allows for a period of time in which both protocols are supported. After
migrating completely to IPv6, however, connectivity across IPv4-only networks (the
Internet, for example) must still be maintained. This support can be accomplished
through the deployment of IPv6 tunneling technologies, which carry IPv6 traffic across
IPv4 infrastructure.
Windows Server 2008 R2 includes two separate tunneling technologies. The
first, the Intrasite Automatic Tunnel Addressing Protocol (ISATAP), allows for
intrasite tunnels to be created between pools of IPv6 connectivity internally in an organi-
zation. The second technology is known as 6to4, which provides for automatic intersite
tunnels between IPv6 nodes on disparate networks, such as across the Internet. Deploying
one or both of these technologies is a must in the initial stages of IPv6 industry adoption.
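For illustration, the 6to4 scheme derives a site’s /48 IPv6 prefix directly from its public IPv4 address (2002:WWXX:YYZZ::/48, where WWXX:YYZZ is the hexadecimal form of the IPv4 address). The helper name below is hypothetical; the sketch uses only Python’s standard ipaddress module:

```python
import ipaddress

def six_to_four_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 prefix (2002:WWXX:YYZZ::/48) for an
    IPv4 address, per the 6to4 addressing scheme."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # Start from 2002::/16 and place the 32-bit IPv4 address
    # in bits 16..47 of the 128-bit value.
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(six_to_four_prefix("192.0.2.1"))  # 2002:c000:201::/48
```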
Making the Leap to IPv6
Understanding a new protocol implementation is not at the top of most people’s wish
lists. In many cases, benefits such as improved routing, support for IPSec, the
elimination of NAT requirements, and so on are not enough to convince organizations to
make the change.
The process of change is inevitable, however, as the number of available addresses in
the IPv4 space decreases. Consequently, it’s good to know that Windows Server 2008 R2 is
well prepared for the eventual adoption of IPv6.
Detailing Real-World Replication Designs
Site topology in Windows Server 2008 R2’s AD DS has been engineered to be adaptable to
network environments of all shapes and sizes. Because so many WAN topologies exist, a
correspondingly large number of site topologies can be designed to match the WAN
environment. Despite the variations, several common site topologies are implemented,
roughly following the two design models detailed in the following sections.
These real-world models detail how the Windows Server 2008 R2 AD site topology can be
used effectively.
Viewing a Hub-and-Spoke Replication Design
CompanyA is a glass manufacturer with a central factory and headquarters located in
Leuven, Belgium. Four smaller manufacturing facilities are located in Marseille, Brussels,
Amsterdam, and Krakow. WAN traffic follows a typical hub-and-spoke pattern, as
diagrammed in Figure 7.9.
FIGURE 7.9 CompanyA WAN diagram (hub-and-spoke links from Leuven to Brussels,
Amsterdam, Marseille, and Krakow at speeds ranging from 128Kbps to 512Kbps).
CompanyA decided to deploy Windows Server 2008 R2 to all its branch locations and allo-
cated several domain controllers for each location. Sites in AD DS were designated for each
major location within the company and given names to match their physical location.
Site links were created to correspond with the WAN link locations, and their replication
schedules were closely tied with WAN utilization levels on the links themselves. The result
was a Windows Server 2008 R2 AD DS site diagram that looks similar to Figure 7.10.
FIGURE 7.10 CompanyA site topology (sites for Leuven, Brussels, Amsterdam, Marseille,
and Krakow, connected by Leuven-Brussels, Leuven-Amsterdam, Leuven-Marseille, and
Leuven-Krakow site links; the PDC emulator resides in Leuven).
Both domain controllers in each site were designated as preferred bridgehead servers to
lessen the replication load on the global catalog servers in the remote sites. However, the
PDC emulator in the main site was left off the list of preferred bridgehead servers to lessen
the load on that server. Site link bridging was kept activated because there was no specific
need to turn off this functionality.
This design left CompanyA with a relatively simple but robust replication model that it
can easily modify at a future time as WAN infrastructure changes.
Outlining Decentralized Replication Design
CompanyB is a mining and mineral extraction corporation that has central locations in
Duluth, Charleston, and Cheyenne. Several branch locations are distributed across the
continental United States. Its WAN diagram utilizes multiple WAN links, with various
connection speeds, as diagrammed in Figure 7.11.
CompanyB recently implemented Windows Server 2008 R2 AD DS across its infrastructure.
The three main locations consist of five AD DS domain controllers and two global catalog
servers. The smaller sites utilize one or two domain controllers for each site, depending on
the size. Each server setup in the remote sites was installed using the Install from Media
option because the WAN links were not robust enough to handle the site traffic that a full
dcpromo operation would involve.
FIGURE 7.11 CompanyB WAN diagram (Duluth, Charleston, and Cheyenne connected by T1
links, with branch sites such as Thunder Bay, Hibbing, Ely, Harrisburg, Billings,
Cumberland, Casper, and Denver attached over 64Kbps to 256Kbps links).
A site link design scheme, like the one shown in Figure 7.12, was chosen to take into
account the multiple routes that the WAN topology provides. This design scheme provides
for a degree of redundancy as well, because replication traffic could continue to succeed
even if one of the major WAN links was down.
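Site-link route selection can be illustrated with a small sketch: Active Directory replication prefers the route with the lowest total site-link cost, which is essentially a shortest-path computation. The site names and costs below are hypothetical, loosely echoing the values in this design rather than CompanyB’s actual configuration:

```python
import heapq

# Hypothetical site-link costs (lower cost = preferred link).
links = {
    "Duluth":     {"Charleston": 5, "Cheyenne": 10, "Ely": 15},
    "Charleston": {"Duluth": 5, "Cheyenne": 5},
    "Cheyenne":   {"Duluth": 10, "Charleston": 5, "Casper": 15},
    "Ely":        {"Duluth": 15},
    "Casper":     {"Cheyenne": 15},
}

def cheapest_cost(src, dst):
    """Dijkstra's algorithm: total cost of the cheapest replication
    route between two sites, mimicking lowest-cost path selection."""
    heap, seen = [(0, src)], set()
    while heap:
        cost, site = heapq.heappop(heap)
        if site == dst:
            return cost
        if site in seen:
            continue
        seen.add(site)
        for nbr, c in links.get(site, {}).items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + c, nbr))
    return None

print(cheapest_cost("Ely", "Charleston"))  # 20 (Ely -> Duluth -> Charleston)
```

If the Duluth-Charleston link fails, replication can still reach Charleston through Cheyenne, which is the redundancy the multiple site links provide.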
Each smaller site was designated to cache universal group membership because bandwidth
was at a minimum and CompanyB wanted to reduce replication traffic to the lowest levels
possible, while keeping user logons and directory access prompt. In addition, traffic on the
site links to the smaller sites was scheduled to occur only at hourly intervals in the evening
so that it did not interfere with regular WAN traffic during business hours.
Each domain controller in the smaller sites was designated as a preferred bridgehead
server. In the larger sites, three domain controllers with extra processor capacity were
designated as the preferred bridgehead servers for their respective sites to off-load the extra
processing load from the other domain controllers in those sites.
FIGURE 7.12 CompanyB site topology (sites connected by multiple site links with costs
ranging from 5 to 20; the Duluth, Charleston, and Cheyenne sites each contain global
catalog servers).
This design left CompanyB with a robust method of throttling replication traffic to its
slower WAN links while at the same time maintaining the distributed directory service
environment that AD provides.
Deploying Read-Only Domain Controllers (RODCs)
A new concept in Windows Server 2008 R2 is the Read-Only Domain Controller (RODC)
Server role. RODCs, as their name implies, hold read-only copies of forest objects in their
directory partitions. This role was created to fill the needs of branch office or remote site
locations, where physical security might not be optimal and storing a read/write copy of
directory information is ill-advised.
Understanding the Need for RODCs