Exam Ref 70-411 Administering Windows Server 2012 R2 (2014)
Chapter 2. Configure file and print services
This chapter covers the essential server functionality of file services. Despite the title, print services are not covered in this exam; they are covered in Exam 70-410. This chapter covers the advanced file services of Distributed File System (DFS), the File Server Resource Manager (FSRM), encryption, and advanced auditing policies.
Objectives in this chapter:
Objective 2.1: Configure Distributed File System (DFS)
Objective 2.2: Configure File Server Resource Manager (FSRM)
Objective 2.3: Configure file and disk encryption
Objective 2.4: Configure advanced audit policies
Objective 2.1: Configure Distributed File System (DFS)
Windows Server 2012 R2 Distributed File System (DFS) provides a simplified view of file resources across multiple servers and sites while enabling efficient replication of folder contents between servers. Windows PowerShell support for DFS Namespaces (DFS-N) was added in Windows Server 2012 and added for DFS Replication (DFS-R) in Windows Server 2012 R2.
This objective covers how to:
Install and configure DFS Namespaces (DFS-N)
Configure DFS Replication (DFS-R) targets
Configure replication scheduling
Configure Remote Differential Compression (RDC) settings
Configure staging
Configure fault tolerance
Clone a DFS database
Recover DFS databases
Optimize DFS Replication
Installing and configuring DFS Namespaces (DFS-N)
Before you can use DFS-N, you need to install the role on a server. For DFS-R, you have to install it on at least two servers. These servers must be part of an Active Directory Domain Services (AD DS) domain if the DFS-N is AD-integrated, and you must be a member of the Domain Admins group to install the DFS role and role services. You can install DFS-N in stand-alone mode, which is required if you want to install DFS-N on a cluster.
DFS roles can be installed on a Windows Server 2012 R2 Server Core installation. DFS-N management can then be done locally or remotely with the Windows PowerShell DFSN module; DFS-R management is done with the DFSR module. Remote administration can also be performed by using the GUI administration console, which can be run from within Server Manager or by opening Dfsmgmt.msc directly.
A Windows Server can host multiple DFS namespaces, depending on the version of Windows Server it runs. Table 2-1 lists the Windows Server versions and the DFS-N support they provide.
TABLE 2-1 Windows Server versions and their DFS-N support
Here are additional notes for stand-alone namespace servers:
They must contain an NTFS volume to host the namespace.
They can be a stand-alone server, a member server, or a domain controller.
They can be hosted by a failover cluster to increase availability.
Here are additional notes for domain-based namespaces:
They must contain an NTFS file system volume to host the namespace.
They can be a member server or a domain controller in the same domain in which the namespace is configured.
They can use multiple namespace servers to increase availability.
The namespace can’t be a clustered resource in a failover cluster, but can be configured on an individual cluster node as long as it uses only local resources and nonshared storage.
Exam Tip
Most Microsoft exam questions focus on core understanding of the concepts rather than specific knowledge that can be learned by rote memory. However, the correct answer can often be constrained by very specific knowledge about which versions of Windows Server support which specific features.
Installing DFS-N by using Server Manager
You can install the DFS-N role by using the Add Roles And Features Wizard in Server Manager and following these steps:
1. Select the Role-Based Or Feature-Based Installation option.
2. On the Select Server Roles page, select DFS Namespaces. If it will be a replicated DFS-N, select DFS Replication as well, as shown in Figure 2-1.
FIGURE 2-1 The Select Server Roles page of the Add Roles And Features Wizard
3. In the Add Features That Are Required For DFS Namespaces dialog box, accept the default by clicking Add Features.
4. Click Next twice and then click Install to install DFS-N (and DFS-R if you selected that role). In most cases, a reboot is not required.
Installing DFS-N by using Windows PowerShell
To install the DFS-N and DFS-R roles on a Windows Server, run the following Windows PowerShell command:
Install-WindowsFeature -Name FS-DFS-Namespace,FS-DFS-Replication -IncludeManagementTools
If you want only the DFS-N role, simply eliminate FS-DFS-Replication from the preceding command.
Creating a DFS Namespace
You create a DFS-N by using either the DFS Management console or the Windows PowerShell DFSN module. To create a new DFS-N by using the DFS Management console, follow these steps:
1. Select Namespaces in the DFS Management console and then click New Namespace on the Actions menu to open the New Namespace Wizard.
2. On the Namespace Server page, enter the name of the server that will host the namespace (see Figure 2-2).
FIGURE 2-2 The Namespace Server page of the New Namespace Wizard
3. Click Next and enter a name for the new namespace.
4. Click Edit Settings to change the default local path to the namespace and to set the shared folder permissions, as shown in Figure 2-3. Click OK to close the Edit Settings box, and then click Next.
FIGURE 2-3 The Edit Settings dialog box of the New Namespace Wizard
5. On the Namespace Type page, choose whether the namespace will be a domain-based namespace or a stand-alone namespace. For domain-based namespaces, choose whether it will be Windows Server 2008 Mode.
Exam Tip
When creating a domain-based namespace, if you don’t create it in Windows Server 2008 Mode, the namespace won’t support access-based enumeration. The minimum Domain functional level for Windows Server 2008 Mode DFS-N is Windows Server 2008.
6. Click Next, then click Create, and then click Close to complete creating the namespace.
To create a namespace by using Windows PowerShell, use the New-DfsnRoot cmdlet. To create a new domain-based DFS-N in Windows Server 2008 Mode that has a share path of “\\TreyResearch\Public” and a target path of “\\Trey-dc-02\Public”, use this command:
New-DfsnRoot -TargetPath \\trey-dc-02\Public `
-Path \\TreyResearch\Public `
-Type DomainV2 `
-Description "Central source for Publicly visible files"
Note: Windows PowerShell and Test-Path
When using the preceding command, or any other command that points to a shared folder, the folder must already exist; Windows PowerShell will not create it for you. Use the Test-Path cmdlet to test whether a path exists without generating an error. If the path exists, Test-Path returns $True; if it doesn’t, it returns $False.
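A quick sketch of this check before creating the namespace from the earlier example (the share and namespace paths are the hypothetical ones used above):

```powershell
# Create the namespace root only if the target shared folder already exists;
# Test-Path returns $True or $False rather than throwing an error.
if (Test-Path -Path "\\trey-dc-02\Public") {
    New-DfsnRoot -TargetPath "\\trey-dc-02\Public" `
                 -Path "\\TreyResearch\Public" `
                 -Type DomainV2
} else {
    Write-Warning "Create and share \\trey-dc-02\Public before creating the DFS-N root."
}
```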
For a full list of DFS-N cmdlets, use the following command:
Get-Command -Module DFSN
Exam Tip
If you look at the number of parameters available for the DFS-N and DFS-R cmdlets, you can see several ways to tweak each command, which makes them a tempting target for the exam question writer. You shouldn’t attempt to memorize every possible parameter, but you should try to understand what options are available and what they mean so that you’ll be able to recognize when an incorrect answer choice is leading you astray.
Adding a DFS-N folder
You can add a folder to an existing DFS-N root by using the DFS Management console or by using Windows PowerShell. To add a folder to an existing DFS-N root, follow these steps:
1. Expand the Namespaces section of the DFS Management console and select the root for the new folder.
2. Click New Folder in the Actions pane to open up the New Folder dialog box shown in Figure 2-4.
FIGURE 2-4 The New Folder dialog box of the DFS Management console
3. Enter a name for the folder and click Add to open the Add Folder Target dialog box. Enter the shared folder to use in the Path To Folder Target box or use the Browse button to open the Browse For Shared Folders dialog box shown in Figure 2-5.
FIGURE 2-5 The Browse For Shared Folders dialog box
4. You can browse for the server or use the local server. You can create a new shared folder or use an existing one.
5. You can add multiple folder targets to an existing DFS-N root.
To create a new DFS-N folder with Windows PowerShell, use the New-DFSNFolder cmdlet. Here is an example:
New-DFSNFolder -Path \\TreyResearch.net\Public\Videos `
-TargetPath \\Trey-srv-12\Videos `
-Description "Corporate Training and Marketing Videos"
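To add an additional folder target (step 5 of the preceding procedure) from Windows PowerShell, the DFSN module provides the New-DfsnFolderTarget cmdlet. A sketch, assuming a hypothetical second server, Trey-srv-14, that hosts a copy of the same share:

```powershell
# Add a second folder target so the Videos folder is served from two servers.
New-DfsnFolderTarget -Path "\\TreyResearch.net\Public\Videos" `
                     -TargetPath "\\Trey-srv-14\Videos"
```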
Changing the properties of a DFS-N
You can modify the properties of a DFS-N, including delegating management permission, changing the cache duration and cost ordering, and setting polling optimization. These properties can be changed in the DFS Management console or by using the Windows PowerShell DFSN module.
For example, you can change a DFS-N by enabling access-based enumeration. You can do this in the DFS Management console by right-clicking the namespace, and choosing Properties from the menu. Then select Enable Access-based Enumeration For This Namespace on the Advanced tab and click OK.
To set access-based enumeration for the \\TreyResearch.net\Public namespace, use the following command:
Set-DfsnRoot -Path \\TreyResearch.net\Public -EnableAccessBasedEnumeration $True
Properties that you can set include:
Site costing
In-site referrals
Access-based enumeration
Root scalability
Target failback
Description
State
Time to Live
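Several of these properties can be set in a single Set-DfsnRoot call. A sketch using the namespace from the earlier examples (the parameter values are illustrative, not recommendations):

```powershell
# Enable site costing and target failback, and set a 600-second referral TTL.
Set-DfsnRoot -Path "\\TreyResearch.net\Public" `
             -EnableSiteCosting $True `
             -EnableTargetFailback $True `
             -TimeToLiveSec 600
```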
Important: Refreshing the DFS Management console
When working with both the Windows PowerShell DFSN cmdlets and the DFS Management console, be aware that the console does not automatically refresh to reflect changes made with the cmdlets. Always refresh manually before making changes in the console by right-clicking the DFS-N and choosing Refresh from the menu.
Configuring DFS-R targets
When you add multiple folder targets for a DFS-N folder, you can (and in most cases should) set the targets to replicate. Doing so synchronizes the content in the targets and ensures that changes in one folder target are replicated to the other folder targets. Servers that are involved in the replication of folder targets form a replication group. The replication group name is, by default, the path of the replicated folder (see Figure 2-6).
FIGURE 2-6 The Replication Group And Replicated Folder Name page of the Replicate Folder Wizard
You can configure replication directly when you add more than a single folder target to a DFS-N, or you can skip that step initially and enable replication later. When you enable replication, you need to designate which server will be the Primary Member, as shown in Figure 2-7.
FIGURE 2-7 The Primary Member page of the Replicate Folder Wizard
The two basic replication topologies are full mesh and hub and spoke, or you can use a custom replication topology that combines some of the features of each. In a full mesh topology (see Figure 2-8), each member of the replication group replicates with every other member of the group. This works well for small replication groups, especially where new data can originate in any member of the group.
FIGURE 2-8 In a full mesh replication topology, each node replicates to every other node
In a hub and spoke topology, one or two hub servers each connect to multiple spokes, as shown in Figure 2-9. If there is more than one hub, the hubs also connect to each other. A hub and spoke topology requires a minimum of three members (a hub and two spokes), and works well in a publication or branch office scenario in which most changes originate at the hub and are replicated out to the spokes. Changes that originate at a spoke have at least two replication hops before being fully replicated to all members of the replication group.
FIGURE 2-9 In a hub and spoke replication topology, the hub replicates with multiple spokes
Finally, you need to configure the replication schedule and bandwidth, as covered in the following “Configuring replication scheduling” section.
Creating new DFS-R targets by using Windows PowerShell
New in Windows Server 2012 R2 is full Windows PowerShell support for DFS-R. To see a full list of DFS-R cmdlets, sorted by noun, use the following command:
Get-Command -Module DFSR | Sort-Object Noun,Verb | Format-Table Verb,Noun -auto
Exam Tip
The DFS-R Windows PowerShell cmdlets provide a rich source of possible questions and fussy syntax. Plus they are completely new in Windows Server 2012 R2. These properties make them rich fodder for exam question writers. Make sure you work through an example or two to have a clear understanding of how to use them.
To create a new DFS-R replication target, you follow a multicommand process: create the DFS-R group, assign folders to it, and add member servers. Here is an example:
New-DfsReplicationGroup -GroupName "\\TreyResearch.net\Public\Build" `
| New-DfsReplicatedFolder -FolderName "Build" `
| Add-DfsrMember -ComputerName Trey-DC-02,Trey-Srv-13
Add a bidirectional connection between the two servers:
Add-DfsrConnection -GroupName "\\TreyResearch.net\Public\Build" `
-SourceComputerName Trey-DC-02 `
-DestinationComputerName Trey-Srv-13
Specify Trey-DC-02 as the primary:
Set-DfsrMembership -GroupName "\\TreyResearch.net\Public\Build" `
-FolderName "Build" `
-ContentPath C:\Downloads\Build `
-ComputerName Trey-DC-02 `
-PrimaryMember $True `
-StagingPathQuotaInMB 16384 -Force
Finally, specify that Trey-Srv-13 is a member server with the following:
Set-DfsrMembership -GroupName "\\TreyResearch.net\Public\Build" `
-FolderName "Build" `
-ContentPath C:\Downloads\Build `
-ComputerName Trey-Srv-13 `
-StagingPathQuotaInMB 16384 -Force
Configuring replication scheduling
DFS-R defaults to replicating 24 hours per day, 7 days per week over the full available bandwidth, as shown in Figure 2-10. This replication schedule is fine for some basic situations, but doesn’t take into account specific needs.
FIGURE 2-10 The Replication Group Schedule And Bandwidth page of the New Replication Group Wizard
For scenarios that require tuning this default schedule, you can set the specific times when replication should be available and the bandwidth to use for that replication, as shown in Figure 2-11. You can set different replication bandwidths in one-hour time blocks for the entire week if appropriate.
FIGURE 2-11 The Edit Schedule dialog box of the Replication Group Schedule And Bandwidth page
You can use the Set-DfsrGroupSchedule cmdlet to set replication schedule and bandwidth settings. The syntax is this:
Set-DfsrGroupSchedule [-GroupName] <String[]> [[-DomainName] <String>] [[-UseUTC] <Boolean>]
[[-ScheduleType] {Always | Never}] [-Confirm] [-WhatIf] [<CommonParameters>]
Set-DfsrGroupSchedule [-GroupName] <String[]> [[-DomainName] <String>] [[-UseUTC] <Boolean>]
[-Day] <DayOfWeek[]> [-BandwidthDetail] <String> [-Confirm] [-WhatIf] [<CommonParameters>]
The first syntax either sets the schedule to Always (and Full) or disables it entirely. The second syntax group enables you to set the schedule and bandwidth in 15-minute intervals for each day of the week. (See Get-Help Set-DfsrGroupSchedule -Full for detailed syntax and examples.)
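As a sketch of the second syntax: to my understanding, the -BandwidthDetail string contains one hexadecimal character per 15-minute interval (96 per day), where 0 means no replication and F means full bandwidth. The group name and schedule values here are illustrative:

```powershell
# Allow replication on Mondays only from 18:00 onward, at full bandwidth:
# the first 72 intervals (00:00-18:00) are 0; the last 24 (18:00-24:00) are F.
$detail = ('0' * 72) + ('F' * 24)
Set-DfsrGroupSchedule -GroupName "Test" -Day Monday -BandwidthDetail $detail
```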
Configuring Remote Differential Compression (RDC) settings
Remote Differential Compression (RDC) is a client-server protocol that detects insertions, deletions, and changes in file data. With RDC enabled (the default in Windows Server 2012 R2), DFS-R copies a changed data segment only if the segment is not available on the replication partner. With cross-file RDC enabled, that data segment can be from a different file. RDC is designed to improve replication bandwidth usage of low-bandwidth connections.
By default, RDC is used only on files 64 KB or larger, but this threshold can be configured using Windows PowerShell. To change the connection from trey-dc-02 to trey-srv-13 in the Test replication group to use a threshold size of 128 KB, use the following command:
Set-DfsrConnection -GroupName "Test" `
-SourceComputerName "trey-dc-02" `
-DestinationComputerName "trey-srv-13" `
-MinimumRDCFileSizeInKB 128
To disable cross-file RDC and regular RDC completely on the same connection, use the following command:
Set-DfsrConnection -GroupName "Test" `
-SourceComputerName "trey-dc-02" `
-DestinationComputerName "trey-srv-13" `
-DisableRDC $True `
-DisableCrossFileRDC $True
You can enable or disable RDC for a given connection in the DFS Management console by selecting the replication group name in the console tree and clicking the Connections tab. Select a connection and then click Properties in the Actions menu for the connection, as shown in Figure 2-12.
FIGURE 2-12 The Properties dialog box of a replication group connection
Configuring staging
You can configure the size of the staging folder for a DFS-R group member. Each member of the replication group has its own staging folder, and each folder can be individually set. The default size is 4 GB (4096 MB). To change the size of the staging folder for a replication group member, select the replication group in the DFS Management console and select the member server. Right-click the member server, choose Properties, and then click the Staging tab, as shown in Figure 2-13. You can set the Staging Path and the Quota (In Megabytes).
FIGURE 2-13 The Staging tab of the member server Properties dialog box
You can get the current staging size by using the Get-DfsrMembership cmdlet, and configure the staging size and path by using the Set-DfsrMembership cmdlet. To set the path and size for the server membership shown in Figure 2-13, use the following command:
Set-DfsrMembership -GroupName Test `
-ComputerName Trey-dc-02 `
-StagingPath "C:\PSHelp\DfsrPrivate\Staging" `
-StagingPathQuotaInMB 4096
You can also set the size and path of the Conflict and Deleted folder that caches folders and files that have been changed on two or more members or have been deleted. The default size is 4 GB (4096 MB).
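The Conflict and Deleted quota can also be set per member with Set-DfsrMembership. A sketch, reusing the hypothetical group, folder, and server names from the earlier examples:

```powershell
# Raise the Conflict and Deleted folder quota to 8 GB for one member.
Set-DfsrMembership -GroupName "\\TreyResearch.net\Public\Build" `
                   -FolderName "Build" `
                   -ComputerName Trey-DC-02 `
                   -ConflictAndDeletedQuotaInMB 8192 -Force
```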
Configuring fault tolerance
There are two kinds of fault tolerance available for DFS-N: multiple root targets for domain-based DFS-N and failover clusters for stand-alone DFS-N.
Domain-based DFS-N fault tolerance
When creating DFS-N root targets for domain-based DFS-N, you need a minimum of two domain controllers and two DFS-N root targets within the domain that is hosting the root. This ensures that the failure of either a domain controller or a server hosting a DFS-N root will not cause the namespace to be unavailable. Domain-based DFS roots can’t be created on cluster shared storage, although they can be created on nonshared storage of cluster nodes. So you can create a domain-based DFS-N root target on each node of a failover cluster using the local storage of each node. In the event of a node failure in the cluster, the DFS-N is still available. You can also create DFS-N roots on nonclustered servers to provide the same level of fault tolerance.
In a domain with a single domain controller, the unavailability of that domain controller also renders the DFS-N in the domain unavailable. To avoid a single point of failure for your DFS-N, you must ensure that there are at least two domain controllers for each domain that has a DFS-N root.
Stand-alone DFS-N fault tolerance
To ensure the availability of a stand-alone DFS root, you create the root on the shared cluster storage of a clustered file server by using the Cluster Administrator console. In the event of a failure of any node in the cluster, the DFS-N continues to be fully available as long as a single node of the cluster is available.
A stand-alone DFS-N is not dependent on Active Directory, so there is no specific requirement for domain controllers with a stand-alone DFS-N. If the cluster that hosts the stand-alone DFS-N is available, the DFS-N is available.
Cloning a DFS database
Windows Server 2012 R2 adds the capability to clone a DFS-R database to speed up initial synchronization time dramatically. You use preseeded files and an export/import process to quickly set up replication and synchronize the databases. Cloning a DFS-R database can be done only by using Windows PowerShell.
To clone a DFS database, first create the replication group and folders by following these steps:
1. Create and populate the folder that will be the source folder.
2. Create a replication group with New-DfsReplicationGroup.
3. Add a DFS replicated folder with New-DfsReplicatedFolder.
4. Add the source server as a DFS-R member server with Add-DfsrMember.
5. Set the source server as the PrimaryMember with Set-DfsrMembership.
After the replication folder is successfully initialized, a DFS-R event 4112 is issued, and you can export a clone of the database.
Note: Don’t add replication partners
During the initial setup of the replication database, don’t add the replication partners or create a connection to the secondary servers. Replicated folders that are in any state other than a Normal, non–initial sync state are ignored during cloning. You can determine the state of replicated folders with this:
Get-WmiObject -Namespace "root\Microsoft\Windows\DFSR" `
-Class msft_dfsrreplicatedfolderinfo `
-ComputerName <sourceserver> `
| ft replicatedfoldername,state -auto -wrap
Replicated folders that are ready for cloning will show as state 4 (that is, Normal).
After the replicated folders are ready to clone, export the database to a clone directory with the following (where “H:\DfsrClone” is the location that will host the exported clone database):
New-Item -Path "H:\DfsrClone" -Type Directory
Export-DfsrClone -Volume "H:" -Path "H:\DfsrClone"
The Export-DfsrClone cmdlet displays a set of sample robocopy commands and returns Ready when the export process is complete. You can monitor the progress of the cloning by using the Get-DfsrCloneState cmdlet. When cloning is complete, DFS-R issues Event 2402 in the DFS-R event log.
Use Robocopy to move the exported database and preseed the replication folder by using:
Robocopy.exe "H:\DfsrClone" "<destination path>" /B
Robocopy.exe "<source path>" "<destination path>" /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /TEE /LOG+:preseed.log
On the target server, verify that the replication database doesn’t already exist with the following command (where H: is the drive letter of the target replicated folder):
Get-ChildItem -Path "H:\System Volume Information\dfsr" -hidden
If there is no output, there are no replicated folders on the volume. If there is a listing, you need to do cleanup to remove any residual traces from a previous replication. You can’t clone into an existing DFS-R database and you have to remove traces from any previous DFS-R folders.
Exam Tip
You can’t remove residual DFS-R folders or files while the DFS-R service is running. You need to stop the DFS-R service, delete all files and folders in the “\System Volume Information\dfsr” folder and then restart the DFS-R service.
After preseeding with Robocopy is complete, you can import the cloned database and XML configuration with the following command (again H: is the target volume and \dfsrclone is the target path):
Import-DfsrClone -Volume H: -Path "H:\dfsrclone"
When Get-DfsrCloneState returns Ready, or when the DFS-R event log shows an Event 4104 (one event per replicated folder), you can complete configuring the replication by adding the target server to the replication group and setting its membership state with this:
$DfsrSourceComputerName = "<sourceserver>"
$DfsrDestinationComputerName = "<destinationserver>"
$DfsrReplicationGroupName = "<DFS-R Group>"
$DfsrReplicatedFolderName = "<DFS-R Folder>"
$DfsrReplicatedFolderPath = "<DFS-R Folder Path>"
Add-DfsrMember -GroupName $DfsrReplicationGroupName `
-ComputerName $DfsrDestinationComputerName
Add-DfsrConnection -GroupName $DfsrReplicationGroupName `
-SourceComputerName $DfsrSourceComputerName `
-DestinationComputerName $DfsrDestinationComputerName
Set-DfsrMembership -GroupName $DfsrReplicationGroupName `
-FolderName $DfsrReplicatedFolderName `
-ContentPath $DfsrReplicatedFolderPath `
-ComputerName $DfsrDestinationComputerName
Use the Get-DfsrPreservedFiles cmdlet to discover any files that had conflicts during the database cloning, and use the *-DfsrPropagationTest cmdlets to validate replication.
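A sketch of such a validation pass, reusing the variables defined in the preceding script (the report output path is hypothetical):

```powershell
# Drop a propagation test file on the source server, then generate a report
# showing how (and whether) it replicated to the other members.
Start-DfsrPropagationTest -GroupName $DfsrReplicationGroupName `
                          -FolderName $DfsrReplicatedFolderName `
                          -ReferenceComputerName $DfsrSourceComputerName
Write-DfsrPropagationReport -GroupName $DfsrReplicationGroupName `
                            -FolderName $DfsrReplicatedFolderName `
                            -ReferenceComputerName $DfsrSourceComputerName `
                            -Path "C:\Reports"
```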
Recovering DFS databases
You can use the database cloning technique to speed up recovery from a corrupted DFS-R database on a server. This corruption can be caused by hardware issues, such as an abrupt power loss. Rather than wait for the slow process of an automatic nonauthoritative recovery to complete, you can clone the primary database, as described previously, and use it to recover the corrupted database. You won’t need to preseed the replicated folder, so you can skip that step, but you should remove the memberships of the problem server to prevent DFS-R from attempting to rebuild the database until the cloning is complete.
If the only DFS-R memberships for the server are on the same volume as the corrupted database, use Windows PowerShell to remove the member with this:
Remove-DfsrMember -GroupName <dfsrgroup> -ComputerName <dfsrservername>
Important: Using Remove-DfsrMember
Use Remove-DfsrMember only when all the server’s memberships are on a single volume. This command affects all memberships of the server.
If there are volumes with uncorrupted DFS-R databases, use the Dfsradmin command instead. This command enables you to specify only a single membership.
Dfsradmin membership delete /rgname:<dfsrgroup> /rfname:<dfsrfolder>
/memname:<dfsrservername>
Optimizing DFS-R
DFS-R can be optimized by tuning the file-staging sizes as appropriate on a per-server basis. Windows Server 2012 R2 adds the capability to tune the minimum staging size for files to increase performance when replicating large files.
In previous versions of Windows Server, DFS-R used a fixed 256-KB minimum file size for staging: any file larger than 256 KB was staged before it replicated. Further, if the staging folder quota is configured too small, DFS-R consumes additional CPU and disk resources.
The staging folder quota should be large enough that replication can continue even if multiple large files are staged awaiting replication. To improve performance, the staging folder should be as close to the size of the replicated folder as possible.
Staging folder quota sizes are particularly a concern on hub members with multiple replication partners. When configuring staging folders, locate them on different physical disks from the folders that are being replicated.
Thought experiment: Configuring branch office access to corporate resources
In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.
You are the network administrator for TreyResearch.net. The company has a large share of corporate training content, currently hosted on a single Windows Server 2012 R2 server in the main corporate datacenter. The share sits on a ReFS file system to provide additional resiliency.
Users in the branch offices complain of poor video performance when watching training videos, plus other users complain of slow access to other corporate resources when several users are watching training videos.
1. What would you suggest as a solution to improve the ability of branch office users to access corporate training resources? (Choose all that apply.)
A. Copy the current corporate training share to each of the branch offices.
B. Increase the bandwidth on the corporate wide area network (WAN).
C. Create a root DFS-N namespace and add the training share folder to the namespace.
D. Move the share to an NTFS file system server to improve performance.
E. Set up DFS-R to branch office servers in a hub-spoke configuration with the corporate office at the hub.
F. Set up DFS-R to the branch office servers in a mesh configuration.
2. All content is created in the main corporate headquarters, but the content management and creation role is being expanded to include content from all the branch offices. How does this affect the answer to question 1?
3. What steps can you take to improve the initial replication of data to the branch offices?
Objective summary
The DFS-N and DFS-R roles can be installed on Windows Servers running full or Server Core installations and on an AD DS domain controller.
Windows Server 2012 added Windows PowerShell support for DFS-N.
Windows Server 2012 R2 added Windows PowerShell support for DFS-R.
Configure DFS-R scheduling and bandwidth usage to optimize WAN bandwidth while providing an appropriate replication speed.
RDC settings can be configured to disable cross-file RDC.
The threshold for RDC can be changed with the Set-DfsrConnection cmdlet.
Configure the size of DFS-R staging folders to minimize thrashing and provide efficient replication of large files. Windows Server 2012 R2 enables staging folder size to be set for each member server in a replication group.
Use failover clustering to provide fault tolerance of stand-alone DFS-N.
Use multiple DFS-N root targets and multiple AD DS domain controllers to provide fault tolerance for domain-based DFS-N.
Use DFS database cloning to speed up initial replication of large DFS-R folders.
Use DFS database cloning to speed up recovery from a corrupted DFS-R database on a member server.
Objective review
1. What commands do you need to run to enable DFS-N and DFS-R on the local server?
A. Add-WindowsPackage -online -PackagePath DFS
B. Enable-WindowsOptionalFeature -online -PackageName DFS-N,DFS-R
C. Install-WindowsFeature -Name FS-DFS-Namespace,FS-DFS-Replication -IncludeManagementTools
D. Add-WindowsFeature -Name DFS -IncludeAllSubFeatures
2. You need to enable remote management of DFS from your Windows 8 workstation. What commands do you need to run? (Include only the minimum that apply.)
A. Install-WindowsFeature -name RSAT-DFS-Mgmt-Con
B. Enable-WindowsOptionalFeature -FeatureName *DFS* -online
C. Winrm QuickConfig
D. Enable-PSRemoting
Objective 2.2: Configure File Server Resource Manager (FSRM)
The File Server Resource Manager (FSRM) role enables folder-level quotas, file-type screening, and comprehensive reporting of file system usage. The FSRM role also allows you to define a subset of files on a server and then schedule a task to apply simple commands to that subset of files.
This objective covers how to:
Install the FSRM role
Configure quotas
Configure file screens
Configure reports
Configure file management tasks
Installing the FSRM role
FSRM is supported on all versions of Windows Server 2012 R2 and is supported for both full and Server Core installations. For Server Core installations, the graphical FSRM console is used remotely from another copy of Windows Server 2012, Windows Server 2012 R2, or a Windows 8 or Windows 8.1 computer with the Remote Server Administration Tools (RSAT) installed.
Installing FSRM by using Server Manager
Windows Server Manager is the GUI way to install and configure the FSRM role on a server. Windows Server Manager can be used to manage both the local server and remote servers, including those running a Server Core installation, which enables you to install FSRM graphically on Server Core without ever using the command line.
Use the Add Roles And Features Wizard to install the FSRM role, choosing the Role-Based Or Feature-Based Installation option. Because Windows Server Manager can be used to insert roles and features into virtual hard disks, or to manage multiple servers from a single console, you first have to select the server on which you want to install FSRM. On the Select Server Roles page, expand File And Storage Services, then expand File And iSCSI Services and select File Server Resource Manager (see Figure 2-14). Click Next a couple of times and then click Install. In most cases, installing FSRM does not require a server restart.
FIGURE 2-14 The Select Server Roles page of the Add Roles And Features Wizard
Installing FSRM by using Windows PowerShell
You can install FSRM by using Windows PowerShell, either locally or remotely. Windows PowerShell supports remote installation of FSRM on Windows Server 2012 R2. To install FSRM locally, use the following command:
Install-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools
To install FSRM on a remote computer that has remote management configured, use this command:
Install-WindowsFeature -ComputerName <ServerName> -Name FS-Resource-Manager `
-IncludeManagementTools
The Install-WindowsFeature command requires that the user have administrator credentials to install FSRM, and the command must be run from an elevated shell.
Exam Tip
Most management tasks that can be performed with Windows PowerShell support the use of a credential object, but even when you provide a credential, some commands fail unless they are run from an elevated shell. To start an elevated Windows PowerShell command line, use the following:
Start-Process PowerShell.exe -verb RunAs
This command opens an elevated shell. Some commands might still require that you supply a credential object for the credentials required if the account you used to create the elevated shell doesn’t have sufficient privileges for the command.
Configuring quotas
FSRM quotas allow you to limit the space that is available on a folder or volume. Quotas can be applied automatically to new folders, or retroactively to existing folders. FSRM includes quota templates that can be applied directly or used as the starting point for new quota templates.
Creating a quota
To create a quota on a folder or volume using the File Server Resource Manager, right-click Quotas in the console tree and select Create Quota from the menu. The Create Quota dialog box opens (see Figure 2-15). In this dialog box, you specify the following:
Quota Path The root of the path on which you want to apply the quota
Auto Apply Template And Create Quotas On Existing And New Subfolders Applies the quota template to the path and automatically creates quotas on existing subfolders and on any new subfolders created later, so that subfolders can't be used to bypass the quota restrictions
Create Quota On Path Creates a quota on that path, but doesn't automatically apply the quota to existing subfolders of the path or to new subfolders created later
Derive Properties From This Quota Template Selects from a list of existing quota templates to use to define the quota
Define Custom Quota Properties Defines a custom quota with hard or soft quotas and custom notification thresholds
FIGURE 2-15 The Create Quota dialog box
If you choose to create a custom template, you use the Quota Properties dialog box, as shown in Figure 2-16. You can specify the following:
Copy Properties From Quota Template Optionally choose from an existing template as the starting point to define the quotas for this path.
Quota Path Specifies the root of the quota path. It is unavailable when already specified in the Create Quota dialog box.
Description An optional description for the custom quota.
Limit The space limit for the custom quota. If you used a template as the starting point, this box is filled in based on that template (but you can change the limit).
Hard Quota If specified, enforces the limit on all nonadministrative users.
Soft Quota If specified, the limit is used for monitoring and reporting only, but users can continue to add files to the path even when they have exceeded the limit.
Notification Thresholds You can specify the percentage of the limit that will trigger a notification and also what kind of notification. You can also specify a command to run or reports to generate when the threshold is reached.
FIGURE 2-16 The Quota Properties dialog box
You can also use the Windows PowerShell FSRM module to create a quota or auto apply quota. You can create a quota on a path with the New-FsrmQuota cmdlet. The syntax is as follows:
New-FsrmQuota -Path <quotapath> -Size <uint64> -SoftLimit -Template <string>
To create thresholds, use the New-FsrmQuotaThreshold cmdlet. To specify an action to be taken at a threshold, use the New-FsrmAction cmdlet. The New-FsrmAction cmdlet supports the following actions:
Email Sends an email to the user or administrator that the event was triggered
Event Creates an event log entry
Command Runs the command specified
Report Runs one or more storage reports
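Putting these cmdlets together, the following is a minimal sketch that creates a 250-MB soft quota with an email notification at 85 percent usage. The D:\Shares path and the message text are illustrative assumptions; [Admin Email] and [Quota Path] are intended as standard FSRM message insertion variables (verify the exact variable names in the FSRM documentation).

```powershell
# Sketch: a 250-MB soft quota on a hypothetical D:\Shares folder that
# emails the administrator when usage reaches 85 percent of the limit.
$action = New-FsrmAction -Type Email -MailTo "[Admin Email]" `
    -Subject "Quota threshold reached" `
    -Body "The quota on [Quota Path] has reached 85 percent of its limit."

# Attach the action to an 85 percent threshold.
$threshold = New-FsrmQuotaThreshold -Percentage 85 -Action $action

# Create the soft quota itself; -SoftLimit means usage is monitored and
# reported, but users can still exceed the 250-MB limit.
New-FsrmQuota -Path "D:\Shares" -Size 250MB -SoftLimit -Threshold $threshold
```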
Creating a quota template
You can create a quota template completely from scratch or by starting with one of the existing templates and modifying it for your own specific needs. Templates can have the following properties:
Hard or soft quotas A hard quota prevents a user from saving files when the limit is reached. A soft quota only warns the user (and usually the administrator) and is useful for monitoring usage.
Space Limits The space limits can be specified in kilobytes (KB), megabytes (MB), gigabytes (GB), or terabytes (TB). The limit is applied only to a specific path of a quota; it doesn’t affect other volumes or paths.
Notification Thresholds You can specify as many or as few notification thresholds as you need. Each threshold is triggered when usage reaches a percentage of the space limit and can trigger email, trigger an event in the event log, run a command or script, cause one or more reports to be generated, or a combination of these actions. For more on these actions, see the following section, “Notification actions.”
To create a new template, follow these steps:
1. Select Quota Templates in the console tree of the FSRM console and then click Create Quota Template in the Actions pane.
2. To start with the settings of an existing template, select the template from the Copy Properties From Quota Template (Optional) list and then click Copy.
3. Enter a template name and an optional description, and then modify the Space Limit, Hard Quota Or Soft Quota, and Notification Thresholds settings as appropriate for your new template.
4. Click OK when you have finished configuring the new quota template; it is then available for immediate use.
You can also create a new template by using the New-FsrmQuotaTemplate cmdlet or modify an existing template by using Set-FsrmQuotaTemplate. The process is similar to that used when creating a new quota. You first define the thresholds and actions for the quota, saving them in a variable, and then use the variables in the New-FsrmQuotaTemplate or Set-FsrmQuotaTemplate cmdlets. Alternately, you can pipe the thresholds and actions to the New-FsrmQuotaTemplate or Set-FsrmQuotaTemplate cmdlets.
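As a sketch of that process, the following creates a hypothetical "250 MB Soft Monitor" template that logs a warning event at 90 percent usage (the template name and event text are illustrative assumptions):

```powershell
# Define the notification action and threshold first, in variables.
$action = New-FsrmAction -Type Event -EventType Warning `
    -Body "Quota threshold reached on [Quota Path]."
$threshold = New-FsrmQuotaThreshold -Percentage 90 -Action $action

# Create the reusable template; quotas based on it inherit the
# 250-MB soft limit and the 90 percent event notification.
New-FsrmQuotaTemplate -Name "250 MB Soft Monitor" -Size 250MB `
    -SoftLimit -Threshold $threshold
```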
Notification actions
When a quota reaches one of the thresholds set in the quota, it can trigger multiple actions, as specified in the Notification Thresholds box of the quota. There are four basic kinds of actions that can be triggered:
Email Message Sends an email to the user or administrator that the event was triggered
Event Log Creates an event log entry
Command Runs the command specified
Report Runs one or more storage reports
The email message event is configured by the E-mail Message tab of the Add Threshold dialog box (see Figure 2-17).
FIGURE 2-17 The E-Mail Message tab of the Add Threshold dialog box
This page allows you to set the receiver of the email and the text of the message. The text can include variables to personalize the email and add information about the quota threshold. You can also specify additional recipients of the message by clicking the Additional E-Mail Headers button.
The Event Log event is configured by the Event Log tab of the Add Threshold dialog box (see Figure 2-18).
FIGURE 2-18 The Event Log tab of the Add Threshold dialog box
The Command event is configured by the Command tab of the Add Threshold dialog box (see Figure 2-19).
FIGURE 2-19 The Command tab of the Add Threshold dialog box
The Command tab enables you to configure the threshold to run a single command or script. You can specify the command or script, any arguments to the command, a working directory, and the account the command or script is run as. The available choices are:
Local Service Runs the command or script with the same level of access as members of the local Users group; network resources are accessed anonymously, with no credentials.
Network Service Runs the command or script with the same level of access as members of the local Users group; network resources are accessed with the credentials of the computer account.
Local System Full access to the system. This is a very powerful and potentially dangerous level of access, and it should be avoided unless you are certain that the process can't be compromised.
Exam Tip
When reading an exam question, watch for phrases such as “using least privilege.” They are a clue that one or more of the possible answers is likely to be wrong because it uses an account that has too much privilege. For example, if the action is running only locally and doesn’t need to access network resources, it shouldn’t be run as Network Service. And if it is specified as running as Local System, it almost certainly doesn’t meet that requirement of the question.
The report event is configured by the Report tab of the Add Threshold dialog box, as shown in Figure 2-20. When a threshold limit is reached, you can automatically generate one or more of the standard reports and then optionally have the reports sent to the user or to one or more administrators. The reports are automatically saved in the default location for reports.
FIGURE 2-20 The Report tab of the Add Threshold dialog box
Configuring file screens
FSRM includes the capability to screen files based on a file name pattern. A screen can be an active screen, which prevents matching files from being saved to a specific path or volume, or a passive screen, which is used for monitoring only. The file screen functionality of FSRM is based on file names, not on file content. Although it is traditionally used to filter by file extension, it can be used to filter on any portion of the file name.
Note: File name patterns only
The file screens implemented by FSRM do not prevent users from saving files that they shouldn’t; they only prevent users from saving files whose names match a pattern. Screening for audio MP3 files (for example, with a pattern of *.mp3) prevents someone from saving mymusicfile.mp3, but doesn’t stop them from saving that same file renamed to mymusicfile.np3.
Creating a file screen
To create a file screen on a folder or volume using the FSRM, right-click File Screens in the console tree and select Create File Screen from the menu. The Create File Screen dialog box shown in Figure 2-21 opens. In this dialog box, you specify the following:
File Screen Path The root of the path on which you want to apply the file screen
Derive Properties From This File Screen Template Select from a list of existing file screen templates to use to define the file screen
Define Custom File Screen Properties Define a custom file screen with an active or passive file screen, and notification actions
FIGURE 2-21 The Create File Screen dialog box
If you choose to create a custom template, use the File Screen Properties dialog box, as shown in Figure 2-22, which enables you to specify the following:
Copy Properties From Template Optionally choose from an existing template as the starting point to define the file screen for this path.
File Screen Path Specifies the root of the file screen path. It is unavailable when already specified in the Create File Screen dialog box.
Active Screening If specified, does not allow users to save the file type on the file screen path specified.
Passive Screening If specified, users are allowed to save the file type, but monitoring actions such as email messages are initiated.
File Groups Specifies the type of file to screen. You must select one or more types to create a screen.
FIGURE 2-22 The File Screen Properties dialog box
You can also use the Windows PowerShell FSRM module to create a file screen. You can create a file screen on a path with the New-FsrmFileScreen cmdlet. The syntax is this:
New-FsrmFileScreen -Path <filescreenpath> -IncludeGroup <string[]> -Active -Template <string>
To specify an action to be taken on a file screen, use the New-FsrmAction cmdlet. The New-FsrmAction cmdlet supports the following actions:
Email Sends an email to the user or administrator that the event was triggered
Event Creates an event log entry
Command Runs the command specified
Report Runs one or more storage reports
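For example, the following sketch creates an active screen on a hypothetical D:\Shares path that blocks the built-in Audio and Video Files group and logs a warning event when triggered (the path and event text are illustrative assumptions):

```powershell
# Define the notification action for the screen.
$action = New-FsrmAction -Type Event -EventType Warning `
    -Body "A blocked file type was rejected on the screened path."

# -Active makes this an enforcing screen; omit it for passive monitoring.
New-FsrmFileScreen -Path "D:\Shares" -IncludeGroup "Audio and Video Files" `
    -Active -Notification $action
```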
Creating a file screen exception
You can create a file screen exception that allows files of a particular file group to be saved to the specified path even when there is a file screen in place. You create a file screen exception just as you would a file screen, but instead of blocking the file group, it allows the file group. A typical example has a file screen that prohibits saving any audio or video files in Public folders, but allows an exception for corporate training videos saved in the designated training folder.
Creating a file screen template
You can create a file screen template completely from scratch or by starting with one of the existing templates and modifying it for your own specific needs. Templates can have the following properties:
Active Or Passive Screens An active file screen prevents a user from saving files of the specified type. A passive file screen only warns the user (and usually the administrator) and is useful for monitoring usage.
File Groups Define the files to be screened. File screens are based on file name patterns and can include multiple file name matches in a group.
Path The root of the path in which the file screen will apply.
To create a new template, follow these steps:
1. Select File Screen Templates in the console tree of the File Server Resource Manager console and then click Create File Screen Template in the Actions pane.
2. To start with the settings of an existing template, select the template from the Copy Properties From File Screen Template (Optional) list and then click Copy.
3. Enter a template name, and then modify the File Groups To Block setting and choose Active Screening or Passive Screening as appropriate for your new template.
4. Specify any actions to take when the file screen is triggered.
5. Click OK when you have finished configuring the new file screen template; it is then available for immediate use.
You can also create a new template by using the New-FsrmFileScreenTemplate cmdlet or modify an existing template by using Set-FsrmFileScreenTemplate. The process is similar to the one used when creating a new file screen. You first define the actions for the file screen, and any new file groups (using the New-FsrmFileGroup cmdlet), saving them in variables, and then use the variables in the New-FsrmFileScreenTemplate or Set-FsrmFileScreenTemplate cmdlets. Alternately, you can pipe the groups and actions to the New-FsrmFileScreenTemplate or Set-FsrmFileScreenTemplate cmdlets.
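A minimal sketch of that process, assuming a hypothetical "Archive Files" group and "Block Archives" template name:

```powershell
# Define a custom file group of archive-style file name patterns.
$group = New-FsrmFileGroup -Name "Archive Files" -IncludePattern "*.zip","*.rar"

# Define the notification action for the template.
$action = New-FsrmAction -Type Email -MailTo "[Admin Email]" `
    -Subject "File screen violation" `
    -Body "An archive file was rejected by the file screen."

# Create the reusable active-screening template from the variables.
New-FsrmFileScreenTemplate -Name "Block Archives" -IncludeGroup $group.Name `
    -Active -Notification $action
```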
File screen notification actions
When a file screen is triggered, it can initiate multiple actions, as specified in the file screen. There are four kinds of actions that can be triggered:
Email Message Sends an email to the user or administrator that the event was triggered
Event Log Creates an event log entry
Command Runs the command specified
Report Runs one or more storage reports
The notification actions are essentially the same as for quota threshold notifications, as described earlier in the “Notification actions” section.
Creating file groups
File screens use pattern matching to describe groups of files to screen. FSRM includes 11 predefined file groups, as shown in Figure 2-23.
FIGURE 2-23 The File Server Resource Manager console
You can modify an existing file group or create a new file group. All the default file groups are based on pattern matching of the file extension, but file groups you create can use pattern matching against any portion of the file name. Furthermore, file groups you create can specify files to include in the screen, files to exclude from the screen, or both. To create a file group, select File Groups in the console tree of the File Server Resource Manager console and then click Create File Group in the Actions pane to open the Create File Group Properties dialog box (see Figure 2-24).
FIGURE 2-24 The Create File Group Properties dialog box
Because the file groups are based on whole file name pattern matching, you can use file screening to control exactly which files are allowed in a particular folder. For example, if you wanted to ensure that only screen captures for this chapter were allowed to be saved, you could create a file group that included the patterns “F??xx??.bmp” and “G??xx??.bmp”, and excluded the pattern “F02xx??.bmp”. This file group would define a file screen that screened all screen captures except those that begin with “F02”.
To create a file group using Windows PowerShell, use the New-FsrmFileGroup cmdlet. For example:
New-FsrmFileGroup -Name Chp2Files -IncludePattern "F??xx??.bmp","G??xx??.bmp" `
-ExcludePattern "F02xx??.bmp"

Description    :
ExcludePattern : {F02xx??.bmp}
IncludePattern : {F??xx??.bmp, G??xx??.bmp}
Name           : Chp2Files
PSComputerName :
Notice that the -ExcludePattern and -IncludePattern parameters take a string list, which allows you to include or exclude multiple file name patterns from the group. You can modify a file group by using the Set-FsrmFileGroup cmdlet.
Configuring reports
FSRM includes 10 predefined reports. You can change the parameters of these reports, but you can't create new reports from scratch. Reports can be scheduled to run on a daily, weekly, or monthly schedule at a specific time, and each report accepts parameters relevant to that report. For example, the Files By Owner report defaults to all file owners and all files, but you can limit it to specific users and to files matching a specific name pattern.
To generate a Files By Owner report of all the MP3 files on a server, select Storage Reports Management in the console tree and then click Generate Reports Now to open the Storage Reports Task Properties dialog box. Select Files By Owner in the Select Reports To Generate, and then click Edit Parameters. Enter *.mp3 into the Include Only Files Matching The Following File Name Pattern box, as shown in Figure 2-25.
FIGURE 2-25 The Report Parameters dialog box
When you run reports interactively, as opposed to as a scheduled task, you can choose to have the report open as soon as it finishes, as shown in Figure 2-26.
FIGURE 2-26 The Generate Storage Reports dialog box
You can use Windows PowerShell to manage Storage Reports by using the *-FsrmStorageReport cmdlets. For details on how to create a new Storage Report, see http://go.microsoft.com/fwlink/?LinkID=289432.
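As a sketch, the interactive Files By Owner report described above could be generated from Windows PowerShell roughly as follows. The D:\Shares namespace is an illustrative assumption, and the -FileOwnerFilePattern parameter name should be verified against the New-FsrmStorageReport documentation:

```powershell
# Generate a Files By Owner report for MP3 files immediately (-Interactive)
# rather than on a schedule.
New-FsrmStorageReport -Name "MP3 Files By Owner" -Namespace "D:\Shares" `
    -ReportType FilesByOwner -FileOwnerFilePattern "*.mp3" -Interactive
```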
Configuring file management tasks
FSRM enables you to run file management tasks, which enable you to take actions on files based on the file’s properties. You can schedule tasks to run daily, weekly, or monthly; and to generate reports after the task runs. You can also have the task send a warning notification to users before it runs.
To create a file management task, select File Management Tasks in the console tree of the File Server Resource Manager console and click Create File Management Task in the Actions pane. The Create File Management Task dialog box shown in Figure 2-27 opens.
FIGURE 2-27 The Create File Management Task dialog box
Enter a name for the task and then click the Scope tab. Specify which folders the task will run against. On the Action tab, specify what action to take, as shown in Figure 2-28. You can set different settings depending on the action being taken.
FIGURE 2-28 The Action tab of the Create File Management Task dialog box
On the Notification tab, you can set warnings and actions to take before the file management task runs, as shown in Figure 2-29. You can send email, enter an event in the event log, or run a custom command or script. You can set multiple notifications prior to the file management task to provide plenty of warning.
FIGURE 2-29 The Add Notification dialog box
On the Report tab, you can enable a log file, an error log file and an audit log file, along with generating a report. On the Condition tab, you can add conditions to limit the file management task to act only on files with specific property conditions. Finally, on the Schedule tab, you can specify how often a task runs. You can also set the task to run continuously against new files that match the property classifications in the condition.
You can also create file management tasks with the New-FsrmFileManagementJob cmdlet. This cmdlet uses an FsrmScheduledTask object, an FsrmFmjAction object, and an FsrmFmjNotification object. For details, see http://go.microsoft.com/fwlink/?LinkID=289420.
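A hedged sketch of a weekly expiration job follows; the folder paths, job name, and schedule values are illustrative assumptions:

```powershell
# Run every Sunday at 10 P.M.
$task = New-FsrmScheduledTask -Time (Get-Date "22:00") -Weekly Sunday

# Expire (move) matching files into a holding folder rather than deleting them.
$action = New-FsrmFmjAction -Type Expiration -ExpirationFolder "D:\Expired"

# Tie the schedule and action together into the file management job.
New-FsrmFileManagementJob -Name "Expire old transfer files" `
    -Namespace "D:\Transfer" -Schedule $task -Action $action
```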
Thought experiment: Using FSRM to manage file system usage
In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.
You are the network administrator for TreyResearch.net. The company uses a single main public share as a file-sharing resource to allow users to share and swap files, as well as to host shared internal corporate resources. Lately, there has been a large increase in file system usage, and adding additional disk space isn’t an easy option on the hosting server.
1. What reports can you use to get a clear understanding of which files are taking up the most space and who their owners are?
2. Could you use quotas to control how much space each user is allowed to use on the share?
3. What file screens could you put in place to ensure that inappropriate files are not saved on the share?
4. How could you use file screen exceptions to allow the saving of sanctioned files even if they might violate the file screens in question 3?
Objective summary
The FSRM role can be installed on both full installations and Server Core installations.
Quotas, which can be hard or soft, can act on a specific path or on a path and all subfolders of the path.
Quotas can be created from scratch or can be based on quota templates.
Notification actions include sending an email, entering an alert in the event log, running reports, and executing a command.
Executing commands based on a quota, file screen, or file management task should be done with the least privilege possible to accomplish the goal.
File screens can be active or passive.
File screens are based on file name patterns, not on file content.
There are 10 standard storage reports.
Storage reports can run on a schedule or can be run interactively.
File management tasks can be set to run on a schedule, or continuously on new files.
File management tasks can be set to notify users days before the task actually runs to prevent data loss.
Objective review
1. The company provides a public transfer share to allow users to easily share files. Without using excess privilege, you have to ensure that all files are deleted after they have been on the server for 3 days. What PowerShell command should you run as part of the file management task?
A. get-childitem -recurse | where-object {$_.CreationTime -ge (get-date).Add(-3)} | remove-item
B. get-childitem -recurse | where-object {$_.CreationTime -le (get-date).Add(-3)} | remove-item
C. get-childitem -recurse | where-object {$_.CreationTime -ge (get-date).AddDays(-3)} | remove-item
D. get-childitem -recurse | where-object {$_.CreationTime -le (get-date).AddDays(-3)} | remove-item
2. In the scenario of question 1, what command security should the script run as?
A. Local Service
B. Domain Users
C. Local System
D. Protected Users
3. You need to allow users to store files for sharing with other users. These files are stored on the D:\UserShare folder of SRV2, which is shared as \\srv2\share. Each user’s use of the space is limited to 250 MB. When users reach 200 MB, they should be warned via email and the Administrator account should also be notified by email. How can you implement this?
A. Create an FSRM quota on the D:\UserShare folder of SRV2 based on the Monitor 500 MB Share template, but change the Limit to 250 MB.
B. Create an FSRM quota on the D:\UserShare folder of SRV2 based on the 200 MB Limit With 50 MB Extension template.
C. Create an FSRM quota on the \\srv2\UserShare share based on the Monitor 500 MB Share template, but change the Limit to 250 MB.
D. Create an FSRM quota on the \\srv2\UserShare share based on the 200 MB Limit Reports To User template, but change the limit to 250 MB.
Objective 2.3: Configure file and disk encryption
Windows Server 2012 R2 supports two different types of file and disk encryption: BitLocker and the Encrypting File System (EFS). BitLocker provides whole-disk encryption, using a Trusted Platform Module (TPM) version 1.2 or later when available, or a removable USB key when a TPM is not available. EFS is useful for user-level file and folder encryption on both client computers and remote file servers.
This objective covers how to:
Configure BitLocker encryption
Configure the Network Unlock feature
Configure BitLocker policies
Configure the EFS recovery agent
Manage EFS and BitLocker certificates, including backup and restore
Configuring BitLocker encryption
To enable BitLocker encryption on Windows Server, you need to install the BitLocker feature; note that all volumes encrypted with BitLocker must use the NTFS file system. To install the feature in Server Manager, select Add Roles And Features and then follow these steps:
1. Select Role-Based Or Feature-Based Installation.
2. On the Select Features page, select BitLocker Drive Encryption. You’ll be prompted to add additional supporting features, as shown in Figure 2-30.
FIGURE 2-30 The Add Features That Are Required For BitLocker Drive Encryption? page
3. The actual supporting features that will be added will depend on which features are already installed on the server. Click Add Features and then click Next.
4. Click Install to complete the installation. At least one restart is required.
To install BitLocker using Windows PowerShell, use the following command:
Install-WindowsFeature -Name BitLocker -IncludeAllSubFeature `
-IncludeManagementTools -Restart
Enabling BitLocker protectors
When enabling BitLocker from the command line, it’s a good practice to add BitLocker protectors prior to enabling BitLocker on a volume. At a minimum, you should add the recovery password protector to ensure that you have a way to recover if your hardware changes. Even very small changes can trigger a BitLocker failure. Keep a copy of the recovery password in a safe place that is accessible in an emergency, but not with the computer you’re trying to protect.
The other protector you should add is the recovery key protector. This protector writes a recovery key to a USB key, allowing you to recover and boot by inserting the USB key. Keep this key in a safe place separate from the server it is protecting.
You can add a BitLocker protector with the Add-BitLockerKeyProtector cmdlet or with the manage-bde.exe command-line utility. You can add only one protector at a time. To add the recovery password protector with a default, generated, numerical key and add the recovery key protector to the operating system drive (C:), use the following Windows PowerShell commands:
Add-BitLockerKeyProtector -MountPoint C: -RecoveryPasswordProtector
Add-BitLockerKeyProtector -MountPoint C: -RecoveryKeyProtector -RecoveryKeyPath <string>
In the second of these commands, <string> should be replaced with the path to the USB key onto which you want to write the recovery key.
To add the same protectors by using the manage-bde command, use the following:
manage-bde -protectors -add C: -RecoveryPassword
manage-bde -protectors -add C: -RecoveryKey <string>
The available protectors are as follows:
Recovery password
Recovery key
Startup key
Certificate
TPM (operating system drive only)
Password (data drives only)
TPM and PIN
TPM and startup key
TPM and PIN and startup key
AD DS (data drives only)
Enabling BitLocker encryption of the operating system drive
You can enable BitLocker from the command line by using either the manage-bde command-line utility or the Windows PowerShell Enable-BitLocker cmdlet.
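For example, to enable BitLocker on the operating system drive by using the TPM as the key protector (a sketch that assumes an initialized TPM and that the recovery protectors described earlier have already been added):

```powershell
# Encrypt the operating system volume with AES-256, keyed to the TPM.
Enable-BitLocker -MountPoint "C:" -EncryptionMethod Aes256 -TpmProtector

# Roughly equivalent with the command-line utility:
# manage-bde -on C:
```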
Note: Using the BitLocker Drive Encryption control panel
When you install the BitLocker feature in Windows Server 2012 R2, the control panel application is not normally visible until you encrypt your first volume unless you have the Desktop Experience feature installed (you normally would not, except on a Remote Desktop Session Host computer). If you have Desktop Experience installed, you can use the BitLocker Drive Encryption control panel application for your first volume encryption.
BitLocker works best with a TPM of at least version 1.2. This hardware security module works with BitLocker to protect the keys used for full volume encryption. If the hardware changes in any significant way, BitLocker does not automatically unlock an encrypted volume; if the encrypted volume is the operating system volume, Windows Server can't boot without a recovery key or recovery password.
Suspending BitLocker
Whenever you need to make changes to the hardware or BIOS of a BitLocker-protected server, or install system updates, you should suspend BitLocker on the operating system drive to ensure that you can boot after the change. You can suspend BitLocker for a single restart (the default) or for more than a single restart by using the -RebootCount parameter. When BitLocker is suspended, the data on the volume is not decrypted; instead, the BitLocker encryption key is available to everyone in the clear. New data written to the volume is still encrypted, and BitLocker does not do a system integrity check on startup, allowing you to start Windows Server even though there has been a change that would have normally triggered an integrity check. To suspend BitLocker, use the BitLocker Drive Encryption control panel item or use the Suspend-BitLocker cmdlet. For a suspension on the C: drive of three restarts, use this:
Suspend-BitLocker -MountPoint C: -RebootCount 3
If you specify a RebootCount of 0, BitLocker is suspended until you resume BitLocker protection by using the Resume-BitLocker cmdlet.
Locking or unlocking BitLocker volumes
You can lock a BitLocker volume to prevent any access to the volume by using the Lock-BitLocker cmdlet. The volume remains locked until it is unlocked with the Unlock-BitLocker cmdlet. Operating system volumes can’t be locked.
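For example, assuming D: is a BitLocker data volume protected with a password protector:

```powershell
# Lock the volume immediately, dismounting it even if files are open.
Lock-BitLocker -MountPoint "D:" -ForceDismount

# Unlock it again by prompting for the volume password.
Unlock-BitLocker -MountPoint "D:" -Password (Read-Host -AsSecureString "Password")
```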
Enabling and disabling auto-unlock of a BitLocker volume
Data volumes and removable drives that are encrypted by BitLocker can be automatically unlocked whenever they are present in the host computer. You can’t automatically unlock the operating system volume. After a user unlocks the operating system volume, BitLocker uses encrypted information in the registry and volume metadata to unlock any data volumes that have automatic unlocking enabled. To enable auto-unlock of a BitLocker volume, use the BitLocker Drive Encryption control panel item or use the Enable-BitLockerAutoUnlock cmdlet. You can disable the auto-unlock feature of one or more BitLocker volumes by using the Disable-BitLockerAutoUnlock cmdlet. You can clear all automatic unlocking keys on a server with the Clear-BitLockerAutoUnlock cmdlet. Clear BitLocker automatic unlocking keys prior to disabling BitLocker on a volume.
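For example, for a hypothetical D: data volume:

```powershell
Enable-BitLockerAutoUnlock -MountPoint "D:"    # unlock automatically at startup
Disable-BitLockerAutoUnlock -MountPoint "D:"   # stop auto-unlocking this volume
Clear-BitLockerAutoUnlock                      # remove all stored auto-unlock keys
```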
Disabling BitLocker encryption on a volume
When you want to remove the BitLocker encryption on a volume, you can disable BitLocker on that volume by using the BitLocker Drive Encryption control panel item or by using the Disable-BitLocker cmdlet. Disabling BitLocker encryption on a volume removes all key protectors on the volume and begins decrypting the data on the volume.
Configuring the Network Unlock feature
Beginning with Windows Server 2012 and Windows 8, BitLocker supports a new protector option for operating system volumes called Network Unlock. Network Unlock allows for automatic unlocking of operating system volumes on domain-joined servers and desktops that are connected over a wired corporate network.
Network Unlock requires the server or desktop to have a Dynamic Host Configuration Protocol (DHCP) driver implemented in Unified Extensible Firmware Interface (UEFI) firmware. Without Network Unlock, computers protected with TPM+PIN require a PIN to be entered whenever the computer restarts or resumes from hibernation. Therefore, enabling TPM+PIN without Network Unlock prevents remote updating with unattended distribution of software updates. With Network Unlock enabled, BitLocker-protected systems that use TPM+PIN can be remotely started or restarted without direct interaction at the console.
The Network Unlock feature requires the following:
Computers running Windows 8, Windows 8.1, Windows Server 2012, or Windows Server 2012 R2 with UEFI DHCP drivers
Windows Deployment Services (WDS) role installed on Windows Server 2012 or Windows Server 2012 R2
BitLocker Network Unlock optional feature installed on Windows Server 2012 or Windows Server 2012 R2
DHCP server
Properly configured public/private key pairing
Network Unlock Group Policy settings configured
Enabling the Windows Deployment Services (WDS) server role
If WDS is not configured on your network, install the WDS server role on a Windows Server 2012 or Windows Server 2012 R2 server. Network Unlock does not require a fully configured WDS deployment; only the WDS service needs to be running. You can install the WDS role as part of the Network Unlock feature installation or separately by using this:
Install-WindowsFeature WDS-Deployment
Confirm that the WDS service is running with this:
Get-Service WDSServer
Creating the Network Unlock certificate
On the WDS server, follow these steps to create the Network Unlock certificate if working in an environment with an existing certification authority (CA):
1. Open Certificate Manager on the WDS server using certmgr.msc.
2. Request a new personal certificate, as shown in Figure 2-31.
FIGURE 2-31 The Certificate Manager console
3. Click Next and select Active Directory Enrollment Policy; then click Next.
4. Choose the certificate template created for the Network Unlock on the domain controller and select Enroll.
5. Add a Subject Name value that clearly identifies the purpose of the certificate, such as BitLocker Network Unlock Certificate for TreyResearch domain.
6. Create the certificate and verify that the certificate appears in the Personal folder.
7. Export the public key certificate for Network Unlock using DER Encoded Binary X.509 format and do not export the private key. Export the key to a file with a .cer extension, such as BitLocker-NetworkUnlock.cer.
8. Export the public key with a private key for Network Unlock by selecting the previously created certificate and selecting All Tasks and then Export. Select Yes, Export The Private Key and save to a .pfx file.
In environments that do not have a fully functional CA, create a self-signed certificate by following these steps:
1. Create a new blank text file BitLocker-NetworkUnlock.inf.
2. Enter the following into the file using a plain text editor such as Notepad.exe:
[NewRequest]
Subject="CN=BitLocker Network Unlock certificate"
Exportable=true
RequestType=Cert
KeyLength=2048
[Extensions]
1.3.6.1.4.1.311.21.10 = "{text}"
_continue_ = "OID=1.3.6.1.4.1.311.67.1.1"
2.5.29.37 = "{text}"
_continue_= "1.3.6.1.4.1.311.67.1.1"
3. From an elevated prompt, create a new certificate with certreq.exe:
certreq -new BitLocker-NetworkUnlock.inf BitLocker-NetworkUnlock.cer
4. Verify that the certreq command created and imported the certificate by running certmgr.msc. The certificate should be in the Current User, Personal store.
5. Export the certificate to create a .pfx file.
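Step 5 can also be done from Windows PowerShell. A sketch, assuming the subject name used in the .inf file above:

```powershell
# Locate the self-signed Network Unlock certificate in the user's store
$cert = Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -match "BitLocker Network Unlock" }

# Export it, with its private key, to a password-protected .pfx file
$pw = Read-Host -AsSecureString "Password for the .pfx file"
Export-PfxCertificate -Cert $cert -Password $pw -FilePath .\BitLocker-NetworkUnlock.pfx
```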
Deploying a private key and certificate to a WDS server
To deploy the private key and certificate to the WDS server, follow these steps:
1. On the WDS server, from an elevated prompt, open a new Microsoft Management Console (MMC) with mmc.exe.
2. Add the Certificates snap-in. Select the Computer Account and Local Computer options.
3. Select BitLocker Drive Encryption Network Unlock in the console tree. Right-click and select Import from the All Tasks menu.
4. On the File To Import page, select the .pfx file you exported, as shown in Figure 2-32.
FIGURE 2-32 The Certificate Import Wizard
5. Enter the password of the .pfx file you exported and complete the wizard.
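These steps can also be scripted. A sketch, assuming the .pfx file exported earlier and the BitLocker Network Unlock certificate store (FVE_NKP) on the local machine:

```powershell
# Import the exported .pfx into the BitLocker Drive Encryption
# Network Unlock certificate store on the WDS server
$pw = Read-Host -AsSecureString "Password for the .pfx file"
Import-PfxCertificate -FilePath .\BitLocker-NetworkUnlock.pfx `
    -CertStoreLocation Cert:\LocalMachine\FVE_NKP -Password $pw
```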
Configuring Group Policy settings for Network Unlock
To configure the Group Policy settings for Network Unlock, you need to deploy the certificate to the clients that will use it. To do this, follow these steps:
1. Copy the .cer file that was created earlier to a domain controller for the domain on which you’re enabling Network Unlock.
2. Open the Group Policy Management Console (GPMC).
3. Create and edit a new GPO or modify an existing one to enable the Allow Network Unlock At Startup setting. This setting is in Computer Configuration\Policies\Administrative Templates\Windows Components\BitLocker Drive Encryption\Operating System Drives.
4. Select the Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies\BitLocker Drive Encryption Network Unlock Certificate folder in the console tree.
5. Right-click and select Add Network Unlock Certificate to add the .cer file you copied from the WDS server.
Configuring BitLocker policies
Windows Server 2012 R2 has a full set of BitLocker policies. There are policies that affect all BitLocker drives, as well as separate policies for fixed data drives, operating system drives, and removable data drives. The BitLocker policies are in the Computer Configuration\Policies\Administrative Templates\Windows Components\BitLocker Drive Encryption folder. Table 2-2 lists the general BitLocker policies.
TABLE 2-2 General BitLocker policies
The settings for fixed data drives are shown in Table 2-3.
TABLE 2-3 Fixed data drive BitLocker policies
The settings for operating system drives are shown in Table 2-4.
TABLE 2-4 Operating system drive BitLocker policies
The settings for removable data drives are shown in Table 2-5.
TABLE 2-5 Removable data drive BitLocker policies
Configuring the EFS recovery agent
Encrypting File System (EFS), introduced in Windows 2000, provides a method for users to encrypt and protect sensitive files and folders. To ensure that encrypted files can be recovered in an emergency, the Administrator account on the first domain controller in the domain is automatically designated as the recovery agent for the domain, allowing that account to access and recover encrypted files.
In addition to the default data recovery agent for a domain, you can add additional recovery agents. To add a recovery agent, follow these steps:
1. Open the GPMC and select the GPO you want to configure. For an EFS recovery agent, it is usually the Default Domain Policy.
2. Right-click the policy and select Edit to open the Group Policy Management Editor. Select Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies\Encrypting File System in the console tree, as shown in Figure 2-33.
FIGURE 2-33 The Group Policy Management Editor
3. Right-click and select Add Data Recovery Agent. Click Next and then click Browse Folders. Select the certificate for the account that will be the data recovery agent.
The account used for data recovery should not be an account that is online and available under normal circumstances. You should export the private key for the account to a .pfx file, deleting the key during the export. Then move the key to removable media and store it in a secure location.
Note: Creating a self-signed file recovery certificate
If your domain does not include a CA, you can create a self-signed certificate for use as an EFS recovery agent. To create a self-signed certificate, use the cipher.exe command. From a command prompt, logged on as the account that will be the designated recovery agent, use this:
cipher /r:<filename>
This command creates two files: a .cer file and a .pfx file. The .cer file is added to the GPO as a recovery agent, and the .pfx file should be copied to removable media and safely stored in a secure location and then deleted from the original location.
Managing EFS and BitLocker certificates, including backup and restore
It is important that you enable EFS and BitLocker recovery procedures for all encrypted data and volumes. Without a full backup of recovery information, vital information might be unavailable in an emergency. This recovery information is sensitive, however, and should be stored in secure locations and not be readily available except in an emergency. And in all cases, it should never be in the same location as the item it is protecting. (Printing out your BitLocker recovery key and then taping it to the back of your laptop is a really, really bad idea.)
Enabling AD DS storage of BitLocker recovery keys
You can enable the storage of BitLocker recovery keys in AD DS by enabling the GPO settings. There are three settings that control recovery key saving for Windows Server 2008 R2, Windows 7, Windows Server 2012, Windows 8, Windows Server 2012 R2, and Windows 8.1. These settings, which are in the Computer Configuration\Policies\Administrative Templates\Windows Components\BitLocker Drive Encryption folder, are these:
Choose How BitLocker-Protected Fixed Data Drives Can Be Recovered
Choose How BitLocker-Protected Operating System Drives Can Be Recovered
Choose How BitLocker-Protected Removable Drives Can Be Recovered
Exam Tip
When you enable BitLocker policies, create them in a GPO that applies to the computers on which you’re enabling BitLocker. While you can use the Default Domain Policy for BitLocker policies, that doesn’t allow you to save the recovery passwords for your BitLocker-protected domain controllers.
When one of these policies is set to Enabled, you have additional options (see Figure 2-34), including the following:
Allow Or Require 48-Digit Recovery Password
Allow Or Require 256-Bit Recovery Key
Save BitLocker Recovery Information To AD DS
Backup Recovery Passwords And Key Packages
FIGURE 2-34 The Choose How BitLocker-Protected Fixed Drives Can Be Recovered policy setting page
After recovery password saving to AD DS is enabled, you can save the recovery password with the Backup-BitLockerKeyProtector cmdlet. Use the following commands to back up the Recovery Password for the operating system volume:
$blC = Get-BitLockerVolume -MountPoint C:
Backup-BitLockerKeyProtector `
-MountPoint "C:" `
-KeyProtectorId $blC.KeyProtector[1].KeyProtectorId
This will back up the second key protector for the drive mounted at C. The first key protector is the TPM whenever there is a TPM present. To recover the key, search the AD DS domain by following these steps:
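If you would rather not depend on protector ordering, a sketch that selects the recovery password protector by type instead of by index:

```powershell
# Find the RecoveryPassword key protector for C: and back it up to AD DS
$rp = (Get-BitLockerVolume -MountPoint "C:").KeyProtector |
    Where-Object { $_.KeyProtectorType -eq 'RecoveryPassword' }
Backup-BitLockerKeyProtector -MountPoint "C:" -KeyProtectorId $rp.KeyProtectorId
```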
1. Open Active Directory Users And Computers.
2. Right-click the domain in the console tree and select Find BitLocker Recovery Password from the Action menu.
3. Enter the first eight characters of the Password ID and click Search, as shown in Figure 2-35.
FIGURE 2-35 The Find BitLocker Recovery Password dialog box
Saving BitLocker recovery passwords
Although saving BitLocker recovery passwords to Active Directory is an excellent way to save them securely and where they can be easily recovered, you can also do the following:
Print the recovery password
Save it to a file
Create a USB recovery key
Whatever methods you use, make sure that they are kept up to date, are secure, and are available when needed.
Saving EFS certificates
Although having an extra recovery agent for EFS is one form of backup, another important backup is to export the EFS certificates for users and back them up to secure storage. If this isn’t done, and a user’s computer needs to be rebuilt, the user could lose access to all EFS-protected files and folders on the computer. The simplest solution is to export the EFS certificate to a .pfx file, which can then be part of normal backup procedures.
You can automate the EFS certificate export with the following script:
# Find the current user's EFS certificate in the personal store
$cert = Get-ChildItem -Path Cert:\CurrentUser\My |
    Where-Object { $_.Subject -match "OU=EFS" }
# Prompt for a password and export the certificate with its private key
Write-Host "Enter the password for the .pfx file: " -NoNewline
$pfxPW = Read-Host -AsSecureString
Export-PfxCertificate -Cert $cert -Password $pfxPW -FilePath C:\MyEFScert.pfx
This script prompts the user for a password and then saves the EFS certificate for the current user in the file C:\MyEFScert.pfx, with the password typed in at the prompt.
Thought experiment: Configuring Network Unlock for BitLocker
In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.
You are the network administrator for TreyResearch.net. Company policy mandates that all computers have multifactor encryption on boot devices and data drives. You have to configure the network to enable automatic unlock of boot drives for clients and servers that are hard-wired to the corporate network.
1. What are the minimum hardware requirements to support Network Unlock?
2. What server roles are needed to support Network Unlock?
3. What Group Policy settings need to be configured to support Network Unlock and require BitLocker encryption?
Objective summary
Configure BitLocker policies to allow backup to Active Directory.
Use Windows PowerShell to back up the BitLocker Recovery Password to Active Directory.
Back up BitLocker recovery passwords to USB, files, and hard copy.
Back up EFS certificates with Export-PfxCertificate.
Enable the Network Unlock protector to allow automatic boot even with a TPM+PIN configuration.
Create a BitLocker Network Unlock certificate and use Group Policy to distribute the public key. Use WDS to distribute the private key to allow Network Unlock.
BitLocker Network Unlock certificates can be created with AD CS or by creating a self-signed certificate with certreq.exe.
Objective review
1. To which GPOs do you need to link to ensure that all BitLocker passwords can be backed up to Active Directory?
A. Default Domain Policy
B. Default Domain Controller Policy
C. Both A and B
D. A new BitLocker GPO linked to the Domain Users folder
2. What features are required and installed for the BitLocker Drive Encryption feature? (Choose all that apply.)
A. BitLocker Drive Encryption
B. Remote Server Administration Tools - BitLocker Drive Encryption Administration Utilities
C. Remote Server Administration Tools - AD DS Tools
D. File Server VSS Agent Service
E. Enhanced Storage
F. BitLocker Network Unlock
3. Company policy requires that all servers be encrypted with BitLocker on all fixed internal drives and volumes. Several existing servers do not support a TPM. You created a special OU for these servers and linked a GPO to the OU. What policy do you need to configure to enable BitLocker encryption for the servers?
A. Choose Drive Encryption Method And Cipher Strength
B. Choose How BitLocker-Protected Fixed Drives Can Be Recovered
C. Require Additional Authentication At Startup
D. Use Enhanced Boot Configuration Data Validation Profile
E. Allow Network Unlock At Startup
Objective 2.4: Configure advanced audit policies
Advanced audit policies extend the basic audit policies to provide granular auditing of events. Advanced audit policies were extended in Windows Server 2012 with Global Object Access Auditing and Dynamic Access Control (DAC) to allow for expression-based auditing, giving administrators more selective auditing of events. Also added in Windows Server 2012 is the ability to audit removable devices.
This objective covers how to:
Implement auditing using Group Policy and AuditPol.exe
Create expression-based audit policies
Create removable device audit policies
Implementing auditing using Group Policy and AuditPol.exe
You can implement advanced audit policies by configuring the Group Policy settings for the type of advanced auditing you want to enable. The advanced audit policies are grouped into 10 categories:
Account Logon
Account Management
Detailed Tracking
DS Access
Logon/Logoff
Object Access
Policy Change
Privilege Use
System
Global Object Access Auditing
Advanced auditing is located in Computer Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policy Configuration\Audit Policies. To configure advanced auditing, select a category, double-click the policy you want to configure, and set auditing on success or failure. For example, to audit logon success, select the Logon/Logoff category and double-click Audit Logon to open the Audit Logon Properties dialog box shown in Figure 2-36. Select the Configure The Following Audit Events check box; select Success, Failure, or both; and click OK to apply.
FIGURE 2-36 The Audit Logon Properties dialog box
The Logon/Logoff policy settings are straightforward success or failure settings. But other settings, such as those for Global Object Access Auditing, are more involved and are described in the following “Creating expression-based audit policies” section.
To ensure that advanced auditing isn’t overridden by basic auditing policies, set the Force Audit Policy Subcategory Settings (Windows Vista Or Later) To Override Audit Policy Category Settings policy in the Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options folder to Enabled.
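AuditPol.exe offers a command-line alternative to Group Policy for per-machine audit settings. For example, from an elevated prompt, the Audit Logon subcategory can be set and the current Logon/Logoff settings verified like this:

```shell
auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /get /category:"Logon/Logoff"
```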
Creating expression-based audit policies
Windows Server 2012 supports expression-based audit policies, which let you audit only the specific actions and users of interest. You can build expression-based audit policies for either the file system or the registry by using Global Object Access Auditing. To enable an expression-based audit of a file system folder, for example, follow these steps:
1. In the GPMC, select the GPO for which you want to enable an expression-based audit and select Edit from the context menu to open the Group Policy Management Editor.
2. Double-click File System under Global Object Access Auditing in the Advanced Audit Policy Configuration section of the Computer Configuration\Policies\Windows Settings\Security Settings folder.
3. Select Define This Policy Setting in the File System Properties dialog box and then click Configure to open the Advanced Security Settings for Global File SACL dialog box shown in Figure 2-37.
FIGURE 2-37 The Advanced Security Settings For Global File SACL dialog box
4. Click Add to open the Auditing Entry For Global SACL dialog box.
5. Click Select A Principal to open the familiar Select User, Computer, Service Account, Or Group dialog box. Add groups, computers, or users to audit and then click Next.
6. Select the Type of audit from the list.
7. Select the Permissions to audit.
8. Use the Add A Condition To Limit The Scope section to limit the scope of this audit, as shown in Figure 2-38, in which I’m building a condition that will tell me if any Domain Admins who are not also Enterprise Admins take ownership of a file.
FIGURE 2-38 The Auditing Entry For Global File SACL dialog box
9. Click OK to add the audit expression, as shown in Figure 2-39.
FIGURE 2-39 The Advanced Security Settings For Global File SACL dialog box with the auditing expression
10. Click Apply to continue adding audit entries, or click OK to complete the audit entry and complete the configuration of the expression-based audit policy.
Creating removable device audit policies
To audit the success or failure of access to removable devices, use the Audit Removable Storage setting in the Computer Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policy Configuration\Audit Policies\Object Access folder. You can audit Success (event 4663), Failure (event 4656), or both. If you enable Failure tracking, you also need to enable Failure auditing for the Audit Handle Manipulation subcategory.
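As a sketch of how to review the resulting events (assuming removable storage auditing is already in effect on the server), you can query the Security log for these event IDs with Windows PowerShell from an elevated prompt:

```powershell
# List recent removable storage audit events from the Security log
# (4663 = success, 4656 = failure)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663, 4656 } -MaxEvents 20 |
    Format-Table TimeCreated, Id, Message -AutoSize
```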
Thought experiment: Disabling and auditing removable USB drives
In this thought experiment, apply what you’ve learned about this objective. You can find answers to these questions in the “Answers” section at the end of this chapter.
You are the network administrator for TreyResearch.net. Because of past concerns and the sensitive nature of the research being conducted at Trey, the company has issued a policy that no one is to use USB flash drives on company computers. You have been asked to implement the policy. You have also been asked to audit any attempts to use USB drives, even though they are not allowed. Users will continue to be allowed to connect and use cell phones and media players, but all use of them is to be audited.
1. What settings do you need to enable to ensure that users can’t use USB disks? All policies are in the \Computer Configuration\Policies\Administrative Templates\System\Removable Storage Access folder. (Choose all that apply.)
A. Enable All Removable Storage Classes: Deny All Access
B. Disable All Removable Storage Classes: Deny All Access
C. Enable Removable Disks: Deny Execute Access
D. Enable Removable Disks: Deny Read Access
E. Enable Removable Disks: Deny Write Access
F. Disable Removable Disks: Deny Write Access
G. Disable Removable Disks: Deny Read Access
H. Disable Removable Disks: Deny Execute Access
2. What settings do you need to set to ensure that users can continue to connect their cell phones and media players? All policies are in the \Computer Configuration\Policies\Administrative Templates\System\Removable Storage Access folder. (Choose all that apply.)
A. Enable All Removable Storage Classes: Deny All Access
B. Disable All Removable Storage Classes: Deny All Access
C. Enable WPD Devices: Deny Read Access
D. Enable WPD Devices: Deny Write Access
E. Disable WPD Devices: Deny Read Access
F. Disable WPD Devices: Deny Write Access
3. What settings do you need to set to ensure that all attempts to use USB devices, including cell phones and media players, are audited for success and failure? All policies are in the Computer Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policy Configuration\Audit Policies folder. (Choose all that apply.)
A. Configure Audit Object Access Success
B. Configure Audit Object Access Failure
C. Configure Audit Handle Management Success
D. Configure Audit Handle Management Failure
E. Configure Audit File System Success
F. Configure Audit File System Failure
4. Finally, in thinking about the policy, what recommendations could you make to management to ensure that the policy accomplishes the goals described, and what concerns do you have about the specific details of the policy? How will auditing help alleviate these concerns?
Objective summary
Implement advanced audit policies in Group Policy to enable fine-grained control of auditing.
Use the Force Audit Policy Subcategory Settings (Windows Vista Or Later) To Override Audit Policy Category Settings policy to enforce advanced audit policies.
For even more specific auditing of file system and registry events, use expression-based audit policies based on DAC Global Object Access Auditing.
Use GPOs to audit removable device access or attempts. You can audit the success or failure (or both) of attempts to use removable devices.
Enabling Failure auditing of removable devices also requires enabling the Audit Handle Manipulation For Failure Events policy.
Objective review
1. You monitor changes to distribution groups, and you don’t want events from other account management activity because they would hide the specific events you’re looking for in the noise. What policy do you need to set, and what setting should it have?
A. Set the Computer Configuration\Policies\Security Settings\Local Policies\Audit Policy\Audit Account Management policy to Enabled, Audit Success.
B. Set the Computer Configuration\Policies\Security Settings\Local Policies\Audit Policy\Audit Account Management policy to Enabled, Audit Failure.
C. Set the Computer Configuration\Policies\Security Settings\Advanced Audit Policy Configuration\Audit Policies\Account Management\Audit Distribution Group Management policy to Enabled, Audit Success.
D. Set the Computer Configuration\Policies\Security Settings\Local Policies\Audit Policy\Audit Account Management policy to Enabled, Audit Failure.
2. What Group Policy setting do you need to enable in order to enable auditing of logoff events?
A. Computer Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policies\Audit Logoff
B. Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Audit Logon Events
C. User Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policies\Audit Logon
D. User Configuration\Policies\Windows Settings\Security Settings\Local Policies\Audit Logoff Events
3. What are the minimal Group Policy settings that you need to set in order to ensure that removable optical disks can be used to read data only? (Choose all that apply.)
A. Computer Configuration\Policies\Administrative Templates\System\Removable Storage Access\Removable Disks: Deny Execute Access
B. Computer Configuration\Policies\Administrative Templates\System\Removable Storage Access\Removable Disks: Deny Write Access
C. Computer Configuration\Policies\Administrative Templates\System\Removable Storage Access\CD and DVD: Deny Execute Access
D. Computer Configuration\Policies\Administrative Templates\System\Removable Storage Access\CD and DVD: Deny Write Access
E. Computer Configuration\Policies\Administrative Templates\System\Removable Storage Access\All Removable Storage Classes: Deny All Access
Answers
This section contains the solutions to the thought experiments and answers to the lesson review questions in this chapter.
Objective 2.1: Thought experiment
1. Correct answers: C, D, E. You need to move the share to an NTFS volume before you can enable DFS for it. Creating a DFS-N with the shared folder then allows you to configure the replication in a hub/spoke configuration. A mesh configuration would work, but it would create more replication traffic, given that the description implies that the data is currently stored and created on the main corporate datacenter.
2. The change in content origination lends itself to a change in replication from hub/spoke to mesh to reduce the number of hops to fully replicate.
3. A combination of preseeding the changes to each of the branches and cloning the DFS database improves the initial DFS-R time.
Objective 2.1: Review
1. Correct answer: C
A. Incorrect. This command adds a .cab or .msu package to a Windows image or, with the -online parameter, the currently running Windows.
B. Incorrect. This command enables an optional feature on Windows, but can’t be used to install a server role.
C. Correct. This command installs the two DFS roles and the management tools.
D. Incorrect. Although Add-WindowsFeature is an alias for Install-WindowsFeature, there isn’t a feature named DFS; it is actually two features: FS-DFS-N and FS-DFS-R.
2. Correct answers: A, D
A. Correct. Installs the RSAT DFS Management console along with the Windows PowerShell modules for DFS.
B. Incorrect. Enables an optional feature on Windows, but there isn’t a client feature named *DFS*.
C. Incorrect. Only partially enables what needs enabling for remote management.
D. Correct. Enables all the Windows PowerShell remoting features and also enables WinRM.
Objective 2.2: Thought experiment
1. Use the File By Owner report to identify the files owned by each user and how much space they take up. Use the Files By File Group report to tell what kinds of files are being stored on the server. Use the Large Files report to see whether there are very large files taking up excess space that might be a target for removal. The Large Files report is likely to be the least useful because substantial space is being used by the shared internal corporate resources and they are probably large files. However, you can include specific file name patterns to help narrow the scope of the report.
2. Yes. By enabling quotas with the Auto Apply Template And Create Quotas On Existing And New Subfolders option on the folder at the top of the Public share, you can enforce quotas on all users except administrators. You can then require an administrator to post any files that were to bypass the quotas, such as internal corporate resources. This is less than optimal, however, because it requires action by an administrator to add any new files to the share corporate resources if you want to avoid quota limits. A possible solution is to use soft quotas and use the Files By Owner report to identify problems.
3. By using the Files By File Group report, you can quickly identify where the problem file types are and then create a file screen to block any file types that shouldn’t be there.
4. You can get around the limitations of the file screen by using a file screen exception on the specific path for corporate resources to allow the files blocked in the previous answer and then restricting the file saving to that path to specific users. Adding a different share name with share permissions that limit who can write to the exception path can also help. Ideally, the solution involves all four of these features of FSRM.
Objective 2.2: Review
1. Correct answer: D
A. Incorrect. The (Get-Date).add(-3) portion of the command is a time only three seconds prior to the time the command is run.
B. Incorrect. The (Get-Date).add(-3) portion of the command is a time only three seconds prior to the time the command is run.
C. Incorrect. The .AddDays property is the correct property to use, and -3 is the correct number of days, but this command removes all the files newer than three days.
D. Correct. Finds all files in the path that are three days or more old at the time the command is run and removes them.
2. Correct answer: A
A. Correct. This is the least privilege that will allow the command to run.
B. Incorrect. Domain Users isn’t an option here; furthermore, it would provide substantially more privilege than Local Service if it were available.
C. Incorrect. Local System provides full access and control, far more privilege than this command needs.
D. Incorrect. Protected Users is not an option and has no special limitations on privilege that would help even if it were an option.
3. Correct answer: B
A. Incorrect. All Monitor templates are soft quotas and will warn but not actually limit.
B. Correct. This template has a hard limit of 200 MB, with warnings, but automatically extends the limit to 250 MB.
C. Incorrect. You can’t create limits on shares; you can create limits only on file system folders or disks. Also, Monitor templates are soft quotas and will warn but not actually limit.
D. Incorrect. You can’t create limits on shares; you can create limits only on file system folders or disks.
Objective 2.3: Thought experiment
1. Wired network and a TPM. Further, the computers being unlocked must support DHCP in UEFI.
2. The WDS role must be enabled on a Windows Server 2012 or Windows Server 2012 R2 server on the network (you don’t need to set up a full WDS deployment). AD DS (for Group Policy) and a working DHCP server must be present on the network. Additionally, only computers running Windows 8 or Windows 8.1, or servers running Windows Server 2012 or Windows Server 2012 R2, can participate in Network Unlock, and the private and public keys for Network Unlock must be deployed.
3. To enable Network Unlock, use the Allow Network Unlock At Startup policy, and add the public key certificate from the WDS server to the BitLocker Drive Encryption Network Unlock Certificate folder. To require BitLocker encryption of data drives, use the Deny Write Access To Fixed Drives Not Protected By BitLocker policy; and to require BitLocker on the operating system drive, use the Require Additional Authentication At Startup policy for operating system drives. Set it to Allow TPM and Allow BitLocker Without A Compatible TPM.
Objective 2.3: Review
1. Correct answer: C
A. Incorrect. This does not include the domain controllers.
B. Incorrect. This does not include computers that are not domain controllers.
C. Correct. This includes both domain controllers and nondomain controllers.
D. Incorrect. You can’t link a GPO to the Domain Users folder.
2. Correct answers: A, B, C, E
A. Correct. This is the correct name for the BitLocker feature.
B. Correct. This is automatically installed by the GUI and should be included in the command line if you use Windows PowerShell to install BitLocker. It provides the tools to manage BitLocker.
C. Correct. This is automatically installed by the GUI and is included if you use the -IncludeManagementTools parameter. It is required to configure AD DS storing of recovery passwords.
D. Incorrect. This has nothing to do with BitLocker.
E. Correct. This is automatically installed with BitLocker and is a required prerequisite.
F. Incorrect. This is not required for BitLocker, though it is an optional feature for networks that choose to set it up.
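As a sketch of how the features in answers A, B, C, and E come together, BitLocker and its management tools can be installed with a single Windows PowerShell command run in an elevated session on the server (the Enhanced Storage prerequisite is pulled in automatically; the exact feature names on your server can be confirmed with Get-WindowsFeature):

```shell
# Install the BitLocker feature plus its management tools (the RSAT utilities,
# including those used to configure AD DS storage of recovery passwords).
# -Restart reboots the server if the installation requires it.
Install-WindowsFeature BitLocker -IncludeManagementTools -Restart
```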
3. Correct answer: C
A. Incorrect. This sets the drive encryption method, but doesn’t do anything to enable a non-TPM server to use BitLocker.
B. Incorrect. This controls recovery methods.
C. Correct. This policy includes an option for enabling BitLocker without a TPM.
D. Incorrect. This sets enhanced BCD validation.
E. Incorrect. This policy actually requires a TPM to work.
Objective 2.4: Thought experiment
1. Correct answers: C, D, E. Answers A and B affect all removable drives, including backup tapes and cell phones. Answers F, G, and H explicitly allow USB drive access.
2. Correct answers: E, F. Answers A and B affect all removable drives, including backup tapes and cell phones. Answers C and D explicitly prohibit the use of cell phones or media devices.
3. Correct answers: A, B, D. Both B and D are required to audit failures, C isn’t required to audit USB drive access success, and E and F are not directly related to USB drives.
4. A policy that allows cell phones and media players raises significant concerns because both device classes provide file system access. Enabling auditing mitigates this somewhat because you’ll get an audit event whenever someone uses a USB device. A more restrictive policy that prohibits all removable devices, including Windows Portable Devices (WPDs), would be more effective in preventing data theft and malware insertion.
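As a hedged illustration of the auditing piece of this policy, the Removable Storage subcategory of the advanced audit policy can be enabled on a single server from an elevated command prompt with Auditpol.exe (Group Policy remains the preferred way to deploy this across a domain):

```shell
# Enable success and failure auditing for the Removable Storage subcategory.
auditpol /set /subcategory:"Removable Storage" /success:enable /failure:enable

# Confirm the resulting setting.
auditpol /get /subcategory:"Removable Storage"
```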
Objective 2.4: Review
1. Correct answer: C
A. Incorrect. This would capture all successful account management changes, burying the events of interest in noise.
B. Incorrect. This would capture all failed account management changes, not successful ones, and the events would likewise be buried in noise.
C. Correct. This would capture only the successful changes to distribution groups.
D. Incorrect. This would capture attempts to change distribution groups, but only if they failed.
2. Correct answer: A
A. Correct. This uses the Advanced Auditing Policy setting to enable auditing of Logoff events.
B. Incorrect. This is not an Advanced Auditing Policy setting, and it audits only the success or failure of logon.
C. Incorrect. All logon and logoff events are Computer Configuration policies, not user configuration policies.
D. Incorrect. All logon and logoff events are Computer Configuration policies, not user configuration policies.
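For reference, the Logoff subcategory described in answer A can also be enabled ad hoc on a single machine with Auditpol.exe; this is a sketch for testing, with Group Policy being the appropriate mechanism for domain-wide deployment:

```shell
# Enable success auditing for the Logoff subcategory of the Logon/Logoff category.
auditpol /set /subcategory:"Logoff" /success:enable
```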
3. Correct answers: C, D
A. Incorrect. This would deny execute access to removable hard disks, but not affect optical disks.
B. Incorrect. This would deny write access to removable hard disks, but not affect optical disks.
C. Correct. This would deny execute access to optical disks, preventing them from being used to run programs.
D. Correct. This would deny write access to optical disks, preventing their use as removable writable media.
E. Incorrect. This would deny all access to removable media of every class. It exceeds the minimal requirement of the question and also prevents the optical disks from being used to read data.