Chapter 2. Configuring storage

§ Using volumes and volume sets

§ Improving performance and fault tolerance with RAID

§ Implementing RAID on Windows Server 2012 R2

§ Managing RAID and recovering from failures

§ Standards-based storage management

§ Managing existing partitions and drives

Storage management and the ways in which Windows Server works with disks have changed substantially over the past few years. Although traditional storage management techniques relate to physical drives located inside the server, many servers today use attached storage and virtual disks.

Generally, when you work with internal fixed drives, you often need to perform advanced disk setup procedures, such as creating a volume set or setting up a redundant array of independent disks (RAID) array. Here, you create volumes or arrays that can span multiple drives and you know the exact physical layout of those drives.

However, when you work with attached storage, you might not know which actual physical disk or disks the volume you are working with resides on. Instead, you are presented with a virtual disk, also referred to as a logical unit number (LUN), which is a logical reference to a portion of the storage subsystem. Although the virtual disk can reside on one or more physical disks (spindles), the layout of the physical disks is controlled separately from the operating system (by the storage subsystem).

When I need to differentiate between the two storage management approaches, I refer to the former technique as traditional and the latter technique as standards-based. In this chapter, I look at traditional techniques for creating volume sets and arrays first, and then I look at standards-based techniques for creating volumes. Whether a volume is created by using the traditional approach or the standards-based approach, you manage it by using similar techniques. For this reason, in the final section of this chapter, I discuss techniques for working with existing volumes and drives.

REAL WORLD

Standards-based approaches to storage management can also be used with a server’s internal disks. When internal disks are used in this way, however, the internal disks, like virtual disks on attached storage, are resources to be allocated by using standards-based approaches. This means you can create virtual disk volumes on the physical disks, add the physical disks to storage pools, and create Internet SCSI (iSCSI) virtual disks that can be targeted. You can also enable data deduplication on your virtual disks. You can’t, however, use the operating system’s volume set or RAID array features, because standards-based storage management approaches rely on the storage subsystem to manage the physical disk architecture.

Using volumes and volume sets

You create volume sets and RAID arrays on dynamic drives. With a volume set, you can create a single volume that spans multiple drives. Users can access this volume as if it were a single drive, regardless of how many drives the volume is spread over. A volume that’s on a single drive is referred to as a simple volume. A volume that spans multiple drives is referred to as a spanned volume.

With a RAID array, you can protect important business data and sometimes improve the performance of drives. RAID can be implemented by using the built-in features of the operating system (a software approach) or by using hardware. Windows Server 2012 R2 supports three levels of software RAID: 0, 1, and 5. RAID arrays are implemented as mirrored, striped, and striped with parity volumes.

You create and manage volumes in much the same way in which you create and manage partitions. A volume is a drive section you can use to store data directly.

NOTE

With spanned and striped volumes on basic disks, you can delete a volume but you can’t create or extend volumes. With mirrored volumes on basic disks, you can delete, repair, and resync the mirror. You can also break the mirror. For striped with parity volumes (RAID-5) on basic disks, you can delete or repair the volume, but you can’t create new volumes.

Understanding volume basics

Disk Management color codes volumes by type, much like it does partitions. As Figure 2-1 shows, volumes also have the following properties:

§ Layout. Volume layouts include simple, spanned, mirrored, striped, and striped with parity.

§ Type. Volumes always have the type dynamic. Partitions always have the type basic.

§ File System. Like partitions, each volume can have a different file system type, such as FAT or NTFS. Note that FAT16 is available only when the partition or volume is 2 GB or less in size.

§ Status. The state of the drive. In Graphical View, the state is shown as Healthy, Failed Redundancy, and so on. The next section, Understanding volume sets, discusses volume sets and the various states you might encounter.

§ Capacity. The total storage size of the drive.

§ Free Space. The total amount of available space on the volume.

§ % Free. The percentage of free space out of the total storage size of the volume.

Figure 2-1. Disk Management displays volumes much like it does partitions.

An important advantage of dynamic volumes over basic volumes is that dynamic volumes enable you to make changes to volumes and drives without having to restart the system (in most cases). Volumes also let you take advantage of the fault-tolerance enhancements of Windows Server 2012 R2. You can install other operating systems and dual boot a Windows Server 2012 R2 system by creating a separate volume for the other operating system. For example, you could install Windows Server 2012 R2 on volume C and Windows 8.1 on volume D.

With volumes, you can do the following:

§ Assign drive letters and drive paths as discussed in Assigning drive letters and paths later in this chapter

§ Create any number of volumes on a disk as long as you have free space

§ Create volumes that span two or more disks and, if necessary, configure fault tolerance

§ Extend volumes to increase the volumes’ capacity

§ Designate active, system, and boot volumes as described in Special considerations for basic and dynamic disks in Chapter 1

Understanding volume sets

With volume sets, you can create volumes that span several drives by using free space on different drives to create what users perceive as a single volume. Files are stored on the volume set segment by segment, with the first segment of free space being used to store files before other segments. When the first segment fills up, the second segment is used, and so on.

You can create a volume set using free space on up to 32 hard disk drives. The key advantage to volume sets is that they let you tap into unused free space and create a usable file system. The key disadvantage is that if any hard disk drive in the volume set fails, the volume set can no longer be used, which means that essentially all the data on the volume set is lost.

Understanding the volume status is useful when you install new volumes or are trying to troubleshoot problems. Disk Management shows the drive status in Graphical View and Volume List view. Table 2-1 summarizes status values for dynamic volumes.

Table 2-1. Understanding and resolving volume status issues

Data Incomplete
Description: Spanned volumes on a foreign disk are incomplete. You probably haven’t added the other disks from the spanned volume set.
Resolution: Add the disks that contain the rest of the spanned volumes, and then import all the disks at one time.

Data Not Redundant
Description: Fault-tolerant volumes on a foreign disk are incomplete. You probably haven’t added the other disks from a mirror or RAID-5 set.
Resolution: Add the remaining disks, and then import all the disks at one time.

Failed
Description: An error disk status. The disk is inaccessible or damaged.
Resolution: Ensure that the related dynamic disk is online. As necessary, press and hold or right-click the volume, and then tap or click Reactivate Volume. For a basic disk, you might need to check the disk for a faulty connection.

Failed Redundancy
Description: An error disk status. One of the disks in a mirror or RAID-5 set is offline.
Resolution: Ensure that the related dynamic disk is online. If necessary, reactivate the volume. Next, you might need to replace a failed mirror or repair a failed RAID-5 volume.

Formatting
Description: A temporary status that indicates the volume is being formatted.
Resolution: The progress of the formatting is indicated as the percent complete unless you choose the Perform A Quick Format option.

Healthy
Description: The normal volume status.
Resolution: The volume doesn’t have any known problems. You don’t need to take any corrective action.

Healthy (At Risk)
Description: Windows had problems reading from or writing to the physical disk on which the dynamic volume is located. This status appears when Windows encounters errors.
Resolution: Press and hold or right-click the volume, and then tap or click Reactivate Volume. If the disk continues to have this status or has this status periodically, the disk might be failing, and you should back up all data on the disk.

Healthy (Unknown Partition)
Description: Windows does not recognize the partition. This can occur because the partition is from a different operating system or is a manufacturer-created partition used to store system files.
Resolution: No corrective action is necessary.

Initializing
Description: A temporary status that indicates the disk is being initialized.
Resolution: The drive status should change after a few seconds.

Regenerating
Description: A temporary status that indicates that data and parity for a RAID-5 volume are being regenerated.
Resolution: Progress is indicated as the percent complete. The volume should return to Healthy status.

Resynching
Description: A temporary status that indicates that a mirror set is being resynchronized.
Resolution: Progress is indicated as the percent complete. The volume should return to Healthy status.

Stale Data
Description: Data on fault-tolerant foreign disks is out of sync.
Resolution: Rescan the disks or restart the computer, and then check the status. A new status should be displayed, such as Failed Redundancy.

Unknown
Description: The volume cannot be accessed. It might have a corrupted boot sector.
Resolution: The volume might have a boot sector virus. Check it with an up-to-date antivirus program. Rescan the disks or restart the computer, and then check the status.

Creating volumes and volume sets

You can format simple volumes as exFAT, FAT, FAT32, or NTFS. To make management easier, you should format volumes that span multiple disks as NTFS, which enables you to expand the volume set if necessary. If you find you need more space on a volume, you can extend simple and spanned volumes by selecting an area of free space and adding it to the volume. You can extend a simple volume within the same disk, and you can also extend a simple volume onto other disks. When you do this, you create a spanned volume, which you must format as NTFS.

You create volumes and volume sets by following these steps:

1. In Disk Management’s Graphical View, press and hold or right-click an unallocated area, and then tap or click New Spanned Volume or New Striped Volume as appropriate. Read the Welcome page, and then tap or click Next.

2. You should get the Select Disks page, shown in Figure 2-2. Select the disks that you want to be part of the volume, and then size the volume segments on those disks.

Figure 2-2. On the Select Disks page, select disks to be a part of the volume, and then size the volume on each disk.

3. Available disks are shown in the Available list. If necessary, select a disk in this list, and then tap or click Add to add the disk to the Selected list. If you make a mistake, you can remove disks from the Selected list by selecting the disk, and then tapping or clicking Remove.

CAUTION

The disk wizards in Windows Server 2012 R2 show both basic and dynamic disks with available disk space. If you add space from a basic disk, the wizard converts the disk to a dynamic disk before creating the volume set. Before tapping or clicking Yes to continue, be sure you really want to do this because it can affect how the disk is used by the operating system.

4. Select a disk in the Selected list, and then specify the size of the volume on the disk in the Select The Amount Of Space In MB box. The Maximum Available Space In MB box shows you the largest area of free space available on the disk. The Total Volume Size In Megabytes box shows you the total disk space selected for use with the volume. Tap or click Next.

TIP

Although you can size a volume set any way you want, consider how you’ll use volume sets on the system. Simple and spanned volumes aren’t fault tolerant; rather than creating one monstrous volume with all the available free space, you might want to create several smaller volumes to help ensure that losing one volume doesn’t mean losing all your data.

5. Specify whether you want to assign a drive letter or path to the volume, and then tap or click Next. You use the available options as follows:

o Assign The Following Drive Letter. To assign a drive letter, choose this option, and then select an available drive letter in the list provided.

o Mount In The Following Empty NTFS Folder. To assign a drive path, choose this option, and then type the path to an existing folder on an NTFS drive, or tap or click Browse to search for or create a folder.

o Do Not Assign A Drive Letter Or Drive Path. To create the volume without assigning a drive letter or path, choose this option. You can assign a drive letter or path later if necessary.

6. Specify whether the volume should be formatted. If you elect to format the volume, set the following formatting options:

o File System. Specifies the file system type, such as NTFS or ReFS.

o Allocation Unit Size. Specifies the cluster size for the file system. This is the basic unit in which disk space is allocated. The default allocation unit size is based on the volume’s size and is set dynamically prior to formatting. Although you can’t change the default size if you select ReFS, you can set the allocation unit size to a specific value with other formats. If you use a lot of small files, you might want to use a smaller cluster size, such as 512 or 1,024 bytes. With these settings, small files use less disk space.

o Volume Label. Specifies a text label for the partition. This label is the partition’s volume name.

o Perform A Quick Format. Tells Windows to format without checking the partition for errors. With large partitions, this option can save you a few minutes. However, it’s more prudent to check for errors, which allows Disk Management to mark bad sectors on the disk and lock them out.

o Enable File And Folder Compression. Turns on compression for the disk; this option is available only for NTFS. Compression is transparent to users, and compressed files can be accessed just like regular files. If you select this option, files and directories on this drive are compressed automatically. For more information about compressing drives, files, and directories, see Compressing drives and data in Chapter 1.

7. Tap or click Next, and then tap or click Finish.
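If you prefer to script this procedure, DiskPart can create the same dynamic volumes from the command line. The following is only a sketch run from an elevated Windows PowerShell prompt; the disk numbers, segment size, drive letter, and label are placeholder values that you must adjust after confirming your layout with DiskPart’s list disk command.

# Placeholder values: disks 1 and 2, 10-GB segments, drive letter S.
$commands = @"
select disk 1
convert dynamic noerr
select disk 2
convert dynamic noerr
create volume simple size=10240 disk=1
extend size=10240 disk=2
assign letter=S
format fs=ntfs label="SpannedData" quick
"@
$commands | Set-Content -Path "$env:TEMP\new-spanned.txt" -Encoding ASCII
diskpart /s "$env:TEMP\new-spanned.txt"

The extend command adds a segment on the second disk to the simple volume just created, converting it into a spanned volume, which is the same result the wizard produces.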

Deleting volumes and volume sets

You use the same technique to delete all volumes, whether they’re simple, spanned, mirrored, striped, or RAID-5 (striped with parity). Deleting a volume set removes the associated file system, and all associated data is lost. Before you delete a volume set, you should back up any files and directories the volume set contains.

You can’t delete a volume that contains the system, boot, or active paging files for Windows Server 2012 R2.

To delete volumes, follow these steps:

1. In Disk Management, press and hold or right-click any volume in the set, and then tap or click Delete Volume. You can’t delete a portion of a spanned volume without deleting the entire volume.

2. Tap or click Yes to confirm that you want to delete the volume.

Managing volumes

You manage volumes much like you manage partitions. Follow the techniques outlined in Managing existing partitions and drives later in this chapter.

Improving performance and fault tolerance with RAID

You’ll often want to give important data increased protection from drive failures. To do this, you can use RAID technology to add fault tolerance to your file systems. With RAID, you increase data integrity and availability by creating redundant copies of the data. You can also use RAID to improve your disks’ performance.

Different implementations of RAID technology are available, and these implementations are described in terms of levels. Each RAID level offers different features. Windows Server 2012 R2 supports RAID levels 0, 1, and 5. You can use RAID-0 to improve the performance of your drives, and you use RAID-1 and RAID-5 to provide fault tolerance for data.

Table 2-2 provides a brief overview of the supported RAID levels. This support is completely software-based.

The most common RAID levels in use on servers running Windows Server 2012 R2 are level 1 (disk mirroring) and level 5 (disk striping with parity). With respect to upfront costs, disk mirroring is the least expensive way to increase data protection with redundancy. Here, you use two identically sized volumes on two different drives to create a redundant data set. If one of the drives fails, you can still obtain the data from the other drive.

However, disk striping with parity requires more disks—a minimum of three—but offers fault tolerance with less overhead than disk mirroring. If any of the drives fail, you can recover the data by combining blocks of data on the remaining disks with a parity record. Parity is a method of error checking that uses an exclusive OR operation to create a checksum for each block of data written to the disk. This checksum is used to recover data in case of failure.

Table 2-2. Windows Server 2012 R2 support for RAID

RAID level 0: Disk striping
Description: Two or more volumes, each on a separate drive, are configured as a striped set. Data is broken into blocks, called stripes, and then written sequentially to all drives in the striped set.
Major advantages: Speed and performance.

RAID level 1: Disk mirroring
Description: Two volumes on two drives are configured identically. Data is written to both drives. If one drive fails, no data loss occurs because the other drive contains the data. (This level doesn’t include disk striping.)
Major advantages: Redundancy. Better write performance than disk striping with parity.

RAID level 5: Disk striping with parity
Description: Uses three or more volumes, each on a separate drive, to create a striped set with parity error checking. In the case of failure, data can be recovered.
Major advantages: Fault tolerance with less overhead than mirroring. Better read performance than disk mirroring.

REAL WORLD

Although it’s true that the upfront costs for mirroring should be less than the upfront costs for disk striping with parity, the actual cost per gigabyte might be higher with disk mirroring. With disk mirroring, you have an overhead of 50 percent. For example, if you mirror two 750-gigabyte (GB) drives (a total storage space of 1,500 GB), the usable space is only 750 GB. With disk striping with parity, on the other hand, you have an overhead of around 33 percent. For example, if you create a RAID-5 set by using three 500-GB drives (a total storage space of 1,500 GB), the usable space (with one-third lost for overhead) is 1,000 GB.

Implementing RAID on Windows Server 2012 R2

Windows Server 2012 R2 supports disk mirroring, disk striping, and disk striping with parity. Implementing these RAID techniques is discussed in the sections that follow.

CAUTION

Some operating systems, such as MS-DOS, don’t support RAID. If you dual boot your system to one of these noncompliant operating systems, your RAID-configured drives will be unavailable.

Implementing RAID-0: disk striping

RAID level 0 is disk striping. With disk striping, two or more volumes—each on a separate drive—are configured as a striped set. Data written to the striped set is broken into blocks called stripes. These stripes are written sequentially to all drives in the striped set. You can place volumes for a striped set on up to 32 drives, but in most circumstances sets with 2 to 5 volumes offer the best performance improvements. Beyond this, the performance improvement decreases significantly.

The major advantage of disk striping is speed. Data can be accessed on multiple disks by using multiple drive heads, which improves performance considerably. However, this performance boost comes with a price tag. As with volume sets, if any hard disk drive in the striped set fails, the striped set can no longer be used, which essentially means that all data in the striped set is lost. You need to re-create the striped set and restore the data from backups. Data backup and recovery is discussed in Chapter 11.

CAUTION

The boot and system volumes shouldn’t be part of a striped set. Don’t use disk striping with these volumes.

When you create striped sets, you should use volumes that are approximately the same size. Disk Management bases the overall size of the striped set on the smallest volume size. Specifically, the maximum size of the striped set is a multiple of the smallest volume size. For example, if you want to create a three-volume striped set but the smallest volume is 20 GB, the maximum size for the striped set is 60 GB, even if the other two volumes are 2 terabytes (TB) each.

You can maximize performance of the striped set in a couple of ways:

§ Use disks that are on separate disk controllers. This allows the system to simultaneously access the drives.

§ Don’t use the disks containing the striped set for other purposes. This allows each disk to dedicate its time to the striped set.

You can create a striped set by following these steps:

1. In Disk Management’s Graphical View, press and hold or right-click an area marked Unallocated on a dynamic disk, and then tap or click New Striped Volume. This starts the New Striped Volume Wizard. Read the Welcome page, and then tap or click Next.

2. Create the volume as described in Creating volumes and volume sets earlier in this chapter. The key difference is that you need at least two dynamic disks to create a striped volume.

After you create a striped volume, you can use the volume as you would any other volume. You can’t extend a striped set after it’s created; therefore, you should carefully consider the setup before you implement it.
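As with spanned volumes, you can script striped set creation with DiskPart from an elevated Windows PowerShell prompt. This sketch assumes two spare disks numbered 1 and 2 and uses placeholder values for the drive letter and label; omitting a size tells DiskPart to use the largest free extent available on each disk.

$commands = @"
select disk 1
convert dynamic noerr
select disk 2
convert dynamic noerr
create volume stripe disk=1,2
assign letter=T
format fs=ntfs label="StripedData" quick
"@
$commands | Set-Content -Path "$env:TEMP\new-striped.txt" -Encoding ASCII
diskpart /s "$env:TEMP\new-striped.txt"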

Implementing RAID-1: disk mirroring

RAID level 1 is disk mirroring. With disk mirroring, you use identically sized volumes on two different drives to create a redundant data set. The drives are written with identical sets of information, and if one of the drives fails, you can still obtain the data from the other drive.

Disk mirroring offers about the same fault tolerance as disk striping with parity. Because mirrored disks don’t need to write parity information, they can offer better write performance in most circumstances. However, disk striping with parity usually offers better read performance because read operations are spread over multiple drives.

The major drawback to disk mirroring is that it effectively cuts the amount of storage space in half. For example, to mirror a 500-GB drive, you need another 500-GB drive. That means you use 1000 GB of space to store 500 GB of information.

TIP

If possible, you should mirror boot and system volumes. Mirroring these volumes ensures that you are able to boot the server in case of a single drive failure.

As with disk striping, you’ll often want the mirrored disks to be on separate disk controllers to provide increased protection against failure of the disk controller. If one of the disk controllers fails, the disk on the other controller is still available. Technically, when you use two separate disk controllers to duplicate data, you’re using a technique known as disk duplexing. Figure 2-3 shows the difference between the two techniques. Where disk mirroring typically uses a single drive controller, disk duplexing uses two drive controllers; otherwise, the two techniques are essentially the same.

Figure 2-3. Although disk mirroring typically uses a single drive controller to create a redundant data set, disk duplexing uses two drive controllers.

If one of the mirrored drives in a set fails, disk operations can continue. Here, when users read and write data, the data is written to the remaining disk. You need to break the mirror before you can fix it. To learn how, see Managing RAID and recovering from failures later in this chapter.

Creating a mirror set in Disk Management

You create a mirror set by following these steps:

1. In the Disk Management Graphical View, press and hold or right-click an area marked Unallocated on a dynamic disk, and then tap or click New Mirrored Volume. This starts the New Mirrored Volume Wizard. Read the Welcome page, and then tap or click Next.

2. Create the volume as described in Creating volumes and volume sets earlier in this chapter. The key difference when creating the mirror set is that you must create two identically sized volumes, and these volumes must be on separate dynamic drives. You won’t be able to continue past the Select Disks window until you select the two disks with which you want to work.

Like other RAID techniques, mirroring is transparent to users. Users experience the mirrored set as a single drive they can access and use like any other drive.

NOTE

The status of a normal mirror is Healthy. During the creation of a mirror, you’ll get a status of Resynching, which tells you that Disk Management is creating the mirror.

Mirroring an existing volume

Rather than create a new mirrored volume, you can use an existing volume to create a mirrored set. To do this, the volume you want to mirror must be a simple volume, and you must have an area of unallocated space on a second drive of equal or larger size than the existing volume.

In Disk Management, you mirror an existing volume by following these steps:

1. Press and hold or right-click the simple volume you want to mirror, and then tap or click Add Mirror. This displays the Add Mirror dialog box.

2. In the Disks list, shown in Figure 2-4, select a location for the mirror, and then tap or click Add Mirror. Windows Server 2012 R2 begins the mirror creation process. In Disk Management, you’ll get a status of Resynching on both volumes. The disk on which the mirrored volume is being created has a warning icon.

Figure 2-4. Select the location for the mirror.
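You can add a mirror to an existing simple volume from the command line as well. In this hedged sketch, volume E is the existing simple volume and disk 2 holds the unallocated space for the mirror; both values are placeholders, and both disks must be dynamic.

$commands = @"
select volume E
add disk=2
"@
$commands | Set-Content -Path "$env:TEMP\add-mirror.txt" -Encoding ASCII
diskpart /s "$env:TEMP\add-mirror.txt"

As in Disk Management, the volumes show a status of Resynching until the mirror finishes building.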

Implementing RAID-5: disk striping with parity

RAID level 5 is disk striping with parity. With this technique, you need a minimum of three hard disk drives to set up fault tolerance. Disk Management sizes the volumes on these drives identically.

RAID-5 distributes data and parity data sequentially across the disks in the array. Fault tolerance ensures that the failure of a single drive won’t bring down the entire drive set. Instead, the set continues to function with disk operations directed at the remaining volumes in the set.

To allow for fault tolerance, RAID-5 writes parity checksums with the blocks of data. If any of the drives in the striped set fails, you can use the parity information to recover the data. (This process, called regenerating the striped set, is covered in Managing RAID and recovering from failures later in the chapter.) If two disks fail, however, the parity information isn’t sufficient to recover the data, and you need to rebuild the striped set from backup.

Creating a striped set with parity in Disk Management

In Disk Management, you can create a striped set with parity by following these steps:

1. In Disk Management’s Graphical View, press and hold or right-click an area marked Unallocated on a dynamic disk, and then tap or click New RAID-5 Volume. This starts the New RAID-5 Volume Wizard. Read the Welcome page, and then tap or click Next.

2. Create the volume as described previously in Creating volumes and volume sets. The key difference when creating a striped set with parity is that you must select free space on three separate dynamic drives.

After you create a striped set with parity (RAID-5), users can use the set just like they would a normal drive. Keep in mind that you can’t extend a striped set with parity after you create it; therefore, you should carefully consider the setup before you implement it.

Managing RAID and recovering from failures

Managing mirrored drives and striped sets is somewhat different from managing other drive volumes, especially when it comes to recovering from failure. The techniques you use to manage RAID arrays and to recover from failure are covered in this section.

Breaking a mirrored set

You might want to break a mirror for two reasons:

§ If one of the mirrored drives in a set fails, disk operations can continue. When users read and write data, these operations use the remaining disk. At some point, however, you need to fix the mirror, and to do this you must first break the mirror, replace the failed drive, and then reestablish the mirror.

§ If you no longer want to mirror your drives, you might also want to break a mirror. This allows you to use the disk space for other purposes.

BEST PRACTICES

Although breaking a mirror doesn’t delete the data in the set, you should always back up the data before you perform this procedure to ensure that if you have problems, you can recover your data.

In Disk Management, you can break a mirrored set by following these steps:

1. Press and hold or right-click one of the volumes in the mirrored set, and then tap or click Break Mirrored Volume.

2. Confirm that you want to break the mirror by tapping or clicking Yes. If the volume is in use, you’ll get another warning dialog box. Confirm that it’s okay to continue by tapping or clicking Yes.

Windows Server 2012 R2 breaks the mirror, creating two independent volumes.
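DiskPart can break a mirror as well. In this sketch, volume E is the mirrored volume and disk 2 is the half to split off; both are placeholders. Adding nokeep removes the split-off half and returns its space to unallocated, which corresponds to Remove Mirror rather than Break Mirrored Volume, so verify which behavior you want before running it.

$commands = @"
select volume E
break disk=2
"@
$commands | Set-Content -Path "$env:TEMP\break-mirror.txt" -Encoding ASCII
diskpart /s "$env:TEMP\break-mirror.txt"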

Resynchronizing and repairing a mirrored set

Windows Server 2012 R2 automatically synchronizes mirrored volumes on dynamic drives; however, data on mirrored drives can become out of sync. For example, if one of the drives goes offline, data is written only to the drive that’s online.

You can resynchronize and repair mirrored sets, but you must rebuild the set by using disks with the same partition style—either master boot record (MBR) or GUID partition table (GPT). You need to get both drives in the mirrored set online. The mirrored set’s status should read Failed Redundancy. The corrective action you take depends on the failed volume’s status:

§ If the status is Missing or Offline, be sure that the drive has power and is connected properly. Then start Disk Management, press and hold or right-click the failed volume, and tap or click Reactivate Volume. The drive status should change to Regenerating and then to Healthy. If the volume doesn’t return to the Healthy status, press and hold or right-click the volume, and then tap or click Resynchronize Mirror.

§ If the status is Online (Errors), press and hold or right-click the failed volume, and then tap or click Reactivate Volume. The drive status should change to Regenerating and then to Healthy. If the volume doesn’t return to the Healthy status, press and hold or right-click the volume, and then tap or click Resynchronize Mirror.

§ If one of the drives shows a status of Unreadable, you might need to rescan the drives on the system by choosing Rescan Disks from Disk Management’s Action menu. If the drive status doesn’t change, you might need to reboot the computer.

§ If one of the drives still won’t come back online, press and hold or right-click the failed volume, and then tap or click Remove Mirror. Next, press and hold or right-click the remaining volume in the original mirror, and then tap or click Add Mirror. You now need to mirror the volume on an unallocated area of free space. If you don’t have free space, you need to create space by deleting other volumes or replacing the failed drive.

Repairing a mirrored system volume to enable boot

The failure of a mirrored drive might prevent your system from booting. Typically, this happens when you’re mirroring the system or boot volume, or both, and the primary mirror drive has failed. In previous versions of the Windows operating system, you often had to go through several procedures to get the system back up and running. With Windows Server 2012 R2, the failure of a primary mirror is usually much easier to resolve.

When you mirror a system volume, the operating system should add an entry to the system’s boot manager that allows you to boot to the secondary mirror. Resolving a primary mirror failure is much easier with this entry in the boot manager file than without it because all you need to do is select the entry to boot to the secondary mirror. If you mirror the boot volume and a secondary mirror entry is not created for you, you can modify the boot entries in the boot manager to create one by using the BCD Editor (Bcdedit.exe).

If a system fails to boot to the primary system volume, restart the system and select the Windows Server 2012 R2—Secondary Plex option for the operating system you want to start. The system should start up normally. After you successfully boot the system to the secondary drive, you can schedule the maintenance necessary to rebuild the mirror as described in the following steps:

1. Shut down the system, and replace the failed volume or add a hard disk drive. Then restart the system.

2. Break the mirror set, and then re-create the mirror on the drive you replaced, which is usually drive 0. Press and hold or right-click the remaining volume that was part of the original mirror, and then tap or click Add Mirror. Next, follow the technique in Mirroring an existing volume earlier in the chapter.

3. If you want the primary mirror to be on the drive you added or replaced, use Disk Management to break the mirror again. Be sure that the primary drive in the original mirror set has the drive letter that was previously assigned to the complete mirror. If it doesn’t, assign the appropriate drive letter.

4. Press and hold or right-click the original system volume, and then tap or click Add Mirror. Now re-create the mirror.

5. Check the boot entries in the boot manager and use the BCD Editor to ensure that the original system volume is used during startup.
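To confirm the boot entries without opening the full BCD store in an editor, you can list them from an elevated prompt. The identifier in the second command is a placeholder; substitute the identifier that bcdedit reports for the entry you want to boot by default.

# List the current boot entries; note the identifier of the secondary plex entry.
bcdedit /enum

# Optionally make a specific entry the default boot choice.
bcdedit /default '{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}'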

Removing a mirrored set

Using Disk Management, you can remove one of the volumes from a mirrored set. When you do this, all data on the removed mirror is deleted and the space it used is marked as Unallocated.

To remove a mirror, follow these steps:

1. In Disk Management, press and hold or right-click one of the volumes in the mirrored set, and then tap or click Remove Mirror to display the Remove Mirror dialog box.

2. In the Remove Mirror dialog box, select the disk from which to remove the mirror.

3. Confirm the action when prompted. All data on the removed mirror is deleted.

Repairing a striped set without parity

A striped set without parity doesn’t have fault tolerance. If a drive that’s part of a striped set fails, the entire striped set is unusable. Before you try to restore the striped set, you should repair or replace the failed drive. Then you need to re-create the striped set and recover the data contained on the striped set from backup.

Regenerating a striped set with parity

With RAID-5, you can recover the striped set with parity if a single drive fails. You’ll know that a striped set with parity drive has failed because the set’s status changes to Failed Redundancy and the individual volume’s status changes to Missing, Offline, or Online (Errors).

You can repair RAID-5 disks, but you must rebuild the set by using disks with the same partition style—either MBR or GPT. You need to get all drives in the RAID-5 set online. The set’s status should read Failed Redundancy. The corrective action you take depends on the failed volume’s status:

§ If the status is Missing or Offline, make sure the drive has power and is connected properly. Then start Disk Management, press and hold or right-click the failed volume, and select Reactivate Volume. The drive’s status should change to Regenerating and then to Healthy. If the drive’s status doesn’t return to Healthy, press and hold or right-click the volume and select Regenerate Parity.

§ If the status is Online (Errors), press and hold or right-click the failed volume, and select Reactivate Volume. The drive’s status should change to Regenerating and then to Healthy. If the drive’s status doesn’t return to Healthy, press and hold or right-click the volume and select Regenerate Parity.

§ If one of the drives shows as Unreadable, you might need to rescan the drives on the system by choosing Rescan Disks from Disk Management’s Action menu. If the drive status doesn’t change, you might need to reboot the computer.

§ If one of the drives still won’t come back online, you need to repair the failed region of the RAID-5 set. Press and hold or right-click the failed volume, and then select Remove Volume. You now need to select an unallocated space on a separate dynamic disk for the RAID-5 set. This space must be at least as large as the region to repair, and it can’t be on a drive that the RAID-5 set is already using. If you don’t have enough space, the Repair Volume command is unavailable, and you need to free space by deleting other volumes or by replacing the failed drive.

BEST PRACTICES

If possible, you should back up the data before you perform this procedure to ensure that if you have problems, you can recover your data.

Standards-based storage management

Standards-based storage management focuses on the storage volumes themselves rather than the underlying physical layout, relying on hardware to handle the architecture particulars for data redundancy and the portions of disks that are presented as usable disks. This means the layout of the physical disks is controlled by the storage subsystem instead of by the operating system.

Getting started with standards-based storage

With standards-based management, the physical layout of disks (spindles) is abstracted, so a “disk” can be a logical reference to a portion of a storage subsystem (a virtual disk) or an actual physical disk. This means a disk simply becomes a unit of storage and volumes can be created to allocate space within disks for file systems.

Taking this concept a few steps further, you can pool available space on disks so that units of storage (virtual disks) can be allocated from this pool on an as-needed basis. These units of storage, in turn, are apportioned with volumes to allocate space and create usable file systems.

Technically, the pooled storage is referred to as a storage pool and the virtual disks created within the pool are referred to as storage spaces. Given a set of “disks,” you can create a single storage pool by allocating all the disks to the pool or create multiple storage pools by allocating disks separately to each pool.

REAL WORLD

Trust me when I say this all sounds more complicated than it is. When you throw storage subsystems into the mix, it’s really a three-layered architecture. In Layer 1, the layout of the physical disks is controlled by the storage subsystem. The storage system likely will use some form of RAID to ensure that data is redundant and recoverable in case of failure. In Layer 2, the virtual disks created by the arrays are made available to servers. The servers simply see the disks as storage that can be allocated. Windows Server can apply software-level RAID or other redundancy approaches to help protect against failure. In Layer 3, the server creates volumes on the virtual disks, and these volumes provide the usable file systems for file and data storage.

Working with standards-based storage

When you are working with File And Storage Services, you can group available physical disks into storage pools so that you can create virtual disks from available capacity. Each virtual disk you create is a storage space. Storage Spaces are made available through the Storage Services role service, which is automatically installed on every server running Windows Server 2012 R2.

To integrate Storage Spaces with standards-based storage management frameworks, you’ll want to add the Windows Standards-Based Storage Management feature to your file servers. When a server is configured with the File And Storage Services role, the Windows Standards-Based Storage Management feature adds components and updates Server Manager with the options for working with standards-based volumes. You might also want to do the following (a scripted sketch follows this list):

§ Add the Data Deduplication role service if you want to enable data deduplication.

§ Add the iSCSI Target Server and iSCSI Target Storage Provider role services if you want the server to host iSCSI virtual disks.
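You can add these components from Windows PowerShell as well as from Server Manager. The role service names shown here are assumptions based on common Windows Server 2012 R2 feature names, so list the features first and confirm the exact names before installing.

# Confirm the exact feature and role service names on your build before installing anything.
Get-WindowsFeature -Name *Storage*, *Dedup*, *iSCSI* | Format-Table Name, DisplayName, InstallState

# Data Deduplication plus the iSCSI Target Server and iSCSI Target Storage Provider role services.
Install-WindowsFeature -Name FS-Data-Deduplication, FS-iSCSITarget-Server, iSCSITarget-VSS-VDS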

After you configure your servers as appropriate for your environment, you can select the File And Storage Services node in Server Manager to work with your storage volumes, and additional options will be available as well. The Servers subnode lists file servers that have been configured for standards-based management.

As Figure 2-5 shows, the Volumes subnode lists allocated storage on each server according to how volumes are provisioned and how much free space is available. Volumes are listed regardless of whether the underlying disks are physical or virtual. Press and hold or right-click a volume to display management options, including the following:

§ Configure Data Deduplication. Allows you to enable and configure data deduplication for NTFS volumes. If the feature is enabled, you can then also use this option to disable data deduplication.

§ Delete Volume. Allows you to delete the volume. The space that was used is then marked as unallocated on the related disk.

§ Extend Volume. Allows you to extend the volume to unallocated space of the related disk.

§ Format. Allows you to create a new file system on the volume that overwrites the existing volume.

§ Manage Drive Letter And Access Paths. Allows you to change the drive letter or access path associated with the volume.

§ New iSCSI Virtual Disk. Allows you to create a new iSCSI virtual disk that is stored on the volume.

§ New Share. Allows you to create new Server Message Block (SMB) or Network File System (NFS) shares on the volume.

§ Properties. Displays information about the volume type, file system, health, capacity, used space, and free space. You can also use this option to set the volume label.

§ Repair File System Errors. Allows you to repair errors detected during an online scan of the file system.

§ Scan File System For Errors. Allows you to perform an online scan of the file system. Although Windows attempts to repair any errors that are found, some errors can be corrected only by using a repair procedure.

Figure 2-5. Note how volumes are provisioned.

As Figure 2-6 shows, the Disks subnode lists the disks available to each server according to total capacity, unallocated space, partition style, subsystem, and bus type. Server Manager attempts to differentiate between physical disks and virtual disks by showing the virtual disk label (if one was provided) and the originating storage subsystem. Press and hold or right-click a disk to display management options, including the following:

§ Bring Online. Enables you to take an offline disk and make it available for use.

§ Take Offline. Enables you to take a disk offline so that it can no longer be used.

§ Reset Disk. Enables you to completely reset the disk, which deletes all volumes on the disk and removes all available data on the disk.

§ New Volume. Enables you to create a new volume on the disk.

Figure 2-6. Note the disks available and how much unallocated space is available.

The version of Storage Spaces that ships with Windows Server 2012 R2 is different from the version that ships with Windows Server 2012. If you have any questions about which version of Storage Spaces is being used, complete the following steps to check the version:

1. Press and hold or right-click the storage pool you want to examine, and then select Properties.

2. In the Properties dialog box, select Details in the left pane, and then choose Version on the Property selection list.

You also can use this technique to check capacity, status, logical sector size, physical sector size, provisioned space size, thin provisioning alert threshold, and total used space. By default, Storage Spaces alerts you when storage is approaching capacity and when a storage space reaches 70 percent of the total provisioned size. When you get such an alert, you should consider allocating additional storage.
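You can gather the same details from Windows PowerShell. This is only a sketch: the Version property mirrors what the Details pane exposes, the pool name is a placeholder, and it assumes the Update-StoragePool cmdlet that ships with the Windows Server 2012 R2 storage module performs the same one-way upgrade described below.

# List nonprimordial pools with their health and version.
Get-StoragePool | Where-Object IsPrimordial -eq $false |
    Select-Object FriendlyName, HealthStatus, Version

# Upgrade a pool created under Windows Server 2012 to the R2 version (one-way change).
Update-StoragePool -FriendlyName "Pool1"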

You can upgrade the version of Storage Spaces being used by a storage pool by pressing and holding or right-clicking a storage pool, and then selecting Upgrade Storage Pool Version. In addition to correcting issues that can result in error and warning states when creating and managing Storage Spaces, the Windows Server 2012 R2 version of Storage Spaces does the following:

§ Supports storage spaces that have dual parity, and parity and dual parity spaces on failover clusters. With dual parity, storage spaces are protected against two simultaneous drive failures.

§ Supports automatic rebuild of storage spaces from storage pool free space instead of having to use hot spares to recover from drive failures. Here, instead of writing a copy of data that was on a failed disk to a hot spare, the parity or mirrored data is copied to multiple drives in the pool to achieve the previous level of resiliency automatically. As a result, you don’t need to specifically allocate hot spares in storage pools, provided that a sufficient number of drives is assigned to the pool to allow for automatic resiliency recovery.

§ Supports storage tiers to automatically move frequently used files from slower physical disks to faster solid-state drive (SSD) storage. This feature is applicable only when a storage space has a combination of SSD storage and hard disk drive (HDD) storage. Additionally, the storage type must be set as fixed, the volumes created on virtual disks must be the same size as the virtual disk, and enough free space must be available to accommodate the preference. For fine-grained management, use the Set-FileStorageTier cmdlet to assign files to standard physical drive storage or faster SSD storage, as shown in the sketch after this list.

§ Supports write-back caching when a storage pool uses SSD storage. Write-back cache buffers small random writes to SSD storage before later writing the data to HDD storage. Buffering writes in this way helps to protect against data loss in the event of power failures. For write-back cache to work properly, storage spaces with simple volumes must have at least one SSD, storage spaces with two-way mirroring or single-parity must have at least two SSDs, and storage spaces with three-way mirroring or dual parity must have at least three SSDs. When these requirements are met, the volumes automatically will use a 1-GB write-back cache by default. You can designate SSDs that should be used for write-back caching by setting the usage as Journal (the default in this configuration). If enough SSDs are not configured for journaling, the write-back cache size is set to 0 (meaning write-back caching will not be used). The only exception is for parity spaces, which will then have the write-back cache size set to 32 MB.
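The following sketch shows the kind of per-file pinning the Set-FileStorageTier cmdlet performs. The tier name and file path are placeholders, and the Optimize-Volume step assumes the -TierOptimize switch added for tiered spaces; check Get-StorageTier on your server for the tiers that actually exist.

# Pin a frequently used file to the SSD tier of a tiered space (placeholder names).
$ssdTier = Get-StorageTier -FriendlyName "FastTier"
Set-FileStorageTier -FilePath "D:\Data\hotfile.vhdx" -DesiredStorageTier $ssdTier

# Apply the assignment by optimizing the tiered volume.
Optimize-Volume -DriveLetter D -TierOptimize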

If you have any question about the size of the write-back cache, complete the following steps to check the cache size:

1. Press and hold or right-click the virtual disk you want to examine, and then select Properties.

2. In the Properties dialog box, select Details in the left pane, and then choose WriteCacheSize on the Property selection list.

You also can use this technique to check allocated size, status, provisioned size, provision type, redundancy type, and more.
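From Windows PowerShell, a quick way to review these values across all virtual disks is shown below. The property names are assumptions that mirror the Details pane, so confirm them with Get-Member, and the New-VirtualDisk example assumes the -WriteCacheSize parameter available in the 2012 R2 cmdlets; all names and sizes are placeholders.

# Review cache and provisioning settings for every storage space on the server.
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, ProvisioningType, WriteCacheSize

# Request a specific write-back cache size when creating a new space.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "FastSpace" `
    -ResiliencySettingName Mirror -Size 500GB -WriteCacheSize 64MB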

Using storage pools and allocating space

In Server Manager, you can work with storage pools and allocate space by selecting the File And Storage Services node, and then selecting the related Storage Pools subnode. As Figure 2-7 shows, the Storage Pools subnode lists the available storage pools, the virtual disks created within storage pools, and the available physical disks. Keep in mind that what’s presented as physical disks might actually be LUNs (virtual disks) from a storage subsystem.

Figure 2-7. Create and manage storage pools.

Working with storage pools is a multistep process:

1. You create storage pools to pool available space on one or more disks.

2. You allocate space from this pool to create one or more virtual disks.

3. You create one or more volumes on each virtual disk to allocate storage for file systems.

The sections that follow examine procedures related to each of these steps.
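Before walking through the Server Manager procedures, it can help to see the whole flow in one place. The following Windows PowerShell sketch performs the same three steps by using the storage cmdlets; it assumes a single Storage Spaces subsystem and spare disks eligible for pooling, and the pool, space, size, and label names are placeholders.

# Step 1: pool the available (primordial) disks.
$disks = Get-PhysicalDisk -CanPool $true
$subsystem = Get-StorageSubSystem | Select-Object -First 1
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName $subsystem.FriendlyName `
    -PhysicalDisks $disks

# Step 2: allocate a virtual disk (storage space) from the pool.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
    -ResiliencySettingName Mirror -Size 200GB -ProvisioningType Thin

# Step 3: create a volume with a usable file system on the new virtual disk.
Get-VirtualDisk -FriendlyName "Space1" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Space1Data"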

Creating a storage pool

Storage pools allow you to pool available space on disks so that units of storage (virtual disks) can be allocated from this pool. To create a storage pool, you must have at least one unused disk and a storage subsystem to manage it. This storage subsystem can be the one included with the Storage Spaces feature or a subsystem associated with attached storage.

When a computer has extra hard drives in addition to the hard drive on which Windows is installed, you can allocate one or more of the additional drives to a storage pool. However, keep in mind that if you use a formatted drive with a storage pool, Windows permanently deletes all the files on that drive. Additionally, it’s important to point out that physical disks with the MBR partition style are converted automatically to the GPT partition style when you add them to a storage pool and create volumes on them.

Each physical disk allocated to the pool can be handled in one of three ways:

§ As a data store that is available for use

§ As a data store that can be manually allocated for use

§ As a hot spare in case a disk in the pool fails or is removed from the subsystem

Types of volumes you can create are as follows:

§ Simple Volumes. Creates a simple volume by writing one copy of your data to one or more drives. With simple volumes, there is no redundancy and no associated overhead. As an example, you can create a single volume that spans two 2-TB drives, making 4 TB of storage available. However, because there is no resiliency, a failure of any drive in a simple volume causes the entire volume to fail.

§ Two-way mirrors. Creates a mirrored set by writing two copies of a computer’s data, helping to protect against a single drive failure. Two-way mirrors require at least two drives. With two-way mirrors, there is a 1/2 (50 percent) overhead for redundancy with two drives. As an example, you could allocate two 2-TB drives as a two-way mirror, giving you 2 TB of mirrored storage.

§ Parity volumes. Creates a volume that uses disk striping with parity, helping to provide fault tolerance with less overhead than mirroring. Parity volumes require at least three drives. With parity volumes there is a 1/3 (33.33 percent) overhead for redundancy with three drives. As an example, you could allocate three 2-TB drives as a parity volume, giving you 4 TB of protected storage.

§ Dual parity volumes. Creates a volume that uses disk striping with two sets of parity data, helping to protect against two simultaneous drive failures while requiring less overhead than three-way mirroring. Dual parity volumes require at least seven drives.

§ Three-way mirrors. Creates a mirrored set by writing three copies of a computer’s data and by using disk striping with mirroring, helping to protect against two simultaneous drive failures. Although three-way mirrors do not have a penalty for read operations, they do have a performance penalty for write operations because of the overhead associated with having to write data to three separate disks. This overhead can be mitigated by using multiple drive controllers. Ideally, you’ll want to ensure that at least three drive controllers are used. Three-way mirrors require at least five drives.

If you are familiar with RAID 6, you are familiar with dual parity volumes. Although dual parity does not have a penalty for read operations, it does have a performance penalty for write operations because of the overhead associated with calculating and writing dual parity values. With standard dual parity volumes, the usable capacity is calculated as the number of volumes minus two, multiplied by the size of the smallest volume in the array, or (N - 2) x MinimumVolumeSize. For example, with 7 volumes and a smallest volume size of 1 TB, the usable capacity typically is 5 TB [calculated as (7 - 2) x 1 TB = 5 TB].

Although logically it would seem that you need at least six drives to have three mirrored copies of data, mathematically you need only five. Why? If you want three copies of your data, you need at least 15 logical units of storage to create those three copies. Divide 15 by 3 to come up with the number of disks required, and the answer is that you need 5 disks. Thus, Storage Spaces uses 1/3 of each disk to store original data and 2/3 of each disk to store copies of data. Following this, a three-way mirror with five volumes has a 2/3 (66.66 percent) overhead for redundancy. Or, put another way, you could allocate five 3-TB drives as a three-way mirror, giving you 5 TB of mirrored storage (and 10 TB of overhead).

With single parity volumes, data is written out horizontally with parity calculated for each row of data. Dual parity differs from single parity in that row data is not only stored horizontally, it is also stored diagonally. If a single disk fails or a read error from a bad bit or block error occurs, the data is re-created by using only the horizontal row parity data (just as in single parity volumes). In the case of a multiple drive issue, the horizontal and diagonal row data are used for recovery.

To understand how dual parity typically works, consider the following simplified example. Each horizontal row of data has a parity value, the sum of which is stored on the parity disk for that row (and calculated by using an exclusive OR). Each horizontal parity stripe misses one and only one disk. If the parity value is 2, 3, 1, and 4 on disks 0, 1, 2, and 3 respectively, the parity sum stored on disk 4 (the parity disk for this row) is 10 (2 + 3 + 1 + 4 = 10). If disk 0 were to have a problem, the parity value for the row on this disk could be restored by subtracting the remaining horizontal values from the horizontal parity sum (10 – 3 – 1 – 4 = 2).

The second set of parity data is written diagonally (meaning in different data rows on different disks). Each diagonal row of data has a diagonal parity value, the sum of which is stored on the diagonal parity disk for that row (and calculated by using an exclusive OR). Each diagonal parity stripe misses two disks: one disk in which the diagonal parity sum is stored and one disk that is omitted from the diagonal parity striping. Additionally, the diagonal parity sum includes a data row from the horizontal row parity as part of its diagonal parity sum.

If the diagonal parity value is 1, 4, 3, and 7 on disks 1, 2, 3, and 4 respectively (with four associated horizontal rows), the diagonal parity sum stored on disk 5 (the diagonal parity disk for this row) is 15 (4 + 1 + 3 + 7 = 15) and the omitted disk is disk 0. If disk 2 and disk 4 were to have a problem, the diagonal parity value for the row can be used to restore both of the lost values. The missing diagonal value is restored first by subtracting the remaining diagonal values from the diagonal parity sum. The missing horizontal value is restored next by subtracting the remaining horizontal values for the subject row from the horizontal parity sum for that row.

NOTE

Keep in mind that dual parity, as implemented in Storage Spaces, uses seven disks, and the previous example was simplified. Although parity striping with seven disks works differently than as discussed in this example, the basic approach uses horizontal and diagonal stripes.
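The arithmetic sums in the preceding example are a simplification; real parity is computed with an exclusive OR (XOR), which has the useful property that any single missing block can be rebuilt by XOR-ing the surviving blocks with the parity block. The short sketch below illustrates the idea with arbitrary sample values.

# Four data blocks on disks 0 through 3 (arbitrary sample values).
$blocks = 0x2, 0x3, 0x1, 0x4

# Parity block stored on the fifth disk: the XOR of all data blocks.
$parity = 0
foreach ($b in $blocks) { $parity = $parity -bxor $b }

# If disk 0 fails, rebuild its block by XOR-ing the parity with the surviving blocks.
$rebuilt = $parity
foreach ($b in $blocks[1..3]) { $rebuilt = $rebuilt -bxor $b }

$rebuilt -eq $blocks[0]    # True: the lost block has been recovered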

You can create a storage pool by completing the following steps:

1. In Server Manager, select the File And Storage Services node, and then select the related Storage Pools subnode.

2. Select Tasks in the Storage Pools panel, and then select New Storage Pool. This starts the New Storage Pool Wizard. If the wizard displays the Before You Begin page, read the Welcome text, and then click Next.

3. On the Specify A Storage Pool Name And Subsystem page, enter a name and description of the storage pool, and then select the primordial pool with which you want to work. (A primordial pool is simply a group of disks managed by and available to a specific server via a storage subsystem.) Click Next.

TIP

Select the primordial pool for the server you want to associate the pool with and allocate storage for. For example, if you are configuring storage for CorpServer38, select the primordial pool that is available to CorpServer38.

4. On the Select Physical Disks For The Storage Pool page, select the unused physical disks that should be part of the storage pool, and then specify the type of allocation for each disk. A storage pool must have more than one disk to use the mirroring and parity features available to protect data in case of error or failure. When setting the Allocation value, keep the following in mind:

o Choose Automatic to allocate the disk to the pool and make it available for use as needed.

o Choose Manual to allocate the disk to the pool but not allow it to be used until it is manually allocated.

o Choose Hot Spare to allocate the disk to the pool as a spare disk that is made available for use if another disk in the pool fails or is removed from the subsystem.

5. When you are ready to continue, click Next. After you confirm your selections, click Create. The wizard tracks the progress of the pool creation. When the wizard finishes creating the pool, the View Results page will be updated to reflect this. Review the details to ensure that all phases were completed successfully, and then click Close.

If any portion of the configuration failed, note the reason for the failure and take corrective actions as appropriate before repeating this procedure.

§ If one of the physical disks is currently formatted with a volume, you’ll get the following error:

Could not create storage pool. One of the physical disks specified is not supported by this operation.

This error occurs because physical disks that you want to add to a storage pool cannot contain existing volumes. To resolve the problem, you’ll need to repeat the procedure and select a different physical disk, or remove any existing volumes on the physical disk, repeat the procedure, and then select the disk again. Keep in mind that deleting a volume permanently erases all data it contains.

§ If one of the physical disks is unavailable after being selected, you’ll get the error:

Could not create storage pool. One or more parameter values passed to the method were invalid.

This error occurs because a physical disk that was available when you started the New Storage Pool Wizard has become unavailable or is offline. To resolve the problem, you’ll need to either repeat the procedure and select a different physical disk, or bring the physical disk online or otherwise make it available for use, repeat the procedure, and then select the disk again.

NOTE

External storage can become unavailable for a variety of reasons. For example, an external connected cable might have been disconnected or a LUN previously allocated to the server might have been reallocated by a storage administrator.
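If you prefer to script pool creation, you can use Windows PowerShell instead of the wizard. The following is a minimal sketch; the pool name is a placeholder, and the wildcard assumes the default Storage Spaces subsystem on the local server:

# List disks that are eligible to be pooled
Get-PhysicalDisk -CanPool $true
# Create a pool from all eligible disks (the pool name is an example)
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)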

Creating a virtual disk in a storage space

After you create a storage pool, you can allocate space from the pool to virtual disks that are available to your servers. Each physical disk allocated to the pool can be handled in one of three ways:

§ As a data store that is available for use

§ As a data store that can be manually allocated for use

§ As a hot spare in case a disk in the pool fails or is removed from the subsystem

When a storage pool has a single disk, your only option for allocating space on that disk is to create virtual disks with a simple layout. A simple layout does not protect against disk failure. If a storage pool has multiple disks, you have these additional layout options:

§ Mirror. With a mirror layout, data is duplicated on disks by using a mirroring technique similar to what I discussed previously in this chapter. However, the mirroring technique is more sophisticated in that data is mirrored onto two or three disks at a time. Like standard mirroring, this approach has its advantages and disadvantages. If a storage space has two or three disks, you are fully protected against a single disk failure, and if a storage space has five or more disks, you are fully protected against two simultaneous disk failures. The disadvantage is that mirroring reduces usable capacity: by 50 percent for a two-way mirror and by about 66 percent for a three-way mirror. For example, if you mirror two 1-TB disks, the usable space is 1 TB.

§ Parity. With a parity layout, data and parity information are striped across physical disks by using a striping-with-parity technique similar to what I discussed previously in this chapter. Like standard striping with parity, this approach has its advantages and disadvantages. You need at least three disks to fully protect yourself against a single disk failure. You lose some capacity to the striping, but not as much as with mirroring.

You can create a virtual disk in a storage pool by completing the following steps:

1. In Server Manager, select the File And Storage Services node, and then select the related Storage Pools subnode.

2. Select Tasks in the Virtual Disks panel, and then select New Virtual Disk. This starts the New Virtual Disk Wizard.

3. On the Storage Pool page, select the storage pool in which you want to create the virtual disk, and then click Next. Each available storage pool is listed according to the server it is managed by and available to. Make sure the pool has enough free space to create the virtual disk.

TIP

Select the storage pool for the server you want to associate the virtual disk with and allocate storage from. For example, if you are configuring storage for CorpServer38, select a storage pool that is available to CorpServer38.

4. On the Specify The Virtual Disk Name page, enter a name and description for the virtual disk. If you are using a combination of SSD storage and HDD storage, use the check box provided to specify whether you want to create storage tiers. With storage tiers, the most frequently accessed files are automatically moved from slower HDD to faster SSD storage. This option is not applicable when the server has only HDD or SSD storage. To continue, click Next.

5. On the Select The Storage Layout page, select the storage layout as appropriate for your reliability and redundancy requirements. The simple layout is the only option for storage pools that contain a single disk. If the underlying storage pool has multiple disks, you can choose a simple layout, a mirror layout, or a parity layout. Click Next.

REAL WORLD

If there aren’t enough available disks to implement the storage layout, you’ll get the error: The storage pool does not contain enough physical disks to support the selected storage layout. Select a different layout or repeat this procedure and select a different storage pool to work with initially.

Keep in mind the storage pool might have one or more disks allocated as hot spares. Hot spares are made available automatically to recover from disk failure when you use mirroring or parity volumes—and cannot otherwise be used. To force Windows to use a hot spare, you can remove the hot spare from the storage pool by pressing and holding or right-clicking it and selecting Remove, and then adding the drive back to the storage pool as an automatically allocated disk by pressing and holding or right-clicking the storage pool and selecting Add Physical Drive.

Unfortunately, doing so might cause a storage pool created with a hot spare to report that it is in an Unhealthy state. If you subsequently try to add the drive again in any capacity, you’ll get an error stating “Error adding task: The storage pool could not complete the operation because its configuration is read-only.” The storage pool is not, in fact, in a read-only state. If the storage pool were in a read-only state, you could enter the following command at an elevated Windows PowerShell prompt to clear this state:

Get-StoragePool "PoolName" | Set-StoragePool -IsReadOnly $false

However, entering this command likely will not resolve the problem. To clear this error, I needed to reset Storage Spaces and the related subsystem. You might find it easier to simply restart the server. After you reset or restart the server, the storage pool will transition from an error state (where a red circle with an ‘x’ is showing) to a warning state (where a yellow triangle with an ‘!’ is showing). You can then remove the physical disk from the storage pool by pressing and holding or right-clicking it and selecting Remove. Afterward, you will be able to add the physical disk as an automatically-allocated disk by pressing and holding or right-clicking the storage pool and selecting Add Physical Drive.

6. On the Specify The Provisioning Type page, select the provisioning type. Storage can be provisioned in a thin disk or a fixed disk. With thin-disk provisioning, the volume uses space from the storage pool as needed, up to the volume size. With fixed provisioning, the volume has a fixed size and uses space from the storage pool equal to the volume size. Click Next.

7. On the Specify The Size Of The Virtual Disk page, use the options provided to set the size of the virtual disk. With fixed provisioning, selecting Maximum Size ensures that the disk is created and sized with the maximum space possible given the available space. For example, if you use a 2-TB disk and a 1.5-TB disk with a mirrored layout, a 1.5-TB fixed disk will be created because this is the maximum mirrored size possible.

8. When you are ready to continue, click Next. After you confirm your selections, click Create. The wizard tracks the progress of the disk creation. When the wizard finishes creating the disk, the View Results page will be updated to reflect this. Review the details to ensure that all phases were completed successfully. If any portion of the configuration failed, note the reason for the failure and take corrective actions as appropriate before repeating this procedure.

9. When you click Close, the New Volume Wizard should start automatically. Use the wizard to create a volume on the disk as discussed in the following section.
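You can also create the virtual disk from Windows PowerShell. The following is a minimal sketch, assuming a pool named Pool1 and example values for the layout, provisioning type, and size:

# Create a 500-GB, thinly provisioned, mirrored virtual disk in the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data1" -ResiliencySettingName Mirror -ProvisioningType Thin -Size 500GB
# Or let Windows size a fixed parity disk to the maximum the pool allows
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data2" -ResiliencySettingName Parity -ProvisioningType Fixed -UseMaximumSize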

Creating a standard volume

Standard volumes can be created on any physical or virtual disk available. You use the same technique regardless of how the disk is presented to the server. This allows you to create standard volumes on a server’s internal disks, on virtual disks in a storage subsystem available to a server, and on virtual iSCSI disks available to a server. If you add the data deduplication feature to a server, you can enable data deduplication for standard volumes created for that server.

You can create a standard volume by completing the following steps:

1. Start the New Volume Wizard. If you just created a storage space, the New Volume Wizard might start automatically. If it did not, do one of the following:

o On the Disks subnode, all available disks are listed in the Disks panel. Select the disk with which you want to work, and then under Tasks, select New Volume.

o On the Storage Pools subnode, all available virtual disks are listed in the Virtual Disks panel. Select the disk with which you want to work, and then under Tasks, select New Volume.

2. On the Select The Server And Disk page, select the server for which you are provisioning storage, select the disk where the volume should be created, and then click Next. If you just created a storage space and the New Volume Wizard started automatically, the related server and disk are selected automatically and you simply need to click Next.

3. On the Specify The Size Of The Volume page, use the options provided to set the volume size. By default, the volume size is set to the maximum available on the related disk. Click Next.

4. On the Assign To A Drive Letter Or Folder page, specify whether you want to assign a drive letter or path to the volume, and then click Next. You use these options as follows:

o Drive Letter. To assign a drive letter, choose this option, and then select an available drive letter in the list provided.

o The Following Folder. To assign a drive path, choose this option, and then enter the path to an existing folder on an NTFS drive, or select Browse to search for or create a folder.

o Don’t Assign To A Drive Letter Or Drive Path. To create the volume without assigning a drive letter or path, choose this option. You can assign a drive letter or path later if necessary.

5. On the Select File System Settings page, specify how the volume should be formatted by using the following options:

o File System. Sets the file system type, such as NTFS or ReFS.

o Allocation Unit Size. Sets the cluster size for the file system. This is the basic unit in which disk space is allocated. The default allocation unit size is based on the volume’s size and is set dynamically prior to formatting. To override this feature, you can set the allocation unit size to a specific value.

o Volume Label. Sets a text label for the partition. This label is the partition’s volume name.

6. If you elected to create an NTFS volume and added data deduplication to the server, you can enable and configure data deduplication. When you are ready to continue, click Next.

7. After you confirm your selections, click Create. The wizard tracks the progress of the volume creation. When the wizard finishes creating the volume, the View Results page will be updated to reflect this. Review the details to ensure that all phases were completed successfully. If any portion of the configuration failed, note the reason for the failure and take corrective actions as appropriate before repeating this procedure.

8. Click Close.
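You can also provision a standard volume from Windows PowerShell. The following is a minimal sketch that initializes a raw disk, creates a partition that uses all available space, and formats it; the disk number and volume label are placeholders:

# Prepare a new, raw disk (the disk number is an example)
Initialize-Disk -Number 3 -PartitionStyle GPT
# Create a partition that uses all available space and format it as NTFS
New-Partition -DiskNumber 3 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "AppData"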

REAL WORLD

In the Registry under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem, the NtfsDisableLastAccessUpdate and RefsDisableLastAccessUpdate values control whether NTFS and ReFS update the last-access time stamp on each directory when it lists directories on a volume. If you notice that a busy server with a large number of directories isn’t very responsive when you list directories, this could be because the file system log buffer in physical memory is getting filled with time stamp update records. To prevent this, you can set the value to 1. When the value is set to 1, the file system does not update the last-access time stamp, and it does not record time stamp updates in the file system log. Otherwise, when the value is set to 0 (the default), the file system updates the last-access time stamp on each directory it detects, and it records each time change in the file system log.
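If you decide to change this behavior, you can set the value from an elevated Windows PowerShell prompt rather than editing the registry by hand. A minimal sketch for the NTFS value (the ReFS value can be set the same way):

# Disable last-access time stamp updates for NTFS
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name NtfsDisableLastAccessUpdate -Value 1
# Confirm the change
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name NtfsDisableLastAccessUpdate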

Troubleshooting storage spaces

Typical problems creating storage spaces and allocating storage were discussed previously. You also might find that a physical disk that should be available for use isn’t available. With the Storage Pools node selected in Server Manager, you can add a physical disk that has been detected but isn’t listed as available by selecting Tasks on the Physical Disks panel, and then selecting Add Physical Disk. Next, in the Add Physical Disk dialog box, select the physical disk, and then click OK. Alternatively, if the physical disk has not been detected by the storage system, select Tasks on the Storage Pools panel, and then select Rescan Storage.

Other problems you might experience with storage spaces relate to drive failures and a loss of resiliency. When a storage space uses two-way mirroring, three-way mirroring, parity, or dual parity, you can recover resiliency by reconnecting a disconnected drive or replacing a failed drive. When a storage space uses a simple volume and drives were disconnected, you can recover the volume by reconnecting the drives.

If there is a problem with storage spaces, Action Center updates the related notification panel in the desktop notification area with a message stating “Check Storage Spaces for issues.” Selecting the notification icon for Action Center displays the related notifications. To open Server Manager, select the notification, and then select the link provided. In Server Manager, you’ll need to select the File And Storage Services node, and then select Storage Pools to get the relevant error and warning icons.

To view errors and warnings for storage pools, press and hold or right-click the storage pool with the error or warning icon, and then select Properties. In the Properties dialog box, select Health in the left pane to display the health status and operational status in the main pane. For example, you might find that the health status is listed as Warning and the operation status is listed as Degraded. Degraded is a status you’ll get when there is a loss of redundancy.

To view errors and warnings for virtual disks and their associated physical disks, press and hold or right-click the virtual disk with the error or warning icon, and then select Properties. In the Properties dialog box, select Health in the left pane to display the health status and operational status in the main pane. Note the storage layout and the physical disks in use as well. If there is a problem with a physical disk, such as a loss of communication, this status will be displayed. You’ll get a Loss of Communication status when a physical disk is missing, failed, or disconnected.

When storage spaces use external drives, a missing drive is a common problem you might encounter. In this case, users can continue to work, and redundancy will be restored when you reconnect the drive. However, if a drive failed, you’d need to complete the following steps to restore redundancy:

1. Physically remove the failed drive. If the drive is connected internally, you’ll need to shut down and unplug the computer before you can remove the drive; otherwise, simply disconnect an externally connected drive.

2. Physically add or connect a replacement drive. Next, add the drive to the storage space by doing the following:

a. On the Storage Spaces panel, press and hold or right-click the storage space you want to configure, and then select Add Physical Drive.

b. In the Add Physical Disk dialog box, select the drive that should be allocated to the storage pool.

c. When you click OK, Windows Server will prepare the drive and allocate it to the storage pool.

3. At this point, the failed drive should be listed as “Retired.” Remove the failed drive from the storage space by selecting the related Remove Disk option, and then confirm that you want to remove the drive by selecting Yes when prompted.

Windows Server restores redundancy by copying data as necessary to the new disk. During this process, the status of the storage space ordinarily is listed as “Repairing.” A value depicting how much of the repair task is completed is also shown. When this value reaches 100 percent, the repair is complete.
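These steps can also be performed from Windows PowerShell. The following is a minimal sketch; the pool, physical disk, and virtual disk names are placeholders for your own values:

# Identify disks that are no longer healthy
Get-PhysicalDisk | Where-Object OperationalStatus -ne "OK"
# Add the replacement disk to the pool, and then retire the failed disk
Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Remove-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk4")
# Trigger the repair that restores redundancy onto the new disk
Repair-VirtualDisk -FriendlyName "Data1"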

Managing existing partitions and drives

Disk Management provides many features to manage existing partitions and drives. Use these features to assign drive letters, delete partitions, set the active partition, and more. In addition, Windows Server 2012 R2 provides other utilities to carry out common tasks such as converting a volume to NTFS, checking a drive for errors, and cleaning up unused disk space.

NOTE

Windows Vista and all later releases of Windows support hot-pluggable media that use NTFS volumes. This new feature enables you to format USB flash devices and other similar media with NTFS. There are also enhancements to prevent data loss when ejecting NTFS-formatted removable media.

Assigning drive letters and paths

You can assign drives one drive letter and one or more drive paths, provided that the drive paths are mounted on NTFS drives. Drives don’t have to be assigned a drive letter or path. A drive with no designators is considered to be unmounted, and you can mount it by assigning a drive letter or path at a later date. You need to unmount a drive before moving it to another computer.

Windows cannot modify the drive letter of system, boot, or page-file volumes. To change the drive letter of a system or boot volume, you need to edit the registry as described in Microsoft Knowledge Base article 223188 (support.microsoft.com/kb/223188/). Before you can change the drive letter of a page-file volume, you might need to move the page file to a different volume.

To manage drive letters and paths, press and hold or right-click the drive you want to configure in Disk Management, and then tap or click Change Drive Letter And Paths to open the dialog box (shown in Figure 2-8). You can now do the following:

§ Add a drive path. Tap or click Add, select Mount In The Following Empty NTFS Folder, and then type the path to an existing folder, or tap or click Browse to search for or create a folder.

§ Remove a drive path. Select the drive path to remove, tap or click Remove, and then tap or click Yes.

§ Assign a drive letter. Tap or click Add, select Assign The Following Drive Letter, and then choose an available letter to assign to the drive.

§ Change the drive letter. Select the current drive letter, and then tap or click Change. Select Assign The Following Drive Letter, and then choose a different letter to assign to the drive.

§ Remove a drive letter. Select the current drive letter, tap or click Remove, and then tap or click Yes.

NOTE

If you try to change the letter of a drive that’s in use, Windows Server 2012 R2 displays a warning. You need to exit programs that are using the drive and try again, or allow Disk Management to force the change by tapping or clicking Yes when prompted.

Figure 2-8. You can change the drive letter and path assignment in the Change Drive Letter And Paths dialog box.
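Drive letters and paths can also be managed from Windows PowerShell. The following is a minimal sketch; the drive letters, disk and partition numbers, and folder path are placeholders:

# Change the drive letter from E to F
Set-Partition -DriveLetter E -NewDriveLetter F
# Mount a partition in an empty folder on an NTFS drive (the folder must already exist)
Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath "C:\Mounts\Data"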

Changing or deleting the volume label

The volume label is a text descriptor for a drive. With FAT, the volume label can be up to 11 characters and can include spaces. With NTFS, the volume label can be up to 32 characters. Additionally, although FAT doesn’t allow you to use some special characters—including * / \ [ ] : ; | = , . + “ ? < >—NTFS does allow you to use these special characters.

Because the volume label is displayed when the drive is accessed in various Windows Server 2012 R2 utilities, including File Explorer, it can provide information about a drive’s contents. You can change or delete a volume label by using Disk Management or File Explorer.

Using Disk Management, you can change or delete a label by following these steps:

1. Press and hold or right-click the partition, and then tap or click Properties.

2. On the General tab of the Properties dialog box, enter a new label for the volume in the Label text box or delete the existing label. Tap or click OK.

Using File Explorer, you can change or delete a label by following these steps:

1. Press and hold or right-click the drive icon, and then tap or click Properties.

2. On the General tab of the Properties dialog box, enter a new label for the volume in the Label text box or delete the existing label. Tap or click OK.
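You can also set the label from Windows PowerShell. A minimal sketch, assuming the volume is drive E and the label is an example:

# Set the volume label for drive E
Set-Volume -DriveLetter E -NewFileSystemLabel "CorpData"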

Deleting partitions and drives

To change the configuration of a drive that’s fully allocated, you might need to delete existing partitions and logical drives. Deleting a partition or a drive removes the associated file system, and all data in the file system is lost. Before you delete a partition or a drive, you should back up any files and directories that the partition or drive contains.

NOTE

To protect the integrity of the system, you can’t delete the system or boot partition. However, Windows Server 2012 R2 does let you delete the active partition or volume if it is not designated as boot or system. Always check to be sure that the partition or volume you are deleting doesn’t contain important data or files.

You can delete a primary partition, volume, or logical drive by following these steps:

1. In Disk Management, press and hold or right-click the partition, volume, or drive you want to delete, and then tap or click Explore. Using File Explorer, move all the data to another volume or verify an existing backup to ensure that the data was properly saved.

2. In Disk Management, press and hold or right-click the partition, volume, or drive again, and then tap or click Delete Partition, Delete Volume, or Delete Logical Drive as appropriate.

3. Confirm that you want to delete the selected item by tapping or clicking Yes.

The steps for deleting an extended partition differ slightly from those for deleting a primary partition or a logical drive. To delete an extended partition, follow these steps:

1. Delete all the logical drives on the partition following the steps listed in the previous procedure.

2. Select the extended partition area itself and delete it.
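If you prefer the command line, partitions can be removed with Windows PowerShell as well. A minimal sketch, assuming the volume to delete is drive X; as with Disk Management, this permanently erases all data on the partition:

# Deletes the partition that holds drive X (you are prompted to confirm)
Remove-Partition -DriveLetter X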

Converting a volume to NTFS

Windows Server 2012 R2 provides a utility for converting FAT volumes to NTFS. This utility, Convert (Convert.exe), is located in the %SystemRoot% folder. When you convert a volume by using this tool, the file and directory structure is preserved and no data is lost. Keep in mind, however, that Windows Server 2012 R2 doesn’t provide a utility for converting NTFS to FAT. The only way to go from NTFS to FAT is to delete the partition by following the steps listed in the previous section, and then re-create the partition as a FAT volume.

Understanding the Convert utility syntax

Convert is run at the command prompt. If you want to convert a drive, use the following syntax:

convert volume /FS:NTFS

Here volume is the drive letter followed by a colon, drive path, or volume name. For example, if you want to convert the D drive to NTFS, use the following command:

convert D: /FS:NTFS

If the volume has a label, you are prompted to enter the volume label for the drive, but you are not prompted if the disk doesn’t have a label.

The complete syntax for Convert is shown here:

convert volume /FS:NTFS [/V] [/X] [/CvtArea:filename] [/NoSecurity]

The options and switches for Convert are used as follows:

volume

Sets the volume with which to work

/FS:NTFS

Converts to NTFS

/V

Sets verbose mode

/X

Forces the volume to dismount before the conversion (if necessary)

/CvtArea:filename

Sets the name of a contiguous file in the root directory to be a placeholder for NTFS system files

/NoSecurity

Removes all security attributes, and makes all files and directories accessible to the group Everyone

The following sample statement uses Convert:

convert C: /FS:NTFS /V

Using the Convert utility

Before you use the Convert utility, determine whether the partition is being used as the active boot partition or a system partition containing the operating system. You can convert the active boot partition to NTFS. Doing so requires that the system gain exclusive access to this partition, which can be obtained only during startup. Thus, if you try to convert the active boot partition to NTFS, Windows Server 2012 R2 displays a prompt asking if you want to schedule the drive to be converted the next time the system starts. If you tap or click Yes, you can restart the system to begin the conversion process.

TIP

Often, you will need to restart a system several times to completely convert the active boot partition. Don’t panic. Let the system proceed with the conversion.

Before the Convert utility actually converts a drive to NTFS, the utility checks whether the drive has enough free space to perform the conversion. Generally, Convert needs a block of free space that’s roughly equal to 25 percent of the total space used on the drive. For example, if the drive stores 200 GB of data, Convert needs about 50 GB of free space. If the drive doesn’t have enough free space, Convert aborts and tells you that you need to free up some space. On the other hand, if the drive has enough free space, Convert initiates the conversion. Be patient. The conversion process takes several minutes (longer for large drives). Don’t access files or applications on the drive while the conversion is in progress.

You can use the /CvtArea option to improve performance on the volume so that space for the master file table (MFT) is reserved. This option helps to prevent fragmentation of the MFT. How? Over time, the MFT might grow larger than the space allocated to it. The operating system must then expand the MFT into other areas of the disk. Although the Optimize Drives utility can defragment the MFT, it cannot move the first section of the MFT, and it is very unlikely that there will be space after the MFT because this will be filled by file data.

To help prevent fragmentation in some cases, you might want to reserve more space than the default (12.5 percent of the partition or volume size). For example, you might want to increase the MFT size if the volume will have many small or average-size files rather than a few large files. To specify the amount of space to reserve, you can use FSUtil to create a placeholder file equal in size to that of the MFT you want to create. You can then convert the volume to NTFS and specify the name of the placeholder file to use with the /CvtArea option.

In the following example, you use FSUtil to create a 1.5-GB (1,500,000,000 bytes) placeholder file named Temp.txt:

fsutil file createnew c:\temp.txt 1500000000

To use this placeholder file for the MFT when converting drive C to NTFS, you would then type the following command:

convert c: /fs:ntfs /cvtarea:temp.txt

Notice that the placeholder file is created on the partition or volume that is being converted. During the conversion process, the file is overwritten with NTFS metadata and any unused space in the file is reserved for future use by the MFT.

Resizing partitions and volumes

Windows Server 2012 R2 doesn’t use Ntldr and Boot.ini to load the operating system. Instead, Windows Server 2012 R2 has a preboot environment in which Windows Boot Manager is used to control startup and load the boot application you selected. Windows Boot Manager also finally frees the Windows operating system from its reliance on MS-DOS so that you can use drives in new ways. With Windows Server 2012 R2, you can extend and shrink both basic and dynamic disks. You can use Disk Management, DiskPart, or Windows PowerShell to extend and shrink volumes. You cannot shrink or extend striped, mirrored, or striped-with-parity volumes.

In extending a volume, you convert areas of unallocated space and add them to the existing volume. For spanned volumes on dynamic disks, the space can come from any available dynamic disk, not only from those on which the volume was originally created. Thus, you can combine areas of free space on multiple dynamic disks and use those areas to increase the size of an existing volume.

CAUTION

Before you try to extend a volume, be aware of several limitations. First, you can extend simple and spanned volumes only if they are formatted and the file system is NTFS. You can’t extend striped volumes, volumes that aren’t formatted, or volumes that are formatted with FAT. Additionally, you can’t extend a system or boot volume, regardless of its configuration.

You can shrink a simple volume or a spanned volume by following these steps:

1. In Disk Management, press and hold or right-click the volume you want to shrink, and then tap or click Shrink Volume. This option is available only if the volume meets the previously discussed criteria.

2. In the box provided in the Shrink dialog box, shown in Figure 2-9, enter the amount of space to shrink.

Figure 2-9. Specify the amount of space to shrink from the volume.

The Shrink dialog box provides the following information:

o Total Size Before Shrink In MB. Lists the total capacity of the volume in megabytes. This is the formatted size of the volume.

o Size Of Available Shrink Space In MB. Lists the maximum amount by which the volume can be shrunk. This doesn’t represent the total amount of free space on the volume; rather, it represents the amount of space that can be removed, not including any data reserved for the master file table, volume snapshots, page files, and temporary files.

o Enter The Amount Of Space To Shrink In MB. Lists the total amount of space that will be removed from the volume. The initial value defaults to the maximum amount of space that can be removed from the volume. For optimal drive performance, you’ll want to ensure that the drive has at least 10 percent of free space after the shrink operation.

o Total Size After Shrink In MB. Lists what the total capacity of the volume will be (in megabytes) after the shrink. This is the new formatted size of the volume.

3. Tap or click Shrink to shrink the volume.

You can extend a simple volume or a spanned volume by following these steps:

1. In Disk Management, press and hold or right-click the volume you want to extend, and then tap or click Extend Volume. This option is available only if the volume meets the previously discussed criteria and free space is available on one or more of the system’s dynamic disks.

2. In the Extend Volume Wizard, read the introductory message, and then tap or click Next.

3. On the Select Disks page, select the disk or disks from which you want to allocate free space. Any disks currently being used by the volume are automatically selected. By default, all remaining free space on those disks is selected for use.

4. With dynamic disks, you can specify the additional space you want to use on other disks by performing the following tasks:

o Tap or click the disk, and then tap or click Add to add the disk to the Selected list.

o Select each disk in the Selected list, and then, in the Select The Amount Of Space In MB list, specify the amount of unallocated space to use on the selected disk.

5. Tap or click Next, confirm your options, and then tap or click Finish.
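Both shrink and extend operations can also be scripted with Windows PowerShell. The following is a minimal sketch, assuming the volume is drive E and the sizes are examples:

# Determine the supported size range for the partition
$size = Get-PartitionSupportedSize -DriveLetter E
# Extend the partition to the maximum supported size
Resize-Partition -DriveLetter E -Size $size.SizeMax
# Or shrink the partition to 80 GB (must be within the supported range)
Resize-Partition -DriveLetter E -Size 80GB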

Repairing disk errors and inconsistencies automatically

Windows Server 2012 R2 includes feature enhancements that reduce the amount of manual maintenance you must perform on disk drives. The following enhancements have the most impact on the way you work with disks:

§ Transactional NTFS

§ Self-healing NTFS

Transactional NTFS allows file operations on an NTFS volume to be performed transactionally, which means programs can use a transaction to group sets of file and registry operations so that all of them succeed or none of them succeed. While a transaction is active, changes are not visible outside the transaction. Changes are committed and written fully to disk only when a transaction is completed successfully. If a transaction fails or is incomplete, the program rolls back the transactional work to restore the file system to the state it was in prior to the transaction.

REAL WORLD

Resilient File System (ReFS) takes the transactional and self-healing features of NTFS a few steps further. With ReFS, several background processes are used to maintain disk integrity automatically. The scrubber process checks the disk for inconsistencies and errors. If any are found, a repair process localizes the problems and performs automatic online correction. In the rare case that a physical drive has bad sectors that are causing the problem, ReFS uses a salvage process to mark the bad sectors and remove them from the file system—and all while the volume is online.

Transactions that span multiple volumes are coordinated by the Kernel Transaction Manager (KTM). The KTM supports independent recovery of volumes if a transaction fails. The local resource manager for a volume maintains a separate transaction log and is responsible for maintaining threads for transactions separate from threads that perform the file work.

Traditionally, you had to use the Check Disk tool to fix errors and inconsistencies in NTFS volumes on a disk. Because this process can disrupt the availability of Windows systems, Windows Server 2012 R2 uses self-healing NTFS to protect file systems without requiring you to use separate maintenance tools to fix problems. Because much of the self-healing process is enabled and performed automatically, you might need to perform volume maintenance manually only when you are notified by the operating system that a problem cannot be corrected automatically. If such an error occurs, Windows Server 2012 R2 notifies you about the problem and provides possible solutions.

Self-healing NTFS has many advantages over Check Disk, including the following:

§ Check Disk must have exclusive access to volumes, which means system and boot volumes can be checked only when the operating system starts up. On the other hand, with self-healing NTFS, the file system is always available and does not need to be corrected offline (in most cases).

§ Self-healing NTFS attempts to preserve as much data as possible if corruption occurs and reduces failed file system mounting that previously could occur if a volume was known to have errors or inconsistencies. During restart, self-healing NTFS repairs the volume immediately so that it can be mounted.

§ Self-healing NTFS reports changes made to the volume during repair through existing Chkdsk.exe mechanisms, directory notifications, and update sequence number (USN) journal entries. This feature also allows authorized users and administrators to monitor repair operations through Verification, Waiting For Repair Completion, and Progress Status messages.

§ Self-healing NTFS can recover a volume if the boot sector is readable but does not identify an NTFS volume. In this case, you must run an offline tool that repairs the boot sector, and then allow self-healing NTFS to initiate recovery.

Although self-healing NTFS is a terrific enhancement, at times you might want to (or might have to) manually check the integrity of a disk. In these cases, you can use Check Disk (Chkdsk.exe) to check for and (optionally) repair problems found on FAT, FAT32, exFAT, and NTFS volumes.

IMPORTANT

Because ReFS is self-correcting, you do not need to use Check Disk to check ReFS volumes for errors. However, it’s important to point out that ReFS as originally released did not efficiently correct corruption on parity spaces. With Windows Server 2012 R2, this deficiency has been corrected. ReFS automatically corrects corruption on parity spaces when integrity streams are enabled to detect corrupt data. When corruption is detected, ReFS examines the data copies that the parity spaces contain and then uses the correct version of the data to correct the problem. As ReFS now supports concurrent I/O requests to the same file, the performance of integrity streams also has been improved.

Although Check Disk can check for and correct many types of errors, the utility primarily looks for inconsistencies in the file system and its related metadata. One of the ways Check Disk locates errors is by comparing the volume bitmap to the disk sectors assigned to files in the file system. Beyond this, the usefulness of Check Disk is rather limited. For example, Check Disk can’t repair corrupted data within files that appear to be structurally intact.

As part of automated maintenance, Windows Server 2012 R2 performs a proactive scan of NTFS volumes. As with other automated maintenance, Windows scans disks using Check Disk at 2:00 A.M. if the computer is running on AC power and the operating system is idle. Otherwise, Windows scans disks the next time the computer is running on AC power and the operating system is idle. Although automated maintenance triggers the disk scan, the process of calling and managing Check Disk is handled by a separate task. In Task Scheduler, you’ll find the ProactiveScan task in the scheduler library under Microsoft\Windows\Chkdsk, and you can get detailed run details by reviewing the information provided on the task’s History tab.
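You can review the proactive scan task and its most recent run results from Windows PowerShell as well. A minimal sketch:

# Show the last run time and result for the ProactiveScan task
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Chkdsk\" -TaskName ProactiveScan | Get-ScheduledTaskInfo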

REAL WORLD

Automatic Maintenance is built on the Windows Diagnostics framework. By default, Windows periodically performs routine maintenance at 2:00 A.M. if the computer is running on AC power and the operating system is idle. Otherwise, maintenance will start the next time the computer is running on AC power and the operating system is idle. Because maintenance runs only when the operating system is idle, maintenance is allowed to run in the background for up to three days. This allows Windows to complete complex maintenance tasks automatically. Maintenance tasks include software updates, security scanning, system diagnostics, checking disks, and disk optimization. You can change the run time for automated maintenance by opening Action Center, expanding the Maintenance panel, selecting Change Maintenance Settings, and then selecting a new run schedule.

Checking disks manually

With Windows Server 2012 R2, Check Disk performs enhanced scan and repair automatically, instead of using the legacy scan and repair available with earlier releases of Windows. Here, when you use Check Disk with NTFS volumes, Check Disk performs an online scan and analysis of the disk for errors. Check Disk writes information about any detected corruptions in the $corrupt system file. If the volume is in use, detected corruptions can be repaired by taking the volume offline temporarily; however, unmounting the volume for the repair invalidates all open file handles. With the boot/system volume, the repairs are performed the next time you start the computer.

Storing the corruption information and then repairing the volume while it is dismounted enables Windows to rapidly repair volumes. You can also keep using the disk while a scan is being performed. Typically, offline repair takes only a few seconds, compared to what otherwise would have been hours for very large volumes using the legacy scan and repair technique.

NOTE

FAT, FAT32, and exFAT do not support the enhanced features. When you use Check Disk with FAT, FAT32, or exFAT, Windows Server 2012 R2 uses the legacy scan and repair process. This means the scan and repair process typically requires taking the volume offline and preventing it from being used. You can’t use Check Disk with ReFS.

You can run Check Disk from the command prompt or within other utilities. At a command prompt, you can test the integrity of the E drive by entering the following command:

chkdsk /scan E:

Check Disk then performs an analysis of the disk and returns a status message regarding any problems it encounters. Check Disk won’t repair problems, however, unless you specify further options. To repair errors on drive E, use this command:

chkdsk /spotfix E:

Fixing the volume requires exclusive access to the volume. The way this works depends on the type of volume:

§ For nonsystem volumes, you’ll get a prompt asking whether you would like to force a dismount of the volume for the repair. In this case, you can enter Y to proceed or N to cancel the dismount. If you cancel the dismount, you’ll get the prompt asking whether you would like to schedule the volume for the repair the next time the computer is started. In this case, you can enter Y to schedule the repair or N to cancel the repair.

§ For system volumes, you’ll get a prompt asking whether you would like to schedule the volume for the repair the next time the computer is started. In this case, you can enter Y to schedule the repair or N to cancel the repair.

You can’t run Check Disk with both the /scan and /spotfix options because the scan and repair tasks are now independent of each other.

The complete syntax for Check Disk is as follows:

CHKDSK [volume[[path]filename]] [/F] [/V] [/R] [/X] [/I] [/C] [/B]

[/L[:size]] [/scan] [/forceofflinefix] [/perf] [/spotfix]

[/sdcleanup] [/offlinescanandfix]

The options and switches for Check Disk are used as follows:

volume

Sets the volume with which to work.

[path]filename

FAT only. It specifies files to check for fragmentation.

/B

Reevaluates bad clusters on the volume (NTFS only; implies /R).

/C

NTFS only. It skips the checking of cycles within the folder structure.

/F

Fixes errors on the disk by using the offline (legacy) scan and fix behavior.

/I

NTFS only. It performs a minimum check of index entries.

/L:size

NTFS only. It changes the log file size.

/R

Locates bad sectors, and recovers readable information (implies /F).

/V

On FAT, it displays the full path and name of every file on the disk. On NTFS, it displays cleanup messages, if there are any.

/X

Forces the volume to dismount first if necessary (implies /F).

For NTFS volumes, Check Disk supports these enhanced options:

/forceofflinefix

Must be used with /scan. It bypasses all online repair and queues errors for offline repair.

/offlinescanandfix

Performs an offline scan and fix of the volume.

/perf

Performs the scan as fast as possible by using more system resources.

/scan

Performs an online scan of the volume (the default). Errors detected during the scan are added to the $corrupt system file.

/sdcleanup

Cleans up unneeded security descriptor data. It implies /F (with legacy scan and repair).

/spotfix

Repairs errors recorded in the $corrupt system file during a previous online scan. As discussed previously, this requires briefly dismounting a nonsystem volume or scheduling the repair for the next restart on a system volume.

Running Check Disk interactively

You can run Check Disk interactively through File Explorer or Disk Management by following these steps:

1. Press and hold or right-click the drive, and then tap or click Properties.

2. On the Tools tab of the Properties dialog box, tap or click Check. This displays the Error Checking dialog box, shown in Figure 2-10.

Figure 2-10. Use Check Disk to check a disk for errors and repair any that are found.

3. Click Scan Drive to start the scan. If no errors are found, Windows confirms this; otherwise, if errors are found, you’ll be prompted with additional options. As with checking disks at a prompt, the way this works depends on whether you are working with a system or nonsystem volume.

NOTE

For FAT, FAT32, and exFAT volumes, Windows uses the legacy Check Disk. You tap or click Scan And Repair Drive to start the scan. If the scan finds errors, you might need to restart the computer to repair them.

Analyzing and optimizing disks

Any time you add files to or remove files from a drive, the data on the drive can become fragmented. When a drive is fragmented, large files can’t be written to a single continuous area on the disk. As a result, the operating system must write the file to several smaller areas on the disk, which means more time is spent reading the file from the disk. To reduce fragmentation, Windows Server 2012 R2 can manually or automatically analyze and optimize disks by using the Optimize Drives utility.

With manual optimization, Optimize Drives performs an online analysis of volumes, and then reports the percentage of fragmentation. If defragmentation is needed, you can then elect to perform online defragmentation. System and boot volumes can be defragmented online as well, and Optimize Drives can be used with FAT, FAT32, exFAT, NTFS, and ReFS volumes.

You can manually analyze and optimize a disk by following these steps:

1. In Computer Management, select the Storage node and then the Disk Management node. Press and hold or right-click a drive, and then tap or click Properties.

2. On the Tools tab, tap or click Optimize. In the Optimize Drives dialog box, select a disk, and then tap or click Analyze. Optimize Drives then analyzes the disk, as shown in Figure 2-11, to determine whether it needs to be defragmented. If so, it recommends that you defragment at this point.

3. If a disk needs to be defragmented, select the disk, and then tap or click Optimize.

NOTE

Depending on the size of the disk, defragmentation can take several hours. You can tap or click Stop Operation at any time to stop defragmentation.

Figure 2-11. Optimize Drives analyzes and defragments disks efficiently.
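You can also analyze and optimize a volume from Windows PowerShell by using the Optimize-Volume cmdlet. A minimal sketch, assuming the volume is drive E:

# Report the fragmentation level without making changes
Optimize-Volume -DriveLetter E -Analyze -Verbose
# Defragment the volume
Optimize-Volume -DriveLetter E -Defrag -Verbose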

Automatic analysis and optimization of disks can occur while the disks are online, so long as the computer is on AC power and the operating system is running but otherwise idle. By default, disk optimization is a weekly task rather than a daily task—and there’s a good reason for this. Normally, you need only to periodically optimize a server’s disks, and optimization once a week is sufficient in most cases. Note, however, that although nonsystem disks can be rapidly analyzed and optimized, it can take significantly longer to optimize system disks online.

You can control the approximate start time for the analysis and optimization of disks by changing the automated maintenance start time. Windows Server also notifies you if three consecutive runs are missed. All internal drives and certain external drives are optimized automatically as part of the regular schedule, as are new drives you connect to the server.

NOTE

Windows Server 2012 R2 automatically performs cyclic pickup defragmentation. With this feature, when a scheduled defragmentation pass is stopped and rerun, the computer automatically picks up where it left off or starts with the next unfinished volume in line to be defragmented.

You can configure and manage automated defragmentation by following these steps:

1. In Computer Management, select the Storage node and then the Disk Management node. Press and hold or right-click a drive, and then tap or click Properties.

2. On the Tools tab, tap or click Optimize. This displays the Optimize Drives dialog box.

3. If you want to change how optimization works, tap or click Change Settings. This displays the dialog box shown in Figure 2-12. To cancel automated defragmentation, clear the Run On A Schedule check box. To enable automated defragmentation, select Run On A Schedule.

Figure 2-12. Set the run schedule for automated defragmentation.

4. The default run frequency is set as shown. In the Frequency list, you can choose Daily, Weekly, or Monthly as the run schedule. If you don’t want to be notified about missed runs, clear the Notify Me check box.

5. If you want to manage which disks are defragmented, tap or click Choose, and then select the volumes to defragment. By default, all disks installed within or connected to the computer are defragmented, and any new disks are defragmented automatically as well. Select the check boxes for disks that should be defragmented automatically, and clear the check boxes for disks that should not be defragmented automatically. Tap or click OK to save your settings.

6. Tap or click OK, and then tap or click Close.