Some of the Cluster Recovery functionality is built into Windows 2008 Failover Clustering.
Say I have 4 disks in the Cluster (1 witness and 3 data) and I want to replace them with new ones.
Here are the steps I would take. Please ensure all nodes are up and functioning in the Cluster.

1. Change the Quorum Setting from Witness to Node Majority.
   a. In Failover Cluster Manager, right-click the name of the Cluster on the left
   b. Choose MORE ACTIONS
   c. Choose CONFIGURE CLUSTER QUORUM SETTINGS
   d. Select NODE MAJORITY
   e. Click FINISH
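If you're on Windows Server 2008 R2, the same quorum change can be scripted with the FailoverClusters PowerShell module (on 2008 RTM you'd use the GUI or cluster.exe instead). A sketch; 'MyCluster' is a placeholder for your cluster's name:

```shell
# Load the Failover Clustering cmdlets (Windows Server 2008 R2 and later)
Import-Module FailoverClusters

# Show the current quorum configuration before changing it
Get-ClusterQuorum -Cluster MyCluster

# Switch the quorum model to Node Majority (no witness disk)
Set-ClusterQuorum -Cluster MyCluster -NodeMajority
```

This requires a live cluster, so run it from a node (or a machine with the admin tools) with appropriate rights.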
“How do I replace a disk?”
I’ll walk through the process of replacing a 1GB disk with a 2GB disk.
This process is similar to how you could go about doing a SAN migration where you are replacing all of your shared disks with storage from a new SAN.
The preferred way of getting a larger cluster disk is to use the built-in capability of most SANs to dynamically expand a LUN, and then use an OS utility like DiskPart or Disk Manager to extend the size of the disk.
If that’s not feasible, or you simply want to replace a LUN with a larger one, or, as I mentioned, as part of a SAN migration, this process works well.
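For the LUN-expansion path mentioned above, DiskPart can extend the volume once the SAN administrator has grown the LUN. A sketch as a DiskPart script (run with `diskpart /s extend.txt` from an elevated prompt); the volume number is a placeholder you'd confirm from the `list volume` output on your own system:

```shell
rem extend.txt - DiskPart script to grow a volume into newly added LUN space
rem Volume 3 is a placeholder; check "list volume" output first
list volume
select volume 3
extend
```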
The first thing we need to do is present your new disk to the cluster.
The nuts and bolts of how to do that are outside the scope of this post so just ask your SAN administrator for a new LUN and present it to all nodes of the cluster.
Since by default in Server 2008, we leave new LUNs offline, there’s no risk in presenting a new LUN to all nodes at the same time.
The figure below shows what Disk Manager looks like after my new disk has been presented.
Note how the new disk ‘Disk 9’ is in an ‘Offline’ state. To prepare it to be the replacement disk for an existing disk, we need to bring it online, initialize it, and format it with an NTFS partition.
Figure 2 now shows ‘Disk 9’ as Online and formatted with an NTFS partition.
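In Disk Manager these are the Online, Initialize Disk, and New Simple Volume actions; the same preparation can be scripted with DiskPart. A sketch (disk number 9 matches the figure; the drive letter and label are placeholders, and this wipes whatever is on that disk, so double-check the disk number first):

```shell
rem prepare.txt - DiskPart script (run with: diskpart /s prepare.txt)
rem Disk 9 is the new LUN from the figure; letter and label are placeholders
select disk 9
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs label="New Cluster Disk" quick
assign letter=B
```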
1. Add the NEW disk to the ‘Available Storage’ group.
2. Take all the resources OFFLINE except the NEW and the OLD disks.
3. Copy the data:
robocopy G: B: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy H: E: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy I: F: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy J: S: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy K: T: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy M: U: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy N: V: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy L: W: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy P: X: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy R: Y: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL
robocopy O: Z: /MIR /SEC /R:5 /W:1 /LOG+:C:\copia_log.txt /TEE /ETA /COPYALL…
(Note: /LOG+ appends to the log file; plain /LOG would overwrite it on each command, leaving only the last copy’s output.)
At this point, we can now go into Failover Cluster Manager to complete the rest of the replacement.
The screenshot below shows a File Server group with ‘Cluster Disk X:’ of size 1GB. This is the disk that I am going to replace with the new 2GB disk from above.
Failover Cluster Manager has built-in ‘repair’ functionality that allows replacing a failed disk with a new disk.
Since we’re not really replacing a failed disk but a working one, we need to ‘trick’ the cluster into putting that disk into a failed state so that the ‘repair’ function will be enabled.
First, we need to change the properties of ‘Cluster Disk X:’ so that it does not restart or fail over when we simulate a failure of the disk. Right-click the disk resource, [Properties], [Policies tab]. Set the ‘Response to resource failure’ to ‘If resource fails, do not restart’.
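If you'd rather script this policy change (and the restore back to default later), the resource's RestartAction property controls the same setting. A sketch, assuming Windows Server 2008 R2 with the FailoverClusters PowerShell module; the resource name is the example name from this post:

```shell
Import-Module FailoverClusters

# 0 = if the resource fails, do not restart it
(Get-ClusterResource "Cluster Disk X:").RestartAction = 0

# ...simulate the failure and run Repair here...

# 2 = attempt restart on the current node (the default policy)
(Get-ClusterResource "Cluster Disk X:").RestartAction = 2
```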
Now right-click ‘Cluster Disk X:’, ‘More Actions’, ‘Simulate failure of this resource’.
You’ll now see that the disk resource is in a ‘Failed’ state. Don’t worry; the data is still intact and the disk is fine.
Now right-click the disk resource, ‘More actions…’, ‘Repair’. This will launch the ‘Repair a Disk Resource’ window.
Figure 7 shows the disk that we presented and created in Figure 2. Select that disk and click [OK].
So after this, we now have the NEW disk in place of the old one.
Now bring the resource online. You’ll see in Figure 8 that the disk now shows as 2GB. We essentially swapped one disk for another without having to worry about resource dependencies.
If the drive letter needs to be changed to match the old drive letter, do so now.
Now, we need to set the restart properties of the resource back to their default. Right-click the disk, Properties, and select ‘If resource fails, attempt restart on current node’. We’re undoing what we did in Figure 4.
So now that we’ve replaced the 1GB disk with the 2GB disk, what happened to the old disk?
When you used the ‘Repair’ function, the old disk was removed from the cluster’s control.
The final step in the replacement is to bring the old disk back into the cluster so that we can bring it online and move the data from the old disk to the new.
To add the disk back in, from Failover Cluster Manager, go to the ‘Storage’ group. In the ‘Actions’ pane in the right-most column, click ‘Add a disk’.
Figure 11 shows the disk we just removed from the cluster. Select this disk and click [OK].
This disk now shows up in ‘Available Storage’ (Figure 12).
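On 2008 R2, the same ‘Add a disk’ step can also be done from PowerShell. A sketch, run from a cluster node with the FailoverClusters module available:

```shell
Import-Module FailoverClusters

# List disks visible to all nodes but not yet under cluster control,
# then add them to Available Storage
Get-ClusterAvailableDisk | Add-ClusterDisk
```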
The final steps in the replacement are to assign this disk a drive letter so that it’s exposed to the OS, and then move your data from the old disk to the new.
Now that ‘Cluster Disk 7’ (the old disk) shows as online and has a drive letter (D:), you can use your favorite data copy method to move the data from the old disk to the new disk.
If you are no longer going to use the old LUN, you can simply delete this resource from Failover Cluster Manager and unpresent that LUN from all nodes of the cluster.
That finishes up the clean-up process. You can also just leave the disk in ‘Available Storage’, format it, and have it ready for some other ‘Service or application’ cluster group to use in the future.
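On 2008 R2 the clean-up can also be scripted. A sketch; ‘Cluster Disk 7’ is the example resource name from above, and this only removes the disk from cluster control (unpresenting the LUN is still done on the SAN side):

```shell
Import-Module FailoverClusters

# Take the old disk offline, then remove it from the cluster
Stop-ClusterResource "Cluster Disk 7"
Remove-ClusterResource "Cluster Disk 7" -Force
```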
Hope you find this useful especially for those SAN migrations.
Awesome post! I wasn't aware of this method of migration. One question, though, you do a robocopy both just after Figure 2 (when the Cluster Resources are offline), then you also suggest copying again at the end of your post, just after Figure 13. Isn't the second copy unnecessary? Or does using a new disk wipe out the data on that disk, in which case, why do the first copy at all?
What about if SQL 2008 R2 is installed and clustered on the server? I want to replace data and log drives with new SAN drives and have concerns about MSDTC, SQL databases, and other components installed on the data drive.