A blog by Jose Barreto, a member of the File Server team at Microsoft.
All messages posted to this blog are provided "AS IS" with no warranties, and confer no rights.
Information on unreleased products is subject to change without notice.
Dates related to unreleased products are estimates and are subject to change without notice.
The content of this site consists of personal opinions and might not represent the views of Microsoft Corporation.
The information contained in this blog represents my view on the issues discussed as of the date of publication.
You should not consider older, out-of-date posts to reflect my current thoughts and opinions.
© Copyright 2004-2012 by Jose Barreto. All rights reserved.
This post describes how to configure the Microsoft iSCSI Software Target offered with Windows Storage Server.
One of the goals here is to describe the terminology used like iSCSI Target, iSCSI Initiator, iSCSI Virtual Disk, etc. It also includes the steps to configure the iSCSI Software Target and the iSCSI Initiator.
We’ll start with a simple scenario with three servers: one Storage Server and two Application Servers.
In our example, the Storage Server runs WSS 2008 and the two Application Servers run Windows Server 2008.
The Application Servers could be running any edition of Windows Server 2003 (using the downloadable iSCSI Initiator) or Windows Server 2008 / Windows Server 2008 R2 (which come with an iSCSI Initiator built-in).
The iSCSI Initiator configuration applet can be found in the Application Server’s Control Panel. In the “General” tab of that applet you will find the iQN (iSCSI Qualified Name) for the iSCSI Initiator, which you may need later while configuring the Storage Server.
The Microsoft iSCSI Software Target Management Console can be found on the Administration Tools menu in the Storage Server.
Add iSCSI Targets
The first thing to do is add two iSCSI Targets to the Storage Server. To do this, right-click the iSCSI Targets node in the Microsoft iSCSI Software Target MMC and select the “Create iSCSI Target” option. You will then specify a name, an optional description and the identifier for the iSCSI Initiator associated with that iSCSI Target.
There are four methods to identify the iSCSI Initiators: iQN (iSCSI Qualified Name), DNS name, IP address and MAC address. However, you only need to use one of the methods. The default is the iQN (which can be obtained from the iSCSI Initiator’s control panel applet). If you don’t have access to the iSCSI Initiator to check the iQN, you can use its DNS name. If you’re using the Microsoft iSCSI Initiator on your application server, that iQN is actually constructed with a prefix (“iqn.1991-05.com.microsoft:”) combined with the DNS name of the computer.
For instance, if the Application Server runs the Microsoft iSCSI Initiator, is named “s2” and is a member of the “contoso.com” domain, its iQN would be “iqn.1991-05.com.microsoft:s2.contoso.com” and its DNS name would be “s2.contoso.com”. You could also use its IP address (something like “10.1.1.120”) or its MAC address (which would look like “12-34-56-78-90-12”).
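The naming rule above is easy to express in code. Here is a small sketch (Python is used purely for illustration; the function name is mine, not part of any Microsoft API) that builds the default Microsoft iSCSI Initiator iQN from a computer name and domain:

```python
# The Microsoft iSCSI Initiator derives its default iQN from a fixed
# prefix plus the computer's fully qualified DNS name.
MS_IQN_PREFIX = "iqn.1991-05.com.microsoft:"

def initiator_iqn(computer_name: str, domain: str) -> str:
    """Return the default Microsoft iSCSI Initiator iQN for a host."""
    fqdn = f"{computer_name}.{domain}".lower()
    return MS_IQN_PREFIX + fqdn

# The Application Server "s2" in the "contoso.com" domain:
print(initiator_iqn("s2", "contoso.com"))
# iqn.1991-05.com.microsoft:s2.contoso.com
```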
Typically, you assign just one iSCSI Initiator to each iSCSI Target. If you assign multiple iSCSI Initiators to the same iSCSI Target, there is a potential for conflict between Application Servers. However, there are cases where this can make sense, like when you are using clusters.
In our example, we created two iSCSI Targets named T1 (assigned to the iSCSI Initiator in S2) and T2 (assigned to the iSCSI Initiator in S3). It did not fit in the diagram, but assume we used the complete DNS names of the Application Servers to identify their iSCSI Initiators.
Add Virtual Disks
Next, you need to create the Virtual Disks on the Storage Server. This is the equivalent of creating an LU in a regular SAN device. The Microsoft iSCSI Software Target stores those Virtual Disks as files with the VHD extension in the Storage Server.
This is very similar to the Virtual Disks in Virtual PC and Virtual Server. However, you can only use the fixed size format for the VHDs (not the dynamically expanding or differencing formats). You can extend those fixed-size VHDs later if needed.
Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Create Virtual Disk” option. For each Virtual Disk you will specify a filename (complete with drive, folder and extension), a size (between 8MB and 16TB) and an optional description. You can also assign the iSCSI Targets at this point, but we’ll skip that and do it as a separate step.
In our example, we created three virtual disks: D:\VHD1.vhd, E:\VHD2.vhd and E:\VHD3.vhd.
You can create multiple VHD files on the same disk. However, keep in mind that there are performance implications in doing so, since these VHDs will be sharing the same spindles (not unlike any scenario where two applications store data in the same physical disks).
The VHD files created by the Microsoft iSCSI Software Target cannot be used by Virtual PC or Virtual Server, since the format was adapted to support larger sizes (up to 16 TB instead of the usual 2 TB limit in Virtual PC and Virtual Server).
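The size limits mentioned above (fixed-size VHDs between 8 MB and 16 TB) can be captured in a quick sanity check. This is purely illustrative, not part of any Microsoft tool, and I am assuming the stated limits are inclusive:

```python
# Validate a requested virtual disk size against the limits described
# above: the Microsoft iSCSI Software Target creates fixed-size VHDs
# between 8 MB and 16 TB. (Assumes the bounds are inclusive.)
MIN_VHD_BYTES = 8 * 1024**2    # 8 MB
MAX_VHD_BYTES = 16 * 1024**4   # 16 TB

def vhd_size_ok(size_bytes: int) -> bool:
    return MIN_VHD_BYTES <= size_bytes <= MAX_VHD_BYTES

print(vhd_size_ok(100 * 1024**3))  # 100 GB -> True
print(vhd_size_ok(32 * 1024**4))   # 32 TB  -> False (beyond the 16 TB limit)
```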
Assign Virtual Disks to iSCSI Targets
Once you have created the iSCSI Targets and the Virtual Disks, it’s time to associate each Virtual Disk with its iSCSI Target. Since the iSCSI Initiators were already assigned to the iSCSI Targets, this is the equivalent of unmasking an LU in a regular SAN device.
Right-click the “Devices” node in the Microsoft iSCSI Software Target MMC and select the “Assign/Remove Target” option. This will take you directly to the “Target Access” tab in the properties of the virtual disk. Click the “Add” button to pick a target. You will typically assign a virtual disk to only one iSCSI Target. As with multiple iSCSI Initiators per iSCSI Target, if you assign the same disk to multiple iSCSI Targets, there is a potential for conflict if two Application Servers try to access the virtual disk at the same time.
You can assign multiple disks to a single iSCSI Target. This is very common when you are exposing several disks to the same Application Server. However, you can also expose multiple virtual disks to the same Application Server using multiple iSCSI Targets, with a single virtual disk per iSCSI Target. This will improve performance if your server runs a very demanding application in terms of storage, since each target will have its own request queue. Having too many iSCSI Targets will also tax the system, so you need to strike a balance if you have dozens of Virtual Disks, each associated with very demanding Application Servers.
In our example, we assigned VHD1 and VHD2 to T1, then assigned VHD3 to T2.
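To make the masking relationships in our example explicit, here is the configuration modeled as plain data (a conceptual sketch only; the structure and helper are mine, not anything the product exposes):

```python
# Model the example configuration: each iSCSI Target is assigned to one
# iSCSI Initiator (Application Server) and exposes one or more disks.
targets = {
    "T1": {"initiator": "S2", "disks": ["VHD1", "VHD2"]},
    "T2": {"initiator": "S3", "disks": ["VHD3"]},
}

def disks_visible_to(server: str) -> list:
    """All virtual disks a given Application Server can see."""
    return [disk
            for target in targets.values()
            if target["initiator"] == server
            for disk in target["disks"]]

print(disks_visible_to("S2"))  # ['VHD1', 'VHD2']
print(disks_visible_to("S3"))  # ['VHD3']
```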
Add Target Portal
Now that we finished the configuration on the Storage Server side, let’s focus on the Application Servers.
Using the iSCSI Initiator control panel applet, click on the “Discovery” tab and add your Storage Server DNS name or IP address to the list of Target Portals. Keep the default port (3260).
Next, select the “Targets” tab and click on the “Refresh” button. You should see the iQNs of iSCSI Targets that were assigned to this specific iSCSI Initiator.
In our example, the iSCSI Initiators in Application Server S2 and S3 were configured to use Storage Server S1 as target portal.
The iQN of the iSCSI Target (which you will see in the iSCSI Initiator) is constructed by the Storage Server using a prefix (“iqn.1991-05.com.microsoft:”) combined with the Storage Server computer name, the name of the iSCSI Target and a suffix (“-target”). In our example, when checking the list of Targets on the iSCSI Initiator in S3, we found “iqn.1991-05.com.microsoft:s1-t2-target” .
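The target naming rule can be sketched in code as well (again, Python purely for illustration and the function name is hypothetical):

```python
# Reconstruct the iQN the Storage Server generates for each iSCSI
# Target: prefix + storage server name + "-" + target name + "-target".
MS_IQN_PREFIX = "iqn.1991-05.com.microsoft:"

def target_iqn(server_name: str, target_name: str) -> str:
    """Return the iQN the Storage Server builds for one of its targets."""
    return f"{MS_IQN_PREFIX}{server_name.lower()}-{target_name.lower()}-target"

# Storage Server S1, target T2, as seen from the initiator on S3:
print(target_iqn("S1", "T2"))
# iqn.1991-05.com.microsoft:s1-t2-target
```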
Logon to iSCSI Targets
Now you need to select the iSCSI Target and click on the “Log on” button to connect to the target, making sure to select the “Automatically restore this connection when the system boots” option.
Once the iSCSI Initiators have successfully logged on to the targets, the virtual disks will get exposed to the Application Servers.
In our example, S2’s iSCSI Initiator was configured to logon to the T1 target and S3’s iSCSI Initiator was configured to logon to the T2 target.
Format, Mount and Bind Volumes
At this point, the virtual disks look just like locally attached disks, showing up in the Disk Management MMC as uninitialized disks. Now you need to format and mount the volumes.
To finish the configuration, open the Computer Management MMC (Start, Administrative Tools, Computer Management or right-click Computer and click Manage). Expand the “Storage” node on the MMC tree to find the “Disk Management” option. When you click on the Disk Management option, you should immediately see the “Initialize and Convert Disk Wizard”. Follow the wizard to initialize the disk, making sure to keep it as a basic disk (as opposed to dynamic).
You should then use the Disk Management tool to create a partition, format it and mount it (as a drive letter or a path), as you would for any local disk. For larger volumes, you should convert the disk to a GPT disk (right-click the disk, select “Convert to GPT Disk”). Do not convert to GPT if you intend to boot from that disk.
After the partition is created and the volumes are formatted and mounted, you can go to the “Bound Volumes/Devices” tab in the iSCSI Initiator applet, make sure all volumes mounted are listed there and then use the “Bind All” option. This will ensure that the volumes will be available to services and applications as they are started by Windows.
In our example, we created a single partition for each disk, formatted them as NTFS and mounted each one in an available drive letter. In Application Server S2, we ended up with disks F: (for VHD1) and G: (for VHD2). On S3, we used F: (for VHD3).
Next, we’ll create a snapshot of a volume. This is basically a point-in-time copy of the data, which can be used as a backup or an archive. You can restore the disk to any previous snapshot in case your data is damaged in any way. You can also look at the data as it was at that time without restoring it. If you have enough disk space, you can keep many snapshots of your virtual disks, going back days, months or years.
To create a snapshot in the Storage Server, right-click the Devices node in the Microsoft iSCSI Software Target MMC and select the “Create Snapshot” option. No additional information is required and a snapshot will be created.
You can also schedule the automatic creation of snapshots. For example, you could do it once a day at 1AM. This is done using the “Schedules” option under the “Snapshots” node in the Microsoft iSCSI Software Target MMC.
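To reason about a “once a day at 1AM” schedule like the one above, the next run time is simple to compute. This is only a conceptual sketch of the scheduling rule, not how the product implements it:

```python
from datetime import datetime, timedelta

def next_snapshot_time(now: datetime, hour: int = 1) -> datetime:
    """When a once-a-day schedule at the given hour would next fire."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:          # today's slot already passed
        candidate += timedelta(days=1)
    return candidate

# At 2:30 PM, the next 1AM snapshot is tomorrow morning:
print(next_snapshot_time(datetime(2009, 2, 2, 14, 30)))
# 2009-02-03 01:00:00
```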
In our example, we created a snapshot of the VHD3 virtual disk at 1AM.
Microsoft also offers a VSS Provider for the Microsoft iSCSI Software Target, which you can use on the Application Server to create a VSS-based snapshot.
Export Snapshot to iSCSI Target
Snapshots are usually not exposed to targets at all. You can use them to “go back in time” by rolling back to a previous snapshot, which requires no reconfiguration of the iSCSI Initiators. In some situations, however, it might be useful to expose a snapshot so you can check what’s in it before you roll back.
You might also just grab one or two files from the exported snapshot and never really roll back the entire virtual disk. Keep in mind that snapshots are read-only.
To make a snapshot visible to an Application Server, right-click the snapshot in the Microsoft iSCSI Software Target MMC and select the “Export Snapshot” option. You will only need to pick the target you want to use.
Unlike regular virtual disks, you can choose to export snapshots to multiple iSCSI Targets or to an iSCSI Target with multiple iSCSI Initiators assigned. This is because you cannot write to them and therefore there is no potential for conflicts.
In our example, we exported the VHD3 at 1AM snapshot to target T2, which caused it to show up on Application Server S3.
Mount Snapshot Volume
The last step to expose the snapshot is to mount it as a path or drive on your Application Server. Note that you do not need to initialize the disk, create a partition or format the volume, since these things were already performed with the original virtual disk. You would not be able to perform any of those operations on a snapshot anyway, since you cannot write to it.
Again, open the Computer Management MMC, expand the “Storage” node and find the “Disk Management” option. If you already have it open, simply refresh the view to find the additional disk. Then use the properties of the volume to mount it.
In our example, we mounted the snapshot of VHD3 at 1AM as the G: drive on Application Server S3.
Now you might be able to find a file that was deleted from the F: drive after 1AM by looking at drive G:. You can then copy files from the G: drive to the F: drive on the Application Server side. You can also decide to roll back to that snapshot on the Storage Server side, keeping in mind that you will lose any changes made to F: after 1AM.
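The trade-off between copying individual files forward and rolling the whole disk back can be sketched with dictionaries standing in for volume contents. This is purely conceptual; the file names and values are invented for illustration:

```python
# A snapshot is a frozen, read-only copy of the volume at 1AM.
volume_at_1am = {"report.doc": "v1", "notes.txt": "v1"}
snapshot = dict(volume_at_1am)        # point-in-time copy

# Changes after 1AM: one file is edited, another is deleted.
volume = dict(volume_at_1am)
volume["report.doc"] = "v2"
del volume["notes.txt"]

# Option 1: copy just the deleted file back from the snapshot (G: -> F:).
volume["notes.txt"] = snapshot["notes.txt"]
assert volume["report.doc"] == "v2"   # post-1AM edit is preserved

# Option 2: roll back the whole disk; all post-1AM changes are lost.
volume = dict(snapshot)
assert volume["report.doc"] == "v1"
```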
Now that you have the basics, you can start designing more advanced scenarios. As an example, see the diagram showing two Storage Servers and two Application Servers.
There are a few interesting points about that diagram that are worth mentioning. First, the iSCSI Initiators in the Application Servers (S3 and S4) point to two Target Portals (S1 and S2).
Second, you can see that VHD1 and VHD2 are exposed to Application Server S3 using two separate iSCSI Targets (T1 and T2). A single iSCSI Target could be used, but this was done to improve performance.
You can also see that the snapshot of VHD5 at 3AM is being exported simultaneously to Application Servers S3 and S4. This is fine, since snapshots are write-protected.
This last scenario shows how to configure the Microsoft iSCSI Software Target for a cluster environment. The main difference here is the fact that we are assigning the same iSCSI Target to multiple iSCSI Initiators at the same time. This is usually not a good idea for regular environments, but it is common for a cluster.
This example shows an active-active cluster, where node 1 (running on Application Server S2) has the Quorum disk and the Data1 disk, while node 2 (running on Application Server S3) has the Data2 disk. When running in a cluster environment, the servers know how to keep the disks they’re not using offline, bringing them online on just one node at a time, as required.
In case of a failure of node 1, node 2 will first verify that it should take over the services and then it will mount the disk resources and start providing the services that used to run on the failed node. Also note that we avoid conflicting drive letters on cluster nodes, since that could create a problem when you move resources between them. As you can see, the nodes need a lot of coordination to access the shared storage and that’s one of the key abilities of the cluster software.
Again, we could have used a single iSCSI Target for all virtual disks, but two were used because of performance requirements of the application associated with the Data2 virtual disk.
I hope this explanation helped you understand some of the details on how to configure the Microsoft iSCSI Software Target included in Windows Storage Server.
Links and References
For general information on iSCSI on the Windows platform, including a link to download the iSCSI Initiator for Windows Server 2003, check http://www.microsoft.com/iscsi
For step-by-step instructions on how to configure the Microsoft iSCSI Software Target, with screenshots, check this post: http://blogs.technet.com/josebda/archive/2009/02/02/step-by-step-using-the-microsoft-iscsi-software-target-with-hyper-v-standalone-full-vhd.aspx
For details on how VSS works, check this post: http://blogs.technet.com/josebda/archive/2007/10/10/the-basics-of-the-volume-shadow-copy-service-vss.aspx
For details on how iSCSI names are constructed using the iQN format, check IETF’s RFC 3721 at http://www.ietf.org/rfc/rfc3721