Windows Storage Server

News and cool info about the Windows Storage Server product line.

Six Uses for the Microsoft iSCSI Software Target

The iSCSI Software Target can be used in so many ways that it’s like the Swiss Army knife of transportable storage. In this post I outline my favorite uses for an iSCSI Software Target; if you know of other creative uses, please let me know!

Windows Storage Server 2008 and the iSCSI Software Target 3.2 are available from original equipment manufacturers (OEMs) in a variety of preconfigured storage appliances. Partners and customers who want to test and develop on the platform without buying a fully configured appliance from an OEM can get an evaluation copy on MSDN or TechNet. See Jose’s blog for information about that.


A NAS (network-attached storage) device is a server that shares data over a network using a file protocol (like CIFS, SMB or NFS), while a SAN (storage area network) connects application servers to dedicated storage devices, using protocols like Fibre Channel or iSCSI to transport SCSI 'block' commands and data. NAS appliances control the file system, while in a SAN the raw storage is exposed to the application servers, which can partition and format it using whatever file system they support. Block-storage devices make it appear to the servers that the storage is a locally attached hard disk drive.

When you take a Windows Storage Server and you add iSCSI functionality, it becomes a NAS-SAN hybrid device. This has become known in the industry as a “unified” storage appliance. Either way, it is remote storage and it is going to be a big part of the future.

 

What is the Microsoft iSCSI Software Target?

[Diagram: remote VHD files on the iSCSI Software Target appearing to application servers as locally attached hard disks]

This is the simplest way to understand the Microsoft iSCSI Software Target.  Remote VHD files appear on the application server as locally attached hard disks. Application servers running just about any workload can connect to the target using an iSCSI initiator.
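As a rough sketch of what that connection looks like from the application server side, the iscsicli.exe command-line tool that ships with the iSCSI initiator can discover the target and log in. The portal address and target IQN below are placeholders for your own environment:

    iscsicli QAddTargetPortal 192.168.1.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage1-target1-target

Once the login completes, the new disk shows up in Disk Management just like a local drive, ready to be initialized and formatted. You can do the same thing through the iSCSI Initiator applet in Control Panel if you prefer the GUI.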

 

How do servers and clients access the iSCSI Software Target or file shares on a Windows Storage Server?

[Diagram: servers and clients reaching iSCSI targets and file shares on a Windows Storage Server over separate iSCSI and file-protocol networks]

In a typical network, administrators keep iSCSI traffic on a network separate from the user-accessible file-protocol networks carrying SMB, SMB 2.0, NFS or CIFS traffic. Windows Storage Server allows a heterogeneous mix of clients and servers to access data.

 

Six Uses for the Microsoft iSCSI Software Target

Before we get started, I want to call out that the hardware requirements for development, testing or demonstration scenarios are very different from those for production. It is a good idea to follow the guidance from your OEM on the types of workloads they support for each configuration.


1) Consolidate storage for multiple application servers.

Why are people still cramming hard disks into application servers?  Either they don’t know they can consolidate their storage into a SAN, or they have a range of factors that compel them to use direct-attached storage (DAS). iSCSI makes consolidation easy because it uses the same network cards, switches and cables that Ethernet vendors have been selling for years. The rise of huge data storage requirements has coincided with the biggest drop in the cost per GB the world has ever seen. However, the battle is still lost if you have to keep adding disks and backing up each server separately. Having a dedicated storage box to service groups of application servers not only makes it easy to provision storage, it also leaves you with only one critical system to back up and recover when disaster strikes.

How much load can it handle?  Well, it depends.  It depends on the system specs, network cards, network switches, the RAID card, storage drivers, the number of spindles and the IOPS that the workload generates against the storage. Exchange Server 2003 is notoriously I/O-heavy, and some ESRP submissions show solutions that use the iSCSI Software Target. One solution, the HP AIO 1200, has a 12-disk configuration that supports 1,500 Exchange users. If you added another 12 disks and a dedicated NIC, you could probably support another Exchange server on the same configuration. Contrast this with a low-I/O workload like an intranet web server, where you could easily host and consolidate the storage for dozens of servers.

The key takeaway here is that you don’t want to oversubscribe any part of the system by attaching more application servers than the storage server can handle.  Testing and validating the system at peak workloads before deploying into production is an important best practice.

2) Test and Development scenarios are endless, especially for Clustering, Live Migration, SAN transfer and Storage Manager for SANs.

Set up an iSCSI SAN for a clustered SQL Server on a single laptop! Testing SAN environments without a SAN, or building a killer demo of a solution that runs on a single laptop, is actually easy to set up with Hyper-V. See the virtualization section below for some drawings that outline different options for getting storage to Hyper-V virtual machines. You can test these SAN technologies for $0 out of pocket on industry-standard hardware using Windows Storage Server 2008 and the iSCSI Software Target. Being able to do proof-of-concept testing with the semantics of a high-availability application attaching to a SAN, without spending a ton of money on a SAN, is a big plus.

The bare-minimum configuration is a single-CPU server with a SATA drive and an Ethernet port. This is great for testing in a developer’s office, but it will not meet most workload requirements for data throughput and high availability. Certainly not for production.

If you want to test the throughput of a solution and remove the spinning disks as a bottleneck you could also use the RAMDISK command:

To create a VHD in system memory, use “RAMDISK:<size-in-MB>” for the device path.

For example, to create two 100MB VHDs in memory, use the following device paths:

“RAMDISK:100” for the first VHD

“RAMDISK:101” for the second VHD (we enforce device path uniqueness, so you need to add 1 to the size to make it unique)

Note:  This is an undocumented and unsupported command, but it is useful.

3) Set up an iSCSI SAN for a Windows cluster. 

The Microsoft iSCSI Software Target supports persistent reservations, so your storage resources can fail over from one cluster node to another. It supports both SCSI-3 and SCSI-2 reservation commands. Fibre Channel and SAS interfaces and the associated fabrics and switches dominate the market for cluster shared storage today, but using iSCSI targets to back a cluster is an option, and pairing the storage server with a good RAID card and a set of fast drives will get you much better performance than the bare-minimum configuration described above. In an upcoming post we will have a detailed setup document outlining how to create clustered storage servers, with recommendations for highly available file servers and iSCSI Target clusters.
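In the meantime, here is a rough sketch of how this comes together: each cluster node logs in to the same target before you validate the cluster, using something like the following from each node (the portal address and IQN are examples only):

    iscsicli QAddTargetPortal 192.168.10.20
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage1-clusterdisks-target

After both nodes can see the shared disks, run the Validate a Configuration wizard in Failover Cluster Management; its storage tests exercise the persistent reservation support mentioned above.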

4) Consolidate servers into Hyper-V VMs and migrate the data to a Windows Storage Server. 

The finance department wants another server to run their LOB application, but you are out of servers?  Here is one quick solution: convert one of your servers to a Hyper-V host and create several VMs. After migrating the server instance to a VM, create an iSCSI LUN on a Windows Storage Server, attach it to the VM, and migrate the data to the new LUN. With SCVMM you can then migrate Hyper-V guests from one host to another and quickly move the LUNs along with them. Hyper-V and iSCSI storage servers go together like PBJ (that’s peanut butter and jelly).
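Once the VM’s iSCSI initiator has logged in to the new LUN, bringing it online and formatting it is a quick diskpart job inside the guest. A minimal sketch, assuming the LUN shows up as disk 1 and you want drive E: (bring the disk online in Disk Management instead if your version of diskpart lacks the online command):

    diskpart
    select disk 1
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs label=LOBData quick
    assign letter=E
    exit

From there, robocopy is a handy way to move the application data onto the new volume.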

5) Diskless SAN boot over iSCSI!

Ok, now we are getting somewhere.  While we were just celebrating not putting data disks in all these servers, why not remove all the disks?  You can boot from an iSCSI SAN! Imagine your datacenter blades humming along without a bunch of spinning platters! Not only does this save big bucks in hard-disk costs, it also reduces your power consumption. iSCSI booting has been possible since the 2003 release of the iSCSI Initiator. If you want to boot a physical server off of iSCSI, you need an iSCSI-boot-capable NIC like the Intel PRO/1000 PT (PCI-E) or the Broadcom BCM5708C NetXtreme II GigE, or you can use an iSCSI HBA like the QLogic QLE4062C.

If you want to boot Hyper-V VMs off iSCSI, you could make the connection in the parent OS using the iSCSI initiator and then carve up storage for the VMs, but if you want the VMs to boot directly off iSCSI, you will need a third-party solution like DoubleTake’s NetBoot/i or gPXE, an open-source boot loader.

Windows doesn’t care that there are no hard disks in the box, as long as the network can handle it. Check out the iSCSI Boot Step-by-Step Guide for more information.
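For lab testing with gPXE, the boot script mostly amounts to pointing the loader at the target. A hedged sketch, with a placeholder portal address and IQN (exact syntax can vary between gPXE builds):

    dhcp net0
    sanboot iscsi:192.168.1.50::::iqn.1991-05.com.microsoft:storage1-bootdisk-target

For production, stick with an iSCSI-boot-capable NIC or HBA like the ones listed above.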

6)  “Bonus storage” for people in your organization. Storage administrators can be heroes! (for once)

Did you know that you can set up an iSCSI target with some drives and then carve up and hand out the storage to people running Windows clients?  The iSCSI initiator is built into every version of Windows, so you can quickly provision storage and assign it to just about anybody. Our storage guru recently sent out an email to everybody on the team that said, “get 20GB of storage that will be backed up each week, just send me your IQN (Control Panel –> iSCSI Initiator) and I will grant you access to your personal, private storage.”  That is pretty cool, especially when you run out of space or you need a place to back up some files.
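If you get an offer like that and are not sure what your IQN is, running iscsicli with no arguments should print your initiator node name near the top of its output; it is typically of the form below (the machine name is a placeholder):

    iqn.1991-05.com.microsoft:mymachine.contoso.com

That is the string you would send to the storage administrator so they can grant your initiator access to the LUN.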

Topologies of Common Configurations

Here is a simple configuration.  The storage backend usually refers to an array of disks in a RAID configuration, but it could also be a JBOD (just a bunch of disks). It could be as small as a single SATA disk attached to the motherboard, or it could be a rack of 1000 Fibre Channel drives in a RAID configuration with redundant host bus adapters (HBAs) in the storage server.

There is no limit to how much you can spend on a backend storage array that meets your needs for high availability, I/O bandwidth or advanced features like array-based replication. I don’t know of any vendors attaching the disks directly to a SATA controller; at a bare minimum, people usually use some sort of RAID controller to meet their I/O and data-protection requirements.



[Diagram: a simple configuration with application servers connected to a single storage server over one iSCSI network]

 

Here is a simple configuration using redundant networking. Multipathing with the Microsoft MPIO framework is recommended to ensure redundancy and maximum throughput. See this recent Multipath I/O Step-by-Step guide for details. Many SPC-3-compliant storage arrays will work with the Microsoft DSM for MPIO. Some storage array partners also provide their own DSMs that plug into the MPIO architecture.
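With the inbox Microsoft DSM, the usual step after installing the MPIO feature is to claim the iSCSI-attached devices for MPIO. A hedged sketch using mpclaim, with the hardware ID Microsoft documents for iSCSI devices (double-check against the step-by-step guide linked above, since these flags trigger a reboot):

    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

After the reboot, log in to the target once over each network path and choose a load-balance policy in the iSCSI Initiator properties.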

[Diagram: a simple configuration with redundant network paths (MPIO) between the application servers and the storage server]

 

Here we have a high-availability configuration of storage servers. If one of the machines sucks in a bee that shorts out the motherboard, the other machine will pick up, and the application server doesn’t have to know the storage server went down.

[Diagram: a high-availability configuration with clustered storage servers]

 

Now we are getting close to Nirvana. Here we have a high-availability configuration of storage servers and redundant networking paths. Now storage servers and network switches can fail and service continues without interruption. 

[Diagram: clustered storage servers with redundant network paths]

 

Ok, time to cluster the front-end servers.  Now we have a highly-available configuration of application servers and another cluster for the storage servers.

[Diagram: clustered application servers in front of clustered storage servers]

 

Now let’s talk about using all of it together: clustered front-end (application servers) and/or back-end (storage servers) along with MPIO. MPIO path failover times can be impacted by the number of LUNs and the amount of I/O being generated, so make sure you test a fully configured machine running peak I/O before moving it into production.

[Diagram: clustered application servers and clustered storage servers with MPIO across all paths]

We tested failover on various configurations at Microsoft with MPIO while the servers were being hammered with I/O from Jetstress or IOMeter. Using the inbox Microsoft DSM, we saw good failover performance with two 2-node application server clusters (running Windows Server 2008) and 32 LUNs per cluster (a total of 64 LUNs). The key here is that the high-availability failover must be quick enough to support application servers that throw a fit if the disk stops responding. 

When using MPIO in these advanced configurations, the iSCSI Software Target team recommends using Windows Server 2008 initiators.

 

Microsoft iSCSI Software Target
Multipath and Single-path Support Matrix

The following tables define tested limits and support for using the Microsoft iSCSI Software Initiator with a single network path or multipath (MPIO) when connecting to the Microsoft iSCSI Software Target in clustered and non-clustered environments.

[Tables: tested support matrix for single-path and multipath (MPIO) configurations, clustered and non-clustered]

*There is limited support for Windows Server 2003 iSCSI hosts when connected to the Microsoft iSCSI Software Target if the iSCSI hosts or iSCSI Targets are clustered. Failures on the iSCSI network path may result in delayed failover and recovery times. Failures in non-network related areas have been tested with acceptable recovery times. The time to complete a failover and recovery may vary and is dependent on the application IO workload at the time of failure.

Microsoft strongly recommends the use of Windows Server 2008 iSCSI hosts for clustered configurations when connecting to the Microsoft iSCSI Software Target.

Note: The above is specific to Microsoft iSCSI Software Target configurations.  Customers using Windows Server, the Microsoft iSCSI Software Initiator and a logo’d iSCSI hardware array should refer to the storage array vendor’s support statements for applicable supported configurations.

 

Virtualization and iSCSI

Three ways to expose iSCSI LUNs to Hyper-V Virtual Machines

Here is a cool diagram that shows three different ways to get storage to a VM. See the Storage options for Windows Server 2008 Hyper-V blog post for a complete breakdown. Check out the Hyper-V Planning and Deployment Guide for deployment best practices. 

[Diagram: three ways to expose iSCSI LUNs to Hyper-V virtual machines]
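As one concrete sketch of the “connect in the parent” approach shown in the diagram: after the parent partition’s iSCSI initiator logs in to the LUN, take the disk offline in the parent (right-click the disk in Disk Management and choose Offline, or use diskpart as below on versions that support it, assuming the LUN is disk 2) so Hyper-V can attach it to a VM as a pass-through disk:

    diskpart
    select disk 2
    offline disk
    exit

Then add a “Physical hard disk” to the VM’s IDE or SCSI controller in Hyper-V Manager. The linked post covers the other options, such as placing VHD files on the LUN or running the iSCSI initiator inside the guest itself.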

I hope this post was helpful and gives you some ideas on different ways to use your new NAS device and the Microsoft iSCSI Software Target.

Cheers,
Scott M. Johnson
Program Manager 
Windows Storage Server

Comments
  • Hi --

    Nice intro info, but where would one actually TYPE this "RAMDISK" command?  Some bcdedit setting?  Diskpart?  The command line?  CONFIG.SYS?  (Okay, joking on that last one, but it's the last time I used a RAMDISK command in Windows.<g>.)  Thanks.

  • I have four virtual servers:
    • Domain Controller (Windows Server 2012)
    • SQL-01 (Windows Server 2012 R2)
    • SQL-02 (Windows Server 2012 R2)
    • SAN-01 (Windows Server 2008 R2)
    o Microsoft iSCSI Software Target 3.3
    I installed and configured the Microsoft iSCSI Software Target 3.3 on SAN-01, which runs Windows Server 2008 R2, created the iSCSI target and three virtual disks (SQL_Data, SQL_Log, and Quorum), then mounted and formatted those disks.
    The iSCSI initiators on SQL-01 and SQL-02 connected and detected the volumes.
    Next, I started Failover Cluster Manager and ran the Create Cluster Wizard.
    Creating the cluster fails with the error “An error occurred while creating the cluster. This operation returned because the timeout period expired.”
    I have done some research and Googling but have not found an answer yet.
    Can someone help?
    Thank you in advance.
