What is a Microsoft Failover Cluster Virtual Adapter anyway?


A question often asked is, "What is the Microsoft Failover Cluster Virtual Adapter and what can I do with it?" The typical, and correct, answer is to leave it alone and let it just work for you. While that answer satisfies most, others may want a little more by way of an explanation, so hopefully this blog will provide that.

The networking model in Windows Server 2008 Failover Clustering was rewritten to accommodate new functionality, which included the ability to obtain IP addresses from DHCP servers and to locate Cluster nodes on separate, routed subnets. Additionally, communications went from being UDP Broadcast transmissions to UDP Unicast, with a smattering of TCP connections thrown in for good measure. What this all adds up to is more reliable and robust communication connectivity within the Cluster, no matter where the Cluster nodes are located. It no longer matters if Cluster nodes sit in the same physical rack in the same datacenter or in a server rack in a remote datacenter at the end of an OC3 WAN connection. This makes the Cluster more tolerant of single points of failure, e.g. a failed Network Interface Card (NIC), hence the new driver name: Network Fault-Tolerant (NetFT.sys). The only real minimum requirement is multiple (at least two) redundant communication paths between all nodes in the Cluster. This way, the Cluster network driver (NETFT.SYS) can build a complete routing structure to provide the redundant communication connectivity the Cluster needs to keep applications and services highly available.

Note: Not having at least two networks available for cluster communications will result in a Warning (a violation of a 'best practice') being recorded during the Cluster validation process. This is noted in the hardware requirements under the 'Network adapters and cable' section.
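
The intra-cluster communications described above ride on port 3343, the same port that appears as ~3343~ in the log excerpts that follow. If you want to confirm a node is bound to that port, a minimal check from an elevated prompt looks like this (the PID value is a placeholder; substitute the value netstat reports):

netstat -ano | findstr ":3343"   # lists the UDP (and any TCP) endpoints bound to the cluster port
tasklist /fi "PID eq 1234"       # 1234 is a placeholder; maps the owning PID back to the cluster service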

To provide some examples of this new functionality without getting too deep into the new networking model, I generated a cluster log from a cluster node so I could illustrate how this new network model is reflected as the cluster service starts. In the cluster log, several entries are associated with NETFT. These include, but are not limited to, the following:

NETFT - Network Fault-Tolerant
TM - Topology Manager (Discovers and maintains the cluster network topology, reports failures of any networks or network interfaces, and configures the Microsoft Failover Cluster Virtual Adapter)
IM - Interface Manager (Responsible for the network interfaces that are part of a cluster configuration)
NETFTAPI - NETFT Application Programming Interface (API)
FTI - Fault-Tolerant Interface

As the cluster service starts, events are registered indicating NETFT is preparing for communications with other parts of the cluster architecture -

00000784.000007cc::2009/01/30-14:26:38.199 INFO [NETFT] FTI NetFT event handler ready for events.
00000784.000007b0::2009/01/30-14:26:39.369 INFO [NETFT] Starting NetFT eventing for TM
00000784.000007b0::2009/01/30-14:26:39.369 INFO [NETFT] TM NetFT event handler ready for events.
00000784.000007b0::2009/01/30-14:26:39.369 INFO [CS] Starting IM
00000784.000007b0::2009/01/30-14:26:39.369 INFO [NETFT] Starting NetFT eventing for IM
00000784.000007b0::2009/01/30-14:26:39.369 INFO [NETFT] IM NetFT event handler ready for events.

As connectivity is established with other nodes in the cluster, routes are added -

00000784.00000648::2009/01/30-14:26:39.744 INFO [NETFT] Added route <struct mscs::FaultTolerantRoute>
00000784.00000648::2009/01/30-14:26:39.744 INFO <realLocal>172.16.0.181:~3343~</realLocal>
00000784.00000648::2009/01/30-14:26:39.744 INFO <realRemote>172.16.0.182:~3343~</realRemote>
00000784.00000648::2009/01/30-14:26:39.744 INFO <virtualLocal>fe80::2474:73f1:4b12:8096:~3343~</virtualLocal>
00000784.00000648::2009/01/30-14:26:39.744 INFO <virtualRemote>fe80::8b6:30ea:caa3:8da7:~3343~</virtualRemote>
00000784.00000648::2009/01/30-14:26:39.744 INFO <Delay>1000</Delay>
00000784.00000648::2009/01/30-14:26:39.744 INFO <Threshold>5</Threshold>
00000784.00000648::2009/01/30-14:26:39.744 INFO <Priority>99</Priority>
00000784.00000648::2009/01/30-14:26:39.744 INFO <Attributes>1</Attributes>
00000784.00000648::2009/01/30-14:26:39.744 INFO </struct mscs::FaultTolerantRoute>
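
The <Delay> and <Threshold> values in the route above line up with the default cluster heartbeat settings: a heartbeat is sent every 1000 ms and the route is declared down after 5 are missed. On Windows Server 2008 R2 and later these are exposed as cluster common properties; a minimal sketch using the FailoverClusters PowerShell module (on Windows Server 2008 itself, cluster.exe /prop lists the same properties):

Import-Module FailoverClusters
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold
# Example only - relax the same-subnet threshold (change heartbeat settings deliberately, not by default):
# (Get-Cluster).SameSubnetThreshold = 10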

Additional events are registered as the routes to the nodes become 'reachable' -

00000784.0000039c::2009/01/30-14:26:39.759 DBG [NETFTAPI] Signaled NetftRemoteReachable event, local address 172.16.0.181:003853 remote address 172.16.0.182:003853
00000784.0000039c::2009/01/30-14:26:39.759 DBG [NETFTAPI] Signaled NetftRemoteReachable event, local address 172.16.0.181:003853 remote address 172.16.0.182:003853
00000784.0000039c::2009/01/30-14:26:39.759 DBG [NETFTAPI] Signaled NetftRemoteReachable event, local address 172.16.0.181:003853 remote address 172.16.0.182:003853
00000784.000004f4::2009/01/30-14:26:39.759 INFO [FTI] Got remote route reachable from netft evm. Setting state to Up for route from 172.16.0.181:~3343~ to 172.16.0.182:~3343~.
00000784.000002f4::2009/01/30-14:26:39.759 INFO [IM] got event: Remote endpoint 172.16.0.182:~3343~ reachable from 172.16.0.181:~3343~
00000784.000002f4::2009/01/30-14:26:39.759 INFO [IM] Marking Route from 172.16.0.181:~3343~ to 172.16.0.182:~3343~ as up
00000784.000001f8::2009/01/30-14:26:39.759 INFO [TM] got event: Remote endpoint 172.16.0.182:~3343~ reachable from 172.16.0.181:~3343~
00000784.00000648::2009/01/30-14:26:39.759 INFO [FTW] NetFT is ready after 0 msecs wait.
00000784.00000648::2009/01/30-14:26:39.759 INFO [FTI] Route is up and NetFT is ready. Connecting to node W2K8-CL2 on virtual IP fe80::8b6:30ea:caa3:8da7%15:~3343~
00000784.0000061c::2009/01/30-14:26:39.759 INFO [CONNECT] fe80::8b6:30ea:caa3:8da7%15:~3343~: Established connection to remote endpoint fe80::8b6:30ea:caa3:8da7%15:~3343~.

A consequence of the changes made to the Cluster networking model is that the Cluster network driver now manifests itself as a network adapter: a hidden adapter, but an adapter nonetheless.

image

While this adapter is hidden from normal view in Device Manager by default (you must select "Show hidden devices" to see it), it is plainly visible when listing the network configuration of a Cluster node with the ipconfig /all command.

image

Like other adapters, the Microsoft Failover Cluster Virtual Adapter has a MAC address and both IPv4 and IPv6 addresses assigned to it. The IPv4 address is an Automatic Private IP Addressing (APIPA) address and the IPv6 address is a non-routable Link-Local address, but that does not matter because all cluster communications are tunneled through the networks supported by the physical NICs, as shown here using the route information obtained during cluster service startup.

image
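
For reference, the details shown in these screenshots can also be pulled from the command line. ipconfig /all works on every version; Get-NetAdapter -IncludeHidden is only available on Windows Server 2012 and later. A sketch:

ipconfig /all   # the virtual adapter shows an APIPA IPv4 address and a link-local IPv6 address
Get-NetAdapter -IncludeHidden |
    Where-Object { $_.InterfaceDescription -like '*Failover Cluster Virtual Adapter*' } |
    Format-List Name, InterfaceDescription, MacAddress, Status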

The MAC address that is assigned to the Microsoft Failover Cluster Virtual Adapter is based on the MAC address of one of the physical NICs.

image

The Cluster network driver (netft.sys) is a kernel-mode driver and is started and stopped by the Cluster Service.

image
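
The same information is available from the command line with sc.exe (the .exe extension matters in PowerShell, where sc by itself is an alias for Set-Content):

sc.exe query netft   # current state of the Cluster network driver
sc.exe qc netft      # configuration: service type (kernel driver), start type, and image path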

The Cluster network driver has an entry under HKLM\System\CurrentControlSet\Services.

image
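
To inspect that service key directly, either of the following works; expect the standard service values such as Start, Type, and ImagePath:

Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\NetFT'
reg.exe query HKLM\SYSTEM\CurrentControlSet\Services\NetFT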

Additionally, there is an entry for the Microsoft Failover Cluster Virtual Adapter in the routing table for each Cluster node. Here are sample outputs for the three sections of the route print command executed on a Cluster node. The first part shows the listing of all the interfaces on the node. Interface 15 is the Microsoft Failover Cluster Virtual Adapter.

image

This next screen shows the IPv4 Route Table which reflects three entries for the Microsoft Failover Cluster Virtual Adapter.

image

And finally, the adapter appears in the IPv6 Route Table (If 15).

image
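
The same three sections can be produced one address family at a time, which keeps the output short; route print always prints the interface list first:

route print -4   # interface list plus the IPv4 route table
route print -6   # interface list plus the IPv6 route table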

So, how can one get in trouble? Here are a few ways:

  1. Disabling the Microsoft Failover Cluster Virtual Adapter.
  2. Sysprepping an installation of Windows Server 2008, 2008 R2, or 2012 with the Failover Clustering feature installed. This will cause an error in the Cluster Validation process. Windows Server 2012 R2 can be sysprepped with the Failover Clustering feature installed (a quick check for the feature before capturing an image is sketched after this list).
  3. Modifying any properties of the adapter.
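
For item 2, one way to check whether the Failover Clustering feature is present in an image before running sysprep is the ServerManager PowerShell module (Windows Server 2008 R2 and later); a minimal sketch:

Import-Module ServerManager
Get-WindowsFeature Failover-Clustering   # the Installed property shows whether the feature is present
# Remove it before sysprepping a 2008 R2 or 2012 image:
# Remove-WindowsFeature Failover-Clustering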

Hopefully, this gives you a better feel for this new functionality in Windows Server 2008 Failover Clusters. As stated at the beginning of the blog, the correct answer is to not do anything to the adapter - just let it work for you. Thanks, and we hope this has been helpful.

Chuck Timon and John Marlin
Senior Support Escalation Engineers
Microsoft Enterprise Platforms Support

Comments
  • Hi Chuck and John

    you just provided valuable info here on how cluster network communication works.

    I'm just having a problem configuring 2 cluster nodes with their own firewalls enabled.

    The cluster validation utility is reporting a bad firewall configuration, without saying which rule group is misconfigured, even though node failover is working fine!

    So can you provide us a list of firewall exception rules that should be set up for 2 (or more) cluster 2008 nodes?

    thanks for your precious work

  • What tool did you use in the screenshot which shows the drivers and their status?

    Regards,

    Thomas

  • Thanks for explaining this.

    I first noticed this adapter when I performed a ping of the local host name, as it replies on this adapter!

    MattMC

    Thanks for the info. I searched and found this article when the phenomenon occurred.

    Strange thing is that this adapter only exists on one of my nodes and only after reinstalling it.

    I run a two node Hyper-V cluster on 2008 R2. When we set it up a few months ago there were no Cluster Virtual Adapters on any of the nodes. Due to hardware problems on one of the nodes we have re-installed it. We first evicted the node from the cluster and cleaned out all traces of the old account, then reinstalled the OS from scratch on the new server and joined the cluster again. This "Microsoft Failover Cluster Virtual Adapter" now turned up on the newly installed node but does not exist on the other.

    On another cluster I manage which is also a two node 2008 R2 Hyper-V there are no such adapters.

    Is there a change in this behaviour between 2008 and 2008 R2?

    How come this adapter only shows up on one of the nodes in one of the clusters?

    /Mats

  • Jeff,

    Great article! Thanks for the explanation. Also, I think that in your "how to get in trouble" section, you should include the fact that some companies have the "DHCP client" disabled as a server policy. This requires some explanation as to why it is needed even though no dynamic IPs are required.


  • The problem is this virtual or hidden adapter's IP address then shows up when you ping the local host name instead of the actually assigned IP address of the server. This messes with name resolution when software tries to resolve the host name via ping.

  • Hi Chuck and John,

    we received the following error with event ID 1126

    Cluster network interface '%1- Cluster %2' for cluster node '%1' on network 'Cluster Network %3' is unreachable by at least one other cluster node attached to the network. The failover cluster was not able to determine the location of the failure. Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.

    Could you please explain the reason why we received this error in the event logs? Also, event ID 1129 is listed in the event logs. Could you please help us with this?

    Many Thanks

    Junaid

  • This note helped me a lot. The Virtual IP Address and Virtual Name could not be accessed from the main site to the alternate site.

    I had a case where I had 2 more NICs and the wizard took a different NIC than the one I used to configure the Cluster node, and the MAC address was not related to the NIC included in the Cluster; that was a crazy situation until I found the reason. And afterwards the router could not update the ARP request at the one end, so every time a failover happened the MAC address could not be updated by the router and the connection to the Nodes and Virtual IP was lost.

    Look at this note:

    support.microsoft.com/.../244331

    Thanks

  • Hi Chuck and John,

    While searching for a solution to an issue we currently have in a 2012 Hyper-V Cluster, I came across your blog entry. One Cluster Node stops with a bugcheck; following are the debug details:

    PROCESS_OBJECT: fffffa809d6ad980

    DEFAULT_BUCKET_ID:  WIN8_DRIVER_FAULT

    BUGCHECK_STR:  0x9E

    PROCESS_NAME:  System

    CURRENT_IRQL:  2

    TAG_NOT_DEFINED_c000000f:  FFFFF88003563FB0

    LAST_CONTROL_TRANSFER:  from fffff88000ce4845 to fffff802abef3040

    FOLLOWUP_IP:

    netft!NetftProcessWatchdogEvent+dd

    fffff880`00ce4845 cc              int     3

    SYMBOL_STACK_INDEX:  1

    SYMBOL_NAME:  netft!NetftProcessWatchdogEvent+dd

    FOLLOWUP_NAME:  MachineOwner

    MODULE_NAME: netft

    IMAGE_NAME:  netft.sys

    DEBUG_FLR_IMAGE_TIMESTAMP:  5010aa07

    BUCKET_ID_FUNC_OFFSET:  dd

    FAILURE_BUCKET_ID:  0x9E_netft!NetftProcessWatchdogEvent

    BUCKET_ID:  0x9E_netft!NetftProcessWatchdogEvent

    Followup: MachineOwner

    Hopefully you have an idea how to resolve this issue. The OS is Windows 2012 Datacenter with all updates installed.

    Thanks,

    Jens

  • Thanks a lot for the detailed information; it's really useful.

    BTW, I am having trouble with the "Microsoft Failover Cluster Virtual Adapter" on one of the SQL 2008 cluster nodes. I was upgrading the NIC firmware/driver on this server, and after this the "Microsoft Failover Cluster Virtual Adapter" is missing from the server (I can't see it in Device Manager). I am getting the error below:

    "The Cluster Service was unable to access network adapter 'Microsoft Failover Cluster Virtual Miniport'. Verify that other network adapters are functioning properly and check the device manager for errors associated with adapter 'Microsoft Failover Cluster Virtual Miniport'. If the configuration for adapter 'Microsoft Failover Cluster Virtual Miniport' has been changed, it may become necessary to reinstall the failover clustering feature on this computer."

    I referred to support.microsoft.com/.../973838, but it suggests re-installing the failover clustering feature :-( Is there any other way to fix this?

    Thanks,

    Chetan

  • Great Information.

  • How does installing the hotfix for running VMXNET3 adapters affect this?

    http://support.microsoft.com/default.aspx?scid=kb;en-US;2344941

  • Thanks for this article. I'm a little late to the party, however I noticed a cluster today where the Failover Cluster Virtual Adapter is running at 10Gbps even though the physical NICs are 1Gbps. I'll leave it alone, but it looks odd, and I'd like to understand why.