Building Windows Server 2025 Core Hyper-V and then a Cluster for VMs

Dupler,Michael 0 Reputation points
2026-04-01T17:34:32.2+00:00

I have been tasked with building a Hyper-V and cluster for VMs.

I have two Cisco UCS B200 M5 two-socket blade servers on which I did a bare-metal install of Windows Server 2025 Core. I made the mistake of doing all the networking first. Now, when I use Windows Admin Center to create the cluster, it wants me to blank out the switches and start over.

I currently have 6 NICs: 2 for management, 2 for Prod, and the last 2 for iSCSI SAN access.

Questions:

1: How many NICs do I need for clustering? 2 x 1 GbE per blade for the heartbeat, plus however many Prod NICs? So 3 minimum for clustering?

2: Is it OK to make the management vSwitch before the cluster is made? For management, live migration (whatever MSFT calls vMotion), and backups.

3: When you make the

I'm not sure how this works. Since I have NICs doing iSCSI to the SANs to present drives to the Hyper-V hosts, do I need the "single switch (compute + storage)" option?

thanks

Windows for business | Windows Server | Storage high availability | Virtualization and Hyper-V

2 answers

Sort by: Most helpful
  1. Domic Vo 19,030 Reputation points Independent Advisor
    2026-04-01T20:55:21.2533333+00:00

    Hello,

    Let’s break this down with precision because clustering and Hyper‑V networking can get messy if the foundation isn’t right.

    For clustering, you don’t need a dedicated pair of NICs just for heartbeat anymore. Failover Clustering in Windows Server 2022/2025 uses all available cluster networks and dynamically selects the best path. As long as you have at least one reliable, redundant network path between nodes, the cluster heartbeat will function. In practice, you should have at least two independent networks so the cluster can fail over if one path is lost. With six NICs per blade, you’re in a good position: management, production, and iSCSI can all be separated cleanly.
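    Once the cluster exists, you can verify from PowerShell that Failover Clustering sees multiple independent networks and how it will use each one. A minimal sketch; the network name "iSCSI" is a placeholder for whatever name your cluster actually assigns:

    ```powershell
    # Run on any cluster node after the cluster is formed.
    # Requires the FailoverClusters module (RSAT-Clustering-PowerShell).
    Import-Module FailoverClusters

    # List every network the cluster discovered and its role:
    #   ClusterAndClient = cluster + management traffic
    #   Cluster          = cluster-only (internal) traffic
    #   None             = excluded from cluster use (typical for iSCSI)
    Get-ClusterNetwork | Format-Table Name, Role, Address, State

    # Example: ensure the iSCSI network carries no cluster traffic.
    (Get-ClusterNetwork -Name "iSCSI").Role = "None"
    ```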

    It is fine to create your management vSwitch before building the cluster. Windows Admin Center’s cluster creation wizard prefers to take control of networking from scratch, but that’s not mandatory. If you already have a management vSwitch bound to your management NICs, you can keep it. For live migration (Microsoft’s equivalent of vMotion) and backup traffic, you can either carve out a dedicated vSwitch or reuse the production vSwitch with QoS policies. Best practice is to isolate live migration on its own NICs if you have the capacity, but it’s not a hard requirement.
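    As a sketch, once the management vSwitch exists you can point live migration at a dedicated network and cap concurrent migrations with the built-in Hyper-V module. The subnet and limit below are illustrative assumptions, not values from this thread:

    ```powershell
    # Run on each Hyper-V host (built-in Hyper-V PowerShell module).
    Enable-VMMigration

    # Use SMB as the migration transport (alternatives: TCPIP, Compression).
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

    # Only migrate over an approved subnet, not any available network.
    # 10.0.2.0/24 is an example migration subnet.
    Set-VMHost -UseAnyNetworkForMigration $false
    Add-VMMigrationNetwork "10.0.2.0/24"

    # Limit simultaneous live migrations (example value).
    Set-VMHost -MaximumVirtualMachineMigrations 2
    ```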

    Regarding iSCSI, you should not collapse compute and storage traffic onto a single vSwitch. iSCSI should remain on its own physical NICs with dedicated IPs, ideally on a separate VLAN or physical fabric. Mixing VM traffic and storage traffic on the same vSwitch introduces latency and congestion. The cluster will see the iSCSI paths as storage networks, not cluster communication networks, so keep them isolated.
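    On the iSCSI side, the usual Server Core pattern is MPIO plus one persistent session per iSCSI NIC. A hedged sketch; the portal address and initiator IPs are placeholders for your SAN's values:

    ```powershell
    # Run on each host. Install and enable MPIO for iSCSI
    # (a reboot may be required after enabling the feature).
    Install-WindowsFeature -Name Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Make sure the iSCSI initiator service runs at boot.
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Register the SAN portal, then connect one persistent session
    # per iSCSI NIC. 192.168.50.x addresses are examples only.
    New-IscsiTargetPortal -TargetPortalAddress 192.168.50.10
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true `
        -IsPersistent $true -InitiatorPortalAddress 192.168.50.21
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true `
        -IsPersistent $true -InitiatorPortalAddress 192.168.50.22
    ```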

    So the clean design for your UCS blades would be: management vSwitch on the two management NICs, production vSwitch on the two production NICs, and iSCSI bound directly to the two storage NICs without a vSwitch. That gives you redundancy, separation of traffic types, and compliance with Microsoft’s clustering best practices.

    I hope you've found something useful here. If this helps resolve the issue, accepting the answer would be appreciated. Should you have more questions, feel free to leave a message. Have a nice day!

    Domic Vo.


  2. Q&A Assist
    2026-04-01T17:34:56.89+00:00
    AI-generated content. This question contains an answer created with AI from Q&A Assist.
    1. For clustering NICs

    From the clustering guidance, at least two independent networks are recommended so that cluster heartbeats are not a single point of failure.

    • Use at least two networks for Failover Clustering so that a single switch or NIC failure does not take the cluster down. All networks that are found are used for cluster heartbeats.
    • In practice on each blade, this usually means at least:
      • One management/cluster network
      • One additional network (can be converged with VM/storage traffic if designed that way)

    So the minimum is not “3 NICs for clustering” as such, but “at least two separate networks” per node. With 6 NICs per blade (2 mgmt, 2 prod, 2 iSCSI) there are already multiple networks available, which satisfies the “at least two networks” recommendation for Failover Clustering.

    iSCSI and SMB storage traffic should use multiple networks as well for reliability and performance, which aligns with having dedicated iSCSI NICs.

    2. Creating the management vSwitch before the cluster

    The Windows Admin Center cluster-creation workflow expects to control virtual switch creation and may prompt to remove existing vSwitches and recreate them. The documented flow is:

    • On “Check the network adapters”, it may ask to remove existing virtual switches and let Windows Admin Center create the appropriate switches.
    • On “Select the adapters to use for management”, at least one dedicated physical NIC must be selected for cluster management. Once an adapter is designated for management, it is excluded from the rest of the wizard workflow.
    • Then, on the “Virtual switch” page, the wizard offers:
      • Skip virtual switch creation
      • Create one virtual switch for compute and storage together
      • Create one virtual switch for compute only
      • Create two virtual switches (compute + storage)

    If a management vSwitch is created manually beforehand, Windows Admin Center may want to remove it to align with its own configuration model. To avoid this conflict, either:

    • Let the wizard create the virtual switches, or
    • Choose “Skip virtual switch creation” in the wizard and manage all vSwitches manually later, understanding that the wizard will still require at least one dedicated physical NIC for management.
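    If you take the "skip" route, a minimal manual layout with Switch Embedded Teaming (SET) might look like this. The adapter and switch names are assumptions; check yours first with `Get-NetAdapter`:

    ```powershell
    # Run on each host. "Mgmt1/Mgmt2" and "Prod1/Prod2" are example
    # adapter names: Get-NetAdapter | Format-Table Name, InterfaceDescription
    New-VMSwitch -Name "vSwitch-Mgmt" -NetAdapterName "Mgmt1","Mgmt2" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $true

    New-VMSwitch -Name "vSwitch-Prod" -NetAdapterName "Prod1","Prod2" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $false

    # Leave the two iSCSI NICs out of any vSwitch; they stay host-owned.
    ```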
    3. Choice of “single switch (compute + storage)” vs others with iSCSI SAN

    The “Virtual switch” options in the Windows Admin Center cluster wizard are about Hyper‑V virtual switches used for VM and Storage Spaces Direct (S2D) traffic, not about traditional iSCSI SAN NICs:

    • “Create one virtual switch for compute and storage together” is the converged option where the same vSwitch carries VM traffic and S2D storage traffic.
    • “Create one virtual switch for compute only” is for VM traffic only.
    • “Create two virtual switches” is for separate vSwitches for VM and S2D storage traffic.

    The support matrix:

    | vSwitch option | 1–2 adapters | 3+ adapters | Teamed adapters |
    | --- | --- | --- | --- |
    | Single switch (compute + storage) | Supported | Supported | Not supported |
    | Single switch (compute only) | Not supported | Supported | Supported |
    | Two switches | Not supported | Supported | Supported |

    Dedicated iSCSI NICs that connect to a SAN and present LUNs to the Hyper‑V hosts are not part of the Hyper‑V vSwitch; they are used by the host OS for storage connectivity. For those NICs, the guidance is:

    • iSCSI and SMB both use the network to connect storage to cluster members.
    • Use multiple networks (physical or logical) to ensure reliable storage connectivity and performance.

    Therefore:

    • iSCSI NICs to the SAN do not require “single switch (compute + storage)” because they are not attached to the Hyper‑V virtual switch.
    • Choose the vSwitch option based on whether Storage Spaces Direct is used and whether VM and S2D traffic should be converged.
    • If RDMA is required for SMB Direct/Storage Spaces Direct, avoid NIC Teaming on those RDMA NICs, as NIC Teaming disables RDMA.
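    You can check whether the NICs you plan to team actually expose RDMA before deciding:

    ```powershell
    # Shows RDMA capability and whether it is enabled per adapter.
    Get-NetAdapterRdma | Format-Table Name, Enabled

    # Note: classic LBFO NIC Teaming disables RDMA on member NICs;
    # Switch Embedded Teaming (SET) preserves it.
    ```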

    With 6 NICs per host, a typical pattern is:

    • 1–2 physical NICs dedicated to management/cluster.
    • 2 physical NICs for VM/“prod” traffic, optionally in a team or SET switch (not RDMA).
    • 2 physical NICs dedicated to iSCSI (no vSwitch, no teaming if RDMA/iSCSI offloads are needed).

    This layout satisfies the “at least two networks” requirement for clustering and the recommendation to use multiple networks for storage connectivity.
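    With a layout like the one above in place, validation and cluster creation reduce to two cmdlets. The node names and cluster IP below are placeholders, not values from this thread:

    ```powershell
    # Run once from either node. Requires the Failover Clustering feature:
    # Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
    Test-Cluster -Node "HV-NODE1","HV-NODE2"

    # -NoStorage keeps the wizard from claiming the iSCSI LUNs automatically;
    # add them later as Cluster Shared Volumes.
    New-Cluster -Name "HVCLUSTER" -Node "HV-NODE1","HV-NODE2" `
        -StaticAddress 10.0.1.50 -NoStorage
    ```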


