This section explains how to wire NIC ports on HPE Apollo 4200 Gen10 nodes and how to network a cluster.

This platform uses a unified networking configuration in which the same NIC handles back-end and front-end traffic. In this configuration, each networking port provides communication with clients and between nodes. You can connect the NIC’s ports to the same switch or to different switches.

Node NIC and Ports

The following table lists the NIC ports on HPE Apollo 4200 Gen10 nodes.

NIC ports on HPE Apollo 4200 Gen10 nodes

NIC Manufacturer   Port Location   Port Label
Broadcom           Top             0 (eth2)
Broadcom           Bottom          1 (eth3)
Mellanox           Top             1 (eth2)
Mellanox           Bottom          2 (eth3)
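
If you script against these ports, the table above can be expressed as a small lookup. The following Python sketch simply mirrors the table; the dictionary, function name, and lowercase location keys are illustrative assumptions, not part of any Qumulo tooling.

    # Mirror of the NIC port table above; names and structure are illustrative.
    PORT_MAP = {
        ("Broadcom", "top"): ("0", "eth2"),
        ("Broadcom", "bottom"): ("1", "eth3"),
        ("Mellanox", "top"): ("1", "eth2"),
        ("Mellanox", "bottom"): ("2", "eth3"),
    }

    def port_info(nic: str, location: str) -> tuple:
        """Return the (port label, interface name) for a NIC port."""
        return PORT_MAP[(nic, location.lower())]

    print(port_info("Mellanox", "Top"))  # prints: ('1', 'eth2')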

Prerequisites

  • A network switch with the following criteria:
    • Ethernet connection
      • 36T and 90T: 25, 40, or 100 Gbps
      • 192T: 100 Gbps
      • 336T: 25 Gbps or 40 Gbps
    • Fully non-blocking architecture
    • IPv6 compatibility
  • Compatible network cables
  • A sufficient number of ports for connecting all nodes to the same switch fabric
  • One static IP address for each node, for each defined VLAN
  • Two redundant switches
  • One physical connection to each redundant switch, for each node
  • One Link Aggregation Control Protocol (LACP) port-channel for each node with the following configuration:
    • Active mode
    • Slow transmit rate
    • Trunk port with a native VLAN
    • Enabled IEEE 802.3x flow control (full-duplex mode)
  • DNS servers
  • Network Time Protocol (NTP) server
  • Firewall protocols or ports configured for Qumulo Care Proactive Monitoring
  • Where N is the number of nodes in the cluster, N-1 floating IP addresses (up to 10) for each node, for each client-facing VLAN (for an example of the resulting address counts, see the sketch after this list)
  • Nodes connected at their maximum Ethernet speed (this ensures advertised performance). To avoid network bottlenecks, Qumulo validates system performance with this configuration by using clients connected at the same link speed and to the same switch as the nodes.
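
The static and floating IP prerequisites above translate into a per-VLAN address count. The following Python sketch shows that arithmetic for illustration only; the function name and the returned field names are assumptions, not part of any Qumulo tool.

    # Count the IP addresses one VLAN needs, following the prerequisites above:
    # one static IP per node per VLAN, and N-1 floating IPs (up to 10) per node
    # per client-facing VLAN, where N is the number of nodes.
    def addresses_per_vlan(node_count: int, client_facing: bool) -> dict:
        static_ips = node_count
        floating_per_node = min(node_count - 1, 10) if client_facing else 0
        floating_ips = floating_per_node * node_count
        return {"static": static_ips,
                "floating": floating_ips,
                "total": static_ips + floating_ips}

    # Example: a 4-node cluster on one client-facing VLAN needs 4 static
    # and 4 x 3 = 12 floating addresses.
    print(addresses_per_vlan(4, client_facing=True))
    # prints: {'static': 4, 'floating': 12, 'total': 16}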

Connecting to Redundant Switches

This section explains how to connect a four-node HPE cluster to dual switches for redundancy. We recommend this configuration for HPE hardware. If either switch becomes inoperative, the cluster remains accessible through the remaining switch.

  • Connect the two 25 Gbps, 40 Gbps, or 100 Gbps ports on each node to separate switches.
  • Use at least one port on both switches as an uplink to the client network. To ensure an acceptable level of physical network redundancy and to meet the necessary client access throughput rates, use an appropriate combination of 10 Gbps, 25 Gbps, 40 Gbps, or 100 Gbps network uplinks.
  • Use at least one peer link between the switches.
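
Before cabling, it can help to write down which node port goes to which switch. The following Python sketch produces one such plan for the four-node, dual-switch layout described above; the node names, port labels, and switch names are placeholders.

    # Assign each node's two NIC ports to separate switches, as described above.
    NODES = ["node1", "node2", "node3", "node4"]        # placeholder names
    SWITCHES = ["switch-a", "switch-b"]                 # placeholder names

    def cabling_plan(nodes, switches):
        plan = []
        for node in nodes:
            plan.append((node, "port 1", switches[0]))  # first NIC port
            plan.append((node, "port 2", switches[1]))  # second NIC port
        return plan

    for node, port, switch in cabling_plan(NODES, SWITCHES):
        print(f"{node} {port} -> {switch}")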

Connecting to a Single Switch

This section explains how to connect a four-node HPE cluster to a single switch.

  • Connect the two 25 Gbps, 40 Gbps, or 100 Gbps ports on each node to the switch.
  • Connect any uplink ports to the client network.