This section explains how to wire NIC ports on HPE Apollo 4200 Gen9 nodes and how to network a cluster.

This platform uses a unified networking configuration in which the same NIC handles back-end and front-end traffic. In this configuration, each networking port provides communication with clients and between nodes. You can connect the NIC’s ports to the same switch or to different switches.

Node NICs and Ports

The following diagrams show the NICs and ports on HPE Apollo 4200 Gen9 node types.

288T (Dual NIC)

NIC ports on the HPE Apollo 4200 Gen9 288T (dual NIC) node type, Port 1 (eth4) at the top and Port 2 (eth5) at the bottom. Currently, NIC2 (on the left) is unused.

288T (Single NIC)

NIC ports on the HPE Apollo 4200 Gen9 288T (single NIC) node type, Port 2 (eth3) at the top and Port 1 (eth2) at the bottom.

180T

NIC ports on the HPE Apollo 4200 Gen9 180T node type, Port 1 (eth2) at the top and Port 2 (eth3) at the bottom. Currently, NIC2 (on the right) is unused.

90T

NIC ports on the HPE Apollo 4200 Gen9 90T node type, Port 2 (eth3) at the top and Port 1 (eth2) at the bottom.

Prerequisites

  • A network switch with the following criteria:
    • 40 Gbps Ethernet connection
    • Fully non-blocking architecture
    • IPv6 compatibility
  • Compatible network cables
  • A sufficient number of ports for connecting all nodes to the same switch fabric
  • One static IP address for each node, for each defined VLAN
  • Two redundant switches
  • One physical connection to each redundant switch, for each node
  • One Link Aggregation Control Protocol (LACP) port-channel for each node with the following configuration:
    • Active mode
    • Slow transmit rate
    • Trunk port with a native VLAN
    • Enabled IEEE 802.3x flow control (full-duplex mode)
  • DNS servers
  • Network Time Protocol (NTP) server
  • Firewall protocols or ports configured for Qumulo Care Proactive Monitoring
  • Where N is the number of nodes, up to 10 × (N-1) floating IP addresses for each node, for each client-facing VLAN (see the planning sketch after this list)
  • Nodes connected at their maximum Ethernet speed (this ensures advertised performance). To avoid network bottlenecks, Qumulo validates system performance with this configuration by using clients connected at the same link speed and to the same switch as the nodes.
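
As a planning aid, the following sketch tallies the IP addresses that the prerequisites above call for. It is a minimal illustration, not a sizing tool: the node count and VLAN counts are hypothetical, and it assumes one static IP address for each node on each VLAN and up to 10 × (N-1) floating IP addresses for each node on each client-facing VLAN, as listed above.

    # Hypothetical IP-planning sketch based on the prerequisites above.
    # Assumptions: one static IP per node, per VLAN, and up to 10 x (N-1)
    # floating IPs per node, per client-facing VLAN, where N is the node count.
    def ip_plan(nodes: int, vlans: int, client_facing_vlans: int) -> dict:
        static_ips = nodes * vlans  # one static IP per node, per VLAN
        max_floating_per_node_per_vlan = 10 * (nodes - 1)
        max_floating_total = max_floating_per_node_per_vlan * nodes * client_facing_vlans
        return {
            "static_ips": static_ips,
            "max_floating_ips_per_node_per_vlan": max_floating_per_node_per_vlan,
            "max_floating_ips_total": max_floating_total,
        }

    # Example: a four-node cluster with a single VLAN that also faces clients.
    print(ip_plan(nodes=4, vlans=1, client_facing_vlans=1))
    # {'static_ips': 4, 'max_floating_ips_per_node_per_vlan': 30, 'max_floating_ips_total': 120}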

Connecting to Redundant Switches

This section explains how to connect a four-node HPE cluster to dual switches for redundancy. We recommend this configuration for HPE hardware. If either switch becomes inoperative, the cluster remains accessible through the remaining switch.

  • Connect the two 40 Gbps ports on each node to separate switches (see the cabling sketch after this list).
  • Use at least one port on both switches as an uplink to the client network. To ensure an acceptable level of physical network redundancy and to meet the necessary client access throughput rates, use an appropriate combination of 10 Gbps, 25 Gbps, 40 Gbps, or 100 Gbps network uplinks.
  • Use at least one peer link between the switches.
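
To make the cabling unambiguous before racking, the sketch below writes out a port-to-switch map for a four-node cluster. It is a hypothetical planning helper, not part of the product: the assignment of Port 1 to switch A and Port 2 to switch B is an assumption for illustration only; the requirement above is simply that each node's two ports land on separate switches.

    # Hypothetical cabling-plan sketch for a four-node cluster on redundant switches.
    # Assumption: Port 1 of every node connects to switch A and Port 2 to switch B;
    # the steps above require only that the two ports go to separate switches.
    def redundant_cabling_plan(node_count: int) -> list[tuple[str, str, str]]:
        plan = []
        for n in range(1, node_count + 1):
            plan.append((f"node{n}", "Port 1", "switch-A"))
            plan.append((f"node{n}", "Port 2", "switch-B"))
        return plan

    for node, port, switch in redundant_cabling_plan(4):
        print(f"{node}: {port} -> {switch}")

Because each node's single LACP port-channel spans both switches in this layout, the switch pair typically needs a multi-chassis link aggregation feature (such as MLAG or vPC) in addition to the peer link; consult your switch vendor's documentation.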

Connecting to a Single Switch

This section explains how to connect a four-node HPE cluster to a single switch.

  • Connect both 40 Gbps ports on each node to the switch, as shown in the sketch below.
  • Connect any uplink ports to the client network.
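
The same hypothetical planning helper adapts directly to the single-switch layout: both ports of every node go to the one switch.

    # Single-switch variant of the hypothetical cabling-plan sketch:
    # both 40 Gbps ports on every node connect to the same switch.
    def single_switch_cabling_plan(node_count: int) -> list[tuple[str, str, str]]:
        return [
            (f"node{n}", port, "switch-A")
            for n in range(1, node_count + 1)
            for port in ("Port 1", "Port 2")
        ]

    for node, port, switch in single_switch_cabling_plan(4):
        print(f"{node}: {port} -> {switch}")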