This section explains how to network a Quiver 1U All-NVMe Gen1 cluster.
To identify the eth port, run the following command:

```shell
for i in /sys/class/net/eth*; do
  echo "$i"
  grep -i pci_slot "$i/device/uevent"
done
```
Prerequisites
Before you create your Qumulo cluster, if your client environment requires Jumbo Frames (9,000 MTU), configure your switch to support a higher MTU.
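If you enable jumbo frames, you can verify end-to-end 9,000 MTU support from a Linux client with a do-not-fragment ping before creating the cluster. The following is a sketch; the target address is a placeholder for a host in your environment.

```shell
# Placeholder: substitute a reachable client-facing address for TARGET.
TARGET=203.0.113.10

MTU=9000
# Maximum ICMP payload at a given MTU: subtract the 20-byte IP header
# and the 8-byte ICMP header.
PAYLOAD=$((MTU - 28))
echo "Probing ${MTU} MTU with a ${PAYLOAD}-byte payload"

# -M do sets the Don't Fragment bit, so the probe fails loudly if any
# hop along the path has a smaller MTU:
# ping -M do -s "${PAYLOAD}" -c 3 "${TARGET}"
```

If the ping succeeds without fragmentation, every hop between the client and the target supports the 9,000 MTU path.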
Your node requires the following resources.

- A network switch with the following specifications:
  - 100 Gbps Ethernet
  - Fully non-blocking architecture
  - IPv6 capability
- Compatible networking cables
- A sufficient number of ports for connecting all nodes to the same switch fabric
- One static IP for each node, for each defined VLAN
Recommended Configuration
- Single NIC: This platform uses a unified networking configuration in which the same NIC handles back-end and front-end traffic. In this configuration, each networking port provides communication with clients and between nodes. You can connect the NIC’s ports to the same switch or to different switches. However, for greater reliability, we recommend connecting both 100 Gbps ports on every node to each switch.
- Dual NIC: This platform uses a split networking configuration in which different NICs handle back-end and front-end traffic. You can connect the front-end and back-end NICs to the same switch or to different switches. However, for greater reliability, we recommend connecting all four 100 Gbps ports on every node: Connect both front-end NIC ports to the front-end switch and both back-end NIC ports to the back-end switch.
- Never network single-NIC and dual-NIC nodes within the same cluster.
- Never configure a dual-NIC platform with a unified networking configuration.
- We don't recommend connecting to a single back-end NIC port because the node becomes unavailable if that single connection fails.
We recommend the following configuration for your node.

- Your Qumulo MTU configured to match your client environment
- Physical connections
  - Single NIC: Two physical connections for each node, one connection for each redundant switch
  - Dual NIC: One physical connection for each node, for each redundant switch
- One Link Aggregation Control Protocol (LACP) port-channel for each network on each node, with the following configuration:
  - Active mode
  - Slow transmit rate
  - Access port or trunk port with a native VLAN
- DNS servers
- A Network Time Protocol (NTP) server
- Firewall protocols or ports allowed for proactive monitoring
- Where N is the number of nodes, N-1 floating IP addresses for each node, for each client-facing VLAN
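The floating IP requirement scales with cluster size: with N nodes, each node needs N-1 floating IPs per client-facing VLAN, so each VLAN needs N × (N-1) floating IPs in total. A quick sketch of the arithmetic, using hypothetical counts of four nodes and two client-facing VLANs:

```shell
# Hypothetical cluster size and VLAN count; adjust for your environment.
NODES=4
VLANS=2

# Per the sizing rule: N-1 floating IPs per node, per client-facing VLAN.
PER_NODE=$((NODES - 1))
PER_VLAN=$((NODES * PER_NODE))
TOTAL=$((PER_VLAN * VLANS))

echo "Floating IPs per node, per VLAN: ${PER_NODE}"
echo "Floating IPs per VLAN:           ${PER_VLAN}"
echo "Total floating IPs to reserve:   ${TOTAL}"
```

For this example, you would reserve 12 floating IPs per VLAN (24 in total), in addition to the static IPs.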
Connecting to Redundant Switches
For redundancy, we recommend connecting your cluster to dual switches. If either switch becomes inoperative, the cluster remains accessible from the remaining switch.
Single NIC
- Connect the two NIC ports (2 × 100 Gbps) on your nodes to separate switches.
- The uplinks to the client network must equal the bandwidth from the cluster to the switch.
- The two ports form an LACP port channel by using a multi-chassis link aggregation group.
- Use an appropriate inter-switch link or virtual port channel.
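The uplink-sizing rule above can be sketched numerically. Assuming a hypothetical four-node, single-NIC cluster with both 100 Gbps ports connected, the switch receives 800 Gbps from the cluster and should have uplinks of matching total bandwidth to the client network:

```shell
# Hypothetical cluster: four single-NIC nodes, two 100 Gbps ports each.
NODES=4
PORTS_PER_NODE=2
PORT_GBPS=100

# Total cluster-to-switch bandwidth that the client-network uplinks
# should match.
CLUSTER_GBPS=$((NODES * PORTS_PER_NODE * PORT_GBPS))
echo "Cluster-to-switch bandwidth: ${CLUSTER_GBPS} Gbps"
echo "Provision client-network uplinks totaling ${CLUSTER_GBPS} Gbps"
```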
Dual NIC
- Front End
  - Connect the two front-end NIC ports (2 × 100 Gbps) on your nodes to separate switches.
  - The uplinks to the client network must equal the bandwidth from the cluster to the switch.
  - The two ports form an LACP port channel by using a multi-chassis link aggregation group.
- Back End
  - Connect the two back-end NIC ports (2 × 100 Gbps) on your nodes to separate switches.
  - Use an appropriate inter-switch link or virtual port channel.
- Link Aggregation Control Protocol (LACP)
  - For all connection speeds, the default behavior is LACP with 1,500 MTU for the front-end interfaces and 9,000 MTU for the back-end interfaces.
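The active mode and slow transmit rate settings above are applied on the switch side. On a generic Linux host, you can sanity-check the resulting 802.3ad aggregate through the kernel's bonding status file. This is a minimal sketch, assuming a hypothetical bond named `bond0`:

```shell
# Hypothetical bond name; adjust for your environment.
BOND=bond0
BOND_INFO="/proc/net/bonding/${BOND}"

if [ -r "${BOND_INFO}" ]; then
  # Confirm 802.3ad mode, the LACP rate, and per-member link status.
  grep -E 'Bonding Mode|LACP|MII Status|Slave Interface' "${BOND_INFO}"
else
  echo "No bonding information for ${BOND}"
fi
```

A healthy aggregate reports the 802.3ad bonding mode and an up MII status for every member port.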
Connecting to a Single Switch
You can connect your cluster to a single switch. If this switch becomes inoperative, the entire cluster becomes inaccessible.
Single NIC
- Connect the two NIC ports (2 × 100 Gbps) on your nodes to a single switch.
- The uplinks to the client network must equal the bandwidth from the cluster to the switch.
- The two ports form an LACP port channel.
Dual NIC
- Front End
  - Connect the two front-end NIC ports (2 × 100 Gbps) to a single switch.
  - The uplinks to the client network must equal the bandwidth from the cluster to the switch.
  - The two ports form an LACP port channel.
- Back End
  - Connect the two back-end NIC ports (2 × 100 Gbps) to a single switch.
- Link Aggregation Control Protocol (LACP)
  - For all connection speeds, the default behavior is LACP with 1,500 MTU for the front-end interfaces and 9,000 MTU for the back-end interfaces.

The dual-NIC variant of your Quiver 1U All-NVMe Gen1 node uses a split networking configuration. Ensure that the front-end and back-end networks are connected and operational before creating your cluster. If only one of the networks is connected and operational during the cluster creation process, Qumulo Core deploys with the unified networking configuration.
Four-Node Cluster Architecture Diagrams
Single NIC
The following is the recommended configuration for a four-node cluster connected to an out-of-band management switch and redundant switches.
Dual NIC
The following is the recommended configuration for a four-node cluster connected to an out-of-band management switch, redundant front-end switches, and redundant back-end switches.