This section describes the known limits in Cloud Data Fabric that pertain to the file system, spoke portals, data caching, portal connectivity, portal relationships, and protocols.

General

Currently, it is possible to configure and manage Cloud Data Fabric functionality only by using the qq CLI.

File System

  • While Qumulo Core doesn’t support hard links between files local to the spoke portal host cluster and files within the spoke portal root directory, it does support hard links whose source and target are both outside, or both inside, the spoke portal root directory.
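
    As a minimal sketch of this rule, assuming a Linux client with the spoke portal host cluster mounted at the hypothetical path /mnt/spoke and the spoke portal root directory at /mnt/spoke/portal:

      import os

      # Supported: the existing file and the new link are both inside the
      # spoke portal root directory.
      os.link("/mnt/spoke/portal/data/a.txt", "/mnt/spoke/portal/data/a-link.txt")

      # Supported: the existing file and the new link are both outside the
      # spoke portal root directory.
      os.link("/mnt/spoke/local/b.txt", "/mnt/spoke/local/b-link.txt")

      # Not supported: a hard link that crosses the spoke portal root
      # directory boundary fails with OSError (the exact errno is an
      # assumption and can vary by protocol).
      try:
          os.link("/mnt/spoke/local/b.txt", "/mnt/spoke/portal/data/b-link.txt")
      except OSError as exc:
          print(f"cross-boundary hard link rejected: {exc}")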

Spoke Portals

  • It is possible to create up to 32 hub portals, or up to 32 spoke portals (Qumulo Core 7.5.0.3 and higher), on a single Qumulo cluster.

  • It isn’t possible to nest spoke portal root directories within other spoke portal root directories.

  • In Qumulo Core 7.6.2 (and higher), it is possible to configure up to 32 spoke portal root directories for each portal relationship on a Qumulo cluster.

Data Caching

  • Although first-time access to data in a portal root directory is subject to round-trip latency between the spoke portal host cluster and the hub portal host cluster, subsequent access to the data is faster. Changes to data under a portal root directory are also subject to this latency when the system recaches them upon access.
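
    A quick way to observe this behavior is to time a first (uncached) read against a repeat read. The following Python sketch assumes a hypothetical mount of the spoke portal at /mnt/spoke/portal; the client’s own page cache can also mask latency, so treat the numbers as illustrative:

      import time

      def timed_read(path: str) -> float:
          start = time.monotonic()
          with open(path, "rb") as f:
              while f.read(1 << 20):  # read the file in 1 MiB chunks
                  pass
          return time.monotonic() - start

      # The first read pulls the data from the hub portal host cluster.
      cold = timed_read("/mnt/spoke/portal/dataset/file.bin")
      # A repeat read is served from the spoke portal's cache.
      warm = timed_read("/mnt/spoke/portal/dataset/file.bin")
      print(f"cold: {cold:.3f}s, warm: {warm:.3f}s")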

  • The cache of a spoke portal is inherently ephemeral. You must not use it in place of data replication or backup.

Portal Connectivity

  • For a spoke portal to be accessible, the two clusters in a portal relationship must have full connectivity: every node in the spoke portal host cluster must be able to connect to the configured hub portal host cluster address, and every node in the hub portal host cluster must be able to connect to the spoke portal host cluster. Without full connectivity, files and directories with outstanding modifications on one portal are inaccessible on other portals.
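
    As a rough reachability check, the following Python sketch attempts a TCP connection to the configured hub portal host cluster address. The address and port are hypothetical placeholders; substitute the values from your portal configuration, and run the check from every node in each cluster, in both directions:

      import socket

      HUB_ADDRESS = "203.0.113.10"  # hypothetical hub portal host cluster address
      PORTAL_PORT = 3713            # assumption; use your configured portal port

      def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      print(f"{HUB_ADDRESS}:{PORTAL_PORT} reachable: {reachable(HUB_ADDRESS, PORTAL_PORT)}")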

  • A spoke portal is inaccessible if the hub portal host cluster and the spoke portal host cluster run different versions of Qumulo Core.
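
    One way to confirm that the versions match is to compare what each cluster reports through the Qumulo REST API. The following sketch assumes hypothetical cluster addresses and that the /v1/version endpoint is reachable on the default port 8000; the exact response fields can vary by release:

      import requests

      def core_version(host: str) -> str:
          # /v1/version reports the running Qumulo Core version.
          resp = requests.get(f"https://{host}:8000/v1/version", verify=False, timeout=10)
          resp.raise_for_status()
          return resp.json()["revision_id"]

      hub = core_version("hub.example.com")      # hypothetical address
      spoke = core_version("spoke.example.com")  # hypothetical address
      if hub != spoke:
          print(f"version mismatch: hub={hub!r}, spoke={spoke!r}; the spoke portal will be inaccessible")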

Portal Relationships

  • In Qumulo Core 7.5.2 (and higher), it is possible for a Qumulo cluster to host both up to 32 spoke portals and up to 32 hub portals at the same time.

    • Currently, Qumulo Core doesn’t support a single cluster establishing two portal relationships with the same remote cluster.

  • In Qumulo Core 7.5.0.1 to 7.5.1, it is possible for a Qumulo cluster to host up to 32 hub portals or up to 32 spoke portals, but not both.

  • Your cluster’s Qumulo Core version determines whether the pair of host clusters for each portal relationship must be unique. For example (see the sketch after this list):

    • A spoke portal on Cluster A can propose a relationship to a hub portal on Cluster B.

    • Another spoke portal on Cluster A can propose a relationship to a hub portal on Cluster C.

    • In Qumulo Core 7.5.2 (and higher), it is possible for a spoke portal on Cluster B to propose a relationship to a hub portal on Cluster A or Cluster C (despite Cluster B already having a hub portal).

    • In Qumulo Core versions lower than 7.5.2, another spoke portal on Cluster A can’t propose a relationship to a hub portal on Cluster B, because a relationship between those two host clusters already exists.
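
    The following toy model (not a Qumulo API) restates the rule that the example above walks through. Each relationship is a (spoke host cluster, hub host cluster) pair, and the cluster names match the example:

      # Existing relationships, as (spoke_host_cluster, hub_host_cluster) pairs.
      existing = {("A", "B"), ("A", "C")}

      def may_propose(spoke: str, hub: str, core_7_5_2_or_higher: bool) -> bool:
          if (spoke, hub) in existing:
              # A cluster can't establish two portal relationships with the
              # same remote cluster in the same direction.
              return False
          if not core_7_5_2_or_higher and (hub, spoke) in existing:
              # Before 7.5.2, the pair of host clusters must be unique.
              return False
          return True

      print(may_propose("A", "B", True))   # False: A -> B already exists
      print(may_propose("B", "A", True))   # True in Qumulo Core 7.5.2 and higher
      print(may_propose("B", "A", False))  # False in versions lower than 7.5.2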

Protocols

S3

  • Currently, Qumulo Core allows only partial access to portal data through the S3 protocol.

  • S3 buckets are always local to the Qumulo cluster on which they are created.
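
    For example, because buckets are local, listing buckets against each cluster’s S3 endpoint can return different results. The endpoints, port, and credentials below are hypothetical placeholders:

      import boto3

      def bucket_names(endpoint: str) -> set:
          s3 = boto3.client(
              "s3",
              endpoint_url=endpoint,
              aws_access_key_id="ACCESS_KEY_ID",          # placeholder
              aws_secret_access_key="SECRET_ACCESS_KEY",  # placeholder
          )
          return {bucket["Name"] for bucket in s3.list_buckets()["Buckets"]}

      hub_buckets = bucket_names("https://hub.example.com:9000")
      spoke_buckets = bucket_names("https://spoke.example.com:9000")
      print("buckets that exist only on the hub cluster:", hub_buckets - spoke_buckets)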

NFS

  • While NFSv3 is a stateless protocol, NFSv4.1 is a stateful protocol that permits open file handles to remain open after a file is unlinked. However, Qumulo Core doesn’t always maintain access to files deleted from a portal in a relationship. For example, if you open a file on the spoke portal host cluster and then delete the same file on the hub portal host cluster, an application that uses the file on the spoke portal host cluster loses access to it unexpectedly.
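
    The following sketch illustrates this failure mode on a client with an NFSv4.1 mount of the spoke portal at the hypothetical path /mnt/spoke/portal:

      import os

      fd = os.open("/mnt/spoke/portal/shared/log.txt", os.O_RDONLY)
      os.read(fd, 4096)  # works while the file exists

      # If the same file is now deleted on the hub portal host cluster, the
      # open handle is not preserved across the portal relationship: a later
      # read can fail unexpectedly (for example, with ESTALE), even though
      # NFSv4.1 normally keeps an unlinked-but-open file readable.
      try:
          os.read(fd, 4096)
      except OSError as exc:
          print(f"lost access to the deleted file: {exc}")
      finally:
          os.close(fd)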

  • When you authenticate over NFSv4.1 by using Kerberos, you can use Kerberos principals only from the Active Directory domain associated with the Qumulo cluster to which you are connected. It isn’t possible to use principals from a remote Qumulo cluster.

  • When you edit ACLs over NFSv4.1 by using tools such as nfs4_editfacl, you can use only Kerberos principals from the Active Directory domain associated with the Qumulo cluster to which you are connected. It isn’t possible to use principals from a remote Qumulo cluster.

  • Protocol locks don't synchronize between the hub portal host cluster and the spoke portal host cluster. Specifically, NFSv3 (NLM) byte-range locks, NFSv4.1 locking operations, SMB share-mode locks, SMB byte-range locks, and SMB leases function independently on the two clusters. For example, while two exclusive locks on the same spoke portal host cluster contend with each other, an exclusive lock on a spoke portal host cluster doesn’t contend with an exclusive lock on the hub portal host cluster.
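
    For example, with the spoke portal mounted at the hypothetical path /mnt/spoke/portal and the same directory mounted from the hub portal host cluster at /mnt/hub, fcntl byte-range locks behave as follows:

      import fcntl

      # An exclusive byte-range lock through the spoke portal host cluster.
      spoke_file = open("/mnt/spoke/portal/shared/data.bin", "rb+")
      fcntl.lockf(spoke_file, fcntl.LOCK_EX | fcntl.LOCK_NB)

      # A second exclusive lock on the same file through the same spoke
      # portal host cluster (from another process) would raise
      # BlockingIOError: the two locks contend. The same lock taken through
      # the hub portal host cluster succeeds, because locks don't
      # synchronize between the two clusters.
      hub_file = open("/mnt/hub/shared/data.bin", "rb+")
      fcntl.lockf(hub_file, fcntl.LOCK_EX | fcntl.LOCK_NB)  # does not contend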