This section explains how to deploy Cloud Native Qumulo (CNQ) on Azure by creating the persistent storage and the cluster compute and cache resources with Terraform. It also provides recommendations for Terraform deployments and information about post-deployment actions and optimization.
For an overview of CNQ on Azure, its prerequisites, and limits, see How Cloud Native Qumulo Works.
The azure-terraform-cnq-<x.y>.zip file (the version in the file name corresponds to the provisioning scripts, not to the version of Qumulo Core) contains comprehensive Terraform configurations that let you deploy Azure Storage Accounts and then create a CNQ cluster with 3 to 24 instances and fully elastic compute and capacity.
Prerequisites
This section explains the prerequisites to deploying CNQ on Azure.
- To allow your Qumulo instance to report metrics to Qumulo, your Azure Virtual Network must have outbound Internet connectivity through a NAT gateway or a firewall. Your instance shares no file data during this process.
  Important: Connectivity to the following endpoints is required for a successful deployment of a Qumulo instance and for quorum formation:
  - api.missionq.qumulo.com
  - api.nexus.qumulo.com
- Before you configure your Terraform environment, you must sign in to the az CLI (see the sketch after this list). Your Azure role assignments in the target subscription must include the following roles:
  - Reader
  - Contributor
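A minimal sketch of signing in and checking your role assignments with the az CLI; the subscription ID and user principal name below are placeholders:

  # Sign in and select the target subscription (placeholder ID).
  az login
  az account set --subscription "00000000-0000-0000-0000-000000000000"

  # Confirm that your account holds the Reader and Contributor roles
  # in the target subscription (replace the example principal name).
  az role assignment list \
    --assignee "user@example.com" \
    --query "[].roleDefinitionName" \
    --output tsv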
How the CNQ Provisioner Works
The CNQ Provisioner is an Azure Compute instance that configures your Qumulo cluster and any additional Azure environment requirements.
The Provisioner stores all necessary state information in Azure App Configuration (on the left navigation panel, click Operations > Configuration Explorer) and shuts down automatically when it completes its tasks.
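If you prefer the CLI to the Azure portal, you can also inspect the values that the Provisioner writes to App Configuration with a command such as the following; the store name is a placeholder that depends on your deployment:

  # List the keys and values that the Provisioner stored in
  # Azure App Configuration (replace the example store name).
  az appconfig kv list --name "my-deployment-appconfig" --output table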
Step 1: Deploying Cluster Persistent Storage
This section explains how to deploy the Azure Storage Accounts that act as persistent storage for your Qumulo cluster.
Your Azure Storage Account container must have the hierarchical namespace enabled.
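To confirm that the hierarchical namespace is enabled, you can query the account's isHnsEnabled property; the account name below is a placeholder:

  # Prints "true" when the hierarchical namespace is enabled
  # (replace the example Storage Account name).
  az storage account show --name "mystorageaccount" --query isHnsEnabled --output tsv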
- Log in to Nexus and click Downloads > Cloud Native Qumulo Downloads.
- On the Azure tab, in the Download the required files section, select the Qumulo Core version that you want to deploy and then download the corresponding Terraform configuration, Debian package, and host configuration file.
- In an Azure Storage Account container, create the qumulo-core-install directory. Within this directory, create another directory with the Qumulo Core version as its name. The following is an example path:
  my-storage_account/my-prefix/qumulo-core-install/7.2.3.2
  Tip: Make a new subdirectory for every new release of Qumulo Core.
- Copy qumulo-core.deb and host_configuration.tar.gz into the directory named after the Qumulo Core version (in this example, 7.2.3.2). For one way to do this with the az CLI, see the sketch after this procedure.
- Copy azure-terraform-cnq-<x.y>.zip to your Terraform environment and decompress it.
- Navigate to the persistent-storage directory and take the following steps:
  - Run the terraform init command.
    Terraform prepares the environment and displays the message Terraform has been successfully initialized!
  - Review the terraform.tfvars file:
    - Specify the deployment_name and the correct az_subscription_id for your cluster’s persistent storage.
    - Specify the correct az_location for your cluster’s persistent storage.
    - Leave the soft_capacity_limit at 1000.
  - Use the az CLI to authenticate to your Azure account.
  - Run the terraform apply command.
    Terraform displays the execution plan.
  - Review the Terraform execution plan and then enter yes.
    Terraform creates resources according to the execution plan and displays:
    - The Apply complete! message with a count of added resources
    - The names of the created Storage Accounts
    - Your deployment’s unique name
    For example:
      Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

      Outputs:

      bucket_names = [
        "ab5cdefghij1",
        "ab4cdefghij2",
        "ab3cdefghij3",
        "ab2cdefghij4",
      ]
      deployment_unique_name = "my-deployment-ABCDEFGH1IJ"
      ...
    Tip: You will need the deployment_unique_name value to deploy your cluster.
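The following sketch shows one way to perform the upload and persistent-storage steps above from the command line. The Storage Account, container, and prefix names are placeholders, the commands assume the hierarchical namespace is enabled, and the final command captures the deployment’s unique name from the Terraform output:

  # Create the versioned installation directory and upload the install files
  # (replace the example account, container, and prefix names).
  az storage fs directory create \
    --account-name "mystorageaccount" \
    --file-system "my-container" \
    --name "my-prefix/qumulo-core-install/7.2.3.2" \
    --auth-mode login

  for f in qumulo-core.deb host_configuration.tar.gz; do
    az storage fs file upload \
      --account-name "mystorageaccount" \
      --file-system "my-container" \
      --path "my-prefix/qumulo-core-install/7.2.3.2/${f}" \
      --source "${f}" \
      --auth-mode login
  done

  # Deploy the persistent storage and record the deployment's unique name.
  cd persistent-storage
  terraform init
  terraform apply
  terraform output -raw deployment_unique_name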
Step 2: Deploying Cluster Compute and Cache Resources
This section explains how to deploy compute and cache resources for a Qumulo cluster by using an Ubuntu image and the Qumulo Core .deb installer.
- Provisioning completes successfully when the provisioning instance shuts down automatically. If the provisioning instance doesn't shut down, the provisioning cycle has failed and you must troubleshoot it. To monitor the provisioner's status, you can watch the Terraform status posts in your terminal or in Azure App Configuration (on the left navigation panel, click Operations > Configuration Explorer).
- The first variable in the example configuration files in the azure-terraform-cnq repository is deployment_name. To help avoid conflicts between resource groups and other deployment components, Terraform ignores the deployment_name value and any changes to it. Terraform generates the additional deployment_unique_name variable; appends a random, 7-digit alphanumeric value to it; and then tags all future resources with this variable, which never changes during subsequent Terraform deployments.
- If you plan to deploy multiple Qumulo clusters, give the q_cluster_name variable a unique name for each cluster.
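The first step of the procedure below requires the Microsoft.KeyVault and Microsoft.Storage service endpoints on the subnet that hosts the cluster. A hedged sketch of adding them with the az CLI; all resource names are placeholders:

  # Add the required service endpoints to the cluster subnet
  # (replace the example resource group, VNet, and subnet names).
  az network vnet subnet update \
    --resource-group "my-resource-group" \
    --vnet-name "my-vnet" \
    --name "my-subnet" \
    --service-endpoints Microsoft.KeyVault Microsoft.Storage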
- Configure your Virtual Network to use the Microsoft.KeyVault and Microsoft.Storage service endpoints.
  Important: It isn’t possible to deploy your cluster without these endpoints.
- Navigate to the azure-terraform-cnq-<x.y> directory and then run the terraform init command.
  Terraform prepares the environment and displays the message Terraform has been successfully initialized!
- In terraform.tfvars, fill in the values for all variables.
  For more information, see README.pdf in azure-terraform-cnq-<x.y>.zip.
- Run the terraform apply command.
  Terraform displays the execution plan.
- Review the Terraform execution plan and then enter yes.
  Terraform creates resources according to the execution plan and displays:
  - The Apply complete! message with a count of added resources
  - Your deployment’s unique name
  - The floating IP addresses for your Qumulo cluster
  - The primary (static) IP addresses for your Qumulo cluster
  - The Qumulo Core Web UI endpoint
  For example:
    Apply complete! Resources: 62 added, 0 changed, 0 destroyed.

    Outputs:

    cluster_provisioned = "Success"
    deployment_unique_name = "my-deployment-ABCDEFGH1IJ"
    ...
    persistent_storage_bucket_names = tolist([
      "ab5cdefghij1",
      "ab4cdefghij2",
      "ab3cdefghij3",
      "ab3cdefghij3",
    ])
    qumulo_floating_ips = [
      "203.0.113.42",
      "203.0.113.84",
      ...
    ]
    ...
    qumulo_primary_ips = [
      "203.0.113.0",
      "203.0.113.1",
      "203.0.113.2",
      "203.0.113.3"
    ]
    ...
    qumulo_private_url_node1 = "https://203.0.113.10"
- To log in to your cluster’s Web UI, use the endpoint from the Terraform output, and use the username and password that you configured during deployment as your credentials.
  Important: If you change the administrative password for your cluster by using the Qumulo Core Web UI, qq CLI, or REST API after deployment, you must add your new password in Azure App Configuration (on the left navigation panel, click Operations > Configuration Explorer).
  You can use the Web UI to create and manage NFS exports, SMB shares, snapshots, and continuous replication relationships. You can also join your cluster to Active Directory, configure LDAP, and perform many other operations.
- Mount your Qumulo file system by using NFS or SMB and your cluster’s DNS name or IP address (see the example below).
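For example, from a Linux client you might mount the file system over NFS or SMB as follows; the IP address, export path, and share name are placeholders:

  # Mount over NFS v3 (replace the IP address and export path).
  sudo mkdir -p /mnt/qumulo
  sudo mount -t nfs -o vers=3 203.0.113.10:/ /mnt/qumulo

  # Or mount an SMB share (requires cifs-utils; replace the share name).
  sudo mount -t cifs //203.0.113.10/Files /mnt/qumulo -o username=admin,vers=3.0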
Step 3: Performing Post-Deployment Actions
This section describes the common actions you can perform on a CNQ cluster after deploying it.
Adding a Node to an Existing Cluster
To add a node to an existing cluster, the total node count must be greater than that of the current deployment.
- Edit terraform.tfvars and change the value of q_node_count to a new value.
- Run the terraform apply command.
  Terraform displays the execution plan.
- Review the Terraform execution plan and then enter yes.
  Terraform changes resources according to the execution plan and displays an additional primary (static) IP address for the new node. For example:
    qumulo_primary_ips = [
      "203.0.113.0",
      "203.0.113.1",
      "203.0.113.2",
      "203.0.113.3",
      "203.0.113.4"
    ]
- To ensure that the Provisioner shuts down automatically, review the last-run-status parameter in Azure App Configuration (on the left navigation panel, click Operations > Configuration Explorer). For a CLI alternative, see the sketch after this list.
- To check that the cluster is healthy, log in to the Web UI.
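A sketch of reading the Provisioner's status with the az CLI; the App Configuration store name is a placeholder, and the key path depends on your deployment:

  # Read the Provisioner's final status from Azure App Configuration
  # (replace the example store name; adjust the key to match your deployment).
  az appconfig kv show --name "my-deployment-appconfig" --key "last-run-status" --output table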
Increasing the Soft Capacity Limit for an Existing Cluster
Increasing the soft capacity limit for an existing cluster is a two-step process:
- Configure new persistent storage parameters.
- Configure new compute and cache deployment parameters.
Step 1: Set New Persistent Storage Parameters
- Edit the terraform.tfvars file in the persistent-storage directory and set the q_cluster_soft_capacity_limit variable to a higher value.
- Run the terraform apply command.
  Review the Terraform execution plan and then enter yes.
  Terraform creates new Azure Storage Accounts as necessary and displays:
  - The Apply complete! message with a count of changed resources
  - The names of the created Azure Storage Accounts
  - Your deployment’s unique name
  - The new soft capacity limit
  For example:
    Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

    Outputs:

    bucket_names = [
      "ab5cdefghij-my-deployment-klmnopqr9st-qps-1",
      "ab4cdefghij-my-deployment-klmnopqr8st-qps-2",
      "ab3cdefghij-my-deployment-klmnopqr7st-qps-3",
      "ab3cdefghij-my-deployment-klmnopqr7st-qps-3",
    ]
    deployment_unique_name = "lucia-deployment-GKMVD58UF2F"
    ...
    soft_capacity_limit = "1000 TB"
Step 2: Update Existing Compute and Cache Resource Deployment
- Navigate to the root directory of the azure-terraform-cnq-<x.y> repository.
- Run the terraform apply -var-file config-standard.tfvars command.
  Review the Terraform execution plan and then enter yes.
  Terraform updates the necessary roles and Azure Storage Account policies, adds Azure Storage Accounts to the persistent storage list for the cluster, increases the soft capacity limit, and displays the Apply complete! message.
  When the Provisioner shuts down automatically, this process is complete.
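Taken together, the two steps above might look like this from the command line, assuming the persistent-storage directory sits inside the azure-terraform-cnq-<x.y> directory as in this guide's layout:

  # Step 1: raise the soft capacity limit on the persistent storage
  # (after editing the soft capacity variable in terraform.tfvars).
  cd persistent-storage
  terraform apply

  # Step 2: update the compute and cache deployment from the repository root.
  cd ..
  terraform apply -var-file config-standard.tfvars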
Deleting an Existing Cluster
Deleting a cluster is a two-step process:
- Delete your Cloud Native Qumulo resources.
- Delete your persistent storage.
- When you no longer need your cluster, you must back up all important data on the cluster before deleting it.
- When you delete your cluster’s cache and compute resources, you can no longer access your persistent storage.
Step 1: To Delete Your Cluster’s Cloud Native Qumulo Resources
- Back up your data safely.
- Run the terraform destroy command.
  Review the Terraform execution plan and then enter yes.
  Terraform deletes all of your cluster’s CNQ resources and displays the Destroy complete! message and a count of destroyed resources.
Step 2: To Delete Your Cluster’s Persistent Storage
- Navigate to the persistent-storage directory.
- Run the terraform destroy command.
  Review the Terraform execution plan and then enter yes.
  Terraform deletes all of your cluster’s persistent storage and displays the Destroy complete! message and a count of destroyed resources.
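A sketch of the full teardown order, assuming the directory layout used throughout this guide; destroy the compute and cache resources first, then the persistent storage:

  # Step 1: destroy the cluster's compute and cache resources
  # from the azure-terraform-cnq-<x.y> repository root.
  terraform destroy

  # Step 2: destroy the persistent storage.
  cd persistent-storage
  terraform destroy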