Qumulo Getting Started Guide
Version 2.3
2020
Overview
Welcome to our Getting Started Guide! Here you’ll find all the details you need to install your new
nodes, configure your cluster, and hit the ground running as a new customer. While this guide serves
as a great starting point, there’s so much more you can do with Qumulo!
For a deeper dive into our features and administering your cluster, be sure to visit our Qumulo Care
support portal where you can open a case, read articles and watch videos in our online content library,
check out our product release notes, and get involved in the Qumulo community to make your voice
heard.
If you do have any additional questions or want to provide some feedback, we would love to hear from
you! Feel free to open a case, shoot an email over to [email protected], or ping us in your private
Slack channel so that we can get you the answers you need.
2. Technical Specifications
2.1 QC Series 1U, 4U, and C-Series
2.2 Qumulo P-Series 2U
2.3 Qumulo K-Series 1U
2.4 Qumulo for HPE
4. Networking
4.1 Recommendations for QC Series
4.2 Recommendations for Qumulo K-Series
4.3 Recommendations for Qumulo P-Series
4.4 Recommendations for Qumulo C-Series
4.5 Configure LACP
5. Create a Cluster
5.1 Set up cluster
5.2 Confirm cluster protection level
5.3 Create a password for your admin account
6. Configure IP Failover
6.1 Web UI
6.2 QQ CLI
Mechanical Loading
Mounting of the equipment in the rack or cabinet should be such that a hazardous condition is not
achieved due to uneven mechanical loading.
Circuit Overloading
Consideration should be given to the connection between the equipment and the supply circuit.
Appropriate consideration of equipment nameplate ratings should be used when addressing the
effect that overloading the circuits might have on overcurrent protection and supply wiring.
Reliable Earthing
Reliable earthing of rack-mounted equipment should be maintained. Particular attention should be
given to supply connections other than direct connections to the branch circuit (e.g., use of power
strips).
Servicing
Disconnect all power supplies prior to servicing the equipment.
Caution: Risk of explosion if battery is replaced by incorrect type. Dispose of used batteries according
to the instructions provided.
Additional technical specifications for these platforms are provided by HPE. Check out the links below
for details.
3.1 QC Series 1U
1. Slide the inner rail in place and verify the front end of the rail.
2. Place the front of the rail into the holes on the rack using the numbers as a guide.
5. Place node into the rail system by aligning the rails between the node and the rack.
6. Release the blue button on the side of the node to slide the node and rails into the rack.
7. Tighten the thumbscrew to secure the node in place.
8. Attach the network cables (3) and plug in the power cables on the back of the node (5).
9. Connect any one of the nodes to a display, keyboard and mouse (4).
11. Check that all drive lights (red, blue, green) illuminate before proceeding to create a cluster.
3.2 QC Series 4U
1. Slide the inner rail in place and verify the front end of the rail.
2. Place the front of the rail into the holes on the rack using the numbers as a guide.
3. Hold the lock and place the rear of the rail into the holes using the same numerical placement
as the front.
4. Release to lock the rail in place.
8. Insert the included hard drives (HDD) into any open slot on the node.
9. Attach the network cables and plug in the power cables on the back of the node.
12. Check that all drive lights (red, blue, green) illuminate before proceeding to create a cluster.
4. Repeat the steps above to install the rear of the sled using the same numerical placement as
the front.
CAUTION! Sleds do not fully extend like other rail systems and are stationary in the racks. Use caution
when installing or removing nodes.
8. Verify that all HDDs in the drive drawer are fully seated.
9. Push the drive drawer back into place until the drawer latch clicks.
10. Attach the network cables and plug in the power cables on the back of the node.
CAUTION: Do not use the LOM ports. Only use the external NIC ports for the 25Gb connections as
highlighted above.
11. Connect any one of the nodes to a display, keyboard and mouse.
2. Place the front of the rail into the holes on the rack using the numbers as a guide.
3. Hold the lock and place the rear of the rail into the holes using the same numerical placement
as the front; release to lock the rail in place.
10. Check that all drive lights illuminate confirming that drives and nodes are ready for
configuration.
2. Press the release lever on the front end while aligning the sled into the holes on the rack.
CAUTION! Sleds do not fully extend like other rail systems and are stationary in the racks. Use caution
when installing or removing nodes.
11. Place the back of the node on the sleds and slide the node into the rack.
12. Tighten the two front thumbscrews to secure the node in place.
13. Press the drawer latch up on the front of the node and pull out the drawer using the handle.
14. Verify that all HDDs in the drive drawer are fully seated.
15. Push the drive drawer back into place until the drawer latch clicks.
16. Attach the network cables and plug in the power cables on the back of the node.
13. Connect any one of the nodes to a display, keyboard and mouse.
14. Turn on the nodes by pressing the power button on the front.
1. Shut down the node and connect it to a display, keyboard, and mouse.
2. Plug in the Qumulo Core Installer USB key to an available USB port.
3. Press the power button highlighted below to power the node on and wait for the machine’s
boot screen to display.
1. Press F11 to access the Boot Menu when prompted at the HPE ProLiant screen. Note that this
boot may take a few minutes.
IMPORTANT! DO NOT run the following Field Verification Tool if any live data is present on the node.
3. Type 1 or FVT on the main menu to continue with the test.
4. Type 2 or VERIFY and hit ENTER to check the node configuration.
5. Review the results and consider the following before proceeding with a clean install of Qumulo
Core:
● FAIL messages reported from VERIFY are not indicative of an unsuccessful FLASH
command and can be resolved with a power-cycle to reflect recent firmware changes.
● FAIL messages on the boot order when running VERIFY can be ignored at this time.
If all fields pass, you may skip the FLASHING OF HPE INTELLIGENT PROVISIONING FIRMWARE
section and continue cluster configuration by following the steps outlined in the INSTALL QUMULO
CORE VIA THE USB KEY section.
If the category for the Intelligent Provisioning Version returns FAILED, execute the steps in the
FLASHING OF HPE INTELLIGENT PROVISIONING FIRMWARE section below. Once complete, return to
step 3 in this section and run the VERIFY command for FVT. If all fields pass, you may continue to the
INSTALL QUMULO CORE VIA THE USB KEY section.
IMPORTANT! ONLY execute these instructions if the Intelligent Provisioning check in the FVT failed.
There is no method available within the system to flash the HPE Intelligent Provisioning firmware
on the HPE Apollo 4200. To acquire the firmware, download the binary file from the HPE Support
Center and follow the instructions below.
1. Place the ISO in a network location that is accessible to the node.
2. On the virtual media page, select Insert Media and check the boot on next reboot option for
the ISO.
3. Reset the node and allow the install to complete.
IMPORTANT! If you mistype DESTROY ALL DATA three times or type no, the installation will be
aborted.
The node will automatically shut down once the installation of Qumulo Core is complete. At that time,
remove the USB stick and press the power button to turn on the node. A successful install using the
Qumulo Core USB Installer Key will boot the node to the End User Agreement page, the first step in
creating a new cluster with Qumulo Core. Before you agree and continue, repeat the steps outlined
above for each node that will be included in your Qumulo cluster. Leave the display, keyboard and
mouse connected to the last imaged node and follow the instructions below to Create a Cluster.
1. Shut down the node and connect it to a display, keyboard, and mouse.
2. Plug in the Qumulo Core Installer USB key to an available USB port.
3. Press the power button highlighted below to power the node on and wait for the machine’s
boot screen to display.
● If the Boot Mode is Legacy BIOS, disregard the rest of the steps in this section and
proceed to the BOOT TO QUMULO CORE USB INSTALLER KEY section.
● If the Boot Mode is not Legacy BIOS, press F9 to access the System Utilities menu
and proceed with the subsequent steps.
1. Press F11 to access the Boot Menu when prompted at the HPE ProLiant screen. Note that this
boot may take a few minutes.
IMPORTANT! DO NOT run the following Field Verification Tool if any live data is present on the node.
The Field Verification Tool will automatically start after reboot.
The test results are displayed once the test has concluded. Refer to the following sections for details
on Pass and Fail scenarios.
If you see an FVT passed! message, proceed to the Installing Qumulo Core section. If FAIL messages
are present, review the example below to determine the appropriate course.
When presented with this menu, select option 1 to have the tool attempt to fix the issues. If the fixes
are successful, the FVT will automatically reboot the node. Return to the Boot To The Qumulo Core
USB Installer Key section to re-attempt verification and continue the install.
Non-Fixable Issues
If the FVT is unable to automatically fix any failures detected, the message “Not fixable issues were
detected” will display after the failure reasons are provided. Non-fixable issues include the following:
● BIOS version
● iLO version
● NIC firmware
Now that the server has verified it is ready to be configured, you can start to install Qumulo Core.
IMPORTANT! Be sure to store the key in a secure location for the lifetime of the cluster.
The node will automatically shut down once the installation of Qumulo Core is complete. At that time,
remove the USB stick and press the power button to turn on the node. A successful install using the
Qumulo Core USB Installer Key will boot the node to the End User Agreement page, the first step in
creating a new cluster with Qumulo Core. Before you agree and continue, repeat the steps outlined
above for each node that will be included in your Qumulo cluster. Leave the display, keyboard and
mouse connected to the last imaged node and follow the instructions below to Create a Cluster.
For additional guidance on cluster configuration and getting started, see the Qumulo
Installation FAQ article in the Getting Started section of Qumulo Care.
For additional details on configuring your network, check out the Networking section available on
Qumulo Care.
TIP! For IPMI configuration details, check out the IPMI Quick Reference Guide on Qumulo Care for
information on port location and setup.
NOTE: Currently only the left-most network card is utilized on the 4U platforms. The card on the right
is reserved for future expansion and is not available for use.
Below are the different types of supported bonding for active port communication:
● Link aggregation control protocol (LACP)
○ Active-active functionality
○ Requires switch-side configuration
○ May span multiple switches when utilizing multi-chassis link aggregation
● Active-backup NIC bonding
○ Automatic fail-back
○ Does not require switch-side configuration
○ All active ports must reside on the same switch
CAUTION: Do not use the LOM ports. Only use the external NIC ports for the 10Gb connections as
highlighted below.
Recommendations:
● One set of redundant switches
○ Jumbo Frame support with a minimum of 9000 MTU
● One physical connection per node to each redundant switch
● One LACP port-channel on each node
○ Active mode
○ Slow transmit rate
○ Trunk port with a native VLAN
● N-1 (N=number of nodes) floating IPs per node per client-facing VLAN
● DNS servers
● Time server (NTP)
● Firewall protocol/ports allowed to Enable Proactive Monitoring
TIP! For IPMI configuration details, check out the IPMI Quick Reference Guide for Qumulo K-Series on
Qumulo Care for information on port location and setup.
Recommendations:
● One set of redundant switches for the front end network with minimum 9000 MTU configured
● One set of redundant switches for the back end network with minimum 9000 MTU configured
● One physical connection per node to each redundant switch
● One LACP port-channel per network (front end and back end) on each node with the
following:
○ Active mode
○ Slow transmit rate
○ Trunk port with a native VLAN
● N-1 (N=number of nodes) floating IPs per node per client-facing VLAN
● DNS servers
● Time server (NTP)
● Firewall protocol/ports allowed to Enable Proactive Monitoring
TIP! For IPMI configuration details, check out the IPMI Quick Reference Guide for information on port
location and setup.
Front End
● The two front end NIC ports (2x40Gb or 2x100Gb) on the nodes are connected to separate
switches
● Uplinks to the client network should equal the bandwidth from the cluster to the switch
● The two ports form an LACP port channel via multi-chassis link aggregation group
Front End
● Each node contains two front end ports (2x40Gb or 2x100Gb) that are connected to the switch
● Uplinks to the client network should equal the bandwidth from the cluster to the switch
● The two ports form an LACP port channel
Back End
● Each node contains two back end ports (2x40Gb or 2x100Gb) that are connected to the switch
● For all connection speeds, the default behavior is LACP with a 9000 MTU. This Qumulo
configuration, in conjunction with a Link-Aggregation configuration on the switch side and
associated 9216 network MTU, benefits from both link redundancy and increased bandwidth
capacity.
● For all connection speeds, the default behavior is LACP with 9000 MTU. The switch should be
running in a Link-Aggregation configuration with 9216 network MTU. You can optionally
configure these ports in Active-Backup through the qq command-line interface. Please see
the section below for those instructions.
Set the Back End MTU and Bonding Mode via QQ CLI
The bonding mode and MTU for the back end network can be configured via the qq command line.
To view the current configuration of a back end interface, run the following command:
qq network_get_interface --interface-id 2
IMPORTANT! If Active_Backup is set, please ensure all configurations related to LACP or other
redundancy methodologies are removed from the switch configurations on the affected ports. Failure
to do so may result in unpredictable behaviors.
To list both interfaces on the back end network, run the following command:
qq network_list_interfaces
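To change the bonding mode or MTU, the interface can be modified with qq network_mod_interface. The following is a minimal sketch, assuming interface ID 2 is the back end interface as in the example above; confirm the available flags for your Qumulo Core release with qq network_mod_interface -h:
qq network_mod_interface --interface-id 2 --bonding-mode ACTIVE_BACKUP --mtu 9000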
CAUTION: Do not use the LOM ports. Only use the external NIC ports for the 25Gb connections as
highlighted below.
Recommendations:
● One set of redundant switches
○ Jumbo Frame support with a minimum of 9000 MTU
● One physical connection per node to each redundant switch
● One LACP port-channel on each node
○ Active mode
○ Slow transmit rate
○ Trunk port with a native VLAN
● N-1 (N=number of nodes) floating IPs per node per client-facing VLAN
● DNS servers
● Time server (NTP)
● Firewall protocol/ports allowed to Enable Proactive Monitoring
TIP! For IPMI configuration details, check out the IPMI Quick Reference Guide for Qumulo C-Series on
Qumulo Care for information on port location and setup.
To avoid this behavior, you can explicitly set the ports to active-backup with the following command:
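A minimal sketch, assuming the front end network is interface ID 1; confirm the interface ID with qq network_list_interfaces and the flags for your Qumulo Core release:
qq network_mod_interface --interface-id 1 --bonding-mode ACTIVE_BACKUP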
NOTE: Be aware that making these changes will trigger a cluster event while the new network bond
is negotiated per node and will result in a small outage.
NOTE: The total capacity for the cluster is dynamically updated at the bottom of the page when
selecting nodes.
NOTE: The option for selecting the drive protection level is only available at cluster creation and
cannot be changed after the fact.
To access the dashboard in the Qumulo Core UI remotely, use any node's IP address to connect via
web browser.
For example, in a BIND zone, your records may look something like this where 10.101.1.201-204 is the
floating range:
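A minimal sketch of such a zone file, assuming a hypothetical host name of qumulo and one round-robin A record per floating address:
qumulo    IN    A    10.101.1.201
qumulo    IN    A    10.101.1.202
qumulo    IN    A    10.101.1.203
qumulo    IN    A    10.101.1.204
With round-robin resolution, DNS rotates through these records so client connections are spread across the floating IPs.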
Mac Client
mount -t nfs -o rsize=65536,wsize=65536,intr,hard,tcp,locallocks,rdirplus,readahead=128 your.qumulo.ip:/share /path/to/mountpoint
Linux Client
● Please note that modern Linux distributions auto-negotiate a 1MB read/write block size
(an rsize/wsize of 1048576).
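A minimal Linux mount sketch, relying on the auto-negotiated defaults above and assuming a hypothetical export named /share:
sudo mount -t nfs your.qumulo.ip:/share /path/to/mountpoint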
You can use the Qumulo Core Web UI or the CLI to set up IP Failover on your Qumulo cluster as
detailed below.
6.1 Web UI
1. Log in to your cluster's Web UI as 'admin'.
2. Hover over the Cluster menu and select Network Configuration.
3. On the Network Configuration page, click on Edit Static Settings.
4. In the fields for Persistent IPv4 Addresses and Floating IPv4 Addresses, enter your fixed and
floating ranges.
5. Click Save.
6.2 QQ CLI
1. Using a node's IP address, ssh to the cluster as admin.
2. Log in as root:
sudo -s
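From there, the floating IP ranges can be set with qq network_mod_network. A minimal sketch, assuming the default network ID of 1 and a hypothetical floating range; verify the flags for your Qumulo Core release with qq network_mod_network -h:
qq network_mod_network --network-id 1 --floating-ip-ranges 10.101.1.201-10.101.1.204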
Note: We recommend assigning enough floating IP addresses so that each node has the total
number of nodes minus one floating IP addresses (up to 10 per node). The math to use is (N-1)×N,
where N is the total number of nodes in the cluster. Assuming many client connections, this best
practice helps evenly distribute the connections from a lost node onto the remaining nodes as
needed. For example, in a 4-node cluster, each node has 3 floating IPs; when 1 node goes offline, one
of its 3 floating IPs floats to each of the remaining 3 nodes.
● Whitelist missionq.qumulo.com, ep1.qumulo.com, and monitor.qumulo.com and permit
outbound HTTPS traffic over port 443
NOTE: If the firewall performs Stateful Packet Inspection (sometimes called SPI or Deep Packet
Inspection), the firewall admin must explicitly Allow OpenVPN (SSL VPN) rather than simply opening
port 443.
7.1 Mac
1. Download and unzip the zip file that your Customer Success Manager provided onto a
computer running Mac OS X on the same network as the cluster.
2. Bring up a terminal and copy the 3 files onto one of the nodes (see the example after these steps).
3. SSH to the same node where you’ve copied the VPN key files.
5. Proceed to Final Steps below.
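For step 2 above, a minimal sketch using scp, assuming the key files were unzipped to a hypothetical ~/Downloads/vpn-keys directory:
scp ~/Downloads/vpn-keys/* admin@<node ip address>:/home/admin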
7.2 Windows
1. Download the latest version of putty.exe and pscp.exe from here onto a Windows machine.
2. Download and unzip the zip file that your Customer Success Manager provided onto the same
Windows machine on the same network as the cluster.
3. Bring up a command line window, browse to the folder that contains putty.exe and pscp.exe
and copy the three files onto one of the nodes.
cd \Users\<username>\Downloads\
pscp \<VPN Key file path>\* admin@<node ip address>:/home/admin
admin@<node ip address>
sudo qq get_vpn_keys
rm /home/admin/*.key
rm /home/admin/*.crt
sudo qq node_state_get
4. Send the Customer Success team the output and provide the name of the cluster.
5. Enable the Qumulo Care Remote Support option via the Web UI.
6. Notify Customer Success Team when this is complete so that VPN connectivity can be tested
and the cluster can be added to Qumulo’s Cloud-Based Monitoring service.
To use Qumulo’s proactive monitoring, make sure that you have done the following:
● Installed VPN Keys as instructed above
● Protocols/ports allowed to the following destination hostnames as outlined in the table below:
Once enabled, the following data will be collected by Qumulo so that our team can proactively reach
out if an incident occurs.
NOTE: Qumulo’s Cloud-Based Monitoring service does not collect file & path names, client IP
addresses, and login information (such as usernames & passwords).
1. In the Web UI, hover over the Support menu and click Qumulo Care.
2. Click the Edit button for Cloud-Based Monitoring.
4. Click Save.
Once enabled, Cloud-Based Monitoring will display as Enabled | Connected on the Qumulo Care
page.
qq set_monitoring_conf --enabled
qq set_monitoring_conf --disabled
qq monitoring_conf
Our team receives alerts 24/7 for the following incidents via Cloud-based Monitoring so that we can
be available for help when you need it the most:
Depending on the severity of the issue and the current state of the cluster, a member of our team
will reach out in the following ways. For most incidents listed above, your team will be notified via
Slack or email. For critical alerts, our team will call the phone number provided for the technical
contact to resolve the issue. Reference the table below for additional details.
Remote Support relies on a VPN connection (IPv6 configurations not supported) from your cluster to
a server accessed only by Qumulo using industry standard authentication and encryption. To secure
this connection, VPN Keys are installed on each Qumulo node in /etc/openvpn at initial installation.
Once Remote Support is enabled on your cluster, an authorized member of the Qumulo Care team
can open a connection to your cluster via the openvpn tunnel that is closed by default. This
connection will remain established for a fixed period of four hours or can be modified per customer
security requirements if necessary.
IMPORTANT! If your company has an intrusion detection device or firewall that performs SSL/HTTPS
Deep Packet Inspection, you will need to add an exception for the ep1.qumulo.com IP address. Run
the command below on your cluster to identify the IP address for ep1.qumulo.com:
nslookup ep1.qumulo.com
1. In the Web UI, hover over the Support menu and click Qumulo Care.
2. Click the Edit button for Remote Support.
qq set_monitoring_conf --vpn-enabled
qq set_monitoring_conf --vpn-disabled
Lastly, verify the cluster's support configuration by using the following command:
qq monitoring_conf
1. The customer initiates a VPN connection by enabling the Remote Support option in the UI on
the Qumulo Care page.
2. The customer notifies the Qumulo Customer Success team that Remote Support is enabled.
We highly recommend that you enable Cloud-Based Monitoring with Remote Support so that our
team can proactively provide fast support when you need it the most.
These are the permissions of the root directory of a newly-created Qumulo Cluster. One User Account
and two groups are given rights to the root share by default:
● Qumulo\admin (User): All ACEs except Full Control and Delete for “This folder only”
● Qumulo\users (Group): “Modify” ACL for “This folder only”
● Everyone (Group): “Modify” ACL for “This folder only”
● User will be able to create files and directories in the current and all future directories.
● User will be able to read all files and file attributes and list all directories in the current and all
future directories.
● User will be able to delete or rename all files and directories in the current and all future
directories.
● User will be able to change ownership and permissions for all files and directories in the
current and all future directories.
● This is the default group that all non-Guest accounts belong to at time of account creation.
● User will be able to read all files and file attributes and list all directories in the root directory
and any future directories created by other members of the Qumulo\users group in the root
directory.
● User will be able to rename, delete and modify permissions on any files or directories created
by this user in the current directory and in any subsequent sub-directories created in this
directory.
● User will be able to create or append new files and directories in the root directory and in any
subsequently created sub-directories. The new files and directories created will be owned by
this user and will receive the following permissions:
○ File/Folder Creator - “Modify” ACL
○ Everyone (Group) - “Read” ACL
○ Qumulo\Users (Group) - “Read” ACL
NOTE: This means that the files and directories inside the Qumulo root share cannot be modified by
anyone other than Qumulo admin users and users that are implicitly granted permission to do so.
This includes all other non-admin members of the Qumulo\users group.
Guest will be able to create files and directories in the Qumulo share root directory, as inherited
from the root directory's Everyone permissions ACL.
Files created by Guest will have the owner Qumulo\guest and receive the following permissions:
● Guest - “Modify” ACL
● Everyone (Group) - “Read” ACL
● Qumulo\Guests (Group) - “Read” ACL
Non-Qumulo admin members of other user groups will be able to read files and list directories
created by Guest but will not be able to write to, append, or modify those files or directories. Guest
will be able to modify permissions and change ownership of files and directories created by this account.
3. Click Save to create the new export and add it to the NFS Exports page.
A list of SMB shares displays, including the name of each SMB share and corresponding file system
path.
3. Click Create Share to create the new share and add it to the SMB Shares page.
NOTE: When you add a Deny entry, it is added to the top of the listing, while Allow entries are added
to the bottom. This ensures that users who are explicitly denied access are processed before access
is granted to anyone else.
○ If you have both SMB and NFS users, input an NFS UID that matches the user's POSIX
UID on their client machine.
○ Optionally, click the Groups tab and select the user's primary group, and any other
groups they should belong to. Note that while a user can be a member of multiple
groups, there can only be one primary group per user.
3. Click the Create button when finished.
You will now be able to connect to an SMB share or mount an NFS export as a Qumulo user. Keep in
mind that for NFS users, the UID/GIDs of users in their Linux/Unix/Mac environment need to match
the UID/GIDs used when creating users above.
1. In the Web UI, hover over the Cluster menu and click Active Directory under Authentication
and Authorization.
2. Fill in the following mandatory fields:
○ Domain Name: name of your domain. Example: ad.example.com
○ Domain Username: the user account or service account you will use to authenticate
against the domain
○ Domain Password: the password for the user account or service account
3. Fill in the following two optional fields:
○ NetBIOS name: This is the first portion of the fully-qualified domain name. If your
Qumulo cluster name is Qumulo and you are joined to the ad.example.com domain,
then your NetBIOS name will be Qumulo.
○ Organizational Unit (OU): If known, this information can be entered and can normally
be obtained from your Systems Administrator. If unknown, leave it blank and Qumulo
will attempt to join the domain without an OU specified.
6. Optionally, enter your Base DN for User and Group Accounts in the text field.
7. Click Join.
NOTE: DNS entries will be automatically created from the node used to add the cluster to Active
Directory and can be removed without issue.
In the Qumulo Core UI, you’ll find an API and Tools menu that provides direct, navigable “live”
documentation where you can read about the different APIs and experiment by trying things out
directly in one place.
14.1 Authentication
Qumulo API endpoints can be divided into three categories:
● APIs that don’t require any authentication, like /v1/version
● A login API at /v1/session/login, which takes a username and password
● APIs that take a bearer token returned from the /v1/session/login API
When using Qumulo’s API, you need to start an authentication session by logging in. Calling the login
API gives you a temporary credential called a bearer token, which is sent along with subsequent API
calls as proof that you have been authenticated.
NOTE: Non-admin users can log in but may not have access to certain endpoints.
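A minimal login sketch using curl, assuming the cluster's REST API is reachable on the default port 8000 and using a hypothetical admin password:
curl -k -X POST https://<node ip address>:8000/v1/session/login -H "Content-Type: application/json" -d '{"username": "admin", "password": "<password>"}'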
Output:
{ "bearer_token": "1:ATwAAABlSnp6MVZvUXhRQUViN2RCYUFVZy9zTElBQWFNVEZBYWljME94R3hBSEp
PWWtwdVpad2RrQVFBNEtnZmIgAAAAXU/JXGz/syigeb+FQ5zEzmNtk8L8GtaQ0M3UejImW4k=" }
Bearer tokens can also be obtained from using the interactive API available in Qumulo Core.
3. Type in admin for the username value and the assigned password.
IMPORTANT! The bearer token is valid for 10 hours and can be used to make API requests. To
continue using the API after 10 hours, you must re-authenticate with your username and password to
start a new authentication session.
Output:
TIP! In a UNIX shell like bash, assign the bearer token to a variable so that authentication does not
require the full token value from the original login request. See the example below where our bearer
token is assigned to the q_prod variable.
$ q_prod="1:ATwAAABlSnp6MVZvUXhRQUViN2RCYUFVZy9zTElBQWFNVEZBYWljME94R3hBSEpPWWtwdVpad2RrQVFBNEtnZmIgAAAAXU/JXGz/syigeb+FQ5zEzmNtk8L8GtaQ0M3UejImW4k="
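The variable can then be passed in the Authorization header of subsequent requests, for example against the /v1/version endpoint mentioned above (assuming the same port 8000 as before):
curl -k -H "Authorization: Bearer $q_prod" https://<node ip address>:8000/v1/version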
14.2 ETags
The API leverages the HTTP ETag mechanism to handle concurrent resource modifications. We return
an ETag containing a version string for each versioned resource. If conflict detection is desired, the
caller should provide an If-Match header containing the ETag associated with the expected resource
version.
Let’s say an administrator is editing a file share on the cluster using the Interactive API in API & Tools.
Between the time the UI retrieves the file share details and when the administrator saves their
changes, another user or process could change that file share. By default in our API, the last writer
wins so the administrator would unwittingly clobber these changes. That’s not the user experience
we want, so we use ETag and If-Match HTTP headers for all of our documents to prevent accidental
overwrites.
When the UI retrieves a document, it reads the ETag response header (entity tag or essentially a
hashcode) and stores that. Later, when updating that same document, the UI sends an If-Match
request header which tells the cluster to only perform the action if the document is the same as we
expect. If the document changed, we’ll get back a 412 Precondition Failed response which allows us
to build a better experience for the user.
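A minimal sketch of this flow using curl, assuming the bearer token variable from above and a hypothetical SMB share with ID 3 (the exact endpoint path may vary by release):
# Retrieve the share, writing the body to a file and capturing the ETag response header
etag=$(curl -k -s -D - -o share.json -H "Authorization: Bearer $q_prod" https://<node ip address>:8000/v2/smb/shares/3 | grep -i '^etag:' | cut -d' ' -f2 | tr -d '\r')
# Update the share only if it is unchanged; a 412 Precondition Failed means someone else modified it
curl -k -X PATCH -H "Authorization: Bearer $q_prod" -H "If-Match: $etag" -H "Content-Type: application/json" -d '{"description": "updated"}' https://<node ip address>:8000/v2/smb/shares/3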
14.3 GitHub
Qumulo culture values openness and transparency, with an emphasis on sharing. We want to extend
this culture to customers that use the Qumulo REST APIs by sharing samples using our APIs via
GitHub. Our goals in sharing samples on GitHub include:
● Make it easy for our customers new to the Qumulo REST API to understand how it works
● Provide a good, representative cross-section of samples for common tasks including disk
utilities, creating shares and storage statistics
● Provide reference implementations for common customer sample requests such as
monitoring agents and working with time series data from our clusters
● Provide a central clearinghouse for customers who want to share their own Qumulo REST API
samples with others
● Provide guidance to customers to help ensure code quality through good coding standards
and tests
3. Copy the qumulo_api directory to your home directory to ensure that only you are able to run
the qq command on the computer where you are installing. If others need access, copy the
qumulo_api directory to one of the following:
● Apple and Linux computers - copy to /opt/
● Windows - copy to C:\Program Files (x86)\
Windows
On Windows, the qq file must be run with the python.exe 2.7 interpreter:
$ python.exe /Users/qumulo_user/qq
NOTE: You can also install the Qumulo API tools via the Python SDK by running the command below.
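Assuming a working Python environment, the SDK install is typically:
pip install qumulo_api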
The qq file (qq.exe for Qumulo Core 3.1.4 and higher) is installed in the Scripts directory next to your
Python installation (e.g., C:\Python27\Scripts). Add that directory to your PATH to allow qq.exe to be
executed from anywhere.
$ python ~/qq
Alternatively, you can configure the file's executable bit and run it directly:
$ chmod +x ~/qq
$ ~/qq
Once you've accessed a node via ssh, you can see the full list of qq commands by heading on over to
the QQ CLI section of Qumulo Care or by running the following command:
qq -h
To help demonstrate what is and is not supported for upgrades with Qumulo Core 2.13.0 and above,
we've provided some specific examples below:
● You CAN upgrade from 2.13.3 to 2.14.0 – this path is supported since all versions of Qumulo
Core now act as a quarterly release source.
● You CAN upgrade from 2.13.1 to 2.13.5 – this path is supported since you can now skip versions
in between as long as the jump does not include a quarterly release.
● You CANNOT upgrade from 2.13.5 to 2.14.1 – this path is not supported since you cannot skip a
quarterly release (X.X.0). You need to install 2.14.0 before you can upgrade to the 2.14.1 release.
● You CANNOT upgrade from 2.12.4 to 2.13.0 – this path is not supported since the new relaxed
upgrade restrictions are only available starting with the 2.13.0 version of Qumulo Core. You
need to install 2.12.5 and 2.12.6 before upgrading to the 2.13.0 release.
Recommended upgrade paths when moving from a past version of Qumulo Core to a recent release
are outlined below:
IMPORTANT! Back to back upgrades of Qumulo Core may require a wait period between specific
releases to allow background processes to run. Before attempting to install multiple releases of
Qumulo Core in an extended maintenance window, reach out to the Qumulo Care team for guidance
on your upgrade path.
Example: If the share/export that contains the upgrade file is /upgrade/, your file system path should
be upgrade/qumulo_core_2.8.7.qimg
5. Click the Upgrade button.
ssh admin@your_IP_address
sudo -s
4. Confirm that the upgrade status is “IDLE” using the command below:
qq upgrade_status
"details": "",
"install_path": "",
"state": "UPGRADE_IDLE"
6. Prepare the upgrade by running the following command using the path to the .qimg file you
uploaded:
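A minimal sketch of the prepare command, assuming the example .qimg path shown earlier; the exact flags may vary by release, so confirm with qq upgrade_config_set -h:
qq upgrade_config_set --install-path /upgrade/qumulo_core_2.8.7.qimg --target prepare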
qq upgrade_status --monitor
Wait until the state reports UPGRADE_PREPARED before continuing.
9. Arm the upgrade to begin the installation using the command below:
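A minimal sketch of the arm command, under the same assumptions as the prepare step above:
qq upgrade_config_set --install-path /upgrade/qumulo_core_2.8.7.qimg --target arm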
10. Re-login after the upgrade completes and the Qumulo process is restarted.
11. Check that the upgrade was successful by running the following command and verifying the
new version number:
qq version