Container-native storage 3.9: Enhanced day-2 management and unified installation within OpenShift Container Platform

By Annette Clewett, Humble Chirammal, Daniel Messer, and Sudhir Prasad

With today’s release of Red Hat OpenShift Container Platform 3.9, you will now have the convenience of deploying Container-Native Storage (CNS) 3.9 built on Red Hat Gluster Storage as part of the normal OpenShift deployment process in a single step. At the same time, major improvements in ease of operation have been introduced to give you the ability to monitor provisioned storage consumption, expand persistent volume (PV) capacity without downtime, and use a more intuitive naming convention for persistent volume names.

For easy evaluation of these features, an OpenShift Container Platform (OCP) evaluation subscription now includes access to CNS evaluation binaries and subscriptions.

New features in Container-Native Storage 3.9

Addition of volume metrics

Volume consumption metrics data (e.g., volume capacity, available space, number of inodes in use, number of inodes free) for CNS is now available. These volume metrics can be viewed using Prometheus, enabling you to monitor storage capacity and consumption trends and take timely action before applications are impacted.
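As a sketch of what this looks like in practice, a Prometheus query along the following lines reports the percentage of capacity used per persistent volume (the kubelet volume metric names shown are standard in this Kubernetes generation, but verify them against your cluster's Prometheus targets):

```
# Percentage of each persistent volume's capacity currently in use
100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes

# Remaining inodes per persistent volume
kubelet_volume_stats_inodes_free
```

Alerting rules can be layered on top of queries like these to warn before a volume fills up.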

Online expansion of provisioned storage

You can now expand CNS-backed persistent volumes (PVs) within OpenShift by editing the corresponding claim (oc edit pvc <claim_name>) with the new desired capacity (spec → resources → requests → storage: <new value>). This opt-in feature is enabled by configuring the StorageClass for CNS with the parameter allowVolumeExpansion set to "true", enabling the feature gate `ExpandPersistentVolumes`, and including a new admission controller called `PersistentVolumeClaimResize`. You can now dynamically resize storage volumes attached to containerized applications without needing to first detach and then reattach a storage volume with increased capacity, which enhances application availability and uptime.
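As a sketch, the edited claim might look like the following after raising the requested size (the claim and StorageClass names here are illustrative, not from a real deployment):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data                     # hypothetical claim name
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: glusterfs-storage   # must have allowVolumeExpansion: true
  resources:
    requests:
      storage: 30Gi                     # raised from, e.g., 20Gi
```

Once the edit is saved, the resize is handled online; the pod keeps using the volume while its capacity grows.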

Custom volume naming

Before this release, the names of dynamically provisioned GlusterFS volumes were auto-generated with a unique ID (vol_<UUID>). This release allows you to add a custom volume name prefix, again by parameterizing the StorageClass (`volumenameprefix: myPrefix`), for easier identification of volumes in the GlusterFS backend. New GlusterFS volumes backing CNS PVs will be named with the volume name prefix, project name/namespace, claim name, and UUID (<myPrefix>_<namespace>_<claimname>_UUID), making it easier for you to automate day-2 administrative tasks like backup and recovery, and to apply policies based on a predictable volume nomenclature.
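A StorageClass carrying this parameter might look like the following sketch (the heketi REST URL and class name are placeholders for your environment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage                                  # placeholder name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.app-storage.svc:8080"    # placeholder heketi endpoint
  volumenameprefix: "myPrefix"   # backend volumes become myPrefix_<namespace>_<claimname>_<uuid>
allowVolumeExpansion: true       # also opts this class into online expansion
```

With this in place, a `gluster volume list` on the backend shows at a glance which project and claim each volume belongs to.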

Enhanced CNS installation with OCP advanced installer

In previous releases, the OpenShift advanced installer introduced the ability to deploy CNS and provide GlusterFS storage for general applications and an integrated OpenShift Container Registry as part of or after an OCP installation. We’ve enhanced this in the 3.9 release to also provide gluster-block storage for infrastructure applications. At this time, gluster-block is only supported for logging and metrics.

CNS can provide persistent storage for both OCP’s infrastructure applications (i.e., integrated registry, logging, and metrics) and general application consumption. Both options can be used in parallel, resulting in two separate CNS clusters being deployed in a single OCP environment. It’s also possible to use a single CNS cluster for both purposes, although it is not recommended.

The following is an example of a partial inventory file with selected options concerning deployment of CNS for applications and an additional CNS cluster for infrastructure workloads like registry, logging, and metrics storage. When using these options, values with specific sizes (e.g., openshift_hosted_registry_storage_volume_size=25Gi) or node selectors (e.g., "role":"infra") should be adjusted for your particular deployment needs.

If you’re planning to use gluster-block for logging and metrics, do not install those services during the initial OCP installation. They will be installed with dedicated playbooks after the initial OCP deployment, using a gluster-block StorageClass to dynamically provision their storage.

[OSEv3:children]

...

nodes
glusterfs
glusterfs_registry

[OSEv3:vars]
...      
# registry
openshift_hosted_registry_replicas=3       
openshift_registry_selector="role=infra"   
openshift_hosted_registry_storage_kind=glusterfs 
openshift_hosted_registry_storage_volume_size=25Gi  

# CNS storage for applications
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_block_deploy=false    

# CNS storage for OpenShift infrastructure
openshift_storage_glusterfs_registry_namespace=infra-storage  
openshift_storage_glusterfs_registry_storageclass=false       
openshift_storage_glusterfs_registry_block_deploy=true   
openshift_storage_glusterfs_registry_block_host_vol_create=true    
openshift_storage_glusterfs_registry_block_host_vol_size=200   
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=true
openshift_storageclass_default=false

...

[nodes]
...

[glusterfs]
ose-app-node01.ocpgluster.com glusterfs_devices='[ "/dev/xvdf" ]'   
ose-app-node02.ocpgluster.com glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node03.ocpgluster.com glusterfs_devices='[ "/dev/xvdf" ]'
 
[glusterfs_registry]
ose-infra-node01.ocpgluster.com glusterfs_devices='[ "/dev/xvdf" ]'
ose-infra-node02.ocpgluster.com glusterfs_devices='[ "/dev/xvdf" ]'
ose-infra-node03.ocpgluster.com glusterfs_devices='[ "/dev/xvdf" ]'

Inventory file options explained

The first section of the inventory file defines the host groups the installation will use. We’ve defined two new groups: glusterfs (general application storage) and glusterfs_registry (infrastructure storage). The settings for these groups start with openshift_storage_glusterfs_ and openshift_storage_glusterfs_registry_, respectively. In each group, the nodes that will make up the CNS cluster are listed, and the devices ready for exclusive use by CNS are specified (glusterfs_devices=).

The glusterfs_registry group specifies a cluster that will host a single, statically deployed PersistentVolume for exclusive use by the hosted registry. This cluster will not offer a StorageClass for file-based PersistentVolumes (openshift_storage_glusterfs_registry_storageclass=false). It will, however, run gluster-block (openshift_storage_glusterfs_registry_block_deploy=true), which will be available via a StorageClass (openshift_storage_glusterfs_registry_block_storageclass=true). To ease installation of the logging and metrics services, this StorageClass will initially be the system-wide default (openshift_storage_glusterfs_registry_block_storageclass_default=true). Pay special attention when choosing openshift_storage_glusterfs_registry_block_host_vol_size: this is the hosting volume from which the gluster-block devices for logging and metrics will be carved. Make sure the size can accommodate all these block volumes and that you have sufficient storage if another hosting volume must be created.
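As a back-of-the-envelope check using the sizes from this post's sample inventories (three 50Gi Elasticsearch volumes for logging plus one 25Gi Cassandra volume for metrics), the 200 hosting volume is large enough:

```shell
# Block volumes carved from the gluster-block hosting volume (sizes in Gi):
# 3 Elasticsearch PVs of 50Gi (logging) + 1 Cassandra PV of 25Gi (metrics)
echo $(( 3 * 50 + 1 * 25 ))   # fits within the 200 hosting volume
```

Leave some headroom beyond this sum for metadata and any additional block claims.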

The hosts in [glusterfs] specify a cluster for general-purpose application storage, which will, by default, come with a StorageClass to enable dynamic provisioning. Because gluster-block is so far only supported for the logging and metrics services, it is not exposed to users via a StorageClass on this general-purpose CNS cluster (openshift_storage_glusterfs_block_deploy=false).

And that’s it: Your OpenShift solution with built-in storage is ready to go! If you want to tune the installation, more options are available in the Advanced Installation documentation. You will get two CNS clusters deployed, with their resource objects in two different OCP projects for logical grouping. Note that both clusters run on distinct sets of hosts; an OpenShift node cannot be part of two CNS clusters. Metrics and logging can be deployed with two additional commands, as described in the next section of this post and in the official documentation.

Installing logging and metrics infrastructure services

As stated earlier, the initial deployment of OCP does not include logging and metrics. Once a gluster-block StorageClass has been created and is configured as the default StorageClass, logging and metrics can be installed.

Installing metrics service

Before running the metrics playbook, make sure to set openshift_metrics_install_metrics=true as shown below and add/modify the inventory file options as required for your deployment.

# metrics
openshift_metrics_install_metrics=true    
openshift_metrics_hawkular_nodeselector={"role":"infra"}     
openshift_metrics_cassandra_nodeselector={"role":"infra"}
openshift_metrics_heapster_nodeselector={"role":"infra"}
openshift_metrics_cassandra_pvc_size=25Gi  
openshift_metrics_storage_kind=dynamic

Now run the metrics playbook to install it in the OCP openshift-infra project.

ansible-playbook -i <path_to_inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml

Installing logging service

Before running the logging playbook, make sure to set openshift_logging_install_logging=true as shown below and add/modify the inventory file options as required for your deployment.

# logging
openshift_logging_install_logging=true                        
openshift_logging_es_cluster_size=3  
openshift_logging_es_nodeselector={"role":"infra"}            
openshift_logging_kibana_nodeselector={"role":"infra"}
openshift_logging_curator_nodeselector={"role":"infra"}
openshift_logging_es_pvc_size=50Gi  
openshift_logging_storage_kind=dynamic

Now run the logging playbook to install it in the OCP logging project.

ansible-playbook -i <path_to_inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml

Note: Remove the default setting from the gluster-block StorageClass after successful deployment of both logging and metrics.

oc patch storageclass glusterfs-registry-block \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
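To confirm the change took effect, list the StorageClasses; none should carry the "(default)" marker afterward:

```
oc get storageclass
```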

Uninstalling container-native storage

With this release, we’re also introducing support for uninstall. This might come in handy when errors in inventory file options cause the GlusterFS cluster to deploy incorrectly.

If you’re removing a CNS installation that is currently being used by any applications, you should remove those applications before removing CNS, as they will lose access to storage. This includes infrastructure applications like registry, logging, and metrics that have PV claims created using the glusterfs-storage and glusterfs-storage-block StorageClass resources.

You can remove those by re-running the deployment playbooks like this:

ansible-playbook -i <path_to_inventory_file> \
  -e "openshift_logging_install_logging=false" \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml

ansible-playbook -i <path_to_inventory_file> \
  -e "openshift_metrics_install_metrics=false" \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml

If you have the registry, remove it with the following command:

oc delete deploymentconfig docker-registry

If running the uninstall because a deployment failed, run the uninstall playbook with the following variables to wipe the storage devices for both glusterfs and glusterfs_registry before trying the CNS installation again.

ansible-playbook -i <path_to_inventory_file> \
  -e "openshift_storage_glusterfs_wipe=true" \
  -e "openshift_storage_glusterfs_registry_wipe=true" \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml

Container-native storage brownfield installation

You can add CNS clusters and resources to an existing OCP install using the following command. This same process can be used when CNS has been uninstalled due to errors.

ansible-playbook -i <path_to_inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

Want to learn more?

For hands-on experience combining OpenShift and CNS, check out our test drive, a free, in-browser lab experience that walks you through using both.
