Running OpenShift Container Storage 3.10 with Red Hat OpenShift Container Platform 3.10

By Annette Clewett and Jose A. Rivera

With the release of Red Hat OpenShift Container Platform 3.10, we’ve officially rebranded what used to be referred to as Red Hat Container-Native Storage (CNS) as Red Hat OpenShift Container Storage (OCS). Versioning remains sequential (i.e., OCS version 3.10 is the follow-on to CNS 3.9). You’ll continue to have the convenience of deploying OCS 3.10 as part of the normal OpenShift deployment process in a single step, and an OpenShift Container Platform (OCP) evaluation subscription now includes access to OCS evaluation binaries and subscriptions.

OCS 3.10 introduces an important feature for container-based storage with OpenShift. Arbiter volume support allows for there to be only two replica copies of the data, while still providing split-brain protection and ~30% savings in storage infrastructure versus a replica-3 volume. This release also hardens block support for backing OpenShift infrastructure services. Detailed information on the value and use of OCS 3.10 features can be found here.

OCS 3.10 installation with OCP 3.10 Advanced Installer

Let’s now take a look at the installation of OCS with the OCP Advanced Installer. OCS can provide persistent storage for both OCP’s infrastructure applications (e.g., integrated registry, logging, and metrics) and general application data. Typically, both options are used in parallel, resulting in two separate OCS clusters being deployed in a single OCP environment. It’s also possible to use a single OCS cluster for both purposes.

Following is an example of a partial inventory file with selected options for deploying OCS for applications, plus an additional OCS cluster for infrastructure workloads like registry, logging, and metrics storage. When using these options in your deployment, values with specific sizes (e.g., openshift_hosted_registry_storage_volume_size=10Gi) or node selectors (e.g., node-role.kubernetes.io/infra=true) should be adjusted for your particular deployment needs.

If you’re planning to use gluster-block volumes for logging and metrics, they can now be installed when OCP is installed. (Of course, they can also be installed later.)

[OSEv3:children]
...
nodes
glusterfs
glusterfs_registry

[OSEv3:vars]
...      
# registry
openshift_hosted_registry_storage_kind=glusterfs       
openshift_hosted_registry_storage_volume_size=10Gi   
openshift_hosted_registry_selector="node-role.kubernetes.io/infra=true"

# logging
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=50Gi
openshift_logging_es_cluster_size=3
openshift_logging_es_pvc_storage_class_name='glusterfs-registry-block'
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}

# metrics
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_storage_volume_size=20Gi
openshift_metrics_cassandra_pvc_storage_class_name='glusterfs-registry-block'
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}

# Container image to use for glusterfs pods
openshift_storage_glusterfs_image="registry.access.redhat.com/rhgs3/rhgs-server-rhel7:v3.10"

# Container image to use for gluster-block-provisioner pod
openshift_storage_glusterfs_block_image="registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:v3.10"

# Container image to use for heketi pods
openshift_storage_glusterfs_heketi_image="registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:v3.10"
 
# OCS storage cluster for applications
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=false   

# OCS storage cluster for OpenShift infrastructure
openshift_storage_glusterfs_registry_namespace=infra-storage  
openshift_storage_glusterfs_registry_storageclass=false       
openshift_storage_glusterfs_registry_block_deploy=true   
openshift_storage_glusterfs_registry_block_host_vol_create=true    
openshift_storage_glusterfs_registry_block_host_vol_size=200   
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false

...
[nodes]
ose-app-node01.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node02.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node03.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node04.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-infra-node01.ocpgluster.com openshift_node_group_name="node-config-infra"
ose-infra-node02.ocpgluster.com openshift_node_group_name="node-config-infra"
ose-infra-node03.ocpgluster.com openshift_node_group_name="node-config-infra"

[glusterfs]
ose-app-node01.ocpgluster.com glusterfs_zone=1 glusterfs_devices='[ "/dev/xvdf" ]'   
ose-app-node02.ocpgluster.com glusterfs_zone=2 glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node03.ocpgluster.com glusterfs_zone=3 glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node04.ocpgluster.com glusterfs_zone=1 glusterfs_devices='[ "/dev/xvdf" ]'

[glusterfs_registry]
ose-infra-node01.ocpgluster.com glusterfs_zone=1 glusterfs_devices='[ "/dev/xvdf" ]'
ose-infra-node02.ocpgluster.com glusterfs_zone=2 glusterfs_devices='[ "/dev/xvdf" ]'
ose-infra-node03.ocpgluster.com glusterfs_zone=3 glusterfs_devices='[ "/dev/xvdf" ]'

Inventory file options explained

The first section of the inventory file defines the host groups the installation will use. We’ve defined two new groups: (1) glusterfs and (2) glusterfs_registry. The settings for these groups start with openshift_storage_glusterfs_ and openshift_storage_glusterfs_registry_, respectively. In each group, the nodes that will make up the OCS cluster are listed, and the devices ready for exclusive use by OCS are specified (glusterfs_devices=).

The first group of hosts in glusterfs specifies a cluster for general-purpose application storage and will, by default, come with the StorageClass glusterfs-storage to enable dynamic provisioning. For high availability of storage, it’s very important to have four nodes for the general-purpose application cluster, glusterfs.

The second group, glusterfs_registry, specifies a cluster that will host a single, statically deployed PersistentVolume for exclusive use by a hosted registry that can scale. With the options configured as shown (openshift_storage_glusterfs_registry_storageclass=false), this cluster will not offer a StorageClass for file-based PersistentVolumes. It will, however, support gluster-block (openshift_storage_glusterfs_registry_block_deploy=true), and block-based PersistentVolumes can be created via the StorageClass glusterfs-registry-block (openshift_storage_glusterfs_registry_block_storageclass=true). Special attention should be given to choosing the size for openshift_storage_glusterfs_registry_block_host_vol_size. This is the hosting volume for the gluster-block devices that will be created for logging and metrics. Make sure that the size can accommodate all of these block volumes and that you have sufficient storage if another hosting volume must be created.
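
As a quick sanity check on that sizing, here is a rough calculation using the example values in this inventory (assuming the default single Cassandra replica; substitute your own PVC sizes and counts):

# openshift_storage_glusterfs_registry_block_host_vol_size=200
# logging: 3 Elasticsearch PVCs x 50Gi = 150Gi
# metrics: 1 Cassandra PVC x 20Gi      =  20Gi
# total gluster-block PVs              = 170Gi  (fits within the 200Gi block-hosting volume)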

If you want to tune the installation, more options are available in the Advanced Installation documentation. To automate generating the inventory file options shown previously, check out the new red-hat-storage tool called “CNS Inventory file Creator,” or CIC (currently in alpha). The CIC tool creates CNS or OCS inventory file options for OCP 3.9 and OCP 3.10, respectively. CIC asks a series of questions about the OpenShift hosts, the storage devices, and the sizes of PersistentVolumes for registry, logging, and metrics, and it has baked-in checks to help ensure the OCP installation will be successful. The tool is currently in an alpha state, and we’re looking for feedback. Download it from the GitHub repository openshift-cic.

Single OCS cluster installation

Again, it is possible to support both general-application storage and infrastructure storage in a single OCS cluster. To do this, the inventory file options change slightly for logging and metrics, because when there is only one cluster, the gluster-block StorageClass is glusterfs-storage-block. The registry PV will be created on this single cluster if the second cluster, [glusterfs_registry], does not exist. For high availability, it’s very important to have four nodes for this cluster. Also, special attention should be given to choosing the size for openshift_storage_glusterfs_block_host_vol_size. This is the hosting volume for the gluster-block devices that will be created for logging and metrics. Make sure that the size can accommodate all these block volumes and that you have sufficient storage if another hosting volume must be created.

[OSEv3:children]
...
nodes
glusterfs

[OSEv3:vars]
...      
# registry
...

# logging
openshift_logging_install_logging=true
...
openshift_logging_es_pvc_storage_class_name='glusterfs-storage-block'
... 

# metrics
openshift_metrics_install_metrics=true
...
openshift_metrics_cassandra_pvc_storage_class_name='glusterfs-storage-block'

...

# OCS storage cluster for applications
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
...

[nodes]
ose-app-node01.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node02.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node03.ocpgluster.com openshift_node_group_name="node-config-compute"
ose-app-node04.ocpgluster.com openshift_node_group_name="node-config-compute"

[glusterfs]
ose-app-node01.ocpgluster.com glusterfs_zone=1 glusterfs_devices='[ "/dev/xvdf" ]'   
ose-app-node02.ocpgluster.com glusterfs_zone=2 glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node03.ocpgluster.com glusterfs_zone=3 glusterfs_devices='[ "/dev/xvdf" ]'
ose-app-node04.ocpgluster.com glusterfs_zone=1 glusterfs_devices='[ "/dev/xvdf" ]'

OCS 3.10 uninstall

With the OCS 3.10 release, the uninstall.yml playbook can be used to remove all gluster and heketi resources. This might come in handy when there are errors in inventory file options that cause the gluster cluster to deploy incorrectly.

If you’re removing an OCS installation that is currently being used by any applications, you should remove those applications before removing OCS, because they will lose access to storage. This includes infrastructure applications like registry, logging, and metrics that have PV claims created using the glusterfs-storage and glusterfs-storage-block Storage Class resources.

You can remove logging and metrics resources by re-running the deployment playbooks like this:

ansible-playbook -i <path_to_inventory_file> \
  -e "openshift_logging_install_logging=false" \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml

ansible-playbook -i <path_to_inventory_file> \
  -e "openshift_metrics_install_metrics=false" \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml

Make sure to manually remove any logging or metrics PersistentVolumeClaims. The associated PersistentVolumes will be deleted automatically.
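
For example (the project names here assume the OCP 3.10 defaults of logging and openshift-infra; adjust for your environment), leftover claims can be listed and removed like this:

oc get pvc -n logging
oc delete pvc <claim_name> -n logging
oc get pvc -n openshift-infra
oc delete pvc <claim_name> -n openshift-infra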

If you have the registry using a glusterfs PersistentVolume, remove it with the following commands:

oc delete deploymentconfig docker-registry
oc delete pvc registry-claim
oc delete pv registry-volume
oc delete service glusterfs-registry-endpoints

If you’re running uninstall.yml because a deployment failed, run it with the following variables to wipe the storage devices for both glusterfs and glusterfs_registry before attempting the OCS installation again.

ansible-playbook -i <path_to_inventory_file> \
  -e "openshift_storage_glusterfs_wipe=true" \
  -e "openshift_storage_glusterfs_registry_wipe=true" \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml

OCS 3.10 post installation for applications, registry, logging and metrics

You can add OCS clusters and resources to an existing OCP install using the following command. This same process can be used if OCS has been uninstalled due to errors.

ansible-playbook -i <path_to_inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

After the new cluster(s) have been created and validated, you can deploy the registry using a newly created glusterfs ReadWriteMany volume. Run this playbook to create the registry resources:

ansible-playbook -i <path_to_inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/config.yml

You can now deploy logging and metrics resources by re-running these deployment playbooks:

ansible-playbook -i <path_to_inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml

ansible-playbook -i <path_to_inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml

Want to learn more?

For hands-on experience combining OpenShift and OCS, check out our test drive, a free, in-browser lab experience that walks you through using both. Also, watch this short video explaining why to use OCS with OCP. Detailed information on the value and use of OCS 3.10 features can be found here.

Improved volume management for Red Hat OpenShift Container Storage 3.10

By Annette Clewett and Husnain Bustam

Hopefully by now you’ve seen that with the release of Red Hat OpenShift Container Platform 3.10 we’ve rebranded our container-native storage (CNS) offering to be called Red Hat OpenShift Container Storage (OCS). Versioning remains sequential (i.e., OCS 3.10 is the follow-on to CNS 3.9).

OCS 3.10 introduces important features for container-based storage with OpenShift. Arbiter volume support allows for there to be only two replica copies of the data, while still providing split-brain protection and ~30% savings in storage infrastructure versus a replica-3 volume. This release also hardens block support for backing OpenShift infrastructure services. In addition to supporting arbiter volumes, major improvements to ease operations are available to give you the ability to monitor provisioned storage consumption, expand persistent volume (PV) capacity without downtime to the application, and use a more intuitive naming convention for PVs.

For easy evaluation of these features, an OpenShift Container Platform evaluation subscription now includes access to OCS evaluation binaries and subscriptions.

New features

Now let’s dive deeper into the new features of the OCS 3.10 release:

  • Prometheus OCS volume metrics: Volume consumption metrics (e.g., volume capacity, available space, number of inodes in use, number of inodes free) are now available in Prometheus for OCS. These metrics can be used to monitor storage capacity and consumption trends and to take timely action so that applications are not impacted.
  • Heketi topology and configuration metrics: Available from the Heketi HTTP metrics service endpoint, these metrics can be viewed using Prometheus or curl http://<heketi_service_route>/metrics. These metrics can be used to query heketi health, number of nodes, number of devices, device usage, and cluster count.
  • Online expansion of provisioned storage: You can now expand OCS-backed PVs within OpenShift by editing the corresponding claim (oc edit pvc <claim_name>) with the new desired capacity (spec → resources → requests → storage: new value).
  • Custom volume naming: Before this release, the names of the dynamically provisioned GlusterFS volumes were auto-generated with a random UUID. Now, by adding a custom volume name prefix, the GlusterFS volume name will include the namespace or project as well as the claim name, thereby making it much easier to map to a particular workload.
  • Arbiter volumes: Arbiter volumes allow for reduced storage consumption and better performance across the cluster while still providing the redundancy and reliability expected of GlusterFS.

Volume and Heketi metrics

As of OCP 3.10 and OCS 3.10, the following metrics are available in Prometheus (the Heketi metrics can also be retrieved by executing curl http://<heketi_service_route>/metrics):

kubelet_volume_stats_available_bytes: Number of available bytes in the volume
kubelet_volume_stats_capacity_bytes: Capacity in bytes of the volume
kubelet_volume_stats_inodes: Maximum number of inodes in the volume
kubelet_volume_stats_inodes_free: Number of free inodes in the volume
kubelet_volume_stats_inodes_used: Number of used inodes in the volume
kubelet_volume_stats_used_bytes: Number of used bytes in the volume
heketi_cluster_count: Number of clusters
heketi_device_brick_count: Number of bricks on device
heketi_device_count: Number of devices on host
heketi_device_free: Amount of free space available on the device
heketi_device_size: Total size of the device
heketi_device_used: Amount of space used on the device
heketi_nodes_count: Number of nodes on the cluster
heketi_up: Verifies if heketi is running
heketi_volumes_count: Number of volumes on cluster
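
For example, a simple Prometheus query built from these metrics (a sketch; add label selectors such as namespace or persistentvolumeclaim to narrow it down) reports the percentage of capacity consumed per volume, which is a useful basis for capacity alerts:

100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes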

Populating Heketi metrics in Prometheus requires additional configuration of the Heketi service. You must add the following annotations using these commands:

# oc annotate svc heketi-storage prometheus.io/scheme=http
# oc annotate svc heketi-storage prometheus.io/scrape=true
# oc describe svc heketi-storage
Name:           heketi-storage
Namespace:      app-storage
Labels:         glusterfs=heketi-storage-service
                heketi=storage-service
Annotations:    description=Exposes Heketi service
                prometheus.io/scheme=http
                prometheus.io/scrape=true
Selector:       glusterfs=heketi-storage-pod
Type:           ClusterIP
IP:             172.30.90.87
Port:           heketi  8080/TCP
TargetPort:     8080/TCP
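
To quickly verify that Heketi is exposing metrics, you can query the endpoint directly (the route name and namespace below are assumptions based on the example above; substitute whatever route exists for your Heketi service):

curl http://$(oc get route heketi-storage -n app-storage -o jsonpath='{.spec.host}')/metrics | grep heketi_up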

Populating Heketi metrics in Prometheus also requires additional configuration of the Prometheus configmap. As shown in the following, you must modify the Prometheus configmap with the namespace of the Heketi service and restart the prometheus-0 pod:

# oc get svc --all-namespaces | grep heketi
app-storage      heketi-storage       ClusterIP 172.30.90.87  <none>  8080/TCP
# oc get cm prometheus -o yaml -n openshift-metrics
....
- job_name: 'kubernetes-service-endpoints'
   ...
   relabel_configs:
     # only scrape infrastructure components
     - source_labels: [__meta_kubernetes_namespace]
       action: keep
       regex: 'default|logging|metrics|kube-.+|openshift|openshift-.+|app-storage'
# oc scale --replicas=0 statefulset.apps/prometheus
# oc scale --replicas=1 statefulset.apps/prometheus

Online expansion of GlusterFS volumes and custom naming

First, let’s discuss what’s needed to allow expansion of GlusterFS volumes. This opt-in feature is enabled by configuring the StorageClass for OCS with the parameter allowVolumeExpansion set to true and by enabling the ExpandPersistentVolumes feature gate. You can then dynamically resize storage volumes attached to containerized applications without needing to first detach and then attach a storage volume with increased capacity, which enhances application availability and uptime.

Enable the ExpandPersistentVolumes feature gate on all master nodes:

# vim /etc/origin/master/master-config.yaml
kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - ExpandPersistentVolumes=true
# /usr/local/bin/master-restart api
# /usr/local/bin/master-restart controllers

This release also supports custom volume naming. By parameterizing the StorageClass (volumenameprefix: myPrefix), new OCS PVs are created with the volume name prefix, project name/namespace, claim name, and UUID (<myPrefix>_<namespace>_<claimname>_UUID), allowing easier identification of volumes in the GlusterFS backend. This makes it easier to automate day-2 admin tasks like backup and recovery, applying policies based on a pre-defined volume nomenclature, and other housekeeping tasks.

In this StorageClass, support for both online expansion of OCS/GlusterFS PVs and custom volume naming has been added.

# oc get sc glusterfs-storage -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
parameters:
  resturl: http://heketi-storage-storage.apps.ose-master.example.com
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: storage
  volumenameprefix: gf ❶
allowVolumeExpansion: true ❷
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete

❶ Custom volume name support: <volumenameprefixstring>_<namespace>_<claimname>_UUID
❷ Parameter needed for online expansion or resize of GlusterFS PVs

Be aware that PV expansion is not supported for block volumes, only for file volumes.

Expanding a volume starts with editing the PVC field “requests: storage” with the new desired size for the PersistentVolume. For example, if we have a 1GiB PV and want to expand it to 2GiB, we edit the PVC’s “requests: storage” field with the new value, and the PV is automatically resized to 2GiB. The new 2GiB size will be reflected in OCP, heketi-cli, and gluster commands. The expansion process adds another replica set, converting the 3-way replicated volume into a distributed-replicated volume (2×3 instead of 1×3 bricks).
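
As a minimal sketch, assuming a hypothetical claim named mysql-data currently bound to a 1GiB PV, the resize can be requested with oc edit pvc mysql-data or with a patch, and then watched until the larger capacity is reflected:

oc patch pvc mysql-data -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
oc get pvc mysql-data -w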

GlusterFS arbiter volumes

Arbiter volume support is new to OCS 3.10 and has the following advantages:

  • An arbiter volume is still a 3-way replicated volume for highly available storage.
  • Arbiter bricks do not store file data; they only store file names, structure, and metadata.
  • Arbiter uses client quorum to compare this metadata with metadata of other nodes to ensure consistency of the volume and prevent split brain conditions.
  • Using Heketi commands, it is possible to control arbiter brick placement using tagging so that all arbiter bricks are on the same node (see the heketi-cli sketch after this list).
  • With control of arbiter brick placement, the ‘arbiter’ node can have limited storage compared to other nodes in the cluster.
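
A brief sketch of this kind of tagging with heketi-cli (the node IDs are placeholders; list real IDs with heketi-cli node list, and check the arbiter documentation for the full set of supported tag values):

# Require arbiter bricks on a dedicated node and exclude them from another node:
heketi-cli node settags <node_id> arbiter:required
heketi-cli node settags <other_node_id> arbiter:disabled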

As an example, two gluster volumes can be configured across five nodes to create two 3-way arbitrated replicated volumes, with the arbiter bricks on a dedicated arbiter node.

In order to use arbiter volumes with OCP workloads, an additional parameter must be added to the GlusterFS StorageClass: volumeoptions: user.heketi.arbiter true. In this StorageClass, support for online expansion of GlusterFS PVs, custom volume naming, and arbiter volumes has been added.

# oc get sc glusterfs-storage -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
parameters:
  resturl: http://heketi-storage-storage.apps.ose-master.example.com
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: storage
  volumenameprefix: gf ❶
  volumeoptions: user.heketi.arbiter true ❸
allowVolumeExpansion: true ❷
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete

❶ Custom volume name support: <volumenameprefixstring>_<namespace>_<claimname>_UUID
❷ Parameter needed for online expansion or resize of GlusterFS volumes
❸ Enable arbiter volume support in the StorageClass. All the PVs created from this StorageClass will be 3-way arbitrated replicated volume.

Want to learn more?

For hands-on experience combining OpenShift and OCS, check out our test drive, a free, in-browser lab experience that walks you through using both. Also, check out  this short video explaining why using OCS with OpenShift is the right choice for the container storage infrastructure. For details on running OCS 3.10 with OCP 3.10, click here.

Introducing Red Hat Gluster Storage 3.4: Feature overview

By Anand Paladugu, Principal Product Manager

We’re pleased to announce that Red Hat Gluster Storage 3.4 is now Generally Available!

Since this release is a full rebase with the upstream, it consolidates many bug fixes, thus giving you a greater degree of overall stability for both container storage and traditional file serving use cases. Given that Red Hat OpenShift Container Storage is based on Red Hat Gluster Storage, these fixes will also be embedded in the 3.10 release of OpenShift Container Storage. To enable you to refresh your Red Hat Enterprise Linux (RHEL) 6-based Red Hat Gluster Storage installations, this release supports upgrading your Red Hat Gluster Storage servers from RHEL 6 to RHEL 7. Last, you can now deploy Red Hat Gluster Storage Web Administrator with minimal resources, which also offers robust and feature-rich monitoring capabilities.

Here is an overview of the new features delivered in Red Hat Gluster Storage 3.4:

Support for upgrading Red Hat Gluster Storage from RHEL 6 to RHEL 7

Many customers like to ensure they’re on the latest and greatest RHEL in their infrastructures. Two scenarios are now supported for upgrading RHEL servers in a Red Hat Gluster Storage deployment from RHEL 6 to RHEL 7:

  1. Red Hat Gluster Storage version is <= 3.3.x and the underlying RHEL version is <= latest version of 6.x. The upgrade process updates Red Hat Gluster Storage to version 3.4 and the underlying RHEL version to the latest version of RHEL 7.
  2. Red Hat Gluster Storage version is 3.4 and the underlying RHEL version is the latest version of 6.x. The upgrade process keeps the Red Hat Gluster Storage version at 3.4 and upgrades the underlying RHEL version to the latest version of RHEL 7.

macOS client support

Mac workstations continue to make inroads into corporate infrastructures. Red Hat Gluster Storage 3.4 supports macOS as a Server Message Block (SMB) client, allowing customers to map SMB shares backed by Red Hat Gluster Storage in the macOS Finder.

Punch hole support for third-party applications

The “punch hole” feature frees up physical disk space when portions of a file are de-referenced. For example, suppose you’ve used 20GB of disk space to back up a virtual machine (VM) image, and portions of that file become de-referenced due to data duplication. Without punch hole support, the 20GB remains occupied on the underlying physical disk. With punch hole support, however, third-party applications can “punch a hole” corresponding to the de-referenced portions, freeing up physical disk space. This further helps reduce the storage costs associated with backing up and archiving VMs.
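
On Linux, this is the hole-punching operation exposed by fallocate; a minimal illustration (the file name, offset, and length are arbitrary):

# Deallocate a 2GiB region starting 1GiB into the file while keeping its apparent size:
fallocate --punch-hole --keep-size --offset 1GiB --length 2GiB backup.img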

Subdirectory exports using the Gluster Fuse protocol now fully supported

Beginning with Red Hat Gluster Storage 3.4, subdirectory export using Fuse is fully supported. This feature provides namespace isolation, where a single Gluster volume can be shared with many clients, each mounting only a subset of the volume (i.e., a subdirectory). You can also export a subdirectory of an already exported volume to utilize space left in the volume for a different project.
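
For example, a client can mount just one subdirectory of a volume over the Gluster FUSE protocol (the host, volume, and subdirectory names below are illustrative):

mount -t glusterfs server1.example.com:/myvolume/projectA /mnt/projectA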

Red Hat Gluster Storage web admin enhancements

The Web Administration tool delivers browser-based graphing, trending, monitoring, and alerting for Red Hat Gluster Storage in the enterprise. This latest Red Hat Gluster Storage release optimizes this web admin tool to consume fewer resources and allow greater scaling to monitor larger clusters than in the past.

Faster directory lookups using the Gluster NFS-Ganesha server

In Red Hat Gluster Storage 3.4, the Readdirp API is extended and enhanced to return handles along with directory stats as part of its reply, thereby reducing NFS operations latency.

In internal testing, performance gains were noticed for all directory operations compared to Red Hat Gluster Storage 3.3.1. For example, directory create (mkdir) operations improved by up to 31%, file create operations improved by up to 42%, and file read operations improved by up to 150%.

Want to learn more?

For hands-on experience with Red Hat Gluster Storage, check out our test drive.

Introducing OpenShift Container Storage: Meet the new boss, same as the old boss!

By Steve Bohac, Product Marketing

Today, we’re introducing Red Hat OpenShift Container Storage 3.10.

Is this product new to you? It surely is—that’s because with the announcement today of Red Hat OpenShift Container Platform 3.10, we’ve rebranded our container-native storage (CNS) offering to now be referred to as Red Hat OpenShift Container Storage. This is still the same product with the strong customer momentum we announced a few months ago during Red Hat Summit week. Why the new name? “Red Hat OpenShift Container Storage” better reflects the product offering and its strong affinity with Red Hat OpenShift Container Platform. Not only does it install with OpenShift (via Red Hat Ansible), it’s developed, qualified, tested, and versioned coincident with OpenShift Container Platform releases. This product name best reflects that strong integration. Again, the product itself didn’t change in any way—all that’s changed is the product name.

Red Hat OpenShift Container Storage enables application portability and a consistent user experience across the hybrid cloud.

This new release, Red Hat OpenShift Container Storage 3.10, is the follow-on to Container-Native Storage 3.9 and introduces three important features for container-based storage with OpenShift: (1) arbiter volume support enabling high availability with efficient storage utilization and better performance, (2) enhanced storage monitoring and configuration visibility using the OpenShift Prometheus framework, and (3) block-backed persistent volumes (PVs) now supported for general application workloads in addition to supporting OCP infrastructure workloads.

If you haven’t already bookmarked our Red Hat Storage blog, now would be a great time! Over the coming weeks, we will be publishing deeper discussions on OpenShift Container Storage. In the meantime, though, for a more thorough understanding of OpenShift Container Storage, check out these recent technical blogs describing in depth the value of our approach to storage for containers:

Want to learn more?

For more information on OpenShift Container Storage, click here. Also, you can find the new Red Hat OpenShift Container Storage datasheet here.

For hands-on experience combining OpenShift and OpenShift Container Storage, check out our test drive, a free, in-browser lab experience that walks you through using both.

For more general information around storage for containers, check out our Container Storage for Dummies book.

Why are customers choosing Red Hat’s Container-Native Storage in the public cloud with OpenShift?

By Sayandeb Saha, Director, Product Management, Storage Business Unit

In our last blog post in this series, we talked about how the Container-Native Storage (CNS) offering for OpenShift Container Platform from Red Hat has seen increased customer adoption in on-premise environments by offering a peaceful coexistence approach with classic storage arrays that are not deeply integrated with OpenShift. In this post, we’ll explore why many customers are deploying our CNS offering in the three big public clouds (AWS, Microsoft Azure, and Google Cloud Platform) on top of the clouds’ native storage offerings, despite good integration of Kubernetes with those offerings. Let’s examine the problems and constraints that drive this choice in a bit more detail and describe how CNS addresses them.

Slow attach/detach and poor availability

The first issue stems from the fact that the native block storage offerings (EBS in AWS, Data Disk in Azure, Persistent Disk in Google Cloud) in the public cloud were designed and engineered to support virtual machine (VM) workloads. In such workloads, attaching and consequently detaching a block device to a machine image/instance is an infrequent occurrence at best, as these workloads are less dynamic compared to Platform-as-a-Service (PaaS) and DevOps workloads, which frequently run on OpenShift powering dynamic build and deploy CI/CD pipelines and other similar workloads and workflows.

Some of our customers found that attach and detach times for these block devices, when accessed directly from OpenShift workloads using the native Kubernetes storage provisioners, are unacceptable because they lead to poor startup times for pods (slow attach) and to limited or no high availability on failover, which usually triggers a sequence that includes a detach operation, an attach operation, and a subsequent mount operation.

Each of these operations usually triggers a variety of API calls specific to the public cloud provider. Any or all of these intermediate steps can fail, causing users to lose access to persistent volumes (PVs) for their compute pods for an extended period. Overlaying Red Hat’s CNS offering as a storage management fabric to aggregate, pool, and serve out PVs expediently, without worrying about the status of individual cloud-native block storage devices (e.g., EBS volumes or Azure Data Disks), provides major relief: it isolates the lifecycle of the cloud-native block storage devices from that of the application pods, which dynamically allocate and deallocate PVs as application teams work on OpenShift. This isolation effectively addresses the issue.

Block device limits per compute instance

The second issue some of our customers run into is the fact that there is a limit to the number of block devices that one can attach to the machine images or instances in various public cloud environments.

OpenShift supports a maximum of 250 containers per host. The maximum number of block devices that can be attached to a machine instance is far lower (for example, a maximum of 40 EBS devices per EC2 instance). Even though it is unusual to have a 1:1 mapping between containers and storage devices, this low maximum can lead to a lot of unintended behavior, not to mention a higher total cost of ownership (more hosts are needed than necessary).

For example, in a failover scenario during the detach, attach, and mount sequence, the API call to attach might fail because the maximum number of devices is already attached to the EC2 instance where the attempt is being made, causing a glitch or outage. Overlaying Red Hat’s CNS offering as a storage management fabric on cloud-based block devices mitigates the impact of hitting the maximum number of devices that can be attached to a machine image or instance, because storage is served out from a pool that is unencumbered by the per-instance device limit. Storage can continue to be served out until the entire pool is exhausted, at which point the pool can be expanded by adding new hosts and devices.

Cross-AZ storage availability

The third issue arises from the fact that cloud block storage devices are usually accessible within a specific Availability Zone (AZ) in AWS or Availability Sets in Azure. AZs are like failure domains in public clouds.

Most customers who deploy OpenShift in the public cloud do so to span more than one AZ for high availability. This is done so that when one AZ dies or goes offline, the OpenShift cluster remains operational. Using block devices constrained to an AZ for providing storage services to OpenShift workloads can defeat the purpose, because then containers must be scheduled within hosts that belong to the same AZ, and customers can not leverage the full power of Kubernetes orchestration. This configuration could also lead to an outage when an AZ goes offline.

Our customers use CNS to mitigate this problem so that even when there is an AZ failure, a three-way replicated cross-AZ storage service (CNS) is available for containerized applications to avoid downtimes. This also enables Kubernetes to schedule pods across AZs (instead of within an AZ), thereby preserving the spirit of the original fault-tolerant OpenShift deployment architecture that spans multiple AZs.

Cost-effective storage consolidation

Storage provided by CNS is efficiently allocated and offers consistent performance starting with the first gigabyte provisioned, thereby enabling storage consolidation. For example, consider six MySQL database instances, each in need of 25 GiB of storage capacity and up to 1500 IOPS at peak load. With EBS in AWS, one would create six EBS volumes, each with at least 500 GiB capacity from the gp2 (General Purpose SSD) tier, in order to get 1500 IOPS per volume, because performance is tied to provisioned capacity with EBS (roughly 3 IOPS per provisioned GiB for gp2).

With CNS, one can achieve the same level of performance using only three 500 GiB gp2 EBS volumes run with GlusterFS. One would then create six 25 GiB volumes from that pool and provide storage to all the databases with high IOPS performance, provided they don’t all peak at the same time. Doing so roughly halves the EBS cost and still leaves capacity to spare for other services. Read IOPS performance is likely even higher, because with CNS’s three-way replication, reads are distributed across the three 1500 IOPS gp2 EBS volumes.
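
A rough back-of-the-envelope comparison of the two approaches (again assuming gp2’s roughly 3 IOPS per provisioned GiB):

# Direct EBS:  6 x 500 GiB gp2 volumes (1500 IOPS each)    = 3000 GiB provisioned
# With CNS:    3 x 500 GiB gp2 volumes backing GlusterFS   = 1500 GiB provisioned
#              six 25 GiB replica-3 PVs carved from the pool share the ~1500 IOPS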

Check us out for more

As you can see, there’s a good case to be made for using CNS in various public clouds for a multitude of technical reasons our customers care about, besides the fact that Red Hat CNS provides a consistent storage consumption and management experience across hybrid and multi clouds (see the following figure).

Red Hat CNS runs anywhere and everywhere Red Hat OpenShift Container Platform runs.

In addition to the application portability that OpenShift already provides across hybrid and multi clouds, we’re working on multi cloud replication features that would enable CNS to effectively become the data fabric that enables data portability—another good reason to select and stay with CNS. Stay tuned for more information on that!

For hands-on experience now combining OpenShift and CNS, check out our test drive, a free, in-browser lab experience that walks you through using both.

Leverage your existing storage investments with container-native storage

By Sayandeb Saha, Director, Product Management

The Container-Native Storage (CNS) offering for OpenShift Container Platform from Red Hat has seen wide customer adoption in the past year or so. Customers are deploying it in a wide variety of environments that include bare metal, virtualized, and private and public clouds. It mimics the diverse spread of environments in which OpenShift itself gets deployed—which is also CNS’s key strength (i.e., being able to back OpenShift wherever it runs—see the following graphic).

During the past year of customer adoption of CNS, we’ve observed some key trends that are unique to OpenShift/Kubernetes storage and that we’ll highlight in a series of blogs. This blog series will also include business and technical solutions that have worked for our customers.

In this blog post, we examine a trend where customers have adopted CNS as a storage management fabric that sits in between the OpenShift Container Platform and their classic storage gear. This particular adoption pattern continues to have a really high uptake, and there are sound business and technical reasons for doing this, which we’ll explore here.

First the Solution (The What): We’ve seen a lot of customers deploying CNS to serve out storage from their existing storage arrays/SANs and other traditional storage, as illustrated in the following graphic. In this scenario, block devices from existing storage arrays are served out with our CNS software running in VMs or containers/pods to OpenShift. In this case, the storage for the VMs that runs OpenShift is still served by the arrays.

Now the Why: Initially, this seemed backward; after all, software-defined storage solutions like CNS are meant to run on x86 bare metal (on premises) or in the public cloud. But further investigation revealed some interesting discoveries.

While OpenShift users and ops teams consume infrastructure, they typically do not manage infrastructure. In on-premise environments, OpenShift ops teams are highly dependent on other infrastructure teams for virtualization, storage, and operating systems for the infrastructure on which they run OpenShift. Similarly, in public clouds they consume the native compute and storage infrastructure available in these clouds.

As a consequence, they are highly dependent on storage infrastructure that is already in place. Typically, it’s very difficult to justify a storage server purchase for a new use case (OpenShift storage, in this case) when storage was already procured a year or more ago from a traditional storage vendor. The issue is that this traditional storage was not designed for, nor intended to be used with, containers, and the budget for storage has mostly been spent. This has driven OpenShift operations teams to adopt CNS as a storage management fabric that sits between their OpenShift Container Platform deployment and their existing storage array. The inherent flexibility of Red Hat Gluster Storage, leveraged here in the form of CNS, enables it to aggregate and pool block devices attached to a VM and serve them out to OpenShift workloads. OpenShift operations teams can now have the best of both worlds: they can repurpose the storage array that is already in place on premises while actually consuming CNS, which operates as a management fabric offering the latest in features, functionality, and manageability with deep integration with the OpenShift platform.

In addition to business reasons, there are also various technical reasons that these OpenShift operations teams are adopting CNS. These include, but are not limited to:

  • Lack of deep integration of their existing storage arrays with OpenShift Container Platform
  • Even if their traditional storage array has rudimentary integration with OpenShift, very likely it has limited feature support, which renders it unusable with many OpenShift workloads (like lack of dynamic provisioning)
  • The roadmap of their storage arrays vendor may not match their current (or future) OpenShift/Kubernetes storage feature support needs, like lack of availability of a Persistent Volume (PV) resize feature
  • Needing a fully featured OpenShift storage solution for both OpenShift workloads and the OpenShift infrastructure itself. Many existing storage platforms can support one or the other, but not both. For instance, a storage array serving out Fibre Channel LUNs (plain block storage) can’t back an OpenShift registry, which needs shared storage access, usually provided by a file or object storage back end.
  • They seek a consistent storage consumption and management experience across hybrid and multiple clouds. Once they learn to implement and manage CNS from Red Hat in one environment, it’s repeatable in all other environments. They can’t use their storage array in the public cloud.

Using CNS from Red Hat is a win for OpenShift ops teams. They can get started with a state-of-the-art storage back end for OpenShift apps and infrastructure without needing to acquire new infrastructure for OpenShift Storage right away. They have the option to move to x86-based storage servers during the following budget cycle as they grow their OpenShift footprint and onboard more apps and customers to it. The experience with CNS serves them well if they choose to implement OpenShift and CNS in other environments like AWS, Azure, and Google Cloud.

Want to learn more?

For hands-on experience combining OpenShift and CNS, check out our test drive, a free, in-browser lab experience that walks you through using both.

Red Hat Summit 2018—It’s a wrap!

By Will McGrath, Product Marketing, Red Hat Storage

Wowzer! Red Hat Summit 2018 was a blur of activity. The quality and quantity of conversations with customers, partners, industry analysts, community members, and Red Hatters was unbelievable. This event has grown steadily the past few years to over 7,000 registrants this year. From a Storage perspective, this was the largest presence ever in terms of content and customer interaction.

Key announcements

For Storage, we made two key announcements during Red Hat Summit. The first was around Red Hat Storage One, a pre-configured offering engineered with our server partners, announced last week. If you didn’t catch Dustin Black’s blog post that goes into the detail of the solution, check it out.

The second announcement, which occurred this week, highlighted the momentum in building a storage offering that provides a seamless developer experience and unified orchestration for containers. There are now more than 150 customers worldwide that have adopted Red Hat’s container-native storage solutions to enable their transition to the hybrid cloud, including Vorwerk and innovation award winner Lufthansa Technik.  

We featured a number of customer success stories, including Massachusetts Open Cloud, which worked with Boston Children’s Hospital to redefine medical-image processing using Red Hat Ceph Storage.

If you’d like to keep up on the containers news, check out our blog post from Tuesday and this week’s news around CoreOS integration into Red Hat OpenShift. You might also like to check out the news around customers deploying OpenShift on Red Hat infrastructure, including OpenStack, through container-based application development and tightly integrated cloud technologies.

Storage expertise on display

On the morning of the first day of Summit, Burr Sutter and team demoed a number of technologies, including Red Hat Storage, to showcase application portability across the open hybrid cloud. This morning, Erin Boyd and team ran some way cool live demos that showed the power of microservices and functions with OpenShift, Storage, OpenWhisk, Tensorflow, and a host of technologies across the hybrid cloud.

For those who had the opportunity to attend any of the 20+ Red Hat Summit storage sessions, you were able to learn how our Red Hat Gluster Storage and Red Hat Ceph Storage products appeal to both traditional and modern users. The roadmap presentations by both Neil Levine (Ceph) and Sayan Saha (Gluster and container-native storage) were very popular. Sage Weil, the creator of Ceph, gave a standing-room only talk on the future of storage. Some of these storage sessions will be available on the Red Hat Summit YouTube channel in the coming weeks.

We also had several partners demoing their combined solutions with Red Hat Storage, including Intel, Mellanox, Penguin Computing, QCT, and Supermicro. Commvault had a guest appearance during Sean Murphy’s Red Hat Hyperconverged Infrastructure talk, explaining what led them to decide to include it in their HyperScale Appliances and Software offerings.

This year, we conducted an advanced Ceph users’ group meeting the day before the conference with marquee customers participating in deep-dive discussions with product and community leaders. During the conference, the storage lockers have been a hit. We had great presence on the show floor, including the community booths. Our breakfast was well attended with over a hundred people registered and featured a panel of customers and partners.

Continue the conversation

During his appearance on The Cube by Silicon Angle, Red Hat Storage VP/GM Ranga Rangachari talked about his point of view on “UnStorage.” This idea, triggered by his original blog post on the subject, made quite a few waves at the event. Customers and analysts are responding positively to the idea of a new approach to storage in the age of hybrid cloud, hyperconvergence, and containers. Today is the last day to win prizes by tweeting  @RedHatStorage with the hashtag #UnStorage.

If you missed us in San Francisco, we’ll be at OpenStack Summit in Vancouver from May 21-24. Red Hat is a headline sponsor at Booth A19. If you’re attending, come check out our OpenStack and Ceph demo, and check back on our blog page for news from the event. We’ll also be hosting the “Craft Your Cloud” event on Tuesday, May 22, from 6-9 pm at Steamworks in Vancouver. For more information and to register, click here. For more fun and networking opportunities, join the Ceph and RDO communities for a happy hour on May 23 from 6-8 pm at The Portside Pub in Vancouver. For more information and to register for that event, click here.

On to Red Hat Summit 2019

You can check out the videos and keynotes from Red Hat Summit 2018 on demand. Next year, Red Hat Summit is being held in Boston again (it’s been rotating between San Francisco and Boston), so if you couldn’t attend in San Francisco this year, we urge you to plan to visit us in Boston next year. We hope you enjoyed our coverage of Red Hat Summit 2018, and we hope to see you in 2019.

More accolades for Red Hat Ceph Storage

By Daniel Gilfix, Product Marketing, Red Hat Storage

Once again, an independent analytic news source has confirmed what many of you already know: that Red Hat Ceph Storage stands alone in its commitment to technical excellence for the customers it serves. In the latest IT Brand Pulse survey covering Networking & Storage products, IT professionals from around the world have selected Red Hat Ceph Storage as the “Scale-out Object Storage Software” leader in all categories. This includes price, performance, reliability, service and support, and innovation. The honors follow a pattern of recognition from IT Brand Pulse, which bestowed the leadership tag on Red Hat Ceph Storage in 2017, 2015, and 2014 and named Red Hat the “Service and Support” leader in 2016.

The report documented the results of the independent annual survey, conducted in March 2018, that polled IT professionals on their perception of excellence in eleven different categories. Red Hat Ceph Storage earned ratings visibly head and shoulders above the competition, including more than a 2X differential over Scality and VMware.

Source: IT Brand Pulse, https://itbrandpulse.com/it-pros-vote-2018-networking-storage-brand-leaders/

It feels like just yesterday!

This latest third party validation comes on the heels of Red Hat Ceph Storage being named as a finalist in Storage Magazine and SearchStorage’s 2017 Products of the Year competition in late January 2018. Here, the evaluation was based on Red Hat Ceph Storage v2.3, one that made great strides in the areas of connectivity and containerization, including an NFS gateway to an S3-compatible object interface and compatibility with the Hadoop S3A plugin.

Red Hat Ceph Storage 3 carries the baton

IT professionals voting in this year’s IT Brand Pulse survey were able to consider newer features in the important Red Hat Ceph Storage 3 release that addressed a series of major customer challenges in object storage and beyond. We delivered full support for file-based access via CephFS, expanded ties to legacy storage environments through iSCSI, pumped fuel into our containerization options with containerized storage daemons (CSDs) for 25% hardware deployment savings, and introduced an easier monitoring interface and additional layers of automation for more self-maintaining deployments.

See you at Red Hat Summit!

Ceph booth at Red Hat Summit 2018

As usual, the real testament to our success is the continued satisfaction of our customer base, the ones who are increasingly choosing Red Hat Ceph Storage for modern use cases like AI and ML, rich media, data lakes, hybrid cloud infrastructure based on OpenStack, and traditional backup and restore.

Ceph user group at Red Hat Summit 2018

We look forward to discussing deployment options and whether Red Hat Ceph Storage might be right for you this week at Red Hat Summit. There’s still so much more to go! Catch us at one of the following sessions in Moscone West:

Today (Wednesday, May 9)

Tomorrow (Thursday, May 10)

Container-native storage from Red Hat is on a roll at Red Hat Summit 2018!

By Steve Bohac, Product Marketing, Red Hat Storage

It’s Red Hat Summit week, with this year’s edition taking place in San Francisco! As always, Red Hat has a plethora of announcements this week.

If you haven’t already heard the news, yesterday we announced substantial customer adoption momentum with container-native storage from Red Hat. Customers such as Lufthansa Technik, Aragonesa de Servious Telematico (AST), Generali Switzerland, IHK-GfI, and Vorwerk (amongst many more) are using Red Hat OpenShift Container Platform for cloud-native applications and are representative of how organizations are seeking out scalable, fully integrated, developer friendly storage for containers.

Based on Red Hat Gluster Storage, container-native storage from Red Hat offers these organizations scalable, persistent storage for containers across hybrid clouds with increased application portability. Tightly integrated with Red Hat OpenShift Container Platform, container-native storage from Red Hat can be used to persist not only application data but data for logging, metrics, and the container registry. The deep integration with Red Hat OpenShift Container Platform helps developers easily provision and manage elastic storage for applications and offers a single point of support. Customers use container-native storage to persist data for a variety of applications, including SQL and NoSQL databases, CI/CD tools, web serving, and messaging applications.

Organizations using container-native storage from Red Hat can benefit from simplified management, rapid deployment, and a single point of support. The versatility of container-native storage from Red Hat can enable customers to run cloud-native applications in containers, on bare metal, in virtualized environments, or in the public cloud.

For those of you attending Red Hat Summit this week, as always we know you love breakout sessions to learn more about Red Hat solutions—and we have a bunch covering container-native storage from Red Hat! Don’t forget to get your raffle tickets at each of the storage sessions you attend. Here’s what the lineup of container-native storage from Red Hat sessions looks like:

(All in Moscone West unless otherwise noted)

Tuesday, May 8

Thursday, May 10

Want to learn more?

For hands-on experience combining OpenShift and container-native storage, check out our test drive, a free, in-browser lab experience that walks you through using both.

Happy Red Hat Summit! Hope to see you this week!

Five ways to experience UnStorage at Red Hat Summit

Welcome to Red Hat Summit 2018 in San Francisco! The Storage team has been hard at work to make this the best possible showcase of technology and customers—and have fun while doing it. This year our presence is built around the theme: UnStorage for the modern enterprise.

What is UnStorage?

Today’s users need their data so accessible, so scalable, so versatile that the old concept of “storing” it seems counterintuitive. Perhaps a better way of describing the needs of the modern enterprise is UnStorage, as outlined in this blog post by Red Hat Storage VP and GM, Ranga Rangachari.

Five ways to experience UnStorage at Red Hat Summit

  1. Content is king: We have 24 sessions packed with storage knowledge, best practices, and success stories. Over 21 Red Hat Storage customers will be featured at the event, including on a panel at our breakfast (open to all attendees) on Wednesday at 7 am at the Marriott Marquis. Learn how some of the most innovative enterprises leverage the power of UnStorage to solve their scale and agility challenges.
  2. Without hardware partners, it’s like clapping with one hand: By definition, the success of software-defined storage hinges on the strength of the hardware ecosystem. Since the storage controller software is only half the solution, it’s important to have deep engineering investment with hardware and component vendors to build rock-solid solutions for customers. With partners like Supermicro, Mellanox, Penguin Computing, Intel, Commvault, and QCT, all featured at the conference, Red Hat Storage enables greater customer choice and openness—a key tenet of UnStorage.
  3. Explore your storage curiosity: UnStorage is all about breaking the rules to make things better. You’ll find a lot of creative ideas that are off the beaten track. Just as UnStorage is ubiquitous—it stretches across private, public, and hybrid cloud boundaries—it’s hard to miss Storage at the conference. You can find storage lockers near the expo entrance where you can drop off backpacks and charge phones while you attend sessions. Or enter to win one of two Star Wars collector edition drones by attending sessions or visiting the booth. Stop by the Storage Launch Pad to play online games, take surveys, and pick up a ton of giveaways, including two golden tickets handed out every day, which will afford you a special set of prizes.
  4. Test drive storage: Kick the tires on UnStorage with one of three test drives for Ceph, Gluster, and OpenShift Ops. As the name suggests, software-defined storage is completely decoupled from hardware, making it easy to test and deploy in the cloud. On the other side of the deployment spectrum, you can also try out the sizing tool for Red Hat Storage One, our single SKU pre-configured system announced last week. Stop by one of four Storage pods on the expo floor for demos and conversations with Storage experts.
  5. The proof of the pudding: Stop by Thursday’s keynote with CTO Chris Wright and live demos by Burr Sutter and team featuring container-native storage baked into Red Hat platforms such as OpenShift. UnStorage is as invisible as it is pervasive. Modern enterprises demand that storage be fully integrated into compute platforms for easier management and scale. With container-native storage surpassing 150 customers in the last year alone, learn how customers such as Schiphol, FICO, and Macquarie Bank are building next-generation hybrid clouds with Red Hat technologies.

We’re not all-work-all-the-time at Red Hat Storage, though. Join us at the community happy hour or the hybrid cloud infrastructure party on Tuesday to blow off some steam during a long week. Our social media strategist, Colleen Corrice, is running a way cool Twitter contest: All you have to do is post a picture at a Storage session or booth @RedHatStorage with the hashtag #UnStorage to receive a T-shirt and be included in a drawing for a personal planetarium.

Finally, check out this infographic on all things UnStorage @ Red Hat Summit. Please check back for a daily blog through this week. We hope to see you at Red Hat Summit 2018.

Container-native storage 3.9: Enhanced day-2 management and unified installation within OpenShift Container Platform

By  Annette Clewett, Humble Chirammal, Daniel Messer, and Sudhir Prasad

With today’s release of Red Hat OpenShift Container Platform 3.9, you will now have the convenience of deploying Container-Native Storage (CNS) 3.9 built on Red Hat Gluster Storage as part of the normal OpenShift deployment process in a single step. At the same time, major improvements in ease of operation have been introduced to give you the ability to monitor provisioned storage consumption, expand persistent volume (PV) capacity without downtime, and use a more intuitive naming convention for persistent volume names.

Continue reading “Container-native storage 3.9: Enhanced day-2 management and unified installation within OpenShift Container Platform”

Product of the Year finalist

By Daniel Gilfix, Red Hat Storage

For the second year in a row, Red Hat Ceph Storage has been named as a finalist in Storage Magazine and SearchStorage’s 2017 Products of the Year competition. Whereas in 2016 the honor was bestowed upon what was arguably the most important product release since Ceph came aboard the Red Hat ship, this year’s candidate was Red Hat Ceph Storage 2.3, a point release. This means a lot to us, but as a reader, perhaps a current or prospective customer, why should you care?

Excellent question, I must say, since normally we don’t like to boast. Our focus here at Red Hat is on the needs, experiences, and ultimate satisfaction of those who use our solutions. And given the evolution of Red Hat Ceph Storage since its acquisition from Inktank, the storage vendor, by Red Hat, the IT vendor, one would hope that we’re making progress.

Source: Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Conflict_Resolution_in_Human_Evolution.jpg

The fact that Red Hat Ceph Storage 2.3 was recognized as among those reflecting the latest trends in flash, cloud, and container technologies is a good sign that this is true. More important validation, however, comes from customers like Produban and UK Cloud, who are incorporating the product into broad Red Hat solutions. It also comes from those like Monash University and CLIMB, who can appreciate improvements to versatility, connectivity, and flexibility, like the NFS gateway to an S3-compatible object interface, compatibility testing with the Hadoop S3A plugin, and a containerized version of the product.

Even more uplifting from a user perspective today is the fact that v2.3 has already been superseded by Red Hat Ceph Storage 3, a more substantive advance into the realm of object storage that addresses a few key customer requirements while making adoption less challenging. For example, the product rounded out its value as a cost- and resource-saving unified storage platform with full support for file-based access (CephFS) and links to legacy storage environments through iSCSI. Containerization was advanced to include CSDs, enabling nearly 25% hardware deployment savings and more predictable performance through the co-location of storage daemons. And we added a snazzy new monitoring interface and additional layers of automation to make deployments more self-maintaining. According to Olivier Delachapelle, Head of Data Center Category Management EMEIA at Fujitsu, “Red Hat Ceph Storage 3 is probably the most advanced software-defined storage solution combining extreme scalability, inherent disaster resilience, and significant price-capacity value.”

Snapshot of Red Hat Ceph Storage management console, top-level interface

In the end, we feel good about the public recognition, but we feel even better when our customers and partners are happy and have what they need to succeed. I encourage you to share your thoughts about where we’re on target and/or perhaps missing the boat. Ultimately, being part of an IT company means our storage solution can serve a role that was perhaps unimaginable before, and it supports our commitment to real-world deployment of the future of storage.