This blog post comes to us from Sayan Saha, Sr. Manager, Product Management, Storage & Data Business, Red Hat. In it, Sayan explores how Red Hat Storage's server solutions are evolving and adapting to the needs of today's businesses. Read on!


Red Hat released an update to its Storage Server product, based on the upstream GlusterFS project, in mid-January. It was the most significant update since the General Availability of Red Hat Storage Server 3.0 in October 2014. Red Hat Storage Server 3.0 update 3 (a.k.a. RHSS 3.0.3) delivered a broad set of functionality building on the Storage Server 3.0 major release.

While update 1 and update 2 of Red Hat Storage Server 3.0 delivered mostly bug fixes, along with RHEL 6.6 and Hortonworks Data Platform 2.1 support, this update goes beyond stability and performance fixes to enable a core set of storage functionality, something seldom seen in update releases. This illustrates Red Hat's commitment to delivering storage functionality for its storage server product as soon as it is ready, rather than waiting for the next big release.

With RHSS 3.0.3 we rounded out the GlusterFS volume snapshot functionality introduced in RHSS 3.0 with full support for user serviceable snapshots across the three major protocols supported by the storage server: NFS, SMB, and FUSE. With user serviceable snapshots you can easily navigate, access, and restore files from activated snapshots available on a Gluster volume without contacting the admin or performing a more disruptive volume restore from a snapshot. Activated snapshots appear in a hidden .snaps folder (inside every folder, except with SMB-based access where it is available only at the root of the share), which you can browse to restore the files you need without assistance from the admin. Your ability to restore files depends on how frequently snapshots are scheduled on the relevant Gluster volume.
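
To give a feel for the workflow, here is a minimal sketch written in Python around the gluster CLI. The volume name, snapshot name, mount point, and file paths are hypothetical, and the sketch assumes a FUSE mount of the volume; exact snapshot directory names under .snaps can vary by version (a timestamp suffix may be appended).

```python
import subprocess
import shutil

VOLUME = "engineering"        # hypothetical Gluster volume name
MOUNT = "/mnt/engineering"    # hypothetical FUSE mount point of that volume

# One-time admin step: enable user serviceable snapshots on the volume.
subprocess.run(["gluster", "volume", "set", VOLUME, "features.uss", "enable"],
               check=True)

# Take and activate a snapshot so it becomes visible under .snaps.
subprocess.run(["gluster", "snapshot", "create", "snap1", VOLUME], check=True)
subprocess.run(["gluster", "snapshot", "activate", "snap1"], check=True)

# A user can now restore a single file by copying it back from the
# snapshot view, without any help from the admin.
shutil.copy(f"{MOUNT}/.snaps/snap1/reports/q4.xlsx",
            f"{MOUNT}/reports/q4.xlsx")
```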

The much-requested native InfiniBand support via Remote Direct Memory Access (RDMA) is out of Technology Preview status and fully supported in this update of the storage server. You can now use RDMA in production for communication between Gluster storage bricks (server pool maintenance and some client-to-server traffic) as well as between Gluster native clients and the servers that make up the trusted storage pool (data traffic). This lets you run future Red Hat Storage Server deployments on InfiniBand infrastructure you may already have in place. RDMA-based access from clients is supported via the Gluster native client.
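
As a rough illustration, the sketch below (again Python wrapping the gluster CLI, with hypothetical volume, brick, and mount-point names) creates a volume with the RDMA transport and mounts it from a client with the Gluster native client installed. Treat it as an outline of the steps, not a verified deployment recipe.

```python
import subprocess

VOLUME = "rdmavol"                          # hypothetical volume name
BRICKS = ["server1:/rhgs/brick1/rdmavol",   # hypothetical brick paths on two
          "server2:/rhgs/brick1/rdmavol"]   # servers in the trusted pool

# Create a replicated volume that uses RDMA as its transport, then start it.
subprocess.run(["gluster", "volume", "create", VOLUME, "replica", "2",
                "transport", "rdma", *BRICKS], check=True)
subprocess.run(["gluster", "volume", "start", VOLUME], check=True)

# On a client, mount the volume with the Gluster native (FUSE) client,
# selecting the RDMA transport via the mount option.
subprocess.run(["mount", "-t", "glusterfs", "-o", "transport=rdma",
                f"server1:/{VOLUME}", "/mnt/rdmavol"], check=True)
```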

We also refreshed Red Hat Storage Server's object access API, which it has supported since its launch in June 2012, to comply with OpenStack Swift's Icehouse release. Red Hat Storage Server chose Swift as the object access API from inception based on its open source roots and community-driven innovation roadmap. The refresh to Icehouse adds support for object expiration. With this API you can access files as objects (and objects as files) without needing to do anything special. Such cross-protocol access opens up unique possibilities for use cases that are primarily file stores but need object access for a certain class of users.
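
Here is a minimal sketch of what that looks like from a client, using Python and the standard Swift REST API. The storage URL, account, container, and token values are hypothetical and would come from your own deployment and auth setup; the X-Delete-After header is standard Swift and maps to the object expiration support mentioned above.

```python
import requests

# Hypothetical values: the storage URL and token come from your Swift auth
# step; with Gluster-backed object access, the account typically maps to a
# volume and the container to a top-level directory in it.
STORAGE_URL = "http://rhss-node:8080/v1/AUTH_engineering"
TOKEN = "AUTH_tk_hypothetical_token"

# Upload a file as an object and ask Swift to expire (delete) it after
# 24 hours via the standard X-Delete-After header (value in seconds).
with open("reports/q4.xlsx", "rb") as f:
    resp = requests.put(
        f"{STORAGE_URL}/reports/q4.xlsx",
        data=f,
        headers={"X-Auth-Token": TOKEN, "X-Delete-After": "86400"},
    )
resp.raise_for_status()

# The same object is also visible as the file reports/q4.xlsx on the
# Gluster volume over NFS, SMB, or FUSE, with no extra step required.
```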

In this update we started shipping a utility called gstatus, which has been used in the upstream Gluster community for some time. The gstatus utility provides an easy-to-use, high-level view of the health of a trusted storage pool with a single command. It gathers information by running individual GlusterFS commands against the logical entities that make up a trusted storage pool (nodes, volumes, bricks, and so on), then aggregates and distills the collected data into an easily understandable, high-level health status of the pool.
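
To give a flavor of that kind of aggregation, here is a deliberately crude, hypothetical Python sketch that runs two standard GlusterFS commands and boils their output down to a one-line summary. It is not how gstatus itself is implemented; gstatus does a far more thorough job.

```python
import subprocess

def run(cmd):
    """Run a gluster CLI command and return its stdout as text."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Pull raw status from individual GlusterFS commands describing the pool.
peers = run(["gluster", "peer", "status"])
volumes = run(["gluster", "volume", "info"])

# Distill the output into a rough one-line health summary.
connected = peers.count("Peer in Cluster (Connected)")
started = volumes.count("Status: Started")
total_vols = volumes.count("Volume Name:")

print(f"peers connected: {connected}, volumes started: {started}/{total_vols}")
```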

As always, we look forward to delivering new functionality between major releases of the Red Hat Storage Server to enable customers and partners to use it in an increasing number of use cases.