Latest release of Red Hat Gluster Storage enhanced with container support, file-based auto tiering, writable snapshots, and more

By Alok Srivastava, Senior Product Manager, Red Hat Gluster Storage and Data, Red Hat

Red Hat Gluster Storage 3.1.2 became generally available today, adding such key functionality as containerization and file-based auto tiering to its already feature-rich base. For detailed information on all the latest Red Hat Gluster Storage features and enhancements, see the documentation and read on here. For my interview on the topic, watch our video below.

[Video: interview on the Red Hat Gluster Storage 3.1.2 release]

Key feature enhancements

Red Hat Gluster Storage image on Red Hat’s container registry for persistent file storage for containerized applications

Persistent file-based storage from containerized Red Hat Gluster Storage can be consumed by containerized applications over the network (see the following figure). You can quickly bring up a Red Hat Gluster Storage container on Red Hat Enterprise Linux Atomic Host or on Red Hat Enterprise Linux by simply pulling the Red Hat Gluster Storage image from Red Hat’s container registry.

[Figure: containerized applications consuming persistent storage from a Red Hat Gluster Storage container]

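If you want a quick feel for that workflow, the hedged sketch below pulls and runs the containerized Red Hat Gluster Storage image with docker. The image path and run options shown are assumptions based on Red Hat container registry conventions, so confirm the exact names and required options in the product documentation.

    # Pull the Red Hat Gluster Storage image from Red Hat's container registry
    # (image path is an assumption; verify it in the documentation):
    docker pull registry.access.redhat.com/rhgs3/rhgs-server-rhel7

    # Run the storage container with host networking and a host directory for
    # brick storage, so applications can consume it over the network:
    docker run -d --privileged --net=host \
        -v /mnt/bricks:/mnt/bricks \
        registry.access.redhat.com/rhgs3/rhgs-server-rhel7
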
File-based auto tiering to leverage innovation in flash and HDD design

The file-based auto tiering feature of Red Hat Gluster Storage 3.1.2, initially available as a tech preview and supported only on Red Hat Enterprise Linux 7-based Red Hat Gluster Storage, places frequently accessed, or “hot,” data on fast media (e.g., flash SSDs) and seldom-accessed, or “cold,” data on higher-capacity but slower spinning media (see the following figure).

[Figure: tiered volume placing hot data on fast media (SSDs) and cold data on high-capacity HDDs]

Tiering combines SSDs and HDDs to enable intelligent data placement, boosting both performance and cost efficiency. For example, a new or existing volume may be dispersed (erasure coded) on HDDs while the hot tier is distributed on SSDs. The tiering “attach” operation, which is transparent to applications, converts an existing volume into the cold tier; together, the two tiers form a single tiered volume.
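
To make the “attach” flow concrete, here is a hedged sketch that attaches an SSD-backed hot tier to an existing volume. The volume name, hosts, and brick paths are hypothetical, and the exact tiering syntax can vary by release, so treat it as an illustration rather than the documented procedure.

    # Attach a replicated, SSD-backed hot tier to an existing volume named
    # "coldvol" (hypothetical); the existing bricks become the cold tier:
    gluster volume tier coldvol attach replica 2 \
        ssd1:/bricks/hot/brick1 ssd2:/bricks/hot/brick1

    # Check the state of the tier and file promotion/demotion activity:
    gluster volume tier coldvol status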

Writable snapshots with shared backend LVM store

This release of Red Hat Gluster Storage supports writable snapshots: clones of snapshotted volumes with read/write permissions. Until now, only thin, LVM-based, read-only snapshots were available in the product. The writable clones, which are space efficient and instantly created, share the backend LVM store with their parent snapshots (see the following figure).

[Figure: writable snapshot clones sharing the backend LVM store with their parent snapshots]

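As a minimal sketch of the workflow, using hypothetical volume and snapshot names (and assuming the usual prerequisites, such as thinly provisioned LVM bricks), the clone is created from a snapshot and then started like any other volume:

    # Take a read-only snapshot of an existing volume and activate it:
    gluster snapshot create snap1 vol1
    gluster snapshot activate snap1

    # Create a writable, space-efficient clone that shares the backend LVM
    # store with the snapshot, then start it like any other volume:
    gluster snapshot clone clone1 snap1
    gluster volume start clone1
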
Other feature enhancements

Bit rot detection status

The bit rot detection functionality in Red Hat Gluster Storage periodically scans all data bricks, compares checksums, and identifies (and in some cases proactively repairs) corrupted files. This release augments the feature so that you can easily identify corrupted files and take any necessary corrective action.
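
For example, checking a volume for corruption might look like the following hedged sketch; “vol1” is a hypothetical volume name, and the exact status output varies by release.

    # Enable bit rot detection and background scrubbing on the volume:
    gluster volume bitrot vol1 enable

    # Review scrubber progress and any corrupted files it has flagged:
    gluster volume bitrot vol1 scrub status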

Faster SMB access

SMB shares backed by Red Hat Gluster Storage now perform better for streaming workloads. With this release, asynchronous IO from Samba to Gluster is supported, and the relevant option is enabled by default in smb.conf. We have seen up to a 100% performance increase over the previous release in our performance labs!
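
For illustration only (the shipped configuration already enables this by default), a share section for a Gluster volume exported through Samba’s vfs_glusterfs module might look roughly like the sketch below; the share and volume names are hypothetical.

    [gluster-vol1]
        vfs objects = glusterfs
        glusterfs:volume = vol1
        path = /
        read only = no
        # Non-zero values let Samba issue reads and writes to Gluster
        # asynchronously, which benefits streaming workloads:
        aio read size = 1
        aio write size = 1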

Offline console installation

Offline installation of the Red Hat Gluster Storage Console is now possible through an OVA image. You can import the Red Hat Gluster Storage Console OVA image into Red Hat Enterprise Virtualization Manager and install the console software on virtual machines.

Red Hat Gluster Storage now available in Google Cloud Platform

By Sayan Saha, Head of Product Management, Red Hat Gluster Storage and Data, Red Hat

Today we announced the availability of Red Hat Gluster Storage in Google Cloud Platform as a fully supported offering. Red Hat Gluster Storage will give Google Cloud Platform users the ability to use a scale-out, POSIX-compatible, massively scalable, elastic file storage solution with a global namespace.

This offering will bring existing users of Red Hat Gluster Storage another supported public cloud environment in which they can run their POSIX-compatible file storage workloads. For their part, Google Cloud Platform users will have access to Red Hat Gluster Storage, which they can use for several cloud storage use cases, including active archives, rich media streaming, video rendering, web serving, data analytics, and content management. POSIX compatibility will give users the ability to move their existing on-premise applications to Google Cloud Platform without having to rewrite applications to a different interface.

[Figure: Red Hat Gluster Storage on Google Cloud Platform]

Enterprises can also migrate their data from an on-premise environment to Google Cloud Platform by leveraging the geo-replication capabilities of Red Hat Gluster Storage.

A Red Hat Gluster Storage node in Google Cloud Platform is created by attaching Google standard persistent disks (HDD) or solid-state persistent disks (SSD) to a Google Compute Engine (GCE) instance. Two or more such nodes make up a trusted storage pool. To help protect against unexpected failures, up to and including the loss of a single zone, the Red Hat Gluster Storage nodes that constitute the trusted storage pool should be instantiated across Google Cloud Platform zones within the same region.

Gluster volumes are created by aggregating available capacity from Red Hat Gluster Storage instances, and capacity can be dynamically expanded or shrunk to meet changing business demands. Additionally, Red Hat Gluster Storage provides geo-replication capabilities that asynchronously replicate data from one Google Cloud Platform region to another in a master-slave configuration, enabling disaster recovery for usage scenarios that need it.
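
A hedged sketch of that flow is shown below, using hypothetical instance names and brick paths (rhgs-node1, rhgs-node2, and a remote rhgs-dr-node1). It omits prerequisites such as formatting and mounting the persistent disks and setting up passwordless SSH for geo-replication.

    # From rhgs-node1: add the second node to the trusted storage pool.
    gluster peer probe rhgs-node2

    # Aggregate bricks on the attached persistent disks into a replicated
    # volume and start it:
    gluster volume create gcpvol replica 2 \
        rhgs-node1:/bricks/brick1/data rhgs-node2:/bricks/brick1/data
    gluster volume start gcpvol

    # Asynchronously replicate the volume to a slave volume in another GCP
    # region for disaster recovery (master-slave configuration):
    gluster volume geo-replication gcpvol rhgs-dr-node1::drvol create push-pem
    gluster volume geo-replication gcpvol rhgs-dr-node1::drvol start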

Anticipated roadmap features, such as file-based tiering in Red Hat Gluster Storage, include the ability to create volumes from a mix of SSD- and HDD-based persistent disks, transparently providing storage tiering (hierarchical storage management) in the cloud.

Red Hat Gluster Storage in Google Cloud Platform will be accessed using the highly performant Gluster native (FUSE-based) client from Red Hat Enterprise Linux 6, Red Hat Enterprise Linux 7, and other Linux-based clients. Users may also use NFS or SMB.
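
Mounting a volume from a Linux client with the native client looks roughly like this hedged sketch, assuming the glusterfs-fuse client package is installed and reusing the hypothetical names from the sketch above.

    # Mount the Gluster volume over the network using the native FUSE client:
    mkdir -p /mnt/gcpvol
    mount -t glusterfs rhgs-node1:/gcpvol /mnt/gcpvol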

We are excited that users will be able to take advantage of all the Red Hat Gluster Storage features in Google Cloud Platform, including replication, snapshots, directory quotas, erasure coding, bit-rot scrubbing, and geo-replication; they now have a compelling option for their scale-out file storage use cases in Google’s cloud.

Red Hat Ceph Storage captures throne

For the second straight year, Red Hat Storage has received the prestigious Brand Leader award for Scale-out Object Storage Software from IT Brand Pulse. The 2015 winners were selected by IT professionals in an independent, non-vendor-sponsored survey of IT Brand Pulse’s million members and 100+ online IT groups.

[Image: Red Hat Storage Award]

Continue reading “Red Hat Ceph Storage captures throne”

The importance of partners

As we approach the holiday season, our latest partner announcement is appropriate. Why? Because, partner-wise, Red Hat Storage is setting up for a veritable feast.

Storage is an ingredient, not a meal

Let’s face it. Nobody ever deploys storage technologies by themselves. They’re always deployed alongside hardware platforms and workloads–they have to be. Without hardware platforms to run on top of and applications needing their services, storage technologies don’t really do anything.

Continue reading “The importance of partners”

Availability of Red Hat Gluster Storage in Microsoft Azure

By Sayan Saha, Head of Product Management, Red Hat Gluster Storage and Big Data, Red Hat

Today, we announced our plans to make several Red Hat offerings, including Red Hat Gluster Storage, available in Microsoft Azure as fully supported offerings. Red Hat Gluster Storage offers Azure users a scale-out, POSIX compatible, massively scalable, elastic file storage solution with a global namespace.

Continue reading “Availability of Red Hat Gluster Storage in Microsoft Azure”

Why Software Defined Storage is set to disrupt the world of containers. And why you should care.

Containers have the potential to be hugely disruptive – they are about to impact almost every process and person within the data center. Container technology will also impact how we think about storage for applications and microservices. In turn, software defined storage will impact how storage is dynamically provisioned and managed for containerized applications.

Continue reading “Why Software Defined Storage is set to disrupt the world of containers. And why you should care.”

Latest OpenStack user survey shows Ceph continues to dominate

According to the OpenStack Foundation’s sixth and most recent user survey, released just prior to this week’s OpenStack Summit in Tokyo, 62% of users selected Ceph RBD block storage for their OpenStack use cases, nearly three times the share of LVM (the default) and more than four times that of NetApp, the two closest alternatives. In production, a full 38% of respondents indicated that their OpenStack deployments depended on Ceph as their Cinder driver, with similar margins over the alternatives. A survey of the largest production clouds, those exceeding 1,000 cores, showed similar results, with 37% of users selecting Ceph RBD, followed by NetApp at 12%. Interestingly enough, with 9% of respondents using GlusterFS in production, development and quality assurance, or proof of concept across all OpenStack deployments, more than 70% of OpenStack users are relying on block storage championed by Red Hat Storage.

Continue reading “Latest OpenStack user survey shows Ceph continues to dominate”

Address security challenges with software-defined storage

Hand-in-hand with the explosive growth of data and cloud services comes a range of security threats, all of which can be addressed with software-defined storage. To learn how, register for the following webinars – the link is at the bottom of the post!

Continue reading “Address security challenges with software-defined storage”

Ceph Deployment at Target: Best Practices and Lessons Learned

In October 2014, the first Ceph environment at Target, a U.S.-based international chain of department stores, went live. In this insightful slide show (embedded at the end of this post), Will Boege, Sr. Technical Architect at Target, talks about the process, highlighting challenges faced and lessons learned in Target’s first ‘official’ OpenStack release.

Continue reading “Ceph Deployment at Target: Best Practices and Lessons Learned”

Ceph’s on the move… and coming to a city near you!

What’s that saying: Ya can’t keep a good person down? Well, ya can’t keep a good technology contained—and that’s why Ceph’s been appearing at venues across the globe.

Ceph Day just hit Chicago

Most recently—this past August—Ceph made its way to Chicago, home of Chicago-style pizza and hot dogs, a place known worldwide for its Prohibition-era ruckus as well as its present-day spirits and brews. There, at Ceph Day Chicago, Bloomberg’s Chris Jones, senior cloud infrastructure, architecture/DevOps, explained how Ceph helps power storage at the financial giant.

Continue reading “Ceph’s on the move… and coming to a city near you!”

The Top 5 Q&As From Sage Weil’s Recent Reddit AMA

Sage Weil, Red Hat’s chief architect for Ceph and co-creator of the project (among many other credentials), recently held an “ask me anything” session on Reddit. Though you can read the whole thing for yourself, here we’ve collected the top questions and answers for your edification. Read on!

Continue reading “The Top 5 Q&As From Sage Weil’s Recent Reddit AMA”

Check out our rich media deep-dive webinar – on demand!

Want to know more about rich media? In a recent webinar, Red Hat Senior Solutions Architect Kyle Bader took a deep dive into rich media and the unique demands it places on storage systems. We recap some highlights from the webinar here, but please register for the on-demand version here to get the full experience.

Continue reading “Check out our rich media deep-dive webinar – on demand!”