Red Hat Ceph Storage 3.2 is now available! The big news with this release is full support for the BlueStore Ceph backend, offering significantly increased performance for both object and block applications.

BlueStore was first made available as a Technology Preview in Red Hat Ceph Storage 3.1, and Red Hat has since conducted extensive performance tuning and testing to verify that it is ready for use in production environments. With the 3.2 release, Red Hat Ceph Storage has the attributes to suit a wide range of use cases and workloads, including:

  • Data analytics: As a data lake, Red Hat Ceph Storage uses object storage to deliver massive scalability and high availability to support demanding multitenant analytics workloads. Disparate analytics clusters can be consolidated to reduce total cost of ownership, lower administrative burden, and increase service levels. BlueStore helps improve performance, while support for erasure coding reduces the storage overhead of data protection compared with simple replication; a common 4+2 erasure-coded profile, for example, stores data at 1.5x raw capacity instead of the 3x required by triple replication.
  • Hybrid cloud applications: Red Hat Ceph Storage is ideal for on-premises storage clouds. Because Red Hat Ceph Storage supports the Amazon Web Services (AWS) Simple Storage Service (S3) interface, applications can access their storage with the same API in public and private clouds alike (see the code sketch after this list).
  • OpenStack applications: Red Hat Ceph Storage is a very popular storage backend for OpenStack. Red Hat Ceph Storage 3.2 can offer improved performance for OpenStack deployments, including Red Hat OpenStack Platform. Erasure coding for RADOS Block Device (RBD) is available as a Technology Preview in this release.
  • Backup target: A growing number of software vendors have certified their backup applications with Red Hat Ceph Storage as a backup storage target:
    • Veritas NetBackup for Symantec OpenStorage (OST) cloud backup - versions 7.7 and 8.0  
    • Rubrik Cloud Data Management (CDM) - versions 3.2 and later  
    • NetApp AltaVault - versions 4.3.2 and 4.4  
    • Trilio TrilioVault - version 3.0
    • Veeam Backup & Replication - version 9.x
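
For the hybrid cloud case above, here is a minimal sketch of accessing Red Hat Ceph Storage through its S3-compatible API with boto3. The endpoint URL, bucket name, and credentials are illustrative placeholders; substitute the RADOS Gateway address and keys from your own cluster.

```python
import boto3

# Point a standard S3 client at the cluster's RADOS Gateway endpoint.
# The endpoint and credentials below are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same calls work unchanged against AWS S3 or an on-premises Ceph cluster.
s3.create_bucket(Bucket="analytics-data")
s3.put_object(Bucket="analytics-data", Key="hello.txt", Body=b"hello from Ceph")
print(s3.get_object(Bucket="analytics-data", Key="hello.txt")["Body"].read())
```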

BlueStore performance

BlueStore is all about performance. For hard disk drive (HDD)-based clusters, BlueStore architecturally removes the double-write penalty incurred by the traditional FileStore backend: where FileStore writes each object twice, once to the journal and again to the backing filesystem, BlueStore writes data directly to the raw block device and keeps its metadata in an embedded RocksDB database. Additionally, BlueStore provides significant performance enhancements in configurations that use all solid-state drives (SSDs) or Non-Volatile Memory Express (NVMe) drives.
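
To illustrate why that matters, here is a back-of-the-envelope sketch of what the double write costs when the FileStore journal shares an HDD with the data. The drive speed is an assumed figure for illustration, not a measurement.

```python
HDD_SEQ_WRITE_MB_S = 110.0  # assumed raw sequential write bandwidth of one HDD

def client_write_bandwidth(raw_mb_s, writes_per_byte):
    """Effective client write bandwidth when every byte must be written
    `writes_per_byte` times to the same device."""
    return raw_mb_s / writes_per_byte

# FileStore: each write lands in the journal and then in the data partition.
filestore = client_write_bandwidth(HDD_SEQ_WRITE_MB_S, writes_per_byte=2)
# BlueStore: data is written once, directly to the block device.
bluestore = client_write_bandwidth(HDD_SEQ_WRITE_MB_S, writes_per_byte=1)

print(f"FileStore per-OSD write ceiling: ~{filestore:.0f} MB/s")
print(f"BlueStore per-OSD write ceiling: ~{bluestore:.0f} MB/s")
```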

The architectural shift to the BlueStore backend has already shown performance improvements in community Ceph distributions. Testing by Micron in 2018 demonstrated up to 2x performance increases with BlueStore over the traditional FileStore backend.

Micron conducted BlueStore vs. FileStore object testing and reported significantly higher throughput and lower latency.

4 MB objects:

  • 100% writes: 88% higher throughput, 47% lower average latency
  • 70%/30% reads/writes: 64% higher throughput, 40% lower average latency

Micron also conducted BlueStore vs. FileStore block testing and reported higher IOPS and lower latency.

4 KB random blocks:

  • 100% writes: 18% higher I/O operations per second (IOPS), 5% lower average latency, and 70%+ lower 99.999% tail latency
  • 70%/30% reads/writes: 14% higher IOPS, 80%+ lower read tail latency, and 70%+ lower write tail latency

Upgrades and new installs

Importantly, the BlueStore and FileStore backends coexist in Red Hat Ceph Storage 3.2. Existing Red Hat Ceph Storage 2.5 and 3.1 clusters retain the FileStore backend when upgrading to version 3.2, while newly created Red Hat Ceph Storage clusters default to the BlueStore backend. Customers who want to convert existing clusters to the BlueStore backend should contact Red Hat Support.
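
After an upgrade or a fresh install, administrators may want to confirm which backend each OSD is actually running. The short sketch below is one way to do that; it assumes the `ceph` CLI is installed on the host with a keyring that can read cluster metadata, and it is an illustration rather than a supported Red Hat tool.

```python
#!/usr/bin/env python3
"""Report whether each OSD in the cluster is using BlueStore or FileStore."""
import json
import subprocess
from collections import Counter

def osd_backends():
    # `ceph osd metadata` returns one JSON record per OSD; the
    # `osd_objectstore` field reports "bluestore" or "filestore".
    raw = subprocess.check_output(["ceph", "osd", "metadata", "--format", "json"])
    return {m["id"]: m.get("osd_objectstore", "unknown") for m in json.loads(raw)}

if __name__ == "__main__":
    backends = osd_backends()
    for osd_id in sorted(backends):
        print(f"osd.{osd_id}: {backends[osd_id]}")
    print("summary:", dict(Counter(backends.values())))
```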

For more information on how Red Hat Ceph Storage can tackle your toughest data storage challenges, please visit our Ceph product page.