Product of the Year finalist

By Daniel Gilfix, Red Hat Storage

For the second year in a row, Red Hat Ceph Storage has been named a finalist in Storage Magazine and SearchStorage’s 2017 Products of the Year competition. Whereas in 2016 the honor was bestowed on what was arguably the most important product release since Ceph came aboard the Red Hat ship, this year’s candidate was Red Hat Ceph Storage 2.3, a point release. This means a lot to us, but as a reader—perhaps a current or prospective customer—why should you care?

Excellent question, I must say, since normally we don’t like to boast. Our focus here at Red Hat is on the needs, experiences, and ultimate satisfaction of those who use our solutions. And given the evolution of Red Hat Ceph Storage since its acquisition from Inktank, a storage vendor, by Red Hat, an IT vendor, one would hope that we’re making progress.


The fact that Red Hat Ceph Storage 2.3 was recognized as among those reflecting the latest trends in flash, cloud, and container technologies is a good sign that this is true. More important validation, however, comes from customers like Produban and UKCloud, who are incorporating the product into broad Red Hat solutions. It also comes from those like Monash University and CLIMB, who can appreciate improvements to versatility, connectivity, and flexibility, such as the NFS gateway to an S3-compatible object interface, compatibility testing with the Hadoop S3A plugin, and a containerized version of the product.

Even more uplifting from a user perspective today is the fact that v2.3 has already been superseded by Red Hat Ceph Storage 3, a more substantive advance into the realm of object storage that addresses a few key customer requirements while making adoption less challenging. For example, the product rounded out its value as a cost- and resource-saving unified storage platform with full support for file-based access (CephFS) and links to legacy storage environments through iSCSI. Containerization was advanced to include containerized storage daemons (CSDs), enabling nearly 25% hardware deployment savings and more predictable performance through the colocation of storage daemons. And we added a snazzy new monitoring interface and additional layers of automation to make deployments more self-maintaining. According to Olivier Delachapelle, Head of Data Center Category Management EMEIA at Fujitsu, “Red Hat Ceph Storage 3 is probably the most advanced software-defined storage solution combining extreme scalability, inherent disaster resilience, and significant price-capacity value.”

Snapshot of Red Hat Ceph Storage management console, top-level interface

In the end, we feel good about the public recognition, but we feel even better when our customers and partners are happy and have what they need to succeed. I encourage you to share your thoughts about where we’re on target and where we may be missing the boat. Ultimately, being part of an IT company means our storage solution can serve a role that was perhaps unimaginable before, and it supports our commitment to real-world deployment of the future of storage.

The third one’s a charm

By Federico Lucifredi, Red Hat Storage

Red Hat Ceph Storage 3 is our annual major release of Red Hat Ceph Storage, and it brings great new features to customers in the areas of containers, usability, and raw technology horsepower. It includes support for CephFS, giving us a comprehensive, all-in-one storage solution in Ceph spanning block, object, and file alike. It introduces iSCSI support to provide storage to platforms like VMware ESX and Windows Server that currently lack native Ceph drivers. And we are introducing support for client-side caching with dm-cache.

On the usability front, we’re introducing new automation to manage the cluster with less user intervention (dynamic bucket sharding), a troubleshooting tool to analyze and flag invalid cluster configurations (Ceph Medic), and a new monitoring dashboard (Ceph Metrics) that brings enhanced insight into the state of the cluster.

Last, but definitely not least, containerized storage daemons (CSDs) drive a significant improvement in TCO through better hardware utilization.

Containers, containers, never enough containers!

With the 2.3 release in June 2017, after more than a year of open testing of tech preview images, we graduated to fully supporting our Ceph distribution running containerized in Docker application containers.

Red Hat Ceph Storage 3 raises the bar by introducing colocated CSDs as a supported configuration. CSDs drive a significant TCO improvement through better hardware utilization; the baseline object store cluster we recommend to new users spans 10 OSD storage nodes, 3 MON controllers, and 3 RGW S3 gateways. By allowing colocation, the smaller MON and RGW daemons can now run on the OSD nodes’ hardware, allowing users to avoid not only the capital expense of those extra servers but also the ongoing operational cost of managing them. Pricing that configuration using a popular hardware vendor, we estimate that users could experience a 24% hardware cost reduction or, alternatively, add 30% more raw storage for the same initial hardware invoice.
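
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The per-node prices are hypothetical placeholders, not the vendor quote referenced above, but they show how colocation yields savings in the ballpark we describe.

    # Baseline: 10 OSD nodes plus 6 smaller servers (3 MON + 3 RGW).
    # Colocated: the MON and RGW daemons run on the OSD nodes instead.
    OSD_NODE_PRICE = 12_000  # hypothetical price of a storage server
    AUX_NODE_PRICE = 6_000   # hypothetical price of a MON/RGW server

    baseline = 10 * OSD_NODE_PRICE + 6 * AUX_NODE_PRICE
    colocated = 10 * OSD_NODE_PRICE
    savings = baseline - colocated

    print(f"hardware cost reduction: {savings / baseline:.0%}")  # ~23%

    # Or reinvest the savings in more OSD nodes for the same invoice:
    extra_osds = savings // OSD_NODE_PRICE  # 3 more nodes on top of 10
    print(f"additional raw storage: {extra_osds / 10:.0%}")      # +30%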

“All nodes are storage nodes now!”

We are accomplishing this improvement by colocating any of the Ceph scale-out daemons on the storage servers, one per host. Containers reserve RAM and CPU allocations that protect both the OSD and the colocated daemon from resource starvation during rebalancing or recovery load spikes. We can currently colocate all the scale-out daemons except the new iSCSI gateway, but we expect that in the short term MON, MGR, RGW, and the newly supported MDS should take the lion’s share of these configurations.

As my marketing manager is fond of saying, all nodes are storage nodes now! Just as important, we can field a containerized deployment using the very same ceph-ansible playbooks our customers are familiar with and have come to love. Users can conveniently learn how to operate with containerized storage while still relying on the same tools—and we continue to support RPM-based deployments. So if you would rather see others cross the chasm first, that is totally okay as well—you can continue operating with RPMs and Ansible as you are accustomed to.
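
For a sense of what colocation looks like in practice, here is a minimal, hypothetical ceph-ansible inventory sketch: the MON and RGW groups simply list hosts that already appear in the OSD group, and with containerized deployment enabled in group_vars the playbooks run each daemon in its own container. Hostnames are placeholders.

    # Hypothetical inventory: MONs and RGWs colocated on OSD hardware.
    [mons]
    ceph-node[01:03].example.com

    [osds]
    ceph-node[01:10].example.com

    [rgws]
    ceph-node[04:06].example.com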

CephFS: now fully awesome

The Ceph filesystem, CephFS for friends, is the Ceph interface providing the abstraction of a POSIX-compliant filesystem backed by the storage of a RADOS object storage cluster. CephFS already achieved reliability and stability last year, but with this new version the MDS directory metadata service is fully scale-out, eliminating our last remaining concern about its production use. In Sage Weil’s own words, it is now fully awesome!

“CephFS is now fully awesome!” —Sage Weil

With this new version, CephFS is now fully supported by Red Hat. For details about CephFS, see the Ceph File System Guide for Red Hat Ceph Storage 3. While I am on the subject, I’d like to give a shout-out to the unsung heroes in our awesome storage documentation team: They have steadily introduced high-quality guides with every release, and our customers are surely taking notice.
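
POSIX compliance is the practical point here: applications need no Ceph-specific code to use CephFS. A minimal sketch, assuming CephFS is already mounted at /mnt/cephfs (a hypothetical mount point):

    import os

    # Ordinary file APIs work unchanged on a CephFS mount; metadata
    # operations like stat() are served by the scale-out MDS.
    path = "/mnt/cephfs/reports/2017-q4.txt"
    os.makedirs(os.path.dirname(path), exist_ok=True)

    with open(path, "w") as f:
        f.write("hello from RADOS-backed storage\n")

    print(os.path.getsize(path))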

iSCSI and NFS: compatibility trifecta

Earlier this year, we introduced the first version of our NFS gateway, allowing a user to mount an S3 bucket as if it were an NFS folder, for quick bulk import and export of data from the cluster, as literally every device out there speaks NFS natively. In this release, we’re enhancing the NFS gateway with support for NFS v3 alongside the existing NFS v4 support.

The remaining leg of our legacy compatibility plan is iSCSI. While iSCSI is not ideally suited to a scale-out system like Ceph, the use of multipathing makes the fit smoother than one would expect, as no explicit HA layer is needed to manage failover.

With Red Hat Ceph Storage 3, we’re bringing to GA the iSCSI gateway service that we have been previewing during the past year. While we continue to favor the LibRBD interface as it is more featureful and delivers better performance, iSCSI makes perfect sense as a fallback to connect VMware and Windows servers to Ceph storage, and generally anywhere a native Ceph block driver is not yet available. With this initial release, we are supporting VMware ESX 6.5, Windows Server 2016, and RHV 4.x over an iSCSI interface, and you can expect to see us adding more platforms to the list of supported clients next year as we plan to increase the reach of our automated testing infrastructure.

¡Arriba, arriba! ¡Ándale, ándale!

Red Hat’s famous Performance and Scale team has revisited client-side caching tuning with the new codebase and blessed an optimized configuration for dm-cache that can now be easily configured with ceph-volume, the new up-and-coming tool that is slated by the community to eventually give the aging ceph-disk a well-deserved retirement.

Making things faster is important, but equally important is insight into performance metrics. The new dashboard is well deserving of a blog post in its own right, but let’s just say that it delivers a significant leap in performance monitoring for Ceph environments, starting with the cluster as a whole and drilling down into individual metrics or individual nodes as needed to track down performance issues. Select users have been patiently testing our early builds with Luminous this summer, and their consistently positive feedback makes us confident you will love the results.

Linux monitoring has many flavors, and while we supply tools as part of the product, customers often want to integrate their existing infrastructure, whether it is Nagios alerting in very binary tones that something seems to be wrong, or another tool. For this reason, we joined forces with our partners at Datadog to introduce a joint configuration for SaaS monitoring of Ceph using Datadog’s impressive tools.

Get the stats

More than 30 features are landing in this release alongside our rebasing of the enterprise product to the Luminous codebase. These map to almost 500 bugs in our downstream tracking system and hundreds more upstream in the Luminous 12.2.1 release we started from. I’d like to briefly call attention to about 20 of them that our very dedicated global support team prioritized as the most impactful way to further smooth out the experience of new users and move forward on our march toward making Ceph ever more enterprise-ready and easy to use. This is our biggest release yet, and its timely delivery 3 months after the upstream freeze is an impressive achievement for our Development and Quality Assurance engineering teams.

As always, those of you with an insatiable thirst for detail should read the release notes next—and feel free to ping me on Twitter if you have any questions!

The third one’s a charm

Better economics through improved hardware utilization; great value-add for customers in the form of new access modes in file, iSCSI, and NFS compatibility; and improved usability alongside across-the-board technological advancement: these are the themes we tried to hit with this release. I think we delivered… but we aren’t done yet. We plan to send more stuff your way this year! Stay tuned.

But if you can’t wait to hear more about the new object storage features in Red Hat Ceph Storage 3, read this blog post by Uday Boppana.

The rise of object storage in the modern datacenter

Red Hat Ceph Storage 3 greatly advances object storage capabilities

By Uday Boppana, Red Hat Storage

From speaking with customers across industry verticals and geographies, Red Hat is finding that object storage is increasingly top of mind as enterprises address growing data volumes, regulatory pressure, and threats to data security.

Take financial services firms, for instance. Their IT teams fight multiple fires trying to appease internal and external stakeholders in a fast-moving industry. They are expected to provide lines of business with cost-effective, cloud-like services, from development frameworks, storage backup, and archive to sync and share, while also bridging traditional in-house banking applications with a plethora of cloud-native applications, and deploying all of them on a single storage platform to reduce costs. Satisfying these challenging goals requires a scalable, on-prem storage platform that can also be extended across hybrid cloud deployments, something that traditional file or block storage solutions cannot deliver.

Red Hat Ceph Storage 3 offers a unified, petabyte-scale solution that addresses these pain points. The newest release, announced late last year, adds much in terms of scalability, security features, ease of management, and lowered costs. It also enhances the multiprotocol support for file and object storage interoperability and migration that was introduced in Red Hat Ceph Storage 2.

Cost-effective private cloud backups

Red Hat Ceph Storage 3 helps customers modernize their backup infrastructures and reduce the cost of data backups for private cloud infrastructure through certified interoperability with Veritas NetBackup and Rubrik Cloud Data Management. These software offerings will back up to a Red Hat Ceph Storage cluster using its AWS-compatible S3 API. Details on supported versions are listed in the product’s compatibility guide.
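
For a flavor of what that integration looks like on the wire, here is a minimal sketch of writing a backup object to an RGW endpoint over the S3 API using Python and boto3. The endpoint URL, credentials, bucket, and file paths are all hypothetical.

    import boto3

    # Point a standard S3 client at the RGW gateway instead of AWS.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Create a target bucket and push a backup artifact into it.
    s3.create_bucket(Bucket="nightly-backups")
    s3.upload_file("/var/backups/db.dump", "nightly-backups",
                   "db/2018-01-15.dump")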

Better scale for file workloads, at lower cost

The benefits of object storage as a data storage platform are well documented, and expanded workload support keeps increasing its reach and applicability. Expanded support of the NFS gateway for RGW to include NFS v3 in addition to v4 means Red Hat Ceph Storage users can now gradually transition most NFS file workloads to a modern, scalable object storage platform without disruption, completing the migration only when their applications, tools, and management processes are ready.
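
A minimal sketch of the multiprotocol workflow this enables: a modern application writes through the S3 API while a legacy tool reads the same data through the NFS gateway mount. Endpoint, credentials, bucket, and mount point are hypothetical, and the bucket is assumed to already exist and be exported over NFS.

    import boto3

    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8080",
                      aws_access_key_id="ACCESS_KEY",
                      aws_secret_access_key="SECRET_KEY")

    # New application path: write the data as an S3 object.
    s3.put_object(Bucket="shared-data", Key="inventory.csv",
                  Body=b"sku,qty\nA100,42\n")

    # Legacy path: a host that mounted the gateway sees it as a file.
    with open("/mnt/rgw-nfs/inventory.csv") as f:
        print(f.read())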

Increased security for data assets

Red Hat Ceph Storage 3 enables greater security for at-rest data and supports permanent data deletion. Object-granular encryption is supported for data at rest using user-provided keys. This functionality can also be used to permanently delete an object: encrypt the object and shred the key before deleting the object itself.
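
Here is a minimal sketch of that pattern using the S3 SSE-C mechanism (customer-provided keys) with Python and boto3; the endpoint, credentials, and object names are hypothetical. Because the gateway never stores the key, destroying every copy of the key renders the ciphertext unrecoverable.

    import os
    import boto3

    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8080",
                      aws_access_key_id="ACCESS_KEY",
                      aws_secret_access_key="SECRET_KEY")

    key = os.urandom(32)  # 256-bit key held only by the client

    # Writing and reading back both require the customer-provided key.
    s3.put_object(Bucket="vault", Key="record.pdf", Body=b"...",
                  SSECustomerAlgorithm="AES256", SSECustomerKey=key)
    obj = s3.get_object(Bucket="vault", Key="record.pdf",
                        SSECustomerAlgorithm="AES256", SSECustomerKey=key)

    # Permanent deletion: destroy every copy of `key` first (crypto-
    # shredding), then delete the now-unreadable object.
    s3.delete_object(Bucket="vault", Key="record.pdf")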

Reduced storage costs by eliminating redundant object data

Inline, object-granular compression means redundant data within an object can be eliminated before it is saved to disk, reducing storage costs. The compression happens inline as data is written to RGW from the hosts.

Simpler data lifecycle management

Red Hat Ceph Storage 3 eases storage and data management through a policy-based S3 API framework for bucket and object lifecycle management. The Red Hat Ceph Storage object access API is fully compatible with the AWS S3 API and now adds support for the S3 bucket lifecycle API for object and version expiration.
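
As a concrete example, here is a hypothetical lifecycle policy set through the standard S3 API with Python and boto3, expiring objects under a prefix after 30 days; the endpoint, credentials, and bucket are placeholders.

    import boto3

    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8080",
                      aws_access_key_id="ACCESS_KEY",
                      aws_secret_access_key="SECRET_KEY")

    # Expire backup objects under the db/ prefix 30 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="nightly-backups",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-old-backups",
                "Filter": {"Prefix": "db/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }]
        },
    )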

The web-scale modern datacenter

In sum, as the modern datacenter relies more on web-scale infrastructure, object storage can help organizations capture much of the value of their broader digital transformation efforts in the development and applications space. As the hybrid cloud becomes a mainstream reality, standardizing on a scalable object storage solution that can span on-prem, private, and public clouds becomes ever more imperative to the success of the modern enterprise.

For more on the new and exciting features in Red Hat Ceph Storage 3, check out this blog post by Federico Lucifredi in our “Architects’ Corner.”