Leverage your existing storage investments with container-native storage

By Sayandeb Saha, Director, Product Management

The Container-Native Storage (CNS) offering for OpenShift Container Platform from Red Hat has seen wide customer adoption in the past year or so. Customers are deploying it in a wide variety of environments, including bare metal, virtualized, and private and public clouds. This mirrors the diverse spread of environments in which OpenShift itself gets deployed, which is also CNS’s key strength: it can back OpenShift wherever it runs (see the following graphic).

During the past year of customer adoption of CNS, we’ve observed some key trends that are unique to OpenShift/Kubernetes storage and that we’ll highlight in a series of blog posts. This series will also include business and technical solutions that have worked for our customers.

In this blog post, we examine a trend in which customers adopt CNS as a storage management fabric that sits between OpenShift Container Platform and their classic storage gear. This adoption pattern continues to see high uptake, and there are sound business and technical reasons for it, which we’ll explore here.

First the Solution (The What): We’ve seen a lot of customers deploying CNS to serve out storage from their existing storage arrays/SANs and other traditional storage, as illustrated in the following graphic. In this scenario, block devices from existing storage arrays are served out to OpenShift by our CNS software running in VMs or containers/pods. In this case, the storage for the VMs that run OpenShift is still served by the arrays.

Now the Why: Initially, it seemed backward that customers would do this; after all, software-defined storage solutions like CNS are meant to run on x86 bare metal (on premises) or in the public cloud. But further investigation revealed some interesting discoveries.

While OpenShift users and ops teams consume infrastructure, they typically do not manage it. In on-premises environments, OpenShift ops teams depend heavily on other infrastructure teams for the virtualization, storage, and operating systems on which they run OpenShift. Similarly, in public clouds they consume the native compute and storage infrastructure those clouds provide.

As a consequence, they are highly dependent on storage infrastructure that is already in place. It’s typically very difficult to justify a storage server purchase for a new use case (OpenShift storage, in this case) when storage was already procured from a traditional storage vendor a year or more ago. The issue is that this traditional storage was neither designed for nor intended to be used with containers, and the budget for storage has mostly been spent. This has driven OpenShift operations teams to adopt CNS as a storage management fabric that sits between their OpenShift Container Platform deployment and their existing storage array. What’s being leveraged here is the inherent flexibility of Red Hat Gluster Storage, the foundation of CNS: it can aggregate and pool block devices attached to a VM and serve them out to OpenShift workloads. OpenShift operations teams get the best of both worlds. They can repurpose the storage array already in place on premises while actually consuming CNS, which operates as a management fabric offering the latest features, functionality, and manageability with deep integration into the OpenShift platform.

In addition to business reasons, there are also various technical reasons that these OpenShift operations teams are adopting CNS. These include, but are not limited to:

  • Lack of deep integration of their existing storage arrays with OpenShift Container Platform
  • Even if their traditional storage array has rudimentary integration with OpenShift, it very likely has limited feature support (like a lack of dynamic provisioning), which renders it unusable with many OpenShift workloads; see the sketch after this list
  • The roadmap of their storage array vendor may not match their current (or future) OpenShift/Kubernetes storage feature needs, such as the lack of a Persistent Volume (PV) resize feature
  • Needing a fully featured OpenShift storage solution for OpenShift workloads as well as the OpenShift infrastructure itself. Many existing storage platforms can support one or the other, but not both. For instance, a storage array serving out Fibre Channel LUNs (plain block storage) can’t back an OpenShift registry, which needs shared storage access, usually provided by a file or object storage back end.
  • They seek a consistent storage consumption and management experience across hybrid and multi-cloud environments. Once they learn to implement and manage CNS from Red Hat in one environment, it’s repeatable in all other environments. They can’t take their storage array to the public cloud.
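
To make the dynamic provisioning point concrete, here’s a minimal sketch, using the Kubernetes Python client, of what consuming CNS-backed storage looks like once a CNS StorageClass is in place. The StorageClass name (glusterfs-storage) and namespace are hypothetical; adjust them for your cluster.

    from kubernetes import client, config

    config.load_kube_config()  # inside a cluster, use load_incluster_config()
    v1 = client.CoreV1Api()

    # A claim against a CNS-backed StorageClass; CNS provisions the volume
    # on demand, with no storage administrator in the loop.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="glusterfs-storage",  # hypothetical CNS class
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Gi"}
            ),
        ),
    )
    v1.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)

CNS satisfies the claim by carving a volume out of the pooled Gluster bricks; a traditional array without a dynamic provisioner would require an administrator to pre-create and bind the volume by hand.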

Using CNS from Red Hat is a win for OpenShift ops teams. They can get started with a state-of-the-art storage back end for OpenShift apps and infrastructure without needing to acquire new infrastructure for OpenShift Storage right away. They have the option to move to x86-based storage servers during the following budget cycle as they grow their OpenShift footprint and onboard more apps and customers to it. The experience with CNS serves them well if they choose to implement OpenShift and CNS in other environments like AWS, Azure, and Google Cloud.

Want to learn more?

For hands-on experience combining OpenShift and CNS, check out our test drive, a free, in-browser lab experience that walks you through using both.

Red Hat Summit 2018—It’s a wrap!

By Will McGrath, Product Marketing, Red Hat Storage

Wowzer! Red Hat Summit 2018 was a blur of activity. The quality and quantity of conversations with customers, partners, industry analysts, community members, and Red Hatters was unbelievable. The event has grown steadily over the past few years, to more than 7,000 registrants this year. From a Storage perspective, this was our largest presence ever in terms of content and customer interaction.

Key announcements

For Storage, we made two key announcements during Red Hat Summit. The first was around Red Hat Storage One, a pre-configured offering engineered with our server partners, announced last week. If you didn’t catch Dustin Black’s blog post that goes into the detail of the solution, check it out.

The second announcement, which occurred this week, highlighted the momentum in building a storage offering that provides a seamless developer experience and unified orchestration for containers. There are now more than 150 customers worldwide that have adopted Red Hat’s container-native storage solutions to enable their transition to the hybrid cloud, including Vorwerk and innovation award winner Lufthansa Technik.  

We featured a number of customer success stories, including Massachusetts Open Cloud, which worked with Boston Children’s Hospital to redefine medical-image processing using Red Hat Ceph Storage.

If you’d like to keep up on the containers news, check out our blog post from Tuesday and this week’s news around CoreOS integration into Red Hat OpenShift. You might also like to check out the news around customers deploying OpenShift on Red Hat infrastructure, including OpenStack, through container-based application development and tightly integrated cloud technologies.

Storage expertise on display

On the morning of the first day of Summit, Burr Sutter and team demoed a number of technologies, including Red Hat Storage, to showcase application portability across the open hybrid cloud. This morning, Erin Boyd and team ran some way cool live demos showing the power of microservices and functions with OpenShift, Storage, OpenWhisk, TensorFlow, and a host of technologies across the hybrid cloud.

Those who attended any of the 20+ Red Hat Summit storage sessions learned how our Red Hat Gluster Storage and Red Hat Ceph Storage products appeal to both traditional and modern users. The roadmap presentations by Neil Levine (Ceph) and Sayan Saha (Gluster and container-native storage) were very popular. Sage Weil, the creator of Ceph, gave a standing-room-only talk on the future of storage. Some of these storage sessions will be available on the Red Hat Summit YouTube channel in the coming weeks.

We also had several partners demoing their combined solutions with Red Hat Storage, including Intel, Mellanox, Penguin Computing, QCT, and Supermicro. Commvault had a guest appearance during Sean Murphy’s Red Hat Hyperconverged Infrastructure talk, explaining what led them to decide to include it in their HyperScale Appliances and Software offerings.

This year, we conducted an advanced Ceph users’ group meeting the day before the conference, with marquee customers participating in deep-dive discussions with product and community leaders. During the conference, the storage lockers have been a hit. We had a great presence on the show floor, including the community booths. Our breakfast, with over a hundred people registered, was well attended and featured a panel of customers and partners.

Continue the conversation

During his appearance on theCUBE by SiliconANGLE, Red Hat Storage VP/GM Ranga Rangachari talked about his point of view on “UnStorage.” This idea, sparked by his original blog post on the subject, made quite a few waves at the event. Customers and analysts are responding positively to the idea of a new approach to storage in the age of hybrid cloud, hyperconvergence, and containers. Today is the last day to win prizes by tweeting @RedHatStorage with the hashtag #UnStorage.

If you missed us in San Francisco, we’ll be at OpenStack Summit in Vancouver from May 21-24. Red Hat is a headline sponsor at Booth A19. If you’re attending, come check out our OpenStack and Ceph demo, and check back on our blog page for news from the event. We’ll also be hosting the “Craft Your Cloud” event on Tuesday, May 22, from 6-9 pm at Steamworks in Vancouver. For more information and to register, click here. For more fun and networking opportunities, join the Ceph and RDO communities for a happy hour on May 23 from 6-8 pm at The Portside Pub in Vancouver. For more information and to register for that event, click here.

On to Red Hat Summit 2019

You can check out the videos and keynotes from Red Hat Summit 2018 on demand. Next year, Red Hat Summit will be held in Boston again (it’s been rotating between San Francisco and Boston), so if you couldn’t attend in San Francisco this year, we urge you to plan to visit us in Boston next year. We hope you enjoyed our coverage of Red Hat Summit 2018, and we hope to see you in 2019.

More accolades for Red Hat Ceph Storage

By Daniel Gilfix, Product Marketing, Red Hat Storage

Once again, an independent industry analyst firm has confirmed what many of you already know: Red Hat Ceph Storage stands alone in its commitment to technical excellence for the customers it serves. In the latest IT Brand Pulse survey covering Networking & Storage products, IT professionals from around the world selected Red Hat Ceph Storage as the “Scale-out Object Storage Software” leader in all categories, including price, performance, reliability, service and support, and innovation. The honors follow a pattern of recognition from IT Brand Pulse, which bestowed the leadership tag on Red Hat Ceph Storage in 2017, 2015, and 2014, with 2016 noted for Red Hat as “Service and Support” leader.

The report documented the results of the independent March 2018 annual survey, which polled IT professionals on their perception of vendor excellence in eleven different categories. Red Hat Ceph Storage earned ratings visibly head and shoulders above the competition, including more than a 2X differential over Scality and VMware.

Source: IT Brand Pulse, https://itbrandpulse.com/it-pros-vote-2018-networking-storage-brand-leaders/

It feels like just yesterday!

This latest third-party validation comes on the heels of Red Hat Ceph Storage being named a finalist in Storage Magazine and SearchStorage’s 2017 Products of the Year competition in late January 2018. There, the evaluation was based on Red Hat Ceph Storage v2.3, a release that made great strides in the areas of connectivity and containerization, including an NFS gateway to an S3-compatible object interface and compatibility with the Hadoop S3A plugin.

Red Hat Ceph Storage 3 carries the baton

IT professionals voting in this year’s IT Brand Pulse survey were able to consider newer features in the important Red Hat Ceph Storage 3 release, which addressed a series of major customer challenges in object storage and beyond. We delivered full support for file-based access via CephFS, expanded ties to legacy storage environments through iSCSI, boosted our containerization options with containerized storage daemons (CSDs) for roughly 25% hardware deployment savings, and introduced an easier monitoring interface and additional layers of automation for more self-maintaining deployments.

See you at Red Hat Summit!

Ceph booth at Red Hat Summit 2018

As usual, the real testament to our success is the continued satisfaction of our customer base, the ones who are increasingly choosing Red Hat Ceph Storage for modern use cases like AI and ML, rich media, data lakes, hybrid cloud infrastructure based on OpenStack, and traditional backup and restore.

Ceph user group at Red Hat Summit 2018

We look forward to discussing deployment options and whether Red Hat Ceph Storage might be right for you this week at Red Hat Summit—there’s still so much more to go! Catch us at one of the following sessions in Moscone West:

Today (Wednesday, May 9)

Tomorrow (Thursday, May 10)

Container-native storage from Red Hat is on a roll at Red Hat Summit 2018!

By Steve Bohac, Product Marketing, Red Hat Storage

It’s Red Hat Summit week, with this year’s edition taking place in San Francisco! As always, Red Hat has a plethora of announcements this week.

If you haven’t already heard the news, yesterday we announced substantial customer adoption momentum for container-native storage from Red Hat. Customers such as Lufthansa Technik, Aragonesa de Servicios Telemáticos (AST), Generali Switzerland, IHK-GfI, and Vorwerk (among many more) are using Red Hat OpenShift Container Platform for cloud-native applications and are representative of how organizations are seeking out scalable, fully integrated, developer-friendly storage for containers.

Based on Red Hat Gluster Storage, container-native storage from Red Hat offers these organizations scalable, persistent storage for containers across hybrid clouds with increased application portability. Tightly integrated with Red Hat OpenShift Container Platform, container-native storage from Red Hat can be used to persist not only application data but data for logging, metrics, and the container registry. The deep integration with Red Hat OpenShift Container Platform helps developers easily provision and manage elastic storage for applications and offers a single point of support. Customers use container-native storage to persist data for a variety of applications, including SQL and NoSQL databases, CI/CD tools, web serving, and messaging applications.

Organizations using container-native storage from Red Hat can benefit from simplified management, rapid deployment, and a single point of support. The versatility of container-native storage from Red Hat can enable customers to run cloud-native applications in containers, on bare metal, in virtualized environments, or in the public cloud.

For those of you attending Red Hat Summit this week: We know you love breakout sessions for learning more about Red Hat solutions, and we have a bunch covering container-native storage from Red Hat! Don’t forget to get your raffle tickets at each of the storage sessions you attend. Here’s what the lineup of container-native storage sessions looks like:

(All in Moscone West unless otherwise noted)

Tuesday, May 8

Thursday, May 10

Want to learn more?

For hands-on experience combining OpenShift and container-native storage, check out our test drive, a free, in-browser lab experience that walks you through using both.

Happy Red Hat Summit! Hope to see you this week!


Five ways to experience UnStorage at Red Hat Summit

Welcome to Red Hat Summit 2018 in San Francisco! The Storage team has been hard at work to make this the best possible showcase of technology and customers—and have fun while doing it. This year our presence is built around the theme: UnStorage for the modern enterprise.

What is UnStorage?

Today’s users need their data so accessible, so scalable, so versatile that the old concept of “storing” it seems counterintuitive. Perhaps a better way of describing the needs of the modern enterprise is UnStorage, as outlined in this blog post by Red Hat Storage VP and GM, Ranga Rangachari.

Five ways to experience UnStorage at Red Hat Summit

  1. Content is king: We have 24 sessions packed with storage knowledge, best practices, and success stories. More than 21 Red Hat Storage customers will be featured at the event, including on a panel at our breakfast (open to all attendees) on Wednesday at 7 a.m. at the Marriott Marquis. Learn how some of the most innovative enterprises leverage the power of UnStorage to solve their scale and agility challenges.
  2. Without hardware partners, it’s like clapping with one hand: By definition, the success of software-defined storage hinges on the strength of the hardware ecosystem. Since the storage controller software is only half the solution, it’s important to have deep engineering investment with hardware and component vendors to build rock-solid solutions for customers. With partners like Supermicro, Mellanox, Penguin Computing, Intel, Commvault, and QCT, all featured at the conference, Red Hat Storage enables greater customer choice and openness—a key tenet of UnStorage.
  3. Explore your storage curiosity: UnStorage is all about breaking the rules to make things better. You’ll find a lot of creative ideas that are off the beaten track. Just as UnStorage is ubiquitous—it stretches across private, public, and hybrid cloud boundaries—it’s hard to miss Storage at the conference. You can find storage lockers near the expo entrance where you can drop off backpacks and charge phones while you attend sessions. Or enter to win one of two Star Wars collector edition drones by attending sessions or visiting the booth. Stop by the Storage Launch Pad to play online games, take surveys, and pick up a ton of giveaways, including two golden tickets handed out every day, which will afford you a special set of prizes.
  4. Test drive storage: Kick the tires on UnStorage with one of three test drives for Ceph, Gluster, and OpenShift Ops. As the name suggests, software-defined storage is completely decoupled from hardware, making it easy to test and deploy in the cloud. On the other side of the deployment spectrum, you can also try out the sizing tool for Red Hat Storage One, our single-SKU, pre-configured system announced last week. Stop by one of four Storage pods on the expo floor for demos and conversations with Storage experts.
  5. The proof of the pudding: Stop by Thursday’s keynote with CTO Chris Wright and live demos by Burr Sutter and team featuring container-native storage baked into Red Hat platforms such as OpenShift. UnStorage is as invisible as it is pervasive. Modern enterprises demand that storage be fully integrated into compute platforms for easier management and scale. With container-native storage surpassing 150 customers in the last year alone, learn how customers such as Schiphol, FICO, and Macquarie Bank are building next-generation hybrid clouds with Red Hat technologies.

We’re not all-work-all-the-time at Red Hat Storage, though. Join us at the community happy hour or the hybrid cloud infrastructure party on Tuesday to blow off some steam during a long week. Our social media strategist, Colleen Corrice, is running a way cool Twitter contest: All you have to do is tweet a picture from a Storage session or booth to @RedHatStorage with the hashtag #UnStorage to receive a T-shirt and be included in a drawing for a personal planetarium.

Finally, check out this infographic on all things UnStorage @ Red Hat Summit. Please check back for a daily blog through this week. We hope to see you at Red Hat Summit 2018.

Introducing Red Hat Storage One

More than a year ago, our Storage Architecture team set out to answer the question of how to overcome the last barriers to software-defined storage (SDS) adoption. We know from thousands of test cycles and hundreds of hours of data analysis that a properly deployed Gluster or Ceph system can easily compete with—and often surpass—the feature and performance capabilities of any proprietary storage appliance, usually at a fraction of the cost. We have many customer success stories to back up these claims with real-world deployments for enterprise workloads. However, one piece of feedback is consistent and clear: Not every customer is willing or prepared to commit the resources necessary to architect a system to the best standards for their use case. The barrier to entry is simply higher than for a comparable proprietary appliance that is sold in units of storage with specific workload-based performance metrics. The often-lauded flexibility of SDS is, in these cases, its Achilles’ heel.

Continue reading “Introducing Red Hat Storage One”

Container-native storage 3.9: Enhanced day-2 management and unified installation within OpenShift Container Platform

By Annette Clewett, Humble Chirammal, Daniel Messer, and Sudhir Prasad

With today’s release of Red Hat OpenShift Container Platform 3.9, you now have the convenience of deploying Container-Native Storage (CNS) 3.9, built on Red Hat Gluster Storage, as part of the normal OpenShift deployment process in a single step. At the same time, major improvements in ease of operation give you the ability to monitor provisioned storage consumption, expand persistent volume (PV) capacity without downtime, and use a more intuitive naming convention for persistent volumes.
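
As a rough illustration of the no-downtime PV expansion workflow, here’s a hedged sketch using the Kubernetes Python client. It assumes the backing StorageClass was created with allowVolumeExpansion enabled and that a claim named app-data already exists; both names are hypothetical.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Grow the claim from its current size to 20Gi; the StorageClass
    # backing it must have allowVolumeExpansion: true.
    patch = {"spec": {"resources": {"requests": {"storage": "20Gi"}}}}
    v1.patch_namespaced_persistent_volume_claim(
        name="app-data", namespace="demo", body=patch
    )

The resize happens online: pods using the claim keep running while the underlying Gluster volume grows.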

Continue reading “Container-native storage 3.9: Enhanced day-2 management and unified installation within OpenShift Container Platform”

Product of the Year finalist

By Daniel Gilfix, Red Hat Storage

For the second year in a row, Red Hat Ceph Storage has been named a finalist in Storage Magazine and SearchStorage’s 2017 Products of the Year competition. Whereas in 2016 the honor was bestowed on what was arguably the most important product release since Ceph came aboard the Red Hat ship, this year’s candidate was Red Hat Ceph Storage 2.3, a point release. This means a lot to us, but as a reader (perhaps a current or prospective customer), why should you care?

An excellent question, since normally we don’t like to boast. Our focus here at Red Hat is on the needs, experiences, and ultimate satisfaction of those who use our solutions. And given the evolution of Red Hat Ceph Storage since its acquisition from Inktank, a storage vendor, by Red Hat, an IT vendor, one would hope we’re making progress.

Source: Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Conflict_Resolution_in_Human_Evolution.jpg

The fact that Red Hat Ceph Storage 2.3 was recognized as among those reflecting the latest trends in flash, cloud, and container technologies is a good sign that this is true. More important validation, however, comes from customers like Produban and UK Cloud, who are incorporating the product into broad Red Hat solutions. It also comes from those like Monash University and CLIMB, who can appreciate improvements to versatility, connectivity, and flexibility, like the NFS gateway to an S3-compatible object interface, compatibility testing with the Hadoop S3A plugin, and a containerized version of the product.

Even more uplifting from a user perspective today is the fact that v2.3 has already been superseded by Red Hat Ceph Storage 3, a more substantive advance into the realm of object storage that addresses key customer requirements while making adoption less challenging. For example, the product rounded out its value as a cost- and resource-saving unified storage platform with full support for file-based access (CephFS) and links to legacy storage environments through iSCSI. Containerization was advanced to include CSDs, enabling nearly 25% hardware deployment savings and more predictable performance through the co-location of storage daemons. And we added a snazzy new monitoring interface and additional layers of automation to make deployments more self-maintaining. According to Olivier Delachapelle, Head of Data Center Category Management EMEIA at Fujitsu, “Red Hat Ceph Storage 3 is probably the most advanced software-defined storage solution combining extreme scalability, inherent disaster resilience, and significant price-capacity value.”

Snapshot of Red Hat Ceph Storage management console, top-level interface

In the end, we feel good about the public recognition, but we feel even better when our customers and partners are happy and have what they need to succeed. I encourage you to share your thoughts about where we’re on target and/or perhaps missing the boat. Ultimately, being part of an IT company means our storage solution can serve a role that was perhaps unimaginable before, and it supports our commitment to real-world deployment of the future of storage.  


The third one’s a charm

By Federico Lucifredi, Red Hat Storage


Red Hat Ceph Storage 3 is our annual major release of Red Hat Ceph Storage, and it brings great new features to customers in the areas of containers, usability, and raw technology horsepower. It includes support for CephFS, giving us a comprehensive, all-in-one storage solution in Ceph spanning block, object, and file alike. It introduces iSCSI support to provide storage to platforms like VMware ESX and Windows Server that currently lack native Ceph drivers. And we are introducing support for client-side caching with dm-cache.

On the usability front, we’re introducing new automation to manage the cluster with less user intervention (dynamic bucket sharding), a troubleshooting tool to analyze and flag invalid cluster configurations (Ceph Medic), and a new monitoring dashboard (Ceph Metrics) that brings enhanced insight into the state of the cluster.

Last, but definitely not least, containerized storage daemons (CSDs) drive a significant improvement in TCO through better hardware utilization.

Containers, containers, never enough containers!

We graduated to fully supporting our Ceph distribution running containerized in Docker application containers in June 2017 with the 2.3 release, after more than a year of open testing of tech preview images.

Red Hat Ceph Storage 3 raises the bar by introducing colocated CSDs as a supported configuration. CSDs drive a significant TCO improvement through better hardware utilization: The baseline object store cluster we recommend to new users spans 10 OSD storage nodes, 3 MON controllers, and 3 RGW S3 gateways. By allowing colocation, the smaller MON and RGW daemons can now run on the OSDs’ hardware, letting users avoid not only the capital expense of those extra servers but also the ongoing operational cost of managing them. Pricing that configuration using a popular hardware vendor, we estimate that users could see a 24% hardware cost reduction or, alternatively, add 30% more raw storage for the same initial hardware invoice.
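
The arithmetic behind that claim is easy to sketch. The per-node prices below are hypothetical placeholders, not Red Hat or vendor figures, but they show how removing the six auxiliary servers moves the needle:

    # Hypothetical per-node prices (not vendor figures), just to illustrate.
    osd_nodes, osd_price = 10, 12_000   # large OSD storage servers
    aux_nodes, aux_price = 6, 5_000     # 3 MON + 3 RGW, smaller machines

    dedicated = osd_nodes * osd_price + aux_nodes * aux_price  # 150_000
    colocated = osd_nodes * osd_price                          # 120_000
    print(f"hardware savings: {1 - colocated / dedicated:.0%}")  # -> 20%

With these made-up numbers the savings come out to 20%; the 24% figure above reflects actual pricing from a popular hardware vendor, where the auxiliary nodes account for a somewhat larger share of the invoice.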

“All nodes are storage nodes now!”

We are accomplishing this improvement by colocating any of the Ceph scale-out daemons on the storage servers, one per host. Containers reserve RAM and CPU allocations that protect both the OSD and the co-located daemon from resource starvation during rebalancing or recovery load spikes. We can currently colocate all the scale-out daemons except the new iSCSI gateway, but we expect that in the short term MON, MGR, RGW, and the newly supported MDS should take the lion’s share of these configurations.

As my marketing manager is fond of saying, all nodes are storage nodes now! Just as important, we can field a containerized deployment using the very same ceph-ansible playbooks our customers are familiar with and have come to love. Users can conveniently learn how to operate with containerized storage while still relying on the same tools—and we continue to support RPM-based deployments. So if you would rather see others cross the chasm first, that is totally okay as well: You can continue operating with RPMs and Ansible as you are accustomed to.

CephFS: now fully supported (and fully awesome)

The Ceph filesystem, CephFS for friends, is the Ceph interface that provides the abstraction of a POSIX-compliant filesystem backed by the storage of a RADOS object storage cluster. CephFS already achieved reliability and stability last year, but with this new version the MDS directory metadata service is fully scale-out, eliminating our last remaining concern about its production use. In Sage Weil’s own words, it is now fully awesome!

“CephFS is now fully awesome!” —Sage Weil

With this new version, CephFS is now fully supported by Red Hat. For details about CephFS, see the Ceph File System Guide for Red Hat Ceph Storage 3. While I am on the subject, I’d like to give a shout-out to the unsung heroes in our awesome storage documentation team: They have steadily introduced high-quality guides with every release, and our customers are surely taking notice.

iSCSI and NFS: compatibility trifecta

Earlier this year, we introduced the first version of our NFS gateway, allowing a user to mount an S3 bucket as if it were an NFS folder, for quick bulk import and export of data from the cluster—as literally every device out there speaks NFS natively. In this release, we’re enhancing the NFS gateway with support for NFS v3 alongside the existing NFS v4 support.
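
Here’s a small sketch of the dual-access pattern this enables: write an object over the S3 API with boto3, then read the same data back as a plain file through the NFS gateway mount. The endpoint, credentials, bucket, and mount path are all hypothetical.

    import boto3

    # Hypothetical RGW endpoint and credentials.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    s3.put_object(Bucket="shared", Key="report.csv", Body=b"a,b\n1,2\n")

    # The same object appears as a regular file under the NFS gateway mount,
    # e.g. after: mount -t nfs rgw.example.com:/ /mnt/rgw-nfs
    with open("/mnt/rgw-nfs/shared/report.csv") as f:
        print(f.read())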

The remaining leg of our legacy compatibility plan is iSCSI. While iSCSI is not ideally suited to a scale-out system like Ceph, the use of multipathing for failover makes the fit smoother than one would expect, as no explicit HA is needed to manage failover.

With Red Hat Ceph Storage 3, we’re bringing to GA the iSCSI gateway service that we have been previewing during the past year. While we continue to favor the LibRBD interface, as it is more featureful and delivers better performance, iSCSI makes perfect sense as a fallback to connect VMware and Windows servers to Ceph storage, and generally anywhere a native Ceph block driver is not yet available. With this initial release, we are supporting VMware ESX 6.5, Windows Server 2016, and RHV 4.x over an iSCSI interface, and you can expect to see us add more platforms to the list of supported clients next year as we increase the reach of our automated testing infrastructure.

¡Arriba, arriba! ¡Ándale, ándale!

Red Hat’s famous Performance and Scale team has revisited client-side caching tuning with the new codebase and blessed an optimized configuration for dm-cache that can now be easily set up with ceph-volume, the new up-and-coming tool that the community has slated to eventually give the aging ceph-disk a well-deserved retirement.

Making things faster is important, but equally important is insight into performance metrics. The new dashboard is well deserving of a blog post in its own right, but suffice it to say that it delivers a significant leap in performance monitoring for Ceph environments, starting with the cluster as a whole and drilling into individual metrics or individual nodes as needed to track down performance issues. Select users have been patiently testing our early builds with Luminous this summer, and their consistently positive feedback makes us confident you will love the results.

Linux monitoring has many flavors, and while we supply tools as part of the product, customers often want to integrate their existing infrastructure, whether it is Nagios alerting in very binary tones that something seems to be wrong, or another tool. For this reason, we joined forces with our partners at Datadog to introduce a joint configuration for SaaS monitoring of Ceph using Datadog’s impressive tools.

Get the stats

More than 30 features land in this release alongside our rebasing of the enterprise product to the Luminous codebase. These map to almost 500 bugs in our downstream tracking system and hundreds more upstream in the Luminous 12.2.1 release we started from. I’d like to briefly call attention to about 20 of them that our very dedicated global support team prioritized as the most impactful way to further smooth out the experience of new users and move forward on our march toward making Ceph ever more enterprise-ready and easy to use. This is our biggest release yet, and its timely delivery 3 months after the upstream freeze is an impressive achievement for our Development and Quality Assurance engineering teams.

As always, those of you with an insatiable thirst for detail should read the release notes next—and feel free to ping me on Twitter if you have any questions!

The third one’s a charm

Better economics through improved hardware utilization; great value for customers in the form of new access modes in file, iSCSI, and NFS compatibility; and improved usability and across-the-board technological advancement: these are the themes we tried to hit with this release. I think we delivered… but we aren’t done yet. We plan to send more stuff your way this year! Stay tuned.

But if you can’t wait to hear more about the new object storage features in Red Hat Ceph Storage 3, read this blog post by Uday Boppana.

The rise of object storage in the modern datacenter

Red Hat Ceph Storage 3 greatly advances object storage capabilities

By Uday Boppana, Red Hat Storage

From speaking with customers across industry verticals and geographies, Red Hat is finding that object storage is increasingly top of mind as enterprises address growing data volumes, regulatory pressure, and threats to data security.

Take financial services firms, for instance. Their IT teams fight multiple fires trying to satisfy internal and external stakeholders in a fast-moving industry. They are expected to provide lines of business with cost-effective, cloud-like services, from development frameworks and storage backup to archive and sync-and-share, while also bridging traditional in-house banking applications with a plethora of cloud-native applications, and deploying all of them on a single storage platform to reduce costs. Satisfying these challenging goals requires a scalable, on-premises storage platform that can also be extended across hybrid cloud deployments, something traditional file or block storage solutions cannot deliver.

Red Hat Ceph Storage 3 offers a unified, petabyte-scale solution that addresses these pain points. The newest release, announced late last year, adds much in terms of scalability, security features, ease of management, and lowered costs. It also enhances the multiprotocol support for file and object storage interoperability and migration that was introduced in Red Hat Ceph Storage 2.

Cost-effective private cloud backups

Red Hat Ceph Storage 3 helps customers modernize their backup infrastructures and reduce the cost of data backups for private cloud infrastructure through certified interoperability with Veritas NetBackup and Rubrik Cloud Data Management. These software offerings back up data to a Red Hat Ceph Storage cluster using its AWS-compatible S3 API. Details on supported versions are listed in the product’s compatibility guide.
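
Because the backup applications talk plain S3, anything that speaks the AWS API can target the cluster the same way. A hedged sketch with boto3 (the endpoint, credentials, bucket, and archive path are placeholders):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    s3.create_bucket(Bucket="nightly-backups")

    # upload_file transparently switches to multipart upload for large archives
    s3.upload_file("/var/backups/db-2018-05-09.tar.gz",
                   "nightly-backups", "db/db-2018-05-09.tar.gz")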

Better scale for file workloads, at lower cost

The benefits of object storage as a data storage platform are well documented, and broader protocol support extends its reach and applicability to a wider variety of workloads. Expanding the RGW NFS gateway to support NFS v3 in addition to v4 means Red Hat Ceph Storage users can now gradually transition most NFS file workloads to a modern, scalable object storage platform without disruption, completing the migration only when their applications, tools, and management processes are ready.

Increased security for data assets

Red Hat Ceph Storage 3 provides greater security for at-rest data and enables permanent data deletion. Object-granular encryption is supported for data at rest using user-provided keys. This functionality can also be used to permanently delete an object: encrypt the object, then shred the key before deleting the object.
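
In S3 terms this is server-side encryption with customer-provided keys (SSE-C). A minimal boto3 sketch, again with a placeholder endpoint and credentials:

    import os
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    key = os.urandom(32)  # 256-bit key supplied by the user; RGW never stores it

    s3.put_object(Bucket="vault", Key="record.bin", Body=b"sensitive payload",
                  SSECustomerAlgorithm="AES256", SSECustomerKey=key)

    # Reading back requires the same key. Shredding the key renders the
    # ciphertext permanently unreadable: crypto-deletion, as described above.
    obj = s3.get_object(Bucket="vault", Key="record.bin",
                        SSECustomerAlgorithm="AES256", SSECustomerKey=key)
    print(obj["Body"].read())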

Reduced storage costs by eliminating redundant object data

Inline, object-granular compression eliminates redundant data within an object before it is saved to disk, reducing storage costs. The compression happens inline as data is written to RGW from the hosts.

Simpler data lifecycle management

Red Hat Ceph Storage 3 eases storage and data management through a policy-based S3 API framework for bucket and object lifecycle management. The Red Hat Ceph Storage object access API is fully compatible with the AWS S3 API and now adds support for the S3 bucket lifecycle API for object and version expiration.
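
For example, a rule that expires objects 90 days after creation can be attached to a bucket through the standard S3 lifecycle call. A sketch with boto3 (the bucket name and prefix are hypothetical):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.put_bucket_lifecycle_configuration(
        Bucket="logs",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "daily/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},  # delete 90 days after creation
            }]
        },
    )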

The web-scale modern datacenter

In sum, as the modern datacenter relies more on web-scale infrastructure, object storage can help organizations realize much of the value of their other digital transformation efforts in the development and application space. As the hybrid cloud becomes a mainstream reality, standardizing on a scalable object storage solution that can span on-prem, private, and public clouds becomes ever more imperative to the success of the modern enterprise.

For more on the new and exciting features in Red Hat Ceph Storage 3, check out this blog post by Federico Lucifredi in our “Architects’ Corner.”

Why traditional storage doesn’t cut it in the new world of containers

By Steve Bohac, Red Hat Storage

Persistent storage for containers is a hot topic these days. While containers do a great job of packaging application logic, they do not offer a built-in solution for storing application data across the container lifecycle. Ephemeral (or local) storage is not enough: Stateful applications require that container data be available beyond the life of the containers. They also require that the underlying storage layer provide all the enterprise features available to applications deployed in, say, virtualized environments.

Another important consideration: Because many view containers as the next step in the evolution of server virtualization, it’s critical to provide persistent storage options to administrators, because hypervisors have always allowed for persistent storage in one form or another.

One approach is to use traditional storage appliances that support legacy applications. This is a natural inclination and assumption, but… the wrong one.

Traditional storage appliances are based on decades-old architectures at this point and were not made for a container-based application world. These approaches also fail to offer the portability you need for your apps in today’s hybrid cloud world. Some of these traditional storage vendors offer additional software for your containers, which can be used as a go-between for these storage appliances and your container orchestration, but this approach still falls short as it is undermined by those same storage appliance limitations. This approach would also mean that storage for the container is provisioned separately from your container orchestration layer.

There’s a better way! Storage containers, which package storage software, co-reside with compute containers and serve storage to them from hosts that have local or direct-attached storage. Storage containers are deployed and provisioned using the same orchestration layer you’ve adopted in house (like Red Hat OpenShift Container Platform, which is Kubernetes based), just like compute containers. In this deployment scenario, storage services are provided by containerized storage software (like Red Hat Container-Native Storage based on Red Hat Gluster Storage) that pools and exposes storage from local hosts or direct-attached storage to containerized applications.
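
The payoff is that applications consume that storage through ordinary Kubernetes objects, with no appliance-specific tooling in the path. As a hedged sketch with the Kubernetes Python client (the names, image, and namespace are hypothetical), a pod simply mounts a claim served by the storage containers:

    from kubernetes import client, config

    config.load_kube_config()  # inside a cluster, use load_incluster_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="web",
                image="registry.example.com/web:latest",  # hypothetical image
                volume_mounts=[client.V1VolumeMount(
                    name="data", mount_path="/var/lib/app")],
            )],
            volumes=[client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="app-data"),  # claim served by storage containers
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)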

Red Hat Container-Native Storage for Red Hat OpenShift Container Platform is built with Red Hat Gluster Storage and is flexible, cost-effective, and developer-friendly storage for containers. It helps organizations standardize storage across multiple environments and easily integrates with Red Hat OpenShift to deliver a persistent storage layer for containerized applications that require long-term storage. Enterprises can benefit from a simple, integrated solution including the container platform, registry, application development environment, and storage–all in one, supported by a single vendor.

To hear from one customer who implemented a Red Hat Container-Native Storage solution, please check out our Brinker International case study. Also, take our solution for a free test drive and see for yourself.

If you, like us, are attending KubeCon and CloudNativeCon in Austin, Texas, this week, we’d love to take a minute to meet with you and talk about Red Hat Container-Native Storage. Stop by the Red Hat booth (D1, near the Hub Lounge) or attend one of our sessions devoted to container storage to learn more about running Red Hat Container-Native Storage for your container-based application platform. Also, our own Steve Watt from the Red Hat Office of the CTO will be speaking from the show on theCUBE tomorrow, December 7. If you’re not able to make it to Austin, please find us at a roadshow event coming to a city near you.

Red Hat Ceph Storage 3: Featuring CephFS and iSCSI support and containerized storage daemons

By Douglas Fuller, Red Hat Ceph Storage Engineering

If you missed last week’s huge announcement about Red Hat Ceph Storage 3, you can find details here. To quickly get you up to speed, though, the big news in this release is around enabling a large variety of storage needs in OpenStack, easing migration from legacy storage platforms, and deploying enterprise storage in Linux containers.

CephFS is here!

One of the highlights of the Red Hat Ceph Storage 3 announcement was production support for CephFS. This delivers a POSIX-compliant shared file system layered on top of massively scalable object storage. Client support is available in the Red Hat Enterprise Linux 7.4 kernel and via FUSE. CephFS leverages Ceph’s RADOS object store for data scalability as well as a natively clustered metadata server (MDS) for metadata scalability, high availability, and performance.
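
Beyond kernel and FUSE mounts, applications can also talk to CephFS directly through the libcephfs bindings. A minimal sketch with the Python binding (python3-cephfs), assuming a standard /etc/ceph/ceph.conf and a client keyring on the host:

    import os
    import cephfs  # python3-cephfs, the libcephfs binding

    fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
    fs.mount()  # attach to the CephFS root via the MDS cluster

    fd = fs.open("/hello.txt", os.O_CREAT | os.O_WRONLY, 0o644)
    fs.write(fd, b"hello from libcephfs\n", 0)
    fs.close(fd)

    print(fs.stat("/hello.txt").st_size)  # -> 21
    fs.shutdown()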

One cluster to do it all

Red Hat Ceph Storage uses the CRUSH structured data distribution scheme, enabling users to deploy a highly scalable and reliable file system using industry-standard, commodity hardware. Expensive, custom-engineered RAID controllers are no longer necessary. Expanding a CephFS deployment is as easy as expanding a Ceph cluster—CRUSH smoothly manages cluster changes, including expansions with new or different hardware.

Have a hybrid storage cluster with SSDs, HDDs, and NVMe devices? CRUSH can divide your storage workload across any and all devices for maximum performance where you need it and maximum capacity at commodity cost where you don’t. This allows disparate workloads—such as scratch, home, or archive data—to coexist in the same cluster using different or overlapping hardware as needed.

In addition, CephFS’s MDS may be dynamically provisioned and resized online to maximize performance and scalability. For metadata-intensive workloads, the Ceph MDS cluster can repartition its workload, either statically or dynamically, online in response to demand. It’s also fault-tolerant by design, with no need for passive standby or expensive and complex “Shoot the Other Node in the Head” (STONITH) configurations to maintain constant availability.

Take the “cluster” out of cluster management

Red Hat Ceph Storage 3 deploys with Red Hat Ansible Automation, integrating smoothly into existing cluster management environments. Now you can deploy and manage both compute and storage using Ansible playbooks.

New in Red Hat Ceph Storage 3 is a REST API for cluster data and management tools. Monitoring tools are available out of the box to provide detailed health and performance data across your Ceph cluster.
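
Any HTTP client can consume that API. Here’s a hedged sketch with Python’s requests against the ceph-mgr restful module; the host, port, credentials, and the /osd endpoint path are assumptions based on the module’s defaults, so check your deployment’s documentation:

    import requests

    base = "https://ceph-mgr.example.com:8003"  # hypothetical mgr host/port
    auth = ("admin", "API_KEY")  # key created via `ceph restful create-key admin`

    # List OSDs and their states; verify=False only because the module
    # ships with a self-signed certificate by default.
    resp = requests.get(f"{base}/osd", auth=auth, verify=False)
    resp.raise_for_status()
    print(resp.json())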

A million uses and counting

Red Hat Ceph Storage offers great flexibility to customers. It can be deployed across a wide variety of storage applications, allowing enterprises to manage one unified system supporting block, file, and object interfaces. With the added flexibility of iSCSI support, users from heterogeneous environments—such as VMware and Windows—can leverage the power of the storage platform.

This flexibility is extremely attractive to organizations such as academic research institutions, many of which are participating in the Supercomputing 2017 (SC17) conference in Denver this week. Their IT departments have the onerous task of supporting complicated workflows, often on shoestring budgets.

To learn more, check out this additional blog post, and join us at the Red Hat SC17 booth (1763) for presentations, swag, and more.