This new release, Red Hat OpenShift Container Storage 3.10, is the follow-on to Container-Native Storage 3.9 and introduces three important features for container-based storage with OpenShift: (1) arbiter volume support enabling high availability with efficient storage utilization and better performance, (2) enhanced storage monitoring and configuration visibility using the OpenShift Prometheus framework, and (3) block-backed persistent volumes (PVs) now supported for general application workloads in addition to supporting OCP infrastructure workloads.
If you haven’t already bookmarked our Red Hat Storage blog, now would be a great time! Over the coming weeks, we will be publishing deeper discussions on OpenShift Container Storage. In the meantime, though, for a more thorough understanding of OpenShift Container Storage, check out these recent technical blogs describing in depth the value of our approach to storage for containers:
By Sayandeb Saha, Director, Product Management, Storage Business Unit
In our last blog post in this series, we talked about how the Container-Native Storage (CNS) offering for OpenShift Container Platform from Red Hat has seen increased customer adoption in on-premise environments by peacefully coexisting with classic storage arrays that are not deeply integrated with OpenShift. In this post, we’ll explore why many customers are deploying our CNS offering in the three big public clouds—AWS, Microsoft Azure, and Google Cloud Platform—on top of the clouds’ native storage offerings, despite the good integration Kubernetes already has with those offerings. Let’s examine the problems and constraints that drive this choice and describe how CNS addresses them.
Slow attach/detach—poor availability
The first issue stems from the fact that the native block storage offerings in the public cloud (EBS in AWS, Data Disk in Azure, Persistent Disk in Google Cloud) were designed and engineered to support virtual machine (VM) workloads. In such workloads, attaching and subsequently detaching a block device from a machine instance is an infrequent occurrence, because these workloads are far less dynamic than the Platform-as-a-Service (PaaS) and DevOps workloads that frequently run on OpenShift, powering dynamic build-and-deploy CI/CD pipelines and similar workflows.
Some of our customers found that attach and detach times for these block devices, when accessed directly from OpenShift workloads using the native Kubernetes storage provisioners, are unacceptable: They lead to poor startup times for pods (slow attach) and limited or no high availability on failover, which usually triggers a sequence of a detach operation, an attach operation, and a subsequent mount operation.
Each of these operations usually triggers a number of API calls specific to the public cloud provider, and any of these intermediate steps can fail, causing users to lose access to their persistent volumes (PVs) for an extended period. Overlaying Red Hat’s CNS offering as a storage management fabric to aggregate, pool, and serve out PVs quickly, without depending on the status of individual cloud-native block devices (e.g., EBS volumes or Azure Data Disks), provides major relief: It isolates the lifecycle of those block devices from that of the application pods, which allocate and deallocate PVs dynamically as application teams work on OpenShift.
Block device limits per compute instance
The second issue some of our customers run into is that there is a limit to the number of block devices that can be attached to a machine instance in the various public cloud environments.
OpenShift supports a maximum of 250 containers per host, but the maximum number of block devices that can be attached to a machine instance is far lower (for example, a maximum of 40 EBS volumes per EC2 instance). Even though it is unusual to have a 1:1 mapping between containers and storage devices, this low maximum can lead to a lot of unintended behavior, in addition to driving up total cost of ownership (more hosts are needed than would otherwise be necessary).
For example, in a failover scenario, during the detach, attach, and mount sequence, the attach API call might fail because the maximum number of devices is already attached to the EC2 instance where the attempt is being made, causing a glitch or outage. Overlaying Red Hat’s CNS offering as a storage management fabric on cloud-based block devices mitigates the impact of these per-instance limits, because storage is served out of a pool that is not bound by any single host’s device limit. Storage can continue to be served until the entire pool is exhausted, at which point the pool can be expanded by adding new hosts and devices.
Cross-AZ storage availability
The third issue arises from the fact that cloud block storage devices are usually accessible only within a specific Availability Zone (AZ) in AWS or Availability Set in Azure. AZs act as failure domains in public clouds.
Most customers who deploy OpenShift in the public cloud span more than one AZ for high availability, so that when one AZ goes offline, the OpenShift cluster remains operational. Using block devices constrained to a single AZ to provide storage services to OpenShift workloads can defeat that purpose, because containers must then be scheduled on hosts in the same AZ and customers cannot leverage the full power of Kubernetes orchestration. This configuration can also lead to an outage when an AZ goes offline.
Our customers use CNS to mitigate this problem so that even when there is an AZ failure, a three-way replicated cross-AZ storage service (CNS) is available for containerized applications to avoid downtimes. This also enables Kubernetes to schedule pods across AZs (instead of within an AZ), thereby preserving the spirit of the original fault-tolerant OpenShift deployment architecture that spans multiple AZs.
Cost-effective storage consolidation
Storage provided by CNS is efficiently allocated and offers full performance from the first gigabyte provisioned, thereby enabling storage consolidation. For example, consider six MySQL database instances, each in need of 25 GiB of storage capacity and up to 1500 IOPS at peak load. With EBS in AWS, one would create six EBS volumes, each with at least 500 GiB capacity out of the gp2 (General Purpose SSD) EBS tier, because gp2 performance is tied to provisioned capacity (3 IOPS per GiB), so 500 GiB are needed to reach 1500 IOPS.
With CNS, one can achieve the same service level using only three EBS volumes of 500 GiB each from the gp2 tier, pooled and replicated with GlusterFS. One would then create six 25 GiB volumes from that pool and provide storage to many databases with high IOPS performance, provided they don’t all peak at the same time. Doing so roughly halves the EBS cost and still leaves capacity to spare for other services. Read IOPS performance is likely even higher, because with three-way replication in CNS, reads are distributed across three 1500-IOPS gp2 EBS volumes.
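To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the 3 IOPS-per-GiB gp2 baseline is the published rate, while the per-GiB price is purely illustrative.

```python
# Back-of-the-envelope comparison: one gp2 EBS volume per database vs. a
# CNS/GlusterFS pool carved into small PVs. Assumes gp2 delivers 3 IOPS per
# provisioned GiB; the per-GiB price below is illustrative, not a quote.

GP2_IOPS_PER_GIB = 3
PRICE_PER_GIB_MONTH = 0.10           # illustrative placeholder price

databases = 6
capacity_per_db_gib = 25
iops_per_db = 1500

# Option 1: size each gp2 volume for IOPS rather than capacity.
gib_per_volume = iops_per_db / GP2_IOPS_PER_GIB        # 500 GiB
direct_gib = databases * gib_per_volume                # 3000 GiB provisioned
direct_cost = direct_gib * PRICE_PER_GIB_MONTH

# Option 2: three 500 GiB gp2 volumes pooled by CNS with three-way replication,
# then six 25 GiB PVs carved out of the pool.
pool_gib = 3 * gib_per_volume                          # 1500 GiB provisioned
pool_cost = pool_gib * PRICE_PER_GIB_MONTH
usable_gib = pool_gib / 3                              # replica 3 -> 500 GiB usable
spare_gib = usable_gib - databases * capacity_per_db_gib

print(f"Direct EBS: {direct_gib:.0f} GiB provisioned, ~${direct_cost:.0f}/month")
print(f"CNS pool:   {pool_gib:.0f} GiB provisioned, ~${pool_cost:.0f}/month, "
      f"{spare_gib:.0f} GiB usable capacity left over")
```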
Check us out for more
As you can see, there’s a good case to be made for using CNS in the various public clouds for a multitude of technical reasons our customers care about, besides the fact that Red Hat CNS provides a consistent storage consumption and management experience across hybrid and multiple clouds (see the following figure).
In addition to the application portability that OpenShift already provides across hybrid and multiple clouds, we’re working on multi-cloud replication features that would enable CNS to effectively become the data fabric that enables data portability—another good reason to select and stay with CNS. Stay tuned for more information on that!
For hands-on experience now combining OpenShift and CNS, check out our test drive, a free, in-browser lab experience that walks you through using both.
During the past year of customer adoption of CNS, we’ve observed some key trends that are unique to OpenShift/Kubernetes storage and that we’ll highlight in a series of blogs. This blog series will also include business and technical solutions that have worked for our customers.
In this blog post, we examine a trend where customers have adopted CNS as a storage management fabric that sits in between the OpenShift Container Platform and their classic storage gear. This particular adoption pattern continues to have a really high uptake, and there are sound business and technical reasons for doing this, which we’ll explore here.
First the Solution (The What): We’ve seen a lot of customers deploying CNS to serve out storage from their existing storage arrays/SANs and other traditional storage, as illustrated in the following graphic. In this scenario, block devices from existing storage arrays are served out to OpenShift by our CNS software running in VMs or containers/pods. The storage for the VMs that run OpenShift is still served by the arrays.
Now the Why: Initially, it seemed backward that customers would do this; after all, software-defined storage solutions like CNS are meant to run on x86 bare metal (on premise) or in the public cloud. Further investigation, however, revealed some interesting discoveries.
While OpenShift users and ops teams consume infrastructure, they typically do not manage infrastructure. In on-premise environments, OpenShift ops teams are highly dependent on other infrastructure teams for virtualization, storage, and operating systems for the infrastructure on which they run OpenShift. Similarly, in public clouds they consume the native compute and storage infrastructure available in these clouds.
As a consequence, they are highly dependent on storage infrastructure that is already in place. Typically, it’s very difficult to justify purchasing storage servers for a new use case (OpenShift storage, in this case) when storage was already procured from a traditional storage vendor a year or more ago, even though that traditional storage was neither designed for nor intended to be used with containers and the storage budget has mostly been spent. This has driven OpenShift operations teams to adopt CNS as a storage management fabric that sits between their OpenShift Container Platform deployment and their existing storage arrays. The inherent flexibility of Red Hat Gluster Storage, here in the form of CNS, is what makes this possible: It can aggregate and pool block devices attached to VMs and serve them out to OpenShift workloads. OpenShift operations teams thus get the best of both worlds. They can repurpose the storage array that is already in place on premise while actually consuming CNS, which operates as a management fabric offering the latest features, functionality, and manageability along with deep integration with the OpenShift platform.
In addition to business reasons, there are also various technical reasons that these OpenShift operations teams are adopting CNS. These include, but are not limited to:
Lack of deep integration of their existing storage arrays with OpenShift Container Platform
Even if their traditional storage array has rudimentary integration with OpenShift, it very likely has limited feature support (for example, no dynamic provisioning), which renders it unusable with many OpenShift workloads
The roadmap of their storage array vendor may not match their current (or future) OpenShift/Kubernetes storage feature needs, such as the lack of a persistent volume (PV) resize feature
The need for a fully featured OpenShift storage solution for OpenShift workloads as well as the OpenShift infrastructure itself. Many existing storage platforms can support one or the other, but not both. For instance, a storage array serving out Fibre Channel LUNs (plain block storage) can’t back an OpenShift registry, which needs shared storage access, usually provided by a file or object storage back end.
They seek a consistent storage consumption and management experience across hybrid and multiple clouds. Once they learn to implement and manage CNS from Red Hat in one environment, it’s repeatable in all other environments. They can’t use their storage array in the public cloud.
Using CNS from Red Hat is a win for OpenShift ops teams. They can get started with a state-of-the-art storage back end for OpenShift apps and infrastructure without needing to acquire new infrastructure for OpenShift Storage right away. They have the option to move to x86-based storage servers during the following budget cycle as they grow their OpenShift footprint and onboard more apps and customers to it. The experience with CNS serves them well if they choose to implement OpenShift and CNS in other environments like AWS, Azure, and Google Cloud.
Want to learn more?
For hands-on experience combining OpenShift and CNS, check out our test drive, a free, in-browser lab experience that walks you through using both.
By Will McGrath, Product Marketing, Red Hat Storage
Wowzer! Red Hat Summit 2018 was a blur of activity. The quality and quantity of conversations with customers, partners, industry analysts, community members, and Red Hatters was unbelievable. This event has grown steadily the past few years to over 7,000 registrants this year. From a Storage perspective, this was the largest presence ever in terms of content and customer interaction.
For Storage, we made two key announcements during Red Hat Summit. The first was around Red Hat Storage One, a pre-configured offering engineered with our server partners, announced last week. If you didn’t catch Dustin Black’s blog post that goes into the detail of the solution, check it out.
The second announcement, which occurred this week, highlighted the momentum in building a storage offering that provides a seamless developer experience and unified orchestration for containers. There are now more than 150 customers worldwide that have adopted Red Hat’s container-native storage solutions to enable their transition to the hybrid cloud, including Vorwerk and innovation award winner Lufthansa Technik.
We featured a number of customer success stories, including Massachusetts Open Cloud, which worked with Boston Children’s Hospital to redefine medical-image processing using Red Hat Ceph Storage.
If you’d like to keep up on the containers news, check out our blog post from Tuesday and this week’s news around CoreOS integration into Red Hat OpenShift. You might also like to check out the news around customers deploying OpenShift on Red Hat infrastructure—including OpenStack—through container-based application development and tightly integrated cloud technologies.
Storage expertise on display
On the morning of the first day of Summit, Burr Sutter and team demoed a number of technologies, including Red Hat Storage, to showcase application portability across the open hybrid cloud. This morning, Erin Boyd and team ran some way cool live demos that showed the power of microservices and functions with OpenShift, Storage, OpenWhisk, Tensorflow, and a host of technologies across the hybrid cloud.
For those who had the opportunity to attend any of the 20+ Red Hat Summit storage sessions, you were able to learn how our Red Hat Gluster Storage and Red Hat Ceph Storage products appeal to both traditional and modern users. The roadmap presentations by both Neil Levine (Ceph) and Sayan Saha (Gluster and container-native storage) were very popular. Sage Weil, the creator of Ceph, gave a standing-room-only talk on the future of storage. Some of these storage sessions will be available on the Red Hat Summit YouTube channel in the coming weeks.
We also had several partners demoing their combined solutions with Red Hat Storage, including Intel, Mellanox, Penguin Computing, QCT, and Supermicro. Commvault had a guest appearance during Sean Murphy’s Red Hat Hyperconverged Infrastructure talk, explaining what led them to decide to include it in their HyperScale Appliances and Software offerings.
This year, we conducted an advanced Ceph users’ group meeting the day before the conference, with marquee customers participating in deep-dive discussions with product and community leaders. During the conference, the storage lockers were a hit, and we had a great presence on the show floor, including the community booths. Our breakfast, with over a hundred people registered, was well attended and featured a panel of customers and partners.
Continue the conversation
During his appearance on The Cube by Silicon Angle, Red Hat Storage VP/GM Ranga Rangachari talked about his point of view on “UnStorage.” This idea, triggered by his original blog post on the subject, made quite a few waves at the event. Customers and analysts are responding positively to the idea of a new approach to storage in the age of hybrid cloud, hyperconvergence, and containers. Today is the last day to win prizes by tweeting @RedHatStorage with the hashtag #UnStorage.
If you missed us in San Francisco, we’ll be at OpenStack Summit in Vancouver from May 21-24. Red Hat is a headline sponsor at Booth A19. If you’re attending, come check out our OpenStack and Ceph demo, and check back on our blog page for news from the event. We’ll also be hosting the “Craft Your Cloud” event on Tuesday, May 22, from 6-9 pm at Steamworks in Vancouver. For more information and to register, click here. For more fun and networking opportunities, join the Ceph and RDO communities for a happy hour on May 23 from 6-8 pm at The Portside Pub in Vancouver. For more information and to register for that event, click here.
On to Red Hat Summit 2019
You can check out the videos and keynotes from Red Hat Summit 2018 on demand. Next year, Red Hat Summit is being held in Boston again—it’s been rotating between San Francisco and Boston—so if you couldn’t attend San Francisco this year we urge you to plan to visit us in Boston next year. We hope you enjoyed our coverage of Red Hat Summit 2018, and hope to see you in 2019.
By Daniel Gilfix, Product Marketing, Red Hat Storage
Once again, an independent analytic news source has confirmed what many of you already know: that Red Hat Ceph Storage stands alone in its commitment to technical excellence for the customers it serves. In the latest IT Brand Pulse survey covering Networking & Storage products, IT professionals from around the world selected Red Hat Ceph Storage as the “Scale-out Object Storage Software” leader in all categories, including price, performance, reliability, service and support, and innovation. The honors follow a pattern of recognition from IT Brand Pulse, which bestowed the leadership tag on Red Hat Ceph Storage in 2017, 2015, and 2014 and named Red Hat the “Service and Support” leader in 2016.
The report documented the results of the independent annual survey, conducted in March 2018, which polled IT professionals on their perception of vendor excellence in eleven different categories. Red Hat Ceph Storage earned ratings that were visibly head and shoulders above the competition, including more than a 2X differential over Scality and VMware.
It feels like just yesterday!
This latest third-party validation comes on the heels of Red Hat Ceph Storage being named a finalist in Storage Magazine and SearchStorage’s 2017 Products of the Year competition in late January 2018. There, the evaluation was based on Red Hat Ceph Storage v2.3, a release that made great strides in the areas of connectivity and containerization, including an NFS gateway to an S3-compatible object interface and compatibility with the Hadoop S3A plugin.
Red Hat Ceph Storage 3 carries the baton
IT professionals voting in this year’s IT Brand Pulse survey were able to consider newer features in the important Red Hat Ceph Storage 3 release, which addressed a series of major customer challenges in object storage and beyond. We delivered full support for file-based access via CephFS, expanded ties to legacy storage environments through iSCSI, pumped fuel into our containerization options with CSDs for 25% hardware deployment savings, and introduced an easier monitoring interface and additional layers of automation for more self-maintaining deployments.
See you at Red Hat Summit!
As usual, the real testament to our success is the continued satisfaction of our customer base, the ones who are increasingly choosing Red Hat Ceph Storage for modern use cases like AI and ML, rich media, data lakes, hybrid cloud infrastructure based on OpenStack, and traditional backup and restore.
We look forward to discussing deployment options and whether Red Hat Ceph Storage might be right for you this week at Red Hat Summit—there’s still so much more to go! Catch us at one of the following sessions in Moscone West:
Based on Red Hat Gluster Storage, container-native storage from Red Hat offers these organizations scalable, persistent storage for containers across hybrid clouds with increased application portability. Tightly integrated with Red Hat OpenShift Container Platform, container-native storage from Red Hat can be used to persist not only application data but data for logging, metrics, and the container registry. The deep integration with Red Hat OpenShift Container Platform helps developers easily provision and manage elastic storage for applications and offers a single point of support. Customers use container-native storage to persist data for a variety of applications, including SQL and NoSQL databases, CI/CD tools, web serving, and messaging applications.
Organizations using container-native storage from Red Hat can benefit from simplified management, rapid deployment, and a single point of support. The versatility of container-native storage from Red Hat can enable customers to run cloud-native applications in containers, on bare metal, in virtualized environments, or in the public cloud.
For those of you attending Red Hat Summit this week, as always we know you love breakout sessions to learn more about Red Hat solutions—and we have a bunch covering container-native storage from Red Hat! Don’t forget to get your raffle tickets at each of the storage sessions you attend. Here’s what the lineup of container-native storage sessions from Red Hat looks like:
Welcome to Red Hat Summit 2018 in San Francisco! The Storage team has been hard at work to make this the best possible showcase of technology and customers—and have fun while doing it. This year our presence is built around the theme: UnStorage for the modern enterprise.
What is UnStorage?
Today’s users need their data so accessible, so scalable, so versatile that the old concept of “storing” it seems counterintuitive. Perhaps a better way of describing the needs of the modern enterprise is UnStorage, as outlined in this blog post by Red Hat Storage VP and GM, Ranga Rangachari.
Five ways to experience UnStorage at Red Hat Summit
Content is king: We have 24 sessions packed with storage knowledge, best practices, and success stories. Over 21 Red Hat Storage customers will be featured at the event, including on a panel at our breakfast (open to all attendees) on Wednesday at 7 am at the Marriott Marquis. Learn how some of the most innovative enterprises leverage the power of UnStorage to solve their scale and agility challenges.
Without hardware partners, it’s like clapping with one hand: By definition, the success of software-defined storage hinges on the strength of the hardware ecosystem. Since the storage controller software is only half the solution, it’s important to have deep engineering investment with hardware and component vendors to build rock-solid solutions for customers. With partners like Supermicro, Mellanox, Penguin Computing, Intel, Commvault, and QCT, all featured at the conference, Red Hat Storage enables greater customer choice and openness—a key tenet of UnStorage.
Explore your storage curiosity: UnStorage is all about breaking the rules to make things better. You’ll find a lot of creative ideas that are off the beaten track. Just as UnStorage is ubiquitous—it stretches across private, public, and hybrid cloud boundaries—it’s hard to miss Storage at the conference. You can find storage lockers near the expo entrance where you can drop off backpacks and charge phones while you attend sessions. Or enter to win one of two Star Wars collector edition drones by attending sessions or visiting the booth. Stop by the Storage Launch Pad to play online games, take surveys, and pick up a ton of giveaways, including two golden tickets handed out every day, which will afford you a special set of prizes.
Test drive storage: Kick the tires on UnStorage with one of three test drives for Ceph, Gluster, and OpenShift Ops. As the name suggests, software-defined storage is completely decoupled from hardware, making it easy to test and deploy in the cloud. On the other side of the deployment spectrum, you can also try out the sizing tool for Red Hat Storage One, our single SKU pre-configured system announced last week. Stop by one of four Storage pods on the expo floor for demos and conversations with Storage experts.
The proof of the pudding: Stop by Thursday’s keynote with CTO Chris Wright and live demos by Burr Sutter and team featuring container-native storage baked into Red Hat platforms such as OpenShift. UnStorage is as invisible as it is pervasive. Modern enterprises demand that storage be fully integrated into compute platforms for easier management and scale. With container-native storage surpassing 150 customers in the last year alone, learn how customers such as Schiphol, FICO, and Macquarie Bank are building next-generation hybrid clouds with Red Hat technologies.
We’re not all-work-all-the-time at Red Hat Storage, though. Join us at the community happy hour or the hybrid cloud infrastructure party on Tuesday to blow off some steam during a long week. Our social media strategist, Colleen Corrice, is running a way cool Twitter contest: All you have to do is post a picture at a Storage session or booth @RedHatStorage with the hashtag #UnStorage to receive a T-shirt and be included in a drawing for a personal planetarium.
By Annette Clewett, Humble Chirammal, Daniel Messer, and Sudhir Prasad
With today’s release of Red Hat OpenShift Container Platform 3.9, you will now have the convenience of deploying Container-Native Storage (CNS) 3.9 built on Red Hat Gluster Storage as part of the normal OpenShift deployment process in a single step. At the same time, major improvements in ease of operation have been introduced to give you the ability to monitor provisioned storage consumption, expand persistent volume (PV) capacity without downtime, and use a more intuitive naming convention for persistent volume names.
For the second year in a row, Red Hat Ceph Storage has been named a finalist in Storage Magazine and SearchStorage’s 2017 Products of the Year competition. Whereas in 2016 the honor was bestowed upon what was arguably the most important product release since Ceph came aboard the Red Hat ship, this year’s candidate was Red Hat Ceph Storage 2.3, a point release. This means a lot to us, but as a reader—perhaps a current or prospective customer—why should you care?
Excellent question, I must say, since normally we don’t like to boast. Our focus here at Red Hat is on the needs, experiences, and ultimate satisfaction of those who use our solutions. And given the evolution of Red Hat Ceph Storage from its acquisition from Inktank, the storage vendor, to Red Hat, the IT vendor, one would hope that we’re making progress.
The fact that Red Hat Ceph Storage 2.3 was recognized as among those reflecting the latest trends in flash, cloud, and container technologies is a good sign that this is true. More important validation, however, comes from customers like Produban and UK Cloud, who are incorporating the product into broad Red Hat solutions. It also comes from those like Monash University and CLIMB, who can appreciate improvements to versatility, connectivity, and flexibility, like the NFS gateway to an S3-compatible object interface, compatibility testing with the Hadoop S3A plugin, and a containerized version of the product.
Even more uplifting from a user perspective today is the fact that v2.3 has already been superseded by Red Hat Ceph Storage 3, a more substantive advance into the realm of object storage that addresses a few key customer requirements while making adoption less challenging. For example, the product rounded out its value as a cost- and resource-saving unified storage platform with full support for file-based access (CephFS) and links to legacy storage environments through iSCSI. Containerization was advanced to include CSDs, enabling nearly 25% hardware deployment savings and more predictable performance through the co-location of storage daemons. And we added a snazzy new monitoring interface and additional layers of automation to make deployments more self-maintaining. According to Olivier Delachapelle, Head of Data Center Category Management EMEIA at Fujitsu, “Red Hat Ceph Storage 3 is probably the most advanced software-defined storage solution combining extreme scalability, inherent disaster resilience, and significant price-capacity value.”
In the end, we feel good about the public recognition, but we feel even better when our customers and partners are happy and have what they need to succeed. I encourage you to share your thoughts about where we’re on target and/or perhaps missing the boat. Ultimately, being part of an IT company means our storage solution can serve a role that was perhaps unimaginable before, and it supports our commitment to real-world deployment of the future of storage.
Red Hat Ceph Storage 3 greatly advances object storage capabilities
By Uday Boppana, Red Hat Storage
From speaking with customers across industry verticals and geographies, Red Hat is finding that object storage is increasingly top of mind as enterprises address growing data volumes, regulatory pressure, and threats to data security.
Take financial services firms, for instance. Their IT teams fight multiple fires trying to appease internal and external stakeholders in a fast-moving industry. They are expected to provide lines of business with cost-effective, cloud-like services ranging from development frameworks, storage backup, and archive to sync-and-share, while also bridging traditional in-house banking applications with a plethora of cloud-native applications, and deploying all of them on a single storage platform to reduce costs. Satisfying these challenging goals requires a scalable, on-prem storage platform that can also be extended across hybrid cloud deployments, something that traditional file or block storage solutions cannot deliver.
Red Hat Ceph Storage 3 offers a unified, petabyte-scale solution that addresses these pain points. The newest release, announced late last year, adds much in terms of scalability, security features, ease of management, and lowered costs. It also enhances the multiprotocol support for file and object storage interoperability and migration that was introduced in Red Hat Ceph Storage 2.
Cost-effective private cloud backups
Red Hat Ceph Storage 3 helps customers modernize their backup infrastructures and reduce the cost of data backups for private cloud infrastructure through certified interoperability with Veritas NetBackup and Rubrik Cloud Data Management. These software offerings can back up to a Red Hat Ceph Storage cluster using its AWS-compatible S3 API. Details on supported versions are listed in the product’s compatibility guide.
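As a rough illustration of the interface those backup products rely on, the following sketch writes a backup artifact to an RGW endpoint over the S3 API with boto3; the endpoint URL, credentials, bucket, and file names are placeholders, not values from any product documentation.

```python
import boto3

# Hypothetical RGW endpoint and credentials; substitute real values for your cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create a bucket and store a backup artifact, just as an S3-compatible
# backup product would do against AWS itself.
s3.create_bucket(Bucket="nightly-backups")
s3.upload_file("db-dump.tar.gz", "nightly-backups", "2018-05-01/db-dump.tar.gz")
```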
Better scale for file workloads, at lower cost
The benefits of object storage as a data storage platform are well documented, and Red Hat Ceph Storage 3 extends its reach and applicability to a wider variety of workloads. Expanded support in the NFS gateway for RGW, which now covers NFS v3 in addition to v4, means Red Hat Ceph Storage users can gradually transition most NFS file workloads to a modern, scalable object storage platform without disruption, completing the migration only when their applications, tools, and management processes are ready.
Increased security for data assets
Red Hat Ceph Storage 3 provides greater security for at-rest data and enables permanent data deletion. Object-granular encryption is supported for data at rest using user-provided keys. This functionality can also be used to permanently delete an object: Encrypt the object, then shred the key before deleting the object.
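Here is a minimal sketch of what object-granular encryption with a user-provided key looks like through the S3 API (SSE-C), again assuming a hypothetical RGW endpoint; discarding ("shredding") the key afterward is what makes the stored ciphertext effectively unrecoverable.

```python
import os
import boto3

# Hypothetical endpoint and credentials, as in the previous example.
s3 = boto3.client("s3", endpoint_url="https://rgw.example.com",
                  aws_access_key_id="ACCESS_KEY",
                  aws_secret_access_key="SECRET_KEY")
s3.create_bucket(Bucket="secure-data")

key = os.urandom(32)  # 256-bit key supplied by the user; the cluster never stores it

# Write and read the object with SSE-C; the same key must be supplied on every read.
s3.put_object(Bucket="secure-data", Key="record.json", Body=b'{"id": 1}',
              SSECustomerAlgorithm="AES256", SSECustomerKey=key)
obj = s3.get_object(Bucket="secure-data", Key="record.json",
                    SSECustomerAlgorithm="AES256", SSECustomerKey=key)
print(obj["Body"].read())

# Crypto-shredding: once the key is destroyed, the at-rest object is effectively deleted.
del key
```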
Reduced storage costs by eliminating redundant object data
Inline, object-granular compression eliminates redundant data within an object before it is saved to disk, reducing storage costs. The compression happens inline as data is written to RGW from the hosts.
Simpler data lifecycle management
Red Hat Ceph Storage 3 eases storage and data management through a policy-based S3 API framework for bucket and object lifecycle management. The Red Hat Ceph Storage object access API is fully compatible with the AWS S3 API and now adds support for the S3 bucket lifecycle API for object and version expiration.
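A minimal sketch of that lifecycle support, using boto3 against the same hypothetical RGW endpoint; the bucket name, prefix, and 30-day window are illustrative.

```python
import boto3

s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8080",
                  aws_access_key_id="ACCESS_KEY",
                  aws_secret_access_key="SECRET_KEY")

# Expire objects under the "tmp/" prefix 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="nightly-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-tmp-objects",
                "Filter": {"Prefix": "tmp/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```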
The web-scale modern datacenter
In sum, as the modern datacenter relies more on web-scale infrastructure, object storage can help organizations realize much of the value of their other digital transformation efforts in the development and application space. As the hybrid cloud becomes a mainstream reality, standardizing on a scalable object storage solution that can span on-prem, private, and public clouds becomes ever more imperative to the success of the modern enterprise.
For more on the new and exciting features in Red Hat Ceph Storage 3, check out this blog post by Federico Lucifredi in our “Architects’ Corner.”
Persistent storage for containers is a hot topic these days. While containers do a great job of packaging application logic, they do not offer a built-in solution for storing application data across the container lifecycle. Ephemeral (or local) storage is not enough: Stateful applications require that container data be available beyond the life of the containers. They also require that the underlying storage layer provide all the enterprise features available to applications deployed in, say, virtualized environments.
Another important consideration is that because many view containers as the next step in the evolution of server virtualization, it’s critical to provide persistent storage options to administrators, because hypervisors have always allowed for persistent storage in one form or another.
One approach is to use traditional storage appliances that support legacy applications. This is a natural inclination and assumption, but… the wrong one.
Traditional storage appliances are based on decades-old architectures at this point and were not made for a container-based application world. They also fail to offer the portability you need for your apps in today’s hybrid cloud world. Some traditional storage vendors offer additional software for your containers that acts as a go-between for these storage appliances and your container orchestration, but this approach still falls short because it is undermined by those same storage appliance limitations. It would also mean that storage for the containers is provisioned separately from your container orchestration layer.
There’s a better way! Storage containers, which run storage software, co-reside with compute containers and serve storage to them from hosts that have local or direct-attached storage. Storage containers are deployed and provisioned using the same orchestration layer you’ve adopted in house (like Red Hat OpenShift Container Platform, which is Kubernetes based), just like compute containers. In this deployment scenario, storage services are provided by containerized storage software (like Red Hat Container-Native Storage based on Red Hat Gluster Storage) that pools and exposes storage from local hosts or direct-attached storage to containerized applications.
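To show what "provisioned using the same orchestration layer" means in practice, here is a hedged sketch that requests a persistent volume from a Gluster-backed StorageClass through the Kubernetes Python client; the StorageClass name, namespace, and claim size are assumptions for a cluster where container-native storage is already installed.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod
api = client.CoreV1Api()

# Request a 10Gi volume from a hypothetical CNS-backed StorageClass named
# "glusterfs-storage"; the provisioner carves the volume out of the storage
# pool and binds it to this claim dynamically, with no separate storage workflow.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="glusterfs-storage",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
api.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
```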
Red Hat Container-Native Storage for Red Hat OpenShift Container Platform is built with Red Hat Gluster Storage and is flexible, cost-effective, and developer-friendly storage for containers. It helps organizations standardize storage across multiple environments and easily integrates with Red Hat OpenShift to deliver a persistent storage layer for containerized applications that require long-term storage. Enterprises can benefit from a simple, integrated solution including the container platform, registry, application development environment, and storage, all in one and supported by a single vendor.
If you, like us, are attending KubeCon and CloudNativeCon in Austin, Texas, this week, we’d love to take a minute to meet with you and talk about Red Hat Container-Native Storage. Stop by the Red Hat booth (D1, near the Hub Lounge) or attend one of our sessions devoted to container storage to learn more about running Red Hat Container-Native Storage for your container-based application platform. Also, our own Steve Watt from the Red Hat Office of the CTO will be speaking from the show on theCube tomorrow, December 7. If you’re not able to make it to Austin, please find us at a roadshow event coming to a city near you.
By Douglas Fuller, Red Hat Ceph Storage Engineering
If you missed last week’s huge announcement about Red Hat Ceph Storage 3, you can find details here. To quickly get you up to speed, though, the big news in this release is around enabling a large variety of storage needs in OpenStack, easing migration from legacy storage platforms, and deploying enterprise storage in Linux containers.
CephFS is here!
One of the highlights of the Red Hat Ceph Storage 3 announcement was production support for CephFS. This delivers a POSIX-compliant shared file system layered on top of massively scalable object storage. Client support is available in the Red Hat Enterprise Linux 7.4 kernel and via FUSE. CephFS leverages Ceph’s RADOS object store for data scalability as well as a natively clustered metadata server (MDS) for metadata scalability, high availability, and performance.
One cluster to do it all
Red Hat Ceph Storage uses the CRUSH data distribution scheme, enabling users to deploy a highly scalable and reliable file system using industry-standard, commodity hardware. Expensive, custom-engineered RAID controllers are no longer necessary. Expanding a CephFS deployment is as easy as expanding a Ceph cluster—CRUSH smoothly manages cluster changes, including expansions with new or different hardware.
Have a hybrid storage cluster with SSDs, HDDs, and NVMe devices? CRUSH can divide your storage workload across any and all devices for maximum performance where you need it and maximum capacity at commodity cost where you don’t. This allows disparate workloads—such as scratch, home, or archive data—to coexist in the same cluster using different or overlapping hardware as needed.
In addition, CephFS’s MDS may be dynamically provisioned and resized online to maximize performance and scalability. For metadata-intensive workloads, the Ceph MDS cluster can repartition its workload, either statically or dynamically, online in response to demand. It’s also fault-tolerant by design, with no need for passive standby or expensive and complex “Shoot the Other Node in the Head” (STONITH) configurations to maintain constant availability.
Take the “cluster” out of cluster management
Red Hat Ceph Storage 3 deploys with Red Hat Ansible Automation, integrating smoothly into existing cluster management environments. Now you can deploy and manage compute and storage both using Ansible playbooks.
New in Red Hat Ceph Storage 3 is a REST API for cluster data and management tools. Monitoring tools are available out of the box to provide detailed health and performance data across your Ceph cluster.
A million uses and counting
Red Hat Ceph Storage offers great flexibility to customers. It can be deployed across a wide variety of storage applications, allowing enterprises to manage one unified system supporting block, file, and object interfaces. With the added flexibility of iSCSI support, users from heterogeneous environments—such as VMware and Windows—can leverage the power of the storage platform.
This flexibility is extremely attractive to organizations such as academic research institutions, many of which are participating in the SuperComputing17 conference in Denver this week. Their IT departments have the onerous task of supporting complicated workflows and yet have to work with shoestring budgets in many cases.
To learn more, check out this additional blog post, and join us at the Red Hat SC17 booth (1763) for presentations, swag, and more.