Red Hat named a “visionary” in Gartner’s Magic Quadrant for Distributed File Systems and Object Storage

Today we announced that Gartner has named Red Hat Storage a Visionary in its first-ever Magic Quadrant for Distributed File Systems and Object Storage. This is a great honor and solid recognition by a leading IT analyst of Red Hat’s vision and prominence in the market with Red Hat Gluster Storage and Red Hat Ceph Storage.


Gartner’s Magic Quadrants are based on rigorous analysis of a vendor’s completeness of vision and ability to execute. We are proud to be ranked highest in both ability to execute and completeness of vision among the Visionaries Gartner named.

We believe this placement by Gartner reinforces our commitment to offer a unified, open, software-defined storage portfolio that delivers a range of data services for next-generation workloads, helping customers accelerate the transition to modern IT infrastructures.

Access the Gartner Magic Quadrant for Distributed File Systems and Object Storage report here.

This graphic was published by Gartner, Inc., as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Red Hat.

Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Software-defined storage explained on the back of a napkin


The demand for storage is headed only in one direction, and that’s up. The cost of enterprise storage is a mounting concern for CIOs as there is added pressure to retain more data due to factors such as regulatory compliance and big data analytics.

Traditional, monolithic storage appliances are built on a scale-up model that is rigid and expensive. The only way to grow capacity is to throw ever-bigger feeds and speeds at the appliance itself. Distributed, software-defined storage, by contrast, is built on a scale-out model that lends itself naturally to seamless scale and agility: adding capacity is as simple as adding more industry-standard servers to the storage cluster.
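The scale-out idea above can be sketched with a toy hash-placement function. This is an illustration only, not Gluster's actual algorithm: Gluster's distributed hash translator (DHT) uses per-directory hash ranges rather than the simple modulo placement shown here.

```python
import hashlib

def place_file(path: str, servers: list[str]) -> str:
    """Toy hash-based placement: map a file path to one server.

    Illustrative sketch only -- Gluster's real DHT assigns hash
    ranges per directory instead of hashing modulo server count.
    """
    digest = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

cluster = ["server1", "server2", "server3"]
assert place_file("/vm/image-001.qcow2", cluster) in cluster

# Scaling out: capacity grows by adding a node to the pool --
# there is no single controller to upgrade.
cluster.append("server4")
assert place_file("/vm/image-001.qcow2", cluster) in cluster
```

The point of the sketch is that placement is a pure function of the file name and the member list, so growing the cluster is a membership change, not a hardware upgrade.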

The price-performance ceiling of traditional storage appliances

When we compare the price-performance characteristics of traditional scale-up storage appliances to software-defined storage such as Red Hat Gluster Storage, we see that as capacity grows, storage appliances hit a performance plateau, while software-defined storage scales linearly. Meanwhile, the cost of a monolithic appliance grows steeply as it approaches that performance plateau.

It’s important to remember that traditional storage appliances were built in the era before today’s diverse workloads. For that reason, Red Hat works with the top hardware vendors to build reference architectures that are optimized for performance, throughput, or capacity depending on the workload you’re running.

Unprecedented flexibility and choice

With Red Hat Gluster Storage, you run the same software bits whether the storage is deployed on premise, on virtual machines, in the cloud, or even in containers. Red Hat Gluster Storage offers you advanced storage features such as tiering, bit rot detection, geo replication, and erasure coding, just to name a few. When considering the 3-year TCO, including hardware and support, you get comparable features and performance for about half the price of a storage appliance.

If you feel locked in by your proprietary storage vendor, perhaps it’s time to give open, software-defined storage a try. Take Red Hat Gluster Storage for a test drive on AWS today.

Container-native storage for next-generation applications

Last week, Red Hat announced the general availability of OpenShift Container Platform 3.3, which includes key updates to the developer experience, web console, cluster management, and enterprise container registry. Read more about the updates in this blog series or listen to the briefing by OpenShift Lead Architect, Clayton Coleman.

Container-native storage for OpenShift Container Platform

In previous blog posts, we’ve covered the various deployment options available with Red Hat Gluster Storage as it relates to applications running in containers. In the latest release of OpenShift Container Platform, container-native storage has been revalidated to offer customers agility and choice for persistent storage. This release includes a new persistent volume selector that helps users differentiate between storage back-ends with similar access modes.
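As a sketch of how such a selector is used, the following builds a claim that targets persistent volumes carrying a specific label. The structure follows the standard Kubernetes PersistentVolumeClaim API (shown here as the Python/JSON equivalent of a YAML manifest); the label names themselves are illustrative, not from the product documentation.

```python
# Hypothetical PersistentVolumeClaim targeting a labeled back-end.
# The "selector" field lets a claim pick among persistent volumes
# that share the same access modes but differ in their labels.
claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "resources": {"requests": {"storage": "10Gi"}},
        "selector": {
            # Illustrative label; the matching PV would carry it too.
            "matchLabels": {"storage-tier": "gluster-fast"},
        },
    },
}

assert claim["spec"]["selector"]["matchLabels"]["storage-tier"] == "gluster-fast"
```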

Persistent storage for both stateful and stateless apps

Red Hat OpenShift Container Platform provides developers the ability to provision, build, and deploy applications and their components in a self-service fashion. It integrates with continuous integration and delivery tools, making it an ideal solution for development teams. And because OpenShift Container Platform offers persistent storage natively to containers, IT organizations can run both stateful and stateless applications on one platform.

Infrastructure and applications under a single control plane

In a recent article, we talked about how containers and software-defined storage have a complementary relationship, in the sense that each one lends itself nicely to the demands of enterprises using the other technology. In some ways, containers hold the key to addressing a challenge once thought unsolvable: how do we give developers more control over the infrastructure for their applications without the pain of actually managing that infrastructure? With OpenShift Container Platform, developers can provision and manage programmable storage just as easily as they would applications. This results in faster development cycles and more reliable application management, while also lowering costs. Detailed product documentation is available here.

Hear from the experts

OpenShift Container Platform is the industry’s most secure and comprehensive enterprise-grade container platform, built on the industry standards Docker and Kubernetes. We have a number of upcoming opportunities where you can learn more and interact with Red Hat experts. We hope to see you there.

  • Red Hat Storage Days – Coming to Seattle (10/18), New York (10/20), and Boston (11/3) with a full agenda covering how software-defined storage is tailor-made for container and OpenStack environments.
  • OpenShift Storage Webinar – On November 3, you can take a deep dive into some of the latest storage provisioning and management updates in OpenShift Container Platform 3.3 and get a sneak preview of what’s on the horizon.
  • OpenShift Commons Gathering – Scheduled conveniently a day before (November 7) and co-located with KubeCon in Seattle, this event is a great opportunity to network with and hear from the star-studded speaker list, which includes Red Hat’s Clayton Coleman, Microsoft’s Brendan Burns, Google’s Kelsey Hightower and Craig McLuckie. Space is limited, so please register soon.
  • KubeCon – Visit our booth on November 8-9 to learn how Red Hat contributions to the Docker and Kubernetes communities are driving some of the key innovations in the container orchestration space.

Red Hat Gluster Storage leads the charge on persistent storage for containers

Offers choice of deployment configurations for containerized applications

By Irshad Raihan and Sayan Saha, Red Hat Storage

One of the key reasons software-defined storage has risen to fame over the past decade is the multiple aspects of agility it offers. As we move into the era of application-centric IT, microservices, and containers, agility isn’t just a good idea, it could mean the difference between survival and extinction.

Agility in a container-centric datacenter

As we covered in a recent webinar, Red Hat Gluster Storage offers unique value to developers and administrators looking for a storage solution that is not only container-aware but serves out storage for containerized applications natively.

One critical aspect of agility offered by Red Hat Storage is that the storage can be deployed in a number of configurations relative to the hardware where the containers reside. This allows architects to choose the configuration that makes the most sense for their particular situation, yet lets them transition to a different configuration with minimal impact on applications.

Dedicated, scale-out storage for containerized applications

If you’re a storage admin looking to provide a stand-alone storage volume to applications running in containers, Red Hat Gluster Storage can expose a mount point so your applications have access to a durable, distributed storage cluster.


In this configuration, the Red Hat Gluster Storage installation runs in an independent cluster (either on premise or in one of the supported public clouds: Microsoft Azure, AWS, or Google Cloud Platform) and is accessed over the network from a platform like Red Hat OpenShift.

Red Hat OpenShift — optimized to run containerized applications and workloads — ships with the appropriate Gluster storage plugins necessary to make this configuration work out of the box.

Container-native storage: Persistent storage for containers with containers

In another deployment configuration, you can run containerized Red Hat Gluster Storage inside Red Hat’s OpenShift Container Platform. Red Hat Gluster Storage containers are orchestrated using Kubernetes, OpenShift’s container orchestrator, like any other application container.

The storage container (Kubernetes pod) pools and serves out local or direct-attached storage from hosts (to be consumed by application containers for their persistent storage needs), offering Gluster’s rich set of enterprise-class storage features, data services, and data-protection capabilities for applications and microservices running in OpenShift.

Exactly one privileged Red Hat Gluster Storage container is instantiated per host as a Kubernetes pod. As a user, you can deploy enterprise-grade storage using a workflow consistent with your application orchestration, use a converged (compute + storage) deployment model, and choose storage-intensive nodes (hosts with local or direct-attached storage) within a cluster for deploying storage containers, optionally collocated with application containers.


This solution, known as container-native storage and now generally available from Red Hat, leverages an open source project named Heketi, contributed by Luis Pabón (one of the speakers on the recent webinar). Heketi is a RESTful volume manager that allows for programmatic volume allocation and provides the glue necessary to manage multiple Gluster volumes across clusters, thereby allowing Kubernetes to provision storage without being limited to a single Red Hat Gluster Storage cluster.

Heketi enhances the experience of dynamically managing storage, whether via its API or as a developer working in the OpenShift Container Platform, and in the container-native storage solution it runs as a container itself inside OpenShift, providing a service endpoint for Gluster. As a storage administrator, you no longer need to manage or configure bricks, disks, or trusted storage pools; the Heketi service manages the hardware for you and allocates storage on demand. Disks registered with Heketi must be provided in raw format; Heketi then manages them using LVM.
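As a rough sketch of what programmatic volume allocation looks like, the snippet below builds the JSON body for a volume-create request. The field names are illustrative of the kind of schema a RESTful volume manager exposes; consult the Heketi API documentation for the actual request format.

```python
import json

def make_volume_request(size_gb: int, replicas: int = 3) -> str:
    """Build a JSON body for a programmatic volume-create call.

    Hypothetical sketch: field names here are illustrative and not
    guaranteed to match Heketi's real request schema.
    """
    payload = {
        "size": size_gb,
        "durability": {
            "type": "replicate",
            "replicate": {"replica": replicas},
        },
    }
    return json.dumps(payload)

# A platform like Kubernetes could POST such a body to the volume
# manager's endpoint instead of an admin hand-configuring bricks.
body = make_volume_request(100)
assert json.loads(body)["size"] == 100
```

The design point is that storage becomes an API call: the orchestrator asks for capacity and durability, and the volume manager decides which disks and bricks satisfy the request.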


This is a key differentiator for Red Hat Gluster Storage. As far as we can tell, no other storage vendor provides this flavor of container-native storage, and certainly not with this level of integration with OpenShift Container Platform. As a number of early adopters have told us, it’s invaluable to have a single point of support all the way from the operating system layer up through orchestration, application development, and storage.

Stay tuned — ’cause we’re not done

We’re continuing to innovate to make managing storage in a containerized environment a more seamless experience for developers and administrators alike.


We’ve delivered a number of industry-first innovations over the past year and will keep that focus as more organizations adopt containers as their preferred deployment platform. Stay tuned.

Storage appliances take cover!

New performance and sizing guidance on Red Hat Gluster Storage with QCT hardware

By Will McGrath, Partner Marketing, Red Hat Storage

I’m an oldest sibling. They say the traits of the oldest include being mature and dependable, while rebellion and excitement are often characteristics of the youngest. Think of the Prodigal Son parable.

Brothers in arms

On the surface, it’s possible to draw a parallel to the Red Hat Storage products. Red Hat Gluster Storage was the first product in Red Hat’s quiver. Distributed, POSIX-compatible, scale-out file storage software, Gluster has added several modern architectural features over the years — geo-replication, erasure coding, bit rot detection, self-healing, and tiering, to mention a few.

Ceph was added to the portfolio more recently (April 2014) and garnered attention as the new kid on the block, partly because it addresses the rapidly growing object storage market and has been the most preferred platform for OpenStack1. As younger siblings often do, Ceph tends to hog the spotlight.

This week, elder sibling Red Hat Gluster Storage made a statement about its maturity in the marketplace and open source community, while also reinforcing the message around choice that Red Hat’s software-defined storage portfolio offers to customers.

New performance and sizing guide for Gluster

QCT (Quanta Cloud Technology), whose parent company, Quanta Computer, Inc., is probably the least-known leading server vendor in the world, has worked with Red Hat to produce the industry’s first Red Hat Gluster Storage Performance and Sizing Guide.

QCT and Red Hat have performed extensive testing to characterize optimized configurations for deploying Red Hat Gluster Storage on several QCT servers, with the goal of providing a highly prescriptive recommendation to end customers on how best to tailor storage to the demands of their workloads.

A slew of different configuration options were tested:

  • Small, medium, and large file operations
  • Standard and dense server chassis
  • JBOD vs. RAID6 storage layouts
  • Replicated and dispersed (erasure coding) volumes
  • SSD tiering vs. non-tiering
  • Self-healing with different cluster sizes

QCT has gone a step beyond co-producing the 34-page joint performance and sizing guide and created single SKUs, to make ordering much easier for cost/capacity-optimized and throughput-optimized configurations.

The following diagram highlights QCT’s naming convention:


(Note: You can find more details on QCT’s part-number variants in Appendix A of the Performance and Sizing Guide on the QCT website).

The performance and sizing guide and ready-to-order SKUs from QCT go a long way in cementing the market and thought leadership of open, software-defined storage in a rapidly evolving landscape.

Sister, sister!

While the new performance and sizing guide represents significant value to customers curious about the best server configurations for a particular workload, it also serves as a data point in the overarching message about the ability to control the very guts of enterprise storage, something that traditional appliances generally cannot offer.

Gluster has made waves recently as a significantly lower-cost alternative to EMC Isilon for certain workloads and for being supported in multiple public clouds.

While Gluster has gained traction in the press as providing persistent file storage for containers in Red Hat’s OpenShift Container Platform, Ceph enjoys its status as the preferred block storage platform for OpenStack private cloud builders. Each has its own niche, strengths, and, more importantly, its own tribe. The Gluster and Ceph upstream open source communities are vibrant and, like young siblings, growing at different rates: Gluster is the larger community (with a larger customer base), while Ceph is growing faster.

Needless to say, they are both equally loved by the Red Hat family, and continue to enjoy strong engineering and marketing focus, to help customers build world-class storage for their applications.

1OpenStack User Survey (April 2016)

The emperor has no clothes – Disrobing the myth of storage window dressing

By Daniel Gilfix, Product Marketing, Red Hat Storage

The emperor has no clothes! The emperor has no clothes!

Those were the words uttered by a bold observer of the king’s procession in the Danish fairy tale written by Hans Christian Andersen in 1837. The metaphor has since been used to connote, among other things, collective denial or ignorance of an obvious fact. Such is the case today with storage, as IT has had to grapple with exponential growth of data from social media and cloud, media and entertainment, video-on-demand services, and even medical imaging. In an era where people throw around buzzwords like digital transformation and discuss solutions to address the pressures all this data growth imposes on capacity, scalability, and cost, we’re often led to believe that storage will take care of itself.

Illustration of “The Emperor’s New Clothes.” By Vilhelm Pedersen (1820 – 1859). Source: English Wikipedia

21st century reality

Thanks to new research by Vanson Bourne Ltd in a survey commissioned by Red Hat, we’re beginning to see mounting evidence that denying the critical role of storage in “sexy” solutions spanning physical, virtual, private cloud, container, and public cloud environments is like sequestering the unabashed observers of the masquerading naked king. Indeed, today’s solutions require the agility to access data from anywhere, anytime, on any device, the flexibility to store data on-premises or in the cloud, and advanced data protection that provides integrity and high availability at very large scale. All of these are core deliverables of software-defined storage. And while it would be presumptuous to believe storage alone can solve all these data challenges, it’s naïve to think that they’re solvable without it.

Research findings

Vanson Bourne’s research underscores this new reality. In fact, inadequate storage infrastructure is now ranked fourth among the top ten pain points that IT decision makers experience on a weekly basis, behind cost, security, and system complexity. 94% of respondents are frustrated with their organization’s storage solution, and 70% fear it can’t cope with next-generation workloads. Over the next three to five years, respondents believe their organization’s volume of data will increase by 54%. Nearly three quarters of respondents worry about their organization’s ability to cope with this amount of data. When asked about specific workload sizes, the outlook seems even gloomier: two thirds believe their organization lacks the versatility to cope with workloads bigger than a petabyte, while only 17% feel they could support a new application next month requiring 10 PBs. Clearly, most organizations are not prepared to cope with large workloads, and without that preparedness, their ability to roll out new applications and IT solutions may be in jeopardy.

Benefits of agile storage

Fortunately, 98% of IT decision makers are optimistic about the tangible benefits of moving to a more agile storage solution: 62% view storage as an opportunity to gain efficiency, and 54% as an opportunity to innovate. More revealing, however, is that the specific benefits anticipated by Vanson Bourne respondents are strikingly similar to those sought by IT departments striving for digital transformation across the datacenter, with 56% citing flexibility, 48% the ability to move data into storage more quickly, 31% support for varied workloads, and 28% freedom from being tied to third-party vendor relationships.

Bottom line

Without discussing what, beyond budgetary constraints, might be impeding procurement for these IT decision makers, it’s clear that more agile storage could alleviate most frustrations with their current storage solution and deliver real benefits to the infrastructure and solutions that depend on storage at their core. This is something we have touted consistently at Red Hat with our portfolio of open, software-defined storage solutions that are specifically designed to support new workloads with flexible and highly scalable architectures based on commodity hardware. It’s refreshing to see quantifiable evidence beginning to surface from the front line of our customers in support of our mission, but there are still those who continue to watch the same IT solution parade like admirers of the emperor’s attire. 73% of respondents in Vanson Bourne’s research believe their organizations are not always aware of storage needs, and 83% feel that storage needs to be a higher priority.

So the next time someone proposes a new approach to solving an IT problem that claims benefits in data efficiency, flexibility, accessibility, and all-around agility, remember those whose vision was clouded by groupthink nearly two centuries ago in Denmark. Is storage still window dressing for you, or should it be an integral component, if not the prime focus, of these so-called solutions up front, so that the promised benefits are actually seen, not just imagined?

Red Hat Ceph Storage is object storage

By Steve Bohac, Product Marketing, Red Hat Storage

Explosive data growth continues to overwhelm hardware-based storage and IT budgets. Web-scale architectures are becoming more and more prevalent within enterprise IT as well as among cloud providers. By “web scale,” we mean large (often a petabyte and beyond!) architectures of the kind employed by the social media and cloud service providers we all use every day.

Very often, these web-scale storage architectures are built on object storage. Traditional, proprietary, file-system-based storage appliances generally are built with architectures that are too rigid at this PB (or greater) scale. Installations this large can expose the file system’s inherent weaknesses, performance limitations, and management complexities. Thus, object storage is becoming more attractive for its potential of handling scale while minimizing complexity and costs. Software-defined storage (like Red Hat Storage) can significantly cut costs, prevent vendor lock-in, and allow customers to add capacity and performance independently—critical at the PB+ scale!

You may have seen our recent announcement of Red Hat Ceph Storage 2 featuring enhancements to its object storage capabilities. We’ve seen that object storage implementations have increased over the past few years. Red Hat Ceph Storage has been object based since day one of the Ceph open source project. In fact, RADOS, the foundational component of Ceph, is an acronym for Reliable, Autonomous, Distributed Object Store.


Red Hat Ceph Storage is well suited for object storage installations because it is:

  • Proven at web scale for object storage – Red Hat Ceph Storage has been designed around objects from the ground up since the beginning of Ceph 10 years ago. During the past decade, it has been hardened by customer usage as well as extensive community development. Consequently, many large companies run Red Hat Ceph Storage for their large production object storage workloads.
  • Flexible storage for your applications – Several access methods provide customers flexibility in how their applications interact with Ceph object storage: Amazon S3, OpenStack Swift, or Ceph’s native API protocols can all be employed. Ceph’s scale-out architecture provides additional flexibility, allowing customers to scale performance and capacity separately.
  • Capable of offering the data protection, reliability, and availability enterprises demand – The distributed nature of the Ceph storage architecture allows you to store and protect your data across numerous hardware assets (thereby mitigating the risk of a hardware failure). The employment of erasure coding provides more storage-efficient data protection than traditional RAID-based solutions. Last, geo-replication capability provides disaster recovery in the unfortunate event that a location suffers some sort of outage.
  • Open, community-based, software-defined storage for object – An open source object storage offering, Red Hat Ceph Storage draws on the innovations of a community of developers, partners, and customers.
  • Cost-effective object storage to help you maximize your storage budget – The employment of a distributed, open, software-defined storage solution has the potential to significantly cut costs, prevent vendor lock-in, and allow customers to add capacity without degrading performance – critical at the PB+ scale!
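The erasure-coding point above can be illustrated with a toy XOR-parity scheme. This is a sketch of the general idea only: Ceph's erasure-coded pools use configurable k+m schemes (such as Reed-Solomon codes), not single-parity XOR.

```python
def xor_parity(chunks: list[bytes]) -> bytes:
    """Compute one parity chunk as the byte-wise XOR of data chunks.

    Toy illustration of erasure coding; assumes equal-length chunks.
    """
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild a single lost chunk from the survivors plus parity."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Lose chunk 1; reconstruct it from the other chunks and the parity.
assert recover([data[0], data[2]], parity) == b"BBBB"
```

Even this toy scheme shows the storage-efficiency argument: three data chunks plus one parity chunk tolerate a single loss at roughly 33% overhead, versus the 200% overhead of keeping three full replicas.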


Red Hat Gluster Storage 3.1.3 is here!

By Alok Srivastava, Senior Product Manager, Red Hat Gluster Storage and Data, Red Hat

Container-native storage, faster self-healing, sharding, and more

It’s a great time to be a storage aficionado! Last week, we announced Red Hat Ceph Storage 2. Today, we’re thrilled to announce the general availability of Red Hat Gluster Storage 3.1.3.

Building on momentum

Red Hat Gluster Storage has enjoyed strong momentum in terms of customer success and community growth. We’ve added a number of enterprise-class features over the past 3 to 4 years that have significantly enhanced performance, reliability, durability, and security.

Software-defined storage offers the best of both worlds—the flexibility to grow storage incrementally and reuse existing industry-standard hardware, while also taking advantage of the latest innovation in storage controller software and hardware components. For more on Red Hat Gluster Storage 3.1.3 features, check out the following video.

Feature-packed release

The 3.1.3 release of Red Hat Gluster Storage includes a number of feature enhancements that enable greater performance, reliability, and faster self-healing, including deep integration of Red Hat Gluster Storage with other Red Hat products.

Persistent storage for containers

You may have already seen our blog post from earlier today on container-native storage for OpenShift Container Platform. Earlier this year, we announced a containerized image of Red Hat Gluster Storage. This release moves a step further and enables converged storage containers that can co-reside with application containers on the same host. Sharing resources between application and storage containers helps reduce overall TCO. Containers are deployed and provisioned using an enhanced Heketi module.

Container-native storage provides storage services to the application containers by pooling and exposing storage from either local hosts or direct-attached storage.


Multi-threaded self-heal

All at once or one at a time? While the debate between single- and multi-threaded approaches is never-ending, Red Hat Gluster Storage self-heal certainly does better for some workloads when parallelized. This release of Red Hat Gluster Storage allows you to perform self-heal in parallel. Multi-threaded self-heal is most useful with a large number of small files (e.g., sharded VM images). Facebook is the primary contributor to multi-threaded self-heal in the Gluster community.


Sharding

Sharding refers to breaking a large file into smaller chunks. Red Hat Gluster Storage splits large virtual machine (VM) image files into blocks of configurable size. This results in faster self-healing with reduced CPU usage, which benefits the hyperconvergence of Red Hat Gluster Storage with Red Hat Enterprise Virtualization and the live VM use case.
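The block-splitting idea can be sketched in a few lines. This is a toy illustration of the concept only; Gluster's shard translator stores each block as a separate file internally and handles the bookkeeping transparently.

```python
def shard(data: bytes, block_size: int) -> list[bytes]:
    """Split a large file's bytes into fixed-size blocks (the last
    block may be shorter). Toy sketch of the sharding idea."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

image = bytes(range(10)) * 100  # stand-in for a large VM image
blocks = shard(image, 256)

# Reassembly is lossless, and only the blocks that actually changed
# need to be healed or geo-replicated, not the whole image file.
assert b"".join(blocks) == image
assert all(len(b) <= 256 for b in blocks)
```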

The geo-replication feature of Red Hat Gluster Storage is also sharding-aware for these two use cases, so that only the required shards/fragments are replicated.


Integration with VSS

We heard you! Windows users no longer need to call their storage administrator to browse previous versions of a file or folder. Red Hat Gluster Storage is now integrated with the Volume Shadow Copy Service (VSS) of Microsoft Windows and supports viewing and accessing snapshots.


SMB Multichannel

Have more network adapters? Get better SMB performance! SMB Multichannel is a feature of the SMB 3.0 protocol that increases network performance and availability for Red Hat Gluster Storage servers. It allows the use of multiple network connections simultaneously, providing increased throughput along with network fault tolerance. SMB Multichannel ships as a technical preview feature with Red Hat Gluster Storage 3.1.3, and we intend to fully support it soon.

Easy installation of hyperconverged setup

We’ve made the installation of a hyperconverged setup of Red Hat Gluster Storage and Red Hat Enterprise Virtualization straightforward. The Ansible-based gdeploy tool has been enhanced to automate the installation of hyperconverged setups.

Kilo refresh for Gluster Swift

We’ve refreshed Gluster Swift to support OpenStack Kilo for RHEL 7-based Red Hat Gluster Storage. RHEL 6-based Red Hat Gluster Storage continues to support OpenStack Icehouse.

Scheduling of geo-replication

Periodic scheduling of geo-replication allows administrators to synchronize data between clusters during non-peak hours. Detailed performance and sizing guides, with prescriptive guidance for finding the right price/performance mix for your workloads, will be available later this year.

Find us at Red Hat Summit

Red Hat Storage has an impressive presence at this year’s conference, with key announcements around object storage with Red Hat Ceph Storage and container-native storage with Red Hat Gluster Storage. Stop by Pods 31 and 32 of Booth 508 on the expo floor, speak with storage experts, or attend one of our sessions. You could even win a wicked-cool Amazon Echo (as seen in the Baldwin ads)!


Red Hat Gluster Storage sets the storage agenda. Again. This time with new container-native storage.

Earlier today, you may have read the news about container-native storage for OpenShift Container Platform, which helps developers manage storage just as easily as applications. As the name suggests, storage and applications are served out of containers running on the same server. Red Hat Gluster Storage runs in a container inside OpenShift just like any other application, except that it serves storage. This helps lower costs and increase control and flexibility. Even better, it lets customers move applications between platforms (on-premise, virtual, cloud, containers) without rewriting code to deal with different storage environments.


What’s the big deal?

Containers are the next big thing. There’s a good chance you’re thinking about, planning, or already transitioning your applications to containers. While the container revolution started primarily with stateless applications, the reality is that most enterprise applications running in containers need a persistent storage layer to store application data, state, and configuration, all of which need to live beyond the life of a container. Our engineers have delivered a continuum of innovation by containerizing Red Hat Gluster Storage itself, and also leveraging API-driven volume allocation to help easily manage and provision storage for containers. Simply put, Red Hat Gluster Storage solves the persistent storage challenge for containers.

Why Red Hat?

Today’s storage news aligns with the announcement outlining Red Hat’s strategy to offer the OpenShift platform across development and cloud platforms. Container-native storage strengthens the Red Hat value proposition for containers, while also increasing the appeal of Red Hat solutions to developers. In the words of Scott Sinclair, Senior Storage Analyst at ESG: “Any integration of storage into the container host and application development platform goes a long way to alleviating resistance in container adoption.”

In addition, this capability comes backed by Red Hat’s world-class support, offering a single point of support for customers looking for a comprehensive solution for their container environments.

To containers and beyond!

Red Hat Gluster Storage has made huge leaps in terms of adding enterprise-grade storage features over the last 3 to 4 years. Significant enhancements to scalability, reliability, performance, and security features have helped it surpass traditional storage solutions in terms of price/performance. Now, all this enterprise-class functionality is available for containerized applications.

Red Hat Gluster Storage is flexible, it’s easy to set up and use, and it’s one of the few storage solutions in the marketplace that can serve everyone from classic, mode 1 customers to all-in, mode 2 customers and everything in between. Follow our story at Red Hat Summit this week.

The milestone of Red Hat Ceph Storage 2

By Daniel Gilfix, Red Hat Storage

This week’s announcement of Red Hat Ceph Storage 2 marks the most significant release of the cloud-native, software-defined storage product since the acquisition of Inktank by Red Hat in 2014. It represents an important milestone, not only in terms of the company’s steadfast commitment to storage but also from the perspective of preparing open source customers for the highly coveted software-defined datacenter.


Source: Scott Maxwell, LuMaxArt Golden Guy Trophy Winner

Why storage matters

We work in an era where storage is often taken for granted, under-glamorized for the role it serves, and yet increasingly essential, often indispensable, to solutions spanning physical, virtual, private cloud, container, and public cloud environments. Nevertheless, in recent studies commissioned by Red Hat, Vanson Bourne Ltd reports that 71% of IT decision makers fear their organizations’ storage solutions won’t be able to handle next-generation workloads, while 451 Research indicates that 57% already have or are moving to software-defined datacenters this year. As most loyal Ceph followers know, this is precisely why folks are so excited about the launch of our new product.


Source: Pixabay, public domain

What’s new in 2

While Red Hat Ceph Storage has distinguished itself as a unified storage platform that’s overwhelmingly preferred for OpenStack, for years Ceph has also fulfilled this role for service providers and the telco community as an object store proven at scale. Red Hat Ceph Storage 2 adds innovation that results in a far more robust object storage solution for a wide variety of enterprise use cases, like active archive, big data, and media content hosting. Customers also get an easier-to-use product and simpler life-cycle management by virtue of the integrated Red Hat Storage Console 2.

Object storage enhancements

Ceph’s object storage interface, RGW, has been greatly improved. RGW geo-clusters with a single namespace allow clients to read and write against their local cluster, with “eventually consistent” synchronization between sites. RBD mirroring enables multi-site replication for disaster recovery and archival. Support for Active Directory, LDAP, and Keystone v3 authentication systems expands security options. And improved S3 and Swift protocol API support (such as AWS v4 client signatures, bulk delete, and object versioning) strengthens compatibility with AWS and OpenStack, respectively.
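One practical upshot of the stronger S3 compatibility is that standard S3 tooling can talk to RGW directly. A minimal sketch, in which the endpoint URL, port, and bucket name are placeholders rather than values from this release:

```shell
# Hypothetical sketch: the endpoint and bucket name are placeholders.
# Enable object versioning on an RGW bucket using the stock AWS CLI,
# pointed at the gateway instead of Amazon S3.
aws --endpoint-url http://rgw.example.com:7480 \
    s3api put-bucket-versioning \
    --bucket demo-bucket \
    --versioning-configuration Status=Enabled
```

The same pattern applies to other s3api subcommands, so existing S3 automation can often be reused against RGW with only an endpoint change.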

Sneak peek

In addition, there are three sets of functionality in Tech Preview for early customer exposure:

  • CephFS, the POSIX-compliant file system that stores its data in a Ceph storage cluster and can be provisioned through the OpenStack Manila file share service
  • An NFS-to-S3 gateway for import and export of object data
  • BlueStore, a new storage media backend that we expect to lead to 2-3X performance improvements for the entire product

Bottom line

All these features are instrumental to empowering Ceph to keep pace with the ever-growing demands of its spirited user base and to handle multi-petabyte workloads with the grace and efficiency that enterprise customers need for software-defined datacenters. On the surface, they might not appear especially sexy, but for cloud builders and IT decision makers of all sorts, many of whom are already in the loyal Ceph community, they are a breath of fresh air to an otherwise stifled march toward storage infrastructure agility.

See you at Summit

If you’re coming to Red Hat Summit at the Moscone Center in San Francisco between June 27-30, please stop by the Red Hat Storage booth (#31-32) or the partner pavilion (#529), where a cross-section of Ceph partners are demonstrating elements of our extended ecosystem. Further information is available online as well.

Ten reasons to choose Red Hat Gluster Storage over EMC Isilon

As we head toward a software-defined, container-centric, microservices-driven IT culture, it is becoming quite apparent to CIOs that their storage is stuck in the stone age. In some ways, storage was the last bastion of mode 1 technologies, while everything around it in the datacenter has moved toward distributed, scale-out, and agile architectures.

Customers who choose to add open, software-defined storage into the mix, or make a clean-sweep transition for all their storage, can quickly realize several benefits. We have articulated the top ten in the list below. Drumroll!

Ten Reasons to Choose Red Hat Gluster Storage over EMC Isilon

  1. Cost effective, even at petabyte scale
  2. Choice of deployment environment (bare metal, VM, container, cloud)
  3. Deploy enterprise storage on industry-standard hardware
  4. Performance and sizing guides for optimal hardware selection
  5. Granular updates (including security fixes) as opposed to forklift updates
  6. World-class, open source platform (RHEL, XFS) vs. closed, proprietary platform
  7. All features included, no hidden costs for add-on features
  8. Skills reuse (leverage existing Linux skills set)
  9. Simple pricing model
  10. Tested, trusted support and services

If you find that some of these claims are audacious, you’re not alone. We think the same way. And we have the data to back it up. Hear for yourself from experts such as Brent Compton, Director of Solution Architectures at Red Hat Storage, by signing up for a free webinar replay from June 8 or a repeat performance on June 16. Brent has some very compelling data that corroborates each of the items in the preceding list. Our expert panel will also take live questions at the end.

If you’d like to speak to Red Hat Storage experts live, please join us at Red Hat Summit in San Francisco in a few weeks or sign up for one of our free Red Hat Storage Days coming to a city near you.

In the meantime, watch this video, in which we discuss just how Red Hat Gluster Storage stacks up against traditional storage like EMC Isilon:

Sign up for the June 8 webinar replay.
Sign up for the June 16 webinar.

Comparing GlusterFS performance as a persistent storage layer for containerized applications on bare metal vs. PaaS deployments

By Amrik Jalif, Head of Storage UK&I, Red Hat

Container technology offers significant improvements in application density and deployment time. Containers have the versatility and power to do to virtualized environments what virtualization did to “a single server for every app.” Density can rise from roughly 20 virtual machines per node to the equivalent of 50 containers. Managing storage in a containerized world should be easy and automated. By containerizing the storage layer, developers gain much more control over their environments.

In previous posts, we’ve outlined how Red Hat Storage can offer significant benefits to enterprises looking to deploy applications in containerized and PaaS environments, such as OpenShift Enterprise. We now have benchmark studies that compare performance between accessing GlusterFS from applications running on bare-metal x86 servers vs. from application containers in a PaaS environment, as in OpenShift Enterprise.

We used the Flexible IO (FIO) tool in distributed mode as the workload generator for random and sequential workloads. We scaled OpenShift pods from 5 to 500, varying the file size and number of jobs per pod while keeping the dataset size constant at 400GB. For a true “apples to apples” comparison, we ran distributed FIO on 6 OpenShift pods and on 6 bare-metal clients, and then scaled the pods from 5 to 1000 to push the limits of performance.
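For readers who want to reproduce a run like this, here is a sketch of an FIO job file for a mixed random workload against a GlusterFS mount. The mount path, block size, file size, and job count are illustrative placeholders, not the exact parameters used in the benchmark:

```ini
; Hypothetical sketch: paths and sizes are placeholders, not the
; benchmark's actual settings.
; In distributed mode, fio is launched with a host list, e.g.:
;   fio --client=hosts.txt randrw.fio
; where hosts.txt names each pod or bare-metal client running
; "fio --server".
[global]
ioengine=libaio
direct=1
; GlusterFS mount point inside the pod or on the bare-metal client
directory=/mnt/glusterfs
runtime=300
time_based

[randrw]
rw=randrw
bs=4k
size=4g
numjobs=4
```

Running the identical job file from pods and from bare-metal clients is what keeps the comparison apples to apples: only the access path to GlusterFS changes.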