The value of on-premise object storage

The recent outage of the Amazon Web Services (AWS) S3 offering has likely reminded customers of the old “too many eggs in one basket” adage, sparking reviews of where and how they deploy and manage their storage. Outages of this magnitude illustrate how much data lives in object storage these days.

Object storage has come to be the foundation for web-scale architectures in public or private clouds, as we’ve previously discussed. Its allure lies in its ability to handle massive scale while minimizing complexity and cost. Customers struggling with large-scale data storage deployments are turning to object storage to overcome the limitations that legacy storage systems face at scale (typically degradation of performance).

Object storage allows application developers and users to focus more on their workflow and logic and not worry about handling file storage and file locations. However, the widespread disruption caused by the unavailability of AWS S3 shows how dependent so many businesses have become on public-cloud-based object storage today.

Ceph is an object store at its core and was designed for web-scale applications. Running Red Hat Ceph Storage on premises offers customers a more secure, cost-effective, and performant way to manage large amounts of data without the risks that can be associated with public-cloud-based solutions.

The S3 API has become the de facto standard for accessing data in object stores, and organizations now develop and demand applications that “speak” S3. Applications that use the S3 API can more easily migrate between on-premise private clouds built on Red Hat Ceph Storage and public clouds like AWS simply by changing the storage endpoint network address. This lets them operate both on datasets held in private clouds and on those stored in the public cloud. A common API for object storage means that applications can move between these two cloud deployment models, managing data consistently wherever they go.
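To illustrate this portability, here is a minimal sketch using the Python boto3 library. The endpoint URL, credentials, and bucket name are hypothetical placeholders; the same application code runs against AWS S3 or an on-premise Ceph RADOS Gateway, and only the endpoint changes.

    # Minimal sketch: identical S3 client code against AWS S3 or a Ceph
    # RADOS Gateway; only the endpoint (and credentials) differ.
    # The endpoint URL, keys, and bucket name are hypothetical placeholders.
    import boto3

    def make_s3_client(endpoint_url=None):
        # endpoint_url=None -> default AWS S3; otherwise point at the Ceph RGW
        return boto3.client(
            "s3",
            endpoint_url=endpoint_url,
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY",
        )

    aws_s3 = make_s3_client()  # public cloud (AWS S3)
    ceph_s3 = make_s3_client("http://rgw.example.internal:7480")  # private cloud (Ceph RGW)

    # The application logic is identical against either endpoint
    # (assumes the bucket already exists on both).
    for s3 in (aws_s3, ceph_s3):
        s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello")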

For a customer example of Red Hat Ceph Storage used at scale for object storage, check out our recently published success story with the CLIMB project in the U.K.

Hopefully, most of you reading this were not stung by the recent outage. Regardless, now is as good a time as any to review your infrastructure to determine if an on-premise object storage approach with Red Hat Ceph Storage makes sense. We think it does!

For more information on Red Hat Ceph Storage for object storage, check out this page.

For more information on Red Hat Ceph Storage for cloud infrastructures, check out this page.

Cue the drumroll… Red Hat Gluster Storage 3.2 releases today

Faster metadata operations, deeper integration with OpenShift, smaller hardware footprint

It’s here! Red Hat today announced the general availability of Red Hat Gluster Storage 3.2. Please join the live launch event at 12pm EDT today to hear from customers, partners, experts, and the thriving open source GlusterFS community. Can’t make it? No sweat. The content will be available for your viewing pleasure for 90 days.

The best yet. And then some.

This release is a significant milestone in the life of the product. It delivers a number of key enhancements while also representing a solid, stable release, the culmination of much of the innovation of the past 18 months.

Red Hat Gluster Storage is highly scalable, software-defined storage that can be deployed anywhere – from bare metal, to virtual machines, to containers, and across all three major public cloud providers (Microsoft Azure, Google Cloud Platform, and Amazon Web Services).

Red Hat Gluster Storage 3.2 is geared toward cloud-native applications and modern workloads that demand a level of agility and scale that traditional storage appliances cannot offer. Read on to find out how this release delivers performance and cost improvements along with a seamless user experience.

Deeper integration with OpenShift Container Platform

Container-native storage for OpenShift Container Platform enables developers to provision and manage storage as easily as they can applications. In addition, improvements in small-file performance in Red Hat Gluster Storage will help container registry workloads – the heart of the container platform itself – to persist Docker images for containerized applications.

In addition to small-file performance, a number of scale improvements to the management plane allow a larger number of persistent volumes (OpenShift PVs) to be hosted in a single cluster. Also, the Red Hat Gluster Storage container now runs the sshd service to support async geo-replication between remote sites, and it has been instrumented to hold the SSL certificates for the in-flight encryption that container-native storage will leverage in upcoming releases.
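As an illustration of how simple provisioning becomes (this example is not from the release notes), here is a minimal sketch that requests a persistent volume from container-native storage with the Kubernetes Python client; the storage class name, claim name, and namespace are hypothetical and depend on how the cluster was set up.

    # Minimal sketch: requesting a persistent volume backed by container-native
    # storage through the Kubernetes Python client. The storage class name,
    # claim name, and namespace are hypothetical and cluster-specific.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="registry-claim"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],
            storage_class_name="glusterfs-storage",  # hypothetical storage class
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )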

Storage administrators will also find that better small-file performance improves the user experience for day-to-day operations, with directory listings up to 8x faster. The secret sauce behind this improvement, client-side caching, benefits both Linux (FUSE) and Windows (SMB) users.

The improvement in small-file performance will be particularly relevant to workloads with a large number of files under a few MBs in size. Use cases such as hosting Git repositories and electronic design automation will benefit as well.

Lower hardware footprint and costs

Most companies look to 3-way replication for enterprise-grade data integrity. What if we told you that you could benefit from the same grade of data integrity while lowering your hardware footprint and cost?

This is possible in Red Hat Gluster Storage 3.2 through a new type of volume known as an arbiter volume, which does just that – arbitrates between two nodes that may be out of sync. Because the arbiter brick stores only metadata, you get the consistency guarantees of 3-way replication at close to the hardware footprint of 2-way replication. This is particularly useful for remote office-branch office (ROBO) scenarios where remote locations are interested in saving space, power, and cost through hyperconvergence.
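To give a sense of what this looks like in practice, here is a minimal sketch that creates an arbiter volume by calling the gluster CLI from Python; the hostnames, brick paths, and volume name are hypothetical placeholders.

    # Minimal sketch: creating a replica-3 arbiter volume by shelling out to
    # the gluster CLI. Hostnames, brick paths, and the volume name are
    # hypothetical placeholders.
    import subprocess

    subprocess.run(
        [
            "gluster", "volume", "create", "robo-vol",
            "replica", "3", "arbiter", "1",
            "server1:/bricks/data1",     # full data replica
            "server2:/bricks/data1",     # full data replica
            "server3:/bricks/arbiter1",  # arbiter brick: metadata only
        ],
        check=True,
    )
    subprocess.run(["gluster", "volume", "start", "robo-vol"], check=True)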

Additionally, faster self-healing of sharded storage volumes will benefit customers running hyperconverged configurations, such as Red Hat Virtualization with Red Hat Gluster Storage.

Performance enhancements

Multi-threaded applications can perform faster by parallelizing I/O requests when client-io-threads is enabled. This feature is activated automatically during heavy loads and dials back when the system is idle.

Our internal test results have shown up to a 250% performance increase on dispersed volumes with client-io-threads enabled, with performance scaling near-linearly as the number of threads increases.
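For an existing volume, the option can be toggled through the regular gluster CLI; a minimal sketch follows, with a hypothetical volume name.

    # Minimal sketch: enabling client-io-threads on an existing volume via the
    # gluster CLI. The volume name is a hypothetical placeholder.
    import subprocess

    subprocess.run(
        ["gluster", "volume", "set", "dispersed-vol",
         "performance.client-io-threads", "on"],
        check=True,
    )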

Native eventing and integration with Nagios

Red Hat Gluster Storage 3.2 features a new native push-based notification framework that can send out asynchronous alerts. As with earlier releases, full integration with the open source Nagios framework is supported.
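By way of illustration (not taken from the release itself), a push-based consumer can be as small as the following sketch: a webhook receiver built on the Python standard library. It assumes the cluster's eventing framework has been configured to POST JSON events to this address, and the port is an arbitrary placeholder.

    # Minimal sketch: a webhook receiver for asynchronous storage events,
    # using only the Python standard library. It assumes the eventing
    # framework has been pointed at this address; the port is arbitrary.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StorageEventHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read and parse the JSON event payload pushed by the cluster
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length) or b"{}")
            print("received storage event:", event)
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 9000), StorageEventHandler).serve_forever()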

Learn more. Speak to Red Hat experts and customers.

Find out more about the benefits of the new feature-rich Red Hat Gluster Storage at our launch event happening today at 12pm EDT. Sign up for free, and interact directly with our experts who will be standing by to answer your questions.

Hear from customers like Brinker International who are running containers on Red Hat Gluster Storage in production and have some great results to share. Hear also about great things happening in the GlusterFS community, which includes contributors like Facebook. We hope to see you there.