Moving the ball forward for object storage with Red Hat Ceph Storage 2.2

We are happy to announce that Red Hat Ceph Storage 2.2 is now Generally Available (GA). It is based on the open source community version of Ceph Storage, specifically version 10.2.5, the Jewel release stream. Similar to Red Hat Ceph Storage 2 announced last summer, Red Hat Ceph Storage 2.2 has a heavy focus on object storage deployments. In addition to following a new, more predictable release process, Red Hat Ceph Storage 2.2 offers a number of enhancements, including:

Global clusters

The object access method for Red Hat Ceph Storage (aka the RADOS Gateway, or “RGW”) now supports up to three sites for global cluster configurations. This means that customers can deploy an active-active global cluster across three geographically distributed sites with data replication and consistency across all three. Alternatively, the RGW multi-site capability can be used in disaster recovery configurations, protecting data against site-level disasters with an active-passive deployment.

Better security and encryption support

Red Hat Ceph Storage now has native support for Secure Sockets Layer (SSL) in RGW. This is a good option for small- to medium-sized installations that require data encryption between the object storage client and RGW. Because SSL encryption can have a performance impact, we still recommend terminating SSL at the load balancer or HA layer for large-scale installations.

S3 API enhancements

Red Hat Ceph Storage supports the S3 object API for easy application mobility across private and public clouds. Red Hat Ceph Storage 2.2 now adds support for the new multipart upload copy-part S3 API.
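
As a sketch of how an application might drive the copy-part API: a large server-side copy is split into ranged UploadPartCopy requests, each of which would carry its byte range in an `x-amz-copy-source-range` header. The part-size math below is plain Python; the sizes are illustrative and not tied to any particular SDK.

```python
# Hypothetical sketch of driving a multipart copy: the source object is
# split into byte ranges, one per UploadPartCopy request (each request
# would carry its range in an x-amz-copy-source-range header).
def copy_part_ranges(object_size, part_size):
    """Return (part_number, range_header) pairs covering the source object."""
    ranges = []
    part_number = 1
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1
        ranges.append((part_number, "bytes=%d-%d" % (start, end)))
        part_number += 1
    return ranges

# A 25 MiB object copied in 10 MiB parts needs three ranged requests:
for num, rng in copy_part_ranges(25 * 1024 * 1024, 10 * 1024 * 1024):
    print(num, rng)
```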

Swift enhancements

Red Hat Ceph Storage 2.2 also has important developments for customers using the Swift object API that demonstrate our continued focus on object storage deployments, including:

  • Support for Swift object versioning functionality.
  • Full testing and compliance of Red Hat Ceph Storage 2.2 object storage with the Swift API tests in the Tempest test suite from the OpenStack RefStack toolset. For customers, this translates into interoperability between Red Hat Ceph Storage and applications and services that use the Swift API.
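
To illustrate the versioning feature, the sketch below only assembles the Swift API call that enables it: a container POST carrying the `X-Versions-Location` header, which tells Swift where to archive prior versions of overwritten objects. The endpoint URL, container names, and token are placeholders, and authentication against a real cluster is omitted.

```python
# Sketch: enabling Swift object versioning on a container by POSTing the
# X-Versions-Location header. Endpoint, container names, and token are
# placeholders; real authentication is omitted.
import urllib.request

def versioning_request(storage_url, container, versions_container, token):
    """Build the container POST that archives overwritten objects
    into a separate versions container."""
    return urllib.request.Request(
        "%s/%s" % (storage_url, container),
        method="POST",
        headers={
            "X-Auth-Token": token,
            "X-Versions-Location": versions_container,
        },
    )

req = versioning_request("https://rgw.example.com/swift/v1",
                         "docs", "docs-archive", "AUTH_TOKEN")
print(req.get_method(), req.full_url)
```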

For a recent example of a customer success with Red Hat Ceph Storage at scale for object storage, check out our recently published success story with the CLIMB project in the U.K.

For general information on object storage features in Red Hat Ceph Storage, read this blog.

For general information on Red Hat Ceph Storage, visit this page.

The value of on-premise object storage

The recent outage of the Amazon Web Services (AWS) S3 offering has likely reminded customers of the old “too many eggs in one basket” adage, sparking reviews of where and how they deploy and manage their storage. Outages of this magnitude illustrate how much data lives in object storage these days.

Object storage has come to be the foundation for web-scale architectures in public or private clouds, as we’ve previously discussed. Its allure is due to its potential to handle massive scale while minimizing complexity and cost. Customers struggling with large-scale data storage deployments are turning to object storage to overcome the limitations that legacy storage systems face at scale (typically degradation of performance).

Object storage allows application developers and users to focus more on their workflow and logic and less on handling file storage and file locations. However, the breadth of outages caused by the unavailability of AWS S3 shows how dependent many businesses are on public-cloud-based object storage today.

Ceph is an object store at its core and was designed for web-scale applications. Running Red Hat Ceph Storage on premises offers customers a more secure, cost-effective, and performant way to manage large amounts of data without the risks that can be associated with public-cloud-based solutions.

The S3 API has become the de facto standard for accessing data in object stores, and organizations now develop and demand applications that “speak” S3. Applications that use the S3 API can more easily migrate between on-premises private clouds built on Red Hat Ceph Storage and public clouds like AWS simply by changing the storage endpoint network address. This enables them to operate on both datasets that are present in private clouds and those stored in the public cloud. A common API for object storage means that applications can move between these two cloud deployment models, managing data consistently wherever they go.
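
As a minimal sketch of that portability, assuming placeholder endpoints and credentials: the connection settings are the only layer that changes, while everything the application does through the S3 API stays the same.

```python
# Minimal sketch of endpoint portability, with placeholder endpoints and
# credentials: only the connection settings change when an S3 application
# moves between a private RGW cluster and AWS.
def s3_client_config(endpoint_url, access_key, secret_key):
    """Connection settings an S3 SDK (e.g. boto3) would consume;
    application code layered above this stays identical."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint_url,
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

private = s3_client_config("https://rgw.internal.example.com", "KEY", "SECRET")
public = s3_client_config("https://s3.amazonaws.com", "KEY", "SECRET")

# The only setting that differs between the two deployments:
diff = {k for k in private if private[k] != public[k]}
print(diff)  # {'endpoint_url'}
```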

Hopefully, most of you reading this were not stung by the recent outage. Regardless, now is as good a time as any to review your infrastructure to determine if an on-premise object storage approach with Red Hat Ceph Storage makes sense. We think it does!

For more information on Red Hat Ceph Storage for object storage, check out this page.

For more information on Red Hat Ceph Storage for cloud Infrastructures, check out this page.

Cue the drumroll… Red Hat Gluster Storage 3.2 releases today

Faster metadata operations, deeper integration with OpenShift, smaller hardware footprint

It’s here! Red Hat today announced the general availability of Red Hat Gluster Storage 3.2. Please join the live launch event at 12pm EDT today to hear from customers, partners, experts, and the thriving open source GlusterFS community. Can’t make it? No sweat. The content will be available for your viewing pleasure for 90 days.

The best yet. And then some.

This release is a significant milestone in the life of the product. It brings to bear a number of key enhancements but also represents a solid, stable release that is a culmination of much of the innovation over the past 18 months.

Red Hat Gluster Storage is highly scalable, software-defined storage that can be deployed anywhere – from bare metal, to virtual machines, to containers, and across all three major public cloud providers (Microsoft Azure, Google Cloud Platform, and Amazon AWS).

Red Hat Gluster Storage 3.2 is geared toward cloud-native applications and modern workloads that demand a level of agility and scale that traditional storage appliances cannot offer. Read on to find out how this release delivers performance and cost improvements along with a seamless user experience.

Deeper integration with OpenShift Container Platform

Container-native storage for OpenShift Container Platform enables developers to provision and manage storage as easily as they can applications. In addition, improvements in small-file performance in Red Hat Gluster Storage will help container registry workloads – the heart of the container platform itself – to persist Docker images for containerized applications.

In addition to small-file performance, a number of scale improvements to the management plane allow a larger number of persistent volumes (OpenShift PVs) to be hosted in a single cluster. The container for Red Hat Gluster Storage has also been configured to run the sshd service to support async geo-replication between remote sites, and it has been instrumented to hold the SSL certificates needed for in-flight encryption, which container-native storage will leverage in upcoming releases.

Storage administrators will also find that better small-file performance improves day-to-day operations, with tasks such as directory listings up to 8x faster. The secret sauce behind this improvement, client-side caching, benefits both Linux (FUSE) and Windows (SMB) users.

The improvement in small-file performance will be particularly relevant to use cases with a large number of files under a few MB in size, such as hosting Git repositories and electronic design automation.

Lower hardware footprint and costs

Most companies look to 3-way replication for enterprise-grade data integrity. What if we told you that you could benefit from the same grade of data integrity while lowering your hardware footprint and cost?

This is possible in Red Hat Gluster Storage 3.2 through a new type of volume known as an arbiter volume that does just that – arbitrates between two nodes that may be out of sync. This is particularly useful for remote office-branch office (ROBO) scenarios where remote locations are interested in saving space, power, and cost through hyperconvergence.

Additionally, faster self-healing of sharded storage volumes will benefit customers running hyperconverged configurations, such as Red Hat Virtualization with Red Hat Gluster Storage.

Performance enhancements

Multi-threaded applications can perform faster by parallelizing I/O requests when the client-io-threads option is enabled. The feature activates automatically under heavy load and dials back when the volume is idle, and the performance gains grow with the number of application threads.

Our internal test results have shown up to a 250% performance increase on dispersed volumes with client-io-threads enabled, with near-linear gains as the number of threads increases.
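
The effect is analogous to issuing I/O in parallel at the application level. The sketch below illustrates the pattern in plain Python, reading many small files from a temporary local directory with a thread pool; it does not exercise the Gluster client itself.

```python
# Illustrative only: the benefit of parallel client I/O, shown with a
# thread pool reading many small files from a temporary local directory.
# This is plain Python, not the Gluster client itself.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def write_files(directory, count):
    for i in range(count):
        with open(os.path.join(directory, "f%d" % i), "w") as f:
            f.write("payload-%d" % i)

def read_all_parallel(directory, workers=4):
    """Read every file in the directory, several at a time."""
    paths = sorted(os.path.join(directory, n) for n in os.listdir(directory))
    def read_one(path):
        with open(path) as f:
            return f.read()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_one, paths))

with tempfile.TemporaryDirectory() as d:
    write_files(d, 8)
    contents = read_all_parallel(d)
print(len(contents))  # 8
```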

Native eventing and integration with Nagios

Red Hat Gluster Storage 3.2 features a new native push-based notification framework that can send out asynchronous alerts. As with earlier releases, full integration with the open source Nagios framework is supported.

Learn more. Speak to Red Hat experts and customers.

Find out more about the benefits of the new feature-rich Red Hat Gluster Storage at our launch event happening today at 12pm EDT. Sign up for free, and interact directly with our experts who will be standing by to answer your questions.

Hear from customers like Brinker International who are running containers on Red Hat Gluster Storage in production and have some great results to share. Hear also about great things happening in the GlusterFS community, which includes contributors like Facebook. We hope to see you there.

Cloud native app developers delight: Container storage just got a whole lot easier!

The new Red Hat OpenShift Container Platform offers a rich user experience with dynamic provisioning of storage volumes, automation, and much more

 

By Michael Adam, Engineering Lead, Container Native Storage, and Sayan Saha, Head of Product, Red Hat Gluster Storage

 

Today, Red Hat announced general availability of Red Hat OpenShift Container Platform 3.4, which includes key features such as enhanced multi-tenancy and streamlined deployment for hybrid clouds. In addition, a number of open source storage innovations have been included in this release, which enable easier storage management and provisioning across the lifecycle of containers.

The story so far

Containers were built to be ephemeral and stateless. However, stateful applications running in containers need enterprise-grade persistent storage. Over the past 18 months, Red Hat has delivered a continuum of innovation around persistent storage for containers, leading the charge on both fronts – the open source communities and enterprise products. Red Hat offers container-native storage – durable, distributed, software-defined storage integrated deeply into the Red Hat OpenShift Container Platform, managed by Kubernetes.

[Image: cns3-4-1]

Rich developer and management experience

In the latest release, Red Hat OpenShift Container Platform 3.4 offers dynamic provisioning of persistent volumes, allowing for a much richer developer experience and addressing the delays caused by the lengthy storage provisioning cycles of traditional storage platforms.

Storage administrators can expect to find that easier volume management with dynamic provisioning frees them up for more value-added tasks. Developers building cloud-native apps deployed in containers can benefit from faster storage provisioning and a better user experience.

DevOps managers can relish the automation and integration through a new deployment tool included with the subscription that can deploy container-native storage with push-button simplicity.

Dynamic provisioning for persistent volume claims

Prior to this release, storage administrators and application developers were limited to a static provisioning model where persistent volumes (PVs) of fixed capacity had to be pre-provisioned manually to be consumed by applications running in Kubernetes pods.

Persistent volume claims (PVCs) are used to consume storage resources in Kubernetes, much as pods consume compute resources. When a new PVC was received, an attempt was made to match it with the closest available PV in terms of capacity; if one was found, the claim would be bound to it. This scheme is inefficient.

Consider a situation where ten 100 GB PVs have been pre-provisioned and made available. A request for 50 GB of storage would be matched to one of the available 100 GB PVs, which is wasteful because storage is over-committed. On the other hand, a request for 150 GB of storage would go unsatisfied because there is no close match, even though there is plenty of unused capacity.
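
The static scheme described above can be sketched in a few lines, assuming a simple “smallest PV that fits” matcher (the real Kubernetes binder logic is more involved; this is purely illustrative):

```python
# Sketch of static PV matching: bind a claim to the smallest
# pre-provisioned PV that is large enough, or fail outright.
def bind_claim(claim_gb, available_pv_sizes_gb):
    candidates = [pv for pv in available_pv_sizes_gb if pv >= claim_gb]
    return min(candidates) if candidates else None

pvs = [100] * 10              # ten pre-provisioned 100 GB PVs
print(bind_claim(50, pvs))    # 100 -> half of the bound PV is wasted
print(bind_claim(150, pvs))   # None -> unsatisfied despite 1000 GB free
```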

The new dynamic provisioning feature fixes this by automating the provisioning of storage volumes. For instance, a 50 GB PVC request is satisfied by a 50 GB PV that is dynamically provisioned, with zero admin intervention required. In other words, users can expect to get exactly what they asked for, as long as the underlying storage platform has available capacity.

Note that dynamic provisioning is supported even when Red Hat Gluster Storage serves out storage from a dedicated storage cluster in addition to container-native storage. This demo shows how container-native storage can be dynamically provisioned in OpenShift Container Platform.

[Image: cns3-4-2]

Dynamic provisioning using storage classes

Dynamic provisioning is enabled by a new feature in OpenShift called storage classes. Storage classes let storage admins describe and classify the various storage implementations available to the OpenShift cluster, and they let developers configure specific parameters when requesting storage on demand. Container-native storage can be configured as a storage class, allowing OpenShift developers to dynamically provision storage by submitting claims against that class, as seen below.
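
As a sketch of what such a definition might look like in this era of OpenShift (the class name, heketi REST URL, and secret names are placeholders, and field names should be checked against the release documentation):

```yaml
# Hypothetical storage class for container-native storage; the
# provisioner endpoint and secrets below are placeholders.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: container-native-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
---
# A claim that dynamically provisions a 50 GB volume against that class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  annotations:
    volume.beta.kubernetes.io/storage-class: container-native-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```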

[Image: cns3-4-3]

Faster and easier storage deployments using Kubernetes daemon sets

Container-native storage now ships with a deployment tool that deploys the whole system into an existing OpenShift cluster. The tool is flexible in that it can easily be used in Ansible playbooks. The administrator only needs to prepare a topology file, a JSON-formatted file describing the nodes and storage devices to be used. Based on that, the Gluster storage cluster and the management server are deployed as pods in the OpenShift cluster with a single command.

Once deployment is complete, the Gluster storage is ready for both manual and dynamic provisioning with an appropriate storage class. If errors are encountered during deployment, the tool supports an abort operation that undoes the failed partial deployment so that it can be started from scratch. This demo shows the deployment tool in action.
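
A minimal topology file might look like the following sketch (hostnames, addresses, and device paths are placeholders, and the exact schema should be checked against the shipped tool’s documentation):

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node1.example.com"],
              "storage": ["192.0.2.11"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb", "/dev/sdc"]
        }
      ]
    }
  ]
}
```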

[Image: cns3-4-4]

GID level security and endpoints

Several features have been added to Red Hat OpenShift Container Platform 3.4 to create a more secure storage environment. The first of these is the addition of system-controlled, preallocated GIDs for the Red Hat Gluster Storage container. This enables the container to run as a non-root user, permitting only allowed users to access the data.

Second, a usability issue with endpoints has been resolved by deploying a service and endpoint for each dynamically provisioned volume. This allows PVs to be specific to the requestor's namespace without the added steps of manually creating these resources.

The most comprehensive persistent storage for containers

Red Hat continues to be a major contributor to the Docker and Kubernetes communities. In fact, as of today, Red Hat has the second-most contributors in each, second only to Docker and Google, respectively. Much of the innovation happening upstream is focused on solving the persistent storage challenge for stateful applications. Red Hat has contributed a number of volume plugins for a variety of protocols. Learn more about the latest innovations from Red Hat during the virtual event on January 19 or in a webinar with container storage experts on January 24. Learn more at redhat.com/containerstorage.

A happy new year with Red Hat Ceph Storage!

By Daniel Gilfix, Red Hat Storage

Now that a somewhat tumultuous 2016 is in our rearview mirror, what better way to kick off the new year than with a couple of major endorsements for one of the key emerging businesses of the world’s leading provider of open source solutions? That’s right: Last week alone, over a scant 48 hours, Red Hat watched as Red Hat Ceph Storage 2 earned distinction from CRN as one of the 10 coolest open source products of 2016 and from TechTarget as one of its 12 finalists for 2016 product of the year in server-based storage.

Commitment upstream and down

The dual recognition is a testament not only to the open source community but also to Red Hat and its valued customers, spanning industries like telco, financial services, retail, and the public sector, for advancing software-defined storage, which has become increasingly indispensable for workloads like cloud infrastructure, data lakes, backup and recovery, media repositories, surveillance, and containers. Some of these customers have the necessary in-house skills, but many others rely on the expertise of Red Hat Storage consulting and training for the best practices in architectural design, solution implementation, and knowledge transfer typically required when graduating from upstream to downstream deployment.

Always breaking new ground

Red Hat Ceph Storage 2 was announced at Red Hat Summit last May and began shipping in August. The first major release since Red Hat’s acquisition of Inktank in 2014, it marked the introduction of a user-friendly interface and an integrated lifecycle management system, Red Hat Storage Console 2. Besides the ability to manage and monitor cluster activity in terms of health, performance, and capacity, the Console empowered users to install Ceph in under an hour and to grow clusters graphically using Ansible. Red Hat Ceph Storage 2 also marked a departure from an exclusive “we’re for OpenStack” philosophy to a broader emphasis on object storage capabilities, which are vital for managing vast quantities of unstructured data efficiently with emerging workloads.

Not just for OpenStack anymore

This is not to say that Red Hat Ceph Storage isn’t for OpenStack anymore—far from it. Ceph remains the overwhelmingly preferred storage backend for OpenStack workloads (OpenStack.org user survey, April 2016), and Red Hat Ceph Storage continues to tighten its integration with Red Hat OpenStack Platform. This is evident in Red Hat OpenStack Platform Director’s ability to automate upgrades from Red Hat Ceph Storage 1.3 to 2, manage object and block deployment, and leverage OpenStack’s shared filesystem service (Manila) with the CephFS driver. In effect, Red Hat customers can now fully customize their OpenStack deployment architectures with one unified storage platform. OpenStack Platform customers even receive a complimentary 64 TB of Red Hat Ceph Storage for proofs of concept.

Something more for everyone

But the focus of Red Hat Ceph Storage 2 that caught the attention of CRN and TechTarget was the transformation of Red Hat’s storage product into an easier-to-use, more versatile product for everyone. The new management interface expanded user appeal for those without advanced Linux expertise or prior familiarity with Ceph. Features like support for Active Directory and LDAP authentication, integration with the S3 protocol of Amazon Web Services, and disaster-recovery options that include remote workloads all added to the product’s strength as an object storage platform. Red Hat Ceph Storage 2 now handles petabyte-scale deployments with the flexibility required by next-generation software-defined datacenters and the ease and enhanced cost efficiency required by today’s businesses.

Onward to new accomplishments

So, as we embark on new adventures and roller-coaster rides in 2017, let us all take note that software-defined storage on industry-standard hardware has now matured to be a platform of stability for the general populace needing to store enterprise data. Red Hat Ceph Storage has garnered well-deserved recognition for its advancement as a leader of the pack. Stay tuned for things to come this year.

Survey says… Complexity of persistent storage is as big as cost and scale combined

TechValidate recently ran a survey, commissioned by Red Hat, of more than 300 Red Hat customers in various stages of the container adoption journey. The results were consistent with our understanding of the container space. In fact, the pain points we’re looking to solve with the Red Hat container portfolio are much bigger roadblocks than we had previously imagined. You can find the unfiltered results published here.

You can’t contain this

While the constructs that make up containers (e.g., namespaces, cgroups) have been around for more than a decade, containers moved into the mainstream about 2-3 years ago. If you compare where similar seismic shifts, such as big data and cloud, were at the same point in time, you’ll appreciate how rapidly containers have become pervasive and how sticky they are. The first result that surprised us was that almost three out of four people surveyed were interested in or actively working with containers. What wasn’t surprising was that security, scalability, and portability were cited as major bottlenecks preventing a smooth transition to containers from previous deployment platforms.

Complexity of persistent storage is a major roadblock

When asked about the biggest pain point associated with persistent storage, complexity outranked everything else by far: the number of responses listing complexity as a pain point nearly equals cost and scale combined. This is a pertinent result, as we find enterprises with deep storage knowledge and years of success running applications on bare metal and in virtual machines suddenly struggling with persistent storage for containers. That’s why a key focus for us at Red Hat is to make it easy to set up, provision, and manage persistent storage, not just for administrators but for developers as well.

When asked about the type of storage most customers use for stateful applications deployed in containers, the answer mainly revolved around reusing existing storage appliances and software defined storage, or a combination of both. This is consistent with the anecdotal feedback we’ve received from customers around using their existing storage investments for some container workloads and complementing that with software-defined storage such as Red Hat Gluster Storage for workloads that demand more elasticity at a lower price.

United we stand, divided we fall

Red Hat has focused on Kubernetes as the container orchestration framework, given its maturity, user experience, and community. About half the respondents agreed that a single control plane for both applications and storage is optimal. In addition, a single point of support for both storage and the container host, along with an open source approach, were listed as key attributes of a vendor customers would like to partner with as a trusted advisor.

Find out more

Join us for a virtual event on January 19 that will take a deep dive into recent innovations that help customers address a number of pain points and navigate the journey to containers successfully.

In addition, Red Hat recently collaborated with Bain & Company on a survey of more than 400 executives and IT leaders on the transition to containers. You can find the results here.

Red Hat Ceph Storage 2.1 – Rolling down the tracks of Jewel

Three months after the release of Red Hat Ceph Storage 2, we’re proud to announce the first update to our Ceph Jewel-based product with Red Hat Ceph Storage 2.1.

For the 2.1 release, as with version 1.3.3, we’re continuing a “train model” of software development. By fixing the release date and deferring features that aren’t ready to a later release, we can deliver the latest versions of stable, upstream Ceph code to Red Hat customers on a faster, more predictable schedule, thereby empowering customers to plan their upgrades with greater confidence.

What’s in it?

Red Hat Ceph Storage 2.1 is based on Ceph Jewel v10.2.3. Two new features may be of particular interest to service providers using Ceph as an object store:

  • Static web sites: Allows users to host static content, such as HTML or media objects, in RGW objects and have them served as fully functional web sites.
  • S3 payer-request API: Allows users hosting content to have content requesters pay for the network usage fees associated with delivery. This is well-suited for hosting large binary content without being penalized as the originator of the content.
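
A client opting into a payer-request download does so per request. The stdlib sketch below only assembles such a request with the `x-amz-request-payer` header; the endpoint, bucket, and key are placeholders, and the request signing a real S3 client would add is omitted.

```python
# Sketch: a requester opting to pay for delivery adds the
# x-amz-request-payer header to the download request. Endpoint, bucket,
# and key are placeholders, and S3 request signing is omitted.
import urllib.request

def requester_pays_get(endpoint, bucket, key):
    """Build an object GET in which the requester accepts the usage fees."""
    return urllib.request.Request(
        "%s/%s/%s" % (endpoint, bucket, key),
        headers={"x-amz-request-payer": "requester"},
    )

req = requester_pays_get("https://rgw.example.com", "media", "video.mp4")
print(req.full_url)
```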

In addition, users who need to keep a large number of objects in one bucket will appreciate indexless buckets. As the name implies, the Ceph RADOS Gateway (RGW) can be configured not to maintain an index, which can significantly increase performance by skipping metadata operations during writes. This new feature is well-suited for situations where applications maintain their own index (as is the case with many web and big data apps). We aim to deliver more enhancements around optimizing large-bucket use cases in future releases.

iSCSI in tech preview

Finally, we’re introducing an initial set of iSCSI functionality for use with Ceph’s RADOS Block Device (RBD) as part of the Red Hat tech preview program. By using a Red Hat Enterprise Linux 7.3 host as an iSCSI target, users can map LUNs to kernel RBD images with an Ansible-driven workflow that supports multi-pathing. As we evolve this feature toward general availability, we welcome feedback from users running VMware ESX, RHV, or Microsoft Windows as iSCSI initiators. Customers are encouraged to discuss potential use cases with their account managers.

Lifecycle

The Red Hat Ceph Storage 2 stream is supported until July 2019, and the next minor update is targeted for Spring 2017.

Five lessons you learned as a child that apply to your enterprise storage strategy

By Bianca Owens, Red Hat Storage

Always share

Open, software-defined storage is only possible through the collaboration between hundreds, even thousands, of contributors worldwide who join forces to tackle some of the trickiest challenges around enterprise storage. Innovation isn’t limited to a small group of product managers and engineers but is instead a continuum of ideas from enthusiasts and practitioners, which is then hardened for enterprise consumption and backed by Red Hat’s world-class support.

Reduce, reuse, recycle

IT hardware has feelings, too. When your shiny new servers arrive in the datacenter, somewhere there is a tear shed by your trusty, and perhaps rusty, servers. Thanks to software-defined storage, they can now enjoy a new life as storage servers. Red Hat Storage software has smarts built into it to allow non-disruptive additions to the hardware cluster, allowing storage workloads to be deployed on newly added servers (or servers with newly added components) almost immediately.

Eat your veggies so you grow big and strong

Using a software-defined storage solution will help you scale easily as your storage needs evolve. Just as we all grow in steady, small increments, allowing for our bodies to adjust and balance as we gain height, Red Hat Storage enables administrators to scale with extremely granular control so they can build balanced storage systems that are not awkwardly skewed in favor of compute, network, memory, or capacity.

Don’t be afraid to make new friends

By definition, software-defined storage is independent of the hardware on which it runs. This translates into greater flexibility and choice for customers, who can use any industry-standard hardware and components. Red Hat Storage has many friends who make durable, cost-effective hardware and works closely with each of them to create reference architectures and product bundles that give customers a range of solutions along the build vs. buy spectrum. Read our lips: No vendor lock-in.

Red Hat is also friends with public cloud providers. For instance, Red Hat Gluster Storage is available on the Amazon, Google, and Microsoft public clouds. What’s more, the same bits are deployed on premises, in containers and virtual machines, and in public clouds, which means fewer application rewrites and less retooling as applications transition across boundaries.

Money doesn’t grow on trees

Customers and analysts agree that software-defined storage can significantly reduce storage costs. In a recent whitepaper on the economics of software-defined storage, IDC found that “over a five-year period, procuring server hardware with internal disks and deploying a software-based storage solution such as Red Hat Gluster Storage and Red Hat Ceph Storage can save businesses over 60% and 46%, respectively, compared with a competitive NAS solution.”

Comparable features and performance, at about half the price. What’s not to like? When it comes to storage, it’s never been better to be a kid in a candy store. Get started with your goody bag here.

IDC: The economics of software-defined storage

Red Hat Storage received a resounding endorsement from IDC in the recently published analyst opinion whitepaper on the economics of software-defined storage. Over the past decade, one of the change drivers motivating companies to move off traditional storage appliances to software-defined storage has been rising costs, given the mounting pressure to retain and process more data than ever before.

In this paper, IDC concludes that “over a five-year period, procuring server hardware with internal disks and deploying a software-based storage solution such as Red Hat Gluster Storage and Red Hat Ceph Storage can save businesses over 39% and 53%, respectively, compared with a competitive NAS solution.”

But wait—there’s more…

The savings numbers by themselves are compelling enough for most CIOs considering the transition from monolithic, proprietary storage appliances. However, there are a number of additional savings that can make the decision a no-brainer.

  • Businesses can leverage the latest innovation in servers, spinning disks, memory, flash, external disk systems, and other components to continuously evolve their storage systems, rather than being tied to the plodding innovation cycle of their storage vendors. As hardware prices decline, companies can purchase hardware at lower prices over time rather than being forced to make a large investment at the outset.
  • Customers can scale storage infrastructure built on practices and standards adopted by the largest cloud service providers, bringing greater efficiencies and helping to convert capital expenditure (CapEx) to operating expenditure (OpEx).
  • Datacenters that have undergone recent hardware refreshes can reuse older servers and hardware as storage servers, thus reducing cost and improving utilization.
  • Capacity planning is a breeze, because enterprises can purchase exactly as much as they need for ready-to-go projects rather than over-provisioning to allow for future growth. With software-defined storage, expensive and cumbersome migrations are a thing of the past, because hardware and software updates happen on an incremental, more manageable basis rather than as discrete forklift upgrades.

“Open” is more than a modifier

One of the topics clearly outlined in the IDC paper is the preeminence of “open” and “open source” in almost every aspect of the datacenter. The open source experiment that started more than 20 years ago has now become the default option for many enterprises. In the words of IDC: “Today, Linux not only is used to run applications but also powers many hardware-based storage platforms—a fact that has not gone unnoticed by many.”

Storage built with open standards, using open source storage controller software, offers customers the best of both worlds: the latest innovations, driven by thousands of practitioners across the world, and unmatched cost efficiencies for enterprise-grade solutions on par with any incumbent storage technology.

Open source has cost implications beyond product and support. For instance, because Linux skills are ubiquitous in most datacenters, training a storage administrator for Red Hat Ceph Storage or Red Hat Gluster Storage is a much smaller skills investment than what's required for a dedicated storage administrator to master a proprietary storage appliance.

“Software-defined” defined

Not surprisingly, many traditional storage appliance vendors have tried to rebrand themselves as “software-defined” by adding superficial add-ons, but when you look under the covers, these offerings leave much to be desired in terms of flexibility and choice.

The IDC paper cites a few building blocks essential to any software-defined solution that can serve as a litmus test for customers looking for a truly software-defined solution:

  • Standalone or autonomous storage controller software for storage access services, data persistence, networking functions, and interconnects that makes no assumptions about the underlying hardware components or about any underlying resilience or redundancy schemes such as RAID
  • A system that supports rolling hardware and software upgrades, as well as the ability to run mixed hardware configurations
  • Platforms that do not contain proprietary hardware components like custom-designed ASICs, accelerator cards, chipsets, memory components, or CPUs
  • A shared-nothing architecture in which data is shared and distributed (commonly found in scale-out environments) and that allows nodes to function independently, in contrast to scale-up systems with proprietary interconnects to share hardware resources

The breakdown

The IDC whitepaper compares the acquisition and maintenance cost of 300TB of Red Hat Gluster Storage and 500TB of Red Hat Ceph Storage (on Supermicro hardware) with a competitive NAS storage system over 3- and 5-year horizons. The results speak for themselves.

[IDC TCO comparison charts: Red Hat Gluster Storage and Red Hat Ceph Storage vs. a competitive NAS solution]

It’s more than just storage

While the IDC whitepaper focuses squarely on the economics of software-defined storage, the author is quick to point out that Red Hat Storage can bring added value to the organization through “storage and data management efficiency, increase in application performance, reduction in storage infrastructure costs, and increase in IT productivity.”

You can dive into the IDC whitepaper and get started right away with your Red Hat Gluster Storage or Red Hat Ceph Storage test drive today.

Jack of all trades: New Cisco UCS S-Series and Red Hat Storage

Today, Cisco announced its new UCS S-Series storage-optimized server with the introduction of the UCS S3260, marking its entry into the emerging server market for data-intensive workloads.

Red Hat and Cisco have worked together for a long time, including our collaboration on Red Hat OpenStack Platform.

Out with the old…

By jumping into the high-density, storage-optimized server market, Cisco validates what we see as the continued movement to emerging software-defined, scale-out architectures for solutions like OpenStack, container-native storage, and hyperconverged infrastructure.

With the ability to spread data across multiple servers, both Red Hat Ceph Storage and Red Hat Gluster Storage are helping to drive this trend. Open, software-defined storage enables enterprises to build an elastic cloud infrastructure for newer, data-intensive workloads.

At its core, Ceph delivers unified storage over a distributed object store (RADOS), exposing block, object, and file interfaces, while Gluster provides an elastic, scale-out NAS file storage system.

As more organizations move from appliances and traditional SAN arrays to open source SDS, they often lack the recipes for a best-practice deployment. Red Hat has worked with Cisco to produce reference design architectures that take the guesswork out of configuring throughput-optimized, cost/capacity-optimized, and emerging high-IOPS clusters, including whitepapers for both Red Hat Ceph Storage and Red Hat Gluster Storage with Cisco's previous generation of the S-Series, the C3160 high-density rack server.

Open source drives storage innovation

Both Ceph and Gluster use community-powered innovation to accelerate their core feature sets faster than what is possible via a single proprietary vendor. Red Hat is a top contributor to both Ceph and Gluster upstream development, but several hardware, software and cloud service providers, including eBay, Yahoo!, CERN (Ceph) and Facebook (Gluster), all contribute to the code base. Cisco itself is a top-50 contributor to Ceph in terms of code commits.

Versatility

The Cisco UCS S-Series builds on the x86 storage-optimized server trend but shuffles the deck with more of an enterprise spin, via features such as dual-node servers, quadruple fans and power supplies, and connectivity to Cisco UCS Fabric Interconnects.

One aspect of the new UCS S-Series design we are excited about is “versatility”. UCS offers a common, consistent architecture for a variety of IT needs that we expect may enable it to become a standard hardware building block for enterprise environments. The S-Series includes features such as a modular chassis design that facilitates upgrades to new Intel chipsets, and a disk expander module that lets a server node be swapped out for an additional four drives (increasing raw capacity from 560 to 600 TB).

Cisco has also integrated networking fabric into its storage-optimized servers, making it easier to extend your interconnect as your cluster scales out. The S3260 offers dual 40GbE ports for each server node. As one moves to denser servers (with more than 24 drives) in Ceph configurations, the need for 40Gb Ethernet grows. Enterprises can benefit from a tightly integrated fabric interconnect that translates to lower latency, which is important for applications like video streaming.

A key piece is the UCS Manager configuration tool, which can simplify deployment. UCS Manager enables the creation of an initial configuration profile covering storage, network, compute, and more for the S3260, helping customers more easily grow their Ceph environments by pushing the profile out to additional S3260s as they expand.

Combined with Red Hat Storage's ability to handle block, object, and file access, and its flexibility across throughput-optimized, cost/capacity, and high-IOPS workloads, Cisco's UCS S-Series may be not just a jack of all trades but also a master of many.

Stay tuned for more upcoming joint solution papers from the Cisco UCS S3260 and Red Hat Ceph Storage teams. In the interim, learn more about the UCS S-Series at cisco.com/go/storage.

Red Hat named a “visionary” in Gartner’s Magic Quadrant for Distributed File Systems and Object Storage

Today we announced that Gartner has named Red Hat Storage as a Visionary in its first ever Magic Quadrant for Distributed File Systems and Object Storage. This is a great honor and solid recognition by a leading IT analyst of Red Hat’s vision and prominence in the market with Red Hat Gluster Storage and Red Hat Ceph Storage.

[Gartner Magic Quadrant for Distributed File Systems and Object Storage graphic]

Gartner’s Magic Quadrants are based on rigorous analysis of a vendor’s completeness of vision and ability to execute. We are proud to be ranked highest in both ability to execute and completeness of vision among the Visionaries Gartner named.

We believe this placement by Gartner reinforces our commitment to offer a unified, open, software-defined storage portfolio that delivers a range of data services for next-generation workloads, helping customers accelerate the transition to modern IT infrastructure.

Access the Gartner Magic Quadrant for Distributed File Systems and Object Storage report here.

This graphic was published by Gartner, Inc., as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Red Hat.

Gartner does not endorse any vendor, product, or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Software-defined storage explained on the back of a napkin

[Back-of-the-napkin sketch of software-defined storage]

The demand for storage is headed only in one direction, and that’s up. The cost of enterprise storage is a mounting concern for CIOs as there is added pressure to retain more data due to factors such as regulatory compliance and big data analytics.

Traditional, monolithic storage appliances are built on a scale-up model that is rigid and expensive: the only way to grow capacity is to throw more feeds and speeds at the appliance. Distributed, software-defined storage, by contrast, is built on a scale-out model that lends itself naturally to seamless scale and agility. Adding capacity is as simple as adding more industry-standard servers to the storage cluster.

The price-performance ceiling of traditional storage appliances

When we compare the price-performance characteristics of traditional scale-up storage appliances with software-defined storage such as Red Hat Gluster Storage, we see that as capacity grows, storage appliances hit a performance plateau while software-defined storage scales linearly. At the same time, the cost of a monolithic appliance climbs steeply as it approaches that plateau.
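That plateau-versus-linear contrast can be sketched numerically. The model below is a deliberately simplified illustration; the per-node, per-shelf, and controller throughput numbers are assumptions, not benchmark results:

```python
def scale_out_throughput(nodes: int, per_node_mbps: int = 500) -> int:
    """Scale-out SDS: aggregate throughput grows with every server added."""
    return nodes * per_node_mbps

def scale_up_throughput(shelves: int, per_shelf_mbps: int = 500,
                        controller_limit_mbps: int = 2000) -> int:
    """Scale-up appliance: extra disk shelves help only until the
    fixed controller pair becomes the bottleneck."""
    return min(shelves * per_shelf_mbps, controller_limit_mbps)

for units in (1, 4, 8):
    print(units, scale_out_throughput(units), scale_up_throughput(units))
# The appliance flattens at its 2000 MB/s controller ceiling,
# while the scale-out cluster keeps climbing linearly.
```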

It’s important to remember that traditional storage appliances were built in the era before today’s diverse workloads. For that reason, Red Hat works with the top hardware vendors to build reference architectures that are optimized for performance, throughput, or capacity depending on the workload you’re running.

Unprecedented flexibility and choice

With Red Hat Gluster Storage, you run the same software bits whether the storage is deployed on premises, on virtual machines, in the cloud, or even in containers. Red Hat Gluster Storage offers advanced storage features such as tiering, bit-rot detection, geo-replication, and erasure coding, to name a few. When considering the 3-year TCO, including hardware and support, you get comparable features and performance for about half the price of a storage appliance.

If you feel locked in by your proprietary storage vendor, perhaps it’s time to give open, software-defined storage a try. Take Red Hat Gluster Storage for a test drive on AWS today.