Like all good open source, making Ceph takes good friends

Software is a lot like living in a neighborhood. Once you've moved in, you can try to develop the place all on your own, without the guidance and contributions of the knowing, experienced people around you, but you won't get the best outcome. In the best neighborhoods, neighbors welcome the contributions of those around them. Everyone stays open to the thoughts and help of the people who share their goal: to make what they're developing, their neighborhood, the very best it can be. For the greater good.

Open source is powerful

Relying on others’ contributions for the greater good is, of course, the very essence of open source software: harnessing any number of experienced, creative, talented individuals and companies to make software so very much greater than it could be with just one contributor. The vibrancy and effectiveness of Ceph, the top block storage driver choice of OpenStack® users, is testament to the undeniable, transformative power of the open source way.

We’re powerful because of open source

We recently released Red Hat Ceph Storage 1.3, but given its longstanding open source foundation, we need to share the credit: a number of our strategic partners contributed to Hammer, the latest upstream Ceph release on which our product is based.

Take SanDisk, for example. They've been actively working on a number of Ceph Hammer performance and stability improvements, systematically tackling performance bottlenecks exposed by all-flash-array configurations. Want some performance numbers? Click here.

Intel is another active Ceph community participant that regularly contributes to the Ceph code base. Intel has collaborated with us on a combined solution, optimizing and testing Ceph with Intel technologies, and has made a number of significant contributions to Hammer that we will all benefit from, including:

  • More efficient messaging and encoding for replicated writes
  • Optional SSD discard for FileStore journal

You can learn more about our joint work with Intel here.

And then there's Yahoo, which has contributed significantly to the Ceph Object Gateway, specifically addressing bucket hotspots at petabyte scale. Yahoo is in the process of deploying its Ceph-based, multi-petabyte, software-defined storage solution, Cloud Object Store (COS), initially hosting picture thumbnails for its Flickr service. Ceph provides the flexibility Yahoo needs and, critically, supports erasure coding, which saves a great deal of physical storage compared to replication (an 8+3 erasure-coded pool, for example, carries roughly 1.4x raw-capacity overhead, versus 3x for triple replication). To learn more about Ceph and Yahoo, click here.

Of course, these are just a few of the many contributions, corporate and individual, offered by the Ceph community, and they all deliver on Ceph's promise to be "The Future of Storage." (For a more complete list of contributors, click here.) Great software takes good friends, just like a neighborhood, and Ceph is one of the greatest software projects around thanks to our friends. The Ceph community is vibrant and still growing, and we look forward to what tomorrow will bring.

Welcome Aboard, CohortFS Team

Today we are pleased to share some exciting news with you: a group of engineers from CohortFS, a Michigan-based startup working on distributed storage systems, will be joining our growing Red Hat Storage team.

The team from CohortFS, based in Ann Arbor, has several years of experience working with the Ceph project. Having contributed performance optimizations and key improvements to the object gateway, along with leading-edge work on Ceph with NFS-Ganesha, this team is already well-known within the Ceph community.

As part of Red Hat, they bring technical expertise and a wealth of knowledge in high-performance, low-latency distributed systems that will help to expand and improve our software-defined storage portfolio.

To our newest associates, welcome!

Red Hat Summit 2015: The Storage Recap

That’s a wrap! We finally brought Red Hat Summit 2015 to a close. Were you there? Or were you watching at home like we said you should? Either way, with so much going on you were bound to miss something, so read on to get the full scoop.

Our news

Conferences are excellent places to meet others in your industry, to learn the latest from the greatest, and to find out what will affect your industry in the coming year. Red Hat Summit is no different!

On the news front, we made a number of important announcements last week. To find out what you missed – or get a more detailed look – read our blog posts:

Optimizing storage cluster configurations for multiple workloads – the art of the science

Supermicro and Red Hat have developed a set of validated storage configurations by workload type, from a few hundred terabytes to multiple petabytes, to simplify the evaluation and buying cycle.

Red Hat Storage announces a collaboration with Supermicro and Vantrix at Red Hat Summit 2015

Red Hat Storage has expanded its ecosystem of partners to include Supermicro and Vantrix. This relationship enables Vantrix to offer high-performance data storage to enterprises deploying rich media and archival workloads on Red Hat Gluster Storage in the Vantrix Media Platform, and Supermicro is now offering Red Hat Ceph Storage as part of its software-defined storage and server portfolio.

Announcing Red Hat Gluster Storage 3.1

Red Hat Gluster Storage offers a host of new features in this release including erasure coding, tiering, bit-rot detection, and much more. Click to get all the details and learn about an opportunity for a test drive.

Announcing Red Hat Ceph Storage 1.3

We also brought updates to Red Hat Ceph Storage. Version 1.3 brings improved self-management for large clusters, improved performance, and greater efficiency.

Cisco and Red Hat collaborate on petabyte scale storage for OpenStack and Big Data

We’ve teamed with Cisco to build ultra-dense, high-throughput solutions to store very large amounts of unstructured data – deployed with minimal rack space – using Red Hat Gluster Storage and Red Hat Ceph Storage on Cisco’s Unified Computing System.

Red Hat drives deeper integration of persistent storage for containerized environments

The Red Hat Storage and OpenShift communities came together to deliver one of the first platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) offerings for automatic orchestration of remote, persistent storage for containerized application services across a large cluster topology. Click here for more details.

Our industry and communities in eleven slides

Ranga Rangachari, vice president and general manager of Red Hat Storage, led a breakfast session during the Summit, giving attendees a sense of our industry, communities, partners, and solutions in a short slide deck. Even if you couldn't make it to breakfast, you can still see what was shared that morning. Check it out:

The Summit in Pictures

Our team was on site to document the goings-on. Here are just a few of our favorite updates, but if you want the full scoop, be sure to check out the Summit hashtag stream on Twitter.

See you next year

Boston was great. Fair weather, great people, beautiful architecture. But nothing beats the city by the bay. Red Hat Summit 2016 will be returning to San Francisco! Remember to adjust your workouts for hills, bring your appetites, and learn to layer your clothing … but expect a warm welcome, as usual.

New study from Red Hat, Supermicro, and Mellanox shows major Red Hat Storage performance increases with Mellanox networking

According to a study just completed by Mellanox Technologies, Ltd., Supermicro, and Red Hat, Red Hat Ceph Storage and Red Hat Gluster Storage deliver higher storage server performance when used with Mellanox solutions for 25, 40, 50, 56, and 100 Gigabit Ethernet networking and RDMA technology. They can also lower the cost of deploying rack-scale storage for cloud and enterprise by squeezing more performance out of dense servers. Dense storage servers (>18 hard drives) and all-flash configurations can drive more throughput than standard 10GbE bandwidth can accommodate. Mellanox high-speed networking technologies allow these dense and all-flash servers to achieve higher throughput performance. In addition, for latency-sensitive workloads, Mellanox networking technologies can significantly reduce IO latencies.
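
For a rough sense of why a dense server outgrows a single 10GbE link, here is a back-of-envelope sketch in Python. The figure of roughly 100 MB/s of sequential throughput per hard drive is an assumption for illustration, not a number from the study:

  # Back-of-envelope: aggregate drive bandwidth vs. a 10GbE network link.
  PER_DRIVE_MB_S = 100   # assumed sequential throughput per hard drive (ballpark)
  DRIVES = 24            # a "dense" server, i.e. more than 18 drives

  aggregate_gbit_s = PER_DRIVE_MB_S * DRIVES * 8 / 1000.0   # MB/s -> Gbit/s
  print("Aggregate drive throughput: ~%.0f Gbit/s" % aggregate_gbit_s)  # ~19 Gbit/s
  print("10GbE link capacity:         10 Gbit/s")
  # Roughly 19 Gbit/s of raw drive bandwidth against a 10 Gbit/s link, so the
  # network, not the drives, becomes the bottleneck without faster networking.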

Mellanox, Red Hat, Seagate, and Supermicro are also running an ongoing Red Hat Ceph Storage benchmark project to demonstrate performance with various combinations of flash and hard drives. Tests in the first phase demonstrated that, when using 40GbE instead of 10GbE networks:

  • Per-server large sequential read throughput performance increased up to 100% for Red Hat Ceph Storage servers with 72 drives
  • Per-server large read throughput performance increased up to 20% for Red Hat Ceph Storage servers with 36 drives
  • Read and write latency dropped up to 50%

Optimizing storage cluster configurations for multiple workloads – the art of the science

Supermicro & Red Hat collaborate on easier storage procurement and deployment for customers

Most IT infrastructure buyers look to optimize their purchases around performance, availability, and cost. Storage buyers are no different. However, optimizing storage infrastructure across a number of workloads can become challenging, as each workload might have unique requirements. Red Hat Ceph Storage customers frequently request simple, recommended cluster configurations for different workload types. The most common requests are for throughput-optimized (such as KVM image repositories) and capacity-optimized workloads (like digital media object archives), but IOPS-optimized workloads (such as MySQL on OpenStack) are also emerging rapidly.

Simplified procurement for storage solutions

To simplify the evaluation and buying cycle, Red Hat teamed with server vendor Supermicro to build a set of validated storage configurations by workload type for a range of storage clusters, from a few hundred terabytes to multiple petabytes. This makes Supermicro the first vendor to publish comprehensive results of Red Hat Ceph Storage performance across its server line.

Red Hat Ceph Storage combined with Supermicro servers, Seagate HDDs, Intel SSDs, and Mellanox or Supermicro networking represents an exciting technology combination that affords:

  • Choice of throughput or cost/capacity-optimized configurations
  • Range of cluster sizes, from hundreds of terabytes to multiple petabytes
  • Repeatable configurations that have been tested and verified in Supermicro laboratories

To further simplify the procurement process, Supermicro has created server- and rack-level SKUs optimized for Red Hat Ceph Storage, according to the throughput- and capacity-optimized workload categories described above. These SKUs can be ordered directly from Supermicro. Supermicro also announced today that it will start to ship Red Hat Ceph Storage as part of its cloud storage solutions. The reference architecture is being finalized, so check back with us soon for updates. You can find additional details at supermicro.com/ceph.

In the spirit of software-defined storage, Supermicro solutions with Red Hat Ceph Storage utilize best-of-breed components available in workload-optimized form factors from 1U to 4U. The solutions offer high storage density coupled with up to 96% power efficiency, providing real-world advantages in both procurement and operational costs for deployments of all sizes.

A blueprint for success today and for the future

Organizations need simple, tested cluster configurations for different workload types to improve the time to value of their storage solutions. It is quite common for customers to request optimized configurations that allow them to start small and scale to different cluster sizes. Red Hat's approach is to evaluate, test, and document reference configurations optimized for different workload categories, with a focus on delivering specific, proven configuration advice.

In this regard, Red Hat and Supermicro are working on a new reference architecture that can serve as a guideline to customers looking to deploy Red Hat Ceph Storage for small, medium, and large clusters. The reference architecture whitepaper will include a list of the common criteria used to identify optimal Supermicro cluster configurations for Red Hat Ceph Storage. Watch this space for a link to the reference architecture whitepaper in a future blog post.

One of the key benefits of Red Hat Ceph Storage is the ability to deploy different types of storage pools within the same cluster, each targeted at a different workload (a rough sketch follows the list below):

  • Block storage pools typically use triple replication for data protection on throughput-optimized servers
  • Object storage pools typically use erasure coding for data protection on cost or capacity-optimized servers
  • High-IOPS server pools can also be added to a Red Hat Ceph Storage cluster (lab testing scheduled for summer 2015)
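
As a rough illustration of mixing pool types in one cluster, here is a minimal Python sketch that shells out to the ceph command-line tool to create a triple-replicated pool for block storage alongside an erasure-coded pool for object storage. The pool names, placement-group counts, and 8+3 erasure-code profile are illustrative assumptions, not recommendations from the reference architecture:

  import subprocess

  def ceph(*args):
      """Run a ceph CLI command, raising an exception on a non-zero exit code."""
      subprocess.check_call(["ceph"] + list(args))

  # Triple-replicated pool, e.g. for a throughput-optimized block (RBD) workload.
  ceph("osd", "pool", "create", "rbd-images", "128", "128", "replicated")
  ceph("osd", "pool", "set", "rbd-images", "size", "3")

  # Erasure-coded pool, e.g. for a capacity-optimized object (RGW) workload.
  # The 8 data + 3 coding chunk profile is an illustrative choice, not a recommendation.
  ceph("osd", "erasure-code-profile", "set", "archive-8-3", "k=8", "m=3")
  ceph("osd", "pool", "create", "object-archive", "128", "128", "erasure", "archive-8-3")

In a real cluster, each pool would also be mapped to the appropriate throughput- or capacity-optimized servers with CRUSH rules.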

The reference architecture whitepaper will clearly outline the configuration decisions that could help optimize storage for these workload requirements.

Meet the Experts

If you plan to be at Red Hat Summit this week, please stop by the storage booth in the Infrastructure Pavilion (#306) to speak to our storage experts or to sign up for a free mentored test drive on Red Hat Ceph Storage. We hope to see you there.
