New studies from Red Hat, Supermicro, and Mellanox show major Red Hat Storage performance increases with Mellanox networking

According to a study just completed by Mellanox Technologies, Ltd., Supermicro, and Red Hat, Red Hat Ceph Storage and Red Hat Gluster Storage deliver higher storage server performance when used with Mellanox solutions for 25, 40, 50, 56, and 100 Gigabit Ethernet networking and RDMA technology. They can also lower the cost of deploying rack-scale storage for cloud and enterprise by squeezing more performance out of dense servers. Dense storage servers (>18 hard drives) and all-flash configurations can drive more throughput than standard 10GbE bandwidth can accommodate. Mellanox high-speed networking technologies allow these dense and all-flash servers to achieve higher throughput performance. In addition, for latency-sensitive workloads, Mellanox networking technologies can significantly reduce IO latencies.

Mellanox, Red Hat, Seagate, and Supermicro are also running an ongoing Red Hat Ceph Storage benchmark project to demonstrate performance with various combinations of flash and hard drives. Tests in the first phase demonstrated that, when using 40GbE instead of 10GbE networks:

  • Per-server large sequential read throughput performance increased up to 100% for Red Hat Ceph Storage servers with 72 drives
  • Per-server large read throughput performance increased up to 20% for Red Hat Ceph Storage servers with 36 disks
  • Read and write latency dropped up to 50%

Optimizing storage cluster configurations for multiple workloads – the art of the science

Supermicro & Red Hat collaborate on easier storage procurement and deployment for customers

Most IT infrastructure buyers look to optimize their purchases around performance, availability, and cost. Storage buyers are no different. However, optimizing storage infrastructure across a number of workloads can become challenging, as each workload might have unique requirements. Red Hat Ceph Storage customers frequently request simple, recommended cluster configurations for different workload types. The most common requests are for throughput-optimized workloads (such as KVM image repositories) and capacity-optimized workloads (such as digital media object archives), but IOPS-optimized workloads (such as MySQL on OpenStack) are also emerging rapidly.

Simplified procurement for storage solutions

To simplify the evaluation and buying cycle, Red Hat teamed with server vendor Supermicro to build a set of validated storage configurations by workload type for a range of storage clusters from a few hundred terabytes to multiple petabytes. This makes Supermicro the first vendor to publish comprehensive results of Red Hat Ceph Storage performance across their server line.

Red Hat Ceph Storage combined with Supermicro servers, Seagate HDDs, Intel SSDs, and Mellanox or Supermicro networking represents an exciting technology combination that affords:

  • Choice of throughput or cost/capacity-optimized configurations
  • Range of cluster sizes, from hundreds of terabytes to multiple petabytes
  • Repeatable configurations that have been tested and verified in Supermicro laboratories

To further simplify the procurement process, Supermicro has created server and rack-level SKUs optimized for Red Hat Ceph Storage, according to the throughput and capacity-optimized workload categories described above. These SKUs can be ordered directly from the server vendor. Supermicro also announced today that it will start to ship Red Hat Ceph Storage as part of its cloud storage solutions. The reference architecture is being finalized, so check back with us soon for updates. You can find additional details at supermicro.com/ceph.

In the spirit of software-defined storage, Supermicro solutions with Red Hat Ceph Storage utilize best-of-breed components available in workload-optimized form factors from 1U to 4U. The solutions offer high storage density coupled with up to 96% power efficiency, providing real-world advantages in both procurement and operational costs for deployments of all sizes.

A blueprint for success today and for the future

Organizations need simple and tested cluster configurations for different workload types to improve time to value of their storage solution. It is quite common for customers to request optimized configurations that allow them to start small and scale to different cluster sizes. Red Hat's approach is to evaluate, test, and document reference configurations optimized for different workload categories, with a focus on delivering specific and proven configuration advice.

In this regard, Red Hat and Supermicro are working on a new reference architecture that can serve as a guideline to customers looking to deploy Red Hat Ceph Storage for small, medium, and large clusters. The reference architecture whitepaper will include a list of the common criteria used to identify optimal Supermicro cluster configurations for Red Hat Ceph Storage. Watch this space for a link to the reference architecture whitepaper in a future blog post.

One of the key benefits of Red Hat Ceph Storage is the ability to deploy different types of storage pools within the same cluster, each targeted at a different workload (a brief example follows the list below).

  • Block storage pools typically use triple replication for data protection on throughput-optimized servers
  • Object storage pools typically use erasure coding for data protection on cost or capacity-optimized servers
  • High-IOPS server pools can also be added to a Red Hat Ceph Storage cluster (lab testing scheduled for summer 2015)
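
As a rough sketch of what that looks like with the standard ceph CLI, the commands below create one replicated pool for block workloads and one erasure-coded pool for object workloads. The pool names, placement-group counts, and the 4+2 erasure-code profile are illustrative values, not recommendations from the reference architecture.

    # Replicated pool for block (RBD) workloads: three copies of every object
    ceph osd pool create rbd-pool 1024 1024 replicated
    ceph osd pool set rbd-pool size 3

    # Erasure-coded pool for object workloads: 4 data chunks + 2 coding chunks
    ceph osd erasure-code-profile set archive-profile k=4 m=2
    ceph osd pool create object-pool 1024 1024 erasure archive-profile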

The reference architecture whitepaper will clearly outline the configuration decisions that could help optimize storage for these workload requirements.

Meet the Experts

If you plan to be at Red Hat Summit this week, please stop by the storage booth in the Infrastructure Pavilion (#306) to speak to our storage experts or to sign up for a free mentored test drive on Red Hat Ceph Storage. We hope to see you there.

Red Hat Storage announces a partnership with Supermicro Computer and Vantrix at the Red Hat Summit 2015

Note: The use of the word “partnership” does not imply a legal partnership between Red Hat and any other company.

To further expand and strengthen our thriving partner ecosystem of IT software and hardware leaders, we are teaming with Supermicro Computer, Inc., and Vantrix to deliver comprehensive, proven, and fully supported compute and storage solutions for enterprises managing petabyte-scale data. They join a vibrant ecosystem that also includes partners such as Fujitsu.

Announced at Red Hat Summit 2015, the alliance incorporates Red Hat Gluster Storage into the Vantrix Media Platform solution, allowing Vantrix to offer high-performance data storage to enterprises deploying rich media and archival storage workloads. And while Supermicro has offered Red Hat Gluster Storage as part of its software-defined storage and server product offering since 2013, with the addition of Red Hat Ceph Storage it can now deliver a comprehensive suite of end-to-end, highly scalable infrastructure solutions.

These two partnerships are the latest in a long line of collaborations between Red Hat Storage and storage industry leaders, part of a growing Red Hat partner base that delivers a comprehensive and proven software-defined storage solution set to enterprise customers worldwide.

To learn more, check out our press release here, and be sure to follow the news from Red Hat Summit 2015 at our Twitter feed.

Announcing Red Hat Gluster Storage 3.1

Florida and the Everglades seen from space. (NASA)

The Everglades is a region of Florida that consists of wetlands and swamps. These natural areas are filled with life and possibility, so it makes sense that Everglades was the code name for the new Red Hat Gluster Storage (RHGS) 3.1, announced today and available this summer.

What is it, what’s new

For those not familiar with RHGS, it combines GlusterFS 3.7 with the Red Hat Enterprise Linux platform to create space-efficient, resilient, scalable, high-performance, and cost-effective storage.

This release brings:

  • Erasure Coding
  • Tiering
  • Clustered NFS with NFS-Ganesha
  • Bit-rot detection
  • And more!

RHGS 3.1 also includes enhancements to address the data protection and storage management challenges faced by users of unstructured and big data storage.

Getting into the details

These are just a handful of the new capabilities in this update. For more insight, please visit this page.

Erasure coding

Erasure coding is an advanced data protection mechanism that reconstructs corrupted or lost data by using information about the data that’s stored elsewhere in the storage system.

Erasure coding is an alternative to hardware RAID. Although RAID 5 and RAID 6 protect against single and double disk failures respectively, the risk of additional disks failing during a rebuild grows quickly with higher-capacity disks. Erasure coding provides failure protection beyond single or double component failures and consumes less space than replication. For example, a 4+2 configuration (four data fragments plus two redundancy fragments) tolerates two simultaneous failures while consuming only 1.5x the raw capacity, compared with 3x for triple replication.
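
In Red Hat Gluster Storage, erasure coding is exposed as a dispersed volume. As a hedged sketch (the host names and brick paths are placeholders; verify exact syntax against the RHGS 3.1 documentation), a 4+2 dispersed volume could be created like this:

    # Six bricks in total: four hold data fragments and two hold redundancy,
    # so any two bricks can fail without losing data
    gluster volume create ecvol disperse 6 redundancy 2 \
        server{1..6}:/rhgs/brick1/ecvol
    gluster volume start ecvol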

Tiering

Moving data between tiers of hot and cold storage is a computationally expensive task. To address this, RHGS 3.1 supports automated promotion and demotion of data within a volume, so that different sub-volume types act as hot and cold tiers. Data is automatically assigned or reassigned a “temperature” based on the frequency of access. RHGS supports an “attach” operation that converts an existing volume into the “cold” tier and creates a new “hot” tier within the same volume. Together, the combination is a single “tiered” volume. For example, the existing volume may be erasure coded on HDDs while the “hot” tier is distributed-replicated on SSDs.
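
A hedged sketch of that attach operation, using the upstream GlusterFS 3.7 CLI syntax (later releases moved to “gluster volume tier <volname> attach”; the bricks and replica count below are placeholders):

    # Attach an SSD-backed, replicated hot tier in front of the existing volume,
    # which becomes the cold tier of the resulting tiered volume
    gluster volume attach-tier ecvol replica 2 \
        server1:/rhgs/ssd1/hot server2:/rhgs/ssd1/hot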

Active/Active NFSv4

RHGS 3.1 supports active-active NFSv4 using the NFS-Ganesha project. This allows a user to export Gluster volumes through NFSv4.0 and NFSv3 via NFS-Ganesha. NFS-Ganesha is a user-space implementation of the NFS protocol that is very flexible, with simplified failover and failback in case of a node or network failure. The high-availability implementation (which supports up to 16 nodes of active-active NFS heads) uses the corosync and pacemaker infrastructure. Each node has a floating IP address that fails over to a configured surviving node in case of a failure. Failback happens when the failed node comes back online.
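
To give a flavor of the setup, here is a minimal, hypothetical ganesha-ha.conf in the style of the upstream Gluster/NFS-Ganesha integration; the option names, values, and enable command are assumptions to be checked against the RHGS 3.1 administration guide.

    # /etc/ganesha/ganesha-ha.conf (illustrative values)
    HA_NAME="rhgs-ganesha-ha"
    HA_VOL_SERVER="server1"
    HA_CLUSTER_NODES="server1,server2,server3,server4"
    # One floating (virtual) IP per node; it moves to a surviving node on failure
    VIP_server1="192.168.10.101"
    VIP_server2="192.168.10.102"
    VIP_server3="192.168.10.103"
    VIP_server4="192.168.10.104"

    # Bring up the NFS-Ganesha HA cluster on the configured nodes
    gluster nfs-ganesha enable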

Bit rot detection

Bit-rot detection identifies silent data corruption, which is usually encountered at the disk level. This corruption leads to a slow deterioration in the performance and integrity of data stored in storage systems. RHGS 3.1 periodically runs a background scan to detect and report bit-rot errors via a checksum based on the SHA256 algorithm.

Because this process can have a severe impact on storage system performance, RHGS allows an administrator to execute the processes at three different speeds (lazy, normal, aggressive).
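
A quick sketch with the gluster CLI (the volume name is a placeholder; verify the subcommands against the RHGS 3.1 documentation):

    # Enable bit-rot detection; checksum signing and scrubbing run in the background
    gluster volume bitrot ecvol enable

    # Control how aggressively the scrubber consumes CPU and I/O
    gluster volume bitrot ecvol scrub-throttle lazy

    # Scrub more or less often as needed
    gluster volume bitrot ecvol scrub-frequency weekly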

Your next step

To get more details about RHGS 3.1, please visit this page. To take RHGS for a test drive, please visit https://engage.redhat.com/aws-test-drive-201308271223.

Announcing Red Hat Ceph Storage 1.3

Like a fine wine, Red Hat Ceph Storage (RHCS) gets better with age. During Red Hat Summit 2015, we announced the availability of RHCS 1.3, a release that brings with it improvements and tuning designed to please many an admin. Let’s take a look at what you can expect, but before we do, remember that you can test drive RHCS by visiting this link. Do it today.

Robustness at scale

Data is growing at a mind-boggling rate every year, so it makes sense that RHCS should be able to handle multi-petabyte clusters where failure is a fact of life. To mitigate that failure and enhance performance, RHCS now has:

  • Improved automatic rebalancing logic that prioritizes degraded, rather than misplaced, objects
  • Rebalancing operations that can be temporarily disabled so they don’t impede performance (see the example after this list)
  • Scrubbing that can be scheduled to avoid disruption at peak times
  • Object buckets that can be sharded to avoid hot-spots
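
As a brief example of the second and third items, cluster-wide flags let an operator pause rebalancing and scrubbing before peak hours and resume them afterwards (standard ceph CLI; adapt to your own maintenance windows):

    # Before the peak window: pause background data movement and scrubbing
    ceph osd set norebalance
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # After the peak window: let the cluster catch up again
    ceph osd unset norebalance
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub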

Performance tuning

RHCS has been tweaked and tuned to improve speed and increase I/O consistency. This includes optimizations for flash storage devices; read-ahead caching, which accelerates virtual machine booting in OpenStack; allocation hinting, which reduces XFS fragmentation to minimize performance degradation over time; and cache hinting, which preserves the cache’s advantages and improves performance.
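
For instance, the read-ahead behavior is governed by client-side settings; a hypothetical ceph.conf fragment might look like the following (the values are illustrative, and option availability should be confirmed against the RHCS 1.3 documentation):

    [client]
    # Read ahead aggressively during sequential reads (e.g., VM boot)...
    rbd readahead trigger requests = 10
    rbd readahead max bytes = 524288
    # ...then step aside once the guest's own cache has warmed up
    rbd readahead disable after bytes = 52428800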

Operational efficiency

New features in RHCS 1.3 help admins manage storage more efficiently. For example, Calamari, the product’s management platform, now supports multiple users and clusters. The Civetweb server greatly simplifies deployment of the Ceph Object Gateway: it now takes only two commands to fully install RGW. CRUSH management via the Calamari API allows programmatic adjustment of placement policies, and block device operations such as resize, delete, and flatten are now faster.
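
With the ceph-deploy tooling of that era, those two commands would plausibly look like the following (the hostname is a placeholder and the exact invocation is an assumption; consult the RHCS 1.3 installation guide):

    # Install the Object Gateway packages, then create a Civetweb-backed instance
    ceph-deploy install --rgw gateway-node1
    ceph-deploy rgw create gateway-node1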

Your next step

See the changes in RHCS for yourself! Visit today or hit us up on Twitter!
