New studies from Red Hat, Supermicro, and Mellanox show major Red Hat Storage performance increases with Mellanox networking

According to a study just completed by Mellanox Technologies, Ltd., Supermicro, and Red Hat, Red Hat Ceph Storage and Red Hat Gluster Storage deliver higher storage server performance when used with Mellanox solutions for 25, 40, 50, 56, and 100 Gigabit Ethernet networking and RDMA technology. Together, these solutions can also lower the cost of deploying rack-scale storage for cloud and enterprise by squeezing more performance out of dense servers. Dense storage servers (more than 18 hard drives) and all-flash configurations can drive more throughput than standard 10GbE bandwidth can accommodate. Mellanox high-speed networking technologies allow these dense and all-flash servers to achieve higher throughput performance. In addition, for latency-sensitive workloads, Mellanox networking technologies can significantly reduce IO latencies.
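A rough back-of-the-envelope calculation shows why 10GbE becomes the bottleneck for a dense storage server. The per-drive sequential read rate below is an illustrative assumption for a modern hard drive, not a figure from the study:

```python
# Back-of-the-envelope: aggregate sequential read throughput of a dense
# storage server vs. usable network bandwidth.
# NOTE: the 150 MB/s per-drive rate is an illustrative assumption,
# not a measurement from the Red Hat/Mellanox study.

DRIVES_PER_SERVER = 72   # dense chassis used in the benchmark project
MB_PER_DRIVE = 150       # assumed sequential read rate per HDD (MB/s)

aggregate_mb_s = DRIVES_PER_SERVER * MB_PER_DRIVE  # 10,800 MB/s

# Convert common Ethernet speeds from gigabits to megabytes per second
# (ignoring protocol overhead for simplicity).
for gbe in (10, 25, 40, 50, 100):
    link_mb_s = gbe * 1000 / 8
    verdict = "bottleneck" if link_mb_s < aggregate_mb_s else "sufficient"
    print(f"{gbe}GbE ≈ {link_mb_s:.0f} MB/s -> {verdict}")
```

Even with conservative per-drive numbers, a 72-drive server can source far more data than a single 10GbE link (about 1,250 MB/s) can carry, which is why the faster Mellanox links change the per-server results.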

Mellanox, Red Hat, Seagate, and Supermicro are also running an ongoing Red Hat Ceph Storage benchmark project to demonstrate performance with various combinations of flash and hard drives. Tests in the first phase demonstrated that, when using 40GbE instead of 10GbE networks:

  • Per-server large sequential read throughput performance increased up to 100% for Red Hat Ceph Storage servers with 72 drives
  • Per-server large read throughput performance increased up to 20% for Red Hat Ceph Storage servers with 36 drives
  • Read and write latency dropped up to 50%
