According to a study just completed by Mellanox Technologies, Ltd., Supermicro, and Red Hat, Red Hat Ceph Storage and Red Hat Gluster Storage deliver higher storage server performance when used with Mellanox solutions for 25, 40, 50, 56, and 100 Gigabit Ethernet networking and RDMA technology. They can also lower the cost of deploying rack-scale storage for cloud and enterprise by squeezing more performance out of dense servers. Dense storage servers (>18 hard drives) and all-flash configurations can drive more throughput than standard 10GbE bandwidth can accommodate. Mellanox high-speed networking technologies allow these dense and all-flash servers to achieve higher throughput performance. In addition, for latency-sensitive workloads, Mellanox networking technologies can significantly reduce IO latencies.
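A back-of-envelope calculation illustrates why dense servers outrun 10GbE. The per-drive throughput figure below is an illustrative assumption (a typical sequential rate for a 7.2K RPM hard drive), not a number from the study:

```python
# Rough sketch: aggregate sequential throughput of a dense HDD server
# vs. the line-rate ceiling of a single Ethernet link. Figures are
# illustrative assumptions, not measurements from the study.

def link_gbs(gigabits_per_sec):
    """Line-rate ceiling of an Ethernet link in GB/s (ignores protocol overhead)."""
    return gigabits_per_sec / 8

def hdd_array_gbs(drive_count, mb_per_drive=150):
    """Aggregate sequential throughput of an HDD array in GB/s,
    assuming ~150 MB/s per drive."""
    return drive_count * mb_per_drive / 1000

print(link_gbs(10))        # 1.25 GB/s on 10GbE
print(hdd_array_gbs(18))   # 2.7 GB/s from 18 drives
print(link_gbs(40))        # 5.0 GB/s on 40GbE
```

Under these assumptions, even an 18-drive server can push roughly twice what a 10GbE link carries, which is why 25GbE and faster networking pays off on dense and all-flash nodes.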
Supermicro & Red Hat collaborate on easier storage procurement and deployment for customers
Most IT infrastructure buyers look to optimize their purchases around performance, availability, and cost. Storage buyers are no different. However, optimizing storage infrastructure across a number of workloads can become challenging, as each workload might have unique requirements. Red Hat Ceph Storage customers frequently request simple, recommended cluster configurations for different workload types. The most common requests are for throughput-optimized (such as KVM image repositories) and capacity-optimized workloads (like digital media object archives), but IOPS-optimized workloads (such as MySQL on OpenStack) are also emerging rapidly.
Note: The use of the word “partnership” does not imply a legal partnership between Red Hat and any other company.
To further expand and strengthen our thriving partner ecosystem of IT software and hardware leaders, we are teaming with Supermicro Computer, Inc., and Vantrix to deliver comprehensive, proven, and fully supported compute and storage solutions for enterprises managing petabyte-scale data. They join a vibrant ecosystem that also includes partners such as Fujitsu.
The Everglades is a region of Florida that consists of wetlands and swamps. These natural areas are filled with life and possibility, so it makes sense that Everglades was the code name for the new Red Hat Gluster Storage (RHGS) 3.1, announced today and available this summer.
Like a fine wine, Red Hat Ceph Storage (RHCS) gets better with age. During Red Hat Summit 2015, we announced the availability of RHCS 1.3, a release that brings with it improvements and tuning designed to please many an admin. Let’s take a look at what you can expect, but before we do, remember that you can test drive RHCS by visiting this link. Do it today.
Enterprises are dealing with workloads that demand anywhere from a few terabytes to multiple petabytes of unstructured data. Storage-intensive enterprise workloads are ubiquitous in the data center and range across workloads such as:
- Archiving and backup, including backup images and online archives
- Rich media content storage and delivery, such as videos, images, and audio files
- Enterprise drop-box
- Cloud and business applications, including log files, RFID data, and other machine-generated data
- Virtual and cloud infrastructure, such as virtual machine images as well as emerging workloads, such as co-resident applications
As modern-day application architects continue to aggressively leverage container technologies, Red Hat Enterprise Linux customers demand deeper integration between Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). This week at Red Hat Summit, we are excited to showcase the hard work of the Red Hat Storage and OpenShift communities to deliver one of the first PaaS offerings for automatic orchestration of remote persistent storage for containerized application services across a large cluster topology.
It’s 2015, and this year Red Hat Summit is in Boston, a city known for its crucial role in United States history, its formerly cursed baseball team, and, of course, a wonderful cream-filled, chocolate-covered doughnut, among many, many other things.
— Jim Whitehurst (@JWhitehurst) May 22, 2015
You may have read some of our posts about the recent OpenStack Summit in Vancouver. What you may not have seen are the thoughts from the industry or the conversation on social networks. Read on for a snapshot in time!
This week at OpenStack Summit in Vancouver, Red Hat announced integrated OpenStack Manila shared file system services with Red Hat Gluster Storage to provide enterprises with a scale-out file system for OpenStack clouds.
This news is extremely relevant to enterprises that need a scale-out shared file service for their private cloud deployments. Before the Manila file share service, there was no elegant way to handle file shares in OpenStack. The integration with OpenStack Manila allows Red Hat Gluster Storage to serve as a storage back end for Manila, giving users a way to take advantage of file-based storage services in an OpenStack environment. In addition, it takes away much of the housekeeping from OpenStack users, who can now request a file share, use it as long as needed, and return it to the storage pool without ever caring who first created the share or where exactly it is stored.
OpenStack Manila file storage users have a reason to celebrate. Red Hat is proud to announce a new performance benchmark that not only reinforces the performance of GlusterFS at scale but also demonstrates how enterprises could significantly lower storage and energy costs.
The benchmark tests represent the industry’s first results for GlusterFS on ultra-dense Cisco UCS C3160 servers with high-speed networking. GlusterFS offers over half a petabyte of usable storage with just four nodes, which makes it extremely cost effective in storage-intensive workloads such as OpenStack Manila. The latest IOzone file system performance benchmark shows how GlusterFS read and write performance scales near-linearly when deployed on Cisco UCS C3160 servers.
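To see how four nodes can add up to over half a petabyte usable, here is a hedged sketch of the arithmetic for a GlusterFS distributed-replicated volume. The drive count and drive size below are illustrative assumptions (the C3160 chassis holds up to 60 drives), not the benchmark's exact configuration:

```python
def usable_tb(nodes, drives_per_node, drive_tb, replica=2):
    """Usable capacity (TB) of a GlusterFS distributed-replicated
    volume: raw capacity divided by the replica count."""
    return nodes * drives_per_node * drive_tb / replica

# Illustrative: 4 nodes x 60 drives x 6 TB each, two-way replication
print(usable_tb(4, 60, 6))  # 720.0 TB, i.e. ~0.7 PB usable
```

With two-way replication, half the raw capacity goes to redundancy, yet four dense nodes still clear the half-petabyte mark under these assumed drive specs.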
According to a voluntary online survey conducted by the OpenStack User Committee, many of OpenStack’s founding projects, including Nova and Glance, continue to be popular. However, other projects, such as Heat, Swift, and Trove, are also gaining in adoption.
Some areas of the survey really stood out, however, such as the small matter of storage, one of our favorite topics! For example, there’s the fact that Ceph continues to be the most popular storage driver, gaining 7 percent since the last survey.