Red Hat Ceph Storage is a proven, petabyte-scale object storage solution designed to meet the scalability, cost, performance, and reliability challenges faced by organizations serving media at large scale. Designed for web-scale object storage and cloud infrastructures, Red Hat Ceph Storage delivers the scalable performance necessary for rich media and content-distribution workloads.
While most of us are familiar with deploying block or file storage, object storage expertise is less common. Object storage is an effective way to provision flexible and massively scalable data storage without the arbitrary limitations of traditional proprietary or scale-up storage solutions. Before building object storage infrastructure at scale, organizations need to understand how to best configure and deploy software, hardware, and network components to serve a range of diverse workloads. They also need to understand the performance and scalability they can expect from given hardware, software, and network configurations.
This reference architecture and performance/sizing guide describes Red Hat Ceph Storage coupled with QCT (Quanta Cloud Technology) storage servers and networking as object storage infrastructure. Testing, tuning, and performance are described for both large-object and small-object workloads. The guide also presents the results of tests evaluating how well the configurations scale to host hundreds of millions of objects.
After hundreds of hours of [Test ⇒ Tune ⇒ Repeat] exercises, this reference architecture provides empirical answers to a range of performance questions surrounding Ceph object storage, such as (but not limited to):
- What are the architectural considerations before designing object storage?
- What networking is most performant for Ceph object storage?
- What does performance look like with dedicated vs. co-located Ceph RGWs?
- How many Ceph RGW nodes do I need?
- How do I tune object storage performance?
- What are the recommendations for small/large object workloads?
- I’ve got millions of objects to store. What should I do?
And the list of questions goes on. You can unlock the performance secrets of Ceph object storage for your organization with the help of the Red Hat Ceph Storage/QCT performance and sizing guide.
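The guide documents the full benchmark methodology on tuned clusters; purely to illustrate the kind of measurement involved, below is a minimal sketch that times individual PUT and GET operations against the S3-compatible API that Ceph RGW exposes, using Python and boto3. The endpoint URL, credentials, bucket name, and object sizes are hypothetical placeholders, not values from the guide.

```python
# Minimal sketch: timing small- and large-object PUT/GET against a Ceph RGW
# S3-compatible endpoint. Endpoint, credentials, and sizes are placeholders.
import time

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

bucket = "perf-test"
s3.create_bucket(Bucket=bucket)

for size in (64 * 1024, 64 * 1024 * 1024):  # a 64 KiB "small" and a 64 MiB "large" object
    key = f"obj-{size}"
    payload = b"x" * size

    # Time a single PUT of the payload.
    start = time.monotonic()
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    put_s = time.monotonic() - start

    # Time a single GET, reading the body fully.
    start = time.monotonic()
    s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    get_s = time.monotonic() - start

    print(f"{size:>12} B  PUT {put_s:.3f}s ({size / put_s / 2**20:.1f} MiB/s)  "
          f"GET {get_s:.3f}s ({size / get_s / 2**20:.1f} MiB/s)")
```

A single-threaded loop like this only hints at per-operation latency; the testing behind the guide drives many concurrent clients with load-generation tools (COSBench, for example) to measure aggregate throughput and scaling behavior.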
The Storage Solution Architectures team at Red Hat develops reference architectures, performance and sizing guides, and test drives for Gluster- and Ceph-based solutions. We’re a group of architects who perform lab validation, tuning, and interoperability development for composable storage services with target workloads on optimized server and network configurations. We seek simplicity on the other side of complexity.
At the end of this blog entry is a full library of our current publications and test drives.
In our modern era, a top company asset is pivotability. Pivotability based on external market changes. Pivotability after unknowns become known. Pivotability after golden ideas become dark alleys. For most enterprises, pivotability requires a composable technology infrastructure for shifting resources to meet changing needs. Composable storage services, such as those provided by Ceph and Gluster, are part of many companies’ composable infrastructures.
Composable technology infrastructures are most frequently described by the following attributes:
- Open source v. closed development.
- On-demand architectures v. fixed architectures.
- Commodity hardware v. proprietary appliances.
- Cross-industry collaboration v. isolated single-vendor silos.
As noted in the following figure, a few companies with large staffs of in-house experts can create composable infrastructures from raw technologies. Their large investments in in-house expertise allow them to convert raw technologies into solutions with limited pre-integration by technology suppliers. AWS, Google, and Azure are all examples of these DIY businesses. A larger number of companies, also needing composable infrastructures, rely on technology suppliers and the community for solution pre-integration and guidance to reduce their in-house expertise costs. We’ll label them “Assisted DIY.” Finally, the majority of global enterprises lack the in-house expertise to deploy these composable infrastructures. They rely on public cloud providers and pre-packaged solutions for their infrastructure needs. We’ll call them “Pre-packaged.”
The reference architectures, performance and sizing guides, and test drives produced by our team are primarily focused on the “Assisted DIY” segment of companies. Additionally, we strive to make Gluster and Ceph composable storage services available to the “Pre-packaged” segment of companies by using what we learn to produce pre-packaged combinations of Red Hat software with partner hardware targeting specific workload use cases.
We enjoy our roles at Red Hat because of the many of you who collaborate with us to produce value. We hope you find these guides useful.
Team-produced with partner collaboration:
Partner-produced with team collaboration:
Hands-on test drives:
Did you know about the Ceph support wiki? A wide range of articles and resources await, including the one below. But it’s a wiki, so you get to contribute! Head on over and join us at…
…and be part of the conversation.
Continue reading “10 Commands Every Ceph Administrator Should Know”
If you haven’t noticed by now, Red Hat just relaunched its website. In particular, we’re super excited about the launch of the new Red Hat Storage web pages. The new Red Hat Storage product pages provide a single reference source for the latest on storage, including Red Hat Storage Server, Inktank by Red Hat, Gluster, Ceph, and Big Data solutions. We hope you will find the site much more informative and insightful, and easier to navigate. And, since we’re the leader in open source technology, we believe in putting our money where our mouth is: many aspects of the site are powered by open source technologies such as Red Hat Enterprise Linux, Red Hat Satellite, Drupal, and OpenShift.
Continue reading “Red Hat Storage gets a Facelift on Redhat.com”
One of the ongoing challenges of IT is managing the never-ending demand for more data processing and information storage. One organization facing this challenge was Metro de Madrid. Its IT infrastructure managed 45-50TB of information, divided into two main blocks: high-speed storage over fiber-optic channels and storage across the network. The team responsible for administering and maintaining Metro de Madrid’s operational computer systems, covering train traffic, energy management, audio systems, traveler information systems, and more, required:
Continue reading “Keeping the trains on time: Improved scalability, capacity, availability and zero downtime on 90% of critical systems for Metro de Madrid”
by Irshad Raihan, Red Hat Storage – Big Data Product Marketing
The trusty paper shredder in my home office died last week. I’m in the market for a new one. Years ago, when I purchased “Shreddy” (of course, it had a name) after a brief conversation with a random store clerk, choices were few and information was scarce. In fact, paper shredders weren’t really considered standard personal office equipment as they are today. Most good shredders were built for offices, not homes. Back in the market more than a decade later, it’s clear that the search for a new shredder is going to be trickier than I had imagined.
A paper shredder is a lot like big data.
Continue reading “What Can a Paper Shredder Teach Us About Big Data?”
by Irshad Raihan, Red Hat Storage – Big Data Product Marketing
Digital data has been around for centuries in one form or another. Commercial tabulating machines have been available since the late 1800s, when they were used for accounting, inventory, and population censuses. Why, then, do we label today the Big Data age? What dramatically changed in the last 10-15 years that has the entire IT industry champing at the bit?
More data? Certainly. But that’s only the tip of the iceberg. There are two big drivers that have contributed to the classic V’s (Volume, Variety, Velocity) of Big Data. The first is the commoditization of computing hardware – servers, storage, sensors, cell phones – basically anything that runs on silicon. The second is the explosion in the number of data authors – both machines and humans.
Continue reading “The Data Life Cycle Has Changed. Are You Ready?”
We’ll have a number of Q&As coming your way in the days and weeks to come. Here’s one to kick things off — it’s with Red Hat’s Scott Clinton. Scott is senior director of product management & marketing for Storage and Big Data.
Continue reading “A Q&A with Red Hat’s Scott Clinton”
Summit Spotlight: Don’t Miss These Storage Tracks and Sessions
Red Hat Summit kicks off this year in San Francisco, CA, running April 14-17. We’ve organized more than 150 breakout sessions, each with a unique solution focus, and attendees of all experience levels will find a variety of products, demos, customer success stories, and more.
Continue reading “Summit Spotlight: Don’t Miss These Storage Tracks and Sessions”
By Steve Bohac, Red Hat Storage Product and Solution Marketing
Open software-defined storage is transforming the way organizations tackle their data management challenges. More and more customers are realizing that an open, software-based approach can significantly reduce costs and help them contend efficiently with their exploding data landscape. Additionally, open software-defined storage solutions can help organizations discover new roles and value for enterprise storage.
Continue reading “Manageability Becoming A Key Component of Open, Software Defined Storage (Red Hat Storage Console Now Available!)”
Today’s Post: Red Hat Storage Server In Action
By Steve Bohac, Red Hat Storage Product Marketing
Several weeks ago, we posted the blog entry “Open Software Defined Storage – Don’t Get Fooled By The False ‘Open’ and Get Locked-In Again”. Today’s entry is the conclusion of that four-part mini-series.
We understand how difficult it is to optimize your storage for innovation and growth, and our goal is to help enterprises on their journey to convert their data centers from cost centers into revenue generators. Red Hat Storage Server has helped businesses of all kinds achieve their objectives. Here’s how open, software-defined storage has helped a few organizations get to the next level:
Continue reading “Red Hat’s Approach with Open, Software-Defined Storage (A Four Part Series)”
By Scott Clinton, Red Hat
Open software-defined storage is transforming the way organizations tackle their data management challenges. More and more companies are seeing how it can significantly reduce costs, efficiently contend with the exploding data landscape, and support today’s increasingly software-defined and hybrid datacenter. Along the way, they are discovering the new roles and value software-based storage platforms can bring, now that open software-defined storage enables organizations to take data, as is, out of the box.
Continue reading “Open Software Defined Storage – Don’t get fooled by the false “Open” and get locked-in once again”