Library of Ceph and Gluster reference architectures – Simplicity on the other side of complexity

The Storage Solution Architectures team at Red Hat develops reference architectures, performance and sizing guides, and test drives for Gluster- and Ceph-based solutions. We’re a group of architects who perform lab validation, tuning, and interoperability development for composable storage services with target workloads on optimized server and network configurations. We seek simplicity on the other side of complexity.

At the end of this blog entry is a full library of our current publications and test drives.

In our modern era, a top company asset is pivotability. Pivotability based on external market changes. Pivotability after unknowns become known. Pivotability after golden ideas become dark alleys. For most enterprises, pivotability requires a composable technology infrastructure for shifting resources to meet changing needs. Composable storage services, such as those provided by Ceph and Gluster, are part of many companies’ composable infrastructures.

Composable technology infrastructures are most frequently described by the following attributes:

  • Open source vs. closed development.
  • On-demand architectures vs. fixed architectures.
  • Commodity hardware vs. proprietary appliances.
  • Cross-industry collaboration vs. isolated single-vendor silos.

As noted in the following figure, a few companies with large staffs of in-house experts can create composable infrastructures from raw technologies. These large investments in in-house expertise allow them to turn raw technologies into solutions with little pre-integration from technology suppliers. AWS, Google, and Azure are all examples of DIY businesses. A larger number of other companies, also needing composable infrastructures, rely on technology suppliers and the community for solution pre-integration and guidance to reduce their in-house expertise costs. We'll label them "Assisted DIY." Finally, the majority of global enterprises lack the in-house expertise to deploy these composable infrastructures. They rely on public cloud providers and pre-packaged solutions for their infrastructure needs. We'll call them "Pre-packaged."

[Figure: DIY, Assisted DIY, and Pre-packaged infrastructure segments]

The reference architectures, performance and sizing guides, and test drives produced by our team are primarily focused on the “Assisted DIY” segment of companies. Additionally, we strive to make Gluster and Ceph composable storage services available to the “Pre-packaged” segment of companies by using what we learn to produce pre-packaged combinations of Red Hat software with partner hardware targeting specific workload use cases.

We enjoy our roles at Red Hat because of the many of you with whom we collaborate to produce value. We hope you find these guides useful.

Team-produced with partner collaboration:

Partner-produced with team collaboration:

Pre-packaged solutions:

Hands-on test drives:

Manageability Becoming A Key Component of Open, Software Defined Storage (Red Hat Storage Console Now Available!)

By Steve Bohac, Red Hat Storage Product and Solution Marketing

Open software-defined storage is transforming the way organizations tackle their data management challenges. More and more customers are realizing that an open, software-based approach can significantly reduce costs and help them contend with their exploding data landscape. Additionally, open software-defined storage solutions can help organizations discover new roles and new value for enterprise storage.

Continue reading “Manageability Becoming A Key Component of Open, Software Defined Storage (Red Hat Storage Console Now Available!)”

TheStraightTech: GlusterFS Over NFS, Virtual IP Migration, and Replication

Community linchpin Jjulian has posted a helpful HOWTO on how GlusterFS NFS works when migrating a virtual IP in a replicated volume:

How Virtual IP Migration works with GlusterFS NFS and a replicated volume:

Given [a] simple setup where a client is connected via NFS to Server_A, the client communicates with a glusterfs client that is designed to act as an NFS server. This client loads a "vol" file with a specific NFS translator to accomplish this. The rest of that "vol" file works exactly like the FUSE client's: it connects to the bricks on the servers through the distribute and the replicate (or stripe) translators.
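For readers who haven't looked inside one of these files, here is a minimal sketch of what such an NFS-serving "vol" file can look like. The hostnames, brick paths, and volume names below are hypothetical, and in practice Gluster generates this file for you; the point is simply to show the nfs/server translator sitting on top of the same replicate stack the FUSE client uses:

    # Hypothetical volfile loaded by the glusterfs NFS-server process.
    # One protocol/client translator per brick, as in the FUSE client's volfile.
    volume brick-a
      type protocol/client
      option remote-host server_a
      option remote-subvolume /export/brick
    end-volume

    volume brick-b
      type protocol/client
      option remote-host server_b
      option remote-subvolume /export/brick
    end-volume

    # The same cluster/replicate translator the FUSE client would use.
    volume repl-0
      type cluster/replicate
      subvolumes brick-a brick-b
    end-volume

    # The nfs/server translator is what turns this client into an NFS server.
    volume nfs-server
      type nfs/server
      subvolumes repl-0
    end-volume

Because each server can run an identical client stack connecting to the same bricks, a virtual IP can float from Server_A to Server_B and NFS clients keep seeing the same volume.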

Read the entire article

Enough data to fill a stack of DVDs to the moon (and back)

I believe that storage needs to change radically — in terms of technology, architecture, deployment, and — perhaps most importantly — economics.

Read more