Questions and Answers, part one: Red Hat Storage Server 3, Ceph, and Gluster

You may recall we recently launched Red Hat Storage Server 3 (learn more about that here). A lot of questions came in during our keynote that we weren’t able to answer at the time due to the live nature of the broadcast, so we’ve collected everything we received and compiled a series of blog posts around it. We’ll be sharing those posts over the next few days, starting with this one.


Competitor Comparison

What proof points do you have for the storage TCO compared to either legacy methods or your competitors?

IDC published a white paper that details this information. Check out The Economics of Software-based Storage report.

How is RHS Ceph on commodity hardware positioned for advantage against big storage vendors like NetApp, EMC, HDS, etc.?

Red Hat and open source give customers freedom of choice in hardware, which helps them drive down costs. The scale-out architectures of Red Hat Storage Server and Red Hat’s Inktank Ceph Enterprise are better suited to massive data growth because they don’t require large upfront investment. In addition, because we run on industry-standard hardware and combine Red Hat Enterprise Linux with GlusterFS as the underlying storage OS, the storage nodes can also run some infrastructure applications, which helps reduce datacenter footprint and costs.

How does this differ from NetApp & EMC? What advantages does it have over these big storage players?

NetApp has no real scale-out storage solution. EMC does have Isilon for scale-out file storage which is a proprietary appliance with similar features, but at significantly higher costs. Red Hat Storage also provides converged (computing and storage) capabilities, whereas NetApp and EMC do not.

Can you provide Red Hat’s definition for Software Defined Storage and how the capacity and security mechanisms in SDS are improved & differentiated compared to traditional storage?

Software-defined storage decouples the storage hardware from the control layer. The common SDS approach is to use standard hardware and implement advanced features in the software layer instead. RHS uses RHEL as the underlying storage OS, which provides military-grade security features. Using a mainstream OS like RHEL also means a much larger user base, so when security issues are discovered, customers receive fixes immediately. As a recent example, it took Red Hat only a couple of hours to fix a widespread and dangerous security issue in the Bash shell (Shellshock), whereas quite a few UNIX-, Linux-, or FreeBSD-based deployments and appliances still don’t have a fix for this issue as of today.

How do you measure the advantage your system provides?

We measure three factors: cost, scalability, and performance. Our costs are on average up to 60% lower than the competition’s, we can now scale to more than 30 PB in a single pool, and throughput increases linearly as storage nodes are added.

What are the advantages of RHSSv3 compared to ZFS solutions like Nexenta?

ZFS is a great file system, but it runs on a single node and therefore can’t really scale out; it uses a scale-up approach instead (adding CPU, cache, SSD). This is fine for small to medium environments, but it brings the same limitation as proprietary legacy storage appliances: they don’t scale.

How is software-defined storage different from storage hypervisors or storage virtualization?

Red Hat Storage Server and Red Hat’s Inktank Ceph Enterprise use virtualization approaches as well, but they go beyond that capability and provide many more features. The most commonly used storage virtualization technologies are block-based and provide little more than larger virtual block pools, which usually require more expensive and complex Fibre Channel-based storage. RHS pools and virtualizes file systems across commodity storage servers: there is no need for a shared storage system or Fibre Channel, because it uses the disks inside each of the storage servers. We use an algorithmic approach to virtualization, which tends to scale better than classic storage virtualization technologies that have to go through a controller appliance or metadata server.
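
To make the algorithmic approach concrete, here is a minimal, purely illustrative Python sketch of hash-based placement. It is not the actual GlusterFS elastic-hashing code, and the server names and hash scheme are assumptions for demonstration only, but it shows the core idea: any client can compute a file’s location from its name alone, so no metadata server sits in the data path.

    import hashlib

    # Hypothetical pool of storage servers; in a real deployment these
    # would be the bricks that make up a distributed volume.
    SERVERS = ["server-a", "server-b", "server-c", "server-d"]

    def locate(filename, servers=SERVERS):
        """Deterministically map a file name to a storage server.

        Every client runs the same calculation, so placement is a pure
        function of the name and no central lookup is needed.
        """
        digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
        return servers[int(digest, 16) % len(servers)]

    for name in ["invoices/2014-10.csv", "logs/app.log", "vm-image.qcow2"]:
        print(name, "->", locate(name))

Because placement is computed rather than looked up, adding clients puts no load on any central component; real systems layer range-based hashing and rebalancing on top of this basic idea.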

 

Storage Hangout: Live from Strata + Hadoop World – Barcelona, Spain (11/20)

Tune into the latest episode of the Storage Hangout, broadcasting live from Strata + Hadoop World in Barcelona, Spain. Brian Chang hangs out with Greg Kleiman, Director of Red Hat Big Data, to discuss the latest news and announcements coming out of the Strata + Hadoop World conference.

Greg touches on the long-term vision for big data analytics and the partner alliances with Cloudera and the broader Hadoop ecosystem, and provides a glimpse into Red Hat’s big data strategy for 2015.

BrightTALK Webinar Highlights: The New Shape of Software Defined Storage

Did you have a chance to catch the BrightTALK webinar “The New Shape of Software Defined Storage”? If not, you missed out. It was hosted by Taneja Group Sr. Analyst Mike Matchett, and it explored the rapidly expanding world of software-defined storage with a panel of the hottest SDS vendors in the market, including:

  • Gridstore: Kelly Murphy, Founder & CTO
  • HP: Rob Strechay, Director, Product Marketing & Management, SDS
  • IBM: Sam Werner, Director, Storage and SDE Strategy
  • Red Hat: Irshad Raihan, Product Marketing Lead, Big Data

Some of the highlights from the webinar that Irshad touches on include:

  • New workloads are driving innovation in software-defined storage. With the acquisition of Inktank, Red Hat is now poised to tackle any type of semi-structured or unstructured data across multiple formats such as file, block, and object.
  • The latest release, Red Hat Storage Server 3.0, combines the best-in-class, enterprise-grade Linux server platform Red Hat Enterprise Linux 6 with GlusterFS 3.6 to create an open, software-defined, massively scalable, high-performance, highly available, and cost-effective storage offering.
  • This release of the storage server introduces volume snapshots, monitoring using Nagios, increased usable capacity per storage server, Hadoop workload support, non-disruptive upgrades from the previous version, and supportability enhancements to address your data protection and storage management challenges for unstructured and big data storage.
  • The storage server is rigorously qualified to meet exacting performance and scale demands for next-generation enterprise and cloud storage deployments and is tightly integrated with Red Hat’s scalable file system, XFS.
  • Key features of the latest RHSS 3 release (a usage sketch follows this list):
    • Local snapshots for disk-based backup
    • Monitoring using Nagios
    • In-place analytics for Hadoop workloads
    • Non-disruptive management and upgrades
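
As a quick illustration of the snapshot feature, here is a minimal sketch that drives the gluster command line from Python. The volume and snapshot names are placeholders, and this is an assumption-laden example rather than a reference; consult the GlusterFS 3.6 documentation for the authoritative snapshot syntax.

    import subprocess

    VOLUME = "vol0"          # placeholder volume name
    SNAP = "vol0-nightly"    # placeholder snapshot name

    def run(cmd):
        """Run a gluster CLI command, echoing it and returning its output."""
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    # Create a point-in-time snapshot of the volume, then list snapshots.
    run(["gluster", "snapshot", "create", SNAP, VOLUME])
    print(run(["gluster", "snapshot", "list", VOLUME]))

Snapshots in this release are built on thinly provisioned LVM bricks, so creating one is fast and space-efficient.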

To watch the webinar on-demand, tune in here.

Big data in risk management: Red Hat and Hortonworks to the rescue

The real estate bubble of 2008 sent shockwaves worldwide. The event was so significant that the US economy was judged only this year to have recovered to pre-bubble levels. But it also produced shockwaves of another kind: regulatory ones.

To address these regulatory needs for Wall Street, Red Hat Storage teamed up with Hortonworks to build an enterprise-grade big data risk management solution. At the recent Strata + Hadoop World, Vamsi Chemitiganti (chief architect, financial services) presented the solution in a session, which you can watch for yourself at the bottom of this post.

[Slide: the big data risk management workflow]

The slide does a great job of breaking down the workflow. To spare your eyes, here are the steps, followed by a toy sketch of the flow:

  1. The Red Hat JBoss Data Grid and the Hortonworks Data Platform (HDP) are primed with real-time and batch loads.
  2. Data transformations take place in HDP. Data flows back and forth between the Red Hat JBoss Data Grid in-memory compute tier and HDP.
  3. In-memory calculations take place within Red Hat JBoss Data Grid. Results are routed to HDP for interactive and batch job loads.
  4. Interactive SQL queries and batch job execution take place within HDP.
  5. Business users leverage business intelligence tools to query data interactively in HDP and/or a relational database management system (RDBMS).
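
To ground the workflow, here is a toy, purely illustrative Python sketch of the pattern described above: fast calculations in an in-memory tier, with results handed off to a batch/interactive tier for querying. A dict stands in for JBoss Data Grid and a list stands in for HDP; none of the actual product APIs appear, and the risk calculation is a deliberately trivial placeholder.

    in_memory_grid = {}   # stand-in for the in-memory compute tier
    hdp_store = []        # stand-in for HDP batch storage

    # 1. Prime the grid with (hypothetical) position data.
    in_memory_grid.update({"ACME": 1_000_000.0, "GLOBEX": 250_000.0})

    # 2-3. Run a trivial placeholder "risk" calculation in memory and
    # route the results to the batch tier.
    for ticker, exposure in in_memory_grid.items():
        risk = exposure * 0.05  # placeholder: flat 5% haircut, not a real model
        hdp_store.append({"ticker": ticker, "exposure": exposure, "risk": risk})

    # 4-5. Interactive and batch consumers query the stored results.
    print([row for row in hdp_store if row["risk"] > 20_000])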

Check out the full session right here:

Storage Hangout: Is POSIX Compatibility Relevant in the Age of Big Data?

POSIX, or “Portable Operating System Interface for UNIX,” is a set of standards specified by the IEEE and ISO that defines the interface between programs and operating systems. It defines the application programming interface (API), along with command-line shells and utility interfaces, for software compatibility with variants of Unix and other operating systems. By designing programs to conform to POSIX, developers have some assurance that their software can be easily ported to POSIX-compliant operating systems. It has been around since the late 1980s.
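
As a small illustration, the sketch below uses Python’s os module, which exposes the POSIX file API more or less directly. The path is a placeholder; the point is that the same open/write/read/close calls work unchanged on any POSIX-compliant system, which is exactly why storage products such as Red Hat Storage Server go to the trouble of offering POSIX-compatible access.

    import os

    path = "/tmp/posix-demo.txt"  # placeholder path on any POSIX file system

    # os.open/os.write/os.read/os.close map onto the POSIX open(2),
    # write(2), read(2), and close(2) calls that portable software
    # is written against.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    os.write(fd, b"written through the POSIX API\n")
    os.close(fd)

    fd = os.open(path, os.O_RDONLY)
    print(os.read(fd, 1024).decode())
    os.close(fd)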

So what does an established concept that is more than 20 years old, like POSIX compatibility, have to do with the emerging-technology age of big data?

Tune into the Storage Hangout to find out, as Brian Chang and Irshad Raihan of Red Hat Big Data discuss this and more.

Also, be sure to catch Irshad on the upcoming BrightTALK webinar “The New Shape of Software Defined Storage” on Thursday, 11/13 at 1:00pm EST.

Register for free here: https://www.brighttalk.com/webcast/9775/129643
