For Red Hat, it’s 1994 all over again

In 1994, and in fact throughout the late 1990s and into the early 2000s, there was a pronounced shift as the industry moved away from proprietary hardware and software and toward industry-standard servers running Linux. Today, we see the same kind of shift; only now it centers on Big Data.

Come listen to the talk given by Ranga Rangachari, VP/GM Red Hat Storage at Strata + Hadoop World. Gain key observations on Big Data. Learn how the Red Hat product portfolio can help you capitalize on the full scope of your Big Data. And understand how Red Hat partners with the key vendors in the space to multiply your options, giving you the tools you need to benefit from your Big Data.

The old data lifecycle centered on multiple silos. New data is different. Listen to how data management has shifted. If you’ve ever wanted to learn how to successfully ingest data from multiple sources, integrate that wealth of data, and discover new ways to tap into it so you can truly open up its possibilities, this is the talk for you. Check it out below!

Big Data Update with Steve Watt, Chief Architect

 

Q: Intro: Steve Watt, Red Hat Chief Architect, Hadoop and Big Data

A: Background with HP and IBM. Always been on cutting edge. When I came to Red Hat I thought, “What does Red Hat have to do with Big Data, and why do they want me to join?” Now I know there’s a ton of big data stuff happening. Very cool projects.

Q: Can you tell us about some of those projects?

A: We offer value to customers at multiple layers in the stack. Obviously RHEL is the OS layer of choice for most data centers and internet companies. Then you have Storage (plugin) and OpenStack (IaaS). On top of that, OpenShift (PaaS). But more importantly, it’s our partnerships that matter most to customers.

Q: What do you mean? Can you elaborate?

A: Most vendors you talk to about big data will talk about a “stack”. At Red Hat, open and open source are burned into our DNA. We know customers want to build best-of-breed solutions for themselves by using the best pieces for a particular task, rather than get locked into a “stack” where a few pieces are optimal but most are not for a particular workload. Our approach is to build open components that work seamlessly with third-party software and hardware, e.g., Splunk, Intel, Cloudera, Hortonworks, Continuum, Lucidworks, and SAP.

Q: Can you tell us a little bit about what to expect from Red Hat’s Big Data position in 2015?

A: More partnerships. More around OpenStack, Storage and Middleware. More thought leadership. Also, be sure to bookmark enterprisersproject.com and redhat.com/bigdata for regular updates.

Learn More

For more detail about innovative projects Red Hat is involved in, hit up these links:

Click here to learn more about Hortonworks Data Platform 2.1 on Red Hat Storage 3.0.2

Click here to learn about our Hadoop plug-in refresh and the Ambari project.

Machine data growth and retention got you beat? Red Hat Storage has your back!

by Irshad Raihan, Red Hat Big Data Product Manager

Machine data is by far the fastest growing component of Big Data and the Internet of Things. Every day, more and more devices spew out logs by the second that can be analyzed for patterns and anomalies. Machine data tends to be more credible and rich in insights when compared to human data (think social sentiment).

Many enterprises use machine data analytics to improve process efficiency, identify cyber security threats, and optimize energy usage. Splunk is a market leader in machine data analytics – it’s essentially the Google of machine data. Once the data is collected and stored, Splunk can run dynamic searches that reveal actionable insights in a timely manner. As you can imagine, Splunk gets better at detecting patterns as data sets grow larger. Also, a number of regulatory compliance standards require the retention of data for longer periods than ever before.

The combination of these two forcing functions presents a conundrum for Splunk customers. On the one hand, they may be looking for patterns in machine data to lower costs and improve efficiency. On the other hand, they now have to spend more to store large volumes of data and keep it searchable so they can find those patterns effectively. Traditional storage platforms are expensive and struggle to scale capacity to keep up with the growth of machine data.

Red Hat Storage Server offers a cost-effective solution to this conundrum. Customers can use Red Hat Storage to build a hybrid model: low-latency direct attached storage for “hot” data, and a scalable, highly available storage cluster for “cold” data that needs to be retained for much longer periods of time yet still included in dynamic searches. Check out this whitepaper by Function1 – a premier Red Hat and Splunk partner and integrator for operational analytics – on using Red Hat Storage Server as a Hybrid Storage Solution for Splunk Enterprise. In addition, hear from Sandeep Khaneja, VP at Function1, about the partnership with Red Hat.
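To make the hybrid model concrete: Splunk’s `indexes.conf` lets hot/warm buckets live on local disk while cold buckets roll to a separately mounted volume. The sketch below is purely illustrative – the index name, mount points, and retention period are hypothetical, and you should consult the Function1 whitepaper for a vetted configuration – but it shows the general shape of pointing `coldPath` at a Red Hat Storage (GlusterFS) mount:

```ini
# Hypothetical indexes.conf stanza for a hybrid storage layout.
# Hot/warm buckets stay on low-latency direct attached storage;
# cold buckets roll to a Red Hat Storage mount but remain searchable.
[machine_data]
homePath   = /fast/local/splunk/machine_data/db          # hot + warm buckets (DAS)
coldPath   = /mnt/rhs-volume/splunk/machine_data/colddb  # cold buckets (Red Hat Storage mount)
thawedPath = /mnt/rhs-volume/splunk/machine_data/thaweddb

# Retain (and keep searchable) roughly 7 years of data before
# buckets are frozen, e.g. to meet long compliance retention windows.
frozenTimePeriodInSecs = 220752000
```

Because cold buckets are still part of the index, searches transparently span both tiers; only the storage media behind each tier differs.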

Learn more about the hybrid storage solution for Splunk in this Red Hat webinar.

Can Big Data Save the Environment by Making Your Company More Eco-Friendly?

In the second part of our discussion with Sandeep Khaneja of Function1, we focus on specific vertical markets where he is seeing major opportunities for Big Data. Sandeep sees significant progress in the area of energy management and energy consumption. As enterprise organizations look to reduce building facility costs such as lighting, cooling, and heating, Big Data tools help monitor and analyze massive data sets in an effort to conserve the energy being consumed.

In addition, get the latest on the Splunk 6.2 release and learn more about the new features and functionality including improved performance and ease of management.

To learn more about Function1, go here.

Can a Hybrid Storage Solution Find a Needle in Your Big Data Haystack?

A scalable storage solution for Big Data is becoming more and more critical to IT. As enterprise companies experience larger data set growth, the need to mine that data becomes increasingly important. Is a hybrid cold storage solution really able to find a needle in a cavernous data haystack?

Tune in to this week’s Storage Hangout with our guest, Sandeep Khaneja, as he talks about the latest at Function1 and Splunk. Hear his take on customer scenarios for Splunk’s scalable hybrid storage use case with massive data sets. The interview is the first of two episodes with Sandeep.

To learn more about Function1, go here.
