Down the Stretch: Gluster Scale-out Community Contest

We’re coming down to the final 24 hours of the 1st International GlusterFS Scale-out Community Contest. As of right now, the top 5 looks like this:

  • Joe Julian: 118
  • Semiosis: 69
  • Jeff Darcy: 69
  • Greg Swift: 34
  • kyle.sabine: 17

Continue reading “Down the Stretch: Gluster Scale-out Community Contest”

Datamation Article on Cloud Storage + CIC Case Study

Over at Datamation, Jeff Vance has written an informative guide for prospective cloud storage customers with a list of questions they should be asking. I was pleased to find that one of the questions is “Are files and backups persistently (and quickly) available?” with quotes from CIC’s Nhan Nguyen, Chief Scientist and CTO. Here are the relevant bits:

Continue reading “Datamation Article on Cloud Storage + CIC Case Study”

TheStraightTech: Mount GlusterFS at Boot Time on Ubuntu

Some Ubuntu users have reported errors mounting GlusterFS volumes at boot time due to the order that the Upstart system init daemon starts services. Gluster community member Semiosis posted a solution:

On Ubuntu, if you add a glusterfs volume to the fstab on a server, using localhost as the server address, you’ll find that the volume does not get mounted at boot time. This is because the fstab mounts are executed before the glusterd service has been started during the boot process.

Ubuntu uses the upstart system init daemon, and it uses a special daemon called mountall to execute the fstab mounts. Because of this, the only way to have the glusterd service available when the fstab mounts are executed is to “upstartify” glusterd by converting the glusterd init script to an upstart job.
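As a rough sketch of the fix he describes (the job file name, volume name and mount point below are illustrative assumptions, not taken from his post, and the sketch assumes glusterd has already been converted to an Upstart job named "glusterd"):

```
# /etc/init/mount-gluster.conf -- illustrative Upstart job
# Assumes /etc/fstab contains a line like:
#   localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0
start on started glusterd
task
exec mount /mnt/myvol
```

The key ordering constraint is the `start on started glusterd` stanza, which defers the mount until the daemon is actually running.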

TheStraightTech: Targeted Self-heal (repairing less than the whole volume)

There are a variety of strategies one can employ to improve the performance of GlusterFS self-heal procedures. Gluster community member Semiosis weighs in with one he dubs targeted self-heal:

The canonical repair procedure (find /client/mount -print0 | xargs -0 stat > /dev/null) verifies the integrity of every file in the whole volume. That’s great, but what if you know that only a subset of the files needs to be repaired – say, only files on one brick, or only files modified in the last half-hour? Here is a strategy for “targeted self-heal” which can save lots of time compared to healing the whole volume.

The general strategy here is to run find on a good replica brick, then stat the resulting files through a client mount. Now let’s go through a couple of examples which involve an Nx2 volume with two servers, each having N bricks, so each server has a complete copy of the whole volume. The first example is where one brick disk has died and needs to be replaced with a new empty disk. The second example is where an entire server needs to be shut down temporarily, say for a kernel upgrade.
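A minimal sketch of that strategy – the brick path, mount point, and half-hour window are illustrative assumptions:

```shell
BRICK=/export/brick1    # a known-good replica brick (local path on a server)
MOUNT=/client/mount     # a GlusterFS client mount of the same volume

# List files modified in the last 30 minutes on the good brick, then
# stat the corresponding paths through the client mount, triggering
# self-heal only for that subset instead of the whole volume.
cd "$BRICK" && find . -mmin -30 -type f -print0 \
  | (cd "$MOUNT" && xargs -0 stat > /dev/null)
```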

Read the rest of his article here.

TheStraightTech: How are Distribution and Replication Related?

If you use GlusterFS, you know that there are several types of volumes you can create:

  • Distributed – Distributes files throughout the bricks in the volume.
  • Replicated – Replicates files across bricks in the volume.
  • Striped – Stripes data across bricks in the volume.
  • Distributed Striped – Distributes data across striped bricks in the volume.
  • Distributed Replicated – Distributes files across replicated bricks in the volume.

In GlusterFS, unlike most similar systems, these two types of data-placement decisions are kept completely separate.  Distribution, which places each file in exactly one place for the purpose of aggregating capacity within a single namespace, is provided by the DHT translator.  Replication, which places each file in N places for improved availability, is provided by the AFR translator.

The relationship between DHT and AFR can be viewed from two opposite perspectives – the configuration order and the execution order.  If you hear someone refer to “distribute-first” or “replicate-first” that generally refers to the configuration order.
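Concretely, both translators are wired up from the brick order given at volume creation. In the standard CLI (server and brick names below are illustrative), `replica 2` groups each consecutive pair of bricks into an AFR replica set, and DHT then distributes files across those sets:

```shell
# Illustrative 2x2 distributed-replicated volume: the first two bricks
# form one AFR replica pair, the next two form another, and DHT
# spreads files across the two pairs.
gluster volume create myvol replica 2 \
  server1:/export/brick1 server2:/export/brick1 \
  server1:/export/brick2 server2:/export/brick2
gluster volume start myvol
```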

September 30 Webinar: Introduction to GlusterFS

This webinar is the latest in a monthly series designed to give new users an overview of GlusterFS and its capabilities. Learn everything you need to get started right away.

When: 9am PDT on Friday, September 30

RSVP now!

Continue reading “September 30 Webinar: Introduction to GlusterFS”

TheStraightTech: GlusterFS Over NFS, Virtual IP Migration, and Replication

Community linchpin Jjulian has posted a helpful HOWTO on how GlusterFS NFS works when migrating a virtual IP in a replicated volume:

How Virtual IP Migration works with GlusterFS NFS and a replicated volume:

Given [a] simple setup, where a client is connected via NFS to Server_A, the client communicates with a glusterfs client that’s designed to be an nfs server. This client loads a “vol” file with a specific nfs translator in order to accomplish this. The rest of that “vol” file works exactly like the FUSE client in that it connects via the distribute and replicate or stripe translators to the bricks via the servers.
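The client side of such a setup might look like the following sketch; the address, volume name and mount point are assumptions, and since Gluster’s built-in NFS server speaks NFSv3, the client mounts with vers=3:

```shell
# Mount the volume through a floating virtual IP rather than a fixed
# server address; if the VIP migrates to Server_B, the NFS service
# there serves the same replicated volume.
mount -t nfs -o vers=3,proto=tcp 192.0.2.10:/myvol /mnt/myvol
```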

Read the entire article

Gluster Grows Global Reach Through Partnership with CSS Corp

Leading Information Communications Technology services company adds GlusterFS to portfolio to deliver a high-performance, scale-out storage system to help organizations control rising storage costs

Gluster, the leading provider of scale-out, open source storage solutions, today announced its partnership with CSS Corp, a global Information and Communications Technology (ICT) services company. CSS Corp’s presence in key markets like the US, Europe and India will help Gluster meet the widespread demand and need for an innovative scale-out NAS storage solution like GlusterFS.

“We at CSS Corp focus extensively on talent and experience to ensure that our end users’ needs are being met capably, comprehensively and cost-effectively. By partnering with Gluster, we are expanding our storage services and offering our customers a scale-out storage solution that will help organizations control perpetually rising storage costs and thus enjoy reliable and secure storage services,” said Sandip Deb, senior vice president at CSS Corp.

Gluster’s software-only solutions enable enterprises to deploy storage as a virtualized, commoditized, and scale-on-demand pool, radically reducing storage costs. CSS Corp is focused on designing, developing, deploying and managing end-to-end IT and network services for its clients that will allow them to address the current and future needs of their infrastructure. With the addition of GlusterFS to its array of services, CSS Corp will now be able to deliver a storage option to customers requiring massively scalable, highly available storage for their dynamic storage infrastructure.

“CSS Corp has a proven track record and global reach to deliver enhanced value and innovative services to its customers,” said Jeff Olson, senior director of channel sales at Gluster. “The worldwide popularity of Gluster is growing every day and with CSS Corp’s global presence we are pleased to team up with them on this new agreement. We are looking forward to helping their customers reduce storage costs and scale performance while also providing high availability on demand.”

Supporting Resources

Follow Gluster on the web for the latest news and information.

About Gluster Technology
Gluster’s software-only solutions let enterprises deploy storage the same way they deploy computing today–as a virtualized, commoditized, and scale-on-demand pool, radically improving storage economics. Combined with the customer’s choice of commodity computing and storage resources, Gluster can scale-out to petabytes of capacity and GB/s of throughput at a fraction of the cost of proprietary systems. Gluster ensures high availability with n-way replication both within and between public and private data centers. Gluster is deployable both on-premise (as a virtual appliance or bare-metal software appliance) and in public clouds such as Amazon Web Services. Gluster is the primary author and maintainer of the open-source GlusterFS software, which has been downloaded over 200,000 times.

About Gluster
Gluster is the leading provider of open source storage solutions for public, private and hybrid clouds. Over 150 enterprises worldwide have used Gluster in commercial deployments ranging from a few terabytes to multiple petabytes, across the most demanding applications in digital media delivery, healthcare, Internet, energy and biotech. Gluster is privately-held and headquartered in Sunnyvale, California. Visit us at www.gluster.com.

CSS Corp
CSS Corp (www.csscorp.com) is a fast-growing global information and communications technology company with an impeccable record for designing, developing, deploying and managing end-to-end IT and network services. CSS Corp is headquartered in San Jose, California, USA and has delivery centers around the world, including Chennai, Bangalore, Pune and Coimbatore in India; Utah and Boston in the USA; Manila in the Philippines; Poland; and Mauritius. CSS Corp employs in excess of 5,000 people globally. From application development, testing and optimization through to enterprise-level cloud enablement and round-the-clock technical support services, CSS Corp provides a truly impressive range of quality services that focus on delivering strategic value and operational efficiency for its customers. The company is backed by investors including SAIF Partners, Goldman Sachs and Sierra Ventures.

# # #

Media Contact:
Gluster
Danielle Tarp
Mindshare PR for Gluster
650-947-7405
Danielle@mindsharepr.com

CSS Corp
Vidya Vijayragvan
Text 100 Public Relations India Pvt. Ltd.
044- 28211302
Vidya.vijayragvan@text100.co.in

Upcoming Webinar: Delivering Scale-out NAS for Hybrid Clouds (with Redapt)

Gluster has partnered with Redapt, Inc., an innovative data center architecture and infrastructure solutions provider, to integrate GlusterFS with hardware, providing customers with highly scalable NAS storage technology for on-premise, virtual and cloud environments. Gluster’s storage technology enables Redapt to offer a comprehensive, cost-effective storage solution delivering the scalability, performance and reliability that companies need to effectively run their data centers. This webinar will provide an overview of the partnership, benefits of the joint solution, and include use cases of how customers today are deploying the joint solution.

RSVP here.

Date: Thursday, September 22
Time: 10 AM PT / 1 PM ET / 17:00 UTC

Speakers:
Tom Trainer, Gluster, Director of Product Marketing
Matt Huff, Redapt, VP Business Development
Jeff Dickey, Redapt, Senior Data Center Solutions Engineer

RSVP:
Gluster webinar registration page

GlusterFS HOWTO: NFS Performance with FUSE Client Redundancy

There’s another great post on the community Q&A site, this one about NFS performance, excessive load times for PHP-based web sites, and the FUSE client. It was written by Joe Julian, our resident IRC chairman and all-around Gluster soup stirrer:

There’s been a lot of discussion about the latency due to self-heal checking with the FUSE client, and how with most PHP-based web sites the sheer volume of files that must be opened for each page load causes page times to be excessive. Common workarounds have been to use web server caches or PHP caches to avoid disk reads wherever possible. When this cannot be done, the recommendation has been to use NFS, as the kernel caches reduce the disk reads. The problem, of course, with NFS mounts is the lack of redundancy. ucarp can mitigate the problem, but there are still problems with lost TCP connections causing errors.

Another solution is to move the NFS service to the client. NFS is provided in GlusterFS by using the same client that’s used for FUSE, with an NFS server translator added instead of FUSE. This lends itself very well to being moved to the client and having the client provide its own NFS service.

There are two ways to take advantage of this technique.
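Joe’s two approaches are in his article; the general shape of the idea – the client running Gluster’s NFS translator itself and mounting over loopback so the kernel NFS cache applies – might look like this sketch (volume name and web root are illustrative assumptions):

```shell
# On the web server itself, with the Gluster NFS service running
# locally, mount the volume over loopback; kernel NFS caching then
# absorbs the repeated small reads of a PHP page load.
mount -t nfs -o vers=3,nolock,proto=tcp localhost:/myvol /var/www
```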

Read the rest of Joe’s article, in full. If you’ve had performance issues trying to use GlusterFS for web site file storage, you might find this useful.

Linux Kernel Tuning for GlusterFS

One of our support engineer gurus, Harsha, has published a very detailed post on tuning parameters for the Linux kernel that may impact your GlusterFS performance. Here’s his introduction:

Every now and then, questions come up here internally and with many enthusiasts on what Gluster has to say about kernel tuning, if anything.

The rarity of kernel tuning is on account of the Linux kernel doing a pretty good job on most workloads. But there is a flip side to this design. The Linux kernel historically has eagerly eaten up a lot of RAM, provided there is some, or driven towards caching as the primary way to improve performance.

For most cases, this is fine, but as the amount of workload increases over time and clustered load is thrown upon the servers, this turns out to be troublesome, leading to catastrophic failures of jobs etc.

Having had a fair bit of experience looking at large memory systems with heavily loaded regressions, be it CAD, EDA or similar tools, we’ve sometimes encountered stability problems with Gluster. We had to carefully analyse the memory footprint and amount of disk wait times over days. This gave us a rather remarkable story of disk thrashing, huge iowaits, kernel oopses, disk hangs, etc.

This article is the result of my many experiences with tuning options which were performed on many sites. The tuning not only helped with overall responsiveness, but it dramatically stabilized the cluster overall.

When it comes to memory tuning the journey starts with the ‘VM’ subsystem which has a bizarre number of options, which can cause a lot of confusion.

It’s a great article. Read the whole post.
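For orientation, the ‘VM’ subsystem options he refers to live under /proc/sys/vm and are adjusted with sysctl. The sketch below only illustrates the mechanism; the values are placeholders, not recommendations from his article:

```shell
sysctl vm.swappiness                     # inspect a tunable
sysctl -w vm.swappiness=10               # example: prefer page cache to swap
sysctl -w vm.dirty_ratio=20              # % of RAM dirty before writers block
sysctl -w vm.dirty_background_ratio=10   # background writeback threshold
# Add the same settings (without "sysctl -w") to /etc/sysctl.conf to persist.
```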

Our Trophy Case Runneth Over – Gluster Wins a Bossie

Gluster has won yet another award, coming on the heels of accolades from Storage Magazine, Citrix Synergy, and TiE50, to name just a few.

This time, it’s a Bossie, one of the awards handed out annually by InfoWorld to the most deserving open source projects and products. Gluster was nominated – and won – in the Data Center and Cloud category.

Awards are always nice – they reflect all the hard work we’ve put in over the years to make a great product that meets customers’ needs. But really, the greatest reward we can receive is when our users, fans and advocates take it upon themselves to tell their friends about us. Those are the awards that matter the most.
