In October 2014, the first Ceph environment at Target, a U.S.-based international chain of department stores, went live. In this insightful slide show (embedded at the end of this post), Will Boege, Sr. Technical Architect at Target, talks about the process, highlighting challenges faced and lessons learned in Target’s first ‘official’ OpenStack release.

Ceph was selected to replace the traditional array-based approach that was implemented in a prototype Havana environment. Will outlines the criteria for the move in four succinct bullets:

  • The traditional storage model was problematic to integrate
  • Maintenance and purchase costs from array vendors could become prohibitive
  • Traditional storage area networks just didn’t “feel” right in this environment
  • Ceph integrated tightly with OpenStack

Ceph was to be used for:

  • RBD for OpenStack instances and volumes (see the configuration sketch after this list)
  • RADOSGW for object storage
  • RBD backing Ceilometer MongoDB volumes
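
To make the RBD integration concrete, here is a minimal sketch of how Cinder is typically pointed at an RBD pool. This is not Target’s actual configuration; the pool name, Ceph user, and secret UUID are placeholders, and the option names are those of the Havana/Juno-era RBD driver:

    # /etc/cinder/cinder.conf -- RBD-backed volumes (hypothetical values)
    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Glance and Nova take analogous settings, which is what makes the “integrated tightly” bullet above more than marketing: images, ephemeral disks, and volumes can all live in one cluster.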

Initial deployment included three monitor nodes, twelve OSD nodes, two 10GbE links per host, and a basic LSI ‘MegaRAID’ controller.
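
A cluster of that shape might be described by a ceph.conf along the following lines. The fsid, hostnames, and networks here are invented for illustration; one common pattern with two 10GbE links is to dedicate the second to Ceph’s cluster network so replication traffic doesn’t compete with client I/O:

    # /etc/ceph/ceph.conf -- hypothetical values matching the
    # three-monitor, twelve-OSD footprint described above
    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
    mon_initial_members = mon01, mon02, mon03
    mon_host = 10.1.0.11,10.1.0.12,10.1.0.13
    public_network = 10.1.0.0/24
    cluster_network = 10.2.0.0/24   # second 10GbE link, replication only
    osd_pool_default_size = 3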

After the rollout, performance issues became evident in the environment, leading to the first lesson learned: instrument your deployment. Compounding the performance problems were mysterious reliability issues, which led to the second lesson learned: do your research on the hardware your server vendor provides. And this was just the beginning.
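
“Instrument your deployment” can start as simply as polling the cluster for health and per-OSD latency on an interval. The sketch below shells out to the ceph CLI (so it assumes an admin keyring on the host), and because JSON field names shift somewhat between Ceph releases it parses defensively:

    #!/usr/bin/env python
    # Poll Ceph for overall health and per-OSD latency -- a monitoring
    # sketch that assumes the `ceph` CLI and an admin keyring are present.
    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph subcommand and parse its JSON output.
        out = subprocess.check_output(("ceph",) + args + ("--format", "json"))
        return json.loads(out)

    status = ceph_json("status")
    health = status.get("health", {})
    # Older releases report "overall_status", newer ones "status".
    print("cluster health:", health.get("status") or health.get("overall_status"))

    # Per-OSD commit/apply latency: a slow outlier here is often the first
    # symptom of a failing disk or a misbehaving RAID controller.
    for osd in ceph_json("osd", "perf").get("osd_perf_infos", []):
        stats = osd.get("perf_stats", {})
        print("osd.%s commit=%sms apply=%sms" % (
            osd.get("id"),
            stats.get("commit_latency_ms"),
            stats.get("apply_latency_ms")))

Feeding numbers like these into whatever graphing stack is already in house makes regressions visible before users report them, and per-OSD latency outliers are exactly the kind of evidence that surfaces the hardware trouble the second lesson alludes to.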

To learn more about Ceph, or to take a free test drive, please visit the Red Hat Ceph Storage homepage.

[slideshare id=51874888&doc=06-cephlessonstarget-150820194458-lva1-app6891]