By Sayan Saha, Sr. Manager, Product Management, Storage & Data Business, Red Hat.
Last week Red Hat announced the general availability of Red Hat Enterprise Linux Atomic Host – a host environment optimized to run containerized applications with a minimal footprint. Red Hat Enterprise Linux Atomic Host is designed to simplify maintenance using image-based updates and rollbacks, and includes orchestration toolsets such as Kubernetes for managing containers across a cluster of hosts. The new Red Hat Enterprise Linux Atomic Host inherits the broad hardware ecosystem and the reliability, stability and security the industry has come to expect from Red Hat Enterprise Linux.
What this means for Red Hat Storage customers
This announcement is significant for Red Hat Storage customers on multiple fronts. Workloads running in containers require persistent storage for application code and data. Given the rapid growth in the number of containers within today’s IT shops, software-defined storage has an advantage over traditional storage in terms of supporting the performance, scale and efficiency requirements of containerized applications.
With the latest release of Red Hat Enterprise Linux Atomic Host, a Red Hat Storage Server volume can be natively mounted inside one or more containers and used for storing and retrieving application state and data. On the client side, Red Hat Enterprise Linux Atomic Host includes an NFS client that can be used to access Red Hat Storage Server today. Our engineers are hard at work bringing broader client-side support for accessing storage from containerized applications and workloads, in the form of super-privileged containers running on Red Hat Enterprise Linux Atomic Host.
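As an illustrative sketch of the pattern described above (the server hostname, volume name, mount points and container image below are all hypothetical placeholders, not names from this announcement), a Red Hat Storage volume might be mounted on the host and then bind-mounted into a container:

```shell
# Hypothetical example -- substitute your own server, volume and image names.

# Mount a Red Hat Storage (GlusterFS) volume on the Atomic host.
mount -t glusterfs rhs-server.example.com:/appdata /mnt/appdata

# Bind-mount the host path into a container so the application can
# persist its state and data on shared, scale-out storage.
docker run -d -v /mnt/appdata:/var/lib/app/data registry.example.com/myapp

# Alternatively, use the NFS client included in Atomic Host to reach a
# Red Hat Storage volume exported over NFS.
mount -t nfs rhs-server.example.com:/appdata /mnt/appdata-nfs
```

Because the container sees an ordinary directory, application data survives container restarts and can be reached from containers on any host that mounts the same volume.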
Containers push the limits of traditional storage
Industry analysts report both increasing complexity and a growing number of applications running inside containers. To support container-based workloads, organizations require sophisticated, modern storage infrastructure that can support container migration (of both image and data volumes), locality awareness and failover across hosts. Most importantly, these storage solutions must scale on demand to meet the storage demands of enterprises that deploy containerized applications at scale.
In this whitepaper co-authored with Cisco, we examine the future of containerized applications and the storage considerations they will drive. In particular, we find that enterprises can better address the storage challenges of using containers in massively scalable private and hybrid cloud environments with open, scale-out, distributed file, object and block storage than they can with traditional, monolithic storage.
Open, software-defined storage supports the workloads of tomorrow
Many CIOs and IT architects are turning to open, software-defined storage to address the need for agility and to support rapidly evolving workloads in their data centers. They often find that their DevOps teams hit limits in the performance or scale of traditional storage solutions, despite expensive contract renewals and upgrades. In addition, most traditional storage solutions were created in an era prior to containerization, leaving them ill-equipped to integrate with or support such environments. Red Hat Storage, born in the software-defined era, offers the scalability and flexibility needed to keep up with today's container storage demands.
Additionally, we find that software-based hyper-converged architectures are gaining traction in traditional virtualized environments. Hyper-converged architectures enable users to combine compute and storage clusters, running applications on the same set of hosts that also serve storage pools for those applications. We predict that such hyper-converged architectures will quickly become popular for deploying containerized applications across hosts, thanks to their superior usability, simplicity and TCO benefits.
Red Hat Storage, with Ceph and Gluster, provides a compelling choice of storage options for containerized environments. Learn more at redhat.com/storage.