Part 2: Connecting an OCP application to a MySQL instance

By Marko Karg and Annette Clewett

At the end of the first post in this blog series, we deployed a MySQL instance using StatefulSets (STS). Today, we want to use that same deployment method and connect a WordPress instance to the MySQL database and see what happens if the database fails. Remember, we’re using a MySQL pod created from a StatefulSet.

OpenShift on AWS test environment

All the posts in this series use an OCP-on-AWS setup that includes 8 EC2 instances deployed as 1 master node, 1 infra node, and 6 worker nodes that also run OCS gluster and heketi pods. The 6 worker nodes are basically the storage provider (OpenShift Container Storage [OCS]) and persistent storage consumers (MySQL). As the following figure shows, the OCS worker nodes are of instance type m5.2xlarge with 8 vCPUs, 32 GB Mem, and 3x100GB gp2 volumes attached to each node for OCP and a single 1TB gp2 volume for the OCS storage cluster. The AWS region us-west-2 has availability zones (AZs) us-west-2a, us-west-2b, us-west-2c, and the 6 worker nodes are spread across the 3 AZs, two nodes in each AZ. This means the OCS storage cluster is “stretched” across these 3 AZs.

MySQL setup

We’ve created a headless MySQL service already, using an OCS-based persistent volume claim (PVC):

oc get services
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
mysql-ocs   ClusterIP   None         <none>        3306/TCP   16h

The STS we need is also created as described in the first post of this series. Once the STS has been created and the container has started, we have a running MySQL instance:

oc get pods
NAME                READY   STATUS    RESTARTS    AGE
mysql-ocs-0         1/1     Running   0           21h

With a service and a database up, we can move forward to get our application deployed.

Although there are templates available that would set up WordPress with a preconfigured database in one shot, we want to take the long way and start from scratch to illustrate the required steps.

WordPress setup

Let’s create a new PHP application to run WordPress:

# oc new-app php~https://github.com/wordpress/wordpress
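
If you want to follow the source-to-image build while it runs (a quick optional check, not part of the original flow), the build logs can be tailed; "wordpress" is the build config that oc new-app created:

oc logs -f bc/wordpress
oc get builds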

After a few seconds, we have the required pods in our project:

# oc get pods
NAME                READY     STATUS      RESTARTS   AGE
mysql-ocs-0         1/1       Running     0          21h
wordpress-1-build   0/1       Completed   0          22h
wordpress-1-q5jts   1/1       Running     0          22h

To make our WordPress instance available to the world, we need to expose it:

# oc expose service wordpress

Now two services are available, one for MySQL and one for WordPress:

# oc get service
NAME                                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
glusterfs-dynamic-31a07eb1-3a72-11e9-96fc-02e7350e98d2   ClusterIP   172.30.165.245   <none>        1/TCP               1m
mysql-ocs                                                ClusterIP   172.30.210.183   <none>        3306/TCP            1m
wordpress                                                ClusterIP   172.30.1.139     <none>        8080/TCP,8443/TCP   28s

Let’s connect to the web interface of WordPress now. To do so, we take the HOST / PORT portion from the following command:

oc get route wordpress
NAME        HOST/PORT                                       PATH   SERVICES    PORT       TERMINATION   WILDCARD
wordpress   wordpress-marko.apps.ocpocs311.ocpgluster.com          wordpress   8080-tcp   None          None

In our case, that’s wordpress-marko.apps.ocpocs311.ocpgluster.com. Take that string and put it into a browser to go to the WordPress interface. The WordPress web interface will guide us through setting up the database connection now.
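
Before walking through the installer, it's worth a quick check that the route actually answers. This is an optional step we're adding here; the hostname is the one from the route above:

curl -I http://wordpress-marko.apps.ocpocs311.ocpgluster.com/ | grep HTTP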

Most of the values the installer asks for are pre-defined in our MySQL STS: the database name is “wordpress”, the username is “admin”, and the password is “secret”.

cat mysql-sts.yaml
….omitted….
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql-ocs
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: admin
        - name: MYSQL_PASSWORD
          value: secret
….omitted….
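
If you'd like to double-check these credentials before typing them into the installer, the mysql client inside the pod can be used. This is a quick sanity check we're adding; it assumes the admin/secret values from the STS above:

oc exec mysql-ocs-0 -- mysql -uadmin -psecret -e "SHOW DATABASES;"

The output should list the wordpress database among the schemas the admin user can see.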

Our database host can be found by running this command:

oc get services
NAME                                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
glusterfs-dynamic-31a07eb1-3a72-11e9-96fc-02e7350e98d2   ClusterIP   172.30.165.245   <none>        1/TCP               1m
mysql-ocs                                                ClusterIP   172.30.210.183   <none>        3306/TCP            1m
wordpress                                                ClusterIP   172.30.1.139     <none>        8080/TCP,8443/TCP   28s

So we will use mysql-ocs for the database host.
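
WordPress resolves this host through the cluster DNS. If you want to confirm the name resolves from the WordPress pod before filling in the form (an optional check, assuming getent is available in the PHP image; the pod name is the one from our earlier oc get pods output), something like this should work:

oc exec wordpress-1-q5jts -- getent hosts mysql-ocs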

If the information entered is correct, WordPress guides us through the rest of the installation process.

We now need to enter some information for the web front end (site title, admin user, and password).

The installation takes some time and finally presents a success screen.

So now our deployment is done. Next, we’ll log into WordPress and create some test content.

Failure scenario

So now that we have everything in place, we want to see what happens when our MySQL container fails. To simulate that, we’ve set up a client that checks the website using the “curl” command.

Because we’re only interested in the HTTP response over a longer time, we run it in a loop, trimming the output to what we’re interested in:

while true; do
  date
  curl -I http://wordpress-marko.apps.ocpocs311.ocpgluster.com/2019/02/26/lorem-ipsum/ 2>&1 | grep HTTP
  sleep 1
done

As a first test, we simply kill the MySQL pod while watching the preceding loop closely:

# oc get pods
NAME                READY STATUS     RESTARTS  AGE
mysql-ocs-0         1/1   Running    0         21h
wordpress-1-build   0/1   Completed  0         22h
wordpress-1-q5jts   1/1   Running    0         22h

Delete the MySQL pod:

oc delete pod mysql-ocs-0

Here’s the output from the preceding curl loop:

Mi 27. Feb 09:40:44 UTC 2019
HTTP/1.1 200 OK
Mi 27. Feb 09:40:45 UTC 2019
HTTP/1.1 500 Internal Server Error

Mi 27. Feb 09:40:59 UTC 2019
HTTP/1.1 500 Internal Server Error
Mi 27. Feb 09:41:03 UTC 2019
HTTP/1.1 200 OK

So the pod failure effectively caused our WordPress instance to be unavailable for 18 seconds (09:40:45 to 09:41:03), give or take a few seconds for the curl interval. We’ve run a large number of the same test and ended up with an average value of 12 seconds. This is the time that the WordPress application is unavailable due to the MySQL pod being deleted. Once the pod is re-created and the OCS storage is mounted in the pod, WordPress is available again.
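
To get such an average without reading timestamps off the screen, a small variation of the curl loop can time the outage window itself. This is a convenience sketch we're adding, using the same test URL as above:

url=http://wordpress-marko.apps.ocpocs311.ocpgluster.com/2019/02/26/lorem-ipsum/
down_since=""
while true; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  if [ "$code" != "200" ] && [ -z "$down_since" ]; then
    down_since=$(date +%s)    # outage starts
  elif [ "$code" = "200" ] && [ -n "$down_since" ]; then
    echo "outage lasted $(( $(date +%s) - down_since )) seconds"
    down_since=""             # outage over
  fi
  sleep 1
done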

This test only deletes the pod that is running the database. What we cannot be sure of so far is that the MySQL pod actually moves from one node to another. To have that happen, we have to cordon the node on which the pod currently runs and then delete the pod. Cordoning the node means that it will take no new containers and, as a consequence, the new incarnation of our database pod will have to be started on a different node.

As a first step, we need to find the node on which mysql-ocs is currently running:

# oc get pod mysql-ocs-0 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                                          NOMINATED NODE
mysql-ocs-0   1/1     Running   0          5m    10.129.2.44   ip-172-16-27-161.us-west-2.compute.internal   <none>

Now we cordon the node ip-172-16-27-161.us-west-2.compute.internal and then delete the mysql-ocs-0 pod:

# oc adm cordon ip-172-16-27-161.us-west-2.compute.internal
node/ip-172-16-27-161.us-west-2.compute.internal cordoned
# oc delete pod mysql-ocs-0
pod "mysql-ocs-0" deleted

Again, we’ve run a series of the same test and ended up with an average value of 12 seconds. Therefore, it does not matter if the pod must be relocated to another node or is re-created on the same node again.
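
One cleanup step that the flow above implies but doesn't show: once the test is finished, make the cordoned node schedulable again.

oc adm uncordon ip-172-16-27-161.us-west-2.compute.internal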

Conclusion

The goal behind this post was to show how an application like WordPress in OCP can be connected to a database, as well as how fast a pod can fail over to another node when it is using Red Hat OCS as a storage platform. An average time of 12 seconds is what we can reproduce consistently for one MySQL pod. The exact time is, of course, specific to every setup, as it depends on a lot of different factors, but the reproducibility and the deterministic recovery time are something that’s common to OpenShift deployments.

How to run a MySQL pod on OCP using OCS and StatefulSets

By Sagy Volkov and Annette Clewett

Greetings from Red Hat’s storage architect team! With this post, we’re kicking off a series in which we’ll demonstrate a step-by-step deployment of a stateful application on OpenShift Container Platform (OCP) using OpenShift Container Storage (OCS). This series, based on the 3.11 version of both OCP and OCS, will not cover how to install OCP or OCS.

We’ll start with creating one MySQL pod (using OCP StatefulSets and OCS), and then add the application that uses the MySQL database on persistent storage. As we progress in this series, we’ll show more advanced topics, such as OCP multi-tenant scenarios, MySQL performance on OCS, failover scenarios, and more.

OpenShift on AWS test environment

All the posts in this series use an OCP-on-AWS setup that includes 8 EC2 instances deployed as 1 master node, 1 infra node, and 6 worker nodes that also run OCS gluster and heketi pods. The 6 worker nodes are basically the storage provider (OCS) and persistent storage consumers (MySQL). As shown in the following figure, the OCS worker nodes are of instance type m5.2xlarge with 8 vCPUs, 32 GB Mem, and 3x100GB gp2 volumes attached to each node for OCP and a single 1TB gp2 volume for the OCS storage cluster. The AWS region us-west-2 has Availability Zones (AZs) us-west-2a, us-west-2b, us-west-2c, and the 6 worker nodes are spread across the 3 AZs, two nodes in each AZ. This means the OCS storage cluster is “stretched” across these 3 AZs.

MySQL deployment with StatefulSets

This post revolves around deploying a MySQL pod using OCS and StatefulSets (STS), so let’s get started.

Stateful applications need persistent volumes (PVs) to support failover scenarios: when a pod moves to a different worker node, the data it uses must still be there after the move.

STS became generally available in Kubernetes 1.9 and have a few advantages over “simple” deployments:

  1. Pods can be created in order (and removed in reverse order when scaling down). This is especially important in master/slave scenarios and/or distributed databases.
  2. Pods have a predictable naming convention and retain their names when re-created on another node after a failover (see the short sketch after this list).
  3. The persistent volume claims (PVCs) are not deleted when the STS is deleted to keep the data intact for future usage.
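
As a quick illustration of point 2 (not needed for the rest of this series): once the mysql-ocs STS created later in this post is running, scaling it up and back down makes the ordinal naming visible. Keep in mind that each extra replica gets its own PVC and an independent MySQL instance, so this is purely illustrative.

oc scale statefulset mysql-ocs --replicas=2
oc get pods -l app=mysql-ocs     # shows mysql-ocs-0 and mysql-ocs-1
oc scale statefulset mysql-ocs --replicas=1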

The first step in creating a PVC is making sure we have a storage class we can use to dynamically create the volume in OCP:

oc get sc
NAME                PROVISIONER               AGE
glusterfs-storage   kubernetes.io/glusterfs   9d
gp2 (default)       kubernetes.io/aws-ebs     21d
gp2-xfs             kubernetes.io/aws-ebs     18d

As you can see, we have 3 storage classes in our OCP cluster. For the MySQL deployment, we will be using the glusterfs-storage class, which is created during the OCS installation when OCP is deployed using the Ansible playbooks and the specific OCS inventory file options. Every time a claim is made for storage, the glusterfs-storage class will provide it, because that class is configured in our STS definition file. If you want to see the content of any of the storageclass (SC) resources, run “oc get sc <storageclass_name> -o yaml”.

Because we are going to use STS, one of the requirements is to create a headless service for our MySQL application. We’re going to use the following yaml file:

cat headless-service-mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-ocs
  labels:
    app: mysql-ocs
spec:
  ports:
  - port: 3306
    name: mysql-ocs
  clusterIP: None
  selector:
    app: mysql-ocs

And then create the service:

oc create -f headless-service-mysql.yaml
service/mysql-ocs created
$ oc get svc
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP    PORT(S)    AGE
mysql-ocs   ClusterIP   None         <none>         3306/TCP   6s

Now that we have a storageclass and a headless service, let’s look at our STS yaml. This is a simple example, and as we progress in this series, we’ll update and add to this file.

Note: It is neither secure nor recommended to have plain-text passwords set in yaml files. Instead, use secrets. For our example, to make things simple, we’ll use plain text.
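
For reference, a more production-friendly approach would look something like the following sketch (the secret name is our own choice and is not used elsewhere in this series): create a secret and have each env entry in the STS read its value via valueFrom.secretKeyRef instead of a literal value.

oc create secret generic mysql-ocs-credentials \
  --from-literal=MYSQL_ROOT_PASSWORD=password \
  --from-literal=MYSQL_USER=admin \
  --from-literal=MYSQL_PASSWORD=secret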

cat mysql-sts.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-ocs
spec:
  selector:
    matchLabels:
      app: mysql-ocs
  serviceName: "mysql-ocs"
  podManagementPolicy: Parallel
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-ocs
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql-ocs
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: admin
        - name: MYSQL_PASSWORD
          value: secret
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-ocs-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-ocs-data
    spec:
      storageClassName: glusterfs-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 8Gi

Most of the container definitions are similar to those of a “DeploymentConfig” type. We’re using the headless service “mysql-ocs” that we previously created and have specified MySQL 5.7 as the image to be used. The interesting part is at the bottom of the preceding file: the “volumeClaimTemplates” definition is how we create a persistent volume (PV), claim it (PVC), and attach it to the newly created MySQL pod. As you can also see, we’re using the storage class we have from the OCP/OCS installation (glusterfs-storage), and we request a volume of 8 GB to be created and used in “ReadWriteOnce” mode.
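
If you want to sanity-check the file before creating anything (an optional step we're adding; --dry-run only validates it client-side and prints the object without touching the cluster):

oc create -f mysql-sts.yaml --dry-run -o yaml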

To create our STS, we run the following command:

oc create -f mysql-sts.yaml
statefulset.apps/mysql-ocs created

Deployment validation

Let’s check that the pod is running. Please note that, depending on the hardware used, the MySQL container image download speed, the size of the volume requested, and the availability of existing PVCs, this action can take from a few seconds to around a minute.

oc get pods
NAME          READY     STATUS      RESTARTS     AGE
mysql-ocs-0   1/1       Running     0            31s

Let’s look at the PVC we created with this STS.

oc get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-ocs-data-mysql-ocs-0   Bound    pvc-cb25b2c0-3a12-11e9-96fc-02e7350e98d2   8Gi        RWO            glusterfs-storage   1m

And the PV that is associated with the PVC:

oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS        REASON   AGE
pvc-cb25b2c0-3a12-11e9-96fc-02e7350e98d2   8Gi        RWO            Delete           Bound    sagy/mysql-ocs-data-mysql-ocs-0   glusterfs-storage            3m

If you want to see the connection/relationship between Kubernetes, gluster, heketi, and our persistent storage volume, we can run a few commands to show it. We know the PV name from the “oc get pvc” command we ran previously, so we’ll use “oc describe” and search for Path.

oc describe pv pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2|grep Path
    Path:           vol_82f64c461e4796213160f30519f318f8

In our case, the volume name is vol_82f64c461e4796213160f30519f318f8, and this is the same volume name in gluster. If we log in to the container inside the MySQL pod, we can see the same volume and the directory it is mounted to.

oc rsh mysql-ocs-0
$ df -h|grep vol_82f64c461e4796213160f30519f318f8
172.16.26.120:vol_82f64c461e4796213160f30519f318f8  8.0G  325M  7.7G  4%  /var/lib/mysql

We can see that the volume is mounted on /var/lib/mysql (what we specified in our STS yaml file) and that its size is 8.0G.
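
If you want to convince yourself that data on this volume really survives a pod restart (an extra check that's not part of the original walkthrough; it uses the admin/secret credentials from the STS), something along these lines should work:

# write a marker row into the wordpress database
oc exec mysql-ocs-0 -- mysql -uadmin -psecret wordpress \
  -e "CREATE TABLE IF NOT EXISTS persistence_check (id INT); INSERT INTO persistence_check VALUES (1);"

# force a restart and wait for the pod to come back (Ctrl-C the watch once it is Running again)
oc delete pod mysql-ocs-0
oc get pods -w

# the row is still there after the restart
oc exec mysql-ocs-0 -- mysql -uadmin -psecret wordpress -e "SELECT * FROM persistence_check;"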

If we want to check heketi for more info, we must first make sure that the heketi-client package is installed on the server we’re running it from. The following file must be sourced to export the environment before using heketi-client commands.

cat heketi-export-app-storage
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-storage-pod -n app-storage -o jsonpath='{.items[0].metadata.name}')
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-storage -n app-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n app-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)

The heketi volume name is the gluster volume name without the “vol_” prefix, which can be found using the following command:

oc describe pv pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2|grep Path|awk '{print $2}'|awk -F 'vol_' '{print $2}'
82f64c461e4796213160f30519f318f8

And now, after we’ve made sure heketi-cli is installed and have sourced the environment variables, the heketi-cli command can be used to get more information about this gluster volume.

heketi-cli volume info 82f64c461e4796213160f30519f318f8
Name: vol_82f64c461e4796213160f30519f318f8
Size: 8
Volume Id: 82f64c461e4796213160f30519f318f8
Cluster Id: f05418936dc63638041af2831914c37d
Mount: 172.16.26.120:vol_82f64c461e4796213160f30519f318f8
Mount Options: backup-volfile-servers=172.16.53.212,172.16.39.190,172.16.56.45,172.16.27.161,172.16.44.7
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
Snapshot Factor: 1.00

Deleting StatefulSet and persistent storage

So far, we’ve seen how to create a MySQL pod using STS and OCS storage, but what happens when we want to delete a pod or the storage? First, let’s look at our PVC.

oc get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-ocs-data-mysql-ocs-0   Bound    pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2   8Gi        RWO            glusterfs-storage   20h

Now let’s delete our STS for MySQL.

$ oc delete -f mysql-sts.yaml
statefulset.apps "mysql-ocs" deleted

And let’s check the PVC again after MySQL STS is deleted.

oc get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-ocs-data-mysql-ocs-0   Bound    pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2   8Gi        RWO            glusterfs-storage   20h

As you can see, the PVC remains with the data intact and will be used again if we redeploy the same STS.
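
A quick way to confirm that re-use (again, an extra check that's not part of the original flow): re-create the STS, verify that the new pod binds to the same claim, and then delete it again so we can continue with the cleanup below.

oc create -f mysql-sts.yaml
oc describe pod mysql-ocs-0 | grep ClaimName     # give the pod a few seconds to be created first
oc delete -f mysql-sts.yaml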

If you want to delete the PVC, run the following command:

$ oc delete pvc mysql-ocs-data-mysql-ocs-0
persistentvolumeclaim "mysql-ocs-data-mysql-ocs-0" deleted

And if you monitor the PV, you can watch how it gets deleted as well (the PV is first released):

oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                             STORAGECLASS        REASON   AGE
pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2   8Gi        RWO            Delete           Released   sagy/mysql-ocs-data-mysql-ocs-0   glusterfs-storage            20h

And if we query again, the PVC is gone and the PV has been deleted as well.

oc get pvc
No resources found.

Conclusion

In this post, we’ve shown the first step toward running an application that needs persistent data on OCP. We used the glusterfs-storage storageclass provided by OCS to create a PVC and attached the volume to a MySQL pod. We automated the process using an STS. We also explained the relationship between OCS, heketi, the PV, the PVC, and the MySQL pod.

In our next post we’ll show how to connect a WordPress pod to our database pod.