9.3 Migrating/recovering disks
Create the Redis StatefulSet from 9.2.2 if it’s not already running.
kubectl create -f Chapter09/9.2.2_StatefulSet_Redis_Multi
Add some data
kubectl exec -it redis-0 -- redis-cli
127.0.0.1:6379> SET capital:australia "Canberra"
OK
127.0.0.1:6379> SET capital:usa "Washington"
OK
$ cd Chapter09/9.2.2_StatefulSet_Redis_Multi/
$ kubectl delete -f redis-statefulset.yaml
service "redis-service" deleted
statefulset.apps "redis" deleted
$ kubectl create -f redis-statefulset.yaml
service/redis-service created
statefulset.apps/redis created
Wait until the Pods are ready
$ kubectl get pods -w
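Alternatively, instead of watching, you can block until the Pods pass their readiness checks. A small convenience, assuming the Pods carry the app=redis-sts label used in this chapter's manifests:

```shell
# Block until all Redis Pods are Ready, or give up after 5 minutes.
# The selector "app=redis-sts" matches the Pod labels from this chapter's StatefulSet.
kubectl wait pod --for=condition=Ready -l app=redis-sts --timeout=300s
```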
You can see that the data is still there.
$ kubectl exec -it redis-0 -- redis-cli
127.0.0.1:6379> GET capital:usa
"Washington"
This test didn’t delete the PV or PVC though.
Deleting the PV
Update Reclaim policy
To recover the data, the PV's reclaim policy needs to be set to Retain; otherwise, deleting the PVC also deletes the underlying disk. In production, Retain is what you should generally use for data you care about. The dynamically provisioned PVs in this test default to Delete, so we need to update that.
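If you want dynamically provisioned PVs to default to Retain, you can create your own StorageClass rather than patching each PV after the fact. A sketch, assuming GKE's persistent disk CSI driver (the same provisioner behind the standard-rwo class seen below); the name retain-rwo and the pd-balanced disk type are illustrative choices, not part of the chapter's setup:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-rwo                    # hypothetical name
provisioner: pd.csi.storage.gke.io    # GKE persistent disk CSI driver
parameters:
  type: pd-balanced                   # assumed disk type
reclaimPolicy: Retain                 # PVs survive PVC deletion
volumeBindingMode: WaitForFirstConsumer
```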
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-40c8cdf9-f97d-4b7b-9beb-ef783b1425c5 1Gi RWO Delete Bound default/redis-pvc-redis-1 standard-rwo 3m42s
pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 1Gi RWO Delete Bound default/redis-pvc-redis-0 standard-rwo 5m32s
pvc-b25dc634-e9b3-433f-b600-109272331761 1Gi RWO Delete Bound default/redis-pvc-redis-2 standard-rwo 25s
Look for the PV associated with the CLAIM named default/redis-pvc-redis-0, and note its name
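Rather than copying the name out of the table by hand, you can also read it directly from the PVC, which records its bound PV in spec.volumeName:

```shell
# The PVC's spec.volumeName holds the name of the bound PV.
PV_NAME=$(kubectl get pvc redis-pvc-redis-0 -o jsonpath='{.spec.volumeName}')
echo "$PV_NAME"
```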
$ PV_NAME=pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5
$ kubectl edit pv $PV_NAME
Find
persistentVolumeReclaimPolicy: Delete
and change to
persistentVolumeReclaimPolicy: Retain
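If you'd rather not use an interactive editor, kubectl patch makes the same change in one step:

```shell
# Set the reclaim policy to Retain non-interactively.
kubectl patch pv "$PV_NAME" \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```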
Now it should look like this
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-40c8cdf9-f97d-4b7b-9beb-ef783b1425c5 1Gi RWO Delete Bound default/redis-pvc-redis-1 standard-rwo 6m2s
pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 1Gi RWO Retain Bound default/redis-pvc-redis-0 standard-rwo 7m52s
pvc-b25dc634-e9b3-433f-b600-109272331761 1Gi RWO Delete Bound default/redis-pvc-redis-2 standard-rwo 2m45s
Save Objects
This isn’t strictly needed, but it saves us from having to re-create all the config by hand later.
$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/redis-pvc-redis-0 Bound pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 1Gi RWO standard-rwo 8m21s
persistentvolumeclaim/redis-pvc-redis-1 Bound pvc-40c8cdf9-f97d-4b7b-9beb-ef783b1425c5 1Gi RWO standard-rwo 8m8s
persistentvolumeclaim/redis-pvc-redis-2 Bound pvc-b25dc634-e9b3-433f-b600-109272331761 1Gi RWO standard-rwo 4m48s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-40c8cdf9-f97d-4b7b-9beb-ef783b1425c5 1Gi RWO Delete Bound default/redis-pvc-redis-1 standard-rwo 6m27s
persistentvolume/pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 1Gi RWO Retain Bound default/redis-pvc-redis-0 standard-rwo 8m17s
persistentvolume/pvc-b25dc634-e9b3-433f-b600-109272331761 1Gi RWO Delete Bound default/redis-pvc-redis-2 standard-rwo 3m10s
$ kubectl get -o yaml persistentvolumeclaim/redis-pvc-redis-0 > pvc.yaml
$ PV_NAME=pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5
$ kubectl get -o yaml persistentvolume/$PV_NAME > pv.yaml
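This walkthrough restores only redis-0. If you wanted to preserve all three replicas' volumes, a small loop saves every PVC and its bound PV; a sketch using the same commands as above:

```shell
# Save each PVC, and the PV it is bound to, to its own YAML file.
for pvc in redis-pvc-redis-0 redis-pvc-redis-1 redis-pvc-redis-2; do
  kubectl get pvc "$pvc" -o yaml > "${pvc}.yaml"
  pv=$(kubectl get pvc "$pvc" -o jsonpath='{.spec.volumeName}')
  kubectl get pv "$pv" -o yaml > "${pv}.yaml"
done
```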
Delete
With the reclaim policy set correctly and the config saved, delete the objects.
kubectl delete statefulset,pvc,pv --all
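Even though the PV object is now gone, the Retain policy means the disk itself survives at the cloud provider. On GKE (which this chapter's output suggests), you can confirm this; disk naming is provider-managed, so treat this as a sketch:

```shell
# List persistent disks; dynamically provisioned ones are typically
# named after their PV (containing "pvc-...").
gcloud compute disks list --filter="name~pvc-"
```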
Re-create
Edit pv.yaml and make two changes:
- Remove the uid field from the claimRef section (the claimRef is the pointer to the PVC; the problem is that the re-created PVC will have a new uid).
- Set the storageClassName to the empty string "" (we’re manually provisioning and don’t want to use a storageClass).
For reference, the saved pvc.yaml looks like this before editing:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
volume.kubernetes.io/selected-node: gk3-my-cluster-pool-2-801f6e81-zm87
volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
creationTimestamp: "2023-11-13T00:14:31Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: redis-sts
name: redis-pvc-redis-0
namespace: default
resourceVersion: "715859"
uid: 4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard-rwo
volumeMode: Filesystem
volumeName: pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
phase: Bound
Edit the pvc.yaml and make two changes:
- Delete the annotation pv.kubernetes.io/bind-completed: "yes" (this PVC needs to be re-bound, and this annotation would prevent that).
- Set the storageClassName to the empty string "" (same reason as the previous step).
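Again, with yq (v4, an optional tool not assumed by the chapter), the same two edits can be made in one line:

```shell
# Remove the bind-completed annotation and clear the storageClassName.
yq -i 'del(.metadata.annotations."pv.kubernetes.io/bind-completed") | .spec.storageClassName = ""' pvc.yaml
```

After the edits, pvc.yaml should look like this: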
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
volume.kubernetes.io/selected-node: gk3-my-cluster-pool-2-801f6e81-zm87
volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
creationTimestamp: "2023-11-13T00:14:31Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: redis-sts
name: redis-pvc-redis-0
namespace: default
resourceVersion: "715859"
uid: 4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ""
volumeMode: Filesystem
volumeName: pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
phase: Bound
Unlike before, the PV and PVC are “prelinked”: the PV’s claimRef points to the PVC, and the PVC’s volumeName points back to the PV.
Now recreate
There should be no objects currently:
$ kubectl get sts,pv,pvc
No resources found
$ kubectl create -f pv.yaml
persistentvolume/pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 1Gi RWO Retain Available default/redis-pvc-redis-0 5s
$ kubectl create -f pvc.yaml
persistentvolumeclaim/redis-pvc-redis-0 created
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 1Gi RWO Retain Bound default/redis-pvc-redis-0 14s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/redis-pvc-redis-0 Bound pvc-4d2a4af8-5a4a-4e9e-849d-7721b03e8fd5 1Gi RWO 5s
$ kubectl create -f redis-statefulset.yaml
statefulset.apps/redis created
Error from server (AlreadyExists): error when creating "redis-statefulset.yaml": services "redis-service" already exists
The error is expected: the earlier delete removed only the StatefulSet, PVCs, and PVs, so the Service from before still exists.
Wait for the Pods to become ready:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-0 1/1 Running 0 2m49s
redis-1 1/1 Running 0 2m36s
redis-2 0/1 Pending 0 16s
Read back the data. We can use a read-replica (whose data we didn’t restore) rather than the primary, to also verify Redis’ own data replication.
$ kubectl exec -it redis-1 -- redis-cli
127.0.0.1:6379> GET capital:australia
"Canberra"