10.1.3 Scaling worker Pods


An HPA can also be used to scale the worker Pods that process a background task queue:

Create the HPA:

$ kubectl create -f Chapter10/10.1.3_HPA
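As a sketch of what the manifest in Chapter10/10.1.3_HPA likely contains, reconstructed from the `kubectl get hpa` output shown below (name, target Deployment, replica bounds, and 20% CPU target are taken from that output; the actual file may differ in detail):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pi-worker-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pi-worker          # the worker Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20   # scale up when average CPU exceeds 20%
```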

Add some more tasks to the queue:

$ kubectl exec -it deploy/pi-worker -- python3 add_tasks.py
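The add_tasks.py script isn't shown here, but a minimal sketch of what such a script might do is to push a batch of task payloads onto the Redis list that the workers consume. The queue key, payload fields, and function names below are all assumptions for illustration, not the book's actual code:

```python
import json

QUEUE_NAME = "queue:task"  # hypothetical queue key; the real script's key may differ


def build_tasks(count):
    """Build `count` JSON task payloads, each asking a worker to compute pi."""
    return [
        json.dumps({"task": "calc_pi", "digits": 1000, "id": i})
        for i in range(count)
    ]


def add_tasks(client, count=10):
    """Push `count` tasks onto the queue.

    `client` is any object with an rpush(key, value) method, such as
    redis.Redis(host="redis-0.redis") in a real deployment.
    """
    for payload in build_tasks(count):
        client.rpush(QUEUE_NAME, payload)
    return count
```

Adding work raises the CPU utilization of the existing workers, which is what triggers the HPA in the next step.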

Observe the results:

$ kubectl get pods,hpa
NAME                            READY   STATUS      RESTARTS   AGE
pod/addwork-thwbq               0/1     Completed   0          6m11s
pod/pi-worker-f7565d87d-526pd   0/1     Pending     0          3s
pod/pi-worker-f7565d87d-7kpzt   0/1     Pending     0          3s
pod/pi-worker-f7565d87d-8tzb2   1/1     Running     0          10m
pod/pi-worker-f7565d87d-dtrkd   1/1     Running     0          10m
pod/redis-0                     1/1     Running     0          17m
pod/redis-1                     1/1     Running     0          14m
pod/redis-2                     1/1     Running     0          13m

NAME                                                       REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/pi-worker-autoscaler   Deployment/pi-worker    37%/20%         2         10        2          34s
horizontalpodautoscaler.autoscaling/timeserver             Deployment/timeserver   <unknown>/20%   1         10        1          5h6m
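The numbers in this output line up with the HPA's documented scaling rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric): 2 replicas at 37% average CPU against a 20% target gives ceil(2 × 37/20) = 4, which matches the four pi-worker Pods above (two Running, two still Pending while they wait to be scheduled). A quick sketch of that rule:

```python
import math


def desired_replicas(current_replicas, current_metric, target_metric):
    """The HPA scaling rule: ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)


# From the output above: 2 replicas at 37% CPU against a 20% target.
desired_replicas(2, 37, 20)  # → 4
```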