This is Part 3 of the Rails on Kubernetes series.
- Part 1: Rails and Docker Compose
- Part 2: Kubernetes
- Part 3: Deployments, Rolling updates and Scaling
If you don't want to read the previous posts and just want to follow along, you can clone my repo and get going right away.
So far we created our Pods using ReplicationControllers. While that works, the recommended approach is to use Deployments instead: a newer Kubernetes resource that lets you control rolling updates, rollbacks and more.
This is how the basic template looks:
```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  minReadySeconds: 1 # seconds after a pod is up before it's considered ready (default 0)
  strategy:
    type: RollingUpdate # the other option is Recreate, where all pods are killed before the update
    rollingUpdate:
      maxUnavailable: 1 # max number of pods that can become unavailable during an update
      maxSurge: 1 # max number of extra pods created during a deploy (3 + 1 in our case)
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
I went ahead and re-organized the /kube folder in our application. You can see the changes in this commit log.
This is our folder structure now:
```
├── deployments
│   ├── postgres_deploy.yaml
│   ├── rails_deploy.yaml
│   ├── redis_deploy.yaml
│   └── sidekiq_deploy.yaml
├── ingresses
│   └── ingress.yaml
├── jobs
│   └── setup.yaml
├── services
│   ├── postgres_svc.yaml
│   ├── rails_svc.yaml
│   └── redis_svc.yaml
└── volumes
    └── postgres_volumes.yaml
```
Since we changed things around, we should re-deploy. You can tear down the resources you created on your own; I will start with a fresh namespace:
```
~ $ kubectl create namespace rails-staging
```
Let's create the secrets:
```
~ $ kubectl -n rails-staging create secret generic db-user-pass --from-literal=password=mysecretpass
~ $ kubectl -n rails-staging create secret generic db-user --from-literal=username=postgres
~ $ kubectl -n rails-staging create secret generic secret-key-base --from-literal=secret-key-base=50dae16d7d1403e175ceb2461605b527cf87a5b18479740508395cb3f1947b12b63bad049d7d1545af4dcafa17a329be4d29c18bd63b421515e37b43ea43df64
```
Now we can create the volume, the Postgres and Redis services, and their deployments:
```
~ $ kubectl -n rails-staging create -f kube/volumes/postgres_volumes.yaml
~ $ kubectl -n rails-staging create -f kube/services/postgres_svc.yaml
~ $ kubectl -n rails-staging create -f kube/services/redis_svc.yaml
~ $ kubectl -n rails-staging create -f kube/deployments/postgres_deploy.yaml --record
~ $ kubectl -n rails-staging create -f kube/deployments/redis_deploy.yaml --record
```
Finally, let's run the setup job for the migrations, then create the Rails service, the Rails and Sidekiq deployments, and the ingress:
```
~ $ kubectl -n rails-staging create -f kube/jobs/setup.yaml
~ $ kubectl -n rails-staging create -f kube/services/rails_svc.yaml
~ $ kubectl -n rails-staging create -f kube/deployments/rails_deploy.yaml --record
~ $ kubectl -n rails-staging create -f kube/deployments/sidekiq_deploy.yaml --record
~ $ kubectl -n rails-staging create -f kube/ingresses/ingress.yaml
```
Because we created our deployment resources with the `--record` flag, we can now access a revision history for each deployment.
```
~ $ kubectl -n rails-staging rollout history deploy/rails-deployment
deployments "rails-deployment"
REVISION  CHANGE-CAUSE
1         kubectl create --namespace=rails-staging --filename=kube/deployments/rails_deploy.yaml --record=true
```
Applying an update
Let's see what happens if we push an update. First let's add some changes to our app. In my case I added Bootstrap. You can see the changes here.
To test this locally I can either run the app locally or use docker-compose to build and serve it.
```
~ $ docker-compose build && docker-compose up
```
Now I can navigate to http://localhost:3000 and check that everything looks good.
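If you prefer the command line over the browser, you can sanity-check the app with `curl`; a quick sketch, assuming the app is serving on port 3000 and the home page renders without errors:

```
~ $ curl -I http://localhost:3000
HTTP/1.1 200 OK
```

Anything other than a 200 here means the new image needs another look before pushing it.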
Let's push this new image to DockerHub:
```
~ $ bundle exec rake docker:push_image
...
Done pushing image a7f0b5d
```
Now we can set a new image for our rails and sidekiq deployments:
```
~ $ kubectl -n rails-staging set image deploy/rails-deployment rails=tzumby/rails-app-alpine:57b3e12
```
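While the update is in flight, you can watch its progress with `kubectl rollout status`; a quick sketch using the same deployment name (the command blocks until the rollout finishes or fails):

```
~ $ kubectl -n rails-staging rollout status deploy/rails-deployment
deployment "rails-deployment" successfully rolled out
```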
Since we only have one replica, our rollout settings won't have much effect: Kubernetes will simply create a new Pod and kill the old one. We can check the rollout history:
```
~ $ kubectl -n rails-staging rollout history deploy/rails-deployment
deployments "rails-deployment"
REVISION  CHANGE-CAUSE
1         kubectl create --namespace=rails-staging --filename=kube/deployments/rails_deploy.yaml --record=true
2         kubectl set image deploy/rails-deployment rails=tzumby/rails-app-alpine:85d97f1 --namespace=rails-staging
```
Undo a deploy
Let's say whatever change we pushed is crashing and we want to revert. This is pretty simple:
```
~ $ kubectl -n rails-staging rollout undo deployment/rails-deployment --to-revision=1
```
If you read Part 1 of this series you'll see I mentioned scaling as one of the benefits of running your Rails app in Kubernetes. It's time to deliver on that promise and perform some load testing & scaling.
Let's take a quick look at the resources we created so far.
```
~ $ kubectl -n rails-staging get pods
NAME                                  READY     STATUS    RESTARTS   AGE
postgres-695fcd89f9-frz59             1/1       Running   0          5h
rails-deployment-59cd86c755-m66tq     1/1       Running   0          5h
redis-deployment-746c545869-tjbx6     1/1       Running   0          5h
sidekiq-deployment-7f5bcf6ccf-ncgxs   1/1       Running   0          5h
```
In my case I have one pod running for each service. Each deployment also manages its own ReplicaSet:
```
~ $ kubectl -n rails-staging get rs
NAME                            DESIRED   CURRENT   READY     AGE
postgres-695fcd89f9             1         1         1         14m
rails-deployment-59cd86c755     1         1         1         6m
redis-deployment-746c545869     1         1         1         14m
sidekiq-deployment-7f5bcf6ccf   1         1         1         12m
```
Notice how the Desired, Current and Ready are all set to 1. This is because we used one replica when we defined those resources:
```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: rails
spec:
  replicas: 1 # this is the number of replicas
```
I'll use `ab` (ApacheBench) to test our initial req/s throughput. This will be very basic: I just want to see an increase in the number of requests per second the app handles (this is not by any means a performance test).
```
~ $ ab -n 500 -c 500 http://rails.local/
Requests per second:    172.24 [#/sec] (mean)
```
Now let's scale our Rails app to 4 replicas:
```
~ $ kubectl -n rails-staging scale deployment rails-deployment --replicas=4
```
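The imperative `scale` command is handy for experiments, but you can get the same result declaratively: bump `replicas` in the deployment manifest (e.g. `replicas: 4` in `kube/deployments/rails_deploy.yaml`) and re-apply it. A sketch, assuming the manifest from our `/kube` folder:

```
~ $ kubectl -n rails-staging apply -f kube/deployments/rails_deploy.yaml
```

The declarative route keeps the manifest in git as the source of truth, so the replica count survives the next deploy instead of being silently reset.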
We can check the Replica Sets and make sure everything worked:
```
~ $ kubectl -n rails-staging get rs
NAME                          DESIRED   CURRENT   READY     AGE
rails-deployment-59cd86c755   4         4         4         5h
```
It looks like all four Pods are ready. Let's hammer the app again.
```
~ $ ab -n 500 -c 500 http://rails.local/
Requests per second:    256.35 [#/sec] (mean)
```
Looks like we're doing a lot better now at 256 req/s. Again, this doesn't really tell us anything about the performance of our app. I just wanted to use those numbers to verify that our deployment scaling worked.
We ran that scale command without `--record`. If we include the flag, scaling changes get listed as revisions too:
```
~ $ kubectl -n rails-staging scale deployment rails-deployment --replicas=1 --record
```
```
~ $ kubectl -n rails-staging rollout history deployment/rails-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --namespace=rails-staging --filename=kube/deployments/rails_deploy.yaml --record=true
2         kubectl set image deploy/rails-deployment rails=tzumby/rails-app-alpine:85d97f1 --namespace=rails-staging
3         kubectl scale deployment rails-deployment --namespace=rails-staging --replicas=1 --record=true
```
We looked at the Deployment resource in Kubernetes and how to use it to ship new code or roll back changes. Deployments also let us scale our pods up and down. I didn't cover auto-scaling here, but you can read about it on the Kubernetes site.
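As a small taste of auto-scaling, here is a minimal HorizontalPodAutoscaler sketch targeting our deployment. The `rails-hpa` name is my own invention, and this assumes your cluster has a metrics source (such as metrics-server or Heapster) so CPU utilization can be measured:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: rails-hpa # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: rails-deployment
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 80 # add pods when average CPU crosses 80%
```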
What I described is a very manual process. This is great for learning how the ecosystem works but in a real production application you would most likely use a CI/CD tool that builds and pushes your containers on demand (via git-hooks or some manual trigger).