This website features code samples from the book Kubernetes for Developers.
It is designed to be followed alongside the book, or with an instructor,
to add the appropriate context. For that reason, the samples are presented
stand-alone, without much surrounding explanation.
For simplicity, this learning companion makes a few assumptions that the book doesn’t:
- That you’re using Google Cloud
- That your cluster is a GKE cluster in Autopilot mode
- That you’re using Cloud Shell to run the examples
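Under those assumptions, getting a cluster up from Cloud Shell might look something like the following sketch. The project ID, cluster name, and region are placeholders; substitute your own.

```shell
# Placeholder project ID -- use your own
gcloud config set project my-k8s-project

# Create a GKE cluster in Autopilot mode (name and region are illustrative)
gcloud container clusters create-auto my-cluster --region us-central1

# Fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials my-cluster --region us-central1

# Verify connectivity
kubectl get nodes
```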
It can fairly easily be adapted to other Kubernetes platforms, including minikube
(the book includes some hints on how to do that).
You can get a $300 credit to use Google Cloud, and unless you upgrade,
there's no risk of being billed. Personally, I think this is the ideal
place to learn, because it's a real production environment, so you also
gain the knowledge to do this in production. While minikube is a great
playground, it's not designed for production, so it can't give you that added
knowledge, and there are some subtle differences.
Chapter 1 is introductory, so here we start with Chapter 2 on Docker. Have fun!
Containerizing your application—that is, packaging your application and its dependencies into an executable container—is a required step before adopting Kubernetes.
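As a sketch of what that looks like, here is a minimal Dockerfile for a hypothetical Node.js application (the base image, file names, and `server.js` entry point are all illustrative; adjust for your own stack):

```dockerfile
# Assumed Node.js app; swap the base image for your language runtime
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the application source
COPY . .

# Placeholder entry point
CMD ["node", "server.js"]
```

You would then build and run it locally with `docker build -t myapp .` and `docker run -p 8080:8080 myapp` before pushing it to a registry your cluster can pull from.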
Let’s get started by deploying an application and making it available on the internet. Later, we’ll update it with a new version.
Kubernetes can automate many operations, like restarting your container if it crashes and migrating your application in the case of hardware failure. Kubernetes can also help you update your application without outages and glitches by booting the new version and monitoring its status to ensure it’s ready to serve traffic before removing the old version.
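The "ensure it's ready before removing the old version" behavior comes from readiness probes on a Deployment. A minimal sketch, with a hypothetical app name, placeholder image, and an assumed `/healthz` endpoint on port 8080:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp               # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: example/myapp:v2   # placeholder image
        readinessProbe:           # new Pods must pass this before old ones are removed
          httpGet:
            path: /healthz
            port: 8080
```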
How Pods are allocated to machines based on their resource requirements, and what information you need to give the system so that your Pod receives the resources it needs.
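That information is expressed as resource requests and limits on the container. A sketch of the relevant fragment of a Pod template, with illustrative values:

```yaml
    spec:
      containers:
      - name: myapp
        image: example/myapp:v1   # placeholder image
        resources:
          requests:        # used by the scheduler to place the Pod
            cpu: 250m
            memory: 256Mi
          limits:          # hard caps enforced at runtime
            cpu: 500m
            memory: 256Mi
```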
How to scale up (and down) your application.
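For a Deployment, scaling can be done manually or handed to the Horizontal Pod Autoscaler. A sketch, assuming a Deployment named `myapp` and illustrative replica counts and CPU target:

```shell
# Scale manually to a fixed replica count
kubectl scale deployment myapp --replicas=5

# Or let Kubernetes scale between 2 and 10 replicas, targeting 70% CPU
kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=70
```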
Internal services are a way to scale how you develop and serve your application by splitting your application into multiple smaller services.
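An internal service is a Service of type `ClusterIP`, reachable only from inside the cluster. A minimal sketch, assuming Pods labeled `app: myapp` listening on port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal    # hypothetical name; other Pods reach it by this DNS name
spec:
  type: ClusterIP         # the default; no external exposure
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```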
Different Pods may request more or less CPU, but they’re all running on the same type of nodes under the hood.
One of the fundamental properties of cloud computing is that even when you're using an abstract platform that handles much of the low-level compute provisioning for you, as Kubernetes platforms can, you may still care to some extent about the servers that actually run your workloads.
Stateful applications (i.e., workloads that have attached storage) finally have a home with Kubernetes.
You can process background tasks using a Deployment or the Kubernetes Job object.
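A Job runs a Pod to completion rather than keeping it alive. A minimal sketch, with a hypothetical name and a placeholder worker image:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-batch       # hypothetical name
spec:
  completions: 1            # run the task once to completion
  backoffLimit: 3           # retry up to 3 times on failure
  template:
    spec:
      restartPolicy: Never  # Jobs require Never or OnFailure
      containers:
      - name: worker
        image: example/worker:v1   # placeholder image
```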
Namespaces, and configuration as code.
Keeping your cluster up to date, handling disruption, deploying node agents, and building non-root containers. Plus the process of creating a dedicated namespace for a team of developers and granting access specifically to that namespace. This is a pretty common pattern I've observed in companies where several teams share clusters.
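The namespace-per-team pattern can be sketched with a Namespace plus a RoleBinding to the built-in `edit` ClusterRole. The namespace and group names here are hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha          # hypothetical team namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-edit
  namespace: team-alpha     # the grant applies only within this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                # built-in role: read/write most namespaced objects
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: team-alpha-devs     # placeholder group from your identity provider
```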
This page displays all of the site's content in a single view.