The Need for Microservices
In the early days of software development, applications were built as monoliths, where the entire application lived in a single codebase and shipped as a single deployable unit. This architecture worked well when teams were small and an application had only a few services, but it came at the cost of a single point of failure (SPOF): a problem in one part of the application could bring down the whole thing.
As software development practices evolved, architecture shifted toward microservices-based applications, where each service is maintained in its own codebase and deployed independently. This new architecture comes with a lot of benefits, such as scaling services dynamically and independently, and the freedom to use a different tech stack for each service. Microservices communicate with one another over the network, commonly using REST APIs.
What are images?
Just as a photograph captures the state of a particular moment, an image is a snapshot of an application at a particular moment, containing the application code and all of its dependencies. The major advantages of images are that they are portable and can be loaded onto any machine, so you can focus on productive, business-defining tasks rather than setting up each system with the required dependencies and configuration. Images are then deployed to create containers.
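As a sketch of how an image is defined, here is a minimal, hypothetical Dockerfile for a Python application. The base image, file names, and start command are illustrative assumptions, not from any particular project:

```dockerfile
# Hypothetical example: package an app's code and dependencies into an image.
FROM python:3.12-slim                  # base image providing the runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]               # command the container runs on start
```

Building this file produces an image that carries everything the application needs, which is what makes it portable across machines.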
What is Docker?
In a microservices architecture, you typically use multiple containers to run a single application. Docker is a tool used to build images and to create and run containers from those images.
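For example, assuming a Dockerfile sits in the current directory, building an image and starting a container from it looks roughly like this (the image name `hello-app` and port are hypothetical):

```shell
docker build -t hello-app .                # build an image from the Dockerfile
docker run --rm -p 8080:8080 hello-app     # start a container, mapping port 8080
docker ps                                  # list the running containers
```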
What is a container registry?
Just as we use GitHub to store our codebase, we have registries to store our container images. In Google Cloud, Container Registry and Artifact Registry store images privately (Artifact Registry is the recommended successor to Container Registry). Storing images in the cloud comes with advantages such as binary authorization, container analysis to scan for vulnerabilities, and much more.
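As an illustrative sketch, pushing a local image to Artifact Registry looks roughly like this; the repository name `my-repo`, region `us-central1`, and image name `hello-app` are all placeholder assumptions:

```shell
# Create a Docker-format repository (one-time setup)
gcloud artifacts repositories create my-repo \
    --repository-format=docker --location=us-central1

# Let the local Docker client authenticate to the registry
gcloud auth configure-docker us-central1-docker.pkg.dev

# Tag the image with the full registry path, then push it
docker tag hello-app us-central1-docker.pkg.dev/<PROJECT-ID>/my-repo/hello-app:v1
docker push us-central1-docker.pkg.dev/<PROJECT-ID>/my-repo/hello-app:v1
```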
What is Kubernetes?
Kubernetes is a production-grade container orchestration tool. It deploys and manages your containers, removing the need to manage the underlying infrastructure yourself. Kubernetes is written in the Go programming language and was originally built by Google; it is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes is often abbreviated as K8s.
You can run Kubernetes on your own machine using Minikube, a tool that runs a local single-node cluster, or you can deploy a managed Kubernetes cluster in the cloud with just a few clicks.
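For instance, with Minikube installed, spinning up a local cluster takes just a couple of commands:

```shell
minikube start       # create and start a local single-node Kubernetes cluster
kubectl get nodes    # verify the cluster is up by listing its nodes
```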
In Google Cloud the managed service is GKE (Google Kubernetes Engine), in Azure it is AKS (Azure Kubernetes Service), and in AWS it is EKS (Amazon Elastic Kubernetes Service).
A little background:
The broadest term for a single Kubernetes entity is a cluster; it sits at the top of the Kubernetes hierarchy. A cluster comprises nodes, pods, deployments, secrets, and much more.
Nodes are the machines (virtual or physical) that power your workloads or processes. You can have any number of nodes to suit your specific needs, and you can run multiple worker and control plane (master) nodes to increase the availability and reliability of the cluster. Nodes are categorized into two types: control plane nodes, which run the components that manage the cluster, and worker nodes, which run your application workloads.
Let’s now discuss the core components. On the control plane, these include the kube-apiserver (the front end through which all cluster requests flow), etcd (the key-value store holding the cluster’s state), the kube-scheduler (which assigns pods to nodes), and the kube-controller-manager (which runs the cluster’s control loops). On each node, the kubelet starts and monitors containers as instructed, and kube-proxy handles networking for the node.
You can learn more about core components in the Kubernetes Documentation.
Now that we've gone over the Kubernetes basics, let's explore Google Cloud's managed solution for Kubernetes: Google Kubernetes Engine.
Kubernetes in Google Cloud
Google Kubernetes Engine is a managed Kubernetes service in the cloud. GKE works like Kubernetes, but comes with the advantages of the cloud, including high availability, scalability, and affordability. You simply create a cluster of your chosen type and deploy your containers to it.
Types of Clusters and Modes:
GKE offers two modes of operation: Standard, where you configure and manage the nodes, and Autopilot, where Google manages the nodes for you. It also offers two availability types: zonal clusters, whose control plane and nodes live in a single zone, and regional clusters, which are replicated across multiple zones in a region for higher availability.
List of commands:
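A hedged sketch of the gcloud commands most commonly used with GKE clusters; cluster names, zones, and regions below are placeholders:

```shell
gcloud container clusters create <CLUSTER-NAME> --zone <ZONE>           # create a Standard cluster
gcloud container clusters create-auto <CLUSTER-NAME> --region <REGION>  # create an Autopilot cluster
gcloud container clusters list                                          # list clusters in the project
gcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE>  # configure kubectl for a cluster
gcloud container clusters delete <CLUSTER-NAME> --zone <ZONE>           # delete a cluster
```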
Running a sample application on GKE
1. Log into your Google Cloud account.
2. Search for Kubernetes Engine in the search panel and select the service.
3. If you're using Kubernetes Engine for the very first time, you need to enable the Kubernetes Engine API.
4. Select the standard cluster mode and enter the name of the cluster.
5. Select the zonal cluster type and choose the zone nearest to you.
6. Google Cloud will create a default node pool for you, but you can add more node pools as desired. A node pool groups together nodes with the same machine configuration. You can edit a node pool by clicking on it.
7. You can enable autoscaling of nodes under the automation section. Play around with the rest of the settings and then click Create. Note: creating the cluster takes around 5-7 minutes.
8. The rest of the steps will be implemented using the command line, so click on the Cloud Shell Icon present in the top panel.
9. In order to connect to the cluster, we need to fetch its credentials. The following command writes the cluster's endpoint and credentials into your local kubeconfig file; kubectl then uses them to authenticate its HTTPS requests to the kube-apiserver, which in turn reads and writes cluster state in the etcd database.
```shell
gcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT-ID>
```
10. Now that we are connected to the cluster, let’s spin up our first pod.
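One simple way to spin up a pod is `kubectl run`; here we assume the public `nginx` image purely as an example:

```shell
kubectl run nginx --image=nginx   # create a single pod named "nginx" running the nginx image
```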
11. You can check the active running pods by using the kubectl get pod command.
We will cover exposing pods to the external world in an upcoming post in this series.
NOTE: Delete all the resources when you're done, as idle resources will still incur costs.
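Cleanup can be done from Cloud Shell as well; a sketch, where the pod and cluster names are placeholders for whatever you created above:

```shell
kubectl delete pod <POD-NAME>                                  # remove any pods you created
gcloud container clusters delete <CLUSTER-NAME> --zone <ZONE>  # delete the cluster and its nodes
```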
Learning Kubernetes and earning Google Cloud certifications opens up huge career opportunities. Ready to start earning Google Cloud certifications? We’ll give you a coach and help you pass the exam the first time—guaranteed.