[Introduction] Set up nginx on Kubernetes and display the default page
Hello.
I'm an infrastructure guy in the system solution department, and I write articles here from time to time.
I'm infrastructure through and through, but this time I'll be writing about Kubernetes.
There are already so many detailed introductory articles out there that I feel discouraged before I even start, but I'll try to summarize what I've worked through, in my own way, as clearly as I can.
This time, we will actually stand up nginx with Kubernetes and display its welcome page.
Developers may rarely touch this layer directly, but to get the most out of an application running in a Kubernetes environment, the application itself needs to be designed with Kubernetes in mind to some extent. Please use this article as a springboard to get started!
What is Kubernetes?
Here's the documentation:
Kubernetes is a portable, extensible, open source platform that facilitates declarative configuration management and automation to manage containerized workloads and services. Kubernetes has a large and rapidly growing ecosystem with a wide range of services, support, and tools available.
Official documentation: https://kubernetes.io/ja/docs/concepts/overview/what-is-kubernetes/
I'll explain things step by step on the assumption that you're new to this. First of all, Kubernetes is a container orchestration tool (picture a conductor managing many containers), so some knowledge of containers is required as a prerequisite.
Declarative configuration management and automation
In Kubernetes, if you describe (declare) the "desired state" in a file called a manifest file, Pods (a collection of containers and volumes) will be created according to its contents.
For example, if you declare "4 containers" and then delete one container by mistake, Kubernetes will notice the difference and automatically create one more to get back to four.
Docker Compose and similar tools stop at creating the containers; Kubernetes differs in that it keeps managing them afterwards, as described above.
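To make "declaring the desired state" concrete, here is a minimal sketch of a Deployment manifest; the name example-deployment and the label app: example are hypothetical, chosen purely for illustration.

```yaml
# Declare the desired state: "I want 4 replicas of this Pod running".
# If one Pod is deleted, Kubernetes recreates it to get back to 4.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # hypothetical name for illustration
spec:
  replicas: 4                # the "desired state": keep 4 Pods running
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:1.14.2  # same image as the example later in this article
```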
Manage containerized workloads and services
If you've learned Docker, you may know that Docker runs containers on a single machine.
Kubernetes, on the other hand, manages containers on multiple machines.
For large-scale applications, multiple machines are often combined to distribute load and functionality. With Kubernetes, instead of running docker run on each machine one by one, you prepare the manifest file mentioned above and the containers are deployed across the machines appropriately, which is a great asset when rolling out large-scale services.
merits
In addition to being the de facto standard for deploying large-scale services with containers,
Kubernetes, like Docker, lets you deliver an environment together with its configuration files, and its ReplicaSets make it possible to roll out new versions without stopping the service.
Another big advantage is that the open source community is active on a global scale.
Kubernetes glossary
Kubernetes comes with many unique concepts and terms.
Since this part is hard to memorize, I will explain each term briefly.
At this point it's enough to get a rough sense that these things exist; once you start using them, you'll remember them whether you like it or not.
master node
This is the part that receives our commands. As we will see shortly, there are also worker nodes, and the role of the master node is to instruct those worker nodes.
It is the group of components that provides the management functions, such as:
"kube-apiserver" (receives and processes commands from us),
"etcd" (the database that holds the declared "desired state"), and
"kube-scheduler" (assigns Pods to worker nodes).
worker node
An area that works under the master node's instructions and where Pods are actually placed.
It runs components such as
"kubelet" (which works together with kube-scheduler) and
"kube-proxy" (which handles networking for the node).
Pod
A collection of containers and volumes (storage areas).
Even if you don't use volumes, Kubernetes manages Pods as a unit.
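For reference, a Pod can also be declared directly in a manifest of its own. Here is a minimal sketch; the name nginx-pod is hypothetical, used only for illustration. In practice, Pods are usually created via a Deployment (covered below) rather than directly.

```yaml
# A standalone Pod: one nginx container, no volumes.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod     # hypothetical name for illustration
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```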
service
It has the role of organizing Pods.
One IP is allocated for each service, and when that IP is accessed, the service will load balance to the Pods under it.
This load balancing happens inside the cluster; distributing external traffic across multiple worker nodes is outside the scope of a Service.
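As a sketch, an internal-only Service (type ClusterIP, the default) that load-balances to Pods labeled app: nginx might look like the following; the name nginx-internal is hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal   # hypothetical name for illustration
spec:
  type: ClusterIP        # the default type: reachable only inside the cluster
  selector:
    app: nginx           # route traffic to Pods carrying this label
  ports:
  - port: 80             # the Service's own port
    targetPort: 80       # the container port to forward to
```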
replica set
We explained that when Pods stop or are deleted, Kubernetes automatically restores them to the desired number; to do this, it needs something that manages the number of Pods.
The ReplicaSet is responsible for this management.
deployment
It manages the deployment of Pods.
The "Pod", "Service", and "ReplicaSet" described above can each be written in a manifest file (yaml format), but in practice the "Pod" and "ReplicaSet" are usually written as part of this "Deployment".
In other words, when creating Pods, a manifest for the "Deployment" and a manifest for the "Service" are all you need.
About the environment
Seeing is believing, so I'd like you to take a look at the manifest file, which contains the terminology explained in the previous section.
But before that, we need a Kubernetes execution environment to apply the manifest file.
You can use the managed Kubernetes services provided by each cloud (EKS on AWS, GKE on GCP, AKS on Azure, OKE on OCI, ACK on Alibaba Cloud, and so on), but they are expensive for first-time learning purposes, so I recommend using the feature that comes bundled with "Docker Desktop".
Setting up is easy.
Open Docker Desktop's "Settings" ⇒ "Kubernetes" and check "Enable Kubernetes".
Then just wait while everything gets installed. It couldn't be easier.
This is all you need to start learning Kubernetes.
Example of manifest file
Now, let's take a look at an example manifest file.
As an example, I borrowed a yaml file from the documentation that describes a Deployment for creating nginx. It looks like this.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30080
    protocol: TCP
  selector:
    app: nginx
```
The upper part, separated by "---", is the Deployment description, and the lower part is the Service description.
It is possible to write them in separate manifest files and manage them separately, but when you write them in the same file, separate the resources with three hyphens (---) like this.
The explanation for each item from above is as follows.
■"apiVersion" ⇒ Kubernetes resources have API versions, so write the appropriate one.
How to look it up varies a little by resource, but for Deployments and Services you can run "kubectl api-resources | grep Deployment" and "kubectl api-resources | grep services" and read the APIVERSION column, which shows "apps/v1" and "v1" respectively.
■"kind" ⇒ Describes the type of resource. In this case we are creating a Deployment and a Service, so the values are "Deployment" and "Service" respectively.
■"metadata" ⇒ As the name suggests, this is metadata; use it to name and label your resources.
For now, just remember "name" and "labels".
■"spec" ⇒ Describes the contents of the resource. The subordinate items vary depending on the resource.
[This Deployment includes the following:]
selector
Used by the Deployment to identify the Pods it manages. In the example it matches on a label, which we will use again later.
replicas
Specifies the number of Pod replicas. The example above requests two Pods.
spec (under template)
The Pod spec. Specifies the image and port used by the container.
[This service includes the following: ]
type
In this field you choose from several types to specify how the Service is exposed to the outside world.
Specifically:
"ClusterIP" (reachable via a cluster-internal IP; used for internal communication, not accessible from outside),
"NodePort" (reachable via the worker node's IP and a fixed port),
"LoadBalancer" (reachable via a load balancer's IP),
"ExternalName" (a special setting used to reach an external name from inside the cluster).
When exposing a service externally you would normally use "LoadBalancer", but since we are not going public this time, we specify "NodePort", which connects directly to the worker node.
(For external exposure, a resource called Ingress, which works as an L7 load balancer and can terminate SSL, is often used, but we won't use it this time, so I'll set it aside for now. Note that "LoadBalancer" can also be made SSL-capable, but when creating one in the cloud, some vendors do not support this due to their specifications.)
Also, "NodePort" is not only for testing like this; it can also be used in situations where you want to do something on a per-node-port basis.
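For contrast with the NodePort used in this article, a LoadBalancer-type Service only changes the "type" field. A sketch follows; the name nginx-lb is hypothetical, and whether an external IP is actually provisioned depends on your environment (cloud providers do this, a local Docker Desktop cluster generally does not).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb       # hypothetical name for illustration
spec:
  type: LoadBalancer   # ask the environment for an external load balancer
  selector:
    app: nginx
  ports:
  - port: 80           # port exposed by the load balancer
    targetPort: 80     # container port to forward to
```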
ports
In addition to setting the protocol to TCP, three ports are defined here:
"port" is the Service's port,
"targetPort" is the container's port, and
"nodePort" is the worker node's port.
Since NodePort is specified as the type this time, nginx will be accessed through this "nodePort".
selector
Specify the label set in the Pod in this item.
Creating resources
Now that we've covered the basics, let's actually create a manifest file, run it, and create resources.
By the way, you can use any name for the manifest files used in Kubernetes.
Unlike Dockerfile and docker-compose.yml, there is no required file name, so name each one according to rules that are easy to manage.
```shell
# Open the file with vi, paste the example source, and save it
vi example.yml
# Then create the resources with the kubectl command
kubectl apply -f example.yml
```
Check your pods
```shell
$ kubectl get po -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6595874d85-7mt97   1/1     Running   0          52s
nginx-deployment-6595874d85-mds2z   1/1     Running   0          52s
```
You can see that two Pods have been created and are marked as "Running".
For example, changing the number of "replicas" in the yml file and applying it again will increase or decrease the number of Pods accordingly.
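As a sketch, scaling to three Pods would only require editing one line of the manifest and re-applying it:

```yaml
# In example.yml, change the replicas value under the Deployment's spec:
spec:
  replicas: 3   # was 2; after `kubectl apply -f example.yml`, a third Pod appears
```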
Next, you can check the service using the following command
```shell
$ kubectl get services
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP          26h
nginx-service   NodePort    10.106.92.238   <none>        8080:30080/TCP   21m
```
It looks like the nodePort and service ports are successfully linked, so let's access the Pod by entering "http://localhost:30080/" in the browser.
Yes, the usual welcome page has appeared!
summary
Since this is an "introductory" guide, I kept the explanation to the basics.
There are further adjustments needed before you can actually expose this publicly and provide a service, but I hope this has helped dispel the feeling that Kubernetes is something incomprehensible.
I'd like to write follow-up articles beyond this introduction, so please look forward to them.
Thank you very much!