[Introduction] Setting up nginx on Kubernetes and displaying the default page

Hello.
I'm the infrastructure enthusiast from the System Solutions Department, and I mostly write introductory articles.

As a die-hard infrastructure lover, I'll be writing about Kubernetes this time.
There are so many thorough introductory articles out there that I almost gave up before even starting, but I'll try to summarize what I've learned in my own way, smoothly and in an easy-to-understand manner.

This time, we will actually set up nginx on Kubernetes and display its welcome page.

Developers may rarely get deeply involved with Kubernetes itself, but to maximize application performance in an environment built on it, you will need to design your application to work well with Kubernetes. So please use this article as a stepping stone to get started!

What is Kubernetes?

Here's the documentation explanation:

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, facilitating both declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem, with a wide range of services, support, and tools available.
Official documentation: https://kubernetes.io/ja/docs/concepts/overview/what-is-kubernetes/

Since that may not be very clear, I will explain it step by step. First of all, Kubernetes is a container orchestration tool (it manages many containers like a conductor), so knowledge of containers is required as a prerequisite.

Declarative Configuration Management and Automation

In Kubernetes, you specify (declare) the desired state in a file called a manifest file, and Kubernetes creates Pods (collections of containers and volumes) to match that state.
For example, if you declare "four containers" and then accidentally delete one, Kubernetes will automatically create a replacement to restore the declared count.
Docker Compose and similar tools simply create containers and leave them there; Kubernetes differs in that it adds this ongoing management on top.
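As a sketch of what such a declaration looks like (the field names come from a standard Deployment manifest, as in the full example later in this article), the fragment below declares four Pods:

```yaml
# Fragment of a Deployment manifest: the declared state is "4 Pods".
# If one Pod is deleted, Kubernetes sees that the actual count (3)
# no longer matches the declared count (4) and starts a replacement.
spec:
  replicas: 4
```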

Manage containerized workloads and services

As those familiar with Docker may know, Docker runs containers on a single machine.
In contrast, Kubernetes manages containers located on multiple machines.
In large-scale applications, multiple machines are often used to distribute load and functionality. In that case, instead of running `docker run` on each machine individually, you prepare the aforementioned manifest file and Kubernetes deploys the containers across the machines for you. This makes it a valuable tool when deploying large-scale services.

Benefits

In addition to being the de facto standard for deploying large-scale services in containers, it offers several advantages: configuration files can be delivered along with the container, much as with Docker, and ReplicaSets can be used for version management, allowing updates without stopping containers.
The active global open-source community is also a major advantage.

Kubernetes Glossary

Kubernetes has many unique concepts and terms. You'll just have to memorize them, so I'll give a rough explanation of each.
For now, just read this and get a general idea that such things exist. You'll learn them whether you like it or not as you use it.

Master node

This is where commands are issued. As we'll see shortly, there are also worker nodes, and the master node's role is to give instructions to those worker nodes.
It is a collection of components that provide management functions, such as:
"kube-apiserver" (receives and processes commands from us)
"etcd" (a database that holds the received "desired state")
"kube-scheduler" (assigns Pods to the worker nodes)

Worker nodes

This is where Pods are actually deployed, working in conjunction with the master node.
It includes components such as:
"kubelet" (receives instructions from the master node, e.g. Pod assignments from the kube-scheduler)
"kube-proxy" (handles networking efficiently)

Pod

A Pod is a collection of containers and their associated volumes (storage space).
Even if you don't use volumes, Kubernetes still manages containers in units of Pods.
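For illustration, a minimal standalone Pod manifest might look like the sketch below (the name `nginx-pod` is made up for this example; in practice, Pods are usually created indirectly via a Deployment, as we'll see later):

```yaml
# Minimal Pod: one nginx container, no volumes.
# Kubernetes still manages this as a single Pod unit.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```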

Service

Its role is to group Pods together.
Each Service is assigned one IP address, and when you access that IP address, the Service load-balances requests across the Pods under it.
Note that this balancing is among Pods; distributing traffic across the worker nodes themselves (for example, choosing which node's IP to hit) is outside the Service's scope.

Replica Set

I explained that Kubernetes automatically restores the number of Pods to the desired level when Pods stop or are deleted; this requires something to manage the Pod count.
This is where ReplicaSets come in.

Deployment

This is used to manage Pod deployments.
While the aforementioned "Pod," "Service," and "ReplicaSet" can each be described in a manifest file (in YAML format), the "Pod" and "ReplicaSet" definitions are often included within this "Deployment" manifest.
In other words, when creating Pods, it will work if you have a manifest for the "Deployment" and a manifest for the "Service."

About the environment

As the saying goes, seeing is believing, so let's take a look at a manifest file containing the terminology explained in the previous section.
But before that, you'll need a Kubernetes execution environment to apply the manifest file.
You can use the Kubernetes services provided by each cloud provider (such as EKS for AWS, GKE for GCP, AKS for Azure, OKE for OCI, ACK for Alibaba Cloud, etc.), but these are rather expensive for beginners, so we recommend using the feature included with "Docker Desktop".

The setup is simple.
In Docker Desktop, go to "Settings" -> "Kubernetes" and check "Enable Kubernetes".

Just wait for everything to install, and you're good to go. It's great!
With just this, you can start learning Kubernetes.

Example of a manifest file

Now, let's look at an example of a manifest file.
As an example, I've borrowed a YAML file from the documentation that describes a Deployment to create nginx.
It looks like this.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30080
    protocol: TCP
  selector:
    app: nginx

The upper part, separated by "---", is the Deployment description, and the lower part is the Service description.
You can describe them in separate manifest files and manage them separately, but if you describe them in the same file, separate the resources with three hyphens ("---") like this.
Here is an explanation of each item from the top.

■ "apiVersion" ⇒ Kubernetes resources have an API version, so specify the appropriate one.
Finding it can be a bit complicated depending on the resource, but for Deployments and Services, you can run the commands "kubectl api-resources | grep Deployment" and "kubectl api-resources | grep Service" to confirm that the APIVERSION column shows "apps/v1" and "v1" respectively.

■ "kind" ⇒ Enter the type of resource. In this case, we are creating a Deployment and a Service, so the types are "Deployment" and "Service" respectively.

■ "metadata" ⇒ As the name suggests, this is metadata. You can assign labels to resources.
For now, just remember "name" and "labels".

■ "spec" ⇒ Describes the contents of the resource. The sub-items will differ depending on the resource.
[This Deployment includes the following:]
selector
The Deployment uses this to identify the Pods it manages. In the example, it matches on labels, which we will use again later.

replicas
Specifies the number of Pods to run.

template
The spec for the Pods. It specifies the image and port to be used in the container.

[This service includes the following:]
type
This item specifies how the service communicates with the outside world, and you select from several types.
Specifically, there are
"ClusterIP (can connect using ClusterIP. Used for internal communication, external access is not possible)",
"NodePort (can connect using the worker node's IP)",
"LoadBalancer (can connect using the load balancer's IP)", and
"ExternalName (a special setting used to connect from a Pod to the outside)".

When allowing external access, you would typically use "LoadBalancer" here, but since this is not public access, we've specified "NodePort" to connect directly to the worker node.
(For external access, you would often use a resource called Ingress, which functions as an L7 load balancer and can terminate SSL, but we're not using it this time, so we'll set it aside for now. Note that "LoadBalancer" can also be SSL-enabled, but when creating it in the cloud, some vendors may not support it due to their specifications.)
Also, "NodePort" can be used not only for tests like this but also in situations where you want to perform some operation on a NodePort basis.
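For reference, if you did want public access via "LoadBalancer", only the Service's "type" would change (and "nodePort" could be dropped, since the load balancer fronts the Service). This is a sketch reusing the names from the example above, not something we apply in this article:

```yaml
# Hypothetical variant of the nginx-service above using LoadBalancer.
# On a cloud provider this provisions an external load balancer;
# on Docker Desktop the Service is exposed on localhost instead.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
```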

port
Here, in addition to specifying the protocol as TCP, we define three ports.
"port" is the Service's port,
"targetPort" is the container's port, and
"nodePort" is the worker node's port.
Since we specified NodePort as the type, we will access nginx using this "nodePort".

selector
This field specifies the label configured on the Pods, which tells the Service which Pods to route traffic to.

Creating Resources

Now that we've covered the basics, let's actually create a manifest file, run it, and create some resources.
By the way, you can give manifest files any name you like.
Unlike Dockerfile and docker-compose.yml, there's no need for a specific name, so use a naming rule that makes them easy to manage.

# Open the file with vi, paste the example source, save it,
# and then create the resources with the kubectl command
vi example.yml
kubectl apply -f example.yml

Check the Pods

$ kubectl get po -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6595874d85-7mt97   1/1     Running   0          52s
nginx-deployment-6595874d85-mds2z   1/1     Running   0          52s

You can see that two Pods have been created and are in the "Running" state.
For example, changing the number of "replicas" in the yml file will increase or decrease the number of Pods.

Next, you can check the service with the following command:

$ kubectl get services
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP          26h
nginx-service   NodePort    10.106.92.238  <none>        8080:30080/TCP   21m

It looks like the nodePort and the Service port are linked properly (8080:30080), so try accessing the Pods by entering "http://localhost:30080/" in your browser.

Yes, the usual welcome page has appeared!

Summary

This is just an introduction, so I've only given a brief overview.
There are other things that need adjusting before you can actually release something to the public as a service, but I hope this helps dispel any sense that Kubernetes is completely incomprehensible.
I'd like to write more articles beyond this introduction, so please look forward to them.

Thank you very much!

If you found this article helpful, please give it a "Like"!

About the author

Infrastructure Wasshoi Man

I belong to the Systems Solutions Department.
I was lucky enough to be hired by Beyond because I enjoyed studying every day.
It's been nine years since I started debating whether to switch from glasses to contact lenses.