Try creating a Kubernetes cluster using GCP's GKE

This is Ohara from the technical sales department.


In this article, we will create a network load balancing cluster using GCP's container orchestration tool, "Google Kubernetes Engine (GKE)."

Introduction

Google Kubernetes Engine (GKE)
has a built-in feature for managing network load balancing.

To use network load balancing,
simply include
type: LoadBalancer in your Service configuration file, and GKE will set up the Service and attach a network load balancer to it.

Reference: https://cloud.google.com/kubernetes-engine/
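For reference, a Service configuration file with type: LoadBalancer looks roughly like the sketch below. The file name nginx-service.yaml and the app: nginx label are assumptions for illustration; in this article we will instead create the Service later with kubectl expose.

# Sketch of a minimal Service manifest that requests a network load balancer
cat <<'EOF' > nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer      # asks GKE to provision a network load balancer
  selector:
    app: nginx            # send traffic to pods carrying this label (assumed label)
  ports:
    - port: 80            # port exposed by the load balancer
      targetPort: 80      # port the nginx containers listen on
EOF
# Once a cluster exists, apply it with: kubectl apply -f nginx-service.yaml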

This time, we will launch [Cloud Shell] from the GCP console and do all of the configuration there.

Check your region

First, set the gcloud default values and environment variables.
You need to set your project ID and choose a GCP zone and region.

[Region] You can check the list of available GCP regions using the command below.
By the way, [Zone] cannot be checked with the command below, so
pick any zone from [a to c] within your chosen region and use that.

* You can also check [Region] and [Zone] on the GCP official website below.
Reference: https://cloud.google.com/compute/docs/regions-zones/regions-zones?hl=ja

 
gcloud compute regions list
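If you also want to check the zones from the CLI, you can list them and filter by region name, as in the sketch below (the region used here is the one chosen later in this article).

# Sketch: list the zones that belong to the asia-northeast1 region
gcloud compute zones list | grep asia-northeast1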

This time, we will use the following example to configure the settings.

[Project name]
ohara-test

[Zone name]
asia-northeast1-a *Enter any zone from [a to c]

[Region name]
asia-northeast1

Set default values for gcloud

gcloud config set project [project name]
gcloud config set compute/zone [zone name]
gcloud config set compute/region [region name]
gcloud config list
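Filled in with the example values above, the commands look like this:

# Example using the values from this article
gcloud config set project ohara-test
gcloud config set compute/zone asia-northeast1-a
gcloud config set compute/region asia-northeast1
gcloud config list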

 Setting environment variables

export CLUSTER_NAME="httploadbalancer"
export ZONE="[Zone]"        # the same zone set as the gcloud default above
export REGION="[Region]"    # the same region set as the gcloud default above

*Note: If the zone and region set in the [gcloud default values] and the [environment variables]
are not the same, an error will occur.
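As a quick sanity check, you can compare the exported variables against the gcloud defaults; the sketch below assumes a Bourne-compatible shell such as the one in Cloud Shell.

# Sketch: verify the exported ZONE/REGION match the gcloud defaults
[ "$ZONE" = "$(gcloud config get-value compute/zone)" ] && \
[ "$REGION" = "$(gcloud config get-value compute/region)" ] && \
echo "zone/region match" || echo "zone/region mismatch"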

Create a Kubernetes cluster with GKE

In this example, we will start the cluster with 3 nodes (3 instances).
*It may take about 3 to 5 minutes to start up.

 gcloud container clusters create networklb --num-nodes 3 

*Startup successful.

If you check the Google Compute Engine (GCE) control panel, you will see that
three nodes (instances) have been created,
each configured as a Kubernetes node.
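You can also confirm this from Cloud Shell; creating the cluster with gcloud also sets up kubectl credentials, so the following sketch should list the cluster and its three nodes.

# Optional check: show the cluster and its nodes
gcloud container clusters list
kubectl get nodes -o wide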

Deploy nginx on Kubernetes

 kubectl run nginx --image=nginx --replicas=3 

This creates a deployment that launches three pods,
each running an nginx container.
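Note that newer kubectl versions have removed the --replicas flag from kubectl run. If the command above fails for that reason, a roughly equivalent sketch is:

# Sketch: equivalent on newer kubectl versions
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3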

Verify that the pods are running

Check the status of your deployment.
You can see that the pods are running on different nodes.

Once all pods are in running status,
you can expose your nginx cluster as an external service.

kubectl get pods -o wide
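If your kubectl has the wait subcommand, you can also block until the deployment reports all replicas available instead of polling by hand (a sketch):

# Sketch: wait up to two minutes for the nginx deployment to become Available
kubectl wait --for=condition=Available deployment/nginx --timeout=120s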

Expose nginx to the outside world


Now we will create a network load balancer that distributes traffic across the three nginx instances.

kubectl expose deployment nginx --port=80 --target-port=80 \
  --type=LoadBalancer

Check the network load balancer address

Check EXTERNAL-IP (Global IP).

Please note that EXTERNAL-IP may show [pending] immediately after you run the command below.
In that case, re-run the command until an IP address appears under EXTERNAL-IP.
*It may take several minutes for the IP address to be displayed.

 kubectl get service nginx

You're done!

If you open the IP address shown in EXTERNAL-IP in your browser and
the default nginx welcome page is displayed, the setup was a success.
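If you prefer to check from the terminal instead of the browser, something like the sketch below also works; the jsonpath expression pulls the load balancer IP from the Service status.

# Sketch: fetch the external IP and request the nginx top page with curl
EXTERNAL_IP=$(kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I "http://${EXTERNAL_IP}"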


Undeploy nginx (delete it)

The next step is to delete the Kubernetes cluster that we created earlier.
Although it is possible to delete it using the GCE/GKE control panel,
let's try deleting it from Cloud Shell.

■ Delete service

 kubectl delete service nginx

■ Delete the deployment

 kubectl delete deployment nginx


■ Deleting a cluster

 gcloud container clusters delete networklb 

All of the deletions are now complete.
If you check the GCE control panel, you should see that the three instances have been deleted.
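To double-check from Cloud Shell as well, the commands below should no longer show the networklb cluster or its three instances (a sketch).

# Optional check: confirm the cluster and its instances are gone
gcloud container clusters list
gcloud compute instances list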

Summary

■ Created a three-node Kubernetes cluster with GKE from Cloud Shell.

■ Deployed nginx on the Kubernetes cluster and exposed it externally.

■ Deleted the Kubernetes cluster.

If you found this article helpful, please give it a like!

About the author

ohara

I started my career in the telecommunications industry as a salesperson, introducing IT products such as network services, OA equipment, and groupware to corporate customers.

After that, I worked as a pre-sales engineer for physical server/hosting services and as a customer engineer for SaaS-based SFA/CRM/BtoB e-commerce at an SIer-affiliated data center company, before joining my current company, Beyond.

I am currently stationed in Shenzhen, China, the Silicon Valley of Asia, and my daily routine is watching Chinese dramas and bilibili.

Qualification: Bookkeeping Level 2