Try setting a free SSL certificate on the GCP load balancer using the gcloud command
Hello.
I'm Mandai, in charge of Wild on the development team.
Today, as the title suggests, I would like to try out the recently added fully managed and free SSL certificate that can now be applied to the load balancer.
Nowadays, SSL has become a necessity, and the cost of an SSL certificate can no longer be ignored, so some people may be very happy about this.
When did it become possible to issue free SSL certificates to GCP load balancers?
Did you know that you can now apply a free SSL certificate to GCP's HTTP(S) load balancer?
This feature was launched in October 2018, and I'm embarrassed to say I wasn't aware of it.
AWS added the equivalent feature, AWS Certificate Manager, back in January 2016, so GCP was quite late by comparison. Functionally, though, GCP's offering is simple and in line with common practice, making it easy for individuals to use, and overall I found it likable.
What's inside GCP's free SSL certificate?
As you can easily confirm in the GCP console, GCP's managed SSL certificates are issued via Let's Encrypt.
Speaking of Let's Encrypt, its certificates are valid for only three months, so the usual practice is to install a renewal script on the server and run it periodically.
On a single server that's all there is to it, but things get more complicated when a public cloud load balancer is involved.
With managed certificates, GCP handles all of this renewal work on the load balancer side, creating an environment where SSL can be used comfortably and easily.
Note that to issue a certificate, the DNS record of the domain you plan to secure must already point to the IP address of the load balancer that will hold the certificate.
Because of this restriction, there may be a short period right after you apply the certificate during which the site is reachable externally but SSL is not yet available.
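If you want to confirm in advance which address the domain currently resolves to, a quick check with dig works (the domain here is the placeholder used in this article; substitute your own):

```shell
# Check the A record the domain currently resolves to.
# test.example.com is a placeholder domain; replace it with yours.
# The printed address must match the load balancer's IP before
# the managed certificate can finish provisioning.
dig +short test.example.com A
```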
Try setting it with the gcloud command
You can place an SSL certificate on your load balancer using the gcloud command.
First, let's update the gcloud command itself.
According to my research, any version from 220.0.0 onward should work, but just to be safe, let's update to the latest version.
```shell
gcloud components update
```
The gcloud command is updated frequently, so I'm sure there will be some updates.
Once the update is complete, try creating an SSL certificate using the command.
This time, let's create an SSL certificate by setting the name of the SSL certificate to test-example-ssl and the domain for creating the SSL certificate to test.example.com.
```shell
gcloud beta compute ssl-certificates create test-example-ssl \
    --domains test.example.com

Created [https://www.googleapis.com/compute/beta/projects/hogehoge-xxxxxx/global/sslCertificates/test-example-ssl].
NAME              TYPE     CREATION_TIMESTAMP             EXPIRE_TIME  MANAGED_STATUS
test-example-ssl  MANAGED  2019-04-01T03:12:53.962-07:00               PROVISIONING
    test.example.com: PROVISIONING
```
To see what's going on, use the gcloud beta compute ssl-certificates list command.
```shell
gcloud beta compute ssl-certificates list

NAME              TYPE     CREATION_TIMESTAMP             EXPIRE_TIME                    MANAGED_STATUS
www-example-ssl   MANAGED  2019-03-01T23:33:53.360-08:00  2019-05-30T23:41:52.000-07:00  ACTIVE
    www.example.com: ACTIVE
test-example-ssl  MANAGED  2019-04-01T03:12:53.962-07:00                                 PROVISIONING
    test.example.com: PROVISIONING
```
www-example-ssl was created a while ago and is already enabled.
test-example-ssl is shown as provisioning, but if you leave it as is, provisioning will eventually fail.
This is because the IP address in the A record for test.example.com does not yet point to the load balancer we are about to create.
As with www-example-ssl, once provisioning is complete, the MANAGED_STATUS column will change to ACTIVE.
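If you want more detail than the list view, describing the certificate also shows the per-domain provisioning status; this uses the same beta command group as above:

```shell
# Show the full certificate resource, including the domain-level
# status (e.g. PROVISIONING, FAILED_NOT_VISIBLE, ACTIVE).
gcloud beta compute ssl-certificates describe test-example-ssl
```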
Apply an SSL certificate to your load balancer
Now, we will apply the provisioned SSL certificate to the load balancer, but since we don't have the essential load balancer, we will start by creating a new load balancer.
There are a few steps, but don't get discouraged and keep going!
Creating an instance group
Create an instance group.
Here, we will create a container to hold the instance.
```shell
gcloud compute instance-groups unmanaged create example-instance-group \
    --zone=us-east1-b

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].
NAME                    LOCATION    SCOPE  NETWORK  MANAGED  INSTANCES
example-instance-group  us-east1-b  zone                     0
```
There are two types of instance groups:
- managed instance groups, which let you easily use features such as autoscaling and instance templates
- unmanaged instance groups, which do not have such features
This time, we don't need such functionality, so we will create it as an unmanaged instance group.
The command to check all created instance groups is as follows.
```shell
gcloud compute instance-groups list

NAME                    LOCATION           SCOPE  NETWORK  MANAGED  INSTANCES
default-group           asia-northeast1-b  zone   default  No       1
example-instance-group  us-east1-b         zone   default  No       1
```
Add instance to instance group
Let's add instances to the instance group we created.
This time we will add the example-web01 instance, which lives in zone us-east1-b of the us-east1 region.
```shell
gcloud compute instance-groups unmanaged add-instances example-instance-group \
    --instances example-web01 \
    --zone us-east1-b

Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].
```
To check the instances added to the instance group, use the following command:
```shell
gcloud compute instance-groups describe example-instance-group --zone [region-zone]

creationTimestamp: '2019-04-05T00:13:10.372-07:00'
description: ''
fingerprint: hogehoge
id: 'xxxxxxxxxxxxxxxx'
kind: compute#instanceGroup
name: example-instance-group
namedPorts:
- name: http
  port: 80
network: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/networks/default
selfLink: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group
size: 1
subnetwork: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/regions/us-east1/subnetworks/default
zone: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b
```
Configure port mapping for instance groups
Set the listening port for the instance group.
Here we will open the HTTP (number 80) port.
```shell
gcloud compute instance-groups unmanaged set-named-ports example-instance-group \
    --named-ports http:80 \
    --zone us-east1-b

Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].
```
You can set multiple named ports in the --named-ports option by separating them with commas.
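For example, to register both an HTTP and an HTTPS port on the same group (the https:443 entry is purely illustrative; this article's setup only needs http:80):

```shell
# Register two named ports at once, comma-separated.
# https:443 is a hypothetical addition for illustration only.
gcloud compute instance-groups unmanaged set-named-ports example-instance-group \
    --named-ports http:80,https:443 \
    --zone us-east1-b
```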
Creating a health check
A health check monitors instance health by periodically accessing a specified path.
It is linked to the backend service we will set up later; if the instance returns an HTTP 200, it is judged to be healthy.
Here, we will create a health check for port 80.
```shell
gcloud compute health-checks create http example-check \
    --port 80

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/healthChecks/example-check].
NAME           PROTOCOL
example-check  HTTP
```
As for the health check, even when the load balancer holds an SSL certificate as in this case, health check traffic flows between the load balancer and the instances, so a plain HTTP health check is sufficient.
Create backend service
The backend service is responsible for grouping multiple instance groups together.
Instance groups likewise group instances, but the backend service is the unit used when switching the forwarding destination per URL pattern in the URL map that appears later.
The --health-checks option specifies the health check for the port on which the backend service's instance groups listen.
```shell
gcloud compute backend-services create example-backend \
    --protocol HTTP \
    --health-checks example-check \
    --global

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/backendServices/example-backend].
NAME             BACKENDS  PROTOCOL
example-backend            HTTP
```
Attach an instance group to a backend service
Let's add an instance group to the backend service we created earlier.
The smallest unit you can register in a backend service is an instance group.
```shell
gcloud compute backend-services add-backend example-backend \
    --balancing-mode UTILIZATION \
    --max-utilization 0.8 \
    --capacity-scaler 1 \
    --instance-group example-instance-group \
    --instance-group-zone us-east1-b \
    --global

Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/backendServices/example-backend].
```
To balance each instance group within a backend service, Google Cloud provides two balancing modes:
UTILIZATION
Distributes the load based on CPU usage. This mode is recommended if you want to use machine resources effectively.
Also, autoscaling is triggered when the --max-utilization value is exceeded; with 0.8, that means it kicks in once CPU usage passes 80%.
RATE
Load balancing based on requests per second (RPS). This mode is recommended if you want to stabilize the response of the service.
Autoscaling will occur if there are more requests than the percentage specified by --target-load-balancing-utilization.
In both modes, you can specify the maximum number of instances that autoscale will launch with the --max-num-replicas option.
Creating a URL map
There are two types of GCP load balancing mechanisms: TCP-based and content-based.
The GCP documentation also covers cross-region load balancing, but that configuration is an advanced form of the content-based setup (I won't go into details).
In the content-based case, this URL map is what is displayed as the load balancer on the load balancing page in the GCP console.
As the name suggests, URL maps route URL patterns to multiple backend services.
By using this feature, you can break the overall web service into finer-grained functions and move toward a microservice architecture.
```shell
gcloud compute url-maps create example-url-map \
    --default-service example-backend

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/urlMaps/example-url-map].
NAME             DEFAULT_SERVICE
example-url-map  backendServices/example-backend
```
Since this is a URL map with only a single backend service, there is no routing description.
To route with a URL map, you create an object called a path matcher, which defines path-based routes, and add it to the URL map.
The path matcher is a little outside the scope of this article, so I won't explain it here; if you are interested, please refer to the official documentation.
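For reference only, a minimal path-matcher addition would look roughly like the following; api-backend is a hypothetical second backend service, and this sketch is untested:

```shell
# Route /api/* to a hypothetical api-backend service; all other
# paths continue to fall through to example-backend.
gcloud compute url-maps add-path-matcher example-url-map \
    --path-matcher-name api-matcher \
    --default-service example-backend \
    --path-rules "/api/*=api-backend"
```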
Creating a target proxy
The target proxy is responsible for linking the URL map and the SSL certificate.
```shell
gcloud compute target-https-proxies create example-https-proxy \
    --url-map example-url-map \
    --ssl-certificates test-example-ssl

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/targetHttpsProxies/example-https-proxy].
NAME                 SSL_CERTIFICATES  URL_MAP
example-https-proxy  test-example-ssl  example-url-map
```
Creating an IP address
Next, create an IP address to use with the load balancer.
```shell
gcloud compute addresses create example-ip \
    --ip-version=IPV4 \
    --global

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/addresses/example-ip].
```
To check the IP address you are currently using, use the following command:
```shell
gcloud compute addresses list

NAME           ADDRESS/RANGE    TYPE  PURPOSE  NETWORK  REGION           SUBNET  STATUS
default-web01  xxx.xxx.xxx.xxx                          asia-northeast1          IN_USE
example-ip     xxx.xxx.xxx.xxx                                                   RESERVED
```
Adding frontend services
Finally, create a front-end service that connects the URL map with the outside world.
```shell
gcloud compute forwarding-rules create example-frontend \
    --address xxx.xxx.xxx.xxx \
    --global \
    --target-https-proxy example-https-proxy \
    --ports 443

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/forwardingRules/example-frontend].
```
The front-end service (forwarding rule) ties together the global IP address, the port, and the target proxy.
With that, you have successfully applied a free SSL certificate to your load balancer.
Summary
I set out to write an article about applying an SSL certificate to a load balancer, but I regret that most of it ended up being about creating the load balancer itself. Conversely, applying an SSL certificate to a load balancer that is already built is very easy.
There are surprisingly many steps to stand up a load balancer, and at first it can be hard to know what order to do them in, so let me lay the order out again.
- Create an instance group
- Add a port to an instance group
- Create a health check
- Create a backend service
- Add an instance group to your backend service
- Create a URL mapping
- Create a target proxy
- Create an IP address
- Create a frontend service
Add the step of creating the SSL certificate to this list; it can go anywhere before creating the target proxy.
Additionally, an SSL certificate is not usable immediately after creation, since provisioning can take some time, so it's best to create it well in advance.
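As a sketch of how simple the swap is on a finished load balancer: one update to the target proxy attaches a different certificate. Note that, as far as I know, --ssl-certificates replaces the whole list, so include every certificate you want to keep:

```shell
# Replace the certificate list on the existing target HTTPS proxy.
# List every certificate that should remain attached, comma-separated.
gcloud compute target-https-proxies update example-https-proxy \
    --ssl-certificates test-example-ssl
```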
That's it.