Try setting a free SSL certificate on the GCP load balancer using the gcloud command

Hello.
I'm Mandai, in charge of Wild on the development team.

As the title suggests, today I'd like to try out the recently added fully managed, free SSL certificates, which can now be applied to load balancers.
SSL has become a necessity these days, and the cost of SSL certificates can no longer be ignored, so I think some people will be very happy about this.

 

When did GCP start offering free SSL certificates for load balancers?

Did you know that you can apply free SSL certificates to GCP load balancers?
This feature was launched in October 2018, and I'm ashamed to say that I was unaware of it.

AWS Certificate Manager was added to AWS back in January 2016, so there is quite a gap in timing, but I personally like GCP's offering because it's functionally simple and in line with global standards.

 

What's inside GCP's free SSL certificate?

You can easily see that GCP's SSL certificates are issued via Let's Encrypt by checking the GCP console.

With Let's Encrypt, the certificate renewal cycle is short at three months, so the standard approach is to install a renewal script on the server and run it regularly.
That works fine on a single server, but with a public cloud load balancer things get complicated.

GCP provides all of the functionality related to this renewal process on the load balancer side, giving you an environment in which SSL can be used comfortably and easily.

Also, to issue a certificate, the DNS for the domain to which you plan to apply the SSL certificate must point to the IP address assigned to the load balancer where the certificate will be placed.
Because of this restriction, for a short period immediately after applying the certificate, SSL will not be enabled even if the site is publicly accessible.
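As a quick sanity check before requesting a certificate, you can compare where the domain currently resolves with the address reserved for the load balancer. This is just a sketch; test.example.com and example-ip are the placeholder names used in this article.

```shell
# Check the A record for the domain (placeholder domain from this article)
dig +short test.example.com

# Compare with the IP address reserved for the load balancer
gcloud compute addresses describe example-ip --global --format="value(address)"
```

If the two don't match, provisioning will stall until the DNS record is updated.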

 

Set it up with the gcloud command

Placing an SSL certificate on a load balancer can be done entirely with the gcloud command.

First, let's update the gcloud command itself.
After some research, it seems that versions 220.0.0 and later work fine, but just to be safe, let's make sure we're on the latest version.

gcloud components update

 

The gcloud command is updated frequently, so there will probably be something to update.
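You can confirm which version you ended up on afterwards:

```shell
# Show the installed gcloud SDK and component versions
gcloud version
```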

Once the update is complete, let's create an SSL certificate using the command.
This time, let's create an SSL certificate named test-example-ssl for the domain test.example.com.

gcloud beta compute ssl-certificates create test-example-ssl \
    --domains test.example.com
Created [https://www.googleapis.com/compute/beta/projects/hogehoge-xxxxxx/global/sslCertificates/test-example-ssl].
NAME              TYPE     CREATION_TIMESTAMP             EXPIRE_TIME  MANAGED_STATUS
test-example-ssl  MANAGED  2019-04-01T03:12:53.962-07:00               PROVISIONING
    test.example.com: PROVISIONING

 

To see what's happening now, use the "gcloud beta compute ssl-certificates list" command.

gcloud beta compute ssl-certificates list
NAME              TYPE     CREATION_TIMESTAMP             EXPIRE_TIME                    MANAGED_STATUS
www-example-ssl   MANAGED  2019-03-01T23:33:53.360-08:00  2019-05-30T23:41:52.000-07:00  ACTIVE
    www.example.com: ACTIVE
test-example-ssl  MANAGED  2019-04-01T03:12:53.962-07:00                                 PROVISIONING
    test.example.com: PROVISIONING

 

www-example-ssl was created a while ago and is already active.
test-example-ssl shows as provisioning, but if you leave it like this, it will eventually end in an error.
This is because the IP address in the A record for test.example.com doesn't yet point to the load balancer we're about to create.
Once provisioning completes successfully, MANAGED_STATUS changes to ACTIVE.
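If you want to watch a single certificate rather than the whole list, describe works too. A minimal sketch, using the certificate name created above:

```shell
# Show only the provisioning status of one managed certificate
gcloud beta compute ssl-certificates describe test-example-ssl \
    --format="value(managed.status)"
```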

 

Applying an SSL certificate to a load balancer

Now, we will apply the provisioned SSL certificate to the load balancer, but since we don't have a load balancer yet, we will start by creating a new one.
There are a few steps, but don't give up!

 

Creating an instance group

Create an instance group, which is a container for your instances.

gcloud compute instance-groups unmanaged create example-instance-group \
    --zone=us-east1-b
Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].
NAME                    LOCATION    SCOPE  NETWORK  MANAGED  INSTANCES
example-instance-group  us-east1-b  zone                     0

 

There are two types of instance groups: managed instance groups, which let you easily use features such as autoscaling and instance templates, and unmanaged instance groups, which do not have those features.

This time, we don't need that functionality, so we'll create an unmanaged instance group.

The command to check all the instance groups you created is as follows:

gcloud compute instance-groups list
NAME                    LOCATION           SCOPE  NETWORK  MANAGED  INSTANCES
default-group           asia-northeast1-b  zone   default  No       1
example-instance-group  us-east1-b         zone   default  No       1

 

Add an instance to an instance group

Add an instance to the created instance group.
In this example, we add example-web01, which lives in zone us-east1-b of the us-east1 region.

gcloud compute instance-groups unmanaged add-instances example-instance-group \
    --instances example-web01 \
    --zone us-east1-b
Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].

 

To see which instances have been added to an instance group, use the following command:

gcloud compute instance-groups describe example-instance-group --zone [region-zone]
creationTimestamp: '2019-04-05T00:13:10.372-07:00'
description: ''
fingerprint: hogehoge
id: 'xxxxxxxxxxxxxxxx'
kind: compute#instanceGroup
name: example-instance-group
namedPorts:
- name: http
  port: 80
network: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/networks/default
selfLink: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group
size: 1
subnetwork: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/regions/us-east1/subnetworks/default
zone: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b

 

 

Configure port mapping for an instance group

Set the listening port for the instance group.
Here, we will open the HTTP (80) port.

gcloud compute instance-groups unmanaged set-named-ports example-instance-group \
    --named-ports http:80 \
    --zone us-east1-b
Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].

 

You can specify multiple ports in the --named-ports option by separating them with commas.
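For instance, if the instances also listened on a second port, both could be registered in one call. This is a hypothetical sketch; the article itself only needs port 80, and the name custom:8080 is made up for illustration.

```shell
# Register two named ports at once (comma-separated)
gcloud compute instance-groups unmanaged set-named-ports example-instance-group \
    --named-ports http:80,custom:8080 \
    --zone us-east1-b
```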

 

Creating a Health Check

A health check is a monitoring function that accesses a specified URL to check whether the service is up or down.
It will be linked to the backend service configured later; if an HTTP 200 status code is returned, the service is judged to be functioning properly.

Here, we will create a health check for port 80.

gcloud compute health-checks create http example-check \
    --port 80
Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/healthChecks/example-check].
NAME           PROTOCOL
example-check  HTTP

 

As for health checks: even when the load balancer has an SSL certificate, as in this case, the health-check traffic goes from the load balancer to the instance, so an HTTP health check is sufficient.

 

Create a backend service

A backend service aggregates multiple instance groups, just as an instance group aggregates instances. A backend service is also the unit used to switch forwarding destinations per URL pattern in the URL map, which will be explained later.

For the health check, specify the one that matches the port the instance groups in the backend service listen on.

gcloud compute backend-services create example-backend \
    --protocol HTTP \
    --health-checks example-check \
    --global
Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/backendServices/example-backend].
NAME             BACKENDS  PROTOCOL
example-backend            HTTP

 

 

Associate an instance group with a backend service

Add an instance group to the backend service you just created.
The smallest unit you can register with a backend service is an instance group.

gcloud compute backend-services add-backend example-backend \
    --balancing-mode UTILIZATION \
    --max-utilization 0.8 \
    --capacity-scaler 1 \
    --instance-group example-instance-group \
    --instance-group-zone us-east1-b \
    --global
Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/backendServices/example-backend].

 

To balance load across the instance groups in a backend service, GCP provides two balancing modes:

 

UTILIZATION

Distributes the load based on CPU utilization. This mode is recommended if you want to use machine resources efficiently.
With autoscaling, scaling kicks in when the --max-utilization value is exceeded, so with a value of 0.8, instances are added when CPU utilization exceeds 80%.

 

RATE

This mode balances the load based on requests per second (RPS) and is recommended if you want to stabilize your service's response times.
With autoscaling, scaling kicks in when the incoming request rate exceeds the percentage specified by --target-load-balancing-utilization.

In both modes, you can cap the number of instances autoscaling will launch with the --max-num-replicas option.
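As a sketch of what RATE mode might look like (this article uses UTILIZATION; the per-instance rate of 100 here is a hypothetical value):

```shell
# RATE mode: cap each instance at roughly 100 requests per second
gcloud compute backend-services add-backend example-backend \
    --balancing-mode RATE \
    --max-rate-per-instance 100 \
    --instance-group example-instance-group \
    --instance-group-zone us-east1-b \
    --global
```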

 

Creating a URL Map

GCP has two types of load-balancing mechanisms: TCP-based and content-based.

The GCP documentation also mentions cross-region load balancing , but the configuration is described as an advanced version of content-based load balancing (we won't go into details here).

In the content-based case, this URL map is what is displayed as a load balancer on the load balancing page of the GCP console.

As the name suggests, a URL map routes URL patterns to multiple backend services.
Using this feature, you can break the overall web service into finer-grained pieces, moving toward microservices.

gcloud compute url-maps create example-url-map \
    --default-service example-backend
Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/urlMaps/example-url-map].
NAME             DEFAULT_SERVICE
example-url-map  backendServices/example-backend

 

This time there is no routing configuration, because the URL map has only a single backend service.
To route with a URL map, you create a path-based routing object called a path matcher and add it to the URL map.

Since path matchers are a bit outside the scope of this article, we won't go into detail here.
If you're interested, please refer to the official documentation
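To give a rough idea anyway, adding a path matcher could look like the following. This is a hypothetical sketch: example-api-backend is a made-up second backend service that does not exist in this article's setup.

```shell
# Route /api/* to a separate backend service; everything else uses the default
gcloud compute url-maps add-path-matcher example-url-map \
    --path-matcher-name api-matcher \
    --default-service example-backend \
    --path-rules "/api/*=example-api-backend"
```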

 

Creating a Target Proxy

The target proxy is responsible for associating the URL map with the SSL certificate.

gcloud compute target-https-proxies create example-https-proxy \
    --url-map example-url-map \
    --ssl-certificates test-example-ssl
Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/targetHttpsProxies/example-https-proxy].
NAME                 SSL_CERTIFICATES  URL_MAP
example-https-proxy  test-example-ssl  example-url-map

 

 

Creating an IP Address

Next, create an IP address to use with the load balancer.

gcloud compute addresses create example-ip --ip-version=IPV4 --global
Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/addresses/example-ip].

 

To find out which IP address you are currently using, use the following command:

gcloud compute addresses list
NAME           ADDRESS/RANGE    TYPE  PURPOSE  NETWORK  REGION           SUBNET  STATUS
default-web01  xxx.xxx.xxx.xxx                          asia-northeast1          IN_USE
example-ip     xxx.xxx.xxx.xxx                                                   RESERVED

 

 

Adding a Frontend Service

Finally, create a frontend service that connects the outside world to the URL map.

gcloud compute forwarding-rules create example-frontend \
    --address xxx.xxx.xxx.xxx \
    --global \
    --target-https-proxy example-https-proxy \
    --ports 443
Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/forwardingRules/example-frontend].

 

The frontend service ties together the global IP address, the port, and the target proxy.

You have now successfully applied a free SSL certificate to your load balancer.
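Once the certificate finishes provisioning and DNS points at the new address, you can verify the whole chain end to end. A sketch, using this article's placeholder domain:

```shell
# Confirm the HTTPS frontend responds
curl -I https://test.example.com/

# Inspect the certificate actually being served (issuer and validity period)
openssl s_client -connect test.example.com:443 -servername test.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -dates
```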

 

Summary

I set out to write an article about applying an SSL certificate to a load balancer, but I regret that it ended up being mostly about creating the load balancer. Put another way, applying an SSL certificate to an already-created load balancer is very easy.

There are surprisingly many steps to setting up a load balancer, and at first it can be hard to know what order to create things in, so let's review the order once more:

  1. Create an instance group
  2. Add named ports to the instance group
  3. Create a health check
  4. Create a backend service
  5. Add the instance group to the backend service
  6. Create a URL map
  7. Create a target proxy
  8. Create an IP address
  9. Create a frontend service

On top of this, add a step to create the SSL certificate; it seems fine to do this at any point before creating the target proxy.
Also, since the certificate is not available immediately after creation and provisioning takes some time, it's best to allow plenty of time for this work.

 
That's it.

If you found this article helpful , please give it a like!

The person who wrote this article


Yoichi Bandai

My main job is developing web APIs for social games, but I'm also fortunate to be able to do a lot of other work, including marketing.
Furthermore, my portrait rights at Beyond are treated as CC0.