How to set up a free SSL certificate for a GCP load balancer using gcloud commands

Hello.
I'm Mandai, in charge of development on the Wild team.
As the title suggests, today I'd like to try out the recently added, fully managed, free SSL certificates that can now be applied to load balancers.
With SSL becoming a necessity these days and the cost of SSL certificates no longer something you can ignore, I think this will be very welcome news for many people.
When did GCP start offering free SSL certificates for load balancers?
Did you know that you can now apply free SSL certificates to GCP's load balancers?
This feature was launched in October 2018, and I'm embarrassed to admit I hadn't noticed it myself.
AWS added its Certificate Manager back in January 2016, so there's quite a gap in timing, but I personally like GCP's take because it's functionally simple and built on a widely used standard.
What's inside GCP's free SSL certificate?
You can easily see that GCP's SSL certificates are issued via Let's Encrypt by checking the GCP console.
With Let's Encrypt, the certificate renewal cycle is short, only three months, so the standard practice is to embed a renewal script on the server and run it regularly.
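For reference, on a self-managed server that standard practice usually boils down to a scheduled certbot run. A typical crontab entry might look like the following (the schedule and the reload hook are illustrative, not from this article); the appeal of the managed certificates is that none of this is needed:

```cron
# Renew any Let's Encrypt certificate within 30 days of expiry,
# every Monday at 03:00, then reload the web server (illustrative).
0 3 * * 1 certbot renew --quiet --post-hook "systemctl reload nginx"
```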
That's all there is to it, but things suddenly get complicated when it comes to load balancers on public clouds.
GCP provides all of the functionality for this renewal process on the load balancer side, giving you an environment in which SSL can be used comfortably and easily.
Note that in order to issue the certificate, the DNS for the domain must already point to the IP address configured on the load balancer where the certificate will be placed.
Because of this restriction, there will be a short window right after setup in which the site is publicly accessible but SSL is not yet working.
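One way to check this ahead of time is to compare the domain's A record with the load balancer's IP. A minimal sketch, assuming `dig` is installed; the domain and the expected IP below are placeholders, not values from a real setup:

```shell
# Compare the domain's A record with the load balancer IP before
# requesting the managed certificate. test.example.com and the
# expected IP are placeholders.
expected_ip="203.0.113.10"

dns_matches() {
  # pure comparison helper so the result is easy to branch on
  if [ "$1" = "$2" ]; then echo "ok"; else echo "mismatch"; fi
}

resolved_ip=$(dig +short test.example.com A 2>/dev/null | head -n 1)
echo "resolved: ${resolved_ip:-<none>} -> $(dns_matches "$resolved_ip" "$expected_ip")"
```

If the result is a mismatch, the certificate will sit in PROVISIONING until the A record is corrected.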
Set it up with the gcloud command
Placing an SSL certificate on a load balancer can be done using the gcloud command
First, let's update the gcloud command itself.
From what I've researched, versions 220.0.0 and later should be fine, but let's update to the latest version just to be safe.
gcloud components update
The gcloud command is updated frequently, so there will probably be something to install.
Once the update is complete, let's try creating an SSL certificate using the command line.
This time, we'll name the SSL certificate test-example-ssl and create it for the domain test.example.com.
gcloud beta compute ssl-certificates create test-example-ssl \
    --domains test.example.com

Created [https://www.googleapis.com/compute/beta/projects/hogehoge-xxxxxx/global/sslCertificates/test-example-ssl].
NAME              TYPE     CREATION_TIMESTAMP             EXPIRE_TIME  MANAGED_STATUS
test-example-ssl  MANAGED  2019-04-01T03:12:53.962-07:00               PROVISIONING
    test.example.com: PROVISIONING
To see what's happening now, use the gcloud beta compute ssl-certificates list command:
gcloud beta compute ssl-certificates list

NAME              TYPE     CREATION_TIMESTAMP             EXPIRE_TIME                    MANAGED_STATUS
www-example-ssl   MANAGED  2019-03-01T23:33:53.360-08:00  2019-05-30T23:41:52.000-07:00  ACTIVE
    www.example.com: ACTIVE
test-example-ssl  MANAGED  2019-04-01T03:12:53.962-07:00                                 PROVISIONING
    test.example.com: PROVISIONING
www-example-ssl was created a while ago and is already active.
As for test-example-ssl, it shows as PROVISIONING, but if left as is, it will eventually end in an error.
This is because the IP address in the A record for test.example.com does not yet point to the load balancer we are about to create.
Once provisioning is complete, the MANAGED_STATUS will change to ACTIVE.
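If you'd rather wait on the command line, the status can be polled. This is a sketch, not part of the original walkthrough: it assumes an authenticated gcloud and the certificate name created above, the `managed.status` format field is per the ssl-certificates API, and the RUN_GCLOUD guard is my addition so nothing runs by accident:

```shell
# Poll the managed certificate until it leaves PROVISIONING.
# Needs an authenticated gcloud, so nothing runs unless RUN_GCLOUD=1.
msg="dry run; set RUN_GCLOUD=1 with an authenticated gcloud to poll"
if [ "${RUN_GCLOUD:-0}" = "1" ]; then
  while true; do
    status=$(gcloud beta compute ssl-certificates describe test-example-ssl \
        --format="value(managed.status)")
    [ "$status" = "ACTIVE" ] && break
    echo "status: ${status:-unknown}; retrying in 60 seconds"
    sleep 60
  done
  echo "certificate is ACTIVE"
else
  echo "$msg"
fi
```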
Applying an SSL certificate to a load balancer
Now, we'll apply the provisioned SSL certificate to the load balancer, but since we don't have a load balancer yet, we'll start by creating one.
There are quite a few steps, but let's not give up!
Creating an instance group
We will create an instance group.
Here, we will create a container to hold the instances.
gcloud compute instance-groups unmanaged create example-instance-group \
    --zone=us-east1-b

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].
NAME                    LOCATION    SCOPE  NETWORK  MANAGED  INSTANCES
example-instance-group  us-east1-b  zone                     0
There are two types of instance groups: managed instance groups, which make it easy to use features such as autoscaling and instance templates, and unmanaged instance groups, which do not have such features.
This time we don't need that functionality, so we'll create an unmanaged instance group.
The command to check all the instance groups you created is as follows:
gcloud compute instance-groups list

NAME                    LOCATION           SCOPE  NETWORK  MANAGED  INSTANCES
default-group           asia-northeast1-b  zone   default  No       1
example-instance-group  us-east1-b         zone   default  No       1
Add an instance to an instance group
We will now add instances to the instance group we created.
This time, we will add the example-web01 instance, which is located in zone b of the us-east1 region.
gcloud compute instance-groups unmanaged add-instances example-instance-group \
    --instances example-web01 \
    --zone us-east1-b

Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].
To see which instances have been added to an instance group, use the following command:
gcloud compute instance-groups describe example-instance-group --zone [region-zone]

creationTimestamp: '2019-04-05T00:13:10.372-07:00'
description: ''
fingerprint: hogehoge
id: 'xxxxxxxxxxxxxxxx'
kind: compute#instanceGroup
name: example-instance-group
namedPorts:
- name: http
  port: 80
network: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/networks/default
selfLink: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group
size: 1
subnetwork: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/regions/us-east1/subnetworks/default
zone: https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b
Configure port mapping for an instance group
Configure the listening port for the instance group.
Here, we'll open HTTP port 80.
gcloud compute instance-groups unmanaged set-named-ports example-instance-group \
    --named-ports http:80 \
    --zone us-east1-b

Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/zones/us-east1-b/instanceGroups/example-instance-group].
You can set multiple named ports by separating them with commas, for example --named-ports http:80,https:443.
Creating a Health Check
A health check is a monitoring function that checks the availability of a specified URL.
This function is linked to the backend service that will be configured later, and if a 200 status code is returned, it is considered to be functioning correctly.
Here, we will create a health check for port 80
gcloud compute health-checks create http example-check \
    --port 80

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/healthChecks/example-check].
NAME           PROTOCOL
example-check  HTTP
Regarding health checks: even though the load balancer holds the SSL certificate in this setup, health-check traffic goes from the load balancer to the instances, so a plain HTTP health check is sufficient.
Create a backend service
Backend services are responsible for grouping multiple instance groups together.
Instance groups also serve the same purpose of grouping instances, but backend services are the unit used when changing the redirection destination based on URL patterns in the URL map, which will be discussed later.
The health check you specify here should be the one for the port that the instance group in the backend service listens on.
gcloud compute backend-services create example-backend \
    --protocol HTTP \
    --health-checks example-check \
    --global

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/backendServices/example-backend].
NAME             BACKENDS  PROTOCOL
example-backend            HTTP
Associate an instance group with a backend service
We will now add an instance group to the backend service we created earlier.
The smallest unit that can be registered to a backend service is an instance group.
gcloud compute backend-services add-backend example-backend \
    --balancing-mode UTILIZATION \
    --max-utilization 0.8 \
    --capacity-scaler 1 \
    --instance-group example-instance-group \
    --instance-group-zone us-east1-b \
    --global

Updated [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/backendServices/example-backend].
To balance traffic across the instance groups within a backend service, GCP provides two balancing modes:
UTILIZATION
This mode distributes load based on backend utilization. It's recommended if you want to make efficient use of your machine resources. The --max-utilization value sets the point at which a group is considered at capacity; for example, 0.8 means the load balancer treats the group as full once average CPU utilization exceeds 80% and directs overflow traffic elsewhere.
RATE
This mode uses requests per second (RPS) as the basis for load balancing, capped with --max-rate or --max-rate-per-instance. It's recommended if you want to stabilize your service response.
Note that autoscaling itself is a separate feature of managed instance groups, configured with gcloud compute instance-groups managed set-autoscaling (that's where options such as --target-load-balancing-utilization and --max-num-replicas live), so it doesn't apply to the unmanaged instance group used here.
Creating a URL Map
GCP has two types of load balancing mechanisms: TCP-based and content-based.
The GCP documentation also mentions cross-region load balancing, but its configuration is described as an advanced version of content-based load balancing, so I'll omit the details.
In the content-based case, this URL map is what is displayed as a load balancer on the load balancing page in the GCP console
As the name suggests, URL maps route to multiple backend services based on URL patterns.
This feature allows you to split a web service into microservices by breaking its functionality into finer-grained backends.
gcloud compute url-maps create example-url-map \
    --default-service example-backend

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/urlMaps/example-url-map].
NAME             DEFAULT_SERVICE
example-url-map  backendServices/example-backend
In this case, the URL map only has a single backend service, so there is no routing description.
To route in a URL map, you create a path matcher, an object that maps URL paths to backend services, and add it to the URL map.
Path matchers are a bit outside the scope of this article, so we won't explain them here.
If you're interested, please refer to the official documentation.
Creating a Target Proxy
The target proxy is responsible for associating URL maps with SSL certificates.
gcloud compute target-https-proxies create example-https-proxy \
    --url-map example-url-map \
    --ssl-certificates test-example-ssl

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/targetHttpsProxies/example-https-proxy].
NAME                 SSL_CERTIFICATES  URL_MAP
example-https-proxy  test-example-ssl  example-url-map
Creating an IP Address
Next, create an IP address to use with the load balancer
gcloud compute addresses create example-ip \
    --ip-version=IPV4 \
    --global

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/addresses/example-ip].
To find out which IP address you are currently using, use the following command:
gcloud compute addresses list

NAME           ADDRESS/RANGE    TYPE  PURPOSE  NETWORK  REGION           SUBNET  STATUS
default-web01  xxx.xxx.xxx.xxx                          asia-northeast1          IN_USE
example-ip     xxx.xxx.xxx.xxx                                                   RESERVED
Adding a Frontend Service
Finally, create a front-end service that connects the external world to the URL map
gcloud compute forwarding-rules create example-frontend \
    --address xxx.xxx.xxx.xxx \
    --global \
    --target-https-proxy example-https-proxy \
    --ports 443

Created [https://www.googleapis.com/compute/v1/projects/hogehoge-xxxxxx/global/forwardingRules/example-frontend].
The front-end service associates the global IP, port, and target proxy
You have now successfully applied a free SSL certificate to your load balancer
Summary
I set out to write an article about applying an SSL certificate to a load balancer, but it ended up being mostly about creating the load balancer itself. Put the other way around, applying an SSL certificate to an already-created load balancer is very easy.
There are surprisingly many steps to setting up a load balancer, and at first it can be hard to know what order to create things in, so let's go over the order once more:
- Create an instance group
- Add a port to an instance group
- Create a health check
- Create the backend service
- Add an instance group to a backend service
- Create a URL map
- Create a target proxy
- Create an IP address
- Create a front-end service
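For reference, the ordered steps above can be consolidated into a single script using the names from this article. This is a sketch rather than a tested deployment: it assumes an authenticated gcloud with a default project set, and the RUN_GCLOUD guard (my addition) means nothing runs unless you opt in:

```shell
# Consolidated sketch of the full sequence, using this article's names.
# Set RUN_GCLOUD=1 to actually create the resources.
msg="dry run only; set RUN_GCLOUD=1 to execute"
if [ "${RUN_GCLOUD:-0}" = "1" ]; then
  gcloud beta compute ssl-certificates create test-example-ssl \
      --domains test.example.com
  gcloud compute instance-groups unmanaged create example-instance-group \
      --zone us-east1-b
  gcloud compute instance-groups unmanaged add-instances example-instance-group \
      --instances example-web01 --zone us-east1-b
  gcloud compute instance-groups unmanaged set-named-ports example-instance-group \
      --named-ports http:80 --zone us-east1-b
  gcloud compute health-checks create http example-check --port 80
  gcloud compute backend-services create example-backend \
      --protocol HTTP --health-checks example-check --global
  gcloud compute backend-services add-backend example-backend \
      --balancing-mode UTILIZATION --max-utilization 0.8 --capacity-scaler 1 \
      --instance-group example-instance-group \
      --instance-group-zone us-east1-b --global
  gcloud compute url-maps create example-url-map \
      --default-service example-backend
  gcloud compute target-https-proxies create example-https-proxy \
      --url-map example-url-map --ssl-certificates test-example-ssl
  gcloud compute addresses create example-ip --ip-version=IPV4 --global
  # reuse the reserved address for the forwarding rule
  gcloud compute forwarding-rules create example-frontend \
      --address "$(gcloud compute addresses describe example-ip \
          --global --format='value(address)')" \
      --global --target-https-proxy example-https-proxy --ports 443
else
  echo "$msg"
fi
```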
Add a step to create the SSL certificate as well; anywhere before creating the target proxy seems to be fine.
Also, SSL certificates aren't usable immediately after creation; provisioning takes some time, so it's best to allow plenty of lead time.
That's all
