Build a CI/CD pipeline with GitOps on Terraform Cloud

My name is Teraoka and I am an infrastructure engineer.
As the title suggests, this article is about Terraform Cloud.
I would like to summarize it step by step, from an overview to how to actually use it.

Terraform execution environment

What environment do you use when running Terraform?
The most basic setup is probably each engineer's local environment: download the binary from
the download page ( *1 ) and run terraform apply directly on your local machine.
This is fine when one person is testing, but actual construction work is often handled by multiple people.
If each individual runs Terraform in their own local environment, the following problems can occur.

Problem

  • If someone forgets to push to Git, each individual's local code can drift apart.
  • The tfstate file cannot be shared (see the illustration after this list)
  • There is no mechanism to review the written code
  • Anyone can apply freely
  • Access keys to cloud platforms must be managed locally
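For example, the second problem stems from Terraform's default local backend: the state file is written next to the code, on the machine of whoever ran apply last. A minimal illustration:

$ terraform apply   # state is written to the current directory
$ ls
main.tf  terraform.tfstate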

These problems can be solved by using Terraform Cloud.

What is Terraform Cloud?

Quoting part of Terraform's official website ( *2 ):

Terraform Cloud is an application that helps teams use Terraform together. It manages Terraform runs in a consistent and reliable environment, and includes easy access to shared state and secret data, access controls for approving changes to infrastructure, a private registry for sharing Terraform modules, detailed policy controls for governing the contents of Terraform configurations, and more.

In summary, Terraform Cloud is an application that helps teams use Terraform together:
a SaaS that provides the features teams need when using Terraform, such as the following.

Main features

  • Consistent and reliable Terraform execution environment
  • Sharing secrets such as state files (tfstate) and access keys
  • Access control to approve changes to infrastructure
  • Private registry for sharing Terraform modules
  • Policy controls for managing Terraform configuration content

Terraform Cloud is basically free to use, but some features are only available on a paid plan.
Decide which features your team needs and choose the right plan.
Features and prices for each plan are summarized on the pricing page ( *3 ).

All of the ways of using Terraform Cloud described below can be configured in the free tier, so
it's a good idea to start there and switch to a paid plan later if you find features missing.

Configuration diagram

Let's summarize the actors and the workflow shown in the configuration diagram.

Actors

Team A

This is the team whose members develop the system used in Project A (hereinafter "PRJ A").
They build on the AWS account for PRJ A using the Terraform module written by the SREs described below.

Team B

This is the team whose members develop the system used in Project B (hereinafter "PRJ B").
They build on PRJ B's AWS account using the Terraform module written by the SREs described below.

SREs

This is the team the SREs (Site Reliability Engineers) belong to; its purpose is to assist the other teams in their system development.
Given the role, it could also be called a Platform Team.
The SREs write the Terraform modules that Team A and Team B use in their PRJs.
They are also responsible for managing the settings of Terraform Cloud itself, such as the WorkSpaces described later.

GitLab

Manages the Terraform code written by Team A, Team B, and the SREs.
In Terraform Cloud, a service that manages source code like this is called a VCS Provider.
This time I'm using GitLab Community Edition.
Of course, Enterprise Edition and GitHub are also supported. ( *4 )

Repository Module VPC

This repository manages the Terraform module written by the SREs for building AWS VPCs.
In Terraform Cloud, a repository in the VCS Provider is called a VCS Repository.

Repository PRJ A

This is a repository that manages the Terraform code written by Team A for PRJ A.
The overview is the same as Repository Module VPC.

Repository PRJ B

This is a repository that manages the Terraform code written by Team B for PRJ B.
The overview is the same as Repository Module VPC.

WorkSpace PRJ A

This is the WorkSpace for PRJ A in Terraform Cloud.
A WorkSpace is a logical grouping for dividing the configuration described in Terraform code
into meaningful units, such as per PRJ or per service. ( *5 )

WorkSpace PRJ B

This is the WorkSpace for PRJ B in Terraform Cloud.
The overview is the same as WorkSpace PRJ A.

Private Module Registry

This is a private registry that provides almost the same functionality as the public Terraform Registry ( *6 ).
The Terraform modules written by the SREs are managed here.

AWS Cloud (PRJ A)

This is an AWS account for PRJ A.
Team A will continue to build on this account.

AWS Cloud (PRJ B)

AWS account for PRJ B.
Team B will continue to build on this account.

Workflow

The person or tool that performs each step is shown in parentheses.

1. Push Module Code (SREs)

The SREs write a Terraform module locally and push it to Git.
By modularizing the code, the scope of work can be divided appropriately between the SREs and the teams.

2. Import Module Code (SREs)

Import the pushed module into Terraform Cloud's Private Module Registry.

3. Create WorkSpace (SREs)

Create a Terraform Cloud WorkSpace for each team to use.

4. Push PRJ Infra Code (Team A or Team B)

Each team pushes the Terraform code written using the module written by SREs to Git.

5. VCS Provider and Automatic Plan (Terraform Cloud)

Terraform Cloud detects the Git push event and automatically runs terraform plan against the pushed code.
This mechanism gives you the benefit of GitOps: CI/CD flows are executed automatically based on changes to Git.

6. Code Review and Approve (Team A or Team B)

Once terraform plan completes, Terraform Cloud waits for Apply.
Before the settings are applied, review the plan results and
approve the Apply if the changes are as intended.

7. Apply (Terraform Cloud)

The changes will actually be reflected in the target environment.

8. Notification (Terraform Cloud)

Notifies Slack etc. when processing is executed in Terraform Cloud.

Review implementation and workflow


Now I would like to check everything, starting from the SREs pushing a Terraform module.
The steps below assume that a Terraform Cloud account and organization have already been created.
Please prepare them in advance from the sign-up page ( *7 ).

Writing the Terraform module (SREs)

This time, I will write a module for building a VPC.
The directory hierarchy is as follows.

$ tree .
.
├── README.md
├── examples
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       ├── provider.tf
│       ├── terraform.tfstate
│       ├── terraform.tfstate.backup
│       ├── terraform.tfvars
│       └── variables.tf
├── main.tf
├── outputs.tf
└── variables.tf

2 directories, 11 files

The three files at the root of the directory are the module itself.
Since the module will be loaded from code written by Team A or Team B, it is best to
clearly state its overview and specifications in README.md and to
leave a concrete usage example as code under examples, which makes it easy to pick up later.
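As a sketch, the README.md might record just enough for the module's consumers; the contents below are illustrative, not the actual file:

# terraform-aws-module-vpc

Terraform module that creates a VPC, public/DMZ/private subnets,
route tables, an Internet Gateway and a NAT Gateway.

## Usage

See examples/vpc for a working configuration. Inputs and outputs are
documented in variables.tf and outputs.tf.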

main.tf describes the resources that create the VPC.

main.tf

resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_config.cidr_block
  enable_dns_support   = var.vpc_config.enable_dns_support
  enable_dns_hostnames = var.vpc_config.enable_dns_hostnames

  tags = {
    Name = var.vpc_config.name
  }
}

resource "aws_subnet" "public" {
  for_each                = var.public_subnet_config.subnets
  vpc_id                  = aws_vpc.vpc.id
  availability_zone       = each.key
  cidr_block              = each.value
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.public_subnet_config.name}-${substr(each.key, -2, 2)}"
  }
}

resource "aws_subnet" "dmz" {
  for_each                = var.dmz_subnet_config.subnets
  vpc_id                  = aws_vpc.vpc.id
  availability_zone       = each.key
  cidr_block              = each.value
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.dmz_subnet_config.name}-${substr(each.key, -2, 2)}"
  }
}

resource "aws_subnet" "private" {
  for_each                = var.private_subnet_config.subnets
  vpc_id                  = aws_vpc.vpc.id
  availability_zone       = each.key
  cidr_block              = each.value
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.private_subnet_config.name}-${substr(each.key, -2, 2)}"
  }
}

resource "aws_route_table" "public" {
  count  = var.public_subnet_config.route_table_name != "" ? 1 : 0
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = var.public_subnet_config.route_table_name
  }
}

resource "aws_route_table" "dmz" {
  count  = var.dmz_subnet_config.route_table_name != "" ? 1 : 0
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = var.dmz_subnet_config.route_table_name
  }
}

resource "aws_route_table" "private" {
  count  = var.private_subnet_config.route_table_name != "" ? 1 : 0
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = var.private_subnet_config.route_table_name
  }
}

resource "aws_internet_gateway" "igw" {
  count  = var.public_subnet_config.internet_gateway_name != "" ? 1 : 0
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = var.public_subnet_config.internet_gateway_name
  }
}

resource "aws_route" "public" {
  count                  = var.public_subnet_config.route_table_name != "" ? 1 : 0
  route_table_id         = aws_route_table.public[0].id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw[0].id
  depends_on             = [aws_route_table.public]
}

resource "aws_route" "dmz" {
  count                  = var.dmz_subnet_config.route_table_name != "" ? 1 : 0
  destination_cidr_block = "0.0.0.0/0"
  route_table_id         = aws_route_table.dmz[0].id
  nat_gateway_id         = aws_nat_gateway.natgw[0].id
  depends_on             = [aws_route_table.dmz]
}

resource "aws_route_table_association" "public" {
  for_each       = aws_subnet.public
  subnet_id      = each.value.id
  route_table_id = aws_route_table.public[0].id
}

resource "aws_route_table_association" "dmz" {
  for_each       = aws_subnet.dmz
  subnet_id      = each.value.id
  route_table_id = aws_route_table.dmz[0].id
}

resource "aws_route_table_association" "private" {
  for_each       = aws_subnet.private
  subnet_id      = each.value.id
  route_table_id = aws_route_table.private[0].id
}

resource "aws_eip" "natgw" {
  count = var.dmz_subnet_config.route_table_name != "" ? 1 : 0
  vpc   = true

  tags = {
    Name = var.dmz_subnet_config.nat_gateway_name
  }
}

resource "aws_nat_gateway" "natgw" {
  count         = var.dmz_subnet_config.route_table_name != "" ? 1 : 0
  allocation_id = aws_eip.natgw[0].id
  subnet_id     = aws_subnet.public[keys(aws_subnet.public)[0]].id
  depends_on    = [aws_internet_gateway.igw]

  tags = {
    Name = var.dmz_subnet_config.nat_gateway_name
  }
}

In outputs.tf, write outputs that expose information about the resources created by the module.

outputs.tf

output "vpc" {
  value = aws_vpc.vpc
}

output "public_subnet" {
  value = aws_subnet.public
}

output "dmz_subnet" {
  value = aws_subnet.dmz
}

output "private_subnet" {
  value = aws_subnet.private
}

variables.tf describes the structure of the variables the module receives.
Be sure to write a description and a default value for each variable.
The reason will be explained later.

variables.tf

variable "vpc_config" {
  description = "VPC Config"
  type = object({
    name                 = string
    cidr_block           = string
    enable_dns_support   = bool
    enable_dns_hostnames = bool
  })
  default = {
    name                 = ""
    cidr_block           = ""
    enable_dns_support   = false
    enable_dns_hostnames = false
  }
}

variable "public_subnet_config" {
  description = "Subnet Config for Public"
  type = object({
    name                  = string
    route_table_name      = string
    internet_gateway_name = string
    subnets               = map(string)
  })
  default = {
    name                  = ""
    route_table_name      = ""
    internet_gateway_name = ""
    subnets               = {}
  }
}

variable "dmz_subnet_config" {
  description = "Subnet Config for DMZ"
  type = object({
    name             = string
    route_table_name = string
    nat_gateway_name = string
    subnets          = map(string)
  })
  default = {
    name             = ""
    route_table_name = ""
    nat_gateway_name = ""
    subnets          = {}
  }
}

variable "private_subnet_config" {
  description = "Subnet Config for Private"
  type = object({
    name             = string
    route_table_name = string
    subnets          = map(string)
  })
  default = {
    name             = ""
    route_table_name = ""
    subnets          = {}
  }
}

Under examples, we leave a concrete usage example as code.
This covers how to load the module and how to pass variables.
AWS access keys etc. are loaded from environment variables instead of terraform.tfvars.
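Terraform automatically picks up environment variables prefixed with TF_VAR_, so secrets never need to be written to disk. The values below are placeholders:

$ export TF_VAR_access_key="AKIAXXXXXXXXXXXXXXXX"                     # placeholder access key
$ export TF_VAR_secret_key="xxxxxxxx"                                 # placeholder secret key
$ export TF_VAR_role_arn="arn:aws:iam::123456789012:role/terraform"   # placeholder role ARN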

examples/provider.tf

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region

  assume_role {
    role_arn = var.role_arn
  }
}

examples/variables.tf

variable "project" {
  description = "Project Name"
}

variable "environment" {
  description = "Environment"
}

variable "access_key" {
  description = "AWS Access Key"
}

variable "secret_key" {
  description = "AWS Secret Key"
}

variable "role_arn" {
  description = "AWS Role ARN for Assume Role"
}

variable "region" {
  description = "AWS Region"
}

examples/terraform.tfvars

#####################
# Project
#####################
project     = "terraform-vpc-module"
environment = "local"
region      = "ap-northeast-1"

examples/main.tf

module "vpc" {
  source = "../../"

  vpc_config = {
    name                 = "vpc-${var.project}-${var.environment}"
    cidr_block           = "10.0.0.0/16"
    enable_dns_support   = true
    enable_dns_hostnames = true
  }

  public_subnet_config = {
    name                  = "subnet-${var.project}-${var.environment}-public"
    route_table_name      = "route-${var.project}-${var.environment}-public"
    internet_gateway_name = "igw-${var.project}-${var.environment}"
    subnets = {
      ap-northeast-1a = "10.0.10.0/24"
      ap-northeast-1c = "10.0.11.0/24"
      ap-northeast-1d = "10.0.12.0/24"
    }
  }

  dmz_subnet_config = {
    name             = "subnet-${var.project}-${var.environment}-dmz"
    route_table_name = "route-${var.project}-${var.environment}-dmz"
    nat_gateway_name = "nat-${var.project}-${var.environment}"
    subnets = {
      ap-northeast-1a = "10.0.20.0/24"
      ap-northeast-1c = "10.0.21.0/24"
      ap-northeast-1d = "10.0.22.0/24"
    }
  }

  private_subnet_config = {
    name             = "subnet-${var.project}-${var.environment}-private"
    route_table_name = "route-${var.project}-${var.environment}-private"
    subnets = {
      ap-northeast-1a = "10.0.30.0/24"
      ap-northeast-1c = "10.0.31.0/24"
      ap-northeast-1d = "10.0.32.0/24"
    }
  }
}

examples/outputs.tf

output "vpc_id" {
  value = module.vpc.vpc.id
}
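Before publishing the module, it is worth confirming that the example actually works. A minimal check, assuming the environment variables above are set to valid credentials:

$ cd examples/vpc
$ terraform init   # download the AWS provider and wire up the local module
$ terraform plan   # show the resources the module would create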

Once you have written this, create a repository on GitLab and push the code to the master branch.
This time, I created a repository in advance with the name terraform-aws-module-vpc.
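A sketch of the initial push; the remote URL is hypothetical:

$ git init
$ git remote add origin git@gitlab.example.com:sre/terraform-aws-module-vpc.git   # hypothetical remote
$ git add .
$ git commit -m "Add VPC module"
$ git push origin master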

Add a tag to the Git commit.
Terraform Cloud versions modules according to this tag, so the tag must look like a semantic version number (x.y.z, optionally prefixed with v).

$ git tag v1.0.0
$ git push origin v1.0.0

Import the module into Terraform Cloud (SREs)

To import Modules from GitLab,
you need to add VCS provider settings to Terraform Cloud.

Since we are using GitLab this time, proceed by referring to the GitLab setup steps ( *8 ).
There are also procedures for other VCS providers, so refer to the applicable one. ( *9 )
Once that is done, the provider appears under Settings > VCS Providers in the Terraform Cloud console.
Modules can then be imported from Settings > Modules > Add module.

The VCS provider you added earlier will be displayed, so select it.

When you select it, VCS Repository will be displayed, so select the repository to which you pushed the module earlier.

Click Publish module on the confirmation screen.

When the publish finishes, you can see that the module's README.md and the Git-tagged version have been loaded.

You can also see a list of the variables that need to be passed to the module.
Because we wrote a description and default value for each variable,
their details can be checked on this screen, which is convenient.

You can also see a list of the resources that will be created when the module is run, which is great.

Create a WorkSpace for PRJ B (SREs)

Create a Terraform Cloud WorkSpace and hand it over to Team B.
You can create one from the Terraform Cloud console by selecting Workspaces > New workspace.

First, I want to create only a WorkSpace and add settings later, so select No VCS connection.

Enter a name for your WorkSpace and click Create workspace.
The name can be in any format, but something like team-name_prj-name_environment (for example, team_b_prj_b_prod) is easier to manage.

Once created, it will be displayed in the list like this.

Create a repository for PRJ B and push the Terraform code (Team B)

Write the Terraform code for PRJ B.
Team B uses the module written in advance by the SREs.

Directory structure

$ tree .
.
├── backend.tf
├── main.tf
├── outputs.tf
├── providers.tf
└── variables.tf

0 directories, 5 files

First of all, backend.tf is important here.
It configures the remote backend so that the state file (tfstate) produced by Terraform runs
is managed on Terraform Cloud, in the WorkSpace created earlier.

backend.tf

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "Org-Name"

    workspaces {
      prefix = "team_b_prj_b_prod"
    }
  }
}
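If you also want to operate this backend from a local machine, recent Terraform versions let you authenticate and initialize as follows (a sketch; the API token can alternatively be placed in a CLI configuration file):

$ terraform login   # obtain and store a Terraform Cloud API token
$ terraform init    # connect this working directory to the remote backend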

providers.tf

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region

  assume_role {
    role_arn = var.role_arn
  }
}

Let's write the variables.
To store a value in a variable you can use either terraform.tfvars or an environment variable, but
this time the values themselves are managed on Terraform Cloud, so neither is prepared locally.

variables.tf

#####################
# Project
#####################
variable "project" {
  description = "Project Name"
}

variable "environment" {
  description = "Environment"
}

#####################
# AWS Common
#####################
variable "access_key" {
  description = "AWS Access Key"
}

variable "secret_key" {
  description = "AWS Secret Key"
}

variable "role_arn" {
  description = "AWS Role ARN for Assume Role"
}

variable "region" {
  description = "AWS Region"
}

main.tf specifies the module imported into the Private Module Registry as its source.

main.tf

module "vpc" {
  source  = "app.terraform.io/Org-Name/module-vpc/aws"
  version = "1.0.0"

  vpc_config = {
    name                 = "vpc-${var.project}-${var.environment}"
    cidr_block           = "10.0.0.0/16"
    enable_dns_support   = true
    enable_dns_hostnames = true
  }

  public_subnet_config = {
    name                  = "subnet-${var.project}-${var.environment}-public"
    route_table_name      = "route-${var.project}-${var.environment}-public"
    internet_gateway_name = "igw-${var.project}-${var.environment}"
    subnets = {
      ap-northeast-1a = "10.0.10.0/24"
      ap-northeast-1c = "10.0.11.0/24"
      ap-northeast-1d = "10.0.12.0/24"
    }
  }

  dmz_subnet_config = {
    name             = "subnet-${var.project}-${var.environment}-dmz"
    route_table_name = "route-${var.project}-${var.environment}-dmz"
    nat_gateway_name = "nat-${var.project}-${var.environment}"
    subnets = {
      ap-northeast-1a = "10.0.20.0/24"
      ap-northeast-1c = "10.0.21.0/24"
      ap-northeast-1d = "10.0.22.0/24"
    }
  }

  private_subnet_config = {
    name             = "subnet-${var.project}-${var.environment}-private"
    route_table_name = "route-${var.project}-${var.environment}-private"
    subnets = {
      ap-northeast-1a = "10.0.30.0/24"
      ap-northeast-1c = "10.0.31.0/24"
      ap-northeast-1d = "10.0.32.0/24"
    }
  }
}

outputs.tf

output "vpc_id" {
  value = module.vpc.vpc.id
}

Once you have written this, let's push it to Git.

Add Variables to the WorkSpace for PRJ B (Team B)

When you select the WorkSpace you created from the list, you will see an item called Variables,
where you can manage the values of the variables used in the WorkSpace.

The distinctive part is that confidential information such as AWS access keys can be stored as a Sensitive Value.
A sensitive value can still be edited, but it is no longer displayed on screen or in API results,
which is very convenient for values you want to keep hidden.
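Variables can also be created programmatically. The sketch below is based on my understanding of the Terraform Cloud Workspace Variables API; the exact endpoint and payload shape, and the $TFC_TOKEN / $WORKSPACE_ID values, are assumptions to verify against the current API documentation:

$ curl --request POST \
    --header "Authorization: Bearer $TFC_TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    --data '{"data":{"type":"vars","attributes":{"key":"secret_key","value":"REDACTED","category":"terraform","sensitive":true}}}' \
    https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/vars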

Change WorkSpace settings for PRJ B (Team B)

Change the following two settings from WorkSpace Settings.

Add Notifications settings for the Slack channel you want to notify

Add the settings by referring to the Terraform Cloud documentation ( *10 ).
Since we will be using a WebHook, set it up on the Slack side in advance.

Add Version Control settings

Register the repository for PRJ B as the VCS Repository to be read by the WorkSpace.

From your WorkSpace screen, go to Settings > Version Control.

Select the VCS Provider that you have registered in advance.

Select the appropriate repository.

Click Update VCS settings.

After a while, the Terraform code will have been loaded from the repository.
Since the variable settings are already done, let's click Queue plan.

At this point the terraform plan is still triggered manually on Terraform Cloud.
Once the plan completes, Apply is not executed automatically; the run waits for approval as shown below.
To execute Apply, check the plan results and approve from Confirm & Apply.

Once approved, Apply will be executed as shown below and the settings will be reflected in the target environment.

The state file (tfstate) is also properly managed on Terraform Cloud.
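Because the state now lives in Terraform Cloud rather than on one person's machine, any team member with access can read it from their own environment; for example, assuming the backend configuration above and a valid API token:

$ terraform init            # connect to the remote backend
$ terraform output vpc_id   # read an output from the shared state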

Since I specified a Slack channel in the Notifications settings, the notifications arrived perfectly.

Check that GitOps works (Team B)

Running it manually confirmed that there were no problems.
This time, we want runs to execute automatically in response to a push to Git.
Let's make a small change to the Terraform code and see how it works:
change part of the names of the VPC's public subnets and push to Git, as sketched below.
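For example, a change like the following to the public subnet name (the new suffix is arbitrary), committed and pushed to the branch the WorkSpace tracks:

  public_subnet_config = {
    name                  = "subnet-${var.project}-${var.environment}-public-v2"   # renamed; was "-public"
    route_table_name      = "route-${var.project}-${var.environment}-public"
    internet_gateway_name = "igw-${var.project}-${var.environment}"
    subnets = {
      ap-northeast-1a = "10.0.10.0/24"
      ap-northeast-1c = "10.0.11.0/24"
      ap-northeast-1d = "10.0.12.0/24"
    }
  }

$ git commit -am "Rename public subnets"
$ git push origin master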

Plans started to be executed automatically in response to Git pushes.

As before, the run stops before Apply, and only the changed parts are output as differences.
There is no problem, so let's approve and Apply.

Apply completed successfully, and there seems to be no problem.

Summary

What did you think?
When using Terraform as a team, you will need to deal with the issues described at the beginning.
Terraform Cloud is equipped with useful functions for exactly this purpose, and strongly supports team use.
Many features are available even in the free tier, so please give it a try.

Reference URL

*1 https://www.terraform.io/downloads.html
*2 https://www.terraform.io/docs/cloud/index.html
*3 https://www.hashicorp.com/products/terraform/pricing/
*4 https://www.terraform.io/docs/cloud/vcs/index.html
*5 https://www.terraform.io/docs/cloud/workspaces/index.html
*6 https://registry.terraform.io
*7 https://app.terraform.io/signup/account
*8 https://www.terraform.io/docs/cloud/vcs/gitlab-eece.html
*9 https://www.terraform.io/docs/cloud/vcs/index.html
*10 https://www.terraform.io/docs/cloud/workspaces/notifications.html#slack

About the author

Yuki Teraoka

Joined Beyond in 2016 and is currently in his 6th year as an infrastructure engineer in the MSP division,
where he troubleshoots failures while
also designing and building infrastructure on public clouds such as AWS.
Recently he has been working with HashiCorp tools such as Terraform and Packer
as part of building container infrastructure such as Docker and Kubernetes and automating operations,
and he also plays the role of an evangelist, speaking at external study groups and seminars.

・GitHub
https://github.com/nezumisannn

・Presentation history
https://github.com/nezumisannn/my-profile

・Presentation materials (SpeakerDeck)
https://speakerdeck.com/nezumisannn

・Certification:
AWS Certified Solutions Architect - Associate
Google Cloud Professional Cloud Architect