Import existing infrastructure resources into Terraform using Terraformer

My name is Teraoka and I am an infrastructure engineer.

In a previous article, we introduced the terraform import command as a way to import existing resources into Terraform:

How to import existing infrastructure resources in Terraform

As mentioned in the summary of that article,
the import command only rewrites tfstate, so
you have to write the tf files yourself while checking them against tfstate.
With a large number of resources, this takes a tremendous amount of time, which is a problem.
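To recap that manual workflow, the sketch below shows what you have to do by hand for a single VPC (the resource label "main" and the VPC ID are placeholders for illustration):

```hcl
# main.tf — you must write this resource block yourself;
# "terraform import" only populates tfstate, not this file.
resource "aws_vpc" "main" {
  cidr_block = "10.1.0.0/16"
}

# Then, on the command line:
#   terraform import aws_vpc.main vpc-0123456789abcdef0
#   terraform plan   # iterate on main.tf until there is no diff
```

Repeating this write-import-plan loop for every resource is exactly the tedium Terraformer aims to remove.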

Here is some good news:
a tool called Terraformer has been released as OSS.

https://github.com/GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code

As this description says, it is a CLI tool that automatically generates Terraform files from existing infrastructure.
Usage is also documented on GitHub, so let's try it out.

install

For Mac, you can install it using the brew command.

$ brew install terraformer
$ terraformer version
Terraformer v0.8.7

So far it's easy.
This time we will use v0.8.7.

Infrastructure configuration

I created the infrastructure to be imported using terraformer in advance.

https://github.com/beyond-teraoka/terraform-aws-multi-environment-sample

Configuration diagram

I have 3 environments in the same AWS account.

  1. develop
  2. production
  3. manage

Additionally, each environment has the following resources:

  1. VPC
  2. Subnet
  3. Route Table
  4. Internet Gateway
  5. NAT Gateway
  6. Security Group
  7. VPC Peering
  8. EIP
  9. EC2
  10. ALB
  11. RDS

Although it is not shown in the diagram, every resource in each environment has an Environment tag,
with the values dev, prod, and mng respectively.

Prepare credentials

Prepare your AWS credentials.
Please prepare this according to your environment.

$ cat /Users/yuki.teraoka/.aws/credentials
[beyond-poc]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[beyond-poc-admin]
role_arn = arn:aws:iam::XXXXXXXXXXXX:role/XXXXXXXXXXXXXXXXXXXX
source_profile = beyond-poc

Running terraformer

First, try executing the example written on GitHub.

$ terraformer import aws --resources=alb,ec2_instance,eip,ebs,igw,nat,rds,route_table,sg,subnet,vpc,vpc_peering --regions=ap-northeast-1 --profile=beyond-poc-admin
2020/05/05 18:29:14 aws importing region ap-northeast-1
2020/05/05 18:29:14 aws importing... vpc
2020/05/05 18:29:15 open /Users/yuki.teraoka/.terraform.d/plugins/darwin_amd64: no such file or directory

It seems the plugin directory created by terraform init is required.
Prepare an init.tf and run init.

$ echo 'provider "aws" {}' > init.tf
$ terraform init

I'll try running it again.

$ terraformer import aws --resources=alb,ec2_instance,eip,ebs,igw,nat,rds,route_table,sg,subnet,vpc,vpc_peering --regions=ap-northeast-1 --profile=beyond-poc-admin

It seems that the import was successful.
A directory called generated, which did not exist until now, has been created.

directory structure

$ tree
.
└── aws
    ├── alb
    │   ├── lb.tf
    │   ├── lb_listener.tf
    │   ├── lb_target_group.tf
    │   ├── lb_target_group_attachment.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── terraform.tfstate
    │   └── variables.tf
    ├── ebs
    │   ├── ebs_volume.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   └── terraform.tfstate
    ├── ec2_instance
    │   ├── instance.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── terraform.tfstate
    │   └── variables.tf
    ├── eip
    │   ├── eip.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   └── terraform.tfstate
    ├── igw
    │   ├── internet_gateway.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── terraform.tfstate
    │   └── variables.tf
    ├── nat
    │   ├── nat_gateway.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   └── terraform.tfstate
    ├── rds
    │   ├── db_instance.tf
    │   ├── db_parameter_group.tf
    │   ├── db_subnet_group.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── terraform.tfstate
    │   └── variables.tf
    ├── route_table
    │   ├── main_route_table_association.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── route_table.tf
    │   ├── route_table_association.tf
    │   ├── terraform.tfstate
    │   └── variables.tf
    ├── sg
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── security_group.tf
    │   ├── security_group_rule.tf
    │   ├── terraform.tfstate
    │   └── variables.tf
    ├── subnet
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── subnet.tf
    │   ├── terraform.tfstate
    │   └── variables.tf
    ├── vpc
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── terraform.tfstate
    │   └── vpc.tf
    └── vpc_peering
        ├── outputs.tf
        ├── provider.tf
        ├── terraform.tfstate
        └── vpc_peering_connection.tf

13 directories, 63 files

Note the directory structure.
Terraformer imports into the default structure "{output}/{provider}/{service}/{resource}.tf".
This is also documented on GitHub.

Terraformer by default separates each resource into a file, which is put into a given service directory.

The default path for resource files is {output}/{provider}/{service}/{resource}.tf and can vary for each provider.

This structure presents the following problems:

  1. Since tfstate is divided for each Terraform resource, it is necessary to apply multiple times even for small changes.
  2. Resources from all environments are recorded in the same tfstate, so changes in one environment can affect all environments.

If possible, I would like to split tfstate by environment (develop, production, manage)
so that all of an environment's resources are recorded in a single tfstate.
When I investigated whether this is possible, I found the following:

  1. The output directory hierarchy can be specified explicitly with the --path-pattern option.
  2. The --filter option limits the import to resources that have a specified tag.

It seems possible to achieve this by combining these two, so let's give it a try.

$ terraformer import aws --resources=alb,ec2_instance,eip,ebs,igw,nat,rds,route_table,sg,subnet,vpc,vpc_peering --regions=ap-northeast-1 --profile=beyond-poc-admin --path-pattern {output}/{provider}/develop/ --filter="Name=tags.Environment;Value=dev"
$ terraformer import aws --resources=alb,ec2_instance,eip,ebs,igw,nat,rds,route_table,sg,subnet,vpc,vpc_peering --regions=ap-northeast-1 --profile=beyond-poc-admin --path-pattern {output}/{provider}/production/ --filter="Name=tags.Environment;Value=prod"
$ terraformer import aws --resources=ec2_instance,eip,ebs,igw,route_table,sg,subnet,vpc,vpc_peering --regions=ap-northeast-1 --profile=beyond-poc-admin --path-pattern {output}/{provider}/manage/ --filter="Name=tags.Environment;Value=mng"

The directory structure after importing is as follows.

directory structure

$ tree
.
└── aws
    ├── develop
    │   ├── db_instance.tf
    │   ├── db_parameter_group.tf
    │   ├── db_subnet_group.tf
    │   ├── eip.tf
    │   ├── instance.tf
    │   ├── internet_gateway.tf
    │   ├── lb.tf
    │   ├── lb_target_group.tf
    │   ├── nat_gateway.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── route_table.tf
    │   ├── security_group.tf
    │   ├── subnet.tf
    │   ├── terraform.tfstate
    │   ├── variables.tf
    │   └── vpc.tf
    ├── manage
    │   ├── instance.tf
    │   ├── internet_gateway.tf
    │   ├── outputs.tf
    │   ├── provider.tf
    │   ├── route_table.tf
    │   ├── security_group.tf
    │   ├── subnet.tf
    │   ├── terraform.tfstate
    │   ├── variables.tf
    │   ├── vpc.tf
    │   └── vpc_peering_connection.tf
    └── production
        ├── db_instance.tf
        ├── db_parameter_group.tf
        ├── db_subnet_group.tf
        ├── eip.tf
        ├── instance.tf
        ├── internet_gateway.tf
        ├── lb.tf
        ├── lb_target_group.tf
        ├── nat_gateway.tf
        ├── outputs.tf
        ├── provider.tf
        ├── route_table.tf
        ├── security_group.tf
        ├── subnet.tf
        ├── terraform.tfstate
        ├── variables.tf
        └── vpc.tf

4 directories, 45 files

Resources are now divided by environment.
Looking at vpc.tf under develop, only the VPC of that environment has been imported.

develop/vpc.tf

resource "aws_vpc" "tfer--vpc-002D-0eea2bc99da0550a6" {
  assign_generated_ipv6_cidr_block = "false"
  cidr_block                       = "10.1.0.0/16"
  enable_classiclink               = "false"
  enable_classiclink_dns_support   = "false"
  enable_dns_hostnames             = "true"
  enable_dns_support               = "true"
  instance_tenancy                 = "default"

  tags = {
    Environment = "dev"
    Name        = "vpc-terraformer-dev"
  }
}

Regarding tfstate, all resources for each environment are recorded in one file, so there seems to be no problem here.
The content is long so I will omit it.

Points of concern

Resource ID values are hard-coded

There are multiple places where resource ID values are hard-coded, such as vpc_id in the aws_security_group below.

resource "aws_security_group" "tfer--alb-002D-dev-002D-sg_sg-002D-00d3679a2f3309565" {
  description = "for ALB"

  egress {
    cidr_blocks = ["0.0.0.0/0"]
    description = "Outbound ALL"
    from_port   = "0"
    protocol    = "-1"
    self        = "false"
    to_port     = "0"
  }

  ingress {
    cidr_blocks = ["0.0.0.0/0"]
    description = "allow_http_for_alb"
    from_port   = "80"
    protocol    = "tcp"
    self        = "false"
    to_port     = "80"
  }

  name = "alb-dev-sg"

  tags = {
    Environment = "dev"
    Name        = "alb-dev-sg"
  }

  vpc_id = "vpc-0eea2bc99da0550a6"
}

When writing new HCL by hand you would reference the value dynamically, like "vpc_id = aws_vpc.vpc.id", but
it seems this is still difficult to do automatically at import time.
The VPC ID is already recorded in tfstate, so you only need to modify the tf file.
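A minimal sketch of that hand edit, using the resource names from the generated files above (only the changed attribute is shown; the other arguments stay as generated):

```hcl
resource "aws_security_group" "tfer--alb-002D-dev-002D-sg_sg-002D-00d3679a2f3309565" {
  name = "alb-dev-sg"

  # Before (as generated): vpc_id = "vpc-0eea2bc99da0550a6"
  # After: reference the imported VPC resource in the same state
  vpc_id = aws_vpc.tfer--vpc-002D-0eea2bc99da0550a6.id

  # ...egress/ingress/tags unchanged...
}
```

Running terraform plan after an edit like this should show no diff, since the attribute resolves to the same ID already recorded in tfstate.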

The description of terraform_remote_state is not compatible with 0.12

There are places where the HCL is written for the Terraform 0.11 series, such as vpc_id in the aws_subnet below.

resource "aws_subnet" "tfer--subnet-002D-02f90c599d4c887d3" {
  assign_ipv6_address_on_creation = "false"
  cidr_block                      = "10.1.2.0/24"
  map_public_ip_on_launch         = "true"

  tags = {
    Environment = "dev"
    Name        = "subnet-terraformer-dev-public-1c"
  }

  vpc_id = "${data.terraform_remote_state.local.outputs.aws_vpc_tfer--vpc-002D-0eea2bc99da0550a6_id}"
}

If you apply this as-is with the 0.12 series, it runs, but a warning is raised.

Also, even when the --path-pattern option is used to separate directories by environment,
tfstate is output as a single file per directory, yet
references between resources in the tf files still go through terraform_remote_state.
GitHub has the following description, so this is by design.

Connect between resources with terraform_remote_state (local and bucket).

In the case of the vpc_id above, aws_vpc and aws_subnet are recorded in the same tfstate, so it
can be referenced simply as "vpc_id = aws_vpc.tfer--vpc-002D-0eea2bc99da0550a6.id".
It looks like you have to fix this part yourself.
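Concretely, the edited subnet would look like this (a sketch using the resource names shown above; unrelated attributes omitted):

```hcl
resource "aws_subnet" "tfer--subnet-002D-02f90c599d4c887d3" {
  cidr_block = "10.1.2.0/24"

  # 0.12-style direct reference within the same state, replacing
  # the generated terraform_remote_state interpolation
  vpc_id = aws_vpc.tfer--vpc-002D-0eea2bc99da0550a6.id
}
```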

summary

Importing existing infrastructure becomes much easier with Terraformer.
There are some points of concern, but they are not difficult to fix, so I think they are within an acceptable range.
I have always thought that importing into Terraform is hard, so I
honestly respect the people at "Waze SRE" who built a tool that solves this problem so well.
Please give it a try.

The person who wrote this article

About the author

Yuki Teraoka

Joined Beyond in 2016 and is currently in his 6th year as an infrastructure engineer in the
MSP division, where he troubleshoots failures and
designs and builds infrastructure on public clouds such as AWS.
Recently he has been working with HashiCorp tools such as Terraform and Packer
to build container infrastructure with Docker and Kubernetes and to automate operations, and he
also acts as an evangelist, speaking at external study groups and seminars.

・GitHub
https://github.com/nezumisannn

・Presentation history
https://github.com/nezumisannn/my-profile

・Presentation materials (SpeakerDeck)
https://speakerdeck.com/nezumisannn

・Certification:
AWS Certified Solutions Architect - Associate
Google Cloud Professional Cloud Architect