How to use Terraform with multiple AWS profiles

When collaborating across different organizations or simply across different teams, environment configuration values can differ. This post shows how the Terraform configuration can be customized to align personal and target environments.

Scenario

My laptop is configured so that, by default, the AWS CLI and SDKs point to my own account and the Stockholm region (eu-north-1).

While working on my customer’s project, I need Terraform to correctly point at their infrastructure hosted in their account in the Irish region (eu-west-1).

As most people do, my customer’s developers have their machines configured so that, by default, the AWS SDKs target their company’s development environment.

This creates a clear divergence between how my environment is configured and how it is expected to be configured to properly use the needed tools.

Furthermore, Terraform requires the user to specify where the current state of the application is stored and which locking mechanism is used. In my customer’s case, the state is stored in an S3 bucket and a DynamoDB table is used as the locking mechanism. In Terraform lingo, this is usually referred to as the backend.

Here is how the backend is configured.

terraform {
    backend "s3" {
        key = "terraform.tfstate"
        bucket = "devops-bucket"
        region = "eu-west-1"
        dynamodb_table = "terraform-locks"
        encrypt = true
        workspace_key_prefix = "very_cool/application"
    }
}

The documentation lists all the configuration fields available for the S3 backend.

Next, let’s take a look at how the AWS provider is configured.

provider "aws" {
    region  = var.aws_region
    profile = var.aws_profile
}

variable "aws_profile" {
    type = string
    default = null
}

variable "aws_region" {
    type = string
    default = null
}

Symptoms of the problem

The divergence between my system and those of my customer’s developers means that the Terraform CLI is unable to authenticate itself with the right credentials when it interacts with AWS to check the state of the resources needed by the application.

Specifically, Terraform needs these credentials at two points:

  • When initializing the current working directory using the command terraform init
  • When working with the resources using commands like terraform plan and terraform apply

If I were to attempt to initialize Terraform without any special configuration, I’d get the following

$ terraform init
Initializing modules...

Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
╵

Once initialized, if I were to execute the plan command without any special configuration, I’d get the following

$ terraform plan -out tfplan
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.
│
│
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on backend.tf line 12, in provider "aws":
│   12: provider "aws" {
│
╵
Releasing state lock. This may take a few moments...

We will see how to solve each of these two cases.

Solution requirements

For a better understanding, let’s state clearly the requirements we want to meet.

  • The solution should be transparent to my customer’s developers
  • The solution should be as natural as possible

In addition, basic security requirements should also be met. Specifically, no credentials should be saved anywhere near the Terraform application directory.

Configuring AWS

Let’s start by configuring the AWS credentials to connect to my customer’s account.

To make sure I can quickly access my customer’s account without compromising the safety of the keys I was given, I have a profile named CUSTOMER saved in the ~/.aws/credentials file.

[CUSTOMER]
aws_access_key_id=AKIA1234567890
aws_secret_access_key=u3nyn5nz9ams38yvekr2

Similarly, I added an entry in the ~/.aws/config file to specify the default region.

[profile CUSTOMER]
region = eu-west-1
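
Rather than editing these files by hand, the same entries can be created with the AWS CLI itself: aws configure with the --profile flag prompts for the key pair, the default region, and the output format, and writes them to the two files shown above. The interactive session looks roughly like this, reusing the values from this post.

$ aws configure --profile CUSTOMER
AWS Access Key ID [None]: AKIA1234567890
AWS Secret Access Key [None]: u3nyn5nz9ams38yvekr2
Default region name [None]: eu-west-1
Default output format [None]: json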

With these two files in place, I can target my customer’s environment from any AWS SDK or CLI simply by specifying the profile to use.

$ aws lambda list-functions --profile CUSTOMER

Quick and dirty solution

Once the profile is configured, we can leverage the environment variables AWS_PROFILE and AWS_REGION to help Terraform fetch the correct credentials.
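
For example, exporting both variables in the current shell session (bash syntax shown here) makes every subsequent Terraform command pick up the customer profile:

$ export AWS_PROFILE=CUSTOMER
$ export AWS_REGION=eu-west-1
$ terraform init
$ terraform plan -out tfplan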

While it works, this solution is not optimal because it pollutes my own environment, preventing me from working with other AWS accounts or requiring me to continuously switch profiles by changing the value of the environment variables.

Initializing the directory

Before working on an application, it’s necessary to initialize the working directory so that all needed components are downloaded and set up.

The init command takes care of this step.

In this scenario, the command has the responsibility to:

  • Initialize the backend by connecting to the specified S3 bucket and DynamoDB table
  • Download the AWS provider
  • Download any additional module

To perform the first step, the Terraform CLI needs to authenticate itself with AWS using the correct pair of keys and point to the correct region.

Normally, the CLI would find all the required settings in the backend block, but it also offers the possibility to override or extend that configuration by providing additional values as arguments.

In our scenario, we will supply the profile key with the name of the profile that stores my customer’s credentials.

$ terraform init -backend-config "profile=CUSTOMER"

Once we execute the command, we are greeted with a success message.

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/archive from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/archive v2.2.0...
- Installed hashicorp/archive v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v4.14.0...
- Installed hashicorp/aws v4.14.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

This means that we are ready to work with our application.

Once the working directory has been initialized for the first time, the backend settings are stored locally and we can omit the backend configuration parameter on subsequent runs. This is important because the init command also needs to be re-executed whenever new modules or providers are added.
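
As an aside, -backend-config also accepts the path to a file containing key/value pairs, which is handy when more than one setting needs to be overridden. The file name below is just an example, not part of the original setup; like the variable files discussed later, it should be kept out of version control.

# customer.s3.tfbackend
profile = "CUSTOMER"

$ terraform init -backend-config=customer.s3.tfbackend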

Working with Terraform

Once the backend is initialized, it’s time to make sure that Terraform can access the AWS account when performing operations like creating an execution plan. We have a few options available to us.

The first option is to use environment variables to specify the AWS profile and region we intend to use.

$ AWS_PROFILE=CUSTOMER AWS_REGION=eu-west-1 terraform plan -out tfplan

Or, when using PowerShell,

> $env:AWS_PROFILE="CUSTOMER"
> $env:AWS_REGION="eu-west-1"
> terraform plan -out tfplan

The second option available to us is providing the variable values as arguments to the command.

$ terraform plan -out tfplan -var "aws_profile=CUSTOMER" -var "aws_region=eu-west-1"

The obvious downside of this solution is that it’s quite inconvenient to provide the configuration values every time we want to execute a command.

To solve scenarios like this one, Terraform supports variable files, also referred to as tfvars files.

These files are meant to hold variable values that are not supposed to be persisted in version control, and the commonly used Terraform .gitignore templates exclude them by default.

Furthermore, a file named terraform.tfvars is automatically loaded by Terraform without any extra flag. These two qualities make it the perfect solution for our scenario.
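
For reference, this is the relevant excerpt of the widely used Terraform .gitignore template; it’s worth checking that your repository’s .gitignore contains similar entries before committing.

# Exclude variable files that may contain sensitive data
*.tfvars
*.tfvars.json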

By creating a terraform.tfvars file and specifying values for the variables we want to override, we can configure Terraform so that the correct profile and region are used when creating an execution plan.

# terraform.tfvars
aws_profile = "CUSTOMER"
aws_region = "eu-west-1"

Once the file is created, we can use the Terraform CLI without specifying any extra value.

$ terraform plan -out tfplan
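
Once the plan looks right, it can be applied the same way; the saved plan file already records the variable values, so no extra flags are needed here either.

$ terraform apply tfplan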

Recap

In this blog post we have seen how we can leverage Terraform extension points to customize the configuration of the working directory in ways that are transparent to other developers or other applications on the same machine.
