Build infrastructure

Security & Compliance

Security scanning is graciously provided by Bridgecrew. Bridgecrew is the leading fully hosted, cloud-native solution providing continuous Terraform security and compliance.

| Benchmark | Description |
|-----------|-------------|
| Infrastructure Security | Infrastructure Security Compliance |
| CIS KUBERNETES | Center for Internet Security, KUBERNETES Compliance |
| CIS AWS | Center for Internet Security, AWS Compliance |
| CIS AZURE | Center for Internet Security, AZURE Compliance |
| PCI-DSS | Payment Card Industry Data Security Standards Compliance |
| NIST-800-53 | National Institute of Standards and Technology Compliance |
| ISO27001 | Information Security Management System, ISO/IEC 27001 Compliance |
| SOC2 | Service Organization Control 2 Compliance |
| CIS GCP | Center for Internet Security, GCP Compliance |
| HIPAA | Health Insurance Portability and Accountability Compliance |

Application Load Balancer

The next step is to set up a Load Balancer. As you may have noticed, the ECS configuration contains a reference to a load balancer that we have not created yet.
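
The snippet itself was not captured here, so below is a minimal sketch of what the Load Balancer resource might look like; the name and the subnet and security group references are placeholders for values from your own VPC setup.

resource "aws_lb" "app" {
  name               = "app-alb"                    # placeholder name
  load_balancer_type = "application"
  internal           = false
  subnets            = aws_subnet.public[*].id      # assumes public subnets are defined elsewhere
  security_groups    = [aws_security_group.lb.id]   # security group created in the next step
}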


Now let’s add a security group for the Load Balancer
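
A sketch of that security group, assuming the Load Balancer should accept HTTP from anywhere and may send traffic anywhere; the name and the vpc_id reference are placeholders:

resource "aws_security_group" "lb" {
  name   = "app-alb-sg"            # placeholder name
  vpc_id = aws_vpc.main.id         # assumes a VPC defined elsewhere

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]    # allow inbound HTTP from anywhere
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]    # allow all outbound traffic
  }
}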


We also need to create a Load Balancer Target Group; it relates the Load Balancer to the containers.
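
A sketch of the target group, assuming Fargate-style IP targets on port 80; the name, the VPC reference, and the health check route are placeholders:

resource "aws_lb_target_group" "app" {
  name        = "app-tg"           # placeholder name
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id    # assumes a VPC defined elsewhere
  target_type = "ip"               # ECS tasks on Fargate register by IP

  health_check {
    path    = "/health"            # placeholder route; see the note below
    matcher = "200"
  }
}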


One very important thing here is the health check path attribute within the target group. This is a route on the application that the Load Balancer calls to check the status of the application.

Finally, let's create an HTTP listener for our Load Balancer.
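
A sketch of the listener, forwarding all HTTP traffic on port 80 to the target group defined above:

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}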


Attaching the policy to the role using Terraform:

This is where we attach the policy we wrote above to the role we created in the first step.

The terraform script:
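
The script itself was not captured here; based on the explanation that follows, it would look roughly like this. The attachment name and the policy resource name (ec2_s3_access_policy) are assumptions:

resource "aws_iam_policy_attachment" "ec2_s3_access_attachment" {
  name       = "ec2-s3-access-attachment"                      # placeholder attachment name
  roles      = ["${aws_iam_role.ec2_s3_access_role.name}"]     # role created in step 1
  policy_arn = "${aws_iam_policy.ec2_s3_access_policy.arn}"    # policy written above (assumed resource name)
}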

The aws_iam_policy_attachment resource in the block above is used to attach a managed IAM policy to user(s), role(s), and/or group(s); in our case, a role. The value for the roles parameter is taken from the resource block we created in step 1.

Value of the role = ${aws_iam_role.ec2_s3_access_role.name}

Explanation:

> aws_iam_role is the type of the resource block which we created in step 1.

> ec2_s3_access_role is the name we gave to that resource.

> name is a property of that resource block.

The same thing applies to the value for policy_arn.

Service

resource "kubernetes_service" "app" {  metadata {    name      = "owncloud-service"    namespace = "fargate-node"  }  spec {    selector = {      app = "owncloud"    }    port {      port        = 80      target_port = 80      protocol    = "TCP"    }    type = "NodePort"  }  depends_on = }

Note: There's a twist here. If you want to access this web app from the public world, three load balancers are available: CLB, NLB, and ALB. You can choose any one of them. You can easily create a CLB or an NLB, but creating an ALB is trickier in this setup. I'll guide you through all of the load balancer options.

If you want to create a CLB, a Service of type LoadBalancer with no special annotations is enough; to create an NLB, add the AWS load balancer type annotation, as in the sketch below:

NLB service
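
The NLB variant was not captured here, but a sketch would differ from the Service above only in its type and one annotation (the Service name is a placeholder):

resource "kubernetes_service" "app_nlb" {
  metadata {
    name      = "owncloud-service-nlb"    # placeholder name
    namespace = "fargate-node"

    annotations = {
      "service.beta.kubernetes.io/aws-load-balancer-type" = "nlb"   # without this annotation you get a CLB
    }
  }

  spec {
    selector = {
      app = "owncloud"
    }

    port {
      port        = 80
      target_port = 80
      protocol    = "TCP"
    }

    type = "LoadBalancer"
  }
}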

Creating an ALB is a bit more complicated. We need an ALB to connect us to any running pod and also to register the available target pods with the ALB. We need an Ingress controller for this.

For the Ingress controller to have access rights to create the ALB and to register target pods on the ALB, we need to create a policy allowing that.

Ingress policy
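
The full policy for the ALB Ingress controller is long and is published with the controller's documentation; the sketch below only hints at its shape, with an abbreviated, non-exhaustive action list and a placeholder name:

resource "aws_iam_policy" "ingress" {
  name = "alb-ingress-controller-policy"   # placeholder name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "elasticloadbalancing:*",   # create and manage the ALB, listeners, and target groups
          "ec2:Describe*",            # discover subnets, security groups, and instances
          "acm:ListCertificates",
          "acm:DescribeCertificate"
        ]
        Resource = "*"
      }
    ]
  })
}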

Now create a role, and attach that policy to it.

Kubernetes Ingress role
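
A sketch of the role and the attachment; how the role is trusted depends on your cluster setup (node instance role vs. IAM roles for service accounts), so the EC2-style trust policy and the names here are assumptions:

resource "aws_iam_role" "ingress" {
  name = "alb-ingress-controller-role"    # placeholder name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Action    = "sts:AssumeRole"
        Principal = { Service = "ec2.amazonaws.com" }   # assumed trust; adjust if you use IRSA
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "ingress" {
  role       = aws_iam_role.ingress.name
  policy_arn = aws_iam_policy.ingress.arn
}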

We also need a cluster role for the Ingress controller, plus a service account that is bound to this cluster role and has the previously created IAM role attached.

Kubernetes cluster role
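
A condensed sketch of the cluster role, the service account annotated with the IAM role, and the binding between them; the names, the namespace, and the abbreviated RBAC rule are assumptions:

resource "kubernetes_cluster_role" "ingress" {
  metadata {
    name = "alb-ingress-controller"   # placeholder name
  }

  rule {
    api_groups = ["", "extensions"]
    resources  = ["services", "endpoints", "pods", "ingresses", "ingresses/status", "nodes"]
    verbs      = ["get", "list", "watch", "update", "patch"]
  }
}

resource "kubernetes_service_account" "ingress" {
  metadata {
    name      = "alb-ingress-controller"
    namespace = "kube-system"

    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.ingress.arn   # assumes IAM roles for service accounts
    }
  }
}

resource "kubernetes_cluster_role_binding" "ingress" {
  metadata {
    name = "alb-ingress-controller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.ingress.metadata[0].name
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.ingress.metadata[0].name
    namespace = "kube-system"
  }
}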

We can now deploy the Ingress controller into our cluster.

Kubernetes Ingress controller

With the Ingress controller deployed, now we can create the ALB for the web app using Kubernetes Ingress.

resource "kubernetes_ingress" "app" {  metadata {    name      = "owncloud-lb"    namespace = "fargate-node"    annotations = {      "kubernetes.io/ingress.class"           = "alb"      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"      "alb.ingress.kubernetes.io/target-type" = "ip"    }    labels = {        "app" = "owncloud"    }  }  spec {      backend {        service_name = "owncloud-service"        service_port = 80      }    rule {      http {        path {          path = "/"          backend {            service_name = "owncloud-service"            service_port = 80          }        }      }    }  }  depends_on = }

»Troubleshooting

If terraform validate was successful and your apply still failed, you may be
encountering one of these common errors.

  • If you use a region other than the one in the example configuration, you
    will also need to change your AMI ID, since AMI IDs are region-specific.
    Choose an AMI ID specific to your region, modify the configuration with
    this ID, and re-run terraform apply.

  • If you do not have a default VPC in your AWS account in the correct region,
    navigate to the AWS VPC Dashboard in the web UI, create a new VPC in
    your region, and associate a subnet and security group with that VPC. Then add the
    security group ID (vpc_security_group_ids) and subnet ID (subnet_id) arguments to
    your aws_instance resource, and replace the values with the ones from your new
    security group and subnet.

    Save the changes to main.tf, and re-run terraform apply.

    Remember to add these lines to your configuration for the rest of the tutorials in this collection.
    For more information, review the "Working with VPCs" documentation from AWS.

Terraform file

As we already know, Terraform is a command-line tool for creating, updating, and versioning infrastructure in the cloud, so naturally we want to know how it does so. Terraform describes infrastructure in files written in the HashiCorp Configuration Language (HCL), with the extension .tf. HCL is a declarative language that describes the desired state of infrastructure in the cloud. When we write our infrastructure in a .tf file, Terraform generates an execution plan that describes what it will do to reach that desired state. Once the execution plan is ready, Terraform executes it and generates a state file, named terraform.tfstate by default. This file maps resource metadata to the actual resource IDs and lets Terraform know what it is managing in the cloud.
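
As a minimal illustration, the .tf file below describes a single EC2 instance; the values mirror the destroy example later in this article, and the region and AMI ID are placeholders for your own. Running terraform plan against it shows what would be created, and terraform apply records the resulting instance ID in terraform.tfstate.

provider "aws" {
  region = "us-east-1"                      # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-00068cd7555f543d5"   # placeholder, region-specific AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "HelloWorld"
  }
}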

Destroy Your Resources in AWS

We have just seen how to launch an EC2 instance in AWS with a Terraform script. Now let's see how to remove or terminate that instance. To remove resources created with Terraform, we use the terraform destroy command. This command removes all the AWS resources that we created with Terraform. Once you execute it, Terraform prompts you to confirm the destruction; type yes and hit enter, and your resources in AWS will be removed.

rootmaniprabu-172-31-37-35:~/devops# terraform destroy
aws_instance.web: Refreshing state... 

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.web will be destroyed
  - resource "aws_instance" "web" {
      - ami                          = "ami-00068cd7555f543d5" -> null
      - arn                          = "arn:aws:ec2:us-east-1:787645912603:instance/i-070936b9abf6110bc" -> null
      - associate_public_ip_address  = true -> null
      - availability_zone            = "us-east-1b" -> null
      - cpu_core_count               = 1 -> null
      - cpu_threads_per_core         = 1 -> null
      - disable_api_termination      = false -> null
      - ebs_optimized                = false -> null
      - get_password_data            = false -> null
      - id                           = "i-070936b9abf6110bc" -> null
      - instance_state               = "running" -> null
      - instance_type                = "t2.micro" -> null
      - ipv6_address_count           = 0 -> null
      - ipv6_addresses               = [] -> null
      - monitoring                   = false -> null
      - primary_network_interface_id = "eni-05bbd42eadb14678b" -> null
      - private_dns                  = "ip-172-31-37-183.ec2.internal" -> null
      - private_ip                   = "172.31.37.183" -> null
      - public_dns                   = "ec2-3-84-90-145.compute-1.amazonaws.com" -> null
      - public_ip                    = "3.84.90.145" -> null
      - security_groups              =  -> null
      - source_dest_check            = true -> null
      - subnet_id                    = "subnet-1ecf9e42" -> null
      - tags                         = {
          - "Name" = "HelloWorld"
        } -> null
      - tenancy                      = "default" -> null
      - volume_tags                  = {} -> null
      - vpc_security_group_ids       =  -> null

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - root_block_device {
          - delete_on_termination = true -> null
          - encrypted             = false -> null
          - iops                  = 100 -> null
          - volume_id             = "vol-03ca5733fc67130fb" -> null
          - volume_size           = 8 -> null
          - volume_type           = "gp2" -> null
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.web: Destroying... 
aws_instance.web: Still destroying... 
aws_instance.web: Still destroying... 
aws_instance.web: Destruction complete after 29s

Destroy complete! Resources: 1 destroyed.

Launching an AWS EC2 instance using Terraform

Let's create a directory and set up Terraform in it. Run the following commands

$ mkdir terraform
$ cd terraform

Now create a configuration file. I am naming it config.tf here. You can pick any name you like, but remember that the extension must be ".tf".

$ vi config.tf

Add the following: the AWS provider, your access key, secret key, and the region in which you are going to launch the EC2 instance. Here I am going to use my favorite region, Singapore.

In the second block of code, define the resource as aws_instance and the ami (I picked a CentOS AMI from <https://wiki.centos.org/Cloud/AWS>). Specify the instance type and also a tag of your choice.

provider "aws" {
access_key = "YOUR-ACCESS-kEY"
secret_key = "YOUR-SECRET-KEY"
region = "ap-southeast-1"
}

resource "aws_instance" "instance1" {
ami = "ami-05930ce55ebfd2930"
instance_type = "t2.micro"
tags = {
Name = "Centos-8-Stream"
}
}

Save and close the file.

Now initialize your configuration by running the Terraform command below

$ terraform init

Once Terraform is initialized, see what will happen by running the command,

$ terraform plan

If everything goes fine, you should see output like the following.

Now apply your Terraform code,

$ terraform apply

Type "yes" and hit enter to confirm.

On successful execution, you should see output like the one shown below:

Log in to your AWS account and go to the EC2 service; you should find an EC2 instance with the tag you defined above.

With Terraform it is quick and easy to provision infrastructure in the cloud. I hope you enjoy the article. If you run into any difficulties, leave us a comment.

Terraform Setup

For running our examples, let us download a binary distribution for our specific operating system and install it locally. This gives us the Terraform command-line interface (CLI), with which we will execute the different Terraform commands. We can check for a successful installation by running terraform version:

This gives the below output on my Mac OS showing the version of Terraform that is installed:

We can view the list of all Terraform commands by running the terraform command without any arguments:

We will use the main commands init, plan, and apply throughout this post.

Since we will be creating resources in AWS, we will also set up the AWS CLI by running aws configure:

When prompted, we provide the AWS access key ID and secret access key and choose a default region and output format:

We are using us-east-1 as the region and JSON as the output format.

For more details about the AWS CLI, have a look at our article on it.

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| allow_cidr_blocks | List of CIDR blocks to permit SSH access | list | | no |
| attributes | Additional attributes | list | | no |
| delimiter | Delimiter to be used between name components | string | | no |
| dns_ttl | The time for which a DNS resolver caches a response | string | | no |
| ec2_ami | By default it is an AMI provided by Amazon with Ubuntu 16.04 | string | | no |
| github_api_token | GitHub API token | string | | yes |
| github_organization | GitHub organization name | string | | yes |
| github_team | GitHub team | string | | yes |
| instance_type | The type of instance that will be created | string | | no |
| namespace | Namespace | string | | yes |
| security_groups | List of Security Group IDs permitted to connect to this instance | list | | no |
| ssh_key_pair | SSH key pair to be provisioned on the instance | string | | yes |
| stage | Stage | string | | yes |
| subnets | List of VPC Subnet IDs where the instance may be launched | list | | yes |
| tags | Additional tags | map | | no |
| vpc_id | The ID of the VPC where the instance will be created | string | | yes |
| zone_id | Route53 DNS Zone ID | string | | no |

Security & Compliance

Security scanning is graciously provided by Bridgecrew. Bridgecrew is the leading fully hosted, cloud-native solution providing continuous Terraform security and compliance.

Benchmark Description
Infrastructure Security Compliance
Center for Internet Security, KUBERNETES Compliance
Center for Internet Security, AWS Compliance
Center for Internet Security, AZURE Compliance
Payment Card Industry Data Security Standards Compliance
National Institute of Standards and Technology Compliance
Information Security Management System, ISO/IEC 27001 Compliance
Service Organization Control 2 Compliance
Center for Internet Security, GCP Compliance
Health Insurance Portability and Accountability Compliance

»Write configuration

The set of files used to describe infrastructure in Terraform is known as a
Terraform configuration. You will write your first configuration to define a single
AWS EC2 instance.

Each Terraform configuration must be in its own working directory. Create a
directory for your configuration.


Change into the directory.


Create a file named main.tf to define your infrastructure.


Open main.tf in your text editor, paste in the configuration below, and save
the file.

Tip: The AMI ID used in this configuration is specific to a single
region. If you would like to use a different region, see the
Troubleshooting section for guidance.


This is a complete configuration that you can deploy with Terraform. The
following sections review each block of this configuration in more
detail.

Terraform Block

The terraform block contains Terraform settings, including the required
providers Terraform will use to provision your infrastructure. For each provider, the
source attribute defines an optional hostname, a namespace, and the provider
type. Terraform installs providers from the Terraform
Registry by default. In this example
configuration, the aws provider's source is defined as hashicorp/aws, which
is shorthand for registry.terraform.io/hashicorp/aws.

You can also set a version constraint for each provider defined in the
required_providers block. The version attribute is optional, but we
recommend using it to constrain the provider version so that Terraform does not
install a version of the provider that does not work with your configuration. If
you do not specify a provider version, Terraform will automatically download the
most recent version during initialization.

To learn more, reference the provider source
documentation.
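
Putting that together, the terraform block would look roughly like the sketch below; the version constraints are only examples, not values from the original configuration:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # shorthand for registry.terraform.io/hashicorp/aws
      version = "~> 4.16"         # example version constraint
    }
  }

  required_version = ">= 1.2.0"   # example constraint on the Terraform CLI itself
}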

Providers

The provider block configures the specified provider, in this case aws. A
provider is a plugin that Terraform uses to create and manage your resources.

The profile attribute in the provider block refers Terraform to the AWS
credentials stored in your AWS configuration file, which you created when you
configured the AWS CLI. Never hard-code credentials or other secrets in your
Terraform configuration files. Like other types of code, you may share and
manage your Terraform configuration files using source control, so hard-coding
secret values can expose them to attackers.

You can use multiple provider blocks in your Terraform configuration to manage
resources from different providers. You can even use different providers
together. For example, you could pass the IP address of your AWS EC2 instance to
a monitoring resource from DataDog.
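
A sketch of such a provider block, assuming a named profile from the credentials file that aws configure created; the region and profile name are placeholders:

provider "aws" {
  region  = "us-east-1"   # placeholder region
  profile = "default"     # named profile from ~/.aws/credentials, not a hard-coded secret
}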

Resources

Use resource blocks to define components of your infrastructure. A resource
might be a physical or virtual component such as an EC2 instance, or it can be a
logical resource such as a Heroku application.

Resource blocks have two strings before the block: the resource type and the
resource name. In this example, the resource type is aws_instance, and the name
is the label you give the instance. The prefix of the type maps to the name of
the provider, so Terraform manages this resource with the aws provider.
Together, the resource type and resource name form a unique ID for the
resource, for example aws_instance.web for an instance named web.

Resource blocks contain arguments which you use to configure the resource.
Arguments can include things like machine sizes, disk image names, or VPC IDs.
Our providers reference documents the required and optional arguments for each
resource. For your EC2 instance, the example configuration sets the AMI ID to
an Ubuntu image, and the instance type to t2.micro, which qualifies for the AWS
free tier. It also sets a tag to give the instance a name.
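
A resource block matching that description might look like the following; the resource name, the AMI ID, and the tag value are placeholders:

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"   # placeholder Ubuntu AMI ID for your region
  instance_type = "t2.micro"                # free tier eligible

  tags = {
    Name = "ExampleAppServerInstance"       # placeholder name tag
  }
}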

Outputs

| Name | Description |
|------|-------------|
| arn | The ARN of the instance |
| capacity_reservation_specification | Capacity reservation specification of the instance |
| id | The ID of the instance |
| instance_state | The state of the instance. One of: pending, running, shutting-down, terminated, stopping, stopped |
| ipv6_addresses | The IPv6 address assigned to the instance, if applicable |
| outpost_arn | The ARN of the Outpost the instance is assigned to |
| password_data | Base-64 encoded encrypted password data for the instance. Useful for getting the administrator password for instances running Microsoft Windows. This attribute is only exported if get_password_data is true |
| primary_network_interface_id | The ID of the instance's primary network interface |
| private_dns | The private DNS name assigned to the instance. Can only be used inside Amazon EC2, and only available if you've enabled DNS hostnames for your VPC |
| private_ip | The private IP address assigned to the instance |
| public_dns | The public DNS name assigned to the instance. For EC2-VPC, this is only available if you've enabled DNS hostnames for your VPC |
| public_ip | The public IP address assigned to the instance, if applicable. NOTE: If you are using an aws_eip with your instance, you should refer to the EIP's address directly and not use public_ip, as this field will change after the EIP is attached |
| spot_bid_status | The current bid status of the Spot Instance Request |
| spot_instance_id | The Instance ID (if any) that is currently fulfilling the Spot Instance request |
| spot_request_state | The current request state of the Spot Instance Request |
| tags_all | A map of tags assigned to the resource, including those inherited from the provider default_tags configuration block |
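
To surface any of these attributes after an apply, you can declare output values; a small sketch, assuming an instance resource named web:

output "instance_id" {
  description = "The ID of the instance"
  value       = aws_instance.web.id
}

output "instance_public_ip" {
  description = "The public IP address assigned to the instance"
  value       = aws_instance.web.public_ip
}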

Create EC2 user

When you create an account in AWS for the first time, you are given a root login that can access all services and features in AWS. As an AWS security best practice, use the root account only to create user accounts with limited access to AWS services. Since we will create infrastructure in AWS through Terraform, which will interact with the EC2 service, we will create a user with access to the EC2 service only.

Log in to the AWS console using the root account. Select Services -> A-Z -> IAM.

Click Users from the IAM dashboard.

Click "Add user".

Provide a user name and select only "Programmatic access". We have used "terraformuser" as the user name. Click "Next: Permissions".

Next click "Create group". Provide a group name and, in the policy type, filter by AmazonEC2. Select the first row, which gives Amazon EC2 full access.

Click "Next: Review".

Click "Create user".

Download the newly created user's Access key ID and Secret access key by clicking "Download .csv". These credentials are needed to connect to the Amazon EC2 service through Terraform.

Conclusion

In this post, we introduced the following concepts of Terraform with examples of creating resources in AWS Cloud:

  1. A resource is the basic building block of creating infrastructure with Terraform.
  2. Plugins are executable Go binaries that expose the implementation for a specific service, like AWS or Azure.
  3. Terraform resources are defined in a configuration file ending with .tf and written in the Terraform language using HCL syntax.
  4. Modules are used for organizing and grouping resources to create logical abstractions.
  5. The basic workflow is composed of the init-plan-apply cycle.
  6. The Terraform backend is configured as local or remote and is where state information is stored.
  7. Terraform Cloud and Terraform Enterprise use remote backends and are suitable for use in team environments.

These concepts should help you to get started with Terraform and inspire you to explore more advanced features like automation, extending its features, and integration capabilities.

How it Works

AWS CloudFormation codifies the details of an infrastructure into a configuration file, referred to as a template. CloudFormation currently supports a large number of resources.

If your resource is not currently on the AWS list, CloudFormation lets you create a resource using the CloudFormation Registry.

Terraform is not on the list of currently supported resources, so Cloudsoft had to create a registry resource for it, named Cloudsoft::Terraform::Infrastructure. To communicate with the Terraform server, our resource uses the Secure Shell (SSH) networking protocol.

Cloudsoft is an AWS Partner Network (APN) Advanced Consulting Partner with the AWS DevOps Competency. Cloudsoft helps businesses throughout their cloud journey by providing innovative combinations of services, software, and expertise.

To set up the registry resource, you need to gather the following information beforehand:

  • Terraform DNS hostname or IP address
  • SSH KeyPair
  • SSH username
  • SSH client private key
  • SSH port
  • SSH server public key fingerprint

Our registry resource creates and uses several AWS Systems Manager parameters.

The AWS CloudFormation template acts as a proxy to Terraform. To communicate with the Terraform server, it uses the registry resource type described above. The resulting architecture is shown in the following diagram.

Figure 1 – Architecture of the Terraform custom resource on AWS CloudFormation.

Once the solution is deployed, the CloudFormation and Terraform files are placed in an Amazon Simple Storage Service (Amazon S3) bucket.

You can then launch the CloudFormation wrapper files directly, or use them to create AWS Service Catalog products so that end users with the proper permissions can launch them from the Service Catalog console, based on the Terraform CloudFormation wrapper file.

Either way, CloudFormation uses the registry resource to communicate with the Terraform server. After that, the Terraform server manages the AWS resources, and the resource provider logs the activity into an S3 bucket.

Install Terraform

Download Terraform for your system. Installation is very simple: download the Terraform zip archive and unzip it in a suitable location. Once you have unzipped it, make sure the Terraform binary is on your PATH. The folder /usr/local/bin is already on the PATH, so if you unzip there you don't need to set anything; if you are using any other location, add it to the PATH environment variable, either in .bash_profile or in /etc/profile.

Verify the installation with the terraform version command.

Installing aws-iam-authenticator

  1. Download the Amazon EKS-vended binary from Amazon S3
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/aws-iam-authenticator

2. Apply execute permissions to the binary

chmod +x ./aws-iam-authenticator

3. Copy the binary to a folder in your PATH

mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH

4. Add $HOME/bin to your PATH environment variable permanently

echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc

Configure kubectl for Amazon EKS

Before creating the kubeconfig file, run aws configure. Here you can also create a named profile and add it to the kubeconfig file

aws eks --region region update-kubeconfig --name cluster_name --profile profile-name

Test your configuration

kubectl get svc

Output:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m

To enable worker nodes to join your cluster

Download, edit, and apply the AWS IAM Authenticator configuration map

curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml

Open the aws-auth-cm.yaml file in any editor and replace the placeholder with the ARN of the instance role (not the instance profile)

Note: do not change any other line in this file

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Apply the configuration. This command may take a few minutes to finish

kubectl apply -f aws-auth-cm.yaml

Watch the status of your nodes and wait for them to reach the Ready status

kubectl get nodes --watch

Deploy the Nginx container to the Cluster

The Kubernetes cluster is now ready; it's time to deploy the Nginx container.

On the Master Node, run the following command to create an Nginx deployment:

kubectl create deployment nginx --image=nginx

Output:

deployment.apps/nginx created

You can list out the deployments with the following command:

kubectl get deployments

Output :

NAME    READY   UP-TO-DATE   AVAILABLE
nginx   0/1     1            0

Next, we need to make the Nginx container available to the network with this command:

kubectl create service nodeport nginx --tcp=80:80

Now, list out all the services by running the following command:

kubectl get svc

You should see the Nginx service with assigned port 30784:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
nginx        NodePort    10.102.166.47   <none>        80:30784/TCP
kubernetes   ClusterIP   10.100.0.1      <none>        443/TCP

Finally, just open your web browser and enter the URL http://<workernode-private-ip>:30784 (your worker node's IP and the port number from your output). You should see the default Nginx welcome page.

Congratulations! Your Nginx container has been deployed on your Kubernetes Cluster.

If you want a more capable load balancer, follow the ALB Ingress Controller on AWS EKS guide to deploy an Application Load Balancer.

Reference Guide

Terraform

AWS EKS

Nginx

»Inspect state

When you applied your configuration, Terraform wrote data into a file called
terraform.tfstate. Terraform stores the IDs and properties of the resources it
manages in this file, so that it can update or destroy those resources going
forward.

The Terraform state file is the only way Terraform can track which resources it
manages, and often contains sensitive information, so you must store your state
file securely and restrict access to only trusted team members who need to manage
your infrastructure. In production, we recommend storing your state
remotely with Terraform
Cloud or Terraform Enterprise. Terraform also supports several other remote
backends
you can use to store and manage your state.

Inspect the current state using terraform show.


When Terraform created this EC2 instance, it also gathered the resource's metadata
from the AWS provider and wrote the metadata to the state file. Later in this collection,
you will modify your configuration to reference these values to configure
other resources and output values.

Manually Managing State

Terraform has a built-in command called terraform state for advanced state
management. Use the list subcommand to list the resources in your
project's state.


Security groups

Select the instance type you need and go to the "6. Configure Security Group" tab at the top of the page. Security groups filter traffic into and out of our instance; essentially, they control who has access to our virtual machine.

You (and only you) will need to access the instance over SSH, so add a rule that allows SSH from "My IP". We want others to be able to reach our application through a web browser, so also add a rule allowing HTTP access from all sources. The final security configuration:


Security group rules

Then click Review and Launch, and then Launch. This brings up the key pair options. A key pair is needed to access the server over SSH, so be sure to create a new key pair and save the private key somewhere you will remember. If you lose it, you will not be able to access your instance again!

Why Terraform?

Terraform utilizes the cloud provider APIs (application programming interfaces) to provision infrastructure, so it requires no authentication mechanisms beyond what the customer already uses with the cloud provider. This makes it one of the best options in terms of maintainability, security, and ease of use.

The motivation behind this post is to illustrate an example of the following steps (a condensed sketch follows the list):

  1. creating an AWS IAM role using terraform.
  2. creating an IAM policy using terraform.
  3. attaching the policy to the role using terraform.
  4. creating the IAM instance profile using terraform.
  5. Assigning the IAM role, to an EC2 instance on the fly using terraform.
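
As a preview, here is a condensed sketch covering those five steps; all names, the policy body, and the AMI ID are placeholders rather than the exact values used later:

resource "aws_iam_role" "ec2_s3_access_role" {
  name = "ec2-s3-access-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_policy" "ec2_s3_access_policy" {
  name = "ec2-s3-access-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:ListBucket", "s3:GetObject"]   # example S3 permissions
      Resource = "*"
    }]
  })
}

resource "aws_iam_policy_attachment" "attach" {
  name       = "ec2-s3-access-attachment"
  roles      = [aws_iam_role.ec2_s3_access_role.name]
  policy_arn = aws_iam_policy.ec2_s3_access_policy.arn
}

resource "aws_iam_instance_profile" "profile" {
  name = "ec2-s3-access-profile"
  role = aws_iam_role.ec2_s3_access_role.name
}

resource "aws_instance" "web" {
  ami                  = "ami-00068cd7555f543d5"   # placeholder AMI ID
  instance_type        = "t2.micro"
  iam_instance_profile = aws_iam_instance_profile.profile.name
}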