Security & Compliance
Security scanning is graciously provided by Bridgecrew. Bridgecrew is the leading fully hosted, cloud-native solution providing continuous Terraform security and compliance.
Benchmark | Description |
---|---|
| Infrastructure Security Compliance |
| Center for Internet Security, KUBERNETES Compliance |
| Center for Internet Security, AWS Compliance |
| Center for Internet Security, AZURE Compliance |
| Payment Card Industry Data Security Standards Compliance |
| National Institute of Standards and Technology Compliance |
| Information Security Management System, ISO/IEC 27001 Compliance |
| Service Organization Control 2 Compliance |
| Center for Internet Security, GCP Compliance |
| Health Insurance Portability and Accountability Compliance |
Usage
IMPORTANT: We do not pin modules to versions in our examples because of the
difficulty of keeping the versions in the documentation in sync with the latest released versions.
We highly recommend that in your code you pin the version to the exact version you are
using so that your infrastructure remains stable, and update versions in a
systematic way so that they do not catch you by surprise.
Also, because of a bug in the Terraform registry (hashicorp/terraform#21417),
the registry shows many of our inputs as required when in fact they are optional.
The table below correctly indicates which inputs are required.
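To illustrate the pinning advice above, a module block with an explicit version might look like the following. The source and version shown here are placeholders, not a specific recommendation:

```hcl
module "instance" {
  # Placeholder source and version — substitute the module you are using
  # and the exact release you have tested against.
  source  = "cloudposse/ec2-instance/aws"
  version = "1.4.0"

  # ... module inputs ...
}
```

Renovate or Dependabot can then propose version bumps as pull requests, so upgrades happen deliberately rather than implicitly.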
To enable custom_alerts, the map needs to be defined like so:
Every entry must contain all of the keys. If the alarm you are configuring does not require one or more of them, set those keys to an empty value rather than removing them; otherwise you may hit a confusing merge error caused by the maps having different sizes.
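A sketch of the shape such a map might take — the key names below are illustrative, not the module's actual schema; the point is that every entry carries the same set of keys, with unused ones set to empty strings:

```hcl
custom_alerts = {
  high-cpu = {
    alarm_name          = "high-cpu"
    comparison_operator = "GreaterThanThreshold"
    threshold           = "80"
    evaluation_periods  = "5"
    # This alarm does not use the key below, but it must remain present
    # (set to an empty value) so every map has the same keys and merge()
    # does not fail.
    extra_dimensions    = ""
  }
}
```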
Working with AWS Route 53 and Terraform on Unix/Linux
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. Developers and web service owners use it as an extremely reliable and cost-effective way to route end users to Internet applications, translating domain names (such as www.example.com) into the numeric IP addresses (such as 192.0.2.1) that computers use. Amazon Route 53 is also fully compliant with IPv6.
Amazon Route 53 routes user requests to infrastructure running in AWS, such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets, and it can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 both to route connections only to healthy endpoints (using DNS health checks) and to independently monitor the health of your application and its endpoints. With Amazon Route 53 Traffic Flow you can easily manage global traffic through a variety of routing types (such as latency-based routing, geo DNS, geoproximity, and weighted round robin), all of which can be combined with DNS failover to build fault-tolerant, low-latency architectures. Using the simple visual editor of Amazon Route 53 Traffic Flow, you can easily manage how end users are routed to your application's endpoints, whether within a single AWS Region or distributed across the globe. In addition, Amazon Route 53 offers domain name registration: when you purchase and manage domains (such as example.com), Amazon Route 53 will automatically configure their DNS settings.
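As an illustration, a minimal Route 53 setup in Terraform might look like this — the zone name and IP address are placeholders:

```hcl
# Minimal sketch: a hosted zone plus an A record pointing at an example IP.
resource "aws_route53_zone" "primary" {
  name = "example.com"
}

resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www.example.com"
  type    = "A"
  ttl     = 300
  records = ["192.0.2.1"]
}
```

After `terraform apply`, the zone's NS records must be set at your registrar (or Route 53 does this automatically for domains registered through it).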
Outputs
Name | Description |
---|---|
ARN of the AutoScaling Group | |
Time between a scaling activity and the succeeding scaling activity | |
The number of Amazon EC2 instances that should be running in the group | |
Time after instance comes into service before checking health | |
or . Controls how health checking is done | |
The AutoScaling Group id | |
The maximum size of the autoscale group | |
The minimum size of the autoscale group | |
The AutoScaling Group name | |
A list of tag settings associated with the AutoScaling Group | |
ARN of the AutoScaling policy scale down | |
ARN of the AutoScaling policy scale up | |
The ARN of the launch template | |
The ID of the launch template | |
Outputs
Name | Description |
---|---|
Arn of cloudwatch log group created | |
Name of cloudwatch log group created | |
The Amazon Resource Name (ARN) of the cluster. | |
Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. | |
The endpoint for your EKS Kubernetes API. | |
IAM role ARN of the EKS cluster. | |
IAM role name of the EKS cluster. | |
The name/id of the EKS cluster. Will block on cluster creation until the cluster is really ready. | |
The URL on the EKS cluster OIDC Issuer | |
The cluster primary security group ID created by the EKS cluster on 1.14 or later. Referred to as ‘Cluster security group’ in the EKS console. | |
Security group ID attached to the EKS cluster. On 1.14 or later, this is the ‘Additional security groups’ in the EKS console. | |
The Kubernetes server version for the EKS cluster. | |
A Kubernetes configuration to authenticate to this EKS cluster. | |
IAM role ARN for EKS Fargate pods | |
IAM role name for EKS Fargate pods | |
Amazon Resource Name (ARN) of the EKS Fargate Profiles. | |
EKS Cluster name and EKS Fargate Profile names separated by a colon (:). | |
kubectl config file contents for this EKS cluster. Will block on cluster creation until the cluster is really ready. | |
The filename of the generated kubectl config. Will block on cluster creation until the cluster is really ready. | |
Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys | |
The ARN of the OIDC Provider, if one was created. | |
Security group rule responsible for allowing pods to communicate with the EKS cluster API. | |
default IAM instance profile ARN for EKS worker groups | |
default IAM instance profile name for EKS worker groups | |
default IAM role ARN for EKS worker groups | |
default IAM role name for EKS worker groups | |
Security group ID attached to the EKS workers. | |
IDs of the autoscaling groups containing workers. | |
Names of the autoscaling groups containing workers. | |
ID of the default worker group AMI | |
ID of the default Windows worker group AMI | |
ARNs of the worker launch templates. | |
IDs of the worker launch templates. | |
Latest versions of the worker launch templates. | |
User data of worker groups | |
Inputs
Name | Description | Type | Default | Required |
---|---|---|---|---|
Application type that the ASG’s instances will serve | n/a | yes | ||
A list of classic load balancer names to add to the autoscaling group | no | |||
Time, in seconds, the minimum interval of two scaling activities | no | |||
The created ASG will have this number of instances at desired | no | |||
The list of ASG metrics to collect | no | |||
Time, in seconds, to wait for new instances before checking their health | no | |||
Controls how ASG health checking is done | no | |||
The created ASG will be attached to this target group | no | |||
The created ASG will have this number of instances at maximum | no | |||
The granularity to associate with the metrics to collect | no | |||
The created ASG will have this number of instances at minimum | no | |||
The placement group for the spawned instances | no | |||
The ARN of the service-linked role that the ASG will use to call other AWS services | no | |||
The created ASG will have these tags applied over the default ones (see main.tf) | no | |||
Specify policies that the auto scaling group should use to terminate its instances | no | |||
The created ASG will spawn instances to these subnet IDs | n/a | yes | ||
A maximum duration that Terraform should wait for ASG instances to be healthy before timing out | n/a | yes | ||
Terraform will wait for exactly this number of healthy instances in all attached load balancers on both create and update operations. If left to default, the value is set to asg_min_capacity | no | |||
Whether to associate public IP to the instance | no | |||
Primary role/function of the cluster | no | |||
The credit option for CPU usage, can be either ‘standard’ or ‘unlimited’ | no | |||
Whether the network interface will be deleted on termination | no | |||
Whether the volume should be destroyed on instance termination | no | |||
Free form description of this ASG and its instances | n/a | yes | ||
Whether to protect your instance from accidentally being terminated via the console or API | no |
Whether the volume will be encrypted or not | no | |||
The spawned instances will have EBS optimization if enabled | no | |||
The created resources will belong to this infrastructure environment | n/a | yes | ||
list(object({ name = string, values = list(string) })) | n/a | yes ||
n/a | yes |||
The spawned instances will have this IAM profile | n/a | yes | ||
The spawned instances will have this SSH key name | no | |||
no | ||||
{ "on_demand_allocation_strategy": "prioritized", "on_demand_base_capacity": "0", "on_demand_percentage_above_base_capacity": "100", "spot_allocation_strategy": "lowest-price", "spot_instance_pools": "2", "spot_max_price": "" } | no |
The spawned instances will have enhanced monitoring if enabled | no | |||
Abbreviation of the product domain this ASG and its instances belongs to | n/a | yes | ||
The spawned instances will have these security groups | n/a | yes | ||
The name of the service | n/a | yes | ||
Whether to use ASG mixed instances policy or the plain launch template | no | |||
The spawned instances will have this user data. Use the rendered value of a terraform’s data | no | |||
The size of the volume in gigabytes | no | |||
The type of volume. Can be standard, gp2, or io1 | no |
Related Projects
Check out these related projects.
- terraform-aws-ec2-instance — Terraform module for providing a general purpose EC2 instance
- terraform-aws-ec2-bastion-server — Terraform module to define a generic bastion host with parameterized user data
- terraform-aws-ec2-admin-server — Terraform module for providing an EC2 instance capable of running admin tasks
- terraform-aws-ec2-instance-group — Terraform module for provisioning multiple general purpose EC2 hosts for stateful applications
- terraform-aws-ec2-ami-snapshot — Terraform module to easily generate AMI snapshots to create replica instances
DevOps Accelerator for Startups
We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we’re your best bet.
- Reference Architecture. You’ll get everything you need from the ground up built using 100% infrastructure as code.
- Release Engineering. You’ll have end-to-end CI/CD with unlimited staging environments.
- Site Reliability Engineering. You’ll have total visibility into your apps and microservices.
- Security Baseline. You’ll have built-in governance with accountability and audit logs for all changes.
- GitOps. You’ll be able to operate your infrastructure via Pull Requests.
- Training. You’ll receive hands-on training so your team can operate what we build.
- Questions. You’ll have a direct line of communication between our teams via a Shared Slack channel.
- Troubleshooting. You’ll get help to triage when things aren’t working.
- Code Reviews. You’ll receive constructive feedback on Pull Requests.
- Bug Fixes. We’ll rapidly work with you to fix any bugs in our projects.
Getting Started
Conventions
The Auto Scaling Group will have Service, Cluster, Environment, and ProductDomain tags by default, which are propagated to all instances it spawns.
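In `aws_autoscaling_group` terms, tag propagation corresponds to `tag` blocks like the following — the values shown are illustrative, not this module's defaults:

```hcl
resource "aws_autoscaling_group" "example" {
  # ... launch template, capacity, subnets ...

  # Each tag with propagate_at_launch = true is applied to every
  # instance the group launches, not just to the ASG itself.
  tag {
    key                 = "Service"
    value               = "my-service"
    propagate_at_launch = true
  }

  tag {
    key                 = "Environment"
    value               = "production"
    propagate_at_launch = true
  }
}
```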
Behaviour
- To specify on-demand instance types, use the variable. Auto Scaling will launch instances based on the order of preference specified in that list; that is, the ASG will always try to launch the first type if it's available, falling back to the next one if it's not, and so on down the list.
- On the first deployment, this module will provision an ASG with a launch template that selects the most recent AMI that passes the given filters.
- Each time there's a change in the values of the keepers (e.g. security group, AMI ID), a new ASG will be provisioned by Terraform, and the old one will later be destroyed (the "simple swap" deployment strategy).
- When there's a change in the launch template parameters' values, Terraform will create a new launch template version, unless the new configuration is already identical to the latest version of the launch template (e.g. when the launch template has been updated externally).
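The keeper-driven swap described above can be sketched like this — the resource names and keeper fields are illustrative, not this module's actual internals:

```hcl
# Changing any keeper value generates a new suffix, which renames the ASG
# and therefore forces its replacement.
resource "random_id" "asg_suffix" {
  keepers = {
    ami_id          = data.aws_ami.selected.id
    security_groups = join(",", var.security_group_ids)
  }
  byte_length = 4
}

resource "aws_autoscaling_group" "this" {
  name = "my-service-${random_id.asg_suffix.hex}"
  # ... launch template, capacity, subnets ...

  # Create the replacement ASG before destroying the old one,
  # which is what makes the "simple swap" possible.
  lifecycle {
    create_before_destroy = true
  }
}
```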
Migration from pre-launch-template versions
```shell
terraform init
terraform state rm module.<this module name in your terraform code>.module.random_lc
terraform apply
```