Jenkins X EKS Module

This repository contains a Terraform module for creating an EKS cluster and all the necessary infrastructure to install Jenkins X as described in https://jenkins-x.io/v3/admin/platforms/eks/.
What is a Terraform module

A Terraform module is a self-contained package of Terraform configurations that are managed as a group. For more information about modules, refer to the Terraform documentation.
How do you use this module
Prerequisites
This Terraform module allows you to create an EKS cluster ready for the installation of Jenkins X.
You need the following binaries locally installed and configured on your PATH:
terraform (>= 1.0.0, < 2.0.0)
kubectl (>= 1.10)
aws-cli
helm (>= 3.0)
Cluster provisioning
From version 3.0.0 this module creates neither the EKS cluster nor the VPC. We recommend using the Terraform modules terraform-aws-modules/eks/aws to create the cluster and terraform-aws-modules/vpc/aws to create the VPC. A Jenkins X ready cluster can be provisioned using the configuration in jx3-terraform-eks as described in https://jenkins-x.io/v3/admin/platforms/eks/.
All S3 buckets created by the module use Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3) by default. You can set the value of use_kms_s3 to true to use server-side encryption with AWS KMS (SSE-KMS). If you don’t specify a value for s3_kms_arn, the default AWS-managed CMK (aws/s3) is used.
Note: Using AWS KMS with customer-managed keys has cost considerations.
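As a sketch, switching the buckets to SSE-KMS with a customer-managed key might look like this (the key ARN below is a placeholder, not a real key):

```hcl
module "eks-jx" {
  source     = "github.com/jenkins-x/terraform-aws-eks-jx"
  use_kms_s3 = true
  # Placeholder ARN; omit s3_kms_arn entirely to use the default AWS-managed key (aws/s3).
  s3_kms_arn = "arn:aws:kms:us-east-1:111111111111:key/00000000-0000-0000-0000-000000000000"
}
```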
In addition, make sure to specify the region both via the AWS_REGION environment variable, e.g.
export AWS_REGION=us-east-1, and via the region variable (make sure the region variable matches the environment variable).
You should also have your AWS CLI configured correctly.
The IAM user does not need any permissions attached to it.
Once you have your initial configuration, you can apply it by running:
terraform init
terraform apply
This creates an EKS cluster with all possible configuration options defaulted.
Note: This example is for getting up and running quickly.
It is not intended for a production cluster.
Refer to Production cluster considerations for things to consider when creating a production cluster.
Migrating to current version of module from a version prior to 3.0.0
If you have already created an EKS cluster using a pre-3.0.0 version of this module, there is unfortunately no easy
way to upgrade without recreating the cluster. If you already created the cluster in some other way and now set
create_eks = false, you only need to remove some inputs; that much simpler case isn’t covered here.
While it would be a bit easier to start out using the same version of terraform-aws-modules/eks/aws as previously used by this module, we
would advise against that: that version is very old and doesn’t support a lot of features currently available in AWS.
Let’s say you created your
cluster using an old version of the template and change
your configuration to a current version. If you then run terraform plan, you will see that basically
everything would be destroyed and recreated. To mitigate that, you can move resources in the Terraform state to their new addresses. In some cases there is no
corresponding new address; instead you are better off removing resources from the state to avoid them being destroyed before the new resources are created. This means that you
need to remove those cloud resources manually later. You can also tweak configurations to prevent resources from being
replaced. If you check the output from terraform plan, you will see that resources marked as “must be replaced”
have one or more inputs with the comment “# forces replacement”. If a resource needs to be kept to
prevent disruption or data loss, try to tweak the configuration so that the input’s value is reverted to
what it was before.
terraform state mv module.eks-jx.random_pet.current random_pet.current # Only needed if cluster_name wasn't specified
terraform state mv module.eks-jx.module.cluster.module.vpc module.vpc # Only needed if create_vpc wasn't false
terraform state mv module.eks-jx.module.cluster.module.eks module.eks
terraform state mv 'module.eks.aws_iam_role.cluster[0]' 'module.eks.aws_iam_role.this[0]'
# If the following two commands fail it is because you are migrating from a version of this module that didn't
# create these resources. That is not a problem, but if you have installed the add-on in some other way you will
# need to issue some other command instead: either "terraform state mv" or "terraform import"
terraform state mv 'module.eks-jx.module.cluster.aws_eks_addon.ebs_addon[0]' 'module.eks.aws_eks_addon.this["aws-ebs-csi-driver"]'
terraform state mv module.eks-jx.module.cluster.module.ebs_csi_irsa_role module.ebs_csi_irsa_role
# Removing the following resources from the state prevents terraform apply from destroying existing node groups and
# related resources before new ones are created. But this means that you need to delete the resources manually later.
terraform state rm module.eks.module.node_groups
terraform state rm 'module.eks.aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[0]' 'module.eks.aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[0]' 'module.eks.aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[0]'
terraform state rm 'module.eks.aws_security_group.workers[0]' 'module.eks.aws_iam_role.workers[0]'
terraform state rm $(terraform state list | grep aws_security_group_rule.workers)
terraform state rm $(terraform state list | grep aws_security_group_rule.cluster)
In main.tf some tweaks are needed. Add the following inputs to the module eks.

Cluster add-ons

If you already create cluster add-ons with Terraform, you can either remove the corresponding add-on from the
cluster_addons input of the eks module or use terraform state mv to change the address in the state file and
thus prevent destroying and recreating the add-on.
aws-auth config map
If you have configured the config map aws-auth by setting any of the inputs map_accounts, map_roles or
map_users, you will need to either configure aws-auth in some other way (see
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/20.20.0/submodules/aws-auth) or switch to using access entries.
See the documentation for the input access_entries in terraform-aws-modules/eks/aws and the
AWS documentation.
If you keep aws-auth you should remove the old configuration, so the config map isn’t deleted temporarily during
terraform apply:
terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]'
Cluster Autoscaling
This module does not install cluster-autoscaler itself; it installs all of the prerequisite policies and roles required to install it.
Create a pull request for your cluster repository with the changes created by the following command (run with the root of
your cluster repo as the current directory):
In the file kube-system/helmfile.yaml you should now configure a version of cluster autoscaler suitable for your
version of Kubernetes by adding values for the chart:
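A sketch of what the helmfile entry might look like, assuming the upstream cluster-autoscaler chart (the repository URL and value paths are assumptions to verify against the chart’s documentation):

```yaml
repositories:
- name: autoscaler
  url: https://kubernetes.github.io/autoscaler
releases:
- name: cluster-autoscaler
  chart: autoscaler/cluster-autoscaler
  values:
  - image:
      # Match the tag to your cluster's Kubernetes minor version.
      tag: v1.30.0
```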
Notice the image tag is v1.30.0 - this tag goes with clusters running Kubernetes 1.30.
If you are running another version, you will need to find the image tag that matches your cluster version.
Open the Cluster Autoscaler releases page and find the latest Cluster Autoscaler version that
matches your cluster’s Kubernetes major and minor version. For example, if your cluster’s Kubernetes version is 1.29,
find the latest Cluster Autoscaler release that begins with 1.29. Use that release’s semantic version number
(for example, 1.29.3) to form the tag.
Other values to configure for the chart (apart from image.tag) can be seen in the documentation.
The verify pipeline for the cluster repository will add some default values to helmfile.yaml. When this is done
the PR can be merged by approving it.
Note: If you later on remove helmfiles/kube-system/helmfile.yaml from the root helmfiles.yaml the
jx boot job will try to remove the kube-system namespace, which would make the Kubernetes cluster
non-functional. To prevent this you would need to remove the label gitops.jenkins-x.io/pipeline from the
kube-system namespace (i.e. run kubectl label ns kube-system gitops.jenkins-x.io/pipeline-) before the change to
the root helmfiles.yaml.
Long Term Storage
You can choose to create S3 buckets for long-term storage of Jenkins X build artefacts with enable_logs_storage, enable_reports_storage and enable_repository_storage.
During terraform apply the enabled S3 buckets are created, and the jx_requirements output will contain the following section:
If you just want to experiment with Jenkins X, you can set the variable force_destroy to true.
This allows you to remove all generated buckets when running terraform destroy.
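A minimal sketch of enabling all three buckets:

```hcl
module "eks-jx" {
  source                    = "github.com/jenkins-x/terraform-aws-eks-jx"
  enable_logs_storage       = true
  enable_reports_storage    = true
  enable_repository_storage = true
  # Experiment-only: lets terraform destroy remove non-empty buckets.
  force_destroy             = true
}
```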
Note: If you set force_destroy to false and run terraform destroy, it will fail. In that case, empty the S3 buckets from the AWS S3 console and re-run terraform destroy.
An enable_acl variable is available and defaults to false. If you need to provide an ACL with bucket ownership controls for the bucket, set enable_acl to true.
Secrets Management

Vault is the default tool used by Jenkins X for managing secrets.
Part of this module’s responsibility is the installation of the Vault Operator, which in turn installs Vault.
You can also configure an existing Vault instance for use with Jenkins X.
In this case:
provide the Vault URL via the vault_url input variable
set boot_secrets in main.tf to this value:
boot_secrets = [
{
name = "jxBootJobEnvVarSecrets.EXTERNAL_VAULT"
value = "true"
type = "string"
},
{
name = "jxBootJobEnvVarSecrets.VAULT_ADDR"
value = "https://enter-your-vault-url:8200"
type = "string"
}
]
follow the Jenkins X documentation around the installation of an external Vault instance.
To use AWS Secrets Manager instead of Vault, set the use_vault variable to false and the use_asm variable to true.
You will also need a role that grants access to AWS Secrets Manager; this will be created for you by setting the create_asm_role variable to true.
Setting the above variables adds the ASM role ARN to the boot job service account, which is required for the boot job to interact with AWS Secrets Manager to populate secrets.
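The steps above can be sketched as:

```hcl
module "eks-jx" {
  source          = "github.com/jenkins-x/terraform-aws-eks-jx"
  use_vault       = false # disable the default Vault-based secret store
  use_asm         = true  # use AWS Secrets Manager instead
  create_asm_role = true  # create the role granting access to Secrets Manager
}
```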
NGINX
The module can install the nginx chart by setting the create_nginx flag to true.
An example can be found here.
You can specify a nginx_values.yaml file, or the module will use the default one stored here.
If you are using Terraform to create nginx resources, do not use the chart specified in the versionstream.
Remove the entry in the helmfile.yaml referencing the nginx chart:
path: helmfiles/nginx/helmfile.yaml
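As an illustrative sketch (the nginx_values_file input name is an assumption; check the module’s inputs for your version):

```hcl
module "eks-jx" {
  source            = "github.com/jenkins-x/terraform-aws-eks-jx"
  create_nginx      = true
  # Assumed input name for a custom values file; the module ships a default nginx_values.yaml.
  nginx_values_file = "nginx_values.yaml"
}
```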
ExternalDNS
You can enable ExternalDNS with the enable_external_dns variable. This modifies the generated jx-requirements.yml file to enable External DNS when running jx boot.
If enable_external_dns is true, additional configuration is required.
If you want to use a domain with an already existing Route 53 Hosted Zone, you can provide it through the apex_domain variable:
This domain will be configured in the jx_requirements output in the following section:
If you want to use a subdomain and have this module create and configure a new Hosted Zone with DNS delegation, you can provide the following variables:
subdomain: This subdomain is added to the apex domain and configured in the resulting jx-requirements.yml file.
create_and_configure_subdomain: This flag instructs the script to create a new Route53 Hosted Zone for your subdomain and configure DNS delegation with the apex domain.
By providing these variables, the script creates a new Route 53 Hosted Zone named <subdomain>.<apex_domain>, then delegates DNS resolution to it from the apex domain.
This is done by creating an NS record set in the apex domain’s Hosted Zone containing the subdomain Hosted Zone’s name servers.
This ensures that the newly created Hosted Zone for the subdomain is instantly resolvable instead of having to wait for DNS propagation.
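A sketch combining the variables above (the domain values are placeholders):

```hcl
module "eks-jx" {
  source                         = "github.com/jenkins-x/terraform-aws-eks-jx"
  enable_external_dns            = true
  apex_domain                    = "example.org" # placeholder apex domain
  subdomain                      = "jx"          # placeholder subdomain
  create_and_configure_subdomain = true
}
```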
cert-manager
You can enable cert-manager to use TLS for your cluster through LetsEncrypt with the enable_tls variable.
LetsEncrypt has two environments, staging and production.
If you use staging, you will receive self-signed certificates but are not rate-limited; if you use the production environment, you receive certificates signed by LetsEncrypt but can be rate-limited.
You can choose to use the production environment with the production_letsencrypt variable:
You need to provide a valid email to register your domain in LetsEncrypt with tls_email.
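A sketch of the TLS inputs described above (the email is a placeholder):

```hcl
module "eks-jx" {
  source                 = "github.com/jenkins-x/terraform-aws-eks-jx"
  enable_tls             = true
  production_letsencrypt = true              # use the LetsEncrypt production environment
  tls_email              = "you@example.org" # placeholder email
}
```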
Customer’s CA certificates
If a customer has certificates signed by a CA and wants to use them instead of LetsEncrypt certificates, Terraform creates a Kubernetes tls-ingress-certificates-ca secret containing tls_key and tls_cert in the default namespace.
The user should define:
enable_external_dns = true
apex_domain = "office.com"
subdomain = "subdomain"
enable_tls = true
tls_email = "customer@office.com"
// Signed Certificate must match the domain: *.subdomain.office.com
tls_cert = "/opt/CA/cert.crt"
tls_key = "LS0tLS1C....BLRVktLS0tLQo="
Production cluster considerations
The configuration, as seen in Cluster provisioning, is not suited for creating and maintaining a production Jenkins X cluster.
The following is a list of considerations for a production use case.
Specify the version of the module, for example:
module "eks-jx" {
  # The version attribute only works with registry sources; for this git source, pin a release with ?ref.
  source = "github.com/jenkins-x/terraform-aws-eks-jx?ref=v1.0.0"
  # insert your configuration
}
output "jx_requirements" {
value = module.eks-jx.jx_requirements
}
Specifying the version ensures that you are using a fixed version and that version upgrades cannot occur unintentionally.
Keep the Terraform configuration under version control by creating a dedicated repository for your cluster configuration or by adding it to an already existing infrastructure repository.
Setup a Terraform backend to securely store and share the state of your cluster. For more information refer to Configuring a Terraform backend.
Disable the public API endpoint for the EKS cluster.
If that is not possible, restrict access to it by specifying the CIDR blocks which can access it.
Configuring a Terraform backend
A “backend” in Terraform determines how state is loaded and how an operation such as apply is executed.
By default, Terraform uses the local backend, which keeps the state of the created resources on the local file system.
This is problematic since sensitive information will be stored on disk, and it is not possible to share state across a team.
When working with AWS, a good choice for your Terraform backend is the s3 backend, which stores the Terraform state in an AWS S3 bucket.
The examples directory of this repository contains configuration examples for using the s3 backend.
To use the s3 backend, you will need to create the bucket upfront.
You need the S3 bucket as well as a DynamoDB table for state locks.
You can use terraform-aws-tfstate-backend to create these required resources.
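A sketch of such a backend configuration (the bucket, key, and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # placeholder: pre-created S3 bucket
    key            = "eks-jx/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"         # placeholder: pre-created DynamoDB lock table
    encrypt        = true
  }
}
```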
Examples
You can find examples for different configurations in the examples folder.
Each example generates a valid jx-requirements.yml file that can be used to boot a Jenkins X cluster.
Flag to determine whether storage buckets get forcefully destroyed. If set to false, empty the buckets first in the AWS S3 console, or terraform destroy will fail with a BucketNotEmpty error.
Flag to determine whether the subdomain zone gets forcefully destroyed. If set to false, empty the subdomain zone first in the AWS Route 53 console, or terraform destroy will fail with a HostedZoneNotEmpty error.
Whether or not this module creates and manages the Vault instance. If set to false while use_vault is true, either an external Vault URL needs to be provided, or you need to install the Vault operator and instance using helmfile.
Flag to control whether the apex domain should be managed/updated by this module. Set this to false if your apex domain is managed in a different AWS account or by a different provider.
The cluster connection string to use once terraform apply finishes. You may have to provide the region and profile (as options or environment variables).
The IAM Role that the build pods will assume to authenticate.
FAQ: Frequently Asked Questions
IAM Roles for Service Accounts
This module sets up a series of IAM Policies and Roles. These roles will be annotated onto a few Kubernetes Service Accounts.
This allows us to make use of IAM Roles for Service Accounts to set fine-grained permissions on a per-pod basis.
There is no way to provide your own roles or define other Service Accounts via variables, but you can always modify the modules/cluster/irsa.tf Terraform file.
How can I contribute

Contributions are very welcome! Check out the Contribution Guidelines for instructions.