Here is the deployment flow we want to settle:
Copy and paste the following `buildspec.yaml`:

Don't forget to change the path to the Dockerfile on line 21, and the Helm release name and the path to the Helm files on line 27. As you may notice in the `pre_build` step, a kube-config file is copied to `~/.kube/config`, so commit it at the root of the repository under the name `kube-<ENV>`.
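For reference, the `pre_build` phase described above might look like the following sketch. The exact commands are assumptions based on this guide (the `kube-$ENV` file name and the `AWS_ACCOUNT_ID`, `AWS_DEFAULT_REGION` and `ENV` environment variables come from the setup below):

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Copy the kube-config committed at the repo root so kubectl and helm
      # can reach the EKS cluster (file name assumed from this guide)
      - mkdir -p ~/.kube
      - cp kube-$ENV ~/.kube/config
      # Log Docker in to ECR so the built image can be pushed
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
```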
Here are the steps to follow on the AWS console. Each bullet point corresponds to a new page:

- Name the project `deploy-staging` or `deploy-production`.
- Add the following environment variables:
  - `AWS_ACCOUNT_ID = <YOUR_ACCOUNT_ID>`
  - `AWS_DEFAULT_REGION = <PROJECT_REGION>`
  - `IMAGE_REPO_NAME = <ECR_URL>`
  - `ENV = <DEPLOYED_ENV>`
- Point the build to your `buildspec.yaml` file. Usually, this file is at the root of the code repository.

Granting all the needed permissions is an important step, especially if you want, for instance, to restrict access to your EKS cluster.
During the deployment, a Docker image will be pulled and pushed to ECR, the AWS container image registry, so the CodeBuild process needs the rights to interact with it.
On the IAM page, in the Roles section, find the role you created during the CodeBuild setup and attach a new policy to it: `AmazonEC2ContainerRegistryPowerUser`.
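If you prefer the CLI, the same managed policy can be attached with a single command. The role name below is a placeholder; use the name of the role created during the CodeBuild setup:

```shell
# Attach the ECR PowerUser managed policy to the CodeBuild service role
# (replace codebuild-deploy-staging-service-role with your actual role name)
aws iam attach-role-policy \
  --role-name codebuild-deploy-staging-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser
```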
The last phase of the deployment is the upgrade of the cluster via the `helm upgrade` command, so the CodeBuild process needs access to the Kubernetes cluster. To grant it, edit the `aws-auth` ConfigMap with the command `kubectl edit -n kube-system configmap/aws-auth` and add the following lines below the `mapUsers` key:
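As an illustration only (the ARN below is a placeholder, and note that an IAM *role* such as the CodeBuild service role is normally mapped under the `mapRoles` key rather than `mapUsers`), a typical entry looks like:

```yaml
mapRoles: |
  - rolearn: arn:aws:iam::<YOUR_ACCOUNT_ID>:role/codebuild-deploy-staging-service-role
    username: codebuild
    groups:
      - system:masters
```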
You're ready to launch your first deployment, either by pushing new code to the Git branch or by running `aws codepipeline start-pipeline-execution --name deploy-<ENV>` from your terminal.
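A manual run for the staging environment, followed by a status check, could look like this (the pipeline name `deploy-staging` assumes the naming convention used above):

```shell
# Start a manual run of the staging pipeline
aws codepipeline start-pipeline-execution --name deploy-staging

# Follow the progress of the pipeline's stages
aws codepipeline get-pipeline-state --name deploy-staging
```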