Continuous deployment to AWS ECS from CircleCI
You know what it all about. So let’s start.
We have some kind of website with dockerized environment and we want to configure automatic zero-time deployment on push to the our github repository
What we a going to use to do that:
- CircleCI as build server
- Github as code repository
- Amazon EC2 Container Service (ECS) as production environment and our deployment target
- Amazon S3 bucket to keep our secret keys used by website
It’s great build service integrated with Github. Very easy configurable by yml file - just put
circle.yml into your repository, configure dependencies, test and deploy sections and that’s it.
Amazon EC2 Container Service
AWS ECS organise cluster from multiple Amazon machines (AWS EC2 instances).
To run container on cluster from your docker image you should define
Task - which is just resources definition for your container instance, like which docker image to use, how much memory to allocate for container etc.
Tasks runs on cluster instances, and which
Tasks and how many
Tasks should be are defined in
Service settings allow to configure which ELB (Amazon load balancer) to use for current cluster and keep the rules for Auto Scaling - in case you want to change number of containers dynamically.
To deploy new version of your docker image you should create a new
Task Definition with new image tag inside, register it in
Service and run
Service update after that to populate new
Service update, new
Task will be run on any free reserved EC2 instance, and, after that, old
Task will be stopped.
Unfortunately, if you want to have smooth updating without maintenance period, you can’t just run
Task with new image version on the same machine because it use the same port as previous one, so to do that you should shot down the old container before run of new one.
Which means you should keep one EC2 instance on cluster free, for deployment purposes.
Amazon EC2 Container Registry (ECR)
It’s a docker registry like Dockerhub, but managed by Amazon and have more options on access management. We will use it because of speed and security reasons.
Common problem people face during deployment with docker is how to handle security info like database credentials, or third-party API keys, passwords and tokens.
Including this information into docker image is not secure enough because it can be retrieved from docker cache or your infrastructure will be compromised if someone got access to your docker repository or image. Anyway, to include this variables to image you should provide them to the build server or commit it into your code repository which is also not a good choice to keep secured info, especially in case you use third-party build services. Some people add this information to the ECS Task definition, but it’s also require to store this info inside build server environment or code repository.
We are going to use S3 bucket with the special access rules based on AWS Identity and Access Management (IAM) role to handle this. In this case, environment variables with sensitive data can be retrieved only from inside of our VPC.
What we are going to do
So, our detailed deployment process will be look like this:
And implementation steps:
- Create repository for docker images
- Create task template for ECS service
- Configure secrets bucket
- Configure docker image
- Create cluster
- Configure cluster Service
- Configure CircleCI
Create repository for docker images
Go to the
Amazon EC2 container service ->
Repositories in your AWS Console. Click
Create repository and enter repository name.
I suggest to use more specific name, like
Create task template for ECS service
Now we need to create template for
Task Definitions. We will create new
Task Definition after build new docker image using this template.
<Repository URL> with docker repository url from the previous step.
<TASK FAMILY> we will replace later with the correct
Task family option.
You can choose any
<TASK NAME> you want (
website, for example).
On deployment our deploy script will replace
%IMAGE_TAG% to the real one and produce new
Task Definition for
cpu options are self-explained, but make sure that you allocate enough memory for container.
Configure secrets bucket
Create bucket on S3 with name
ecs-secrets and add following policy to the bucket (
Edit bucket policy)
This policy allow to put only encrypted files inside, and allow to get files from specific VPC only.
(Assume you already have configured VPC. If not - you can create new one on
Create cluster step and use this new VPC for bucket policy).
To upload a new file to the bucket you can use following command with aws cli (AWS command line tool):
aws s3 cp website_secrets.txt s3://ecs-secrets/website_secrets.txt --sse
Put your secret environment inside
website_secrets.txt, for example:
Also, I suggest to enable logging for this bucket — just in case.
UPD. Now we can provide access to this bucket for specific Task only, which is more secure than give access from VPC.
Configure docker image
Now we need to configure our docker image to load secrets from S3 into container environment on container start. We will use endpoint script for this.
It will load each line of the
website_secrets.txt into container environment, so all environment variables will be accessible by webserver.
secrets-endpoint.sh inside your repository:
Don’t forget to add
execute permissions to script.
As you can see, this script use aws cli to download file with secrets, so before run it in container we should install aws cli into docker images.
Dockerfile and add following lines to install aws cli inside docker image (example for debian-based distro images, like Ubuntu):
and, we need to put endpoint script to the image and run it right before
CMD line in
For cluster creation I suggest to use ecs cli. There is two reasons to use ecs cli insted of aws cli - first of all it mo simple to use. The secуnd reason - ecs cli setup cluster through Cloud Formation which will manage cluster resources - EC2 instances, vpc, security roles etc. So we don’t need to do that manually. Additionally it allow to use docker-compose yml file for Task Definition creation. We are not going to use this feature, but it can be helpful in other scenarios.
So, install the ecs cli and create configuration file for it (on Linux it will be
~/.ecs/config). Example config file:
Choose your cluster name at that point. Usually I create the name by pattern
projectname-formation to add information how this cluster resources are managed (
Cloud Formation in this case).
To create cluster on existing VPC run the following command:
Keypairs are stored on
AWS EC2 -> look to the left panel ->
Key pairs. There you can create key pair to access to the EC2 instances.
If you still have no VPC,
ecs-cli up without
--vpc option will create a new one.
You should add minimum 2 instances in to your cluster in order to have one machine free as deployment target.
Configure cluster Service
After cluster creation we should create
Service in the cluster to manage our
Tasks. Before do that, lets push latest docker image to the ECR repository and register a new Task for this image.
Login to the ECR:
Build new image
Tag image with ECR tags
<ECR Repository URL> - URL to the project docker repository (looks like
Push images to ECR
Now, let’s create a new
Task Definition and register it on ECS:
<TASK FAMILY> can be any name you want to group your
Tasks. Ususally, I use
Ok, now we can create
Service. Go to the ECS -> your cluster ->
Service tab -> click
Task Definition, add service name (
website) and set 1 task in field
Number of tasks.
Service should run container on one of the registered instances.
Let’s make sure that everything works. Go to the
Tasks tab and click on value in
Container Instance column for the active task.
You should be able to see
Public IP. Try to open it in browser, and check is everything works as expected.
If something goes wrong, you can check your container in instance. To do that, at first, you must allow SSH connection to the instance.
- Go to instance on EC2 (you can do it from
- Click on current security group
- Go to
- Add rule for SSH
Connect to the instance:
<INSTANCE IP> is public IP for EC2 instance with pur container.
After that you can check containers on this instance:
And see container logs:
Also you can do regular docker stuff.
Don’t forget to remove SSH permission from instance after debugging!
If everything works fine, we can configure CircleCI to automatically deploy new version of docker image to the AWS ECS.
circle.yml to the project and connect project on service.
We need AWS credentials to allow CircleCI to push new images to the ECR and update ur ECS Service. I’s highly recommended to create separate role for that on AWS IAM.
Add AWS credentials to the CircleCI project settings.
project settings ->
Permissions section ->
AWS permissions -> add AWS key and secret
aws cli expect default region for work, so we need to set environment variable with AWS region in build environment
Add default AWS region:
project settings ->
Build Settings section ->
Environment Varibles -> add
AWS_DEFAULT_REGION varible with you region (
us-east-1 in my case)
circle.yml to build new image and push it to the ECR and run deployment script. Example:
Now we need to creare
deploy.sh. This script should:
- Create new task definition for docker image with
- Register it in cluster
Don’t forget to replace
<YOUR SERVICE NAME>
<YOUR CLUSTER NAME>
<YOUR TASK FAMILY> with the correct values.
That’s it. Now you have Continues Deployment to the Amazon EC2 Container Service. To more information, please check the Literature section. Hopefully this will be helpfull for someone.