CI - CD of Flask Application Using AWS CodeBuild, CodeDeploy and CodePipeline
PART 1
Continuous Integration (CI) and Continuous Deployment (CD) are core practices that teams use to
integrate changes quickly and ship features smoothly in today's fast-paced software development
environment. A successful CI/CD pipeline can significantly accelerate the development-to-deployment
cycle, guaranteeing that end users always have access to the newest features and fixes.
In this multi-part series, we will delve deep into CI/CD using AWS. Being one of the most commonly used
cloud providers, AWS has emerged as a frontrunner in providing scalable and trustworthy options for setting
up CI/CD pipelines. From its DevOps-focused tools we will use AWS CodeBuild, CodeDeploy and
CodePipeline. These three tools work seamlessly with each other to offer an end-to-end solution.
Throughout this CI/CD guide, I'll walk you through each step, from setting up an EC2 instance
to attaching the appropriate service roles for our pipeline. Stay tuned as we unpack each stage across
this four-part blog series.
1. Pipeline Flow
1. First, we have to understand the workflow of the whole pipeline we are about to set up. For this
tutorial, the source of the application will be GitHub (we could use AWS CodeCommit as well; it is
similar to GitHub, just owned by AWS). Our goal is to containerize the Flask application
(build a Flask application Docker image) and run/deploy this container on an EC2 instance. Wait,
let me elaborate:
2. First, let us assume we already have our Flask application in some GitHub repository
with a proper Dockerfile in it.
3. Then we use AWS CodeBuild to build the Docker image of our Flask application and push the
image to an ECR repository.
4. Next, AWS CodeDeploy will pull the Flask image from ECR onto an EC2 instance and run
the Flask container, which deploys the Flask application.
5. Finally, we will use AWS CodePipeline to automate the build and deployment process (steps 3 and 4)
whenever any new change is pushed to the GitHub repository.
2. Flask Application
You can find the code for this web application here. You can either fork the repository to follow along
with this tutorial and get started right away (then jump to the 3rd section, Configuring ECR), or, if you
want to create this from scratch and push the web app to a GitHub repository (we will use beginner-friendly
ways to push the code), follow the steps mentioned next:
1. Create a directory (Flask_app_CICD) and go into the Flask_app_CICD directory. Then create a Python
virtual environment venv and activate this environment.
mkdir Flask_app_CICD
cd Flask_app_CICD
python -m venv venv
source venv/bin/activate
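2. Install Flask inside the activated environment (this step is implied by the app we are about to write; Flask is the only package we need for now):
pip install flask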
3. Create a Python script app.py inside the Flask_app_CICD directory and edit it using any editor
(I will use nano).
touch app.py
nano app.py
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def homepage():
    return render_template('homepage.html')

if __name__ == '__main__':
    # bind to 0.0.0.0 so the app is also reachable when it later runs inside a Docker container
    app.run(host='0.0.0.0', port=5000, debug=True)
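4. Create a templates folder inside Flask_app_CICD and a homepage.html file in it; Flask looks for templates in this folder by default (the commands below are a minimal way to do it):
mkdir templates
touch templates/homepage.html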
Edit this homepage.html file using any editor of your choice and add the following lines.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<h1>CI/CD pipeline</h1>
<p>Let's build a CI/CD pipeline, to automate the deployment of this flask
application.</p>
</body>
</html>
5. On the terminal, execute the app.py script using python app.py. We can see the Flask application
running in the terminal.
Open http://127.0.0.1:5000/ in your browser; you will see the content of homepage.html being
rendered.
6. Create a requirements.txt file. For this app, Flask is the only dependency we need to list.
touch requirements.txt
nano requirements.txt
flask
7. Create a Dockerfile in the root of Flask_app_CICD to containerize the Flask application (the same file
reappears in Part 2, where we tweak its base image):
touch Dockerfile
nano Dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY . /app
RUN python -m pip install -r requirements.txt
EXPOSE 5000
# assumed start command for the container; adjust if your entrypoint differs
CMD ["python", "app.py"]
8. Create a .gitignore file. Edit it and exclude files/folders such as venv from getting pushed
to the GitHub repository.
touch .gitignore
nano .gitignore
venv/
__pycache__/
.env
9. Initialize a git repository locally. Add files in staging area and commit the files.
git init
git add .
git commit -m 'Initial changes'
10. Create a new, empty repository on GitHub (on github.com, click New repository and give it a name).
11. Link the local directory (repo) Flask_app_CICD to the GitHub repo created in the previous step.
git remote add origin https://github.com/yourusername/reponame.git
12. Time to push the changes to Github
git push -u origin master
Please note: if you face any issue while pushing the code, skip steps 11 and 12 and head
over to the repository you created on GitHub in step 10. You will find an Add file button; click on it and
choose the Upload files option. Then upload all the required files and folders and click commit.
You can also skip this 'Flask Web App' setup section completely: just fork the repository and jump over
to the next section, Configuring ECR, to get started quickly on the road to automating the Flask deployment.
3. Configuring ECR
1. Go to the AWS console and type ecr in the search bar. Click on Elastic Container Registry in the search results.
AWS Console
2. Click on the Get Started button.
AWS ECR
3. A page will open up where we have to select the required options to create an ECR repository.
In General settings, I am keeping the visibility as Private and choosing flask_image as the repository name.
4. Then scroll down; we can see Image scan settings and Encryption settings. We will leave these at
their default options and click on Create repository.
5. And our ECR repository is ready.
ECR Repository
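If you prefer the command line, an equivalent way to create the repository is the AWS CLI (assuming your credentials and region are already configured):
aws ecr create-repository --repository-name flask_image --region us-east-1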
4. Create an EC2 instance
We will create one EC2 instance. This instance will be used as the platform where the Flask application
will be deployed. Follow these steps to create an EC2 instance:
1. Go to AWS console, and type ec2 in the search bar. Click on the first search result.
AWS console
2. Click on Launch Instance option.
3. Put any name of your choice in the Name and Tags field. We will name this ec2 instance as
‘FlaskCICD’.
4. Scroll down to choose an OS image. We will use Ubuntu for this tutorial.
5. Scroll down more to choose the instance type. By default, t2.micro is selected, which is Free tier
eligible, so there is nothing to do here.
Then, in Key pair, we will click on the Create new key pair option. A pop-up window will appear, which allows
us to create a new key pair. I have created a key pair with the name cicd_key. This step is optional; if you
already have a key pair generated, you can use that instead of creating a new one.
6. In Network settings, we choose an already existing security group. Otherwise, we can
create one by clicking on Create security group and adding SSH (port 22), HTTP (port
80) and HTTPS (port 443) as inbound rules.
7. Scroll further; in Configure storage and Advanced details there is no need to make any changes, the right
options are already selected by default. Just click on Launch instance, and you are done. On the EC2
instance dashboard you can see the instance in the running state.
EC2 Dashboard
5. Attach an IAM role to the EC2 instance
1. On the EC2 dashboard, select the FlaskCICD instance, then go to Actions → Security → Modify IAM role.
2. A new page will open up; there, click on Create new IAM role. This opens the IAM console in a new tab.
3-5. In that tab, create a role for the EC2 service and attach the permissions the instance needs: at a
minimum, read access to ECR so it can pull the Flask image, and read access to S3 so the CodeDeploy agent
can download deployment bundles (for example, the managed policies AmazonEC2ContainerRegistryReadOnly and
AmazonS3ReadOnlyAccess). Name the role EC2_codedeploy_role. You can see a message telling you that the role
has been created.
6. Now go back to the parent tab and click the button with the refresh icon. Then open the dropdown
menu; you will see the EC2_codedeploy_role that we created just now as an option. Select it, and
finally click Update IAM role. And voila! You are done.
Congratulations on reaching this far. It may seem intimidating right now, and it is very much possible that
all these steps won't make much sense to you yet. However, I request you to hang on; if needed, go
through all the steps again, please don't hesitate.
In the next part of this multi-part blog series, i.e. Part 2, we will create a CodeBuild project. This
CodeBuild project will create the Docker image of our Flask application and push the image to ECR.
PART 2
Let's now transition to the next segment of this multi-part blog series. In this second part, we'll establish
a Continuous Integration (CI) pipeline, or more specifically, configure an AWS CodeBuild project. This
new service setup will be responsible for taking the latest code of our Flask application from GitHub,
building a Docker image for that application, and pushing this image to the ECR repository flask_image that
we created in Part 1 (section 3).
6. Create a CodeBuild project
1. Go to the AWS console and type codebuild in the search bar. Click on CodeBuild in the search results.
AWS Console
2. Then click on Create build project.
3. A page will open up where we will configure the build project. In Project configuration, give
a project name of your choice; I will choose FlaskAppBuild to identify this CodeBuild project,
and write a small description for the sake of convenience (although it is optional).
4. On scrolling down, in Source section we will select Github as our source provider since our
source code exists there. Now we have to authorize AWS CodeBuild to access our Github
account. For this click on Connect to GitHub.
A small popup window will appear, asking you to approve the access request. On that window, there will be
a green button; click on that green Authorize aws-codesuite button. Then you will be prompted for your
GitHub password. Enter your password and proceed. (I forgot to take a screenshot of this step, pardon
me.)
After this, you will be redirected to the AWS console window asking you to confirm connecting CodeBuild to
GitHub. Click Confirm, as shown below.
Once the connection between GitHub and CodeBuild is successful, we can see the Source section with a field
asking for the repository URL. Enter your repository URL, then in Source version put the branch
name where the source code exists. For me it is the v0.1 branch (for you it can be main, master, dev,
or any branch of your choice).
5. Scroll down further. In the Environment section choose Managed image, in Operating system
choose Ubuntu, Runtime(s) as Standard, Image as aws/codebuild/standard:7.0, and the rest of the
fields as shown in the image below. One important point: we will
tick the Privileged option because we have to build a Docker image.
In Service role, we don't have to do anything: by default, New service role is selected and a default role
name is specified. If you remember from Part 1 of this tutorial series, at the end we created a
service role for the EC2 instance, giving it a specific set of permissions. Here, a service role will be
created by CodeBuild itself (we don't have to create it explicitly), giving a specific set of permissions
to this build project.
6. Scroll down more. In the Buildspec section we have nothing to do, since by default the Use a buildspec
file option is selected. We will understand the role of this buildspec later in this same tutorial.
Finally, click Create build project.
We see a message indicating the project was created successfully. Hmm, Nice! You might feel the urge
to kick off the build process immediately by clicking the Start build button, but hold on - there are just
a few more things to address before that.
7. Define build specification
7. a. Buildspec
As mentioned earlier, let us understand the role of the buildspec in this build project. For
a moment, think about this: how will our build project know what it has to do with the source code
provided to it?
We definitely need to provide a set of instructions to our build project to ensure we get the
expected output. This is where the buildspec comes into the picture. Any CodeBuild project requires
a buildspec.yml file containing all the steps that it has to perform in the build stage. This
buildspec.yml file must be located in the root directory of our source code.
Let us head back to our Flask_app_CICD directory. Create a buildspec.yml file and edit it to add
all the required instructions as given below:
version: 0.1

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
      - echo Logged in to Amazon ECR successfully
  build:
    commands:
      - echo Building Docker Image for Flask Application
      - docker build -t flask_image .
      - echo Image built successfully
  post_build:
    commands:
      - echo Tagging Flask Docker image
      - docker tag flask_image:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/flask_image:latest
      - docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/flask_image:latest
      - echo Flask image pushed to ECR
After adding buildspec.yml, the root of our repository looks like this:
Flask_app_CICD/
├── templates/
│   └── homepage.html
├── .gitignore
├── app.py
├── Dockerfile
├── requirements.txt
└── buildspec.yml
Commit and push buildspec.yml to the repository, then click Start build. The first build may fail with an
ECR authorization error. This error suggests that the IAM role/service role attached to this CodeBuild
project doesn't have permission to access our ECR repository (flask_image). What's the solution to get rid
of this error? The answer is simple: we need to attach the AmazonEC2ContainerRegistryFullAccess permission
to the service role used by CodeBuild. If we read the error message carefully, the service role attached to
this CodeBuild project is codebuild-FlaskAppBuild-service-role.
Follow these steps to attach the AmazonEC2ContainerRegistryFullAccess permission to codebuild-
FlaskAppBuild-service-role:
1. Navigate to the IAM (Identity and Access Management) dashboard by searching for IAM in the
search bar of the AWS console.
2. In the navigation pane, click Roles.
3. In the search bar, enter codebuild-FlaskAppBuild-service-role to find the role, then click on it.
4. On the right side, click Add permissions, then Attach policies.
5. Search for the required permission AmazonEC2ContainerRegistryFullAccess and select it. Then
click Add permissions.
And this is enough to solve the above error.
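If you prefer the CLI over the console, the same policy can be attached with a single command (assuming your local credentials have the necessary IAM permissions):
aws iam attach-role-policy \
    --role-name codebuild-FlaskAppBuild-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess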
On retrying the build, you might run into a second error:
toomanyrequests: You have reached your pull rate limit. You may increase the limit
by authenticating and upgrading: https://www.docker.com/increase-rate-limit
If we recall, the very first line we added to our Dockerfile is FROM python:3.10-slim. That means
we are pulling a Python base image from Docker Hub to create our application container. So, sometimes
Docker will stop us from pulling base images due to Docker Hub's rate-limiting policy. Now, you may
ask: I haven't requested to pull images from Docker Hub that many times, so why am I encountering this
error? This is because Docker Hub treats us as anonymous users or free-tier authenticated
users, and for users classified under these categories Docker imposes a limited number of pulls within a
certain time frame.
The workaround for this issue is to use the AWS public ECR for pulling the Python base image. We will
replace FROM python:3.10-slim with FROM public.ecr.aws/docker/library/python:3.10-slim:
FROM public.ecr.aws/docker/library/python:3.10-slim
WORKDIR /app
COPY . /app
RUN python -m pip install -r requirements.txt
EXPOSE 5000
# assumed start command for the container; adjust if your entrypoint differs
CMD ["python", "app.py"]
Push the code after making changes, and you are good to go.
After solving these errors (if you get any), restart the build process. This time our build should succeed.
If we open the ECR (flask_image) repository we will see our Flask application image listed.
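You can also confirm from the CLI that the image landed in the repository (assuming configured credentials):
aws ecr list-images --repository-name flask_image --region us-east-1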
Hurray! After some tweaks and efforts, we’ve successfully built our flask image. And pushed it to the
respective ECR repository using AWS CodeBuild.
In the next segment of our four-part tutorial series, we’ll deploy this flask container by configuring a
CodeDeploy project.
PART 3
In the previous parts (Part 1 and Part 2), we learned how to set up an EC2 instance, attach the required
permissions to the service role, and configure the CodeBuild project to dockerize our Flask application.
In this part, we will establish a CodeDeploy project and deploy our Flask application on the EC2 instance
created in Part 1.
In case anyone has missed the first and second parts of this tutorial series:
Without further ado, let's start right away so that we can see our Flask application getting deployed by
these AWS services.
8. Create a service role for CodeDeploy
CodeDeploy needs its own service role to act on our behalf. Roughly, the first steps are: in the IAM
dashboard click Roles, then Create role, choose AWS service as the trusted entity, select CodeDeploy as the
use case, and click Next.
4. In Add permissions, there is already one policy added (AWSCodeDeployRole). Nothing to do here;
just click Next.
5. Assign a name of your choice to this role; I have used CodeDeploy-Role. The description field
was pre-filled, so I am not changing anything there. Finally, click Create role.
• In the root directory of our Flask application, we will create a YAML file appspec.yml. Edit it and
add the following instructions:
version: 0.0
os: linux
hooks:
  ApplicationStop:
    - location: scripts/application_stop.sh
      timeout: 300
      runas: root
  BeforeInstall:
    - location: scripts/before_install.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/application_start.sh
      timeout: 300
      runas: root
1. version defines the version of the AppSpec file. Currently, 0.0 is the only allowed value.
2. os defines the operating system of the instance. Values can be 'linux' or 'windows'.
3. The hooks section lets us specify scripts to be run at various stages of a deployment. Under
hooks we have various lifecycle events, such as ApplicationStop, ApplicationStart, etc. For
each of these lifecycle events, the location of a shell script is defined; whenever a particular
event happens, the shell scripts associated with that lifecycle event are executed. These lifecycle
events run in a particular order. For example, here we have four lifecycle events
(ApplicationStop, ApplicationStart, BeforeInstall, AfterInstall), and their order of execution is
ApplicationStop → BeforeInstall → AfterInstall → ApplicationStart. To understand more about
this application specification file, refer to this and this.
• In the root directory, create one folder named scripts. This scripts directory contains
all the shell scripts needed by the lifecycle events defined in the
appspec.yml file.
Commands to create scripts directory and four deployment scripts:
mkdir scripts
cd scripts
touch application_stop.sh
touch before_install.sh
touch after_install.sh
touch application_start.sh
We will edit each of the scripts one by one, in the order of their execution.
1. First, application_stop.sh will be executed during deployment. This shell script contains commands
responsible for stopping running Docker containers, removing those containers, and finally removing the
pre-existing Docker image. Now, one may ask: why do we need these stopping and removing steps? The answer
is: think of them as a way of clearing out older versions of the Docker image and containers (if they
exist), making space for the new, updated image and containers.
#!/bin/bash
echo "This Script is used to stop already running docker container, remove them and remove the image as well"
sudo docker stop $(sudo docker ps -q)
sudo docker rm $(sudo docker ps -a -q)
sudo docker rmi $(sudo docker images -q)

Here, sudo docker stop $(sudo docker ps -q) stops all the running containers, sudo docker rm $(sudo docker ps -a -q) deletes all the containers, and sudo docker rmi $(sudo docker images -q) deletes all the images.
2. Second, before_install.sh will be executed during deployment. This shell script checks whether Docker
and the AWS CLI are installed on the EC2 instance; if they are not, it installs them. A minimal sketch of
its body is shown below.
#!/bin/bash
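# NOTE: the following body is a minimal sketch (an assumption, adapt as needed):
# install Docker and the AWS CLI on Ubuntu only if they are not already present.
if ! command -v docker &> /dev/null; then
    sudo apt-get update -y
    sudo apt-get install -y docker.io
    sudo systemctl enable --now docker
fi
if ! command -v aws &> /dev/null; then
    sudo apt-get install -y unzip curl
    curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip
    unzip -q -o /tmp/awscliv2.zip -d /tmp
    sudo /tmp/aws/install
fi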
3. Third, after_install.sh will be executed during deployment. This shell script is responsible for
logging in to ECR and pulling the Flask image from the respective repository; a minimal sketch of its body
is shown below.
#!/bin/bash
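# NOTE: the following body is a minimal sketch (an assumption, adapt as needed):
# log in to Amazon ECR and pull the Flask image. The account id (123456789) and
# region (us-east-1) are the same placeholders used in buildspec.yml.
aws ecr get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
sudo docker pull 123456789.dkr.ecr.us-east-1.amazonaws.com/flask_image:latest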
4. Fourth, application_start.sh will be executed during deployment. This script runs the Docker
container on port 5000 in detached mode; a minimal sketch is shown below.
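A minimal sketch of application_start.sh, reusing the placeholder account id and region from buildspec.yml:
#!/bin/bash
# Run the Flask container in detached mode, mapping host port 5000 to container port 5000.
sudo docker run -d -p 5000:5000 123456789.dkr.ecr.us-east-1.amazonaws.com/flask_image:latest
Once the scripts are ready, commit and push everything to GitHub: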
git add .
git commit -m "Added appspec.yml file and corresponding deployment scripts"
git push
Note: after adding appspec.yml and the scripts folder, the root of the repository contains these new files
alongside the ones listed earlier (the exact order of files and folders may look a little different for you).
As we are using an Ubuntu server, we will follow this guide to install the CodeDeploy agent package. Here
are the steps for the installation:
1. If your EC2 instance is stopped, you have to start the instance by clicking on Start instance.
2. Beside Instance state button, there is a Connect option. Click on it.
3. Then in Connect to instance page, just hit the Connect option again.
4. A terminal will open up in your browser. Whatever command we run in this terminal is executed
on our EC2 server. So, to install the CodeDeploy agent, execute the following commands
one by one:
sudo apt update
sudo apt install ruby-full
sudo apt install wget
cd /home/ubuntu
5. Now we have to enter the command below, with slight changes. We have to replace bucket-name
with the name of the Amazon S3 bucket that contains the CodeDeploy Resource Kit files
for the region where our EC2 instance lies, and region-identifier with the identifier
for that region.
wget https://bucket-name.s3.region-identifier.amazonaws.com/latest/install
Let's say our EC2 instance lies in the us-east-1 region; then the above command becomes:
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
By using the command sudo service codedeploy-agent status, we can see the CodeDeploy agent running
successfully in the background, ready to do its magic soon.
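Next, we need Nginx on the instance to act as a reverse proxy in front of the Flask container. On Ubuntu the installation is typically:
sudo apt update
sudo apt install nginx -y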
Once the Nginx installation is complete, open any browser on your system, type http://your-ip-address-
of-ec2-instance and press Enter. You can see the Nginx landing page.
Now we will make one small change in the Nginx config. Enter the following command in the terminal to
open the Nginx config file using the vim editor:
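(Assuming the default site configuration on Ubuntu, the file to edit is usually /etc/nginx/sites-available/default.)
sudo vim /etc/nginx/sites-available/default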
Then add the following lines in the config file, and save the file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
Here, in the location directive, we have added a proxy_pass to localhost port 5000. This forwards all
incoming requests to the Flask server running on port 5000. In an upcoming section, the Flask container
will be running on port 5000, which is why we have added http://127.0.0.1:5000 as the proxy target in the
Nginx config.
Restart the Nginx service using sudo systemctl restart nginx.service, and with sudo systemctl status
nginx.service we can confirm the Nginx service is running.
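As a quick sanity check, you can confirm from the instance itself that Nginx is answering on port 80 (it will return a 502 Bad Gateway until the Flask container is running behind it):
curl -I http://localhost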
Now we have enough prerequisites to get going with the setup of the CodeDeploy project.
4. While creating an Application under CodeDeploy, we don't have to do much. We have to give our
application a name; I am using FlaskAppDeployment. In Compute platform we have to select the
EC2/On-premises option, and then click on Create application.
We can see the message "Application created".
3. Scroll down, then select the service role CodeDeploy-Role that we created in section 8 of this
tutorial. Select In-place for the Deployment type.
In the Environment configuration, select Amazon EC2 instances. If you remember from Part 1, we gave the
name FlaskCICD to our EC2 instance. That name is helpful now to identify which EC2 instance should be
used by this deployment group.
4. Further scroll down, in Agent configuration we will keep the default setting.
In Deployment settings, again we will keep the default setting. However, in Load balancer, Enable load
balancing is by default selected. We will deselect this option. No need to make any changes in Advanced
option. Just click Create deployment group.
And we are done with a deployment group. Under this deployment group we will create a deployment
next in 11.3.
5. We can see a deployment is in progress. Wait a few minutes for the deployment to complete.
We can see the status as Succeeded. If you noticed, the deployment ID is different when the
deployment process started and when it ended. This is because the initial deployment failed
(I forgot to start my EC2 instance), so I had to restart the deployment process by clicking on
Retry deployment.
Now open any browser on your system, type http://our-ip-address-of-ec2-instance and press Enter.
Voila! Our web app is online.
With CodeDeploy successfully set up, congratulations on finishing the third part of this comprehensive
tutorial. Remember to take a break and go for a walk; absorbing this much information can be
overwhelming for newcomers. Just relax, you're doing great! I'll catch you in the upcoming and final
part of this tutorial series.
PART 4
Welcome to Part 4 of this four-part blog series. It feels great to be this close to achieving end-to-end
automation; very soon our CI/CD pipeline is going to be up and running, making our lives easier. We
have now seen how to set up the build step using AWS CodeBuild and the deployment step using AWS
CodeDeploy separately. In this final post, we will set up a CodePipeline to bring the CodeBuild and
CodeDeploy steps together into one automated workflow.
Before establishing the CodePipeline project, there's one more prerequisite to tackle. I know, I know,
yet another prerequisite before we can get to the fun stuff. But I swear, this is the last one. Then we
will jump straight into the CodePipeline project, the exciting part.
The prerequisite is to add an artifacts section to our buildspec.yml, so that the appspec.yml file and the
deployment scripts are passed along to the deploy stage:

artifacts:
  files:
    - 'scripts/**/*'
    - 'appspec.yml'

In the files section of artifacts we specify that everything inside the scripts folder and the appspec.yml
file should be included in the build output. This is how our final buildspec.yml file looks:
version: 0.1

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
      - echo Logged in to Amazon ECR successfully
  build:
    commands:
      - echo Building Docker Image for Flask Application
      - docker build -t flask_image .
      - echo Image built successfully
  post_build:
    commands:
      - echo Tagging Flask Docker image
      - docker tag flask_image:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/flask_image:latest
      - docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/flask_image:latest
      - echo Flask image pushed to ECR

artifacts:
  files:
    - 'scripts/**/*'
    - 'appspec.yml'
Lastly, push all these changes to the GitHub repository. Next, we will move on to the CodePipeline configuration.
To create the pipeline, search for CodePipeline in the AWS console, open the service, and click Create pipeline. Then:
3. In Step 1 (Choose pipeline settings), we will give a name to this CI/CD pipeline. I am
choosing FlaskCICDPipeline as the name of the pipeline. In Service role, select the New service
role option, and tick Allow AWS CodePipeline to create a service role so it can be used with this
new pipeline if it is not selected by default.
Scroll down, in Advanced settings keep both Artifact store and Encryption key at default option. Then
click Next.
4. In Step 2 (Add source stage), select GitHub (Version 2) as the source provider, then click on Connect
to GitHub.
Follow the prompts to connect to GitHub: give the connection a name so you can identify it later, click
Connect to GitHub, and proceed.
After the connection is established, select your repository and branch. Then, in Step 3 (Add build stage),
choose AWS CodeBuild as the build provider, select the FlaskAppBuild project we created in Part 2, and
click Next.
7. In Step 4 (Add deploy stage), choose AWS CodeDeploy as the Deploy provider and select the region
where your deployment application exists.
In Application name, select the application we created in Part 3, and in Deployment group,
choose the deployment group created under that application. Now, click Next.
8. Here we can see a summary of all the options selected in the previous steps, such as the source
repository, the build project, and the deployment settings. Scroll down and click on Create pipeline.
9. After creating the pipeline, just wait and watch all the stages complete one by one. Our
pipeline pulls the required code base in the Source stage, the Docker image is built in the Build stage,
and lastly the Flask application is deployed in the Deploy stage.
PS: I have taken the screenshots after two days
10. Time for the result: open your browser and enter http://our-ip-address-of-ec2-instance.
Our Flask application is up and running. Yippee!
To test the automation end to end, let's modify templates/homepage.html and add a new line:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<h1>CI/CD pipeline</h1>
<p>Let's build a CI/CD pipeline, to automate the deployment of this flask
application.</p>
<h2>We have successfully build our first CI/CD pipeline. Hurray!!!</h2>
</body>
</html>
We have added the line <h2>We have successfully build our first CI/CD pipeline. Hurray!!!</h2>. Now save
and push the code to GitHub. On the FlaskCICDPipeline page, we can see the pipeline being triggered.
When the pipeline execution completes, go to http://our-ip-address-of-ec2-instance; we can see the
web app is up with the changes.
This four-part tutorial series now comes to an end. We have learnt how to set up a CI/CD pipeline using
AWS CodeBuild, CodeDeploy and CodePipeline. We have seen how to equip this CI/CD pipeline with all
the prerequisites, such as creating an EC2 instance, attaching appropriate permissions (creating IAM roles),
and creating deployment scripts. These are valuable skills in their own right, apart from
building this full-fledged CI/CD pipeline. Congratulations to all of us for coming this far. It feels really
amazing to automate something, doesn't it?
Feel free to go back and revisit all the steps whenever required. For now, ending this tutorial series with
a quote from Albert Einstein.