Section 5 - Jenkins-AWS
Backing Up a MySQL Database to Amazon S3
Goal: Create a Jenkins job that automates MySQL database backups and
uploads them to Amazon S3.
Technologies Used:
• Jenkins
• MySQL
• Amazon S3
• Docker
Steps:
1. Modify docker-compose.yml (a sketch of the resulting service follows these steps):
◦ Add a new service, e.g., db_host. This name will be used to access
the container from other containers (Jenkins and remote host).
◦ Specify the container name, e.g., db.
◦ Define the image: mysql:5.7. (Found by searching “mysql docker”
on Google).
◦ Set the MySQL root password using the environment variable
MYSQL_ROOT_PASSWORD. Example: MYSQL_ROOT_PASSWORD: "1234"
◦ Create a volume to persist data:
▪ Create a local directory (e.g., db_data).
▪ Mount the directory to /var/lib/mysql inside the container.
This ensures data persists even if the container is deleted.
Example: ./db_data:/var/lib/mysql
◦ Add the container to the existing network (e.g., net) to allow
communication with other containers.
2. Start the Container:
◦ Run docker-compose up -d. This recreates the containers based
on the updated docker-compose.yml file. Docker will download
the mysql:5.7 image if it doesn’t exist locally.
3. Verify MySQL is Running:
◦ Check container status using docker ps.
◦ Check logs to confirm MySQL is ready: docker logs -f db. Look for the "ready for connections" message.
4. Connect to MySQL:
◦ Access the container’s shell: docker exec -ti db bash (where db
is the container name).
◦ Log in to MySQL: mysql -u root -p. Enter the password defined
in the docker-compose.yml file (e.g., “1234”).
◦ Test the connection: show databases;
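Putting step 1 together, the db_host service definition might look like the following sketch (the other services and the net network are assumed to already exist in docker-compose.yml):
services:
  db_host:                        # service name other containers use to reach MySQL
    container_name: db
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: "1234"
    volumes:
      - ./db_data:/var/lib/mysql  # persists data even if the container is deleted
    networks:
      - net                       # shared network with Jenkins and remote_host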
Installing the MySQL Client and AWS CLI on remote_host
Explanation:
By following these steps, the remote_host container will have both the
MySQL client and AWS CLI installed, ready for use in the backup and upload
process.
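As a sketch, assuming a CentOS-based remote_host image with Python 3 available, the installation could be done with:
yum install -y mysql       # MySQL client tools, including mysqldump
pip3 install awscli        # AWS CLI used for the S3 upload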
Creating a Test Database
Key Commands:
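A representative command sequence, assuming an illustrative database testdb and table info:
mysql -u root -p                 # log in with the root password ("1234")
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE info (name VARCHAR(20), lastname VARCHAR(20), age INT);
INSERT INTO info VALUES ('john', 'doe', 30);
SELECT * FROM info;              # verify the sample data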
This process sets up a MySQL database with a sample table and data, ready
for the backup process to be demonstrated in the following lectures.
Creating an S3 Bucket
This lecture demonstrates creating an Amazon S3 bucket using the AWS
Management Console.
Prerequisites:
• An AWS account (a credit card is required for signup, but you won’t be
charged unless you use services beyond the free tier).
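The lecture uses the console wizard; a rough CLI equivalent (the bucket name my-jenkins-backups is illustrative, and bucket names must be globally unique) would be:
aws s3 mb s3://my-jenkins-backups --region us-east-1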
Creating an IAM User
Steps:
1. Navigate to IAM:
◦ In the AWS Management Console, click on “Services”.
◦ Search for “IAM” and click on it.
2. Add a New User:
◦ Click on “Users” in the left-hand navigation menu.
◦ Click the “Add users” button.
3. Configure User Details:
◦ Enter a user name (e.g., backup-user).
◦ Select the “Programmatic access” checkbox. This grants the user
access keys for use with the AWS CLI or SDKs.
◦ Click “Next: Permissions”.
4. Assign Permissions:
◦ Choose “Attach existing policies directly”.
◦ Search for “S3”.
◦ For simplicity in this tutorial, select “AmazonS3FullAccess” (Note:
In a production environment, grant only the necessary
permissions, like access to the specific S3 bucket).
◦ Click “Next: Tags” (you can skip tagging for this tutorial).
◦ Click “Next: Review”.
5. Create User:
◦ Review the user details and permissions.
◦ Click “Create user”.
6. Download Credentials:
◦ Important: Download the .csv file containing the Access Key ID
and Secret Access Key. These credentials are displayed only once.
◦ The downloaded file contains the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY, essential for authenticating with AWS
programmatically.
This process creates an IAM user with the necessary permissions to interact
with S3. The downloaded credentials will be used in later lectures to
authenticate the backup upload process.
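As a quick sanity check, assuming the keys from the .csv file and the illustrative bucket above, the AWS CLI can be pointed at them via environment variables:
export AWS_ACCESS_KEY_ID="<Access key ID from the .csv>"
export AWS_SECRET_ACCESS_KEY="<Secret access key from the .csv>"
aws s3 ls s3://my-jenkins-backups    # should succeed with AmazonS3FullAccess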
Writing the Backup Script
Steps:
◦ Create a file named script.sh in the /tmp directory (or any other preferred location): vi /tmp/script.sh
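A minimal first version of the script, consistent with the description below (variable names are illustrative):
#!/bin/bash
db_host=$1
db_password=$2
db_name=$3
date=$(date +%H%M%S)             # timestamp gives each backup a unique filename
backup="db-$date.sql"
mysqldump -u root -h "$db_host" -p"$db_password" "$db_name" > /tmp/"$backup"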
This script automates the backup process by taking the database credentials
as parameters. The use of the date command ensures unique filenames for
each backup, which is essential for proper backup management. This sets
the stage for integrating this script into a Jenkins job for fully automated
backups.
Uploading the Backup to S3
Steps:
◦ Include the AWS secret access key and the S3 bucket name as
parameters to the script. Modify the script as follows:
#!/bin/bash
db_host=$1
db_password=$2
db_name=$3
aws_secret_access_key=$4
bucket_name=$5
date=$(date +%H%M%S)
backup="db-$date.sql"
# Placeholder: replace with your real access key ID (see the note below).
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="$aws_secret_access_key"
mysqldump -u root -h "$db_host" -p"$db_password" "$db_name" > /tmp/"$backup" && \
echo "Uploading your db backup: $backup" && \
aws s3 cp /tmp/"$backup" "s3://$bucket_name/$backup"
This improved script automates both the backup and upload process, taking
all necessary parameters as input. This prepares the script for integration
with Jenkins, where these parameters can be dynamically provided.
Remember to replace the placeholder access key ID with your actual credential. Storing the access key ID directly in the script is not good security practice and should be avoided in production; use Jenkins credentials management instead.
Steps:
1. Navigate to Credentials:
◦ In the Jenkins dashboard, click on “Credentials” in the left-hand
navigation menu.
2. Access Global Credentials:
◦ Click on “(global)” or “System” under “Credentials”. Then click on
“Global credentials (unrestricted)”.
3. Add MySQL Password:
◦ Click “Add Credentials”.
◦ Select “Secret text” from the “Kind” dropdown menu.
◦ Enter an ID, such as MYSQL_PASSWORD. This ID will be used to
reference the credential later.
◦ In the “Secret” field, paste the MySQL root password (e.g.,
“1234”).
◦ Click “OK”.
4. Add AWS Secret Key:
◦ Repeat the “Add Credentials” process.
◦ Select “Secret text” from the “Kind” dropdown menu.
◦ Enter an ID, such as AWS_SECRET_KEY.
◦ Paste the AWS Secret Access Key from the downloaded credentials
file into the “Secret” field.
◦ Click “OK”.
5. Verify Credentials:
◦ You should now see both credentials listed in the global credentials list. Opening a credential via the "Update" action shows its value only in masked form, preventing accidental exposure of the secrets.
Configuring the Jenkins Job
This configuration creates a parameterized Jenkins job. The job executes the
backup script on the remote host, passing the necessary parameters,
including the sensitive information retrieved from Jenkins Credentials. Using
parameters allows for flexibility in the backup process, while using
credentials ensures the security of sensitive information. The job is now
ready for testing.
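A sketch of the job's "Execute shell" build step, assuming string parameters named DB_HOST, DB_NAME, and BUCKET_NAME and the two secret-text credentials bound to the variables MYSQL_PASSWORD and AWS_SECRET_KEY:
ssh remote_host "/tmp/script.sh $DB_HOST $MYSQL_PASSWORD $DB_NAME $AWS_SECRET_KEY $BUCKET_NAME"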
Persisting the Backup Script with a Docker Volume
Problem: When a Docker container is deleted, all files within it are lost
unless they are stored on a persistent volume. If the backup script is inside
the container and the container is deleted, the Jenkins job will fail.
Solution: Use a Docker volume to mount the script from the host machine
into the container.
Steps:
remote_host:
# ... other configurations
volumes:
- ./aws-s3.sh:/tmp/script.sh
Explanation:
By using a Docker volume, the backup script is now persistent, making the
automated backup process more robust and reliable. It also demonstrates a
best practice for managing files within Docker containers that need to be
preserved across container lifecycles.
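After adding the volume, the container must be recreated for the mount to take effect:
docker-compose up -d remote_host   # recreates only the remote_host service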
Running the Job with Different Parameters
Key Takeaway:
This demonstrates the flexibility and reusability of parameterized Jenkins
jobs. Without modifying the job’s core configuration, you can easily change
the database and target S3 bucket by simply modifying the build
parameters. This allows for dynamic configuration and supports various
backup scenarios without needing to create separate jobs for each.