
Lecture Notes: Jenkins Job for MySQL Backup to Amazon S3

Goal: Create a Jenkins job that automates MySQL database backups and
uploads them to Amazon S3.

Process:

1. Backup: Jenkins job initiates a MySQL database backup.
2. Upload: Jenkins job transfers the backup file over the internet.
3. Storage: Backup file is stored in an Amazon S3 bucket.

Technologies Used:

• Jenkins
• MySQL
• Amazon S3
• Docker

Creating a MySQL Container with Docker


This lecture outlines the steps to create a MySQL container using Docker,
alongside existing Jenkins and remote host containers.

Steps:

1. Modify docker-compose.yml:
◦ Add a new service, e.g., db_host. This name will be used to access
the container from other containers (Jenkins and remote host).
◦ Specify the container name, e.g., db.
◦ Define the image: mysql:5.7. (Found by searching “mysql docker”
on Google).
◦ Set the MySQL root password using the environment variable
MYSQL_ROOT_PASSWORD. Example: MYSQL_ROOT_PASSWORD: "1234"
◦ Create a volume to persist data:
▪ Create a local directory (e.g., db_data).
▪ Mount the directory to /var/lib/mysql inside the container.
This ensures data persists even if the container is deleted.
Example: ./db_data:/var/lib/mysql
◦ Add the container to the existing network (e.g., net) to allow
communication with other containers. (A complete sketch of this
service appears after this list.)
2. Start the Container:
◦ Run docker-compose up -d. This recreates the containers based
on the updated docker-compose.yml file. Docker will download
the mysql:5.7 image if it doesn’t exist locally.
3. Verify MySQL is Running:
◦ Check container status using docker ps.
◦ Check logs to confirm MySQL is ready: docker logs -f db. Look
for a message like “ready for connections.”
4. Connect to MySQL:
◦ Access the container’s shell: docker exec -ti db bash (where db
is the container name).
◦ Log in to MySQL: mysql -u root -p. Enter the password defined
in the docker-compose.yml file (e.g., “1234”).
◦ Test the connection: show databases;
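
Putting step 1 together, here is a minimal sketch of the new service (an
illustration only; names such as db_host, db, net, and ./db_data follow the
examples above, and appending assumes services: is the last top-level block
in your docker-compose.yml):

```bash
# Illustrative sketch: append a minimal db_host service to docker-compose.yml.
cat >> docker-compose.yml <<'EOF'
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: "1234"
    volumes:
      - ./db_data:/var/lib/mysql
    networks:
      - net
EOF
```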

Key Concepts:

• Docker Image: A snapshot containing the necessary configuration for
a service.
• Docker Tag: Represents a specific version of a Docker image.
• Docker Volume: Provides persistent storage for container data,
independent of the container’s lifecycle.
• Docker Network: Allows communication between containers.
• Environment Variables: Used to configure software within a
container.

Installing MySQL and AWS CLI in a Docker Container

This lecture demonstrates how to install the MySQL client and AWS CLI
inside a Docker container based on CentOS 7.

Steps:

1. Modify the Dockerfile:

◦ Open the Dockerfile for the remote_host container (located in the
centos7 directory).

◦ Add the following line to install the MySQL client:

RUN yum -y install mysql

◦ Add the following lines to install the AWS CLI (note: the unzip step
requires the unzip package, so install it as well if your base image
lacks it, e.g., RUN yum -y install unzip):

RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
    unzip awscliv2.zip && \
    ./aws/install -i /usr/local/aws-cli -b /usr/local/bin

2. Build the Updated Image:


◦ Navigate to the directory containing the docker-compose.yml file.
◦ Run docker-compose build to rebuild the image with the new
instructions. This will download the necessary packages and
create a new image.
3. Recreate the Container:
◦ Run docker-compose up -d to recreate the remote_host
container. Docker automatically uses the newly built image,
incorporating the installed MySQL client and AWS CLI.
4. Verify Installation:
◦ Access the container’s shell: docker exec -ti remote_host
bash.
◦ Check for MySQL client installation: mysql. You should see a
connection error from the client (rather than “command not
found”), confirming the client binary is installed even though no
database connection is configured yet.
◦ Check for AWS CLI installation: aws. You should see the AWS CLI
help options, confirming successful installation.
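
Steps 2–4 condensed into a quick smoke test from the host (using the
standard --version flags, which avoid the connection error entirely):

```bash
docker-compose build   # rebuild the image with the new Dockerfile instructions
docker-compose up -d   # recreate remote_host from the rebuilt image
# Verify both tools are on the PATH inside the container.
docker exec -ti remote_host bash -c 'mysql --version && aws --version'
```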

Explanation:

• yum -y install mysql: Installs the MySQL client on CentOS 7. The -y
flag automatically confirms any prompts.
• curl ... -o "awscliv2.zip": Downloads the AWS CLI zip file.
• unzip awscliv2.zip: Extracts the downloaded zip file.
• ./aws/install ...: Installs the AWS CLI to specified directories. The
-i flag specifies the installation directory and -b flag specifies the
directory for binaries.

By following these steps, the remote_host container will have both the
MySQL client and AWS CLI installed, ready for use in the backup and upload
process.

Creating a MySQL Database and Table within Docker

This lecture explains how to create a MySQL database and table within a
Docker container, preparing it for backup in subsequent lectures.

Steps:

1. Verify Container Status:


◦ Use docker ps to confirm the remote_host and db_host (MySQL)
containers are running.
2. Connect to the MySQL Container:
◦ From the remote_host container, connect to the MySQL server
running in the db_host container: mysql -u root -h db_host -p
◦ Enter the MySQL root password when prompted (e.g., “1234”).
3. Create the Database:
◦ Create a new database: CREATE DATABASE testdb;
4. Use the Database:
◦ Select the newly created database: USE testdb;
5. Create a Table:
◦ Create a table named “info”: CREATE TABLE info
(name VARCHAR(20), lastname VARCHAR(20), age INT(2));
6. Verify Table Creation:
◦ List the tables in the database: SHOW TABLES;
◦ Display the table structure: DESCRIBE info;
7. Populate the Table with Data:
◦ Insert a row of data: INSERT INTO info (name,
lastname, age) VALUES ('John', 'Doe', 30);
8. Verify Data Insertion:
◦ Display the data in the table: SELECT * FROM info;
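
For convenience, the whole setup can also run as one non-interactive session
from remote_host (a sketch; passing the password inline is acceptable in this
lab, though mysql will warn that it can be insecure):

```bash
# -p'1234' passes the root password inline (note: no space after -p).
mysql -u root -h db_host -p'1234' <<'SQL'
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE info (name VARCHAR(20), lastname VARCHAR(20), age INT(2));
INSERT INTO info (name, lastname, age) VALUES ('John', 'Doe', 30);
SELECT * FROM info;
SQL
```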

Key Commands:

• mysql -u root -h db_host -p: Connects to the MySQL server. -u
specifies the username, -h the hostname, and -p prompts for the
password.
• CREATE DATABASE testdb: Creates a new database.
• USE testdb: Selects the specified database.
• CREATE TABLE ...: Defines the table structure, including column
names and data types.
• SHOW TABLES: Lists tables in the current database.
• DESCRIBE info: Shows the structure of the info table.
• INSERT INTO ...: Adds data to the table.
• SELECT * FROM info: Retrieves all data from the table.

This process sets up a MySQL database with a sample table and data, ready
for the backup process to be demonstrated in the following lectures.

Creating an S3 Bucket
This lecture demonstrates creating an Amazon S3 bucket using the AWS
Management Console.

Prerequisites:

• An AWS account (a credit card is required for signup, but you won’t be
charged unless you use services beyond the free tier).

Steps:

1. Access the AWS Console:


◦ Search for “AWS console” on Google and click the relevant link.
◦ Sign in to your AWS account.
2. Navigate to S3:
◦ Click on the “Services” tab.
◦ Search or scroll down to find “S3” and click on it.
3. Create a Bucket:
◦ Click on the “Create bucket” button.
◦ Enter a unique bucket name (e.g., jenkins-mysql-backup). Bucket
names must be globally unique across all AWS accounts.
◦ Click “Create”.
4. (Optional) Upload Files:
◦ To manually upload files, click the “Upload” button within the
newly created bucket.
◦ Select the files you wish to upload.
◦ Click “Upload”.
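
For reference, the same bucket could also be created from the AWS CLI once
credentials are configured (the IAM user and keys are only set up in the next
lecture, so this is optional here):

```bash
# "mb" = make bucket; names must be globally unique across all AWS accounts.
aws s3 mb s3://jenkins-mysql-backup
```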

Creating an IAM User for AWS Authentication


This lecture demonstrates how to create an IAM (Identity and Access
Management) user with programmatic access to S3, allowing authentication
for uploading backups.

Steps:

1. Navigate to IAM:
◦ In the AWS Management Console, click on “Services”.
◦ Search for “IAM” and click on it.
2. Add a New User:
◦ Click on “Users” in the left-hand navigation menu.
◦ Click the “Add users” button.
3. Configure User Details:
◦ Enter a user name (e.g., backup-user).
◦ Select the “Programmatic access” checkbox. This grants the user
access keys for use with the AWS CLI or SDKs.
◦ Click “Next: Permissions”.
4. Assign Permissions:
◦ Choose “Attach existing policies directly”.
◦ Search for “S3”.
◦ For simplicity in this tutorial, select “AmazonS3FullAccess” (Note:
In a production environment, grant only the necessary
permissions, like access to the specific S3 bucket).
◦ Click “Next: Tags” (you can skip tagging for this tutorial).
◦ Click “Next: Review”.
5. Create User:
◦ Review the user details and permissions.
◦ Click “Create user”.
6. Download Credentials:
◦ Important: Download the .csv file containing the Access Key ID
and Secret Access Key. These credentials are displayed only once.
◦ The downloaded file contains the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY, essential for authenticating with AWS
programmatically.

Important Security Note: The “AmazonS3FullAccess” policy grants full
access to all S3 buckets. In a real-world scenario, it’s crucial to create a
more restrictive policy that grants access only to the specific bucket needed
for backups. Treat access keys like passwords and store them securely.

This process creates an IAM user with the necessary permissions to interact
with S3. The downloaded credentials will be used in later lectures to
authenticate the backup upload process.

Manually Backing Up MySQL and Uploading to S3

This lecture shows how to manually back up a MySQL database within a
Docker container and upload the backup file to an S3 bucket using the AWS
CLI.

Steps:

1. Access the Remote Host Container:


◦ Use docker ps to verify containers are running.
◦ Access the remote_host container’s shell: docker exec -ti
remote_host bash.
2. Create the MySQL Backup:
◦ Use mysqldump to create the backup: mysqldump -u root -h db_host -p testdb > /tmp/db.sql
▪ -u root: Connects as the root user.
▪ -h db_host: Specifies the hostname of the MySQL server
(the db_host container).
▪ -p: Prompts for the password.
▪ testdb: The name of the database to back up.
▪ > /tmp/db.sql: Redirects the output to a file named db.sql
in the /tmp directory.
◦ Enter the MySQL root password when prompted.
3. Configure AWS Credentials:
◦ Set the AWS access key ID and secret access key as environment
variables:
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
▪ Replace "YOUR_ACCESS_KEY_ID" and
"YOUR_SECRET_ACCESS_KEY" with the actual values from the
downloaded credentials file.
4. Upload the Backup to S3:
◦ Use the aws s3 cp command to upload the file:
aws s3 cp /tmp/db.sql s3://jenkins-mysql-backup/db.sql
▪ /tmp/db.sql: The path to the local backup file.
▪ s3://jenkins-mysql-backup/db.sql: The S3 bucket and the
desired filename for the uploaded backup. Replace
jenkins-mysql-backup with your bucket name.
5. Verify Upload:
◦ In the AWS Management Console, navigate to S3 and your bucket
to confirm the file has been uploaded.
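
Gathered together, the manual run inside remote_host looks like this
(placeholders as above):

```bash
# 1. Dump the database (prompts for the MySQL root password).
mysqldump -u root -h db_host -p testdb > /tmp/db.sql

# 2. Authenticate the AWS CLI for this shell session (values from the .csv).
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"

# 3. Upload the dump to the bucket.
aws s3 cp /tmp/db.sql s3://jenkins-mysql-backup/db.sql
```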

This manual process demonstrates the individual steps involved in backing
up the database and uploading it to S3, which will be automated using
Jenkins in subsequent lectures. It also highlights the importance of setting
AWS credentials securely using environment variables.

Automating MySQL Backup with a Bash Script

This lecture explains how to create a bash script within the remote_host
Docker container to automate the MySQL database backup process. The
script is designed to be reusable and generate backups with timestamps in
their filenames.

Steps:

1. Access the Remote Host Container:


◦ docker exec -ti remote_host bash
2. Create the Backup Script:

◦ Create a file named script.sh in the /tmp directory (or any other
preferred location): vi /tmp/script.sh

◦ Add the following content to the script:

```bash
#!/bin/bash

db_host=$1
db_password=$2
db_name=$3

date=$(date +%H%M%S)
mysqldump -u root -h "$db_host" -p"$db_password" "$db_name" > /tmp/db-"$date".sql
```

▪ #!/bin/bash: Specifies the interpreter for the script.


▪ db_host=$1, db_password=$2, db_name=$3: Assigns the first,
second, and third command-line arguments to the respective
variables.
▪ date=$(date +%H%M%S): Captures the current hour, minute,
and second into the date variable.
▪ mysqldump ...: Executes the mysqldump command with the
provided variables, redirecting the output to a file named db-
HHMMSS.sql in the /tmp directory.
3. Make the Script Executable:
◦ chmod +x /tmp/script.sh
4. Run the Script:
◦ Execute the script, providing the database host, password, and
name as arguments: /tmp/script.sh db_host 1234 testdb
5. Verify Backup Creation:
◦ List the files in the /tmp directory: ls /tmp. You should see files
named db-HHMMSS.sql containing the backups with timestamps.

This script automates the backup process by taking the database credentials
as parameters. The use of the date command ensures unique filenames for
each backup, which is essential for proper backup management. This sets
the stage for integrating this script into a Jenkins job for fully automated
backups.
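
One optional refinement, not from the lecture: an HHMMSS-only timestamp
repeats across days, so backups taken at the same time on different days
would overwrite each other locally. A hedged sketch of a hardened variant:

```bash
#!/bin/bash
# Hypothetical hardened variant of /tmp/script.sh (illustration only).
set -euo pipefail   # exit on errors, unset variables, and failed pipelines

db_host=$1
db_password=$2
db_name=$3

# Full date plus time keeps filenames unique across days, not just within one.
stamp=$(date +%Y%m%d-%H%M%S)
mysqldump -u root -h "$db_host" -p"$db_password" "$db_name" > /tmp/db-"$stamp".sql
```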

Automating MySQL Backup Upload to S3


This lecture builds upon the previous one, adding functionality to the bash
script to automatically upload the MySQL backup to an S3 bucket.

Steps:

1. Modify the Backup Script:


◦ Open the /tmp/script.sh file for editing.
2. Add AWS Credentials and Bucket Name as Parameters:

◦ Include the AWS secret access key and the S3 bucket name as
parameters to the script. Modify the script as follows:

```bash
#!/bin/bash

db_host=$1
db_password=$2
db_name=$3
aws_secret_access_key=$4
bucket_name=$5

export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID" # Replace with your actual key ID
export AWS_SECRET_ACCESS_KEY="$aws_secret_access_key"

date=$(date +%H%M%S)
backup="db-$date.sql"
mysqldump -u root -h "$db_host" -p"$db_password" "$db_name" > /tmp/"$backup" && \
echo "Uploading your db backup: $backup" && \
aws s3 cp /tmp/"$backup" "s3://$bucket_name/$backup"
```

▪ The AWS_ACCESS_KEY_ID is hardcoded in the script (not
recommended for production; store it securely outside the
script, e.g., using Jenkins credentials).
▪ The AWS_SECRET_ACCESS_KEY is passed as a parameter for
enhanced security.
▪ The bucket_name is now a parameter, allowing flexibility to
upload to different buckets.
▪ The backup variable stores the generated filename for better
readability and reusability within the script.
▪ The && operator chains commands, ensuring the upload runs
only after successful backup creation.
3. Run the Modified Script:
◦ Execute the script, providing all five parameters:
/tmp/script.sh db_host 1234 testdb "YOUR_SECRET_ACCESS_KEY" your-bucket-name
▪ Replace "YOUR_SECRET_ACCESS_KEY" with your actual secret
access key and your-bucket-name with your S3 bucket name.

This improved script automates both the backup and upload process, taking
all necessary parameters as input. This prepares the script for integration
with Jenkins, where these parameters can be dynamically provided.
Remember to replace the placeholder access key ID and secret access key
with your actual credentials. Storing the access key ID directly in the script
is not a good security practice and should be avoided in production. Use
Jenkins credentials management instead.

Managing Sensitive Information in Jenkins


This lecture demonstrates how to securely store sensitive information, like
passwords and access keys, using Jenkins Credentials.

Steps:

1. Navigate to Credentials:
◦ In the Jenkins dashboard, click on “Credentials” in the left-hand
navigation menu.
2. Access Global Credentials:
◦ Click on “(global)” or “System” under “Credentials”. Then click on
“Global credentials (unrestricted)”.
3. Add MySQL Password:
◦ Click “Add Credentials”.
◦ Select “Secret text” from the “Kind” dropdown menu.
◦ Enter an ID, such as MYSQL_PASSWORD. This ID will be used to
reference the credential later.
◦ In the “Secret” field, paste the MySQL root password (e.g.,
“1234”).
◦ Click “OK”.
4. Add AWS Secret Key:
◦ Repeat the “Add Credentials” process.
◦ Select “Secret text” from the “Kind” dropdown menu.
◦ Enter an ID, such as AWS_SECRET_KEY.
◦ Paste the AWS Secret Access Key from the downloaded credentials
file into the “Secret” field.
◦ Click “OK”.
5. Verify Credentials:
◦ You should now see both credentials listed in the global
credentials list. Opening a credential via the “Update” action
shows its value only in masked form, preventing accidental
exposure of the secrets.

Key Concepts:

• Credentials: Securely stores sensitive information within Jenkins.


• Secret Text: A credential type suitable for storing passwords and
access keys.
• ID: A unique identifier used to reference the stored credential within
Jenkins jobs.

By storing sensitive information in Jenkins Credentials, you avoid
hardcoding secrets in scripts and configuration files, significantly improving
security. The masking of secret values also helps prevent accidental
exposure during credential management. These stored credentials will be
used in the next lecture to configure the Jenkins job.

Creating the Jenkins Job for Automated Backup and Upload

This lecture demonstrates creating a Jenkins job to automate the entire
MySQL backup and upload to S3 process.

Steps:

1. Create a New Freestyle Project:


◦ Click “New Item” on the Jenkins dashboard.
◦ Enter a name for the job (e.g., “Backup to AWS”).
◦ Select “Freestyle project” and click “OK”.
2. Configure Parameters:
◦ In the “General” section, check “This project is parameterized”.
◦ Add the following string parameters:
▪ MYSQL_HOST (default value: db_host)
▪ DATABASE_NAME (default value: testdb)
▪ AWS_BUCKET_NAME (default value: jenkins-mysql-backup)
3. Configure Build Environment:
◦ In the “Build Environment” section, check “Use secret text(s) or
file(s)”.
◦ Add the following secret texts using the “Add” dropdown:
▪ MYSQL_PASSWORD (select the previously created
MYSQL_PASSWORD credential)
▪ AWS_SECRET_KEY (select the previously created
AWS_SECRET_KEY credential)
4. Configure Build Step:
◦ In the “Build” section, click “Add build step”.
◦ Select “Execute shell script on remote host using ssh”.
◦ Choose the configured SSH remote host (if you have multiple).
◦ In the “Command” field, enter the following, using the parameter
and credential IDs defined above:
/tmp/script.sh ${MYSQL_HOST} ${MYSQL_PASSWORD} ${DATABASE_NAME} ${AWS_SECRET_KEY} ${AWS_BUCKET_NAME}
5. Save the Job:
◦ Click “Save”.
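
With the default parameter values and the stored credentials, the command
the remote host effectively runs is the following (secret shown as a
placeholder):

```bash
/tmp/script.sh db_host 1234 testdb "YOUR_SECRET_ACCESS_KEY" jenkins-mysql-backup
```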

This configuration creates a parameterized Jenkins job. The job executes the
backup script on the remote host, passing the necessary parameters,
including the sensitive information retrieved from Jenkins Credentials. Using
parameters allows for flexibility in the backup process, while using
credentials ensures the security of sensitive information. The job is now
ready for testing.

Testing the Jenkins Job and Demonstrating Automated Backup

This lecture tests the created Jenkins job to verify that it successfully backs
up the MySQL database and uploads the backup file to the S3 bucket.

Steps:

1. Build with Parameters:


◦ Click on the created Jenkins job (e.g., “Backup to AWS”).
◦ Click “Build with Parameters”.
2. Review Parameters (Optional):
◦ The parameterized job allows modification of the default values for
the database host, database name, and bucket name before
running the job. For this test, the default values are used.
3. Trigger the Build:
◦ Click the “Build” button.
4. View Console Output:
◦ Once the build completes, click on the build number.
◦ Click “Console Output”.
◦ Verify the following:
▪ Sensitive information (passwords, keys) is masked with
asterisks.
▪ The mysqldump command executes successfully.
▪ The “Uploading your db backup” message is displayed.
▪ The aws s3 cp command executes successfully, showing the
uploaded filename.
5. Verify Backup in S3:
◦ Copy the uploaded filename from the console output.
◦ Go to the AWS Management Console, navigate to S3, and then to
the specified bucket.
◦ Refresh the bucket contents. You should see the newly uploaded
backup file with the correct timestamp in its name.
6. Test with Multiple Builds:
◦ Run the job multiple times. Each build should create a new backup
file in S3 with a unique timestamp.

This testing process confirms the Jenkins job’s functionality and
demonstrates the automated backup and upload process. The masking of
sensitive information in the console output highlights the secure handling of
credentials. The unique filenames for each backup demonstrate the
robustness of the solution for managing multiple backups. This concludes
the process of automating MySQL backups to S3 using Jenkins.

Persisting the Backup Script using Docker Volumes

This lecture demonstrates how to persist the backup script within the
remote_host container using Docker volumes, ensuring the script survives
container deletions.

Problem: When a Docker container is deleted, all files within it are lost
unless they are stored on a persistent volume. If the backup script is inside
the container and the container is deleted, the Jenkins job will fail.

Solution: Use a Docker volume to mount the script from the host machine
into the container.

Steps:

1. Copy the Script to the Host:

◦ Copy the script.sh file from the container to your host machine,
e.g.: docker cp remote_host:/tmp/script.sh ./aws-s3.sh
2. Modify the docker-compose.yml File:

◦ Add a volumes section to the remote_host service definition:

remote_host:
# ... other configurations
volumes:
- ./aws-s3.sh:/tmp/script.sh

▪ ./aws-s3.sh: The path to the script on your host machine
(replace aws-s3.sh with the actual filename if different).
▪ /tmp/script.sh: The path where the script should be
mounted inside the container. Ensure this matches the path
used in your Jenkins job configuration.
3. Recreate the Container:
◦ docker-compose up -d
4. Set Execute Permissions:
◦ Ensure the script has execute permissions on the host machine:
chmod +x aws-s3.sh. This change will be reflected inside the
container because of the volume mount.
5. Test the Jenkins Job:
◦ Run the Jenkins job again. It should now execute successfully, as
the script is available at the expected location within the
container.
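
The host-side commands for these steps, gathered in order (filenames as in
the lecture):

```bash
docker cp remote_host:/tmp/script.sh ./aws-s3.sh  # step 1: copy the script to the host
# step 2: add "- ./aws-s3.sh:/tmp/script.sh" under remote_host volumes in docker-compose.yml
docker-compose up -d                              # step 3: recreate the container with the bind mount
chmod +x aws-s3.sh                                # step 4: the executable bit carries into the container
```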

Explanation:

The volumes configuration in docker-compose.yml creates a bind mount,
linking the local file on the host machine to a path within the container.
Any changes made to the file on the host will be reflected in the container,
and vice versa (if permissions allow). This ensures that the script persists
even if the container is deleted and recreated. (The -f option with docker rm
forcefully removes a container even if it’s running, which is how deletion
can be demonstrated here.)

By using a Docker volume, the backup script is now persistent, making the
automated backup process more robust and reliable. It also demonstrates a
best practice for managing files within Docker containers that need to be
preserved across container lifecycles.

Demonstrating the Power of Jenkins Parameters

This lecture showcases the flexibility of Jenkins parameterized jobs by
demonstrating how to back up a different database and upload it to a
different S3 bucket without modifying the job configuration itself.

Steps:

1. Create a New Database:

◦ Connect to the MySQL server from the remote_host container:
mysql -u root -h db_host -p
◦ Create a new database: CREATE DATABASE test2;
2. Create a New S3 Bucket:
◦ In the AWS Management Console, navigate to S3.
◦ Create a new bucket (e.g., mysql-jenkins-2).
3. Modify Build Parameters:
◦ In Jenkins, go to the “Backup to AWS” job.
◦ Click “Build with Parameters”.
◦ Change the following parameter values:
▪ DATABASE_NAME: test2
▪ AWS_BUCKET_NAME: mysql-jenkins-2
4. Trigger the Build:
◦ Click “Build”.
5. Verify Backup in S3:
◦ After the build completes successfully, go to the AWS Management
Console and verify that the mysql-jenkins-2 bucket contains the
new backup file.

Key Takeaway:
This demonstrates the flexibility and reusability of parameterized Jenkins
jobs. Without modifying the job’s core configuration, you can easily change
the database and target S3 bucket by simply modifying the build
parameters. This allows for dynamic configuration and supports various
backup scenarios without needing to create separate jobs for each.
