Dev Ops
Dev Ops
1 git init: Initializes a new Git repository in ii) Switching the branches
the current directory. iii) Merging the branches of git
Syntax: git init 1 Creating the branches:
2 git clone: Copies an existing Git repository In Git, branches are essentially pointers to a
from a remote source to your local machine. specific commit. When you create a branch,
Syntax: git clone <repository_URL> you're essentially creating a new pointer
3 git add: Adds changes in the working that references the same commit as the
directory to the staging area. branch you're currently on. This allows you
Syntax: git add <file_name> (to add specific to diverge from the main line of
file(s)) development and work on new features,
4 git add . (to add all changes) bug fixes, or experiments without affecting
git status: Shows the current state of the the main branch (usually master or main).
working directory and staging area. To create a branch, you typically use the git
Syntax: git status branch command followed by the name of
5 git commit: Records changes to the the new branch you want to create. For
repository. example:
Syntax: git commit -m "commit_message" git branch new-feature
6 git push: Uploads local repository content 2 Switching the branches: Once you've
to a remote repository. created a branch, you can switch to it using What is distributes vcs:- What is cvcs: :-In centralized source
Syntax: git push <remote_name> the git checkout command followed by the In distributed version control most of the mechanism or control, there is a server and a client. The
<branch_name> name of the branch you want to switch to. model applies the same as centralized. The only major server is the master repository that
7 git pull: Fetches changes from a remote For example: git checkout new-feature difference you will find here is, instead of one single contains all of the versions of the code.
repository and merges them into the current This command switches you to the new- repository which is the server, here every single developer To work on any project, firstly user or
branch. feature branch, allowing you to start or client has their own server and they will have a copy of client needs to get the code from the
Syntax: git pull <remote_name> working on it. Alternatively, if you're using the entire history or version of the code and all of its master repository or server. So the client
<branch_name> Git version 2.23 or later, you can use the git branches in their local server or machine. Basically, every communicates with the server and pulls
8 git fetch: Retrieves changes from a remote switch command: git switch new-feature client or user can work locally and disconnected which is all the code or current version of the
repository without merging them into your 3 Merging the branches: After you've made more convenient than centralized source control and that’s code from the server to their local
local branch. changes and commits on a branch (e.g., why it is called distributed. machine. In other terms we can say, you
Syntax: git fetch <remote_name> new-feature), you may want to incorporate You don’t need to rely on the central server, you can clone need to take an update from the master
9 git merge: Combines changes from one those changes back into another branch the entire history or copy of the code to your hard drive. So repository and then you get the local
branch into another branch. (e.g., master or main). This process is called when you start working on a project, you clone the code copy of the code in your system. So once
Syntax: git merge <branch_name> merging. To merge a branch into another from the master repository in your own hard drive, then you get the latest version of the code,
10 git branch: Lists, creates, or deletes branch, you typically switch to the branch you get the code from your own repository to make you start making your own changes in
branches. you want to merge into (e.g., master), and changes and after doing changes, you commit your the code and after that, you simply need
Syntax: git branch (to list branches) then use the git merge command followed changes to your local repository and at this point, your to commit those changes straight
git branch <branch_name> (to create a new by the name of the branch you want to local repository will have ‘change sets‘ but it is still forward into the master repository.
branch) merge in (e.g., new-feature). For example: disconnected with the master repository (master Committing a change simply means
git branch -d <branch_name> (to delete a git checkout master repository will have different ‘sets of changes‘ from each merging your own code into the master
branch) git merge new-feature and every individual developer’s repository), so to repository or making a new version of
11 git checkout: Switches between branches communicate with it, you issue a request to the master the source code. So everything is
or restores files in the working directory to repository and push your local repository code to the centralized in this model.
their state at a specific commit. Explain Maven build life cycle master repository. Getting the new change from a There will be just one repository and that
Syntax: git checkout <branch_name> (to 1 validate :-Validates whether project is repository is called “pulling” and merging your local will contain all the history or version of
switch branches) correct and all necessary information is repository’s ‘set of changes’ is called “pushing“. It doesn’t the code and different branches of the
git checkout -- <file_name> (to discard available to complete the build process. follow the way of communicating or merging the code code. So the basic workflow involves in
changes in a file) 2 initialize :-Initializes build state, for straight forward to the master repository after making the centralized source control is getting
12 git stash: Temporarily shelves changes example set properties. changes. Firstly you commit all the changes in your own the latest version of the code from a
that are not ready to be committed. 3 generate-sources :- Generate any source server or repository and then the ‘set of changes’ will central repository that will contain other
Syntax: git stash (to stash changes) code to be included in compilation phase. merge to the master repository. people’s code as well, making your own
git stash apply (to apply stashed changes) 4 process-sources :- Process the source
changes in the code, and then
13 git rebase: Reapplies commits from one code, for example, filter any value.
committing or merging those changes
branch to another. 5 generate-resources:- Generate resources
into the central repository.
Syntax: git rebase <branch_name> to be included in the package.
14 git tag: Creates, lists, deletes, or verifies 6 process-resources:- Copy and process the
tags in the repository. resources into the destination directory,
Syntax: git tag <tag_name> (to create a tag) ready for packaging phase.
git tag (to list tags) 7 compile :- Compile the source code of the
git tag -d <tag_name> (to delete a tag) project. Importance of linux in devops
Explain Maven Dependancy
15 git log: Displays the commit history of the 8 process-classes :- Post-process the 1 Open Source Philosophy: Linux is open
Maven dependency management is a critical aspect of
repository. generated files from compilation, for source, which means it is highly customizable
Java-based development, particularly when using the
Syntax: git log example to do bytecode and can be tailored to fit specific needs. This
Maven build automation tool. In Maven, dependencies
enhancement/optimization on Java classes. openness aligns well with the principles of
are external libraries or modules required by a project
9 generate-test-sources :- Generate any test DevOps, promoting collaboration,
explain 5 linux command with commond to compile, build, and run successfully. Maven simplifies
source code to be included in compilation transparency, and flexibility in development
with proper syntax and sutaible example dependency management by automatically resolving,
phase. and operations processes.
1 ls (list): Syntax: ls [options] [directory] downloading, and including these dependencies in the
10 process-test-sources :-Process the test 2 Compatibility: Linux distributions are widely
Description: Lists the files and directories in project's classpath.
source code, for example, filter any values. used across various platforms and
the specified directory. 1 Maven helps to avoid such requirements to discover
11 test-compile:-Compile the test source architectures, making them compatible with a
Example: ls all the libraries required. Maven does so by reading
code into the test destination directory. wide range of software and tools commonly
2 cd (change directory): project files (pom.xml) of dependencies, figure out their
12 process-test-classes:-Process the used in DevOps practices. This compatibility
Syntax: cd [directory] dependencies and so on.
generated files from test code file ensures smooth integration of different tools
Description: Changes the current working 2 We only need to define direct dependency in each
compilation. and technologies within the DevOps
directory. project pom. Maven handles the rest automatically.
13 test:-Run tests using a suitable unit toolchain.
Ex:-cd Documents 3 With transitive dependencies, the graph of included
testing framework (Junit is one). 3 Stability and Reliability: Linux distributions
3 mkdir (make directory): libraries can quickly grow to a large extent. Cases can
14 prepare-package:-Perform any are known for their stability and reliability,
Syntax: mkdir [options] directory_name arise when there are duplicate libraries. Maven
operations necessary to prepare a package which are essential qualities in DevOps
Description: Creates a new directory with the provides few features to control extent of transitive
before the actual packaging. environments where continuous integration,
specified name. dependencies
15 package:- Take the compiled code and deployment, and delivery are critical. The
Ex:-mkdir new_directory 1 Dependency Mediation: Determines what version of
package it in its distributable format, such as robustness of Linux systems ensures
4 rm (remove): a dependency is to be used when multiple versions of
a JAR, WAR, or EAR file. consistent performance and minimizes
Syntax: rm [options] file_name an artifact are encountered. In the dependency tree, if
16 pre-integration-test:-Perform actions downtime, contributing to overall system
Description: Deletes files or directories. two dependency versions are at the same depth, then
required before integration tests are reliability.
Ex:-rm old_file.txt the first declared dependency will be used.
executed. For example, setting up the 4 Resource Efficiency: Linux is known for its
5 cp (copy): 2 Dependency Management: Directly specify the
required environment. efficient use of system resources, making it
Syntax: cp [options] source_file versions of artifacts to be used when they are
17 integration-test:- Process and deploy the suitable for both small-scale development
destination_file encountered in transitive dependencies. For example,
package if necessary into an environment environments and large-scale production
Description: Copies files or directories. Project C' can include B as a dependency in its
where integration tests can be run. systems. This efficiency is particularly
Ex:- cp file1.txt file2.txt dependency management section and directiy control
18 post-integration-test:-Perform actions valuable in cloud-based DevOps
6 pwd (Print Working Directory): which version of B is to be used when it is ever
required after integration tests have been environments where resource optimization is
Syntax: pwd referenced.
executed. For example, cleaning up the essential for cost-effectiveness and scalability.
Example: pwd 3. Dependency Scope: Includes dependencies as per
environment. 5 Command-Line Tools and Automation:
Explanation: This command prints the current the current stage of the build.
19 verify:-Run any check-ups to verify the Linux provides a rich set of command-line
working directory. 4 Excluded Dependencies: Any transitive dependency
package is valid and meets quality criteria. tools and utilities that facilitate automation,
7 touch (Create Empty File): can be excluded using "exclusion" element. As example,
20 install :- Install the package into the local scripting, and configuration management
Syntax: touch [filename] A depends upon B and B depends upon C, then A can
repository, which can be used as a tasks essential in DevOps practices. Tools like
Example: touch index.html mark C as excluded.
dependency in other projects locally. Bash, awk, sed, and grep, along with powerful
Explanation: This command creates a new 5. Optional Dependencies: Any transitive dependency
21 deploy :- Copies the final package to the scripting languages like Python and Perl,
empty file with the specified name. can be marked as optional using "optional" element.
remote repository for sharing with other empower DevOps professionals to automate
For example, A depends upon B and B depends upon C.
developers and projects. repetitive tasks, streamline workflows, and
Now B marked Cas optional. Then A will not use C.
maintain infrastructure as code.
Containerization and Orchestration: Linux
serves as the foundation for containerization
What is DevOps Explain a details
The DevOps is a combination of two words, one
Explain Devops life cycle Agile Devops Write
mangement
notes on configuration
:-Configuration
1 Plan: Professionals determine the
is software Development, and second is 1 It started in the year 1 It started in the year
commercial need and gather end-user management is a systems engineering
Operations. This allows a single team to handle 2001. 2007.
opinions throughout this level. In this step, process for establishing consistency of a
the entire application lifecycle, from 2 Invented by John 2 Invented by John product's attributes throughout its life. In
they design a project plan to optimize business
development to testing, deployment, Kern, and Martin Allspaw and Paul the technology world, configuration
impact and produce the intended result.
and operations. DevOps helps you to reduce the Fowler. Hammond at Flickr, and management is an IT management
2 Code – During this point, the code is being
disconnection between software developers, the Phoenix Project by process that tracks individual
developed. To simplify the design process, the
2
quality assurance (QA) engineers, and system developer team employs lifecycle DevOps
Gene Kim. configuration items of an IT system.2
administrators.DevOps promotes collaboration tools and extensions like Git that assist them 3 Agile is a method for 3 It is not related to Software configuration management is a
between Development and Operations team to in preventing safety problems and bad coding creating software. software development. system engineering process that tracks
deploy code to production faster in an standards. 3 Build – After programmers have Instead, the software that and monitors changes to software
automated & repeatable way. completed their tasks, they use tools such as is used by DevOps is pre- systems configuration metadata.
DevOps helps to increase organization speed to Maven and Gradle to submit the code to the built, dependable, and Configuration management is a key part
deliver applications and services. It also allows common code source. 4 Test – To assure simple to deploy. of a DevOps lifecycle. DevOps
organizations to serve their customers better software integrity, the product is first 4 An advancement and 4 Typically a conclusion of configuration is the evolution and
and compete more strongly in the market. delivered to the test platform to execute administration administration related to automation of the systems
DevOps can also be defined as a sequence of various sorts of screening such as user approach. designing. administration role, bringing automation
development and IT operations with better acceptability testing, safety testing, integration 5 The agile handle 5 DevOps centers on to infrastructure management and
communication and collaboration. checking, speed testing, and so on, utilizing centers on consistent steady testing and deployment
DevOps has become one of the most valuable tools such as JUnit, Selenium, etc. 5 Release – changes. conveyance. 1 Delivering Infrastructure as Code:-
business disciplines for enterprises or At this point, the build is prepared to be 6 A few of the finest 6 DevOps to have a few Infrastructure as code (laC) in DevOps
organizations. With the help of DevOps, quality, deployed in the operational environment. The steps embraced in best hones that ease the configuration management means
and speed of the application delivery has DevOps department prepares updates or Agile are recorded method – 1. Focus on managing and provisioning infrastructure
improved to a great extent. sends several versions to production when the underneath – 1. specialized greatness. 2. through code rather than manually
DevOps is nothing but a practice or build satisfies all checks based on the Backlog Building Collaborate configuring it through a web interface or
methodology of making "Developers" and organizational demands. 6 Deploy – At this 2.Sprint advancement straightforwardly with command-line interface. This approach
"Operations" folks work together. DevOps point, Infrastructure-as-Code assists in clients and join their allows for infrastructure to be version
represents a change in the IT culture with a creating the operational infrastructure and feedback. controlled, tested, and automated, just
complete focus on rapid IT service delivery subsequently publishes the build using various 7 Agile relates 7This may be like application code.
through the adoption of agile practices in the DevOps lifecycle tools. 7 Operate – This generally to the way accomplished through Example: Assume you wish to build a
context of a system-oriented approach. version is now convenient for users to utilize. advancement is carried preparation. DevOps web application on a cloud provider such
DevOps is all about the integration of the With tools including Chef, the management of, any division of the centers more on program as AWS. Traditionally, you would have to
operations and development process. department take care of server configuration company can be spry arrangement choosing the establish a virtual server manually, install
Organizations that have adopted DevOps and deployment at this point. 8 Monitor – in its hones. foremost dependable and the operating system, configure the
noticed a 22% improvement in software quality The DevOps workflow is observed at this level most secure course. network settings, and install the
and a 17% improvement in application depending on data gathered from consumer 8 A big team for your 8 It demands application-specific software
deployment frequency and achieve a 22% hike behavior, application efficiency, and other project is not required. collaboration among 2. Delivering Configuration as Code:*
in customer satisfaction. 19% of revenue hikes sources. The ability to observe the complete different teams for the Delivering configuration as code (CaC) is
as a result of the successful DevOps surroundings aids teams in identifying completion of work. a way to manage and set up your system
implementation.’ bottlenecks affecting the production and and apps using code instead of manual
operations teams’ performance. adjustments. This makes it possible to
9 It does not focus on 9It focusses on keep track of changes, test them, and
the automation. automation. automate the process just like your app's
10 It focusses on 10 It centers on the code. This method lets you manage
automation.It is complete engineering settings the same way as, the app,
suitable for managing process. making it easier to deploy or roll back
complex projects in changes quickly, see what settings are
any department. being used, and ensure everything is
Explain SDLC Models, Lean, ITIL and Agile in details. running correctly.
SDLC (Software Development Life Cycle) Models: SDLC refers to the
Architecture of linux
process of planning, creating, testing, and deploying software Explain continuous intrgration and
1. Kernel:- The kernel is one of the core Write a RPM AND YUM Installtion
applications. There are several models within SDLC, each with its own deployment
section of an operating system. It is details
approach to managing these stages. Some common SDLC models Continuous Integration: Consider an
responsible for each of the major actions of RPM (Red Hat Package Manager):
include: 1 Waterfall Model: Sequential approach where progress application that has its code stored
the Linux OS. This operating system 1 RPM is a package management system
flows in one direction (like a waterfall) through the phases of in a Git repository in GitLab.
contains distinct types of modules and used primarily by Linux distributions
conception, initiation, analysis, design, construction, testing, Developers push code changes every
cooperates with underlying hardware based on the Red Hat package manager
deployment, and maintenance. 2 Agile Model: Iterative and day, multiple times a day. For every
directly. The kernel facilitates required format.
incremental approach that focuses on collaboration, customer push to the repository, you can
abstraction for hiding details of low-level 2 It is used for installing, updating, and
feedback, and small, rapid releases. It emphasizes adaptability and create a set of scripts to build and
hardware or application programs to the managing software packages on Red Hat
flexibility in response to changing requirements. test your application automatically.
system. There are some of the important Enterprise Linux (RHEL), CentOS, Fedora,
Spiral Model: Combines elements of both waterfall and iterative These scripts help decrease the
kernel types which are mentioned below: and other related distributions.
development models. It involves a series of cycles (spirals) where each chances that you introduce errors in
1 Monolithic Kernel 2 Micro kernels 3 Exo 3 In a DevOps environment, RPM
cycle represents a phase of the software development process. 3 V- your application. This practice is
4 kernels 5 Hybrid kernels packages can be created to bundle
Model: Extension of the waterfall model where testing is integrated at known as Continuous Integration.
2. System Libraries:- These libraries can be applications, libraries, or other
each stage of development. It emphasizes the verification and Each change submitted to an
specified as some special functions. These components along with their
validation of deliverables at corresponding stages. 4 Iterative Model: application, even to development
are applied for implementing the operating dependencies into a single package for
Similar to Agile, it involves repetitive cycles of development and branches, is built and tested
system's functionality and don't need code easy distribution and deployment.
testing, with each cycle adding new features or refining existing ones automatically and continuously.
access rights of the modules of kernel. 4 DevOps teams can use RPM to
based on feedback. These tests ensure the changes pass
3. System Utility Programs:- It is automate the installation and
1 Lean: Lean is a methodology derived from Toyota's production all tests, guidelines, and code
responsible for doing specialized level and management of software across
system and is focused on maximizing customer value while minimizing compliance standards you
individual activities. multiple servers or environments,
waste. In software development, Lean principles aim to optimize the established for your application.
4. Hardware layer:- Linux operating system ensuring consistency and reliability in
development process by eliminating activities that do not add value to Gitlab itself is an example of a
contains a hardware layer that consists of software deployment.
the end product. Key principles of Lean include: Identifying value from project that uses Continuous
several peripheral devices like CPU, HDD, YUM (Yellowdog Updater, Modified):
the customer's perspective. Mapping the value stream to identify and Integration as a software
and RAM. 1 YUM is a package management utility
eliminate waste. Creating flow by minimizing delays and interruptions development method. For every
5. Shell:- It is an interface among the kernel and dependency resolver for RPM-based
in the development process. Establishing pull systems to ensure work push to the project, a set of checks
and user. It can afford the services of Linux distributions.
is completed only when there is demand. Striving for perfection by run against the code
kernel. It can take commands through the 2 It provides a high-level interface for
continuously improving processes. ITIL (Information Technology 2. Continuous Delivery: Continuous
user and runs the functions of the kernel. managing software packages, including
Infrastructure Library): ITIL is a set of best practices for IT service Delivery is a step beyond Continuous
The shell is available in distinct types of installation, updating, removal, and
management (ITSM) that focuses on aligning IT services with the Integration. Not only is your
OSes. These operating systems are dependency resolution.
needs of the business. It provides guidance on various aspects of IT application built and tested each
categorized into two different types, which 3 YUM simplifies the process of
service delivery and support, including: Service Strategy: Planning and time a code change is pushed to the
are the graphical shells and command-line installing and managing software by
aligning IT services with business objectives. Service Design: Designing codebase, the application is also
shells. nThe graphical line shells facilitate automatically resolving dependencies
new or changed IT services to meet business requirements. Service deployed continuously. However,
the graphical user interface, while the and retrieving packages from designated
Transition: Managing the transition of new or changed services into with continuous delivery, you trigger
ommand line shells facilitate the command repositories.
production. Service Operation: Day-to-day management of IT services the deployments manually.
line interface. Thus, both of these shells 4 In a DevOps context, YUM can be used
to ensure they meet agreed-upon service levels. Continual Service Continuous Delivery checks the code
implement operations. However, the to automate the deployment of
Improvement: Continuously improving IT services and processes to automatically, but it requires human
graphical user interface shells work slower software packages across multiple
enhance efficiency and effectiveness. intervention to manually and
as compared to the command-line interface servers or instances, making it easier to
2 Agile: Agile is an iterative and incremental approach to software strategically trigger the deployment
shells. maintain consistency and manage
development that emphasizes flexibility, collaboration, and customer of the changes.
There are a few types of these shells which software versions.
feedback. Agile methodologies promote adaptive planning, 3. Continuous Deployment:
are categorized as follows: YUM repositories can be set up
evolutionary development, early delivery, and continuous Continuous Deployment is another
1 Korn shell internally within an organization or used
improvement. Some key Agile frameworks and methodologies step beyond Continuous Integration,
2 Bourne shell with public repositories to access a wide
include: similar to Continuous Delivery. The
3 C shell range of software packages and
Scrum: A framework for managing and organizing work in cross- difference is that instead of
4 POSIX shell updates.
functional teams through iterations called sprints, typically lasting 2-4 deploying your application manually,
weeks. you set it to be deployed
Kanban: A method for managing workflow by visualizing work on a automatically. Human intervention is
Kanban board and limiting work in progress to improve efficiency and not required.
flow.
Extreme Programming (XP): A set of practices focused on improving
What is different type of maven repositories Explain custom image in details Explain chef attributes and create custom Explain in git creating repository, cloning and
A maven repository is a directory of packaged JAR file In Docker, a custom image refers to an image attributes check-in and commiting
with pom.xml file. Maven searches for dependencies that you create by starting with an existing Attributes cn be defined in several different 1 Creating a Repository: To create a new
in the repositories. There are 3 types of maven Docker image and then modifying it according ways, and they can be used to specify a repository, navigate to the directory where
repository: to your requirements. Docker images serve as wide range of different settings. For example, you want to store your project and initialize a
1) Maven Local Repository :- Maven local repository is the basis for containers, which are lightweight, attributes can be used to set the new Git repository using the git init command.
located in your local system. It is created by the portable, and self-sufficient environments that hostname of a node start or to specify which Eg :- cd /path/to/project
maven when you run any maven command. By run your applications. application should be installed on a git init
default, maven local repository Here's how you typically create a custom image node. Attributes can be used to override This will create a new .git directory in your
is %USER_HOME%/.m2 directory. For example: in Docker: default settings for a cookbook or recipe. For project directory, which contains all the
C:\Users\SSS IT\.m2.. 1 Select a Base Image: You start by selecting an instance, if a Cookbook contains a default necessary files for Git to manage your
Update location of Local Repository existing Docker image that serves as the setting for the hostname attribute, that project's version control.
We can change the location of maven local repository foundation for your custom image. This base setting can be overridden by specifying a 2 Cloning an Existing Repository: To clone an
by changing the settings.xml file. It is located in image typically contains the operating system different hostname in the attributes list file. existing repository from a remote location
MAVEN_HOME/conf/settings.xml, for and any necessary runtime environments or Types of Chef Attributes: (like GitHub), use the git clone command
example:E:\apache-maven-3.1.1\conf\settings.xml. dependencies for your application. As a Chef, you will need to be aware with the followed by the repository URL.
Let's see the default code of settings.xml file. 2 Write a Docker file: Next, you create a different attribute types that can be used Git clone
settings.xml Dockerfile, which is a text file that contains to configure a node run. There are six https://github.com/user/repository.git
...<settings instructions for building your custom image. attribute types that can be assigned to a Chef This will create a new directory named after
xmlns="http://maven.apache.org/SETTINGS/1.0.0" The Dockerfile specifies how to configure and cookbook: default, automatic, normal, the repository and copy all the files from the
xmlns:xsi="http://www.w3.org/2001/XMLSchema- package your application within the image. You force_default, override, force_override. Each remote repository into it.
instance" define things like which base image to use, type hasits own purpose and use case. 3 Checking the Status: After making changes
xsi:schemaLocation="http://maven.apache.org/SETTI which files to include, and what commands to 1, Default: A default attribute is an attribute to your project files, you can check the status
NGS/1.0.0 http://maven.apache.org/xsd/settings- run during the image build process. that does not have a value set on the node. If of your repository using the git status
1.0.0.xsd"> 3 Build the Image: Once you have written the a default attribute is not set in the default command. This command shows you which
<!-- localRepository Dockerfile, you use the docker build command attribute file, the Chef-client will use a nil files have been modified, added, or deleted
| The path to the local repository maven will use to to build the custom image. Docker reads the value for the attribute. You can override the since the last commit.
store artifacts. instructions from the Dockerfile and executes default attributes list just like any other 4 Adding Changes to the Staging Area: Before
| Default: ${user.home}/.m2/repository them step by step, creating layers in the image attribute, which can also be set in the default committing changes, you need to add them to
<localRepository>/path/to/local/repo</localRepositor for each instruction. These layers represent the attribute file. the staging area using the git add command.
y> changes made to the base image, and they are 2. Automatic: An automatic attribute is set by You can add specific files or directories, or use
--> ... stored efficiently by Docker, making the Chef-client node itself during the Chef- git add . to add all changes in the current
</settings> subsequent builds faster. client run. These server attributes are directory and its subdirectories.
2) Maven Central Repository :- Maven central 4 Tag and Push the Image (Optional): After typically set based on information gathered Eg :- git add myfile.txt
repository is located on the web. It has been created building the custom image, you can tag it with a from the node such as the operating system 5 Committing Changes: Once you've added
by the apache maven community itself. name and version using the docker tag type or platform. Automatic attributes can be your changes to the staging area, you can
The path of central repository is: command. You can then push the image to a overridden like any other attribute, but they commit them to the repository using the git
http://repo1.maven.org/maven2/. Docker registry (such as Docker Hub or a cannot be set in the default attribute file. commit command. Each commit should have a
The central repository contains a lot of common private registry) using the docker push 3. Normal: This is the most common type of descriptive commit message summarizing the
libraries that can be viewed by this url command. This allows you to share the custom attribute list and is typically used when you changes made.
http://search.maven.org/#browse. image with others or deploy it to different want to set a specific value for an attribute on Eg :-git commit -m "Added new feature"
3) Maven Remote Repository :- Maven remote environments. a node start. The value for a normal attribute 6 Pushing Changes to Remote Repository
repository is located on the web. Most of libraries can 5 Run Containers from the Custom Image: can be set in the default attribute file, or it (Optional): If you're working with a remote
be missing from the central repository such as JBoss Finally, you can create containers from your can be overridden on a per-node basis. Chef repository (like GitHub), you can push your
library etc, so we need to define remote repository in custom image using the docker run command. for Configuration Management committed changes to the remote repository
pom.xml file. Each container is an instance of the image, 4. Force_default: The value for this attribute using the git push command. This command
Let's see the code to add the jUnit library in pom.xml running your application in an isolated list is always taken from the default attribute uploads your commits to the remote
file. environment. You can run multiple containers file. If the force_default attribute is set on a repository, making them available to others.
pom.xml from the same custom image, and each node, any other values set for that node are Eg:-git push origin main
<project container behaves independently of the others. ignored. This can be useful if you want to Replace origin with the name of your remote
xmlns="http://maven.apache.org/POM/4.0.0" ensure that all nodes in your environment repository, and main with the name of the
xmlns:xsi="http://www.w3.org/2001/XMLSchema- have the same value for an attribute. branch you're pushing to.
instance" 5. Override: An override attribute will take
xsi:schemaLocation="http://maven.apache.org/POM/ precedence over any other values that have
4.0.0 Explain some of the DevOps tools with its
been set for an attribute, including the default
http://maven.apache.org/xsd/maven-4.0.0.xsd"> features.
value. This type of attribute is often used
<modelVersion>4.0.0</modelVersion> 1) Puppet:- Puppet is the most widely used
when you need to quickly change the value of
<groupId>com.javatpoint.application1</groupId> DevOps tool. It allows the delivery and
an attribute on a node run without having to
<artifactId>my-application1</artifactId> release of the technology changes quickly and
edit the default attribute file.
<version>1.0</version> frequently. It has features of versioning,
6. Force_override: A force_override attribute
<packaging>jar</packaging> automated testing, and continuous delivery.
list overrides any other attribute values,
<name>Maven Quick Start Archetype</name> It enables to manage entire infrastructure as
whether they are default values or override
<url>http://maven.apache.org</url> code without expanding the size of the team.
Explain MAVEN POM Builds values. This type of attribute should be used
<dependencies> 2) Ansible:- Ansible is a leading DevOps tool.
1 POM stands for Project Object Model. It is sparingly, as it can make it difficult to track
<dependency> Ansible is an open-source IT engine that
fundamental unit of work in Maven. It is an XML down the source of anm attribute value.
<groupId>junit</groupId> automates application deployment, cloud
file that resides in the base directory of the provisioning, intra service orchestration, and
<artifactId>junit</artifactId>
project as pom.xml. other IT tools. It makes it easier for DevOps
<version>4.8.2</version> Write a short note on fetch,full and remote?
2 The POM contains information about the teams to scale automation and speed up
<scope>test</scope> 1 Fetch: 1 git fetch is a Git command used to
project and various configuration detail used by productivity.Ansible is easy to deploy because
</dependency> retrieve changes from a remote repository
Maven to build the project(s). it does not use any agents or custom security
</dependencies> without merging them into your local branch.
3 POM also contains the goals and plugins. infrastructure on the client-side, and by
</project> 2 When you fetch, Git downloads the latest
While executing a task or goal, Maven looks for pushing modules to the clients. These
commits and updates the remote tracking
the POM in the current directory. It reads the modules are executed locally on the client-
branches (e.g., origin/master) in your local
POM, gets the needed configuration side, and the output is pushed back to the
repository to reflect the state of the remote
information, and then executes the goal. Some Ansible server. 3) Docker :-Docker is a high-
repository. 3 Fetching allows you to see what
of the configuration that can be specified in the end DevOps tool that allows building, ship,
changes others have made to the remote
POM are following: and run distributed applications on multiple
repository without affecting your local
project dependencies systems. It also helps to assemble the apps
working copy.
1 plugins quickly from the components, and it is
2 full: 1 git pull is a Git command used to
2 goals typically suitable for container management.
fetch changes from a remote repository and
3 build profiles 4) Nagios:-Nagios is one of the more useful
integrate them into the current branch.
4 project version tools for DevOps. It can determine the errors
2 It is essentially a combination of git fetch
5 developers and rectify them with the help of network,
followed by git merge (or git rebase if
6 mailing list infrastructure, server, and log monitoring
configured).
Before creating a POM, we should first decide systems. 5) CHEF:-A chef is a useful tool for
3 Pulling updates your local working copy with
the project group (groupId), its name (artifactid) achieving scale, speed, and consistency. The
the latest changes from the remote repository
and its version as these attributes help in chef is a cloud-based system and open source
and automatically merges them into your
uniquely identifying the project in repository. technology. This technology uses Ruby
current branch.
POM Example: encoding to develop essential building blocks
3 Remote: 1 In Git, a "remote" refers to a
<project such as recipes and cookbooks. The chef is
named reference to another repository,
xmlns="http://maven.apache.org/POM/4.0.0" used in infrastructure automation and helps
typically located on a remote server (e.g.,
xmlns:xsi="http://www.w3.org/2001/XMLSche in reducing manual and repetitive tasks for
GitHub, GitLab). 2 Remotes are used to fetch,
ma-instance" infrastructure management. 6) Selenium :-
pull, and push changes between your local
xsi:schemaLocation= Selenium is a portable software testing
repository and the remote repository.
"http://maven.apache.org/PCM/4.8.0 framework for web applications. It provides
3 By default, when you clone a repository, Git
http://maven.apache.org/xsd/maven- an easy interface for developing automated
creates a remote called "origin" that points to
4.0.0.xsd"> tests. 7) SALTSTACK :- Stackify is a lightweight
the original repository from which you cloned.
<modelVersion>4.0.0</modelVersion> DevOps tool. It shows real-time error queries,
<groupId>com.companyname.project- logs, and more directly into the workstation.
group</groupId> SALTSTACK is an ideal solution for intelligent
<artifactId>project</artifactId> orchestration for the software-defined data
<version>1.0</version> center.
</project>
what is lifecycle of a docker Explain the method to upload the image Name and explain the various docker components in Explain docker networking
container ?explain in details In docker registry and aws details 1 Accessing containers
1 Created state:- In the created state, a The Amazon ECR repository must exist 1. Docker Daemon:n 1 The Docker daemon (dockerd) is 2 linking container
docker container is created from a docker before you push the image. Amazon ECR a background service that runs on the host operating 1. Accessing Containers: When a container is
image. also provides a way to replicate your system. It is responsible for managing Docker objects running, it can expose ports to the host machine or
docker create --name <name-of-container> images to other repositories, across such as containers, images, volumes, and networks. 2 other containers, allowing communication with
<docker-image-name> Regions in own registry and across The Docker daemon listens for Docker API requests and services running inside the container. Docker
2 Running state :- In the running state, the different accounts, by specifying a processes them. It handles container lifecycle provides several networking options for accessing
docker container starts executing the replication configuration in your private operations, image management, network containers:
commands mentioned in the image. To run a registry settings. configuration, and more. 3 The daemon communicates Host Networking: In host networking mode, the
docker container, use the docker run Step 1: Authenticate your Docker client with the Docker client (docker) via RESTful APIs or UNIX container uses the network stack of the Docker
command. to the Amazon ECR registry to which you socket. host. This means that the container shares the
docker run <container-id> intend to push your image. 2. Docker Client:1 The Docker client (docker) is a network namespace with the host and has direct
or Authentication tokens must be obtained command-line interface tool that allows users to access to the host's network interfaces. This mode
docker run <container-name> for each registry used, and the tokens interact with the Docker daemon. Users can issue provides the highest network performance but may
The docker run command creates a are valid for 12 hours. To authenticate- commands to build, run, manage, and monitor Docker expose the container to potential security risks.
container if it is not present. In this case, the Docker to an Amazon ECR registry, run containers and images. 2 The Docker client sends Bridge Networking: By default, Docker uses bridge
creation of the container can be skipped. the aws ecr get- login-password commands to the Docker daemon using the Docker networking, where each container runs in its own
3 Paused state :- In the paused state, the command. When passing the Remote API. It can be used locally on the host machine network namespace with its own IP address on a
current executing command in the docker authentication token to the docker login or remotely to manage Docker instances running on bridge network. Containers on the same bridge
container is paused. Use the docker pause command, use the value AWS for the remote servers. network can communicate with each other using
command to pause a running container. username and specify the Amazon ECR 3. Docker Images: 1 Docker images are read-only their IP addresses or container names. Additionally,
docker pause container <container-id or registry URI you want to authenticate to, templates that contain everything needed to run a you can expose ports from the container to the
container-name> If authenticating to multiple registries, container, including the application code, runtime, host machine to allow external access.
Note: docker pause pauses all the processes you must repeat the command for each libraries, dependencies, and environment variables. 2 Overlay Networking: Overlay networking is used in
in the container. It sends the SIGSTOP signal registry. Images are built using Dockerfiles, which are text files Docker Swarm mode to enable communication
to pause the processes in the container. ws ecr get-login-password-region region containing a series of instructions for assembling the between containers running on different nodes in a
5 Unpaused state :- In the unpaused state, | docker image. 3 Docker images are stored in a registry, such as cluster. It creates a virtual network overlay on top
the paused container resumes executing the login--username AWS --password-stdin Docker Hub or Amazon ECR, and can be pulled from the of the physical network infrastructure, allowing
commands once it is unpaused. Use the aws_account_id.dkr.ecr.region.amazona registry to run containers on any Docker-enabled host. containers to communicate securely across nodes.
docker unpause command to resume a ws.com 4. Docker Containers: 1 Docker containers are Macvlan Networking: Macvlan networking allows
paused container. Step 2:- If your image repository doesn't lightweight, portable, and isolated runtime containers to have their own MAC addresses and
Then, Docker sends the SIGCONT signal to exist in the registry you intend to push to environments that run applications and their appear as physical devices on the network. This
resume the process. yet, create it. dependencies in a sandboxed manner. 2 Each container mode is useful for scenarios where containers need
docker unpause <container-id or container- Step 3:Identify the local image to push: is an instance of a Docker image and runs as a separate to be directly exposed to the external network with
name> Run the docker images command to list process on the host operating system. 3 Containers their own IP addresses.
6 Stopped state :- In the stopped state, the the container images on your system. provide process isolation using kernel namespaces and 2. Linking Containers:
container’s main process is shutdown docker images control groups (cgroups), ensuring that applications Linking containers allows one container to
gracefully. Docker sends SIGTERM for You can identify an image with the running in different containers do not interfere with communicate with another container by creating a
graceful shutdown, and if needed, SIGKILL, repository: tag value or the image ID in each other. 4 Docker containers can be started, secure tunnel between them. Docker provides a
to kill the container’s main process. the resulting command output. stopped, paused, restarted, and deleted using Docker built-in mechanism for linking containers using the -
Use the docker stop command to stop a Step 4: Tag your image with the commands. -link flag when running containers. Here's how it
container. Amazon ECR registry, repository, and 5. Docker Registries: 1 Docker registries are works:
docker stop <container-id or container- optional image tag name combination repositories for storing and distributing Docker images. Container A links to Container B: When Container
name> to use. They serve as centralized repositories where users can A is started, you can specify that it should be linked
Restarting a docker container would The registry format is upload, share, and download Docker images. 2 Docker to Container B using the --link flag. Docker
translate to docker stop, then docker run, aws_account_id.dkr.ecr.us-west- Hub is the official public registry maintained by Docker, automatically sets up environment variables inside
i.e., stop and run phases. 2.amazonaws.com. Inc., but users can also set up private registries using Container A that contain information about
7 Killed/Deleted state :- In the killed state, the repository name should match the Docker Trusted Registry (DTR), Amazon ECR, Google Container B, such as its IP address and exposed
the container’s main processes are repository that you created for your Container Registry (GCR), or self-hosted solutions like ports.
shutdown abruptly. Docker sends a SIGKILL image. If you omit the image tag, we Harbor. 3 Registries allow users to version images, Accessing Linked Containers: Once Container A is
signal to kill the container’s main process. assume that the tag is latest. control access permissions, and manage image lifecycle linked to Container B, Container A can
docker kill <container-id or container- Example: The following example tags a through features like tagging, searching, and scanning communicate with Container B using the
name> local image with the ID e9ae3c220b23 as for vulnerabilities. environment variables provided by Docker. For
aws_account_id,dkr.ecr.us-west- 6. Docker Volumes: 1 Docker volumes are persistent example, if Container B exposes a service on port
2.amazonaws.com/my- repository:tag. storage mechanisms that allow data to persist beyond 8080, Container A can access it using the
docker tag e9ae3c720b23 the lifecycle of a container. They enable data sharing environment variable
2.amazonaws.com/my-repository:tag and data persistence for Docker containers. 2 Volumes $CONTAINER_B_PORT_8080_TCP_ADDR and
aws_account_id.dkr.ecr.us-west- can be mounted from host directories or created as $CONTAINER_B_PORT_8080_TCP_PORT.
Step 5:Push the image using the docker Docker-managed volumes. Docker volumes are isolated
push command. from the container's filesystem and can be shared
docker push aws_account_id.dkr.ecr.us- discuss the core concepts of maven and How does
among multiple containers.
west-2.amazonaws.com/my- maven work?
3 Docker volumes are used for storing application data,
repository:tag 1 Project Object Model (POM): Maven uses a Project
databases, configuration files, logs, and other
Object Model, which is an XML file named pom.xml,
persistent data that needs to survive container restarts
to describe the project configuration, dependencies,
and updates.
and build settings. This file serves as the project's
7. Docker Networks: 1 Docker networks provide
blueprint and contains information such as project
Explain Docker Hub communication channels for connecting Docker
What is -release plugin and name, version, dependencies, build plugins, and
Docker Hub is a repository service and it is a cloud- containers to each other and to external networks.
how does it work? repositories. 2 Dependency Management: Maven
based service where people push their Docker 2 Docker supports various network drivers, including
Maven is actually a plugin manages project dependencies efficiently.
Container Images and also pull the Docker Container bridge, overlay, host, and macvlan, which offer
execution framework where Dependencies are declared in the pom.xml file along
Images from the Docker Hub anytime or anywhere every task is actually done by different networking modes and capabilities.
with their version numbers and scopes (like compile,
via the internet. It provides features such as you can plugins. Maven Plugins are 3 Docker networks enable containers to communicate
test, runtime, etc.). When Maven builds the project,
push your images as private or public. Mainly generally used to: securely with each other, expose ports to the host or
it automatically downloads the required
DevOps team uses the Docker Hub. It is an open- 1 create jar file external networks, and provide isolation and
dependencies from remote repositories (like Maven
source tool and freely available for all operating 2 create war file segmentation for network traffic.
Central) and includes them in the project's classpath.
systems. It is like storage where we store the images 3 compile code files 3 Build Lifecycle: Maven defines a standard build
and pull the images when it is required. When a 4 unit testing of code lifecycle consisting of phases such as compile, test,
person wants to push/pull images from the Docker 5 create project documentation package, install, and deploy. Each phase represents a
Hub they must have a basic knowledge of Docker. 6 create project report specific stage in the build process. When you execute
Let us discuss the requirements of the Docker tool. A plugin generally provides set a Maven command (like mvn compile, mvn test,
Docker is a tool nowadays enterprises adopting of goals, which can be executed etc.), Maven executes all the phases up to and
rapidly day by day. When a Developer team wants to using the followings including the specified phase in the lifecycle.n 4
share the project with all dependencies for testing mvn [plugin-name ]:(goal- Plugins: Maven plugins are used to extend its
then the developer can push their code on Docker name) functionality. Plugins are configured in the pom.xml
Hub with all dependencies. Firstly create the Images 1 cleans:- Cleans up target after file and are responsible for executing specific tasks
and push the Image on Docker Hub. After that, the the build. Deletes the target during the build process. For example, the Maven
testing team will pull the same image from the directory. Compiler Plugin is used to compile Java source code,
Docker Hub eliminating the need for any type of file, 2 compiler: -Compiles and the Maven Surefire Plugin is used to execute unit
software, or plugins for running the Image because Java.source files tests. 5 Convention over Configuration: Maven
the Developer team shares the image with all 3 surface: -Runs the JUnit unit follows the principle of "convention over
dependencies. tests. Creates test reports. configuration," which means that it uses sensible
Advantages:-1) Docker Container Images are light in 4 jar:- Builds a JAR file from the defaults and conventions to minimize the need for
weight. 2) We can push the images within a minute current project. configuration. For example, Maven expects project
and with help of a command. 3) It is a secure 5 war:- Builds a WAR file from source code to be located in src/main/java and
method and also provides a feature like pushing the the current project. src/test/java, and resources in src/main/resources
private image or public image. 4)Docker hub plays a 6 javadoc:- Generates Javadoc and src/test/resources. 6 Integration with IDEs:
very important role in industries as it becomes more for the project Maven integrates seamlessly with popular Integrated
popular day by day and it acts as a bridge between 7 antrub:- Runs a set of ant Development Environments (IDEs) like Eclipse, IntelliJ
the developer team and the testing team 5 ) If a tasks from any phase IDEA, and NetBeans. IDEs can import Maven projects
person wants to share their code, software any type mentioned ofthe build. directly, allowing developers to work with Maven-
of file for public use, you can just make the images managed projects within their preferred
public on the docker hub. development environment.
Explian in details node in chef/ Component List Type of handlers in chef and explain it Explain the use of a knife to the chef Explain chef architecture with suitable
1 Chef Server: The Chef Server is a centralized hub that In Chef, handlers are components that allow In Chef, the knife command-line tool is a diagram
acts as the main repository for storing configuration data, you to define actions or behaviors to be versatile utility that serves as a primary Chef works on a three-tier client server
policies, and cookbooks. It manages the nodes (servers or executed at interface for administrators to interact with model wherein the working units such as
virtual machines) in the infrastructure and facilitates specific points during the Chef run. They are the Chef ecosystem. It provides a wide cookbooks are developed on the Chef
communication between Chef clients (nodes) and the used to respond to specific events or conditions range of functionalities for managing workstation. From the command line
Chef Workstation. that infrastructure, configurations, and utilities such as knife, they are uploaded
Cookbooks Repository: Stores cookbooks, which are occur during the configuration process. Here are deployments. Here are some key uses of the to the Chef server and all the nodes which
collections of recipes and other resources used to define some types of handlers in Chef: knife are present in the architecture are
configurations for infrastructure components. 1. Exception Handlers: tool: registered with the Chef server.
Data Bags: Securely stores configuration data in JSON ● Exception handlers are triggered when an 1. Cookbook management: Knife allows you
format. Data bags are used to store sensitive information exception occurs during the Chef run. to create, upload, and manage cookbooks.
such as passwords, secrets, or any other data that needs ● They allow you to define custom actions to You can use knife commands to generate
to be shared across nodes. handle specific exceptions and errors. new cookbooks, upload them to the Chef
Roles: Defines the role or function of a node in the ● For example, you can configure an exception Server, and manage cookbook versions.
infrastructure. Roles allow for the easy assignment of handler to send an email notification 2. Node management: With knife, you can
configurations to nodes based on their intended purpose. when a particular error occurs. manage nodes in your infrastructure. This
Environments: Defines the environment-specific 2. Report Handlers: includes adding new nodes, deleting
configurations and attributes. Environments allow for the ● Report handlers are executed at the end of a existing nodes, and modifying node
separation of configurations between development, successful Chef run or after all configurations. Knife provides commands
testing, staging, and production environments. resources have converged. for assigning roles, adding run lists, setting
Chef Server provides authentication and authorization ● They are used to generate reports or perform attributes, and managing node-specific
mechanisms to control access to sensitive information actions based on the final state of the data.
and ensure the security of the infrastructure. configuration. 3. Bootstrapping nodes: Knife simplifies the In order to get the working Chef
2 Chef Workstation: The Chef Workstation is the ● Report handlers can be used to send process of bootstrapping new nodes. infrastructure in place, we need to set up
development and management environment where notifications, generate log files, or update 4. Environment management: Knife allows multiple things in sequence.
administrators author, test, and manage cookbooks and external systems with the run status. you to create and manage environments. In the above setup, we have the following
other configuration artifacts. 3. Start Handlers: Environments provide a way to group nodes components.
It typically includes the following components: ● Start handlers are executed at the beginning and apply specific configurations to them. 1 Chef Workstation
Chef Development Kit (ChefDK): A package that includes of a Chef run, before any resources Using knife commands, you can create This is the location where all the
essential tools and libraries for developing and testing are processed. environments, assign nodes to configurations are developed. Chef
Chef cookbooks. ChefDK includes utilities like Test ● They are useful for performing initialization environments, and modify environment workstation is installed on the local
Kitchen for cookbook testing, ChefSpec for unit testing, tasks or setting up the environment attributes. machine. Detailed configuration structure
and Berkshelf for cookbook dependency management. before the configuration begins. 5. Querying and searching: Knife provides is discussed in the later chapters of this
Knife CLI: A command-line tool used for interacting with ● Start handlers can be used to log start events, commands for querying and searching the tutorial.
the Chef Server and managing various aspects of the load configuration data, or prepare Chef Server to retrieve information about 2 Chef Server
infrastructure. Knife allows administrators to upload resources for the run. nodes, cookbooks, environments, roles, and This works as a centralized working unit of
cookbooks, manage nodes, roles, environments, and 4. Notification Handlers: more. Chef setup, where all the configuration
perform other administrative tasks. ● Notification handlers are triggered when a 6. Remote command execution: Knife files are uploaded post development.
Chef Repository: A directory structure that contains specific resource or attribute changes enables you to execute commands on There are different kinds of Chef server,
cookbooks, roles, environments, and other configuration state during the Chef run. remote nodes using SSH( Secure Shell) some are hosted Chef server whereas
artifacts. Administrators work within the Chef Repository ● They allow you to define actions that should 7. Data bag and secret management: Knife some are built-in premise.
to develop and manage infrastructure configurations. occur when a particular event or allows you to manage data bags, which are 3 Chef Nodes
The Chef Workstation serves as the control center for condition is met. used to store and retrieve data in JSON They are the actual machines which are
defining and managing the desired state of the ● For example, you can configure a notification format. You can create, edit, and manage going to be managed by the Chef server.
infrastructure, allowing administrators to implement handler to restart a service if a data bags using knife commands. All the nodes can have different kinds of
changes, enforce policies, and ensure consistency across related configuration file is modified. Additionally, knife provides support for setup as per requirement. Chef client is
the environment. managing encrypted data the key component of all the nodes, which
3 Cookbooks: bags, allowing you to securely store helps in setting up the communication
Cookbooks are fundamental units of configuration in explain how to create and manage data bags
sensitive information between the Chef server and Chef node.
Chef. They contain reusable code and resources that 1. Creating Data Bags using CLI: To create a
The other components of Chef node is
define the desired state of various components in the new data bag, you use the knife Command
Ohai, which helps in getting the current
infrastructure. Line tool or Chef APIs. Data bags are typically Expalin docker archtecuture in details with state of any node at a given point of time.
Each cookbook typically consists of the following organized by name similar to directories in a diagram
components: file system. Docker Architecture
Recipes: Contains Ruby code that defines the steps knife data bag create BAG_NAMEG-NAME What is Docker Daemon? How to setup chef workstation
needed to configure a specific component or service. 2. Creating Data Bag Items: Inside a data bag. Docker daemon manages all the services by Organization Setup involves creating an
Recipes are the building blocks of configuration in Chef you store individual items. Each itenm is a communicating with other daemons. It manages organization in Chef and adding
and are organized based on the desired configuration JSON object that contains the data you want docker objects such as images, containers, yourself as a user and a node to that
tasks. to store. For instance, if you aré creating a networks, and volumes with the help of the API organization. Here's a brief overview of
Attributes: Defines configurable parameters and settings data bag for database connection strings, requests of Docker. 1 Docker Client:- With the the process:
that control the behavior of recipes. Attributes allow each item might represent a different help of the docker client, the docker users can 1. Create Organization: Using the Chef
administrators to customize cookbook behavior based on database. interact with the docker. The docker command server's web interface (such as Chef
specific requirements or environment variables. knife data bag create BAG_NAME ITEM uses the Docker API. The Docker client can Manage or Chef Automate), log in as an
Templates: Provides dynamic content generation NAME communicate with multiple daemons. When a administrator and create a new
3. Editing Data Bag Items: Once created, you docker client runs any docker command on the organization. Provide a name and any
What is a data bag in chef can edit the data bag items using a text editor docker terminal then the terminal sends necessary details for the organization.
In Chef, a data bag is a global variable that is stored as or directly through the command line using instructions to the daemon. The Docker daemon 2. Add User to Organization: Associate
JSON data and is used to store global data that is the knife tool. gets those instructions from the docker client your user account with the newly
accessible across multiple nodes in a Chef infrastructure. knife data bag edit BAG NAME ITEM_NAME withinside the shape of the command and REST created organization. This step typically
Data bags are typically used to store sensitive information 4. Uploading Data Bags: After creating and API’s request. The main objective of the docker involves specifying your username and
such as passwords, API keys, and other configuration data editing data bags and their items, you upload client is to provide a way to direct the pull of selecting the
that needs to be shared among multiple nodes or recipes. them to the Chef Server using the knife images from the docker registry and run them on organization you want to join.
Data bags are structured as collections of JSON objects, command. the docker host. The common commands which 3. Generate User Key: Generate a user
where each object represents a piece of data. Each data knife data bag from file BAG _NAME are used by clients are docker build, docker pull, key for yourself. This key will be used
bag item has a unique name within the data bag. Data ITEM_NAME.json and docker run. 2 Docker Host :-A Docker host is for authentication when interacting
bags are stored on the Chef Server, which makes them 5. Accessing Data Bags in Recipes: In Chef a type of machine that is responsible for running with the organization. Download the
accessible to any node that is configured to use that Chef Recipes, you can access data bag items and more than one container. It comprises the user key file and keep it secure.
Server. their content. These items can be used to Docker daemon, Images, Containers, Networks, 4. Prepare Node: Set up the node
Data bags are commonly used in Chef recipes to provide configure resources within your cookbooks. and Storage. 3 Docker Registry:-All the docker (server or virtual machine) that you
configuration data to recipes at runtime. Recipes can # Load a data bag item images are stored in the docker registry. There is want to add to the organization. Ensure
access data bag items using the Chef data_bag_item my_data = data_bag_item('BAG NAME', a public registry which is known as a docker hub that the Chef client is installed on the
resource or the chef-vault gem for encrypted data bags. 'ITEM NAME') node. 5. Configure Node: Create a
Overall, data bags provide a flexible way to store and that can be used by anyone. We can run our
# Access attributes within the data bag item private registry also. With the help of docker run client configuration file on the node
manage configuration data in Chef, making it easier to db_host = my_data['database']['host'] or docker pull commands, we can pull the (usually named "client.rb") and specify
manage and maintain infrastructure configurations across db_user = my_data[' database']['username' ] required images from our configured registry. the Chef server URL, organization name,
multiple nodes 1 Data bags are especially useful for Images are pushed into configured registry with and the location of the user key file.
separating sensitive data from Cookbooks and the help of the docker push command. 4 Docker This configuration file informs the Chef
configurations, providing better security and Objects Whenever we are using a docker, we are client about the server and organization
separation of concerns. However, it is creating and use images, containers, volumes, it should connect to. 6. Register Node:
important to note that data bags are not networks, and other objects. Now, we are going Run the Chef client on the node with
inherently encrypted. They can be optionally to discuss docker objects:- Docker objects 5 the appropriate registration command.
encrypted to enhance security. When - Docker Images:-An image contains instructions This command typically includes the
encrypted, data bag items can only be for creating a docker container. It is just a read- organization name, node name, and the
decrypted by nodes that have the decryption only template. It is used to store and ship path to the user key file. The Chef client
keys. applications. Images are an important part of the will register the node with the specified
2 Data Bags are treated as Global variables docker experience as they enable collaboration organization.
like JSON data. They. are indexed for between developers in any way which is not 7. Verify Membership: Go to the Chef
searching and accessed during search possible earlier. 6 Docker Containers:- server's web interface and navigate to
process. We can access JSON data from Chet. Containers are created from docker images as theorganization and confirm that your
For example, a data bag can store global they are ready applications. With the help of user account is listed as a member.
variables such as an app's source URL, the Docker API or CLI, we can start, stop, delete, or Additionally, verify that the node you
. instance's hostname, and the associated move a container. A container can access only registered is also listed as part of the
stack's VPC identifier. those resources which are defined in the image organization