
Software Engineering (PCCO6010T)

Credits : 03
Examination Scheme
Term Test : 15 Marks
Teacher Assessment : 20 Marks
End Sem Exam : 65 Marks
Total Marks : 100 Marks

Prerequisite:
1. Concepts of Object Oriented Programming & Methodology.
2. Knowledge of developing applications with front end & back end connectivity.

By : Prof. Mr. P.R. Patil


• Course Objectives: To provide knowledge of the standard software engineering discipline.



Unit-VII
3 Hrs.

Latest Trends in Software Development Engineering


DevOps: DevOps Toolchain, DevOps Architecture (e.g. Docker), DevOps for Deployment



Latest Trends in Software Development Engineering
DevOps
• DevOps Toolchain
A DevOps toolchain is a collection of tools, often from a variety of vendors, that operate as an integrated unit to
design, build, test, manage, measure, and operate software and systems. It enables development and
operations teams to collaborate across the entire product lifecycle and tackles key DevOps fundamentals
including continuous integration, continuous delivery, automation, and collaboration.
What is a DevOps toolchain?
A DevOps toolchain includes the tools and technology that enable development and operations teams to
collaborate across the entire software lifecycle. It tackles key DevOps fundamentals including continuous
integration, continuous delivery, automation, and collaboration.

• Key DevOps fundamentals revolve around the concepts of continuous integration, continuous delivery, automation, and collaboration. Since DevOps is a practice more than a technology, no single tool can do justice to all stages of software development. Instead, DevOps relies on a series of tools.
• A number of open-source DevOps tools are available. Combining them to fit your needs produces a DevOps toolchain, which makes product delivery faster and more efficient. A toolchain is essentially a set of tools that together solve a particular problem.
• As mentioned above, different tools are used at different stages of the software development cycle.



Collaboration
• The greatest strength of the DevOps culture is collaboration and communication between different teams. Teams such as development, testing, and product coordinate and work together to automate the entire process. Collaboration tools help teams work together regardless of time zones and locations. Faster communication means faster software releases. A few examples of collaboration tools are Slack, Campfire, and Skype.
Planning
• Stakeholders, clients, and employees working with different teams should have common goals. Therefore, transparency among all
participants is important. Planning tools provide this transparency. A couple of examples of planning tools are Asana and Clarizen.
Source Control
• You need a centralized storage location for all your data, documentation, code, configurations, files, etc. The code in source control can then be branched so that different teams can work in parallel. Source control tools provide exactly these capabilities. A few examples of source control tools are Git and Subversion (SVN).
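For instance, a minimal Git workflow might look like the following sketch; the repository URL, branch, and file names are illustrative:

$ git clone https://example.com/team/app.git    # get the shared code base (hypothetical URL)
$ cd app
$ git checkout -b feature/login                 # branch off so a team can work in isolation
$ git add login.py
$ git commit -m "Add login feature"
$ git push origin feature/login                 # publish the branch for review and integration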
Issue Tracking
• Greater transparency gives a clearer view of the system, making issues easier and faster to track. Issue tracking tools support this, with one condition: all the teams should be using the same tracking tool. A few examples of issue tracking tools are Jira, ZenDesk, and Backlog.
Configuration Management
• Wouldn’t it be perfect if all your systems were automatically configured and updated without you having to worry about it? Configuration management tools are meant for exactly that. These tools help manage your infrastructure as code, which avoids configuration drift across environments. A few examples of configuration management tools are Ansible, Puppet, and Chef.
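As a small illustration, an Ansible ad-hoc command can enforce the same package state on every machine in a group; the inventory group name webservers is an assumption:

$ ansible webservers -m apt -a "name=nginx state=present" --become
# -m apt   : use Ansible's apt module to manage the package
# -a "..." : the desired state, applied identically on every host (prevents drift)
# --become : escalate privileges for the installation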
Continuous Integration
• A good software development cycle has code developed in chunks by different teams and then continuously integrated. Each piece might work perfectly fine on its own but can create issues when integrated. Continuous integration tools let you detect errors quickly and resolve them faster. A few examples of continuous integration tools are Bamboo, Jenkins, and TeamCity.
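A CI server such as Jenkins or Bamboo typically just runs a scripted build on every change. A minimal sketch of such a build step (the branch name and Maven-based project layout are assumptions):

#!/bin/sh
# Minimal CI build step: fail fast so integration errors surface quickly.
set -e                  # abort on the first failing command
git pull origin main    # fetch the latest integrated code
mvn -B clean verify     # compile the merged code and run the whole test suite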
Binary Repositories
• A product might be developed on a daily or even hourly basis. The code needs to flow smoothly from the developer’s machine to the production environment, so a repository manager is a good way to bridge this gap. Repositories contain collections of binary software artifacts, metadata, and code. A few examples of binary repositories are Artifactory, Nexus, and Maven Central.
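As a sketch, a build artifact can be published to an Artifactory-style repository over HTTP; the server URL, repository name, and credentials below are hypothetical:

$ curl -u ci-user:API_KEY -T build/app-1.4.2.jar \
    "https://artifactory.example.com/artifactory/libs-release-local/com/example/app/1.4.2/app-1.4.2.jar"
# -T uploads the file; the path encodes group/artifact/version metadata so that
# downstream environments can pull exactly this binary instead of rebuilding it.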
Monitoring
• As the name suggests, monitoring is a must in DevOps for smooth execution. Monitoring tools
ensure service uptime and optimal performance. A couple of examples of monitoring tools are
BigPanda and Sensu.
Automated Testing
• The integrated code needs to be tested before it is passed on to the build. The quicker the feedback loop runs, the quicker you reach your goal. A few examples of automated testing tools are Telerik, QTP, and TestComplete.
Deployment
• Another great concept of DevOps, one that allows application deployment to be frequent and reliable, is deployment automation. Deployment tools let you release your products to the market faster. A few examples of deployment tools are the Docker toolset and IBM uDeploy.
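For example, a simple Docker-based release replaces the running container with a newly built one; the registry address, image name, and ports are illustrative:

$ docker pull registry.example.com/shop/web:2.0    # fetch the release image
$ docker stop web && docker rm web                 # retire the old version
$ docker run -d --name web -p 80:8080 registry.example.com/shop/web:2.0
# -d runs the container in the background; -p maps host port 80 to the app's port 8080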
Database
• Finally, there’s handling the data. Data is valuable for gaining insights, and developing any application requires a lot of it. Database management tools help you handle cumbersome data with ease. Some examples of database management tools are RazorSQL and TeamDesk.
Why Do We Need a DevOps Toolchain?
• A DevOps culture brings good results in terms of product delivery and cost. Companies need developers with the skills and expertise to use different DevOps tools. Is it worth spending so much on skilled employees and on changing the entire company’s infrastructure? Let’s have a look.
• Faster deployments: Using these tools automates most of the stages of the software
development cycle. Agile and rapid product deliveries are the result of using standardized
pipelines. Consequently, businesses that innovate faster win the competition.
• Fine-tuned incident control: Humans make mistakes; hence, it’s better to trust tools. Using a standardized pipeline and infrastructure helps various teams respond faster and more effectively during an incident.
• Quality assurance: Resolving software defects quickly and precisely is difficult, but DevOps tools make it far easier. The DevOps toolchain brings out the best product with the best quality, and quality is one of the major selling points for most products.



How Do We Create a DevOps Toolchain?
There are five main aspects of creating a DevOps toolchain.
• Acceptance: The first step to making a revolutionary change is accepting that something is wrong and, furthermore,
accepting that change is required. If your developments aren’t moved to production quickly, then you most
definitely need another toolchain. In other words, you need a toolchain that moves things faster.
• Inspiration: There are many companies that have already adopted DevOps and have benefited from it. Techies are
always ready to contribute. Read some of their success stories, reach out and connect with them in different tech
communities, and learn from them.
• Analysis: Analyze your current system, as well as the tools that you’re using. Find out how much time each step takes and how accurate it is. This will help you identify the gaps in your current system. You then know what needs to be changed.
• Build: Once you know what has to be changed, you can go ahead and start selecting the best tools for your
requirements. Build the prototype of your toolchain. This is the time where you put all the theoretical knowledge
into practice. Improve on your current metrics using these tools.
• Strategy: Businesses these days are very dynamic. The competition demands a scale-up at some point. Hence, your
toolchain should be capable of handling unexpected situations. You need to maintain, upgrade, and configure your
tools over time. Plan your long-term toolchain support strategy.
• To learn about the future of the DevOps toolchain, take a look at this Gartner report: The Future of DevOps
Toolchains Will Involve Maximizing Flow in IT Value Streams.
• Conclusion
Leave all the boring work like installing, upgrading, configuring, and setting up the infrastructure to the tools in the
DevOps toolchain while you concentrate on building and then deploying the product. In this competitive IT industry,
it’s important for you to stay up-to-date with the latest products and techniques to deliver the best results.



• Options for Building Your DevOps Toolchain
1. All-in-one DevOps toolchain
• An all-in-one DevOps solution provides a complete solution that may not integrate with other third-party tools. This can be
useful for companies or groups just beginning their DevOps journey, or if a team wants to start a project quickly. The
downside of this type of toolchain is that most established teams already have a set of tools they use and prefer, which
may not integrate with a complete solution. Plus, such a comprehensive toolchain can suffer from the “jack of all trades, master of none” syndrome: a single tool simply can’t evolve fast enough for rapidly changing markets. Finally, more often than not, companies need to integrate legacy tools into a DevOps toolchain, and an all-in-one toolchain can limit this.

2. Customizable DevOps toolchain


• The other approach is to use a DevOps toolchain that can be customized for a team’s needs with different tools. This
allows teams to bring the existing tools they know and love into the wider DevOps toolchain. For example, a team can use
Jira for planning and workflow tracking, Kubernetes to provision individual development environments, GitHub for
collaborative coding, Jenkins for continuous integration, and more. Organizations can customize their workflows by teams
and/or by project.
• Integration is essential for these types of toolchains. If the different tools don’t integrate, team members spend unnecessary time switching between screens and logging in to multiple places, and sharing information between tools becomes challenging.



List of Best DevOps Tools To Learn and Master
1) Docker
2) Ansible
3) Git
4) Puppet
5) Chef
6) Jenkins
7) Nagios
8) Splunk
9) Bamboo
10) ELK Stack
11) Kubernetes
12) Selenium
13) Vagrant
14) Maven
15) Gradle



DevOps Architecture (e.g. Docker)
Docker overview
• Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your
applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your
infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for
shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it
in production.
The Docker platform
• Docker provides the ability to package and run an application in a loosely isolated environment called a container. The
isolation and security allows you to run many containers simultaneously on a given host. Containers are lightweight and
contain everything needed to run the application, so you do not need to rely on what is currently installed on the host.
You can easily share containers while you work, and be sure that everyone you share with gets the same container that
works in the same way.
• Docker provides tooling and a platform to manage the lifecycle of your containers:
• Develop your application and its supporting components using containers.
• The container becomes the unit for distributing and testing your application.
• When you’re ready, deploy your application into your production environment, as a container or an orchestrated service.
This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.
• What can I use Docker for?
Fast, consistent delivery of your applications
• Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local
containers which provide your applications and services. Containers are great for continuous integration and continuous
delivery (CI/CD) workflows.
• Consider the following example scenario:
• Your developers write code locally and share their work with their colleagues using Docker containers.
• They use Docker to push their applications into a test environment and execute automated and manual tests.
• When developers find bugs, they can fix them in the development environment and redeploy them to the test
environment for testing and validation.
• When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production
environment.
Responsive deployment and scaling
• Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a developer’s local
laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.
• Docker’s portability and lightweight nature also make it easy to dynamically manage workloads, scaling up or tearing
down applications and services as business needs dictate, in near real time.
Running more workloads on the same hardware
• Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you
can use more of your server capacity to achieve your business goals. Docker is perfect for high density environments and
for small and medium deployments where you need to do more with fewer resources.
Docker architecture
• Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy
lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run
on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and
daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.



Figure - High-level workflow for the Docker containerized application life cycle



The Docker daemon
• The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks,
and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client


• The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as
docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The
Docker client can communicate with more than one daemon.
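Because the client and daemon speak the same API over a socket or the network, one docker client can target different daemons. A short sketch (the host names and address are hypothetical):

$ docker ps                                         # talks to the local daemon via the UNIX socket
$ DOCKER_HOST=ssh://deploy@build-server docker ps   # the same command, against a remote daemon over SSH
$ docker -H tcp://10.0.0.5:2376 ps                  # or address a remote daemon over TCP directly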

Docker Desktop
• Docker Desktop is an easy-to-install application for your Mac, Windows or Linux environment that enables you to build and share
containerized applications and microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client (docker),
Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper. For more information, see Docker Desktop.

Docker registries
• A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for
images on Docker Hub by default. You can even run your own private registry.

• When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you
use the docker push command, your image is pushed to your configured registry.
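In practice the pull/push flow looks like this; the private registry address is an assumption:

$ docker pull ubuntu                                           # pulled from Docker Hub, the default registry
$ docker tag ubuntu registry.example.com/base/ubuntu:22.04     # re-tag the image for a private registry
$ docker push registry.example.com/base/ubuntu:22.04           # push it to that configured registry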

Docker objects
• When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section
is a brief overview of some of those objects.
Images
• An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you
may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application
run. You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax
for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only
those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
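A minimal sketch of the ubuntu-plus-Apache image described above, written as a shell heredoc so it can be pasted directly; the package, paths, and tag are illustrative:

$ cat > Dockerfile <<'EOF'
# Base image layer
FROM ubuntu:22.04
# One layer: install the Apache web server
RUN apt-get update && apt-get install -y apache2
# One layer: copy your application files into the image
COPY ./site /var/www/html
# How a container created from this image starts the application
CMD ["apache2ctl", "-D", "FOREGROUND"]
EOF
$ docker build -t my-web-app .    # rebuilds only the layers whose instructions changed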

Containers
• A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more
networks, attach storage to it, or even create a new image based on its current state. By default, a container is relatively well isolated from other containers and its host machine. You
can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine. A container is defined by its image as well
as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.

Example docker run command


The following command runs an ubuntu container, attaches interactively to your local command-line session, and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
• When you run this command, the following happens (assuming you are using the default registry configuration):
1.If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
2.Docker creates a new container, as though you had run a docker container create command manually.
3.Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
4.Docker creates a network interface to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the
container. By default, containers can connect to external networks using the host machine’s network connection.
5.Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using
your keyboard while the output is logged to your terminal.
6.When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it.
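Continuing the example, the stopped container can then be listed, restarted, or removed:

$ docker ps -a                        # the exited ubuntu container is still listed here
$ docker start -ai <container-id>     # start it again, attached and interactive
$ docker rm <container-id>            # or remove it for good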



The underlying technology
• Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to
deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace called the
container. When you run a container, Docker creates a set of namespaces for that container.
• These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its
access is limited to that namespace.
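You can observe this isolation directly by comparing namespace IDs on the host and inside a container; the numeric IDs shown will differ on your machine:

$ readlink /proc/self/ns/pid                          # on the host, e.g. pid:[4026531836]
$ docker run --rm ubuntu readlink /proc/self/ns/pid   # prints a different ID, e.g. pid:[4026532801],
                                                      # because the container has its own PID namespace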



Benefits of DevOps for containerized applications
• Here are some of the most important benefits provided by a solid DevOps workflow:
• Deliver better-quality software, faster and with better compliance.
• Drive continuous improvement and adjustments earlier and more economically.
• Increase transparency and collaboration among stakeholders involved in delivering and operating software.
• Control costs and utilize provisioned resources more effectively while minimizing security risks.
• Plug and play well with many of your existing DevOps investments, including investments in open-source.



DevOps for Deployment
• Deployment in DevOps is the process of taking code from version control and making it readily available to users in an automated, ready-to-use fashion. DevOps deployment tools come into play when the developers of a particular application are working on certain features that they need to build and implement in the application. Deployment automation is a very effective, reliable, and efficient means of testing and deploying an organization’s work.
• Continuous deployment tools in DevOps are essentially about keeping the required code updated on a particular server. There can be multiple servers, and you need the right tools to continuously update the code on each of them and refresh the application. The functionality of DevOps continuous deployment tools can be explained as follows:
1.In the first phase of testing, the code changes are merged for internal testing.
2.The next phase is staging, where client testing takes place as per their requirements.
3.Last but not least, the production phase makes sure that no other feature is impacted when the updated code is deployed to the server.
• DevOps deployment tools make working with the servers very convenient and easy for users. This differs from the traditional way of dealing with applications, and the improvement has produced positive results for companies and users alike.
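A minimal continuous-deployment step, as might run at the end of a pipeline once staging tests pass, could look like the following sketch; the server names, registry address, and container layout are assumptions:

#!/bin/sh
# Roll the approved image tag out to each production server in turn.
set -e
TAG="$1"                        # e.g. ./deploy.sh 2.3.1
for HOST in prod-1 prod-2; do   # hypothetical server names
  ssh "deploy@$HOST" "
    docker pull registry.example.com/shop/web:$TAG
    docker rm -f web || true    # stop and remove the old version if present
    docker run -d --name web -p 80:8080 registry.example.com/shop/web:$TAG
  "
done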



What are DevOps Deployment Tools?
• DevOps tools make it convenient and easy for companies to reduce the probability of errors and maintain continuous integration in operations. They address the key operational aspects of a company by automating the whole process: features are automatically built, tested, and deployed.
• DevOps tools make the whole deployment process a smooth one, and they can help you with the following aspects:
1.Increased development speed.
2.Improvement in operational efficiency.
3.Faster release.
4.Non-stop delivery.
5.Quicker rate of innovation.
6.Improvement in collaboration.
7.Seamless flow in the process chain.



Best Deployment Tools in DevOps



