
UNIT V PROJECT MANAGEMENT

Software Project Management - Software Configuration Management - Project Scheduling

DevOps: Motivation - Cloud as a Platform - Operations - Deployment Pipeline: Overall Architecture - Building and Testing - Deployment - Tools - Case Study

Software Project Management

Project:

A project is defined as a specific plan or design: a set of activities planned to achieve a particular aim, often involving the transfer of ideas from one domain into another.
Examples:
➢ A project to build a new hospital
➢ A research work undertaken by a college student

Project Characteristics:

• Planning is required

• Non-routine tasks are involved

• Work is carried out for a customer

• A specific goal or objective is defined

• The project has a predetermined time span

• Work involves many people in a team

• People with different skill sets are required

• The resources are limited (most of the time)

• There is uncertainty regarding time, cost, quality and performance.

• A project may be small or large and complex
Software:
• Software is a set of instructions, data or programs used to operate computers and execute specific tasks. In contrast to hardware, which describes the physical aspects of a computer, software is a generic term used to refer to applications, scripts and programs that run on a device.
Management:

• Management can be defined as all activities and tasks undertaken by one or more persons for the purpose of planning and controlling the activities of others in order to achieve objectives or complete an activity that could not be achieved by others acting independently.

What is software project management?

• Software project management is the art and discipline of planning and supervising software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored and controlled.
• It is a procedure of managing, allocating and timing resources to develop computer software that fulfills requirements.
• In software project management, the client and the developers need to know the length, duration and cost of the project.

Prerequisite of software project management

There are three needs for software project management. These are:

1. Time
2. Cost
3. Quality

It is an essential part of a software organization to deliver a quality product, keep the cost within the client's budget and deliver the project on schedule. Various factors, both external and internal, may impact this triple constraint, and any one of the three factors can severely affect the other two.

Project Manager

A project manager is a person who has the overall responsibility for the planning, design, execution, monitoring, controlling and closure of a project. A project manager plays an essential role in the success of a project.

A project manager is responsible for making decisions, both large and small, throughout the project. The project manager manages risk and minimizes uncertainty. Every decision the project manager makes should directly benefit the project.
Role of a Project Manager:

1. Leader:

A project manager must lead the team and provide direction so that team members understand what is expected of them.

2. Medium:

The project manager is a medium between the clients and the team. The manager must coordinate and transfer all the appropriate information from the clients to the team and report to senior management.

3. Mentor:

The project manager should guide the team at each step and make sure the team remains cohesive. The manager provides recommendations to the team and points them in the right direction.

Responsibilities of a Project Manager:

1. Managing risks and issues.
2. Creating the project team and assigning tasks to team members.
3. Activity planning and sequencing.
4. Monitoring and reporting progress.
5. Modifying the project plan to deal with changing situations.

Software projects versus other types of project:

• Invisibility: When a physical artifact such as a bridge or road is being constructed, the progress being made can actually be seen. With software, progress is not immediately visible.
• Complexity: Per dollar, pound or euro spent, software products contain more complexity than other engineered artifacts.
• Conformity: The ‘traditional’ engineer usually works with physical systems and physical materials like cement and steel. These physical systems can have some complexity, but are governed by physical laws that are consistent. Software developers have to conform to the requirements of human clients. It is not just that individuals can be inconsistent: organizations, because of lapses in collective memory, in internal communication or in effective decision-making, can exhibit remarkable ‘organizational stupidity’ that developers have to cater for.
• Flexibility: The ease with which software can be changed is usually seen as one of its strengths. However, this means that where the software system interfaces with a physical or organizational system, it is expected that, where necessary, the software will change to accommodate the other components rather than vice versa. This means software systems are likely to be subject to a high degree of change.
SCM (Software Configuration Management)

It is a set of activities carried out for identifying, organizing and controlling changes throughout the life cycle of the software product.

Changes occur due to the following reasons:

• Changes in business strategy
• Reorganization of the business
• Changes in technology
• Porting the application to a new OS
• New regulations imposed by the government
• Budget or scheduling constraints

Need for SCM


Changes could affect:
• User interface
• Architecture
• Database structure
• Coding

Basic Requirements for an SCM

o Identification of objects in the software configuration
o Version Control
o Change Control
o Configuration Audit
o Status Reporting

SCM Process

The SCM process uses tools to ensure that each necessary change has been implemented adequately in the appropriate components. It defines a series of tasks to meet the objectives listed above:
1. Identification:
• Identify the items that would collectively form a software configuration.
• Name each identified software configuration item (SCI).
• Organize all software configuration items into a database or repository, along with their attributes and relationships.
• If a change is made to one configuration item, it should be possible to determine which other software configuration items are also affected.
2. Version Control:
• To manage different versions of the software configuration items, make the first one the baseline, i.e. version 1.0.
• Any change to the baseline becomes a new version.
3. Change control:
• A change is initiated by a ‘change request’.
• The affected software configuration items are ‘checked out’ of the database.
• The changes are applied to the particular software configuration items and all related SCIs.
• The software configuration items are then ‘checked in’ to the database with the next version number (a minimal check-out/check-in model is sketched after this list).
4. Configuration Audit:
To check whether the changes have been implemented properly, an audit is carried out. There are two types of audit:
• Formal technical review (a technical reviewer checks that the software configuration items taken are appropriate for the change made)
• Software configuration audit (done by the SQA group to assess adherence to quality standards)
5. Reporting and Documenting:
• All changes made are properly documented and communicated to all people who are directly or indirectly involved.
• A work product that is formally reviewed and agreed upon, and thereafter serves as the basis for further development, is called a baseline.
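The check-out/check-in cycle above can be pictured with a small sketch. This is a minimal illustration only: the in-memory repository, the item name and the version numbering are hypothetical simplifications, not a real version-control tool's API.

```python
# Minimal sketch of SCM change control (hypothetical in-memory
# repository, not a real version-control tool's API).

repository = {"login_module": {"version": "1.0", "checked_out": False}}

def check_out(item: str) -> dict:
    """A change request leads to checking the item out of the database."""
    repository[item]["checked_out"] = True
    return repository[item]

def check_in(item: str) -> str:
    """After the change is applied, the item is checked in with the
    next version number."""
    major, minor = map(int, repository[item]["version"].split("."))
    repository[item]["version"] = f"{major}.{minor + 1}"
    repository[item]["checked_out"] = False
    return repository[item]["version"]

check_out("login_module")        # change request approved for this SCI
print(check_in("login_module"))  # 1.1 becomes the new version
```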

Project Scheduling

Project-task scheduling is a significant project planning activity. It comprises deciding which functions will be taken up when. To schedule the project plan, a software project manager needs to do the following:

1. Identify all the functions required to complete the project.
2. Break down large functions into small activities.
3. Determine the dependencies among the various activities.
4. Estimate the most likely duration required to complete each activity.
5. Allocate resources to activities.
6. Plan the beginning and ending dates for the different activities.
7. Determine the critical path. The critical path is the chain of activities that determines the duration of the project (a minimal computation is sketched after this list).
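As an illustration of step 7, the sketch below computes the project duration as the longest chain through the dependency graph; the activities, durations and dependencies are hypothetical.

```python
# Minimal critical-path sketch: the project duration equals the longest
# duration chain through the activity dependency graph.

durations = {"A": 3, "B": 5, "C": 2, "D": 4}           # days (illustrative)
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}

def finish(act: str) -> int:
    """Earliest finish = activity duration + latest predecessor finish."""
    if act not in earliest_finish:
        start = max((finish(p) for p in predecessors[act]), default=0)
        earliest_finish[act] = start + durations[act]
    return earliest_finish[act]

project_duration = max(finish(a) for a in durations)
print(project_duration)  # 12: the chain A -> B -> D decides the schedule
```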

Main stages in producing a project schedule:

• Identifying activities

• Risk analysis

• Resource allocation

• Schedule production

Identifying activities
The first step in producing the plan is to decide what activities need to be carried out and
in what order they are to be done.
Risk analysis
The ideal activity plan will then be the subject of an activity risk analysis, aimed at
identifying potential problems.
Resource allocation
Based on the activity plan, resources are allocated to the activities.
Schedule production
Once resources are allocated to each activity, the project schedule can be published.
Scheduling:

Scheduling is the process of identifying the time at which each activity will start and end. It is usually represented using a bar chart.

It depends on:

1. Availability of staff members

2. Availability of resources
Advantages:

• Suitable for small projects

Disadvantages:

• Not very clear

• Difficult to use for large projects

Introduction to DevOps:
DevOps is basically a combination of two words- Development and Operations. DevOps is a
culture that implements the technology in order to promote collaboration between the
developer team and the operations team to deploy code to production faster in an automated
and repeatable way.

Why DevOps?

The goal of DevOps is to increase an organization’s speed when it comes to delivering applications and services. Many companies have successfully implemented DevOps to enhance their user experience, including Amazon, Netflix, etc.
Facebook’s mobile app, which is updated every two weeks, effectively tells users: you can have what you want, and you can have it now. Ever wondered how Facebook was able to do this so smoothly? It is the DevOps philosophy that helps Facebook ensure that its apps aren’t outdated and that users get the best experience on Facebook. Facebook accomplishes this through a code ownership model that makes its developers responsible for each kernel of code they write and update, including testing and supporting it through production and delivery. Through policies like this, Facebook has developed a DevOps culture and has successfully accelerated its development lifecycle.
Industries have started to gear up for digital transformation by shifting their release timelines to weeks and months instead of years, while maintaining high quality as a result. The solution to all this is DevOps.
DevOps Lifecycle:
The DevOps lifecycle is the methodology by which professional development teams come together to bring products to market more efficiently and quickly. The structure of the DevOps lifecycle consists of the Plan, Code, Build, Test, Release, Deploy, Operate, and Monitor stages (a minimal pipeline runner is sketched after the list below).

• Plan: Professionals determine the commercial needs and gather end-user opinions at this stage of the DevOps lifecycle.
• Code: At this stage, the code is developed; to simplify the design, the team of developers uses tools and extensions that take care of security problems.
• Build: After the coding part, programmers use various tools to submit the code to the common code source.
• Test: This stage is very important to assure software integrity. Various sorts of tests are done, such as user acceptability testing, safety testing, speed testing, and many more.
• Release: At this stage, everything is ready to be deployed in the operational environment.
• Deploy: At this stage, Infrastructure-as-Code assists in creating the operational infrastructure and subsequently publishes the build using various DevOps lifecycle tools.
• Operate: At this stage, the available version is ready for users to use. Here, the operations department looks after the server configuration and deployment.
• Monitor: Observation is done at this stage, based on data gathered from consumer behavior, the efficiency of applications, and various other sources.
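To make the flow concrete, here is a minimal sketch of a staged pipeline runner. The stage functions are hypothetical placeholders for real tooling, not any actual CI system's API.

```python
# Minimal sketch of a DevOps lifecycle runner: stages execute in order,
# and a failure stops the pipeline so feedback reaches the team quickly.

def plan():    print("gathering requirements and end-user opinions")
def code():    print("developing and committing changes")
def build():   print("submitting code and packaging a build")
def test():    print("running acceptance, safety and speed tests")
def release(): print("preparing the build for the operational environment")
def deploy():  print("publishing the build onto the infrastructure")
def operate(): print("serving users and managing configuration")
def monitor(): print("collecting behavior and efficiency data")

STAGES = [plan, code, build, test, release, deploy, operate, monitor]

def run_pipeline() -> bool:
    for stage in STAGES:
        try:
            stage()
        except Exception as err:
            print(f"pipeline stopped at {stage.__name__}: {err}")
            return False
    return True

run_pipeline()
```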

Best Practices to follow:

• Implement automated dashboards
• Keep the entire team together
• Allow DevOps to be a cultural change
• Be patient with the developers
• Maintain a centralized unit
• Build a flexible infrastructure
Advantages:
1. Faster Delivery: DevOps enables organizations to release new products and updates faster
and more frequently, which can lead to a competitive advantage.
2. Improved Collaboration: DevOps promotes collaboration between development and
operations teams, resulting in better communication, increased efficiency, and reduced
friction.
3. Improved Quality: DevOps emphasizes automated testing and continuous integration,
which helps to catch bugs early in the development process and improve the overall quality
of software.
4. Increased Automation: DevOps enables organizations to automate many manual processes,
freeing up time for more strategic work and reducing the risk of human error.
5. Better Scalability: DevOps enables organizations to quickly and efficiently scale their
infrastructure to meet changing demands, improving the ability to respond to business
needs.
6. Increased Customer Satisfaction: DevOps helps organizations to deliver new features and
updates more quickly, which can result in increased customer satisfaction and loyalty.
7. Improved Security: DevOps promotes security best practices, such as continuous testing
and monitoring, which can help to reduce the risk of security breaches and improve the
overall security of an organization’s systems.
8. Better Resource Utilization: DevOps enables organizations to optimize their use of
resources, including hardware, software, and personnel, which can result in cost savings
and improved efficiency.
Disadvantages:
1. High Initial Investment: Implementing DevOps can be a complex and costly process,
requiring significant investment in technology, infrastructure, and personnel.
2. Skills Shortage: Finding qualified DevOps professionals can be a challenge, and
organizations may need to invest in training and development programs to build the
necessary skills within their teams.
3. Resistance to Change: Some employees may resist the cultural and organizational changes required for successful DevOps adoption, which can result in resistance to collaboration and reduced efficiency.
4. Lack of Standardization: DevOps is still a relatively new field, and there is a lack of
standardization in terms of methodologies, tools, and processes. This can make it difficult
for organizations to determine the best approach for their specific needs.
5. Increased Complexity: DevOps can increase the complexity of software delivery, requiring
organizations to manage a larger number of moving parts and integrate multiple systems
and tools.
6. Dependency on Technology: DevOps relies heavily on technology, and organizations may
need to invest in a variety of tools and platforms to support the DevOps process.
7. Need for Continuous Improvement: DevOps requires ongoing improvement and
adaptation, as new technologies and best practices emerge. Organizations must be prepared
to continuously adapt and evolve their DevOps practices to remain competitive.
Cloud as a Platform - Operations:
Cloud platforms serve as a powerful foundation for operators in a DevOps environment. Here's
how:

Infrastructure Provisioning:

Cloud platforms offer Infrastructure as a Service (IaaS), allowing operators to provision and
manage virtual machines, storage, and networking resources. This is often done using
Infrastructure as Code (IaC) tools, providing a way to automate and version infrastructure
setups.
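To give a flavor of the IaC idea, the sketch below declares a desired state in code and computes what must be created to reach it. The resource names and fields are illustrative; real IaC tools such as Terraform use their own declarative languages.

```python
# Minimal Infrastructure-as-Code flavored sketch: infrastructure is
# described as data, so it can be version-controlled and re-applied
# repeatedly with the same result.

desired_state = {
    "vm-web-1": {"type": "vm", "size": "small"},
    "vm-web-2": {"type": "vm", "size": "small"},
    "db-main":  {"type": "database", "size": "medium"},
}

current_state = {"vm-web-1": {"type": "vm", "size": "small"}}

def reconcile(desired: dict, current: dict) -> dict:
    """Return the resources that must be provisioned so the actual
    infrastructure matches the declared state."""
    return {name: spec for name, spec in desired.items() if name not in current}

print(reconcile(desired_state, current_state))
# vm-web-2 and db-main would then be provisioned automatically
```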

Scalability and Elasticity:

Cloud platforms enable operators to scale resources up or down based on demand. This
scalability ensures that the infrastructure can handle varying workloads efficiently, and elasticity
allows resources to automatically adjust to meet demand.
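A minimal sketch of the scaling decision, with illustrative thresholds and bounds:

```python
# Minimal autoscaling decision sketch: scale out under load, scale in
# when idle, always staying within fixed instance bounds.

def scale_decision(cpu_percent: float, instances: int) -> int:
    if cpu_percent > 80 and instances < 10:
        return instances + 1   # add capacity to absorb demand
    if cpu_percent < 20 and instances > 2:
        return instances - 1   # release unused capacity to save cost
    return instances

print(scale_decision(cpu_percent=85, instances=3))  # 4
print(scale_decision(cpu_percent=10, instances=3))  # 2
```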

Automation and Orchestration:

Cloud services support automation tools and orchestration frameworks that allow operators to
automate routine tasks, such as provisioning, configuration changes, and scaling. This reduces
manual efforts, minimizes errors, and accelerates deployment processes.

Monitoring and Logging:

Cloud providers offer robust monitoring and logging services that operators can use to track
system performance, detect issues, and troubleshoot problems. These tools provide real-time
insights into the health of the infrastructure.
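As a sketch of the idea, the loop below polls a health endpoint and reports status; the URL is a hypothetical service endpoint, and only the standard library is used.

```python
# Minimal health-check monitoring sketch (standard library only).

import time
import urllib.request

def check_health(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor(url: str, interval_sec: int = 30) -> None:
    while True:
        state = "healthy" if check_health(url) else "DOWN - alert operators"
        print(f"{time.strftime('%H:%M:%S')} {url}: {state}")
        time.sleep(interval_sec)

# monitor("http://example.internal/healthz")  # runs until interrupted
```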

Security Management:

Cloud platforms provide a range of security services, including identity and access management,
encryption, and network security features. Operators can utilize these tools to implement and
enforce security best practices for the infrastructure.

Patch Management:

Operators can leverage cloud services to manage and apply patches to the operating system and
software components. Cloud platforms often provide tools to automate patching processes and
ensure that systems are up-to-date.

High Availability and Disaster Recovery:

Cloud platforms offer features for designing highly available and fault-tolerant architectures.
Operators can configure load balancing, implement failover strategies, and leverage backup and
recovery services for disaster resilience.

Cost Optimization:
Cloud-based infrastructure allows operators to optimize costs by adjusting resources based on
actual usage. Cloud providers typically offer tools for monitoring resource consumption and
optimizing spending.

Collaboration and Communication:

Cloud platforms often include collaboration and communication tools, fostering better teamwork
and communication among operators, developers, and other stakeholders involved in the
DevOps process.

DevOps Pipeline:

A DevOps pipeline is a set of automated processes and tools that allows developers and
operations professionals to collaborate on building and deploying code to a production
environment.

DevOps is a revolutionary movement in that it breaks down the siloed organizational structure that separated development and operations. The result is a cultural shift where developers and operations professionals work together, embrace automation, increase deployment speed, and are more flexible.

The resulting DevOps structure has clear benefits: Teams who adopt DevOps practices can
improve and streamline their deployment pipeline, which reduces incident frequency and impact.
The DevOps practice of “you build it, you run it” is fast becoming the norm and with good
reason — nearly every respondent (99%) of the 2020 DevOps Trends Survey said DevOps has
had a positive impact on their organization, with nearly half seeing a faster time to market and
improved deployment frequency.
Yet implementing DevOps is easier said than done. It takes the right people, processes, and tools
to successfully implement DevOps.

What is the DevOps pipeline?

A DevOps pipeline is a set of automated processes and tools that allows both developers and
operations professionals to work cohesively to build and deploy code to a production
environment. While a DevOps pipeline can differ by organization, it typically includes
build automation/continuous integration, automation testing, validation, and reporting. It may
also include one or more manual gates that require human intervention before code is allowed to
proceed.
“Continuous” is the defining characteristic of a DevOps pipeline. This includes continuous integration, continuous delivery/deployment (CI/CD), continuous feedback, and continuous operations. Instead of one-off tests or scheduled deployments, each function occurs on an ongoing basis.
Considerations for building a DevOps pipeline:

Since there isn’t one standard DevOps pipeline, an organization’s design and implementation of
a DevOps pipeline depends on its technology stack, a DevOps engineer’s level of experience,
budget, and more. A DevOps engineer should have a wide-ranging knowledge of both
development and operations, including coding, infrastructure management, system
administration, and DevOps toolchains.
Plus, each organization has a different technology stack that can impact the process. For example, if your codebase is node.js, factors include whether you use a local proxy npm registry, and whether you download the source code and run `npm install` at every stage in the pipeline or do it once and generate an artifact that moves through the pipeline. Or, if an application is container-based, you need to decide whether to use a local or remote container registry, and whether to build the container once and move it through the pipeline or rebuild it at every stage.

While every pipeline is unique, most organizations use similar fundamental components. Each
step is evaluated for success before moving on to the next stage of the pipeline. In the event of a
failure, the pipeline is stopped, and feedback is provided to the developer.
Components of a DevOps pipeline

1. Continuous integration/continuous delivery/deployment (CI/CD)

Continuous integration is the practice of making frequent commits to a common source code repository. It means continuously integrating code changes into the existing code base so that any conflicts between different developers’ code changes are quickly identified and relatively easy to remediate. This practice is critically important to increasing deployment efficiency.
We believe that trunk-based development is a requirement of continuous integration. If you are
not making frequent commits to a common branch in a shared source code repository, you are
not doing continuous integration. If your build and test processes are automated but your
developers are working on isolated, long-living feature branches that are infrequently integrated
into a shared branch, you are also not doing continuous integration.
Continuous delivery ensures that the “main” or “trunk” branch of an application's source code
is always in a releasable state. In other words, if management came to your desk at 4:30 PM on a
Friday and said, “We need the latest version released right now,” that version could be deployed
with the push of a button and without fear of failure.
This means having a pre-production environment that is as close to identical to the production
environment as possible and ensuring that automated tests are executed, so that every variable
that might cause a failure is identified before code is merged into the main or trunk branch.
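A minimal sketch of such a release-readiness gate, assuming hypothetical check functions that stand in for the real automated suites:

```python
# Minimal continuous-delivery gate sketch: main is releasable only if
# every automated check passes (check functions here are stand-ins).

def unit_tests_pass() -> bool: return True
def integration_tests_pass() -> bool: return True
def staging_smoke_tests_pass() -> bool: return True

CHECKS = [unit_tests_pass, integration_tests_pass, staging_smoke_tests_pass]

def main_is_releasable() -> bool:
    # Every variable that might cause a failure is checked before merge,
    # so the trunk stays deployable "with the push of a button".
    return all(check() for check in CHECKS)

if main_is_releasable():
    print("main branch is in a releasable state")
```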

Continuous deployment entails having a level of continuous testing and operations that is so
robust, new versions of software are validated and deployed into a production environment
without requiring any human intervention.
This is rare and in most cases unnecessary. It is typically only the unicorn businesses who have
hundreds or thousands of developers and have many releases each day that require, or even want
to have, this level of automation.

To simplify the difference between continuous delivery and continuous deployment, think of
delivery as the FedEx person handing you a box, and deployment as you opening that box and
using what’s inside. If a change to the product is required between the time you receive the box
and when you open it, the manufacturer is in trouble!

2. Continuous feedback

The single biggest pain point of the old waterfall method of software development (and consequently why agile methodologies were designed) was the lack of timely feedback. When new features took months or years to go from idea to implementation, it was almost guaranteed that the end result would be something other than what the customer expected or wanted. Agile succeeded in ensuring that developers received faster feedback from stakeholders. Now with DevOps, developers receive continuous feedback not only from stakeholders, but from systematic testing and monitoring of their code in the pipeline.
Continuous testing is a critical component of every DevOps pipeline and one of the primary
enablers of continuous feedback. In a DevOps process, changes move continuously from
development to testing to deployment, which leads not only to faster releases, but a higher
quality product. This means having automated tests throughout your pipeline, including unit tests
that run on every build change, smoke tests, functional tests, and end-to-end tests.
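As an example of the kind of unit test that runs on every build change, here is a minimal sketch; the function under test is a hypothetical example, not code from any real pipeline.

```python
# Minimal unit-test sketch: a CI pipeline would run this on every commit.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```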
Continuous monitoring is another important component of continuous feedback. A DevOps
approach entails using continuous monitoring in the staging, testing, and even development
environments. It is sometimes useful to monitor pre-production environments for anomalous
behavior, but in general this is an approach used to continuously assess the health and
performance of applications in production.
Numerous tools and services exist to provide this functionality, and this may involve anything from monitoring your on-premises or cloud infrastructure (server resources, networking, etc.) to the performance of your application or its API interfaces.

3. Continuous operations

Continuous operations is a relatively new and less common term, and definitions vary. One way to interpret it is as “continuous uptime”. Consider, for example, a blue/green deployment strategy in which you have two separate production environments, one that is “blue” (publicly accessible) and one that is “green” (not publicly accessible). In this situation, new code would be deployed to the green environment, and once it was confirmed to be functional, a switch would be flipped (usually on a load balancer) so that traffic moved from the “blue” system to the “green” system. The result is no downtime for the end-users.
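A minimal sketch of that switch; the environment names and health check are illustrative, and in practice the flip happens at the load balancer rather than in application code.

```python
# Minimal blue/green switch sketch: traffic moves to green only after
# the new version is confirmed healthy, so users see no downtime.

environments = {"blue": "v1.4 (live)", "green": "v1.5 (staged)"}
live = "blue"

def green_is_healthy() -> bool:
    return True  # stand-in for smoke tests against the green environment

def switch_traffic() -> str:
    global live
    if live == "blue" and green_is_healthy():
        live = "green"
    return live

print(switch_traffic())  # 'green' now receives production traffic
```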
Another way to think of Continuous operations is as continuous alerting. This is the notion that
engineering staff is on-call and notified if any performance anomalies in the application or
infrastructure occur. In most cases, continuous alerting goes hand in hand with continuous
monitoring.

Test:

The test phase is triggered after a build artifact is created and successfully deployed to staging or testing environments. A comprehensive test suite takes a considerable amount of time to execute, so this phase should fail fast: the quick, inexpensive tests run first and the more expensive test tasks are left for the end.
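A minimal sketch of that fail-fast ordering, with illustrative suite names and costs:

```python
# Minimal fail-fast ordering sketch: run the cheapest suites first so an
# early failure avoids paying for the expensive ones.

def run_suite(name: str) -> bool:
    print(f"running {name} tests")
    return True  # stand-in for invoking the real test runner

# (suite, approximate cost in minutes), to be run cheapest first
suites = [("end-to-end", 60), ("unit", 2), ("integration", 15)]

for name, cost in sorted(suites, key=lambda s: s[1]):
    if not run_suite(name):
        print(f"{name} suite failed (~{cost} min); stopping the phase early")
        break
```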

The test phase uses dynamic application security testing (DAST) tools to detect live application
flows like user authentication, authorization, SQL injection, and API-related endpoints. The
security-focused DAST analyzes an application against a list of known high-severity issues, such
as those listed in the OWASP Top 10.
There are numerous open source and paid testing tools available, which offer a variety of functionality and support for language ecosystems, including BDD Automated Security Tests, JBroFuzz, Boofuzz, OWASP ZAP, Arachni, IBM AppScan, GAUNTLT, and the SecApp suite.
Deploy:

If the previous phases pass successfully, it's time to deploy the build artifact to production. The
security areas of concern to address during the deploy phase are those that only happen against
the live production system. For example, any differences in configuration between the
production environment and the previous staging and development environments should be
thoroughly reviewed. Production TLS and DRM certificates should be validated and reviewed
for upcoming renewal.
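For instance, a minimal sketch of such a configuration review, with hypothetical keys and values:

```python
# Minimal config-review sketch: surface every setting that differs
# between staging and production so it can be deliberately reviewed.

staging = {"db_host": "db.staging", "tls": True, "debug": True}
production = {"db_host": "db.prod", "tls": True, "debug": False}

def config_diff(a: dict, b: dict) -> dict:
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

for key, (stage_val, prod_val) in config_diff(staging, production).items():
    print(f"{key}: staging={stage_val!r} production={prod_val!r}")
```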

The deploy phase is a good time for runtime verification tools like Osquery, Falco, and Tripwire,
which extract information from a running system in order to determine whether it performs as
expected. Organizations can also run chaos engineering principles by experimenting on a system
to build confidence in the system’s capability to withstand turbulent conditions. Real-world
events can be simulated, like servers that crash, hard drive failures, or severed network
connections. Netflix is widely known for its Chaos Monkey tool, which exercises chaos
engineering principles. Netflix also utilizes a Security Monkey tool that looks for violations or
vulnerabilities in improperly configured infrastructure security groups and cuts any vulnerable
servers.
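In the spirit of those chaos experiments, here is a minimal sketch that kills one simulated server and checks that the service stays available; all names are illustrative, and this is not how Chaos Monkey itself is implemented.

```python
# Minimal chaos-experiment sketch: inject a random server failure and
# verify the service remains available.

import random

servers = {"web-1": True, "web-2": True, "web-3": True}  # True = up

def service_available() -> bool:
    return any(servers.values())  # survives while any server is up

def chaos_experiment() -> None:
    victim = random.choice(list(servers))
    servers[victim] = False  # simulate a crashed server
    print(f"killed {victim}; service available: {service_available()}")

chaos_experiment()  # confidence grows if availability holds under failure
```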
