OOSE Unit 5
Project:
A project can be defined as a specific plan or design, a transfer of ideas from one domain into another, or a set of activities planned to achieve a particular aim.
Eg:
➢ A project to build a new hospital
➢ A research work undertaken by a college student
Project Characteristics:
• Planning is required
Management:
Management can be defined as all activities and tasks undertaken by one or more persons for the purpose of planning and controlling the activities of others in order to achieve objectives or complete an activity that could not be achieved by others acting independently.
There are three needs for software project management. These are:
1. Time
2. Cost
3. Quality
It is an essential part of the software organization to deliver a quality product, keep the cost within the client's budget, and deliver the project as per schedule. There are various factors, both external and internal, which may impact this triple constraint. Any one of the three factors can severely affect the other two.
Project Manager
A project manager is a person who has the overall responsibility for the planning, design, execution, monitoring, controlling, and closure of a project. A project manager plays an essential role in the success of a project.
A project manager is the person responsible for making decisions in both large and small projects. The project manager manages risk and minimizes uncertainty. Every decision the project manager makes should directly benefit the project.
Role of a Project Manager:
1. Leader
A project manager must lead the team and provide direction so that everyone understands what is expected of them.
2. Medium:
The project manager is a medium between his clients and his team. He must coordinate and relay all the appropriate information from the clients to his team, and report progress to senior management.
3. Mentor:
He should be there to guide his team at each step and to make sure the team stays cohesive. He provides recommendations to his team and points them in the right direction.
Software projects differ from other kinds of project in several ways:
Invisibility: When a physical artifact such as a bridge or road is being constructed, the progress being made can actually be seen. With software, progress is not immediately visible.
Complexity: Per dollar, pound, or euro spent, software products contain more complexity than other engineered artifacts.
Conformity: The ‘traditional’ engineer usually works with physical systems and physical materials like cement and steel. These physical systems can have some complexity, but are governed by physical laws that are consistent. Software developers have to conform to the requirements of human clients. It is not just that individuals can be inconsistent. Organizations, because of lapses in collective memory, in internal communication, or in effective decision-making, can exhibit remarkable ‘organizational stupidity’ that developers have to cater for.
Flexibility: The ease with which software can be changed is usually seen as one of its strengths. However, this means that where the software system interfaces with a physical or organizational system, it is expected that, where necessary, the software will change to accommodate the other components rather than vice versa. This means that software systems are likely to be subject to a high degree of change.
SCM (S/w config management)
It is a set of activities carried out for identifying, organizing, and controlling changes throughout the life cycle of the software product.
SCM Process
It uses tools that ensure the necessary changes have been implemented adequately in the appropriate components. The SCM process defines a number of tasks: identification of configuration items, version control, change control, configuration auditing, and status reporting.
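To make these tasks concrete, the following minimal Python sketch (hypothetical names, not a real SCM tool) shows how configuration items might be identified, version-controlled, and guarded by change control:

```python
# Minimal sketch of SCM bookkeeping: identification, version control,
# and change control. All names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str                                      # identification: unique name
    versions: list = field(default_factory=list)   # version control: full history

    def commit(self, content: str, author: str):
        """Record a new version rather than overwriting the old one."""
        self.versions.append({"rev": len(self.versions) + 1,
                              "content": content, "author": author})

    def revert(self, rev: int) -> str:
        """Configuration auditing: any past baseline can be recovered."""
        return self.versions[rev - 1]["content"]

# Change control: a change is applied only after it has been approved.
def apply_change(item: ConfigurationItem, content: str, author: str, approved: bool):
    if not approved:
        raise PermissionError("change request rejected by the change control board")
    item.commit(content, author)

spec = ConfigurationItem("requirements.txt")
apply_change(spec, "v1 requirements", "alice", approved=True)
apply_change(spec, "v2 requirements", "bob", approved=True)
print(spec.revert(1))   # -> "v1 requirements"
```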
Project Scheduling
• Identifying activities
• Risk analysis
• Resource allocation
• Schedule production
Identifying activities
The first step in producing the plan is to decide what activities need to be carried out and
in what order they are to be done.
Risk analysis
The ideal activity plan will then be the subject of an activity risk analysis, aimed at
identifying potential problems.
Resource allocation
Based on the activity plan, resources are allocated to the activities.
Schedule production
Once resources are allocated to each activity, the project schedule is published.
Scheduling:
Scheduling is the process of identifying the time at which each activity will start and end. A schedule is commonly represented using a bar chart (Gantt chart).
It depends on:
1. The activity plan (which activities are needed and in what order)
2. Availability of resources
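The sketch below illustrates the idea: given activity durations and dependencies (all data here is hypothetical), it computes earliest start and end times and prints a crude text bar chart of the schedule:

```python
# Sketch: derive a schedule (start/end per activity) from durations and
# dependencies, then print a crude text bar chart. Data is hypothetical.
activities = {               # name: (duration in weeks, predecessors)
    "design":  (2, []),
    "code":    (4, ["design"]),
    "test":    (3, ["code"]),
    "docs":    (2, ["design"]),
}

start, end = {}, {}
for name in activities:                 # dicts keep insertion order; here
    duration, preds = activities[name]  # predecessors are defined first
    start[name] = max((end[p] for p in preds), default=0)
    end[name] = start[name] + duration

for name in activities:                 # bar chart: one row per activity
    bar = " " * start[name] + "#" * (end[name] - start[name])
    print(f"{name:8s} |{bar}")
```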
Introduction to DevOps:
DevOps is basically a combination of two words: Development and Operations. DevOps is a culture that uses technology to promote collaboration between the development team and the operations team, so that code can be deployed to production faster in an automated and repeatable way.
DevOps Lifecycle:
• Plan: Professionals determine the commercial needs and gather end-user opinions at this level of the DevOps lifecycle.
• Code: At this level, the code is developed, and in order to simplify the design, the team of developers uses tools and extensions that take care of security problems.
• Build: After the coding part, programmers use various tools to submit the code to the common code repository.
• Test: This level is very important to ensure software integrity. Various sorts of tests are done, such as user acceptability testing, security testing, speed testing, and many more.
• Release: At this level, everything is ready to be deployed in the operational environment.
• Deploy: At this level, Infrastructure-as-Code helps create the operational infrastructure and subsequently publishes the build using various DevOps lifecycle tools.
• Operate: At this level, the new version is available for users. The operations department looks after the server configuration and deployment.
• Monitor: At this level, observation is carried out based on data gathered from consumer behaviour, the performance of applications, and various other sources.
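To make the flow concrete, here is a minimal Python sketch of a pipeline runner that walks these levels in order; each stage function is a placeholder, not a real tool:

```python
# Sketch of the DevOps lifecycle as an ordered set of automated stages.
# Each stage is a placeholder; real pipelines invoke build/test/deploy tools.
import sys

def build():   print("compiling and packaging...");          return True
def test():    print("running the automated test suite..."); return True
def release(): print("tagging a release candidate...");      return True
def deploy():  print("deploying to production...");          return True

PIPELINE = [("build", build), ("test", test),
            ("release", release), ("deploy", deploy)]

for stage_name, stage in PIPELINE:
    if not stage():                 # fail fast: stop at the first failure
        print(f"stage '{stage_name}' failed; aborting pipeline")
        sys.exit(1)
print("pipeline finished: new version is live and being monitored")
```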
Infrastructure Provisioning:
Cloud platforms offer Infrastructure as a Service (IaaS), allowing operators to provision and
manage virtual machines, storage, and networking resources. This is often done using
Infrastructure as Code (IaC) tools, providing a way to automate and version infrastructure
setups.
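Provisioning is usually expressed declaratively with IaC tools such as Terraform; as a minimal illustration of the same idea done programmatically in Python, the sketch below uses boto3 (it assumes AWS credentials are configured, and the AMI id is a placeholder):

```python
# Sketch: provision a virtual machine programmatically. Assumes AWS
# credentials are configured; the AMI id below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-00000000000000000",   # placeholder image id
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "managed-by", "Value": "iac-demo"}],
    }],
)
print("launched:", response["Instances"][0]["InstanceId"])
```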
Scalability and Elasticity:
Cloud platforms enable operators to scale resources up or down based on demand. This
scalability ensures that the infrastructure can handle varying workloads efficiently, and elasticity
allows resources to automatically adjust to meet demand.
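As a hedged illustration of scaling to demand, this boto3 sketch adjusts the desired capacity of an assumed, pre-existing auto scaling group based on a load reading (the group name and the load_average() helper are hypothetical):

```python
# Sketch: scale an existing auto scaling group up or down based on load.
# The group name and the load_average() helper are hypothetical.
import boto3

def load_average() -> float:
    return 0.85        # stand-in for a real metric query (e.g. CloudWatch)

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
GROUP = "web-asg"      # assumed to exist already

desired = 4 if load_average() > 0.8 else 2   # crude scaling rule
autoscaling.set_desired_capacity(
    AutoScalingGroupName=GROUP,
    DesiredCapacity=desired,
)
print(f"set desired capacity of {GROUP} to {desired}")
```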
Automation and Orchestration:
Cloud services support automation tools and orchestration frameworks that allow operators to
automate routine tasks, such as provisioning, configuration changes, and scaling. This reduces
manual efforts, minimizes errors, and accelerates deployment processes.
Monitoring and Logging:
Cloud providers offer robust monitoring and logging services that operators can use to track
system performance, detect issues, and troubleshoot problems. These tools provide real-time
insights into the health of the infrastructure.
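A minimal monitoring loop might look like the sketch below (the health-check URL is hypothetical); real deployments would use the cloud provider's managed monitoring service instead:

```python
# Sketch: poll a service health endpoint and log the result.
# The URL is hypothetical; production systems use managed monitoring.
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
URL = "http://example.com/healthz"   # hypothetical health endpoint

for _ in range(3):                   # a real monitor would loop forever
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            logging.info("health check OK, status %s", resp.status)
    except Exception as exc:         # any failure is an actionable signal
        logging.error("health check FAILED: %s", exc)
    time.sleep(10)
```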
Security Management:
Cloud platforms provide a range of security services, including identity and access management,
encryption, and network security features. Operators can utilize these tools to implement and
enforce security best practices for the infrastructure.
Patch Management:
Operators can leverage cloud services to manage and apply patches to the operating system and
software components. Cloud platforms often provide tools to automate patching processes and
ensure that systems are up-to-date.
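For example, a scheduled patch job on a Debian/Ubuntu host could be sketched as below; cloud patching would normally go through the provider's managed patching service rather than ad-hoc scripts:

```python
# Sketch: apply OS patches on a Debian/Ubuntu host. Cloud providers
# usually offer managed patching; this is only a minimal illustration.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # raise if the command fails

run(["sudo", "apt-get", "update"])
run(["sudo", "apt-get", "-y", "upgrade"])
```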
High Availability and Disaster Recovery:
Cloud platforms offer features for designing highly available and fault-tolerant architectures.
Operators can configure load balancing, implement failover strategies, and leverage backup and
recovery services for disaster resilience.
Cost Optimization:
Cloud-based infrastructure allows operators to optimize costs by adjusting resources based on
actual usage. Cloud providers typically offer tools for monitoring resource consumption and
optimizing spending.
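As one illustration of usage-based cost optimization, the sketch below stops instances whose recent CPU use is negligible; the instance ids and the cpu_percent() helper are hypothetical stand-ins for a real metrics query:

```python
# Sketch: stop idle instances to save cost. The cpu_percent() helper is
# a stand-in for a real metrics query; instance ids are hypothetical.
import boto3

def cpu_percent(instance_id: str) -> float:
    return {"i-aaa": 1.5, "i-bbb": 62.0}.get(instance_id, 0.0)

ec2 = boto3.client("ec2", region_name="us-east-1")
for instance_id in ["i-aaa", "i-bbb"]:
    if cpu_percent(instance_id) < 5.0:        # "idle" threshold
        print("stopping idle instance", instance_id)
        ec2.stop_instances(InstanceIds=[instance_id])
```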
Collaboration and Communication:
Cloud platforms often include collaboration and communication tools, fostering better teamwork
and communication among operators, developers, and other stakeholders involved in the
DevOps process.
DevOps Pipeline:
A DevOps pipeline is a set of automated processes and tools that allows developers and
operations professionals to collaborate on building and deploying code to a production
environment.
The resulting DevOps structure has clear benefits: Teams who adopt DevOps practices can
improve and streamline their deployment pipeline, which reduces incident frequency and impact.
The DevOps practice of “you build it, you run it” is fast becoming the norm and with good
reason — nearly every respondent (99%) of the 2020 DevOps Trends Survey said DevOps has
had a positive impact on their organization, with nearly half seeing a faster time to market and
improved deployment frequency.
Yet implementing DevOps is easier said than done. It takes the right people, processes, and tools
to successfully implement DevOps.
While a DevOps pipeline can differ by organization, it typically includes build automation/continuous integration, automation testing, validation, and reporting. It may also include one or more manual gates that require human intervention before code is allowed to proceed.
“Continuous” is the defining characteristic of a DevOps pipeline. This includes continuous
integration, continuous delivery/deployment (CI/CD), continuous feedback, and continuous
operations. Instead of one-off tests or scheduled deployments, each function occurs on an
ongoing basis.
Considerations for building a DevOps pipeline:
Since there isn’t one standard DevOps pipeline, an organization’s design and implementation of
a DevOps pipeline depends on its technology stack, a DevOps engineer’s level of experience,
budget, and more. A DevOps engineer should have a wide-ranging knowledge of both
development and operations, including coding, infrastructure management, system
administration, and DevOps toolchains.
Plus, each organization has a different technology stack that can impact the process. For
example, if your codebase is node.js, factors include whether you use a local proxy npm registry,
whether you download the source code and run `npm install` at every stage in the pipeline, or do
it once and generate an artifact that moves through the pipeline. Or, if an application is container-based, you need to decide whether to use a local or remote container registry, and whether to build the container once and move it through the pipeline or rebuild it at every stage.
While every pipeline is unique, most organizations use similar fundamental components. Each
step is evaluated for success before moving on to the next stage of the pipeline. In the event of a
failure, the pipeline is stopped, and feedback is provided to the developer.
Components of a DevOps pipeline
1. Continuous integration, delivery, and deployment
Continuous integration is the practice of making frequent commits to a common source code
repository. It means continuously integrating code changes into the existing code base so that any conflicts between different developers’ code changes are quickly identified and relatively easy to
remediate. This practice is critically important to increasing deployment efficiency.
We believe that trunk-based development is a requirement of continuous integration. If you are
not making frequent commits to a common branch in a shared source code repository, you are
not doing continuous integration. If your build and test processes are automated but your
developers are working on isolated, long-living feature branches that are infrequently integrated
into a shared branch, you are also not doing continuous integration.
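A minimal CI step, triggered on every commit to the shared branch, might be sketched in Python as follows (the branch name and commands are placeholders for whatever your project uses):

```python
# Sketch of a CI step: fetch the latest trunk, then run the test suite.
# Branch name and commands are placeholders for illustration.
import subprocess
import sys

def step(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode == 0

ok = (step(["git", "pull", "origin", "main"])      # integrate latest changes
      and step(["python", "-m", "pytest", "-q"]))  # run the test suite

if not ok:
    print("CI failed: fix the build before merging further changes")
    sys.exit(1)
print("CI passed: trunk is green")
```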
Continuous delivery ensures that the “main” or “trunk” branch of an application's source code
is always in a releasable state. In other words, if management came to your desk at 4:30 PM on a
Friday and said, “We need the latest version released right now,” that version could be deployed
with the push of a button and without fear of failure.
This means having a pre-production environment that is as close to identical to the production
environment as possible and ensuring that automated tests are executed, so that every variable
that might cause a failure is identified before code is merged into the main or trunk branch.
Continuous deployment entails having a level of continuous testing and operations that is so
robust, new versions of software are validated and deployed into a production environment
without requiring any human intervention.
This is rare and in most cases unnecessary. It is typically only the unicorn businesses who have
hundreds or thousands of developers and have many releases each day that require, or even want
to have, this level of automation.
To simplify the difference between continuous delivery and continuous deployment, think of
delivery as the FedEx person handing you a box, and deployment as you opening that box and
using what’s inside. If a change to the product is required between the time you receive the box
and when you open it, the manufacturer is in trouble!
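The difference can be reduced to a single conditional in a pipeline script: continuous delivery keeps a manual approval gate before the production release, while continuous deployment removes it. A hedged sketch (the deploy function and environment variable are placeholders):

```python
# Sketch: the only difference between continuous delivery and continuous
# deployment is whether a human approval gate precedes the release.
import os

def deploy_to_production(version: str):
    print(f"deploying {version} to production")   # placeholder

def pipeline(version: str, continuous_deployment: bool):
    # ... the build and all automated tests have already passed here ...
    if not continuous_deployment:
        # continuous delivery: releasable at any time, but a human decides
        if os.environ.get("RELEASE_APPROVED") != "yes":
            print(f"{version} is releasable; waiting for approval")
            return
    deploy_to_production(version)   # continuous deployment: no human gate

pipeline("v1.4.2", continuous_deployment=False)
```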
2. Continuous feedback
The single biggest pain point of the old waterfall method of software development — and
consequently why agile methodologies were designed — was the lack of timely feedback. When
new features took months or years to go from idea to implementation, it was almost guaranteed
that the end result would be something other than what the customer expected or wanted. Agile
succeeded in ensuring that developers received faster feedback from stakeholders. Now with
DevOps, developers receive continuous feedback not only from stakeholders, but also from
systematic testing and monitoring of their code in the pipeline.
Continuous testing is a critical component of every DevOps pipeline and one of the primary
enablers of continuous feedback. In a DevOps process, changes move continuously from
development to testing to deployment, which leads not only to faster releases, but a higher
quality product. This means having automated tests throughout your pipeline, including unit tests
that run on every build change, smoke tests, functional tests, and end-to-end tests.
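For instance, unit and smoke tests in such a pipeline are often plain pytest functions like the sketch below (the add() function stands in for real application code):

```python
# Sketch: pytest-style tests that run automatically on every build change.
# The add() function stands in for real application code.
def add(a, b):
    return a + b

def test_add_unit():      # unit test: one function in isolation
    assert add(2, 3) == 5

def test_add_smoke():     # smoke test: the happy path works at all
    assert add(1, 1) > 0

# Run with:  python -m pytest -q
```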
Continuous monitoring is another important component of continuous feedback. A DevOps
approach entails using continuous monitoring in the staging, testing, and even development
environments. It is sometimes useful to monitor pre-production environments for anomalous
behavior, but in general this is an approach used to continuously assess the health and
performance of applications in production.
Numerous tools and services exist to provide this functionality, and it may involve anything from monitoring your on-premises or cloud infrastructure (server resources, networking, and so on) to monitoring the performance of your application and its API interfaces.
3. Continuous operations
Continuous operations is a relatively new and less common term, and definitions vary. One
way to interpret it is as “continuous uptime”. Consider, for example, a blue/green deployment strategy in which you have two separate production environments, one that is “blue” (publicly accessible) and one that is “green” (not publicly accessible). In this situation, new code would be deployed to the green environment, and once it was confirmed to be functional, a switch would be flipped (usually on a load balancer) so that traffic moved from the “blue” system to the “green” system. The result is no downtime for the end users.
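A minimal sketch of that cutover, with hypothetical environment URLs and a placeholder load-balancer update:

```python
# Sketch of a blue/green cutover: deploy to green, verify it, then flip
# traffic. URLs and the load-balancer call are hypothetical placeholders.
import urllib.request

BLUE = "http://blue.internal/healthz"     # currently serving users
GREEN = "http://green.internal/healthz"   # new version, not yet public

def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

def point_load_balancer_at(env: str):
    print(f"load balancer now routes traffic to {env}")  # placeholder

if healthy(GREEN):
    point_load_balancer_at("green")   # users see no downtime
else:
    print("green failed verification; blue keeps serving traffic")
```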
Another way to think of Continuous operations is as continuous alerting. This is the notion that
engineering staff is on-call and notified if any performance anomalies in the application or
infrastructure occur. In most cases, continuous alerting goes hand in hand with continuous
monitoring.
Test:
The test phase is triggered after a build artifact is created and successfully deployed to staging or
testing environments. A comprehensive test suite takes a considerable amount of time to execute, so this phase should fail fast: the quick, cheap tests run first, and the more expensive test tasks are left for the end.
The test phase uses dynamic application security testing (DAST) tools to detect live application
flows like user authentication, authorization, SQL injection, and API-related endpoints. The
security-focused DAST analyzes an application against a list of known high-severity issues, such
as those listed in the OWASP Top 10.
There are numerous open source and paid testing tools available, which offer a variety of
functionality and support for language ecosystems, including BDD Automated Security
Tests, JBroFuzz, Boofuzz, OWASP ZAP, Arachni, IBM AppScan, GAUNTLT, and SecApp
suite.
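As one concrete example, OWASP ZAP can be driven from Python via the python-owasp-zap-v2.4 client; a rough sketch (it assumes a ZAP daemon is already running on localhost:8080, and the target URL is a placeholder):

```python
# Rough sketch of driving a DAST scan through OWASP ZAP's Python client
# (pip install python-owasp-zap-v2.4). Assumes a ZAP daemon is already
# running on localhost:8080; the target URL is a placeholder.
import time
from zapv2 import ZAPv2

target = "http://staging.example.com"        # placeholder test target
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

scan_id = zap.spider.scan(target)            # crawl the application first
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)             # then run the active scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=target):  # report findings
    print(alert["risk"], "-", alert["alert"])
```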
Deploy:
If the previous phases pass successfully, it's time to deploy the build artifact to production. The
security areas of concern to address during the deploy phase are those that only happen against
the live production system. For example, any differences in configuration between the
production environment and the previous staging and development environments should be
thoroughly reviewed. Production TLS and DRM certificates should be validated and reviewed
for upcoming renewal.
The deploy phase is a good time for runtime verification tools like Osquery, Falco, and Tripwire,
which extract information from a running system in order to determine whether it performs as
expected. Organizations can also run chaos engineering principles by experimenting on a system
to build confidence in the system’s capability to withstand turbulent conditions. Real-world
events can be simulated, like servers that crash, hard drive failures, or severed network
connections. Netflix is widely known for its Chaos Monkey tool, which exercises chaos
engineering principles. Netflix also utilizes a Security Monkey tool that looks for violations or
vulnerabilities in improperly configured infrastructure security groups and cuts off any vulnerable servers.
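In the spirit of Chaos Monkey, a heavily simplified chaos experiment could look like the sketch below; the instance ids are hypothetical, and a real experiment needs guardrails and an opt-in schedule:

```python
# Heavily simplified, Chaos-Monkey-style sketch: terminate one instance
# at random to verify the system tolerates failure. Instance ids are
# hypothetical; real chaos experiments need guardrails and schedules.
import random
import boto3

candidates = ["i-web-1", "i-web-2", "i-web-3"]   # hypothetical fleet
victim = random.choice(candidates)

ec2 = boto3.client("ec2", region_name="us-east-1")
print("chaos experiment: terminating", victim)
ec2.terminate_instances(InstanceIds=[victim])
# Now watch the monitoring dashboards: did traffic fail over cleanly?
```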