SEUML: Types of Testing + FP Oriented Estimation Model + Study of Any Two Open Source Tools in DevOps
PRACTICAL – 8
Aim: Study of various software testing methods and design test cases.
Testing is the process of executing a program with the intent of finding errors. To perform well, software should be as error-free as possible. Successful testing uncovers defects so that they can be removed; however, testing alone cannot prove that a program is completely free of errors. Broadly, testing is carried out in two ways:
1. Manual Testing
2. Automation Testing
1. Manual Testing
Manual testing is a technique in which a tester exercises the functions and features of an application by hand, following a set of predefined test cases. The testers write test cases for the code, test the software, and prepare a final report on it. Manual testing is time-consuming because it is done by humans, and there is a chance of human error.
>> Advantages of Manual Testing:-
• Fast and accurate visual feedback: A human tester detects almost every visible bug and can test dynamically changing GUI designs such as layout and text.
• Less expensive: It is less expensive as it does not require any high-level skill or a specific type of
tool.
• No coding is required: No programming knowledge is required while using the black box testing
method. It is easy to learn for the new testers.
• Efficient for unplanned changes: Manual testing is suitable for unplanned changes to the application, as it can be adapted easily.
2. Automation Testing:-
Automation testing is a technique in which the tester writes scripts and uses suitable software or automation tools to test the software. It automates a manual process, allowing repetitive tasks to be executed without the intervention of a manual tester.
>> Advantages of Automation Testing:-
• Simplifies Test Case Execution: Automation testing can be left virtually unattended, allowing the results to be monitored at the end of the process and thus simplifying overall test execution.
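As a minimal sketch of the idea, an automated test can be written with Python's built-in unittest module. The add function here is a hypothetical function under test, assumed only for illustration:

```python
import unittest

def add(a, b):
    """Hypothetical function under test (an assumption for this sketch)."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        # The expected result is fixed in advance, as in a manual test case
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # Runs every test without manual intervention; exit=False lets the
    # script continue after the test run instead of terminating
    unittest.main(exit=False)
```

Once written, such a script can be re-run on every build, which is exactly the repetitive work automation removes from the manual tester.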
IU2141050116 SEUML
1. Functional Testing
Functional testing is a type of software testing in which the system is tested against the functional requirements and specifications, ensuring that they are properly satisfied by the application. This type of testing is particularly concerned with the result of processing: it simulates actual system usage without making assumptions about the system's internal structure.
2. Non-Functional Testing
Non-functional testing is a type of software testing performed to verify the non-functional requirements of an application, such as performance, usability, and reliability. It checks whether the system behaves as required with respect to attributes that functional testing does not cover, and verifies the readiness of the system against non-functional parameters.
Non-functional testing is as important as functional testing.
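To make the distinction concrete, here is a small sketch in Python: the first check is functional (is the output of processing correct?), the second is non-functional (is it fast enough?). The slugify function and the one-second budget are illustrative assumptions, not part of any standard:

```python
import time

def slugify(title):
    """Illustrative function under test: turns a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

# Functional test: verify the result of processing against the specification
assert slugify("Hello World") == "hello-world"

# Non-functional test: verify a performance attribute (response time)
start = time.perf_counter()
for _ in range(10_000):
    slugify("Hello World")
elapsed = time.perf_counter() - start
assert elapsed < 1.0  # illustrative budget for 10,000 calls
```

The same function passes or fails each check independently: it can be functionally correct yet too slow, which is exactly why both kinds of testing are needed.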
Test Case:-
A test case is defined as a group of conditions under which a tester determines whether a software application is working as per the customer's requirements. Test case design includes preconditions, the case name, input conditions, and the expected result. A test case is a first-level action derived from test scenarios.
It is a detailed document that contains all possible inputs (positive as well as negative) and the navigation steps used during test execution. Writing test cases is a one-time effort that can be reused later during regression testing.
A test case gives detailed information about the testing strategy, the testing process, preconditions, and the expected output. Test cases are executed during the testing process to check whether the software performs the task for which it was developed.
A test case also helps the tester in defect reporting by linking each defect to a test case ID. Detailed test case documentation acts as a safeguard for the testing team: if the developer missed something, it can be caught during execution of these test cases.
In functional testing, or when the application is data-driven, the input column is required; otherwise it can be omitted, since filling it in is somewhat time-consuming.
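The fields described above (case name, preconditions, input, expected result) can be captured as a simple structured record. The login scenario and all field values below are purely illustrative:

```python
# A test case captured as a structured record (fields from the text above)
test_case = {
    "id": "TC-001",
    "name": "Valid login",
    "precondition": "User account exists",
    "steps": ["Open login page", "Enter credentials", "Click Login"],
    "input": {"username": "alice", "password": "secret"},
    "expected_result": "User is redirected to the dashboard",
}

def execute(case, actual_result):
    """Compare the actual result against the expected result and report."""
    status = "PASS" if actual_result == case["expected_result"] else "FAIL"
    return f'{case["id"]} {case["name"]}: {status}'

print(execute(test_case, "User is redirected to the dashboard"))
```

Keeping the test case ID in the record is what makes defect reporting traceable: a defect can be linked directly to the case that exposed it.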
PRACTICAL – 9
Aim: Study of the Function Point (FP) oriented estimation model.
Objectives of FPA
The basic and primary purpose of function point analysis (FPA) is to measure and report the functional size of a software application to the client, customer, and stakeholders on request. It is also used to measure software project development and maintenance consistently throughout the project, irrespective of the tools and technologies used.
Types of FP Attributes
1. The FPs of an application are found by counting the number and types of functions used in the application. The functions used in an application fall into five types: External Inputs (EI), External Outputs (EO), External Inquiries (EQ), Internal Logical Files (ILF), and External Interface Files (EIF).
2. FP characterizes the complexity of the software system and hence can be used to depict the project time
and the manpower requirement.
3. The effort required to develop the project depends on what the software does.
5. The FP method is used for data processing systems and business systems such as information systems.
6. The five parameters mentioned above are also known as information domain characteristics.
7. Each of these parameters is assigned a weight that has been determined experimentally, depending on whether the parameter is simple, average, or complex, as shown in the table below (the standard weights):
Measurement parameter            Simple  Average  Complex
External Inputs (EI)                3       4        6
External Outputs (EO)               4       5        7
External Inquiries (EQ)             3       4        6
Internal Logical Files (ILF)        7      10       15
External Interface Files (EIF)      5       7       10
The weighting factor for a measurement parameter is thus simple, average, or complex, depending on the parameter's complexity.
The function point (FP) count is then calculated with the following formula:
FP = Count Total × [0.65 + 0.01 × ∑(fi)]
where Count Total is the sum of all the weighted parameter counts, and ∑(fi) is the sum of the ratings of the 14 questionnaires that give the complexity adjustment factor (CAF), with i ranging from 1 to 14. Usually, a student is provided with the value of ∑(fi).
FP-based metrics that are commonly computed include:
a. Errors/FP
b. $/FP
c. Defects/FP
d. Pages of documentation/FP
e. Errors/PM
f. Productivity = FP/PM (effort is measured in person-months)
g. $/Page of documentation
8. The LOC of an application can be estimated from FPs; that is, the two are interconvertible. This process is known as backfiring. For example, 1 FP is equal to about 100 lines of COBOL code.
9. The FP metric is mostly used for measuring the size of Management Information System (MIS) software.
10. The function points obtained above are unadjusted function points (UFPs). The UFPs of a subsystem are further adjusted by considering a set of 14 General System Characteristics (GSCs). The procedure for adjusting UFPs is as follows:
a. The Degree of Influence (DI) of each of the 14 GSCs is assessed on a scale of 0 to 5: if a GSC has no influence, its DI is taken as 0, and if it has a strong influence, its DI is 5.
b. The DIs of all 14 GSCs are totaled to determine the Total Degree of Influence (TDI).
c. The Value Adjustment Factor (VAF) is then computed from the TDI using the formula:
VAF = 0.65 + (0.01 × TDI)
d. Finally, the adjusted FP count is obtained as FP = UFP × VAF.
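The whole procedure can be sketched in Python. The counts, the TDI of 35, and the effort and cost figures below are illustrative assumptions; the weights are the commonly quoted average-complexity values for the five information domain characteristics:

```python
# Average-complexity weights for the five FP measurement parameters
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def function_points(counts, tdi):
    """Compute the adjusted FP count from domain counts and the TDI."""
    ufp = sum(counts[name] * w for name, w in WEIGHTS.items())  # unadjusted FP
    vaf = 0.65 + 0.01 * tdi                                     # value adjustment factor
    return ufp * vaf

# Illustrative counts, and a TDI of 35 (sum of 14 DIs, each rated 0 to 5)
counts = {
    "external_inputs": 10,
    "external_outputs": 8,
    "external_inquiries": 6,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}
fp = function_points(counts, tdi=35)
print(round(fp, 2))

# Derived metrics from the list above, with illustrative effort and cost
effort_pm, cost = 8, 40000
print(round(fp / effort_pm, 2))  # productivity in FP per person-month
print(round(cost / fp, 2))       # cost per FP
```

Note that a TDI of 35 gives a VAF of exactly 1.0, so here the adjusted and unadjusted counts coincide; any other TDI scales the UFP up or down by up to 35%.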
Example: Compute the function point, productivity, documentation, and cost per function for the following data:
Solution:
PRACTICAL – 10
Aim: Study of any two Open source tools in DevOps for Infrastructure Automation, Configuration
Management, Deployment Automation, Performance Management, Log Management and Monitoring.
Today, DevOps teams try to utilize automation as much as possible, cutting down on repetitive processes to limit man-hours, speed up development efforts, and reduce the possibility of errors. Automation is also a business necessity: it reduces overhead costs, increases the speed of the CI/CD process, and increases customer satisfaction. Several individual areas need to be automated to achieve a fully autonomous infrastructure, and there are various tools we can take advantage of to automate our infrastructure and ensure well-developed DevOps processes.
1. Jira
Jira is a collaborative tool used for managing and planning projects, as well as tracking issues and bugs that might occur along the way. Among its best features are its simplicity and customizability, and a significant number of add-ons are available that enhance its main features and increase its value. Pricing: Jira is free for up to 10 users.
2. Slack
Slack is one of the main collaboration tools used today. It makes communication within local or remote teams easier and faster, and it has an easy-to-use API that can connect to multiple other platforms with a mouse click, increasing its capabilities and use cases. Pricing: It is free for small businesses.
1. Chef
Chef is a popular tool used for the configuration and management of cloud infrastructure. The Chef server stores information about each node's current and desired configuration. Chef's main task is to push the desired configuration instructions, also known as cookbooks, to all nodes connected to the server. These instructions make it easy to scale and modify the infrastructure when needed.
Pricing: Free and open source but does have enterprise options.
2. Puppet
Puppet is another popular configuration management tool. It consists of a Puppet master server and Puppet agents located on the servers being managed. The Puppet master stores the configuration files needed for the managed servers and communicates constantly with the Puppet agents, checking whether something needs to be updated or changed.
Pricing: Information is available by contacting the Puppet sales team.
1. Jenkins
Jenkins is an open-source automation server that helps developers build, test, and deploy their software projects. It is widely used in the software development industry to automate various tasks, including building and testing code and releasing software updates.
2. Spinnaker
Spinnaker is an open-source, multi-cloud continuous delivery platform that helps teams automate
the release and deployment of software applications. It is designed to make it easier for teams to
manage and deploy applications across various environments, including on-premises, cloud, and
hybrid environments.
1. Mesos
Mesos is a DevOps tool that abstracts CPU, memory, storage, and other resources away from virtual or physical machines, helping DevOps teams build and run fault-tolerant and elastic distributed systems easily. We are just starting to test Mesos (along with Marathon) to run our entire software stack; while there is not much to report yet, the tool looks very promising and the results so far are encouraging.
2. Kubernetes
Kubernetes is a tool that allows one to manage multiple Docker containers as a single unit, making development faster and simplifying operations overall. Essentially, it is an open-source orchestration system that handles scheduling onto nodes in a cluster, manages workloads, and groups containers into logical units for simplified management and discovery. We have been testing it on a small part of our environment and comparing it to a stack built on Mesos and Marathon; the jury is still out on this one.
1. Consul
Consul is used to assign DNS names to services. For example, DevOps engineers can give a single name to a cluster of several machines and access only that entity, making work easier and more efficient. As such, Consul is useful for service discovery and configuration, particularly in applications built from microservices. Still, Consul can likely be used for much more, and we look forward to seeing what the community comes up with next.
2. Docker
By now, everyone probably knows Docker: it makes configuration management, issue control, and scaling much easier through containers that can be moved from place to place. In our environment, for example, our ELK-as-a-service solution has a data processing pipeline that consists of twelve layers, and we use Docker containers to run the full pipeline through all of the layers on a single Mac machine.