Assignment-11 (13)
Aim:
To develop equivalence class test cases for the classification of a triangle and for
calculating the next date for a given date.
Theory:
Equivalence class testing (ECT), also known as equivalence class partitioning (ECP), is a
software testing technique that groups input data into equivalence classes, assuming that
inputs within a class will produce the same software behavior. This allows for a reduction in
the number of test cases needed. Test cases are derived from the Cartesian product of these
classes, ensuring comprehensive coverage. Invalid values are also considered. It's important
to note that not all test cases within a class need to be executed; one representative case
suffices. ECT is valuable for efficiently testing software units with a wide range of possible
inputs. It strikes a balance between thorough examination and resource optimization.
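As an illustrative sketch (not part of the original assignment), the fragment below partitions a hypothetical integer input, assumed to be valid in the range [1, 100], into three equivalence classes and tests a single representative value from each class, which is the core idea of ECT:

#include <stdio.h>

/* hypothetical validity check for an input expected to lie in [1, 100] */
const char *classify_input(int x) {
    if (x < 1)   return "invalid (below range)";
    if (x > 100) return "invalid (above range)";
    return "valid";
}

int main() {
    /* one representative value per equivalence class suffices */
    int representatives[] = { 0, 50, 101 };
    for (int i = 0; i < 3; i++)
        printf("%4d -> %s\n", representatives[i], classify_input(representatives[i]));
    return 0;
}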
Equivalence class testing is a crucial technique for verifying the accuracy of a program that
classifies triangles based on their side lengths. This method involves systematically testing
different sets of input values to ensure the triangle classification function operates as
intended. The problem at hand accepts three integers, denoted as 'a', 'b', and 'c', representing
the sides of the triangle. Additionally, a valid range for these side lengths is defined as [l,r],
where 'l' is greater than zero. The program then returns the type of triangle formed (Scalene,
Isosceles, Equilateral, or Not a Triangle) based on these side lengths. For a set of sides to form
a valid triangle, the sum of any two sides must be greater than the length of the remaining
side. Notably, if all sides are equal, it constitutes an Equilateral triangle; if two sides are equal,
it's an Isosceles triangle; and if none of the sides are equal, it's a Scalene triangle. Assuming a
side length range of [1,100], with a nominal value of 50, any input falling outside this range is
considered an Invalid Input case. This meticulous approach to equivalence class testing
ensures robust validation of the triangle classification function.
Code: -
#include <stdio.h>

int main() {
    int a, b, c;
    printf("Enter the values of a, b and c : ");
    scanf("%d %d %d", &a, &b, &c);
    /* Valid input range for each side is [1, 100] */
    if ((a >= 1 && a <= 100) && (b >= 1 && b <= 100) && (c >= 1 && c <= 100)) {
        /* Triangle inequality: the sum of any two sides must exceed the third */
        if (a + b > c && b + c > a && a + c > b) {
            if (a == b && b == c)
                printf("\nEquilateral Triangle");
            else if (a == b || b == c || a == c)
                printf("\nIsosceles Triangle");
            else
                printf("\nScalene Triangle");
        } else {
            printf("\nNot a Triangle");
        }
    } else {
        printf("\nInvalid Input");
    }
    return 0;
}
Test cases: -
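The detailed test-case table is not reproduced here. As a hedged sketch of how the equivalence classes above translate into test cases, the driver below checks one representative input per class (Equilateral, Isosceles, Scalene, Not a Triangle, and out-of-range input); the classify function simply mirrors the rules stated in the theory:

#include <stdio.h>

/* mirrors the classification rules of the program above */
const char *classify(int a, int b, int c) {
    if (a < 1 || a > 100 || b < 1 || b > 100 || c < 1 || c > 100)
        return "Invalid Input";
    if (a + b <= c || b + c <= a || a + c <= b)
        return "Not a Triangle";
    if (a == b && b == c)
        return "Equilateral Triangle";
    if (a == b || b == c || a == c)
        return "Isosceles Triangle";
    return "Scalene Triangle";
}

int main() {
    int cases[][3] = {
        {50, 50, 50},   /* valid, all sides equal   -> Equilateral     */
        {50, 50, 60},   /* valid, two sides equal   -> Isosceles       */
        {40, 50, 60},   /* valid, all sides differ  -> Scalene         */
        {1, 2, 50},     /* violates triangle rule   -> Not a Triangle  */
        {0, 50, 50},    /* side below range         -> Invalid Input   */
        {50, 50, 101},  /* side above range         -> Invalid Input   */
    };
    for (int i = 0; i < 6; i++)
        printf("(%d, %d, %d) -> %s\n",
               cases[i][0], cases[i][1], cases[i][2],
               classify(cases[i][0], cases[i][1], cases[i][2]));
    return 0;
}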
Equivalence Class Testing for calculating Next Date: -
Equivalence class testing for calculating the next date involves identifying and testing different equivalence
classes of input values to ensure that the date calculation function behaves correctly.
To determine the next date for a given day in the DD-MM-YYYY format, we segment the input into
equivalence classes based on three variables: the day (1-27, 28, 29, 30, and 31), the month (February, the
30-day months, the 31-day months, and December), and the year (leap and non-leap years within the valid
range [1800, 2048]). Based on these segments, we develop the logic to compute the expected output.
Test cases: -
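The original test-case table is likewise not reproduced here. As an illustrative set, one representative input per equivalence class and the next date implied by the logic of the program below would be:

15-06-2023 -> 16-06-2023 (ordinary day)
28-02-2024 -> 29-02-2024 (28 February in a leap year)
28-02-2023 -> 01-03-2023 (28 February in a non-leap year)
30-04-2023 -> 01-05-2023 (last day of a 30-day month)
31-12-2047 -> 01-01-2048 (year rollover within the valid range)
31-04-2023 -> invalid date (April has no 31st day)
32-01-2023 -> invalid input (day outside 1-31)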
Code: -
#include <stdio.h>

/* Returns 1 if year is a leap year, 0 otherwise */
int leap_year(int year) {
    if (year % 400 == 0)
        return 1;
    if (year % 100 == 0)
        return 0;
    if (year % 4 == 0)
        return 1;
    return 0;
}

int main() {
    int d, m, y;
    int invalid = 0;
    printf("Enter the date (DD MM YYYY) : ");
    scanf("%d %d %d", &d, &m, &y);

    /* Valid ranges: day 1-31, month 1-12, year 1800-2048 */
    if (d < 1 || d > 31 || m < 1 || m > 12 || y < 1800 || y > 2048) {
        invalid = 1;
    } else if (d <= 27) {
        d++;                      /* every month has at least 28 days */
    } else if (d == 28) {
        if (m == 2 && !leap_year(y)) {
            d = 1; m++;           /* 28 Feb -> 1 Mar in a non-leap year */
        } else {
            d++;
        }
    } else if (d == 29) {
        if (m == 2) {
            if (leap_year(y)) {
                d = 1; m++;       /* 29 Feb -> 1 Mar in a leap year */
            } else {
                invalid = 1;      /* 29 Feb does not exist in a non-leap year */
            }
        } else {
            d++;
        }
    } else if (d == 30) {
        if (m == 2) {
            invalid = 1;          /* February never has 30 days */
        } else if (m == 4 || m == 6 || m == 9 || m == 11) {
            d = 1; m++;           /* last day of a 30-day month */
        } else {
            d++;
        }
    } else {                      /* d == 31 */
        if (m == 1 || m == 3 || m == 5 || m == 7 || m == 8 || m == 10) {
            d = 1; m++;
        } else if (m == 12) {
            d = 1; m = 1; y++;    /* year rolls over */
            if (y > 2048)
                invalid = 1;      /* next date falls outside the valid range */
        } else {
            invalid = 1;          /* this month has no 31st day */
        }
    }

    if (invalid == 0)
        printf("\nNext date : %02d-%02d-%04d", d, m, y);
    else
        printf("\nInvalid date");

    return 0;
}
Discussions: -
Equivalence class testing excels at covering extreme values and invalid inputs derived from the stated
constraints, and it typically yields a deeper and larger set of test cases than boundary value analysis (BVA).
By addressing valid, invalid, and special input scenarios, along with the relevant edge cases, it provides
comprehensive coverage; here it successfully validated the programs for triangle classification and next-date
calculation. This highlights the limitation of varying only a single variable at a time, as in BVA, and
underscores the importance of comprehensive testing techniques for robust software assessment.
EXPERIMENT-12
Aim:
To study software reliability metrics such as MTTF, MTTR, and MTBF, and to write a program that
computes the reliability of a system.
Theory:
Reliability Metrics
Reliability metrics serve as vital yardsticks for evaluating the consistency, quality, and effectiveness of a
system, product, or process throughout its operational lifespan. These metrics are invaluable tools for
organizations, engineers, and analysts as they provide insights into the dependability of a specific item or
process, enabling the pinpointing of areas that may need enhancement. Widely utilized across diverse
fields such as manufacturing, engineering, and software development, reliability metrics play a pivotal role
in quantifying the dependability of software products. The choice of which metric to employ hinges on the
nature of the system in question and the specific requirements of its application domain.
MTTF, or Mean Time to Failure, is a pivotal reliability metric utilized to gauge the anticipated average
operational duration of a component, system, or product before encountering a failure. This essential
parameter is typically expressed in measurable units of time, such as hours, days, or years, providing a
crucial measure for evaluating the dependability and longevity of a given item. MTTF precisely denotes
the time span between two consecutive failures. For instance, an MTTF of 200 signifies that, on average,
one failure is expected to occur within every 200 units of time. The specific units employed for
measurement are contingent upon the particular system, and can even be quantified in terms of
transactions. In systems with high transaction volumes, MTTF maintains its consistency and reliability.
Calculating MTTF is instrumental in making informed decisions about the dependability and
maintenance of critical systems.
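As an illustrative worked example (the figures are hypothetical), if a system operates for 1,000 hours and experiences 5 failures during that period, its MTTF is 1000 / 5 = 200 hours, i.e., on average one failure every 200 hours of operation.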
MTTR, or Mean Time to Repair, is a pivotal reliability metric that quantifies the average duration it takes
to rectify a system, component, or item following a failure event. This parameter is of paramount
importance in assessing the availability and downtime of a system. Typically denominated in units of time,
such as hours or minutes, MTTR serves as a crucial indicator of how swiftly a system can be restored to
its normal operational state after encountering a failure.
In practical terms, when a failure occurs, a certain amount of time is needed to identify and address the
underlying issues. MTTR precisely measures this average time required for the diagnosis and resolution of
the problems leading to the failure. This metric is calculated as the total downtime divided by the number
of breakdowns. MTTR plays a pivotal role in optimizing maintenance strategies and ensuring prompt
recovery from system failures.
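As an illustrative worked example (the figures are hypothetical), if a system accumulates 10 hours of downtime across 5 breakdowns, its MTTR is 10 / 5 = 2 hours.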
MTBR, which stands for Mean Time Between Repairs, is a reliability metric that calculates the average
duration between two consecutive repair incidents for a system, component, or piece of equipment, and it
plays a vital role in evaluating the overall reliability and maintainability of assets. Closely related is MTBF
(Mean Time Between Failures), which is obtained by combining the MTTF (Mean Time to Failure) and
MTTR (Mean Time to Repair) metrics:

MTBF = MTTF + MTTR

For instance, an MTBF of 300 signifies that once a failure occurs, the next failure is expected to happen
only after 300 hours. It's worth noting that in this approach the time measurements are based on real-time
duration, as opposed to execution time, as is the case with MTTF. This combined metric provides a
comprehensive understanding of a system's reliability and recovery capabilities.
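As a minimal sketch of how these definitions fit together, the fragment below computes MTTF, MTTR, and MTBF from hypothetical observation figures (the variable names and numbers are illustrative, not taken from the original report):

#include <stdio.h>

int main() {
    /* hypothetical observation data, for illustration only */
    double totalUptime = 1000.0;    /* total operating time between failures, in hours */
    double totalDowntime = 10.0;    /* total time spent diagnosing and repairing, in hours */
    int failures = 5;               /* number of observed failures/repairs */

    double mttf = totalUptime / failures;    /* mean time to failure */
    double mttr = totalDowntime / failures;  /* mean time to repair */
    double mtbf = mttf + mttr;               /* mean time between failures */

    printf("MTTF = %.2f hours\n", mttf);
    printf("MTTR = %.2f hours\n", mttr);
    printf("MTBF = %.2f hours\n", mtbf);
    return 0;
}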
Code: -
#include <stdio.h>
#include <math.h>

int main() {
    double failureRate, time;
    printf("Enter the failure rate (failures per unit time): ");
    scanf("%lf", &failureRate);
    printf("Enter the operating time: ");
    scanf("%lf", &time);
    /* assumed constant-failure-rate model: R(t) = e^(-lambda * t) */
    printf("Reliability R(t) = %lf\n", exp(-failureRate * time));
    return 0;
}
Discussions: -
Reliability metrics are crucial for evaluating the dependability and performance of systems, products, or
processes over time, and they help improve reliability by pinpointing the areas that need attention. Used in
combination, these metrics offer a complete picture of a system's or product's reliability and performance.
EXPERIMENT-13
Aim:
To study maintenance metrics such as the Planned Maintenance Percentage (PMP), Overall Equipment
Effectiveness (OEE), and Preventive Maintenance Compliance (PMC), and to write a program that
computes them.
Theory:
Maintenance metrics play a pivotal role in gauging the efficiency and success of maintenance operations
within an organization. The selection of specific metrics and models hinges on the type of maintenance
being undertaken—whether it's preventive, corrective, or predictive—and the overarching objectives of the
program.
Ensuring the effective upkeep of equipment is paramount in facilitating smooth operations that offer
resources promptly and at minimal expense. Nevertheless, professionals in the maintenance domain are
well aware that achieving optimal equipment reliability is no simple feat. To enhance and streamline
maintenance operations, it's imperative to monitor key metrics that provide valuable insights and
opportunities for improvement.
The Planned Maintenance Percentage (PMP), also known as Planned Preventive Maintenance (PPM), is a
crucial maintenance metric that evaluates an organization's effectiveness in executing planned maintenance
tasks on its assets or equipment. Expressed as a percentage, it quantifies the proportion of maintenance
activities that are scheduled in advance relative to the total maintenance tasks, encompassing both planned
and unplanned activities. Essentially, PMP illustrates the percentage of time dedicated to planned
maintenance in contrast to unexpected repairs. Ideally, a well-functioning system should have around 90%
of maintenance planned. The calculation for PMP is:

PMP = (Number of Planned Maintenance Tasks / Total Number of Maintenance Tasks) × 100%
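As an illustrative worked example (the figures are hypothetical), if 45 of the 50 maintenance tasks performed in a month were planned in advance, PMP = (45 / 50) × 100% = 90%, which meets the commonly cited target of around 90% planned maintenance.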
Overall Equipment Effectiveness (OEE) serves as a critical performance measure in manufacturing and
production environments, offering insights into the efficiency and productivity of machinery or equipment.
OEE considers availability, performance, and quality as key factors, each expressed as a percentage and
then multiplied to derive the OEE percentage. A 100% OEE indicates flawless, maximally efficient, and
uninterrupted production. The OEE is computed as the product of availability, performance, and quality:
OEE = availability x performance x quality
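As an illustrative worked example (the figures are hypothetical), with availability of 90%, performance of 95%, and quality of 99%, OEE = 0.90 × 0.95 × 0.99 ≈ 0.846, i.e., roughly 84.6%.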
Preventive Maintenance Compliance (PMC) is a pivotal maintenance metric assessing the adherence to
scheduled preventive maintenance tasks within an organization. This type of maintenance involves planned,
proactive activities aimed at preventing equipment failures, minimizing breakdowns, and prolonging asset
lifespan. PMC is calculated as a percentage and offers a valuable measure of how effectively an organization
executes its preventive maintenance program. Specifically, PMC is the percentage of scheduled preventive
maintenance tasks completed within the specified timeframe:

PMC = (Total Number of Preventive Maintenance Tasks Completed on Time / Total Number of Scheduled
Preventive Maintenance Tasks) × 100%
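As an illustrative worked example (the figures are hypothetical), if 38 of 40 scheduled preventive maintenance tasks were completed within the specified timeframe, PMC = (38 / 40) × 100% = 95%.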
Code: -
#include <stdio.h>

int main() {
    int plannedMaintenance, totalMaintenance;
    float availability, performance, quality;
    printf("Enter the number of planned and total maintenance tasks: ");
    scanf("%d", &plannedMaintenance);
    scanf("%d", &totalMaintenance);
    printf("Enter availability, performance and product quality (in percentage): ");
    scanf("%f", &availability);
    scanf("%f", &performance);
    scanf("%f", &quality);
    /* PMP = planned tasks / total tasks, as a percentage */
    printf("PMP = %.2f%%\n", (float)plannedMaintenance / totalMaintenance * 100.0f);
    /* OEE = availability x performance x quality (inputs given as percentages) */
    printf("OEE = %.2f%%\n", availability * performance * quality / 10000.0f);
    return 0;
}
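The program above does not read the inputs needed for PMC; a minimal sketch of that calculation, using hypothetical variable names, is shown separately:

#include <stdio.h>

int main() {
    int completedOnTime, scheduledTasks;   /* hypothetical input names */
    printf("Enter preventive maintenance tasks completed on time and total scheduled: ");
    scanf("%d %d", &completedOnTime, &scheduledTasks);
    /* PMC = (tasks completed on time / total scheduled tasks) x 100% */
    printf("PMC = %.2f%%\n", (float)completedOnTime / scheduledTasks * 100.0f);
    return 0;
}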
Discussions: -
Creating a program to compute maintenance metrics with diverse models entails a series of steps. These
metrics are vital for appraising the efficiency and efficacy of maintenance endeavors within an
organization. The selection of models and algorithms should align with the specific needs of your
organization and the nature of your maintenance operations. Collaborating with domain experts and data
scientists may be necessary to refine and optimize the program for calculating these metrics.
Software maintenance is typically undertaken for purposes such as:
Rectifying errors.
Enhancing design.
Implementing improvements.
Integrating with other systems.
Adapting programs for diverse hardware, software, and system features.
Migrating legacy software.
Phasing out outdated software.

Several factors make software maintenance difficult and costly:
The typical lifespan of a software program is generally around ten to fifteen years, and the open-ended
nature of maintenance, potentially spanning decades, can lead to substantial costs.
Older software, designed for slower machines with limited memory and storage, may struggle to
compete with newer, more advanced programs on modern hardware.
Changes are often not properly documented, potentially leading to conflicts in the future.
With advancing technology, maintaining outdated software becomes costly.
Modifications made can inadvertently disrupt the original structure of the software, complicating
subsequent changes.