Unit 4
1. Software Testing
Software Testing is a method to assess the functionality of a software program. The process checks whether the actual software matches the expected requirements and ensures the software is free of defects. The purpose of software testing is to identify errors, faults, or missing requirements by comparing the actual product against the stated requirements. It mainly aims at measuring the specification, functionality, and performance of a software program or application.
2. Testing Objectives
To evaluate the work products such as requirements, design, user stories, and code: Work products such as the requirement document, design, and user stories should be verified before the developer picks them up for development. Identifying any ambiguous or contradicting requirements at this stage saves considerable development and test time. Static analysis of the code (reviews, walkthroughs, inspections, etc.) happens before the code is integrated and ready for testing. This idea of testing is known as Verification. It is the process of evaluating each work product during the development phase.
To verify the fulfillment of all specified requirements: This objective reflects the fact that the essential aim of testing is to fulfill the customer’s needs. Testers test the product and ensure that all the specified requirements have been implemented. Developing test cases for every requirement, regardless of the testing technique, ensures verification of the functionality for every executed test case. The tester should also create a requirement traceability matrix (RTM), which maps all the test cases to requirements. The RTM is an effective way to ensure that the test cases provide the right requirement coverage.
To validate if the test object is complete and works as per the expectation of the users and the stakeholders: Testing ensures the implementation of the requirements along with the assurance that they work as per the expectations of users. This idea of testing is called Validation. It is the process of checking the product after development. Validation can be manual or automated. It usually employs various types of testing techniques, e.g., Black Box, White Box, etc. Generally, testers perform validation, and customers can also validate the product as part of User Acceptance Testing. Every business considers the customer as the king; thus the customer's satisfaction is a predominant need for any business. For example, customer satisfaction and loyalty in online shopping and e-commerce environments are useful indicators of long-term business success.
To build confidence in the quality level of the test object: One of the critical objectives of software testing is to improve software quality. High-quality software means fewer defects. In other words, the more efficient the testing process is, the fewer errors you will get in the end product, which, in turn, increases the overall quality of the test object. Excellent quality contributes to a significant increase in customer satisfaction as well as lower maintenance costs.
To prevent defects in the software product: One of the objectives of software testing is to avoid mistakes in the early stages of development. Early detection of errors significantly reduces cost and effort. Defect prevention involves doing a root cause analysis of the defects found previously and then taking specific measures to prevent the occurrence of those types of errors in the future. Efficient testing helps in providing an error-free application. Preventing defects reduces the overall defect count in the product, which further ensures a high-quality product for the customer.
To find defects in the software product: Another essential objective of software testing is to identify all defects in a product. The main motive of testing is to find as many defects as possible in a software product while validating whether the program works as per the user requirements. Defects should be identified as early in the test cycle as possible; e.g., a defect found in the UAT phase is much costlier to fix than the same defect found in the sprint testing phase.
To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object: The purpose of testing is to provide complete information to the stakeholders about technical or other restrictions, risk factors, ambiguous requirements, etc. This can be in the form of test coverage metrics or testing reports covering details such as what is missing and what went wrong. The aim is to be transparent and make stakeholders fully understand the issues affecting quality.
To reduce the level of risk of insufficient software quality: The possibility of loss is known as risk. An objective of software testing is to reduce the likelihood of such risks occurring. Each software project is unique and contains a significant number of uncertainties from different perspectives, such as market launch time, budget, the technology chosen, implementation, or product maintenance. If these uncertainties are not controlled, they impose potential risks not only during the development phases but also during the whole life cycle of the product. So, a primary objective of software testing is to integrate the risk management process and identify any risk as early as possible in the development process.
To comply with contractual, legal, or regulatory requirements or standards, and to verify the test object’s compliance with such requirements or standards: This objective ensures that software developed for a specific region follows the legal rules and regulations of that region. Moreover, the software product must be compatible with national and international testing standards; the ISO/IEC/IEEE 29119 standards, for example, deal with software testing. Each country has laws specific to accessibility requirements which must be fulfilled to avoid legal implications. The European Union has strict rules on how Personally Identifiable Information (PII), such as social security numbers, should be handled. Failure to adhere to such requirements will lead to failure of the product, no matter how defect-free it has otherwise been!
3. Unit Testing
Unit testing is a type of software testing that focuses on individual units or components of a software system. The purpose of unit testing is to validate that each unit of
the software works as intended and meets the requirements. Developers typically perform unit testing, and it is performed early in the development process before the
code is integrated and tested as a whole system.
Unit tests are automated and are run each time the code is changed to ensure that new code does not break existing functionality. Unit tests are designed to validate the
smallest possible unit of code, such as a function or a method, and test it in isolation from the rest of the system. This allows developers to quickly identify and fix any
issues early in the development process, improving the overall quality of the software and reducing the time required for later testing.
Unit Testing is a software testing technique in which individual units of software, i.e., groups of computer program modules, usage procedures, and operating procedures, are tested to determine whether they are suitable for use.
It is a testing method in which the developer tests every independent module to determine whether it has any issues. It is concerned with the functional correctness of the independent modules.
Unit Testing is defined as a type of software testing where individual components of the software are tested. Unit Testing of the software product is carried out during the development of an application.
An individual component may be either an individual function or a procedure. The developer typically performs Unit Testing. In the SDLC or V Model, unit testing is the first level of testing, done before integration testing.
Unit testing is usually performed by developers, although, when developers are reluctant to test, quality assurance engineers may also perform it.
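For illustration, a minimal unit test sketch in Java with JUnit 4 is shown below; the Calculator class, its add method, and the values are hypothetical examples, not part of the syllabus text.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit under test: a tiny class with a single method.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// The unit test exercises the add method in isolation from the rest of the system.
public class CalculatorTest {
    @Test
    public void addReturnsSumOfTwoNumbers() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));   // expected value vs. actual value
    }
}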
4. Integration Testing
Integration testing is the process of testing the interface between two software units or modules. It focuses on determining the correctness of the interface. The purpose
of integration testing is to expose faults in the interaction between integrated units. Once all the modules have been unit-tested, integration testing is performed.
Integration testing is a software testing technique that focuses on verifying the interactions and data exchange between different components or modules of a software
application. The goal of integration testing is to identify any problems or bugs that arise when different components are combined and interact with each other. Integration
testing is typically performed after unit testing and before system testing. It helps to identify and resolve integration issues early in the development cycle, reducing the
risk of more severe and costly problems later on.
Integration testing can be done module by module, following a proper sequence so that no integration scenario is missed. The major focus of integration testing is exposing the defects that arise at the time of interaction between the integrated units.
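As a rough sketch (with hypothetical module names), the following JUnit test integrates two already unit-tested modules and checks the interface between them rather than their internal logic.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical lower-level module.
class PriceRepository {
    double priceOf(String item) {
        return "book".equals(item) ? 100.0 : 0.0;
    }
}

// Hypothetical higher-level module that depends on PriceRepository.
class BillingService {
    private final PriceRepository repository;

    BillingService(PriceRepository repository) {
        this.repository = repository;
    }

    double totalFor(String item, int quantity) {
        return repository.priceOf(item) * quantity;
    }
}

// The integration test wires the two real modules together (no stubs)
// and verifies the data exchange across their interface.
public class BillingIntegrationTest {
    @Test
    public void billingServiceUsesRepositoryPrices() {
        BillingService billing = new BillingService(new PriceRepository());
        assertEquals(200.0, billing.totalFor("book", 2), 0.001);
    }
}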
5. System Testing
System testing is a type of software testing that evaluates the overall functionality and performance of a complete and fully integrated software solution. It tests if the
system meets the specified requirements and if it is suitable for delivery to the end-users. This type of testing is performed after the integration testing and before the
acceptance testing.
System Testing is a type of software testing that is performed on a completely integrated system to evaluate the compliance of the system with the corresponding requirements. In system testing, the components that have passed integration testing are taken as input.
The goal of integration testing is to detect any irregularity between the units that are integrated. System testing detects defects within both the integrated units
and the whole system. The result of system testing is the observed behavior of a component or a system when it is tested.
System Testing is carried out on the whole system in the context of the system requirement specifications, the functional requirement specifications, or both. System testing tests the design and behavior of the system and also checks whether it meets the expectations of the customer.
It is performed to test the system beyond the bounds mentioned in the software requirements specification (SRS). System Testing is performed by a testing team that is independent of the development team, which helps to test the quality of the system impartially.
It covers both functional and non-functional testing. System Testing is a black-box testing technique. It is performed after integration testing and before acceptance testing.
6. Acceptance Testing
Acceptance Testing is a method of software testing where a system is tested for acceptability. The major aim of this test is to evaluate the compliance of the system with
the business requirements and assess whether it is acceptable for delivery or not.
7. Regression Testing
Regression Testing is the process of testing the modified parts of the code and the parts that might get affected due to the modifications to ensure that no new errors have
been introduced in the software after the modifications have been made. Regression means the return of something and in the software field, it refers to the return of a
bug.
Functional testing and performance testing are two testing and QA practices that enable developers and performance engineers to ensure the quality of the code.
Performance Testing
Performance testing is a type of software testing that ensures software applications perform as expected under load. It is a testing technique used to determine system
performance in terms of sensitivity, reactivity, and stability under specific workload conditions.
Performance testing is a type of software testing that focuses on evaluating a system’s or application’s performance and scalability. Performance testing aims to identify
bottlenecks, measure system performance under varying loads and conditions, and ensure that the system can handle the expected number of users or transactions.
Functional Testing
Functional testing is a type of testing that ensures that each function of a software application operates to the requirements and specifications. This testing is not concerned
with the application’s source code. Each software application functionality is tested by providing appropriate test input, anticipating the output, and comparing the actual
output to the expected output. This testing focuses on the Application Under Test’s user interface, APIs, database, security, client or server application, and functionality.
Functional testing can be manual or automated.
Difference between Functional Testing and Performance Testing:
Purpose: Functional testing verifies that software functions as intended and meets specified requirements, while performance testing evaluates the system’s performance under various conditions like load, stress, and scalability.
Focus: Functional testing tests individual functions or features to ensure correct behavior, while performance testing measures responsiveness, speed, and stability of the entire system.
Scope: Functional testing includes unit testing, integration testing, system testing, etc., while performance testing involves load testing, stress testing, scalability testing, etc.
Testing criteria: Functional testing validates functionality, user interface, data handling, etc., while performance testing assesses speed, reliability, scalability, and resource usage.
Key metrics: Functional testing uses pass/fail based on functional requirements, while performance testing uses response time, throughput, resource utilization, error rates, etc.
Users’ perspective: Functional testing is concerned with what the system does, while performance testing is concerned with how well the system performs under different conditions.
Tools: Functional testing uses Selenium, JUnit, TestNG, etc., while performance testing uses Apache JMeter, LoadRunner, Gatling, etc.
1. Top Down Testing: Top Down Integration testing is an incremental integration testing approach. In the Top Down approach, the higher-level modules are tested first, then the lower-level modules, and the modules are then integrated accordingly. Here the higher-level modules refer to the main module and the lower-level modules refer to the submodules. This approach uses stubs, which simulate a submodule; if the invoked submodule is not yet developed, the stub works as a temporary replacement.
2. Bottom Up Testing: Bottom Up Integration testing is another incremental approach to integration testing. In the Bottom Up approach, the lower-level modules are tested first, then the higher-level modules, and the modules are then integrated accordingly. Here the lower-level modules refer to the submodules and the higher-level modules refer to the main modules. This approach uses test drivers, which initiate the calls and pass the required data to the submodules, i.e., from a higher-level module to a lower-level module, if required.
(Figure: Top Down and Bottom Up integration testing approaches.)
Difference between Top Down Integration Testing and Bottom Up Integration Testing:
Direction: Top Down Integration testing is an approach in which integration testing takes place from top to bottom, i.e., system integration begins with the top-level modules. Bottom Up Integration testing is an approach in which integration testing takes place from bottom to top, i.e., system integration begins with the lowest-level modules.
Order of testing: In Top Down testing the higher-level modules are tested first, then the lower-level modules, and the modules are integrated accordingly. In Bottom Up testing the lower-level modules are tested first, then the higher-level modules, and the modules are integrated accordingly.
Test harness: Top Down testing uses stubs to simulate a submodule that is not yet developed; the stub works as a temporary replacement. Bottom Up testing uses drivers to simulate the main module if it is not yet developed; the driver works as a temporary replacement.
Suitability: The Top Down approach is beneficial if the significant defects occur toward the top of the program. The Bottom Up approach is beneficial if the crucial flaws occur toward the bottom of the program.
Development order: In the Top Down approach the main module is designed first and the submodules/subroutines are called from it. In the Bottom Up approach the different modules are created first and then integrated with the main function.
Complexity: Top Down testing is comparatively simple. Bottom Up testing is more complex and highly data intensive.
Harness to be produced: In the Top Down approach, stub modules must be produced. In the Bottom Up approach, driver modules must be produced.
Cost: Top Down testing is more expensive because it requires the complete system for testing, whereas Bottom Up testing is less expensive because it allows early identification and resolution of module issues.
Stubs: Stubs are developed by software developers to be used in place of modules that are not yet developed, are missing, or are currently unavailable during Top-Down testing. A stub simulates the unavailable module and provides the behaviour the calling module expects from it. Stubs are used when the lower-level modules are needed but are currently unavailable (a minimal stub sketch follows the list below).
Stubs are divided into four basic categories based on what they do:
Shows the traced messages,
Shows the displayed message, if any,
Returns the corresponding values that are utilized by the modules,
Returns the values of the chosen parameters (arguments) that were used by the testing modules.
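The following is a minimal stub sketch in Java, assuming a hypothetical BillingModule under Top-Down testing whose TaxCalculator submodule is not yet developed; all names and values are illustrative.

// Hypothetical interface of the not-yet-developed lower-level submodule.
interface TaxCalculator {
    double taxFor(double amount);
}

// The stub stands in for the unavailable submodule: it traces the call
// and returns a simple canned value that the higher-level module can use.
class TaxCalculatorStub implements TaxCalculator {
    @Override
    public double taxFor(double amount) {
        System.out.println("TaxCalculatorStub called with amount = " + amount); // traced message
        return 0.18 * amount;  // canned value returned to the calling module
    }
}

// Hypothetical higher-level module under test, which invokes the submodule.
class BillingModule {
    private final TaxCalculator taxCalculator;

    BillingModule(TaxCalculator taxCalculator) {
        this.taxCalculator = taxCalculator;
    }

    double totalWithTax(double amount) {
        return amount + taxCalculator.taxFor(amount);
    }
}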
Drivers: Drivers serve the same purpose as stubs, but drivers are used in Bottom-Up integration testing and are also more complex than stubs. Drivers are used when some modules are missing or unavailable at the time of testing a specific module, for unavoidable reasons, to act in the absence of the required module. Drivers are mainly used when high-level modules are missing, and they can also be used when lower-level modules are missing.
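A minimal driver sketch, assuming Bottom-Up testing of a hypothetical DiscountModule whose real calling module is not yet developed; the names and values are illustrative.

// Hypothetical lower-level module whose real caller does not exist yet.
class DiscountModule {
    double applyDiscount(double price, double rate) {
        return price * (1 - rate);
    }
}

// The driver plays the role of the missing higher-level module: it initiates
// the call, passes the required test data, and reports the observed result.
class DiscountModuleDriver {
    public static void main(String[] args) {
        DiscountModule module = new DiscountModule();
        double result = module.applyDiscount(100.0, 0.2);   // driver supplies the test data
        System.out.println("Discounted price = " + result + " (expected 80.0)");
    }
}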
Ex: Suppose you are asked to test a website whose primary modules, each interdependent on the others, are as follows:
Module-A: Login page of the website
Module-B: Home page of the website
Module-C: Profile settings
Module-D: Sign-out page
It is considered good practice to develop all modules in parallel, because as soon as each is developed it can be integrated and tested further according to its interdependencies with the other modules. But if any module is still in the development stage or is not available during the testing of a specific module, stubs or drivers can be used instead.
Assume Module-A is developed. As soon as it is developed, it undergoes testing, but it requires Module-B, which is not developed yet. In this case, we can use a stub or driver that simulates the features and functionality that would be provided by the actual Module-B. So, stubs and drivers are used to fulfill the necessity of unavailable modules. Similarly, we may also use stubs or drivers in place of Module-C and Module-D if they too are not available.
Do drivers and stubs serve the same functionality? Yes; both serve the same purpose and are used in the absence of a module (M1) that has interdependencies with another module (M2) that needs to be tested, so we use drivers or stubs to stand in for the unavailable M1 and provide its functionality.
Difference between Stubs and Drivers:
Stubs are used in Top-Down Integration Testing, whereas drivers are used in Bottom-Up Integration Testing.
Stubs are known as the "called programs" and are used in Top-Down integration testing, while drivers are the "calling programs" and are used in Bottom-Up integration testing.
Stubs simulate the modules of the software that are under development, while drivers are used to invoke the component that needs to be tested.
Stubs are used in the unavailability of low-level modules, while drivers are mainly used in place of high-level modules and, in some situations, for low-level modules as well.
Stubs are used to test the features and functionality of the modules, whereas drivers are used if the main module of the software isn't developed yet.
Stubs come into play when testing of the upper-level modules is done and the lower-level modules are still under development, while drivers come into play when testing of the lower-level modules is done and the upper-level modules are still under development.
Stubs are used when lower-level modules are missing or only partially developed and we want to test the main module, whereas drivers are used when higher-level modules are missing or only partially developed and we want to test the lower (sub) module.
White box testing techniques analyze the internal structures of the software: the data structures used, the internal design, the code structure, and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing, or structural testing. White Box Testing is also known as transparent testing or open box testing.
White box testing is a software testing technique that involves testing the internal structure and workings of a software application. The tester has access to the source
code and uses this knowledge to design test cases that can verify the correctness of the software at the code level.
White box testing is also known as structural testing or code-based testing, and it is used to test the software’s internal logic, flow, and structure. The tester creates test
cases to examine the code paths and logic flows to ensure they meet the specified requirements.
Testing Techniques
1. Statement Coverage
In this technique, the aim is to traverse all statements at least once. Hence, each line of code is tested. In the case of a flowchart, every node must be traversed at least
once. Since all lines of code are covered, it helps in pointing out faulty code.
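A small illustrative sketch of statement coverage (the method and values are hypothetical):

// Hypothetical method used to illustrate statement coverage.
public class StatementCoverageExample {
    static int doubleIfPositive(int x) {
        int result = x;        // statement 1
        if (x > 0) {           // statement 2 (decision)
            result = 2 * x;    // statement 3
        }
        return result;         // statement 4
    }
}
// A single test case, doubleIfPositive(5), executes every statement above
// at least once, so it achieves 100% statement coverage on its own.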
2. Branch Coverage:
In this technique, test cases are designed so that each branch from all decision points is traversed at least once. In a flowchart, all edges must be traversed at least once.
For the example flowchart, 4 test cases are required so that all branches of all decisions are covered, i.e., all edges of the flowchart are traversed.
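Continuing the hypothetical doubleIfPositive sketch above, branch coverage needs one more test case than statement coverage, because the false branch of the decision must also be taken:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Two test cases, one per branch of the (x > 0) decision in doubleIfPositive.
public class BranchCoverageTest {
    @Test
    public void positiveInputTakesTrueBranch() {
        assertEquals(10, StatementCoverageExample.doubleIfPositive(5));   // if-body executed
    }

    @Test
    public void nonPositiveInputTakesFalseBranch() {
        assertEquals(-3, StatementCoverageExample.doubleIfPositive(-3));  // if-body skipped
    }
}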
3. Condition Coverage
In this technique, all individual conditions must be covered as shown in the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT '0'
#TC1 - X = 0, Y = 55
#TC2 - X = 5, Y = 0
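A runnable Java version of the pseudocode above (illustrative only); each individual condition is made true by at least one test case:

public class ConditionCoverageExample {
    static void printZeroIfAnyZero(int x, int y) {
        if (x == 0 || y == 0) {
            System.out.println("0");
        }
    }

    public static void main(String[] args) {
        printZeroIfAnyZero(0, 55);  // TC1: makes the condition x == 0 true
        printZeroIfAnyZero(5, 0);   // TC2: makes the condition y == 0 true
    }
}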
4. Loop Testing
Loops are widely used and are fundamental to many algorithms, so testing them is very important. Errors often occur at the beginnings and ends of loops; a small sketch follows the list below.
Simple loops: For simple loops of size n, test cases are designed that:
Skip the loop entirely
Only one pass through the loop
2 passes
m passes, where m < n
n-1, n, and n+1 passes
Nested loops: For nested loops, all the loops are set to their minimum count and we start from the innermost loop. Simple loop tests are conducted for the innermost loop, and the process works outwards until all the loops have been tested.
Concatenated loops: Independent loops, one after another. Simple loop tests are applied to each. If the loops are not independent, treat them as nested loops.
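A sketch of simple-loop test inputs for a hypothetical loop of size n (here the array length); the method and values are illustrative.

public class LoopTestingExample {
    static int sum(int[] items) {
        int total = 0;
        for (int i = 0; i < items.length; i++) {   // simple loop, n = items.length
            total += items[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // Assuming the expected maximum loop size is n = 4:
        System.out.println(sum(new int[] {}));               // skip the loop entirely
        System.out.println(sum(new int[] {1}));              // only one pass
        System.out.println(sum(new int[] {1, 2}));           // two passes
        System.out.println(sum(new int[] {1, 2, 3}));        // m passes, m < n (also n-1 here)
        System.out.println(sum(new int[] {1, 2, 3, 4}));     // n passes
        System.out.println(sum(new int[] {1, 2, 3, 4, 5}));  // n+1 passes (boundary check)
    }
}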
Black-box testing is a type of software testing in which the tester is not concerned with the internal knowledge or implementation details of the software but rather focuses
on validating the functionality based on the provided specifications or requirements.
Functional Testing:
Functional testing is defined as a type of testing that verifies that each function of the software application works in conformance with the requirement and specification.
This testing is not concerned with the source code of the application. Each functionality of the software application is tested by providing appropriate test input, expecting
the output, and comparing the actual output with the expected output. This testing focuses on checking the user interface, APIs, database, security, client or server
application, and functionality of the Application Under Test. Functional testing can be manual or automated. It determines the system’s software functional requirements.
Regression Testing:
Regression Testing is the process of testing the modified parts of the code and the parts that might get affected due to the modifications to ensure that no new errors have
been introduced in the software after the modifications have been made. Regression means the return of something and in the software field, it refers to the return of a
bug. It ensures that the newly added code is compatible with the existing code; in other words, that a new software update has no adverse impact on the existing functionality of the software. It is carried out after system maintenance operations and upgrades.
Nonfunctional Testing:
Non-functional testing is a software testing technique that checks the non-functional attributes of the system. Non-functional testing is defined as a type of software
testing to check non-functional aspects of a software application. It is designed to test the readiness of a system as per nonfunctional parameters which are never addressed
by functional testing. Non-functional testing is as important as functional testing. Non-functional testing is also known as NFT. This testing is not functional testing of
software. It focuses on the software’s performance, usability, and scalability.
A test suite is a container that holds a set of tests and helps testers execute and report the test execution status. It can take any of three states, namely Active, In Progress, and Completed. A test case can be added to multiple test suites and test plans. After creating a test plan, test suites are created, which in turn can contain any number of tests. Test suites are created based on the test cycle or based on the scope. A test suite can contain any type of tests, viz., functional or non-functional.
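As an illustration, the sketch below groups tests into a suite using JUnit 4; LoginTest and ProfileTest are assumed to be existing JUnit test classes and are hypothetical names.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// A suite that executes and reports the grouped test classes together.
// LoginTest and ProfileTest are assumed, already-written JUnit test classes.
@RunWith(Suite.class)
@Suite.SuiteClasses({ LoginTest.class, ProfileTest.class })
public class SmokeTestSuite {
    // Intentionally empty: the annotations tell JUnit which tests belong to the suite.
}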
Alpha Testing is a type of software testing performed to identify bugs before releasing the product to real users or to the public. Alpha Testing is one of the user
acceptance tests. It is the first stage of software testing, during which the internal development team tests the program before making it available to clients or people
outside the company.
Difference between Alpha Testing and Beta Testing:
Performed at: Alpha testing is performed at the developer’s site, whereas beta testing is performed at the end user’s site.
Ensures: Alpha testing ensures the quality of the product before forwarding it to beta testing, while beta testing also concentrates on the quality of the product but additionally collects users’ input on the product and ensures that the product is ready for real-time users.
Requirement: Alpha testing requires a testing environment or a lab, whereas beta testing doesn’t require a testing environment or lab.
Execution: Alpha testing may require a long execution cycle, whereas beta testing requires only a few weeks of execution.
Issues or fixes: Developers can immediately address critical issues or fixes in alpha testing, whereas most of the issues or feedback collected from beta testing will be implemented in future versions of the product.
Test cycles: Multiple test cycles are organized in alpha testing, whereas only one or two test cycles are there in beta testing.
Formal Technical Review (FTR) is a software quality control activity performed by software engineers. It is an organized, methodical procedure for assessing and raising the quality of any technical document or work product, including software artifacts. Finding flaws, making sure standards are followed, and improving the overall quality of the product or document under review are the main objectives of a Formal Technical Review. Although FTRs are frequently utilized in software development, other technical fields can also employ the concept.
The walkthrough is a review meeting process, but it is different from the inspection in that it does not involve any formal process, i.e., it is a non-formal process. Basically, the walkthrough (review meeting process) is started by the author of the code.
In the walkthrough, the code or document is read by the author, and the others who are present in the meeting can note down the important points or can write notes on the defects and give suggestions about them. The walkthrough is an informal way of reviewing, and no formal authority is involved in it.
Because it is informal, there is no need for a moderator while performing a walkthrough. We can call a walkthrough an open-ended discussion; it does not focus on documentation. Defect tracking is one of the challenging tasks in the walkthrough.
The development of any software application/product goes through the SDLC (Software Development Life Cycle), where every phase is very important and needs to be followed properly to develop a quality software product. Inspection is one such important element that has a great impact on the software development process.
The software development team not only develops the software application but also, during the coding phase, checks the code for any errors; this is called code verification. Code verification checks the software code in all aspects and finds the errors that exist in the code. Generally, there are two types of code verification techniques:
Dynamic technique – It is performed by executing some test data and the outputs of the program are monitored to find errors in the software code.
Static technique – It is performed by executing the program conceptually and without any data. Code reading, static analysis, symbolic execution, code
inspection, reviews, etc. are some of the commonly used static techniques.
Code Inspection :
Code inspection is a type of Static testing that aims to review the software code and examine for any errors. It helps reduce the ratio of defect multiplication and avoids
later-stage error detection by simplifying all the initial error detection processes. This code inspection comes under the review process of any application.
How does it work?
The Moderator, Reader, Recorder, and Author are the key members of an inspection team.
The related documents are provided to the inspection team, which then plans the inspection meeting and coordinates with the inspection team members.
If the inspection team is not familiar with the project, the author provides an overview of the project and the code to the inspection team members.
Each inspection team member then performs the code inspection by following inspection checklists.
After completion of the code inspection, a meeting is conducted with all team members to analyze the reviewed code.
Purpose of code inspection :
1. It checks for any error that is present in the software code.
2. It identifies any required process improvement.
3. It checks whether the coding standard is followed or not.
4. It involves peer examination of codes.
5. It documents the defects in the software code.
Advantages Of Code Inspection :
Improves overall product quality.
Discovers the bugs/defects in software code.
Identifies any needed process enhancement.
Finds and removes defects efficiently and quickly.
Helps to learn from previous defects.
Disadvantages of Code Inspection:
Requires extra time and planning.
The process is a little slower.
Different modules specified in the design document are coded in the Coding phase according to the module specification. The main goal of the coding phase is to code
from the design document prepared after the design phase through a high-level language and then to unit test this code.
Good software development organizations want their programmers to adhere to a well-defined and standard style of coding, called coding standards. They usually make their own coding standards and guidelines depending on what suits their organization best and on the types of software they develop. It is very important for programmers to maintain the coding standards; otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
A coding standard gives a uniform appearance to the codes written by different engineers.
It improves readability, and maintainability of the code and it reduces complexity also.
It helps in code reuse and helps to detect errors easily.
It promotes sound programming practices and increases the efficiency of programmers.
Some of the coding standards are given below:
Limited use of globals: These rules specify which types of data can be declared global and which cannot.
Standard headers for different modules: For better understanding and maintenance of the code, the header of each module should follow a standard format and contain standard information. A header format such as the one used in various companies must contain the following (an illustrative header sketch follows the list below):
Name of the module
Date of module creation
Author of the module
Modification history
Synopsis of the module about what the module does
Different functions supported in the module along with their input output parameters
Global variables accessed or modified by the module
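An illustrative module header sketch written as a Java block comment; all details are hypothetical.

/*
 * Module       : PaymentProcessor (hypothetical example module)
 * Created on   : 10-Jan-2024
 * Author       : A. Developer
 * Modification : 18-Jan-2024 - added refund handling
 * Synopsis     : Validates customer payments and settles them with the payment gateway.
 * Functions    : processPayment(orderId, amount)  - returns a receipt id
 *                refundPayment(receiptId)         - returns true on success
 * Globals      : reads PaymentConfig; modifies TransactionLog
 */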
Naming conventions for local variables, global variables, constants and functions: Some of the naming conventions are given below:
Meaningful and understandable variable names help anyone understand the reason for using them.
Local variables should be named using camel case lettering starting with small letter (e.g. localData) whereas Global variables names should start with a
capital letter (e.g. GlobalData). Constant names should be formed using capital letters only (e.g. CONSDATA).
It is better to avoid the use of digits in variable names.
Function names should be written in camel case starting with a small letter.
The name of the function must describe the reason for using the function clearly and briefly.
Indentation:
Proper indentation is very important to increase the readability of the code. To make the code readable, programmers should use white space properly. Some of the spacing conventions are given below (a short sketch after the list illustrates both the naming and indentation conventions):
There must be a space after giving a comma between two function arguments.
Each nested block should be properly indented and spaced.
Proper Indentation should be there at the beginning and at the end of each block in the program.
All braces should start from a new line, and the code following the end of the braces should also start from a new line.
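A short sketch illustrating the naming, indentation, and brace conventions above; all identifiers are hypothetical.

public class InventoryReport
{
    static final int MAX_ITEMS = 100;   // constant: capital letters only
    static int GlobalItemCount = 0;     // global (class-level) variable: starts with a capital letter

    static int countAvailableItems(int[] stockLevels)   // function name in camel case
    {
        int availableItems = 0;         // local variable: camel case, small first letter
        for (int level : stockLevels)   // nested block properly indented
        {
            if (level > 0)              // further nesting indented one more level
            {
                availableItems++;
            }
        }
        return availableItems;
    }
}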
Error return values and exception handling conventions: All functions that encounter an error condition should return a 0 or 1, to simplify debugging.
Coding guidelines, on the other hand, give some general suggestions regarding the coding style to be followed for better understandability and readability of the code. Some of the coding guidelines are given below:
Avoid using a coding style that is too difficult to understand: Code should be easily understandable. Complex code makes maintenance and debugging difficult and expensive.
Avoid using an identifier for multiple purposes: Each variable should be given a descriptive and meaningful name indicating the reason for using it. This is not possible if an identifier is used for multiple purposes, which can confuse the reader. Moreover, it leads to more difficulty during future enhancements.
Code should be well documented: The code should be properly commented so that it can be understood easily. Comments regarding the statements increase the understandability of the code.
Length of functions should not be very large: Lengthy functions are very difficult to understand. That’s why functions should be small enough to carry out a small piece of work, and lengthy functions should be broken into smaller ones, each completing a small task.
Try not to use GOTO statements: The GOTO statement makes the program unstructured, reduces the understandability of the program, and also makes debugging difficult.