Chatbot
CHAPTER 1
INTRODUCTION
A chatbot is a sophisticated AI-powered virtual assistant that uses natural language processing
and machine learning to interact with users like a human would. Gone are the days of rigid, scripted
responses: the chatbot is adaptable, constantly learning, and always ready to engage in dynamic
conversation.
The objective of this project is to design and implement an advanced chatbot that leverages
state-of-the-art artificial intelligence technologies to enhance its efficiency and user experience.
The application uses a PHP web server to search information for the user and to track the
location, condition, and usage of various assets. MySQL is used to store the
information.
1.5 FEASIBILITY STUDY
A feasibility study is a high-level capsule version of the entire systems analysis and design
process. The study begins by clarifying the problem definition, since the purpose of a feasibility
study is to determine whether the project is worth doing. Once an acceptable problem definition
has been generated, the analyst develops a logical model of the system, and a search for
alternatives is analyzed carefully. There are three parts to a feasibility study.
Operational Feasibility
Operational feasibility is the measure of how well a proposed system solves the problems,
takes advantage of the opportunities identified during scope definition, and satisfies the
requirements identified in the requirements analysis phase of system development. The
operational feasibility assessment focuses on the degree to which the proposed development
project fits in with the existing business environment and objectives with regard to
development schedule, delivery date, corporate culture, and existing business processes.
To ensure success, desired operational outcomes must be imparted during design
and development. These include such design-dependent parameters as reliability,
maintainability, supportability, usability, producibility, disposability, sustainability,
affordability, and others. These parameters must be considered at the early stages of
design if the desired operational behaviours are to be realised. System design and development
require appropriate and timely application of engineering and management effort to meet
the previously mentioned parameters. A system serves its intended purpose most
effectively when its technical and operating characteristics are engineered into the design.
Therefore, operational feasibility is a critical aspect of systems engineering that needs to be
an integral part of the early design phases.
Technical Feasibility
This involves questions such as whether the technology needed for the system exists, how
difficult it will be to build, and whether the firm has enough experience using that
technology. The assessment is based on an outline design of system requirements in terms of
input, processes, output, fields, programs, and procedures. This can be qualified in terms of
volume of data, trends, and frequency of updating, in order to give an introduction to the
technical system. The application has been developed on the Windows XP
platform with a configuration of 1 GB RAM on an Intel Pentium Dual-Core processor, so it
is technically feasible. The technical feasibility assessment is focused on gaining an
understanding of the present technical resources of the organization and their applicability
to the expected needs of the proposed system. It is an evaluation of the hardware and
software and how they meet the needs of the proposed system.
Economic Feasibility
This involves establishing the cost-effectiveness of the proposed system: if the benefits do not
outweigh the costs, then it is not worth going ahead. In today's fast-paced world there is a
great need for online social networking facilities, so the benefits of this project in the
current scenario make it economically feasible. The purpose of the economic feasibility
assessment is to determine the positive economic benefits to the organization that the
proposed system will provide. It includes identification and quantification of all the
benefits expected. This assessment typically involves a cost/benefit analysis.
CHAPTER 2
REQUIREMENT ANALYSIS
CHAPTER 3
PROPOSED SYSTEM
The proposed Chatbot Project aims to develop an advanced AI-powered virtual assistant
capable of engaging in human-like conversations with users. The chatbot will utilize cutting-
edge natural language processing (NLP) algorithms and machine learning models to
understand user queries, provide relevant information, and offer personalized assistance.
CHAPTER 4
DESIGN
UML Diagrams:
Actor:
A coherent set of roles that users of use cases play when interacting with the use
cases.
Use case:
A description of a set of sequences of actions that a system performs to yield an
observable result of value to an actor.
There are various kinds of diagrams in software design:
Use Case Diagram
Sequence Diagram
Class Diagram
Use case diagrams model behavior within a system and help the developers
understand what the user requires. The stick man represents what's called an
actor.
A use case diagram can be useful for getting an overall view of the system and
clarifying who can do what and, more importantly, what they can't do.
A use case diagram consists of use cases and actors and shows the interaction
between the use cases and actors.
The purpose is to show the interactions between the use case and actor, and
to represent the system requirements from the user's perspective.
An actor could be the end-user of the system or an external system.
USE CASE DIAGRAM:
SEQUENCE DIAGRAM:
CLASS DIAGRAM:
A class is a structure that contains both variables and methods.
The class diagram shows a set of classes, interfaces, and collaborations and
their relationships. It is the most common diagram in modeling object-oriented
systems and is used to give a static view of a system. It shows the
dependencies between the classes used in our system.
The interactions between the modules or classes of our project are shown
below. Each block contains the class name, variables, and methods.
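As an illustration of one such block, a class in the project's language bundles variables (attributes) and methods (operations); the class and member names below are purely illustrative, not taken from the project's actual code:

```php
<?php
// Hypothetical class corresponding to one block in the class diagram.
class ChatSession {
    private $userId;              // variables (attributes)
    private $messages = array();

    public function __construct($userId) {
        $this->userId = $userId;
    }

    public function addMessage($text) {   // methods (operations)
        $this->messages[] = $text;
    }

    public function messageCount() {
        return count($this->messages);
    }
}
?>
```

In the diagram, each such class appears as a box with the class name on top, variables in the middle, and methods at the bottom.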
CHAPTER 5
5.1.1 PHP
PHP is a server-side scripting language designed primarily for web development but also
used as a general-purpose programming language. Originally created by Rasmus Lerdorf in
1994, the PHP reference implementation is now produced by The PHP Development
Team. PHP originally stood for Personal Home Page, but it now stands for the recursive
acronym PHP: Hypertext Preprocessor.
PHP code may be embedded into HTML or HTML5 code, or it can be used in combination
with various web template systems, web content management systems and web frameworks.
PHP code is usually processed by a PHP interpreter implemented as a module in the web
server or as a Common Gateway Interface (CGI) executable. The web server combines the
results of the interpreted and executed PHP code, which may be any type of data, including
images, with the generated web page. PHP code may also be executed with a command-line
interface (CLI) and can be used to implement standalone graphical applications.
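As a minimal sketch of this embedding model, a PHP block can be placed directly inside an HTML page (the file name and variable below are illustrative); the server executes the PHP and sends only the resulting HTML to the browser:

```php
<!-- greeting.php: a hypothetical page mixing HTML and PHP -->
<html>
<body>
<h1>Welcome</h1>
<?php
// The interpreter runs this block on the server and merges
// its output into the surrounding HTML before responding.
$name = "Visitor";
echo "Hello, " . htmlspecialchars($name) . "!";
?>
</body>
</html>
```

The browser never sees the PHP source, only the generated markup.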
The standard PHP interpreter, powered by the Zend Engine, is free software released under
the PHP License. PHP has been widely ported and can be deployed on most web servers on
almost every operating system and platform, free of charge.
The PHP language evolved without a written formal specification or standard until 2014,
leaving the canonical PHP interpreter as a de facto standard. Since 2014, work has gone on to
create a formal PHP specification. PHP development began in 1994 when Rasmus
Lerdorf wrote several Common Gateway Interface (CGI) programs in C, which he used to
maintain his personal homepage. He extended them to work with web forms and to
communicate with databases, and called this implementation "Personal Home Page/Forms
Interpreter" or PHP/FI.
PHP/FI could help to build simple, dynamic web applications. To accelerate bug reporting
and to improve the code, Lerdorf initially announced the release of PHP/FI as "Personal
Home Page Tools (PHP Tools) version 1.0" on the Usenet discussion group
comp.infosystems.www.authoring.cgi on June 8, 1995. This release already had the basic
functionality that PHP has as of 2013. This included Perl-like variables, form handling,
and the ability to embed HTML. The syntax resembled that of Perl but was simpler, more
limited, and less consistent.
Lerdorf did not intend the early PHP to become a new programming language, but it grew
organically, as Lerdorf noted in retrospect. A development team began to form and, after
months of work and beta testing, officially released PHP/FI 2 in November 1997.
The fact that PHP lacked an original overall design but instead developed organically has led
to inconsistent naming of functions and inconsistent ordering of their parameters. In some
cases, the function names were chosen to match the lower-level libraries which PHP was
"wrapping", while in some very early versions of PHP the length of the function names was
used internally as a hash function, so names were chosen to improve the distribution of hash
values.
PHP 3 and 4
Zeev Suraski and Andi Gutmans rewrote the parser in 1997 and formed the base of PHP 3,
changing the language's name to the recursive acronym PHP: Hypertext Preprocessor.
Afterwards, public testing of PHP 3 began, and the official launch came in June 1998.
Suraski and Gutmans then started a new rewrite of PHP's core, producing the Zend Engine in
1999. They also founded Zend Technologies in Ramat Gan, Israel. On May 22, 2000, PHP 4,
powered by the Zend Engine 1.0, was released. As of August 2008 this branch reached
version 4.4.9. PHP 4 is no longer under development nor will any security updates be
released.
PHP 5
On July 13, 2004, PHP 5 was released, powered by the new Zend Engine II. PHP 5 included
new features such as improved support for object-oriented programming, the PHP Data
Objects (PDO) extension (which defines a lightweight and consistent interface for accessing
databases), and numerous performance enhancements. In 2008 PHP 5 became the only stable
version under development. Late static binding had been missing from PHP and was added in
version 5.3.
Many high-profile open-source projects ceased to support PHP 4 in new code as of February
5, 2008, because of the GoPHP5 initiative, provided by a consortium of PHP developers
promoting the transition from PHP 4 to PHP 5. Over time, PHP interpreters became available
on most existing 32-bit and 64-bit operating systems, either by building them from the PHP
source code, or by using pre-built binaries. For the PHP versions 5.3 and 5.4, the only
available Microsoft Windows binary distributions were 32-bit x86 builds, requiring Windows
32-bit compatibility mode while using Internet Information Services (IIS) on a 64-bit
Windows platform. PHP version 5.5 made the 64-bit x86-64 builds available for Microsoft
Windows.
PHP has received criticism due to lacking native Unicode support at the core language level,
instead only supporting byte strings. In 2005, a project headed by Andrei Zmievski was
initiated to bring native Unicode support throughout PHP, by embedding the International
Components for Unicode (ICU) library, and representing text strings as UTF-16
internally. Since this would cause major changes both to the internals of the language and
to user code, it was planned to release this as version 6.0 of the language, along with other
major features then in development.
However, a shortage of developers who understood the necessary changes, and performance
problems arising from conversion to and from UTF-16, which is rarely used in a web context,
led to delays in the project. As a result, a PHP 5.3 release was created in 2009, with many
non-Unicode features back-ported from PHP 6, notably namespaces. In March 2010, the
project in its current form was officially abandoned, and a PHP 5.4 release was prepared
containing most remaining non-Unicode features from PHP 6, such as traits and closure
re-binding. Initial hopes were that a new plan would be formed for Unicode integration, but as
of 2014 none have been adopted.
PHP 7
During 2014 and 2015, a new major PHP version was developed, numbered PHP 7.
The numbering of this version involved some debate. While the PHP 6 Unicode
experiment had never been released, several articles and book titles referenced the PHP 6
name, which might have caused confusion if a new release were to reuse the name. After a
vote, the name PHP 7 was chosen.

The foundation of PHP 7 is a PHP branch that was originally dubbed PHP next generation
(phpng). It was authored by Dmitry Stogov, Xinchen Hui and Nikita Popov, and aimed to
optimize PHP performance by refactoring the Zend Engine to use more compact data
structures with improved cache locality while retaining near-complete language
compatibility. As of 14 July 2014, WordPress-based benchmarks, which served as the main
benchmark suite for the phpng project, showed an almost 100% increase in performance.
Changes from phpng are also expected to make it easier to improve performance in the
future, as more compact data structures and other changes are seen as better suited for a
successful migration to a just-in-time (JIT) compiler. Because of the significant changes, the
reworked Zend Engine is called Zend Engine 3, succeeding Zend Engine 2 used in PHP 5.

Because of the major internal changes in phpng, it had to receive a new major version
number of PHP, rather than a minor PHP 5 release, according to PHP's release process.
Major versions of PHP are allowed to break backward compatibility of code, and therefore
PHP 7 presented an opportunity for other improvements beyond phpng that require
backward-compatibility breaks, including wider use of exceptions, reworking variable
syntax to be more consistent and complete, and the deprecation or removal of various legacy
features. PHP 7 also introduced new language features, including return type declarations for
functions, which complement the existing parameter type declarations, and support for the
scalar types (integer, float, string, and boolean) in parameter and return type declarations.
Data Types
The null data type represents a variable that has no value; NULL is the only allowed value
for this data type.
Variables of the "resource" type represent references to resources from external sources.
These are typically created by functions from a particular extension, and can only be
processed by functions from the same extension; examples include file, image, and database
resources.
Arrays can contain elements of any type that PHP can handle, including resources, objects,
and other arrays. Order is preserved in lists of values and in hashes with both keys and
values, and the two can be intermingled. PHP also supports strings, which can be used with
single quotes, double quotes, nowdoc or heredoc syntax. The Standard PHP Library (SPL)
attempts to solve standard problems and implements efficient data access interfaces and
classes.
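A brief sketch of the data types described above (the variable names and values are illustrative only):

```php
<?php
// Illustrative sketch of PHP's data types as described above.
$nothing = NULL;                  // null: the only value of the null type
$list = array(1, "two", 3.0);     // arrays may mix element types
$map = array("name" => "Chatbot", "lang" => "PHP"); // keys and values; order preserved

$single = 'No $interpolation here';   // single quotes: literal text
$double = "First element: {$list[0]}"; // double quotes: variables interpolated
$here = <<<EOT
Heredoc strings behave like double-quoted text
and may span multiple lines.
EOT;
?>
```

Resource variables are not shown, since they are only produced by extension functions (for example, opening a file or a database connection).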
Functions
PHP defines a large array of functions in the core language, and many are also available in
various extensions; these functions are well documented in the online PHP
documentation. However, the built-in library has a wide variety of naming conventions and
associated inconsistencies, as described under history above.
In lieu of function pointers, functions in PHP can be referenced by a string containing their
name. In this manner, normal PHP functions can be used, for example, as callbacks or
within function tables. User-defined functions may be created at any time without
being prototyped. Functions may be defined inside code blocks, permitting a run-time
decision as to whether or not a function should be defined. There is
a function_exists function that determines whether a function with a given name has already
been defined. Function calls must use parentheses, with the exception of zero-argument
class constructor functions called with the PHP operator new, in which case parentheses are
optional.
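A minimal sketch of referencing a function by a string containing its name, and of checking for its existence with function_exists (the function itself is hypothetical):

```php
<?php
// Unit to be called through a string reference.
function greet($who) {
    return "Hello, " . $who;
}

$callback = 'greet';               // a plain string holding the function name
if (function_exists($callback)) {  // has a function with this name been defined?
    echo $callback('world');       // variable-function call: prints Hello, world
}
?>
```

This is the mechanism that lets ordinary PHP functions serve as callbacks or entries in function tables.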
Until PHP 5.3, support for anonymous functions and closures did not exist in PHP,
although functions could be created at run time with create_function(), essentially a wrapper
around eval() that allows normal PHP functions to be created during program
execution. PHP 5.3 added syntax to define an anonymous function or "closure" which can
capture variables from the surrounding scope. For example, a getAdder($x) function can
create a closure using the passed argument $x (the keyword use imports a variable from the
lexical context); the closure takes an additional argument $y, and getAdder() returns the
created closure to the caller. Such a function is a first-class object, meaning that it can be
stored in a variable, passed as a parameter to other functions, etc.
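The getAdder() closure described here can be sketched as follows:

```php
<?php
function getAdder($x) {
    // 'use' imports $x from the enclosing scope into the closure.
    return function ($y) use ($x) {
        return $x + $y;
    };
}

$add5 = getAdder(5);   // the closure is a first-class value
echo $add5(3);         // prints 8
?>
```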
5.1.2 HTML
Hypertext Markup Language (HTML) is the standard markup language for creating web
pages and web applications. With Cascading Style Sheets (CSS) and JavaScript, it forms a
triad of cornerstone technologies for the World Wide Web. Web browsers receive HTML
documents from a web server or from local storage and render them into multimedia web
pages. HTML describes the structure of a web page semantically and originally included cues
for the appearance of the document.

HTML elements are the building blocks of HTML pages. With HTML constructs, images
and other objects, such as interactive forms, may be embedded into the rendered page.
HTML provides a means to create structured documents by denoting structural semantics
for text such as headings, paragraphs, lists, links, quotes, and other items. HTML elements
are delineated by tags, written using angle brackets. Tags such as <img /> and <input />
introduce content into the page directly. Others, such as <p>...</p>, surround and provide
information about document text and may include other tags as sub-elements. Browsers do
not display the HTML tags, but use them to interpret the content of the page.

HTML can embed programs written in a scripting language such as JavaScript, which affect
the behavior and content of web pages. Inclusion of CSS defines the look and layout of
content. The World Wide Web Consortium (W3C), maintainer of both the HTML and the
CSS standards, has encouraged the use of CSS over explicit presentational HTML.
5.1.3 MySQL
MySQL is an open-source relational database management system (RDBMS). For
proprietary use, several paid editions are available and offer additional functionality.
MySQL is a central component of the LAMP open-source web application
software stack (and other "AMP" stacks). LAMP is an acronym for "Linux, Apache,
MySQL, Perl/PHP/Python". Applications that use the MySQL database
include TYPO3, MODx, Joomla, WordPress, phpBB, MyBB, and Drupal. MySQL is also
used in many high-profile, large-scale websites, including Google (though not for
searches), Facebook, Twitter, Flickr, and YouTube.
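As a minimal sketch of how the project's PHP code might read chatbot data from MySQL using the PDO extension mentioned earlier (the database name, credentials, table, and column names below are hypothetical, not the project's actual configuration):

```php
<?php
// Hypothetical connection details; the real project's config differs.
$pdo = new PDO('mysql:host=localhost;dbname=chatbot_db', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Prepared statements keep user input out of the SQL string itself.
$stmt = $pdo->prepare('SELECT response FROM responses WHERE keyword = ?');
$stmt->execute(array('hello'));

$row = $stmt->fetch(PDO::FETCH_ASSOC);
echo $row ? $row['response'] : 'No match found';
?>
```

PDO gives the application a consistent interface to the database regardless of which driver is underneath, which is why it is a natural fit here.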
CODING
Index.php
<!DOCTYPE html>
<div class="wrapper">
<div class="content-header">
</div>
<section class="content">
<div class="container">
<?php
// The opening condition of this block was lost from the listing; a
// plausible reconstruction is shown here: fall back to the 404 page
// when the requested page does not exist.
if(!file_exists($page.'.php') && !is_dir($page)){
	include '404.html';
}else{
	if(is_dir($page))
		include $page.'/index.php';
	else
		include $page.'.php';
}
?>
</div>
</section>
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">Confirmation</h5>
</div>
<div class="modal-body">
<div id="delete_content"></div>
</div>
<div class="modal-footer">
</div>
</div>
</div>
</div>
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title"></h5>
</div>
<div class="modal-body">
</div>
<div class="modal-footer">
</div>
</div>
</div>
</div>
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title"></h5>
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
</button>
</div>
<div class="modal-body">
</div>
</div>
</div>
</div>
<div class="modal-content">
</div>
</div>
</div>
</div>
</body>
</html>
Portal.php
<style>
#chat_convo{
	max-height: 65vh;
}
#chat_convo .direct-chat-messages{
	min-height: 250px;
	height: inherit;
}
#chat_convo .card-body {
	overflow: auto;
}
</style>
<div class="container-fluid">
<div class="row">
<div class="card-tools">
</button>
</div>
</div>
<div class="card-body">
<div class="direct-chat-messages">
<div class="direct-chat-text">
</div>
</div>
</div>
<div class="end-convo"></div>
</div>
<div class="card-footer">
<div class="input-group">
<span class="input-group-append">
</span>
</div>
</form>
</div>
<!-- /.card-footer-->
</div>
</div>
</div>
</div>
<div class="direct-chat-text"></div>
</div>
</div>
<div class="direct-chat-text"></div>
</div>
</div>
<script type="text/javascript">
	$(document).ready(function(){
		// Send the message when Enter is pressed in the input field.
		$('[name="message"]').keypress(function(e){
			if(e.which == 13){
				$('#send_chat').submit();
				return false;
			}
		})
		$('#send_chat').submit(function(e){
			e.preventDefault();
			var message = $('[name="message"]').val();
			// uchat and bot_chat are assumed to be cloned chat-bubble
			// templates; the markup defining them was lost from this listing.
			var uchat = $('#user_chat_template').clone();
			var bot_chat = $('#bot_chat_template').clone();
			uchat.find('.direct-chat-text').html(message);
			$('#chat_convo .direct-chat-messages').append(uchat.html());
			$('[name="message"]').val('');
			$.ajax({
				url: _base_url_ + "classes/Master.php?f=get_response",
				method: 'POST',
				data: { message: message },
				success: function(resp){
					if(resp){
						resp = JSON.parse(resp);
						if(resp.status == 'success'){
							bot_chat.find('.direct-chat-text').html(resp.message);
							$('#chat_convo .direct-chat-messages').append(bot_chat.html());
							// Scroll the conversation down to the newest message.
							$('#chat_convo .card-body').animate({ scrollTop: $('#chat_convo .card-body').prop('scrollHeight') }, 'fast');
						}
					}
				}
			})
		})
	})
</script>
OUTPUT SCREENS
Home Page:
Conversation Page:
Response Page:
Settings Page:
CHAPTER 6
TESTING AND IMPLEMENTATION
The term implementation has different meanings, ranging from the conversion of a basic
application to a complete replacement of a computer system. The procedures, however, are
virtually the same. Implementation includes all those activities that take place to convert from
the old system to the new. The new system may be totally new, replacing an existing manual or
automated system, or it may be a major modification to an existing system. The method of
implementation and the time scale to be adopted are determined initially. Proper implementation
is essential to provide a reliable system that meets organization requirements.
6.1.1 Unit Testing
In computer programming, unit testing is a software testing method by which individual units
of source code, sets of one or more computer program modules together with associated
control data, usage procedures, and operating procedures, are tested to determine whether
they are fit for use. Intuitively, one can view a unit as the smallest testable part of an
application. In procedural programming, a unit could be an entire module, but it is more
commonly an individual function or procedure. In object-oriented programming, a unit is
often an entire interface, such as a class, but could be an individual method. Unit tests are
short code fragments created by programmers or, occasionally, by white-box testers during the
development process. Unit testing forms the basis for component testing. Ideally, each test
case is independent of the others. Substitutes such as method stubs, mock objects, fakes, and
test harnesses can be used to assist in testing a module in isolation. Unit tests are typically
written and run by software developers to ensure that code meets its design and behaves as
intended.
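In its simplest form, a unit test exercises one function and checks the result. A PHP sketch without any framework, using plain assertions (the function under test is hypothetical):

```php
<?php
// Unit under test: a small, isolated function.
function normalizeMessage($text) {
    return strtolower(trim($text));
}

// Unit tests: each case is independent of the others.
assert(normalizeMessage('  Hello  ') === 'hello');
assert(normalizeMessage('HI') === 'hi');
echo "All unit tests passed";
?>
```

In practice a framework such as PHPUnit would organize such checks into test classes, but the principle is the same: small, isolated checks against one unit at a time.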
6.1.1.1 Benefits
The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. A unit test provides a strict, written contract that the piece of code must
satisfy. As a result, it affords several benefits.
Find Problems Early
Unit testing finds problems early in the development cycle. In test-driven development,
unit tests are created before the code itself is written; when the tests pass, the code is
considered complete. The same unit tests are run against that function frequently as the
larger code base is developed, either as the code is changed or via an automated process
with the build. If the unit tests fail, it is considered to be a bug either in the changed code
or the tests themselves. The unit tests then allow the location of the fault or failure to be
easily traced. Since the unit tests alert the development team of the problem before handing
the code off to testers or clients, it is still early in the development process.
Facilitates Change
Unit testing allows the programmer to refactor code or upgrade system libraries at a
later date, and make sure the module still works correctly (e.g., in regression testing).
The procedure is to write test cases for all functions and methods so that whenever a
change causes a fault, it can be quickly identified. Unit tests detect changes which
may break a design contract.
Simplifies Integration
Unit testing may reduce uncertainty in the units themselves and can be used in
a bottom-up testing style approach. By testing the parts of a program first and then
testing the sum of its parts, integration testing becomes much easier.
Documentation
Unit testing provides a sort of living documentation of the system. Developers looking to
learn what functionality is provided by a unit can look at the unit tests to gain a basic
understanding of its interface.
Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase
in software testing in which individual software modules are combined and tested as a group.
It occurs after unit testing and before validation testing. Integration testing takes as its
input modules that have been unit tested, groups them in larger aggregates, applies tests
defined in an integration test plan to those aggregates, and delivers as its output the integrated
system ready for system testing.
Purpose
Big Bang
In the big-bang approach, most of the developed modules are coupled together to
form a complete software system or major part of the system and then used for
integration testing. This method is very effective for saving time in the integration
testing process. However, if the test cases and their results are not recorded properly,
the entire integration process will be more complicated and may prevent the testing
team from achieving the goal of integration testing. A type of big-bang integration
testing is called "usage model testing" which can be used in both software and
hardware integration testing. The basis behind this type of integration testing is to run
user-like workloads in integrated user-like environments. In doing the testing in this
manner, the environment is proofed, while the individual components are proofed
indirectly through their use. Usage Model testing takes an optimistic approach to
testing, because it expects to have few problems with the individual components. The
strategy relies heavily on the component developers to do the isolated unit testing for
their product. The goal of the strategy is to avoid redoing the testing done by the
developers, and instead flesh-out problems caused by the interaction of the
components in the environment. For integration testing, Usage Model testing can be
more efficient and provides better test coverage than traditional focused functional
integration testing. To be more efficient and accurate, care must be used in defining
the user-like workloads for creating realistic scenarios in exercising the environment.
This gives confidence that the integrated environment will work as expected for the
target customers.
In software project management, software testing, and software engineering, verification and
validation (V&V) is the process of checking that a software system meets specifications and
that it fulfills its intended purpose. It may also be referred to as software quality control. It is
normally the responsibility of software testers as part of the software development lifecycle.
Validation checks that the product design satisfies or fits the intended use (high-level
checking), i.e., that the software meets the user requirements. This is done through dynamic
testing and other forms of review. Verification and validation are not the same thing,
although they are often confused. Boehm succinctly expressed the difference between
them: verification asks "Are we building the product right?", while validation asks "Are we
building the right product?"
In other words, software verification is ensuring that the product has been built according to
the requirements and design specifications, while software validation ensures that the product
meets the user's needs, and that the specifications were correct in the first place. Software
verification ensures that "you built it right". Software validation ensures that "you built the
right thing". Software validation confirms that the product, as provided, will fulfill its
intended use.
Both verification and validation are related to the concepts of quality and of software quality
assurance. By themselves, verification and validation do not guarantee software quality;
planning, traceability, configuration management, and other aspects of software engineering
are required. Within the modeling and simulation (M&S) community, the definitions of
verification, validation, and accreditation are similar:
M&S Verification is the process of determining that a computer model, simulation, or
federation of models and simulations implementations and their associated data
accurately represent the developer's conceptual description and specifications.
M&S Validation is the process of determining the degree to which a model, simulation,
or federation of models and simulations, and their associated data are accurate
representations of the real world from the perspective of the intended use(s).
Accreditation is the formal certification that a model or simulation is acceptable to be
used for a specific purpose.
The definition of M&S validation focuses on the accuracy with which the M&S represents
the real-world intended use(s). Determining the degree of M&S accuracy is required because
all M&S are approximations of reality, and it is usually critical to determine if the degree of
approximation is acceptable for the intended use(s). This stands in contrast to software
validation.
Classification of Methods
In mission-critical software systems, where flawless performance is absolutely
necessary, formal methods may be used to ensure the correct operation of a system.
However, often for non-mission-critical software systems, formal methods prove to be
very costly and an alternative method of software V&V must be sought out. In such
cases, syntactic methods are often used.
Test Cases
A test case is a tool used in the testing process. Test cases may be prepared for software
verification and software validation to determine whether the product was built according to
the requirements of the user. Other methods, such as reviews, may be used early in the
life cycle to provide for software validation.
Test Procedures
Specific knowledge of the application's code or internal structure, and programming
knowledge in general, are not required. The tester is aware of what the software is
supposed to do but is not aware of how it does it. For instance, the tester is aware that
a particular input returns a certain, invariable output but is not aware of how the
software produces the output in the first place.
Test Cases
Test cases are built around specifications and requirements, i.e., what the application
is supposed to do. Test cases are generally derived from external descriptions of the
software, including specifications, requirements and design parameters. Although the
tests used are primarily functional in nature, non-functional tests may also be used.
The test designer selects both valid and invalid inputs and determines the correct
output, often with the help of an oracle or a previous result that is known to be good,
without any knowledge of the test object's internal structure.
Test Design Techniques
Typical black-box test design techniques include:
Equivalence partitioning
Boundary value analysis
Decision table testing
State transition testing
Error guessing
White-box testing (also known as clear box testing, glass box testing, transparent box testing,
and structural testing) is a method of testing software that tests internal structures or workings
of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing
an internal perspective of the system, as well as programming skills, are used to design test
cases. The tester chooses inputs to exercise paths through the code and determine the
appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit
testing (ICT). White-box testing can be applied at the unit, integration and system levels of
the software testing process. Although traditional testers tended to think of white-box testing
as being done at the unit level, it is used for integration and system testing more frequently
today. It can test paths within a unit, paths between units during integration, and between
subsystems during a system–level test. Though this method of test design can uncover many
errors or problems, it has the potential to miss unimplemented parts of the specification or
missing requirements.
White-box test design techniques include the following code coverage criteria:
Control flow testing
Data flow testing
Branch testing
Statement coverage
Decision coverage
Modified condition/decision coverage
White-box testing examines the application at the level of the source code. Test cases are derived using the design techniques listed above: control flow testing, data flow testing, branch testing, path testing, statement coverage, decision coverage, and modified condition/decision coverage. These techniques serve as guidelines for creating an error-free product by examining any fragile code; they are the building blocks of white-box testing, whose essence is the careful exercise of the application at the source-code level to prevent hidden errors from surfacing later. [1] Together, these techniques exercise every visible path of the source code to minimize errors. The central point of white-box testing is knowing which line of code is being executed and being able to identify what the correct output should be.
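One of the coverage criteria above, branch coverage, can be made concrete with a small sketch. The function below is a hypothetical example (not from this project): the tester reads the source, identifies the branches, and picks one input per branch so that every branch executes at least once.

```python
# Hypothetical white-box example: choosing inputs for branch coverage.
# The tester inspects the source and selects one input per branch.

def classify_bmi(bmi: float) -> str:
    if bmi < 18.5:          # branch 1
        return "underweight"
    elif bmi < 25.0:        # branch 2
        return "normal"
    else:                   # branch 3
        return "overweight"

# One input per branch yields 100% branch coverage for this function.
assert classify_bmi(17.0) == "underweight"  # exercises branch 1
assert classify_bmi(22.0) == "normal"       # exercises branch 2
assert classify_bmi(30.0) == "overweight"   # exercises branch 3
```

Unlike the black-box cases earlier, these inputs could only be chosen by reading the code: the thresholds 18.5 and 25.0 come from the implementation, not from an external specification.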
6.1.5.1 Levels
1. Unit testing. White-box testing is done during unit testing to ensure that the code works as intended before it is integrated with previously tested code. Catching defects at this early stage prevents them from surfacing later, after the code has been integrated with the rest of the application.
2. Integration testing. White-box tests at this level are written to exercise the interactions of the interfaces with each other. Unit-level testing ensured that each piece of code worked correctly in isolation; integration testing examines the correctness of the combined behaviour, using the programmer's knowledge of the interfaces involved.
3. Regression testing. White-box testing during regression testing is the use of recycled
white-box test cases at the unit and integration testing levels.
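The three levels above can be sketched with two tiny cooperating units. The function names (`parse_quantity`, `total_price`) are hypothetical stand-ins, not part of this project:

```python
# Illustrative sketch of unit- versus integration-level white-box
# tests for two small cooperating units (hypothetical names).

def parse_quantity(text: str) -> int:
    # Unit 1: extract the leading integer from a string like "3 pcs".
    return int(text.split()[0])

def total_price(quantity: int, unit_price: float) -> float:
    # Unit 2: compute a line total.
    return quantity * unit_price

# Unit testing: each unit is exercised in isolation.
assert parse_quantity("3 pcs") == 3
assert total_price(3, 2.5) == 7.5

# Integration testing: the interface between the units is exercised --
# the output of parse_quantity feeds directly into total_price.
assert total_price(parse_quantity("4 pcs"), 10.0) == 40.0

# Regression testing: re-running these same cases after any change
# recycles the unit- and integration-level white-box tests.
```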
6.1.5.2 Basic Procedure
White-box testing's basic procedure requires the tester to have a deep understanding of the source code being tested: the programmer must know the application well enough to create test cases that exercise every visible path. Once the source code is understood, it can be analysed to derive test cases. White-box testing follows three basic steps to create test cases:
1. Input: requirements, functional specifications, detailed design documents, and the source code itself.
2. Processing: performing risk analysis and preparing a test plan to guide the whole testing process.
3. Output: preparing the final report covering the preparations above and the results obtained.
6.1.5.3 Advantages
White-box testing is one of the two biggest testing methodologies used today. It has several
major advantages:
1. Knowledge of the source code is beneficial to thorough testing, since the tester can exercise exactly the code that exists.
2. It aids code optimization by revealing hidden errors so that these defects can be removed.
3. It gives the programmer introspection, because developers must carefully describe any new implementation.
4. It provides traceability of tests from the source, allowing future changes to the software to be easily captured in changes to the tests.
5. White-box tests are easy to automate.
6. White-box testing gives clear, engineering-based rules for when to stop testing.
6.1.5.4 Disadvantages
Although white-box testing has great advantages, it is not perfect and contains some
disadvantages:
1. White-box testing adds complexity: the tester must have programming knowledge and a detailed understanding of the program, and the depth of testing required demands a highly skilled programmer.
2. On some occasions it is not realistic to test every single condition of the application, so some conditions will remain untested.
3. The tests focus on the software as it exists; missing functionality or unimplemented parts of the specification may not be discovered.
CHAPTER 7
CONCLUSION
CHAPTER 8
REFERENCES
https://www.chatbot.com/integrations/chat-widget/
https://blog.hubspot.com/marketing/best-ai-chatbot/
https://zapier.com/blog/best-ai-chatbot/
https://www.forbes.com/advisor/business/software/best-chatbots/
https://www.g2.com/articles/chatbot
https://powervirtualagents.microsoft.com/en-in/build-a-chatbot/