Chapter

Updates on Software Usability


Ahmed MateenButtar and MuhammadMajid

Abstract

Network security ensures that essential network assets remain available and protected from viruses, keyloggers, hackers, and unauthorized access. The intrusion detection system (IDS) is one of the most widespread and significant tools for network security management. However, current IDSs have been shown to be challenging for network professionals to use. The interface, through which users evaluate the software, is a crucial aspect that influences the effectiveness of security software such as IDS. Usability testing is essential for supporting users in successful interaction with and utilization of an IDS, because users find it difficult to assess and use the output quality. Usability evaluations are mostly carried out by usability engineers. Software engineers in small and large businesses must master multiple usability paradigms, which is more difficult than teaching usability engineers how to write software. The Cognitive Analysis of Software Interface (CASI) technique was created as a solution for software engineers. This system aids software engineers in evaluating IDS based on user perception and evaluation perspectives. This study also reviews a large body of research on software interfaces and assessment procedures in order to evaluate novel heuristics for IDS. Finally, additional interface challenges and new ways of evaluating software usability are discussed.
Topic Subject Areas: Intrusion Detection System (IDS) Usability.
Topic Subject Areas: Intrusion Detection System (IDS) Usability.

Keywords: IDS, usable security, heuristics evaluation, cognitive analysis, SDLC

1. Introduction

The Internet has evolved rapidly in recent years, and users are confronted with network security issues. Many firms are concerned about protecting their valuable and private data from threats inside and outside the organization. Human and organizational factors, according to research, have an impact on network security. Security is a challenge for network practitioners. As a result, they employ specific tools, such as intrusion detection systems, firewalls, antivirus software, and Nmap, among others, to reduce or completely eradicate intrusions. An intrusion detection system (IDS) is critical in detecting malevolent behavior quickly and supporting real-time attack response. But many intrusion detection systems are challenging to use, and users cannot take advantage of all of their functions. These issues must be addressed to boost IDS efficiency. One option is to create an effective solution that may assist network administrators in controlling security. Usability is a critical factor that has a significant impact on security management. Software developers acknowledge that the software interface is critical to its success, and this success can be measured in terms of software usability. Usability describes the quality of a user's experience when interacting with products or systems, including websites, software, devices, or applications. Usability is an essential term in the human-computer interaction (HCI) discipline. One option to overcome the issues of IDS is to create a user-friendly interface that assists network experts in effectively managing security.

2. Usability

The way businesses and people interact has been altered by services such as Twitter, created in 2006, and usability is a crucial aspect of the quality of such software. ISO 9241 describes usability as the degree to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use: the capacity of the product to be understood, learned, and enjoyed by the user when used in the prescribed settings []. These definitions emphasize usability as a key component of software that enables users to do tasks quickly and without any issues. Nielsen lists five characteristics essential to usability: learnability, efficiency, memorability, a low error rate, and satisfaction.

From the user's point of view, usability ensures that the product is easy to learn, use, and remember, so that the objectives of effectiveness, adequacy, security, utility, learnability, and memorability are reached. HCI's scope has grown, and the task-focused usability paradigm has expanded to include a refined and hedonic user experience (UX) paradigm.

Various methodologies assess the usability of software. There are two families of usability testing techniques: usability evaluation methods and usability testing methods. In usability evaluation, usability issues are identified by usability professionals. In usability testing methods, however, usability issues are found by observing how users utilize the system and interact with the software interface.

3. Heuristic evaluation

Users believe that testing applications is an essential step in making them better. Heuristic evaluation is a well-known, low-cost approach to usability testing. According to some authors [], heuristics and recommendations can be used interchangeably. A large share of usability flaws can be identified this way []. However, a collection of heuristics has never been designed expressly for evaluating security-related applications. The project's objective at this stage is to create criteria for assessing usability for this particular problem space. These strategies are used to evaluate the quality of existing products and to discover demands that products can meet. For the heuristic evaluation, Snort was selected as a candidate application. Snort is a simple yet popular intrusion detection system that can track and record IP traffic. Because it is a command-line tool, a web-based application was used; Silicon Defense has created a user-interface front end for it.
Usability testing can be done in various ways, including cognitive walkthroughs, formal usability inspections, heuristic evaluations, and pluralistic walkthroughs. Heuristic evaluation was additionally used to assess the usability of IDS, with heuristics specifically developed for IDS. Heuristic evaluation entails a small group of usability specialists looking through the system and comparing it with usage standards. Users can assess the usability of an IDS and identify and address usability issues more successfully by employing the new heuristics.
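As a concrete illustration of how heuristic-evaluation findings are typically recorded, the short Python sketch below aggregates issues reported by a small group of evaluators against a set of heuristics and ranks them by severity. The heuristic names, issues, and the 0-4 severity scale (in the style of Nielsen's severity ratings) are illustrative assumptions, not data from this chapter.

from collections import defaultdict

# Illustrative findings: (evaluator, heuristic violated, issue, severity 0-4).
findings = [
    ("eval1", "Visibility of system status", "No progress bar during scan", 3),
    ("eval2", "Visibility of system status", "No progress bar during scan", 3),
    ("eval2", "Help and documentation", "Alert codes are unexplained", 4),
    ("eval3", "Error prevention", "No confirmation before rule delete", 2),
]

def summarize(findings):
    """Merge duplicate issues and keep the highest severity reported."""
    merged = defaultdict(lambda: {"severity": 0, "reported_by": set()})
    for evaluator, heuristic, issue, severity in findings:
        entry = merged[(heuristic, issue)]
        entry["severity"] = max(entry["severity"], severity)
        entry["reported_by"].add(evaluator)
    # Most severe, most frequently reported issues first.
    return sorted(merged.items(),
                  key=lambda kv: (-kv[1]["severity"], -len(kv[1]["reported_by"])))

for (heuristic, issue), info in summarize(findings):
    print(f"[sev {info['severity']}] {heuristic}: {issue} "
          f"(reported by {len(info['reported_by'])} evaluators)")

Merging duplicates across evaluators is what makes the small-team approach cheap: a handful of evaluators typically rediscover the same high-severity problems.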
However, given that assessment can be expensive in terms of time, money, and human effort, semi-automated or fully automated evaluation is a viable option to improve current assessment approaches. Additionally, research reveals [] the significance of a dedicated framework in facilitating usability assessment. In software projects, utilizing an automated or semi-automated review framework is essential to guarantee the project's adequacy, particularly when the deadline is tight. To guarantee project success, one choice is to further improve manual evaluation using automation or semi-automation. This will help assessors follow guided processes and catch more mistakes significantly faster. Finally, the assessment's discoveries are summarized and presented to the design team, along with suggestions for improvement.

4. Intrusion detection system

An IDS continuously monitors and evaluates events within a computer system or network for precursors of incidents: violations or imminent threats of violation of computer security guidelines, acceptable use policies, or standard security practices. The intrusion detection system (IDS) is a network-specific security arrangement that screens the network for unauthorized access. In IDS, users deal with two essential issues: the first relates to the state of the art and the second to the state of practice; that is, the strategies or algorithms used to recognize an attack, and the human interface that permits security administrators or network specialists to identify and respond to the attack rapidly. Different techniques and algorithms are being developed to expand IDS's capacity to distinguish unauthorized network access, as depicted [] in Figure 1. On the other hand, when the UI is not good, functional software frequently fails.

Figure 1.
IDS architectural data flow diagram.

Traditionally, IDS users have been network officers; however, the benefits of employing IDS have become so well known that users today extend to PC users who need to monitor the network traffic passing through their business. There are three different types of clients: network administrators, security-trained professionals, and software engineers. A network developer's skill is the ability to design networks with traffic in mind. While LAN professionals manage and support an organization's LAN, security professionals have a comprehensive understanding of technology, including anti-virus, strong authentication, intrusion detection, and biometrics. While intrusion detection systems watch networks for possible hostile activities, they are prone to false alarms. Thus, when enterprises first deploy IDS products, they must tune them: properly configuring the intrusion detection systems to distinguish genuine network traffic from malicious activity.

Intrusion prevention systems screen the network packets entering the systems to search for malicious activity and immediately give warning signals.

5. Intrusion detection system classification

5.1 NIDS (network intrusion detection system)

NIDS are implemented at a prearranged point within the network to examine traffic from all connected devices. A NIDS inspects all subnetwork communication and compares it against a database of recognized threats. An alert can be sent to the administrator whenever an attack is identified or abnormal behavior is found. To determine whether someone is attempting to breach the firewall, NIDS are installed on the subnet where firewalls sit.

5.2 HIDS (host intrusion detection system)

HIDS are intrusion detection systems that run on independent hosts or devices. A HIDS monitors only the device's incoming and outgoing packets, alerting the administrator if suspicious or malicious activity is found. It compares the current snapshot of existing system files against previous snapshots; an alarm is sent to the administrator if critical system files have been modified or deleted. HIDS can be seen in action on mission-critical machines, which are not expected to change their configuration.
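The snapshot comparison a HIDS performs can be pictured with a minimal file-integrity check: hash the monitored files once as a baseline, then re-hash them later and flag anything modified or deleted. This is a simplified sketch of the general idea, not the mechanism of any specific HIDS product; the monitored paths are examples.

import hashlib
from pathlib import Path

def snapshot(paths):
    """Return {path: SHA-256 digest} for every existing monitored file."""
    digests = {}
    for p in map(Path, paths):
        if p.is_file():
            digests[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return digests

def compare(baseline, current):
    """Yield alerts for files modified or removed since the baseline."""
    for path, digest in baseline.items():
        if path not in current:
            yield f"ALERT: {path} was deleted"
        elif current[path] != digest:
            yield f"ALERT: {path} was modified"

# Usage: take a baseline at install time, then re-check periodically.
monitored = ["/etc/passwd", "/etc/hosts"]   # example paths
baseline = snapshot(monitored)
# ... later ...
for alert in compare(baseline, snapshot(monitored)):
    print(alert)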

5.3 PIDS (protocol-based intrusion detection system)

PIDS is a system that typically sits at the front end of a server, supervising and interpreting the protocol between the client/device and the server. It attempts to secure the web server by regularly monitoring the HTTPS protocol stream and accepting the related HTTP protocol. Because HTTPS is not decrypted until it reaches the web presentation layer, this system needs to reside at that interface in order to work with HTTPS.

5.4 APIDS (application protocol-based intrusion detection system)

A system that resides within a group of servers is called an APIDS. It identifies intrusions by monitoring and analyzing application-specific protocol traffic, for instance, the way the SQL protocol of the middleware communicates with the database on the web server.

5.5 HIDS (hybrid intrusion detection system)

A hybrid IDS is made by combining at least two intrusion detection technologies. The host agent or system data are merged with network data to get a complete view of the monitored systems in the hybrid intrusion detection system. The hybrid intrusion detection system shown in Figure 2 is more powerful [].

Figure 2.
Life cycle or system flow diagram.

6. Detection methods of IDS

6.1 Signature-based method

A signature-based IDS recognizes attacks based on specific patterns in network traffic, such as the number of 1s or 0s. It also identifies malware based on the previously observed malicious instruction sequences of known infections. The detected patterns are known as signatures. While fresh malware attacks are hard to recognize because their pattern signature is unknown, a signature-based IDS can quickly identify attacks whose pattern signature already exists in the system.
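At its core, signature matching is a search of captured traffic for known byte patterns. The Python sketch below, with made-up signatures, shows the idea; real systems such as Snort use far richer rule languages and optimized multi-pattern matching.

# Toy signature database: name -> byte pattern indicating an attack.
# The patterns here are illustrative only.
SIGNATURES = {
    "directory-traversal": b"../..",
    "sql-injection":       b"' OR '1'='1",
    "shellcode-nop-sled":  b"\x90" * 16,
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in a captured payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packet = b"GET /../../etc/passwd HTTP/1.1\r\nHost: victim\r\n\r\n"
hits = match_signatures(packet)
if hits:
    print("ALERT:", ", ".join(hits))   # known attack -> alert immediately
else:
    print("no known signature")        # novel attacks pass unnoticed

The last comment is the method's key weakness, developed further below: traffic matching no stored signature is silently accepted.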

Figure 3.
Internal life cycle model.


Quality Control - An Anthology of Cases

6.2 Anomaly-based methods

An anomaly-based IDS is designed to identify the hazards posed by unknown malware, because new malware is being produced at a rapid rate. In anomaly-based intrusion detection systems, machine learning is used to build a trustworthy model of normal activity, and anything that arrives is compared against that model and flagged as suspect if it is not recognized. Machine-learning-based IDS, represented in Figure 3, has a better generalized property than signature-based IDS because these models can be trained according to the applications and hardware configurations [].

7. Software interface cognitive analysis

According to studies [], the software is currently being developed by businesses


that can test ease of use completely on their own or with very little assistance from
humans. This is because many businesses dislike hiring convenience specialists,
because it examines and evaluates customer discernments, such as what clients think
of the connection point, how they associate with it, and how they believe it should be.
CASI is a strategy that helps programmers and IT clients assess user interface without
needing to enlist convenience specialists. CASI does not just recognize convenience
shortcomings in a framework’s connection point, yet, in addition, makes suggestions
to further develop it and make it more intelligent for the client.
The product connection point is essential in deciding the ease of use of program-
ming. Every IDS interface in CASI is evaluated for usability, and flaws are identified.
To show this test, the proposed IDS heuristics are executed on the IDS connection
point. Proposed heuristics are installed in CASI and run on each ID connection point
to recognize and suggest ease-of-use issues. The IDS connection point is picked and
organized by the client’s prerequisites at the primary level. Those IDSs that are as yet
being created can be used to work on their helpfulness during the advancement stage.
The authors believe that users should choose a single IDS point of interaction and
then execute the suggested heuristics at the following level.
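The chapter describes CASI as running the proposed heuristics over an IDS interface and reporting both flaws and improvement suggestions. Below is a hypothetical sketch of that pattern: each heuristic is a rule applied to a simple machine-readable description of the interface, returning a finding plus a recommendation. The interface fields and rules are invented for illustration and are not CASI's actual checks.

# A toy machine-readable description of an IDS interface (fields assumed).
interface = {
    "has_help_menu": False,
    "alert_uses_color_only": True,
    "max_menu_depth": 5,
}

# Each rule: (condition on the interface, reported flaw, recommendation).
RULES = [
    (lambda ui: not ui["has_help_menu"],
     "No built-in help", "Add a searchable help menu for alert codes"),
    (lambda ui: ui["alert_uses_color_only"],
     "Alerts distinguished by color alone", "Add icons/text labels to alerts"),
    (lambda ui: ui["max_menu_depth"] > 3,
     "Menus nested too deeply", "Flatten navigation to at most 3 levels"),
]

def evaluate(ui):
    return [(flaw, fix) for check, flaw, fix in RULES if check(ui)]

for flaw, fix in evaluate(interface):
    print(f"FLAW: {flaw}\n  SUGGESTION: {fix}")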

8. IDPS methodologies

IDPSs utilize various approaches to detect changes in the systems they monitor. External attacks or internal staff misuse can cause these changes. Four techniques stand out among the many others and are commonly utilized. The four options are as follows:

• Signature-based,

• Anomaly-based,

• Stateful protocol analysis-based, and

• Hybrid-based.

The hybrid strategy, which combines multiple approaches to give superior detection and prevention capabilities, is utilized by most current IDPS systems. Each methodology follows a similar broad framework; the main variations lie in how they analyze data from the observed environment to determine whether a policy violation has happened, as explained in Table 1 [].

8.1 Anomaly-based methodology

The anomaly-based technique compares observed activity to a baseline profile. The baseline profile is the monitored system's learned normal behavior, created during the learning period when the IDPS learns the environment and builds a normal profile. This environment can include networks, users, and systems, among other things. Fixed or dynamic profiles are available. A fixed profile stays constant over time, while a dynamic profile varies as the monitored systems change. A dynamic profile adds significant overhead to the system because the IDPS keeps refreshing it, which also makes it vulnerable to evasion: by spreading an attack over an extensive period, an attacker can sidestep an IDPS that uses a dynamic profile.

8.2 Signature-based methodology

The signature-based approach compares observed signatures to signatures stored on record: a database or a list of known attack signatures. Any signature in the monitored environment that matches a stored signature is marked as a security policy violation or an attack. Since it does not evaluate every activity or all network traffic in the monitored environment, a signature-based IDPS has low overhead; it simply checks the database or record for recognized signatures. Unlike anomaly-based approaches, signature-based systems are simple to deploy since they do not require learning the environment. This technique searches, inspects, and analyzes the contents of captured network packets for known threat signatures. It likewise compares behavior signatures to those that are allowed. The payloads of known system threats are also analyzed using a signature-based approach. Signature-based systems are very effective against known attacks and violations, but they cannot identify fresh attacks unless new signatures are introduced.

Tool | Top features | Common features
SolarWinds Security Event Manager | Risk assessment report trends | Automated asset discovery
Kismet | Various plugins available | Export to PDF
Zeek | Customizable policy scripts | Monitors SNMP traffic
OpenDLP | Identifies at-rest data across thousands of systems | Agents
Sagan | Compatible with rule management software | Snort-like design
Suricata | Supports standard output and input formats | Detects complex threats
Security Onion | NIDS/HIDS hybrid | Traffic pattern insight

Table 1.
Best intrusion detection software tools and features.


Signature-based IDPSs are not difficult to defeat because they depend on existing attacks and require new signatures before they can identify new ones. Attackers can easily evade signature-based detection if they modify known attacks and target systems that have not been updated with new signatures capturing the alteration. Signature-based techniques demand significant resources to keep up with the potentially endless number of variations of known threats. Systems based on signatures are, however, easy to modify and improve, since the signatures or rules that drive them can be changed.

8.3 Hybrid-based methodology

1. With the advancing assortment of attacks, the two classical IDSs referenced above can no longer fully safeguard our information systems. New strategies for combining different intrusion detection systems to improve their adequacy have been designed. Inquiry has shown that combined algorithms perform well compared with single algorithms [].

2. The objective of hybrid intrusion detection systems is to combine several detection models to achieve improved results. A hybrid intrusion detection system comprises two parts: the main part processes the unstructured data, and the subsequent part takes the processed data and scans it to detect intrusion activities [].

3. Hybrid intrusion detection systems depend on combining two learning algorithms. Each learning algorithm has unique strengths that benefit the hybrid IDS, and hybrids can be broadly classified by how the constituent algorithms are combined, for example, as fused, integrated, or cluster-based hybrids.

4. One hybrid intrusion detection system was built from signature-based and anomaly detection parts. In the first phase of the model, a misuse detection part was applied to recognize known attacks based on captured patterns. The next phase included an anomaly detection component to compensate for the weaknesses of the misuse detection component. Various one-class SVM algorithms were used to support the model's second component. The KDD Cup 99 dataset was used to test the model's performance. When compared with a single traditional IDS, the model outperformed it [].

5. Experts combine feature extraction strategies and classification methods to increase detection rates as well as reduce false alarms. The first phase of one such hybrid employed chi-square to select the features. The goal of this stage was to reduce the number of entries in the dataset while maintaining the important features that detect attacks. A multiclass support vector machine (SVM) algorithm was used for classification in the following stage, improving the classification rate of the model. The NSL-KDD dataset [] was used to evaluate the model, and the results showed that the model had a high detection rate and a low false alarm rate.

. In light of a C choice tree classifier and a one-class support vector machine, sci-
entists developed a mixed location model OC-SVM. Two key components made
up the model []. The primary component of the abuse identification model

Updates on Software Usability
DOI: http://dx.doi.org/10.5772/intechopen.107423

was developed using a C. decision tree classifier. The next section was devel-
oped with OC-SVM for irregularity discovery. The NSL-KDD and Australian De-
fense Force Academy (ADFA) datasets were used by the experts to demonstrate
the model, and the results revealed that the half-and-half model performed
better than single-based models.

7. Based on a recurrent neural network (RNN) and a convolutional neural network (CNN), the author [] developed a hybrid intrusion detection model. The investigation aimed to advance feature extraction, which is crucial to the performance of intrusion detection systems. The CNN was used in the first stage to extract neighboring (spatial) features from the dataset, while the RNN was used in the second stage to extract temporal features. The data imbalance in the available dataset was also addressed by this tactic. The CSE-CIC-IDS dataset, an updated benchmark, was used to test the model's performance; the model outperformed other intrusion detection models in detection accuracy.

. For smarter home security, the experts [] suggested a half-and-half model in-
terruption identification model. The model was divided into two pieces. The ma-
jority of the section used AI calculations to recognize continuous interruptions.
In this section, calculations using irregular forests, XG Boost, choice trees, and
K-closest neighbors were used. The abuse interruption identification approach
was used in the next section to find known assaults. Both the CSE-CIC-IDS
and NSL-KDD datasets were used to test the model’s presentation. For the loca-
tion of both organizational disruption and client-based anomalies in cunning
homes, the model captured an amazing display.

9. A hybrid detection model based on Spark ML and the convolutional LSTM (Conv-LSTM) network was designed. The model comprises two parts: the principal part utilizes Spark ML to identify anomaly intrusions, while the subsequent part deploys Conv-LSTM for misuse detection. To explore the performance of the model, the specialists utilized the ISCX-UNB datasets []. The model delivered remarkable detection accuracy. The specialists suggested that the model be assessed further using a different dataset as a way of attempting to replicate the outcomes.

. The creators [] fostered an interruption location framework by joining firefly and
Hopfield brain organization HNN calculations. The analysts utilized Firefly calcu-
lation to identify refusal-of-rest assaults through hub grouping and verification.

. The scientists [] proposed a crossbreed recognition framework for VANET ve-
hicular impromptu organization. The model comprises two parts. The scientists
conveyed an order calculation on the main part and a grouping calculation on the
subsequent part. In the main stage, they utilized irregular woodland to identify
known assaults through the order. They sent a weighted K-implies computation
for the next step, which was the finding of an odd interruption. The most recent
dataset, the CICIDS  dataset, was used to evaluate the model. The experts
suggested conducting additional testing on the model under verifiable circum-
stances. They also combined arbitrary woods computation with unsupported
bunching calculation in light of corsets in another work. This model was used to

Quality Control - An Anthology of Cases

identify persistent VANET disruptions. In comparison with other models, this


maintained a better presentation in terms of accuracy, computational efficiency,
and identification rate.

. The author [] projected a mixture location perfect given hereditary calcula-
tion and fake-resistant framework AIS-GAAIS for interruption identification
on impromptu on-request distance vector-based versatile impromptu organiza-
tion AODV-based MAN, ET. The model was assessed utilizing different steering
assaults. In contrasted and different models, the model had superior recognition
rates and diminished the deception rates.

. The scientists [] involved incorporated firefly calculation with a hereditary
calculation to include determination MANET. To group the chosen highlights
in the main phase of the model as one or the other interruption or typical, the
specialists utilized a replicator brain system for arrangement. The models’ exhi-
bition was contrasted with that of fluffy-based IDS. The model beat fluffy-based
IDS in exactness as well as accuracy.

9. Literary analysis

The objective of the literature investigation is to look into IDS and usability to find answers to the research issues. Users will do a qualitative analysis of IDS usability to identify any usability challenges and determine the best course of action. To advance the usability of IDS, as Figure 4 shows, users will also need to ascertain the present state of the art and its methods [].

To improve usability, users want to identify and study the IDSs that are used most frequently: Snort, KFSensor, and EasyIDS. KFSensor is a viable host-based intrusion detection system (IDS) that acts as a honeypot to attract and detect hackers by simulating vulnerable systems. A few fundamental aspects of IDS are examined during this study, including user types, usability issues, and user interaction with IDS.

9.1 Selection of IDS practitioners

It is critical to understand who the actual IDS users are in order to gain meaningful user input for defining the heuristics for IDS. In addition, this will aid in identifying IDS usability issues and determining ways to improve IDS usability based on user perceptions.

Figure 4.
Selection and study of IDS.

9.2 Survey questionnaire

Before delving into the specifics of IDS usability difficulties, an overview survey is intended to provide more insight into how usability and IDS are handled in practice. This approach was picked because the survey covers various individuals with various backgrounds and levels of expertise, knowledge, and aptitude. It will help in understanding what applies to these users while they utilize IDS.

9.3 Designing of heuristics for IDS

Based on the responses to the survey questionnaire, users determine the problems users have when using IDS. This will support the creation of fresh IDS heuristics. The heuristics are broken down into various groups (a small catalog sketch follows this list), including:

a. Installation heuristics.

b. Interface heuristics.

c. Output heuristics.

d. Customization heuristics.

e. Help heuristics.
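One convenient way to keep such grouped heuristics actionable is as a simple catalog from which an evaluation checklist is generated. Only the five group names above come from the chapter; the individual heuristic statements below are invented examples of what each group might contain.

# Category names are from the chapter; the items under them are illustrative.
HEURISTIC_CATALOG = {
    "Installation":  ["Sensible defaults are preselected",
                      "Dependencies are reported before install starts"],
    "Interface":     ["Current sensor status is always visible"],
    "Output":        ["Alerts state severity and suggested action"],
    "Customization": ["Rules can be enabled per subnet"],
    "Help":          ["Every alert code links to documentation"],
}

def checklist(catalog):
    """Flatten the catalog into numbered checklist items for evaluators."""
    items = [(cat, h) for cat, hs in catalog.items() for h in hs]
    return [f"{i+1:02d}. [{cat}] {h}" for i, (cat, h) in enumerate(items)]

print("\n".join(checklist(HEURISTIC_CATALOG)))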

9.4 Lab-based testing

After the heuristics have been designed, it is time to scrutinize them in the lab. The advantage of CASI is that the user may apply the provided checks at any point in the IDS process, including the output and customization phases. This study aims to evaluate CASI's performance in identifying and fixing usability flaws compared with conventional heuristics.

9.5 Experts-based testing

Following lab testing, the proposed heuristics are ready for empirical testing, in which network experts participate in IDS interface assessment challenges and receive the outcomes. At the same time, another IDS interface mock-up is assembled and tried for assessment, depending on the experimental outcomes. If network experts find the interface engaging and easy to use, it will ultimately supplant the previous IDS interface.

9.6 Evaluation of intrusion detection system (IDS)

To observer-assess IDS, this can be achieved via the CASI and (Nielson) []
Usability on IDS to decide the number of ease-of-use flaws found and eliminated
from the IDS interface. The researcher’s ease-of-use was picked because they are

Quality Control - An Anthology of Cases

the most routinely used. The objective of contrasting the convenience of how CASI
functions contrast with scientists’ ease of use. A few elements should be considered
while contrasting including the quantity of ease-of-use defects distinguished, time,
dependability, proficiency, and accuracy.

10. Challenges in intrusion detection for web-based applications

In the web application security field, intrusion detection systems are still in their infancy; detection systems are mostly used as network security devices. In contrast to standard network IDS design, tackling the intricacies associated with online applications necessitates a novel methodology, outlined in this section. One should outline some of the characteristics of online apps and web traffic that make designing the IDS challenging. The elements depicted in the following subsections form the theoretical starting point for developing the web's IDS. This will aid in understanding the essential knowledge needed to create a solid engineering framework.

10.1 Communication protocol (HTTP/HTTPS)

To take advantage of online application weaknesses, attackers rely on the HTTP/HTTPS protocols. HTTPS guarantees a protected and encrypted connection; hypertext transfer protocol (HTTP) is a request-response protocol intended to ease communication between the client and server. One major disadvantage of observing HTTPS traffic from an IDS stance is that encryption blinds network-based detection systems. Based on whether they work at the application layer or the Internet layer of the TCP/IP model, IDS can be classified as host-based intrusion detection systems (HIDS) or network-based intrusion detection systems (NIDS).

A NIDS observes the network packets, and in an HTTPS connection the packet payload is encrypted, which the system fails to inspect. If these systems have access to the SSL certificate's private key, they can examine HTTPS traffic. HIDS, on the other hand, have no difficulty managing HTTPS traffic, since they safeguard the endpoints where the encrypted information is decrypted back into its original form.

10.2 Internet request

Information is sent from the client to the server through a web request. The data is carried in HTTP request header fields or request parameters. The request header fields contain client request control data, while the request parameters contain additional client data required by server-side programs to perform an activity. GET and POST are the two standard methods for passing parameters to the server. Parameter values are provided in the query string of the URL in a GET request, and they are conveyed in the request body in a POST request. The client program typically defines the header fields. However, the parameter values are either given by the client or previously set by server-side programs, for example, cookies and hidden fields. The underlying challenge with web application security is that client input can be highly variable and similarly complex, making it hard to match it against a legitimate set of values.

The primary function of detection systems is to scrutinize the values carried in header fields and request parameters. Positive or negative methodologies may be utilized to validate these values. The positive validation procedure specifies what input the program anticipates, including the data type (string, number); the negative strategy, on the other hand, involves filtering out values that contain attack patterns. Positive (whitelisting) and negative (blacklisting) validation are used in ontology- and signature-based systems, while anomaly-based systems are concerned mostly with positive validation. The data sent in a web request could contain a wide scope of values, and the methodology to utilize (whitelist or blacklist) depends heavily on the kind of value set involved. The following classes have been derived from the value sets.

10.3 Finite values

These values exist in a restricted range and can be either generic, that is, common to all applications, or tailored to the application's business logic. The first group contains an assortment of common values, for example, the header fields Accept, Accept-Charset, and Accept-Language. Since these values are usually similar across applications, they can be checked against a SIDS allow list. The second group of parameters contains values for HTML controls, such as dropdown lists and checkboxes. These controls help clients choose values from a restricted selection of options. However, the application's business case leaves the value set of these parameters open-ended. Because of a variety of factors, maintaining the whitelist used to evaluate such parameter values can become a tedious activity for SIDS. First, the whitelist becomes overly specific to the assortment of values that match the business logic. Second, this list may be huge, depending on how many controls an application has. Third, keeping it up to date is troublesome, since the permissible set of values can shift rapidly as business logic changes. However, machine assistance can be beneficial in this situation, as it allows the system to learn the values of parameters.
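The positive (whitelist) and negative (blacklist) validation styles described above can be contrasted in a few lines of Python. The allowed values and attack patterns below are illustrative assumptions, not a complete rule set.

import re

# Positive model: the only values this parameter may take (assumed).
ALLOWED_SHIPPING = {"standard", "express", "overnight"}

# Negative model: reject anything matching known attack patterns (illustrative).
BLACKLIST = [re.compile(r"('|--|;)"),        # SQL metacharacters
             re.compile(r"<script", re.I),   # script injection
             re.compile(r"\.\./")]           # path traversal

def validate_positive(value: str) -> bool:
    """Whitelisting: accept only values the application explicitly expects."""
    return value in ALLOWED_SHIPPING

def validate_negative(value: str) -> bool:
    """Blacklisting: accept anything matching no known attack pattern."""
    return not any(p.search(value) for p in BLACKLIST)

for v in ("express", "standard'; DROP TABLE orders;--"):
    print(f"{v!r}: whitelist={validate_positive(v)} blacklist={validate_negative(v)}")

The whitelist is stricter but, as noted above, expensive to maintain when the permissible value set tracks changing business logic.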

10.4 Application values

This class comprises values set by server-side programs that should not be changed on the client side. Cookies, hidden fields, and query strings are used by developers to store a range of significant information, for example, item price and quantity, or the session ID. IDS should check that these values match those set by the application. Signature-based IDS cannot detect changed values because they need an attack pattern, and changed values frequently resemble real data. Anomaly-based systems, on the other hand, can be used to learn which parameters should not be changed on the client side. Parameter-tampering attacks were found in the research described.
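A common defense matching this description is to make server-set values tamper-evident, for example by attaching a keyed MAC when the value is sent to the client and verifying it on return. This is a generic sketch of that idea using Python's standard hmac module, not a technique prescribed by the chapter (requires Python 3.10+ for the type hint).

import hmac, hashlib

SECRET_KEY = b"server-side-secret"   # never sent to the client

def sign(value: str) -> str:
    """Attach a MAC so client-side tampering is detectable on return."""
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(signed: str) -> str | None:
    """Return the value if the MAC checks out, else None (tampered)."""
    value, _, mac = signed.rpartition("|")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

price = sign("price=49.99")          # sent to the client in a hidden field
tampered = price.replace("49.99", "0.99")
print(verify(price))                 # 'price=49.99'
print(verify(tampered))              # None -> raise an alert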

10.5 Multiple users with multiple roles

Web applications typically have a large number of clients with varying levels of privileges. These privileges are supervised by the authorization process, which ensures that the client is only conducting legal activities. Applications follow each client-server connection and attribute each request to a specific client before deciding whether to handle it. Every time a user logs in to the program, a session ID is assigned, with the responsibility of identifying that user's requests in the request pool and appending them to the user.

Utilizing detection systems allows various clients to be given unique privilege arrangements. An IDS should first be able to track client sessions to relate client requests to the appropriate session. The IDS should also observe resource utilization and client actions during a session, since unauthorized access can be acquired with a well-crafted privilege escalation attack; this capability helps the IDS monitor the state of a single session. Finally, a stateful strategy can associate a sequence of requests with a given client, while a stateless IDS treats each request independently and does not track them. Systems that lack the means to connect the current request to previously received requests will probably not recognize state-maintenance and authorization violations.
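A toy version of the stateful tracking described here: remember each session's role at login, append every request to that session's history, and flag requests for resources above the session's privilege. The roles, paths, and escalation check are simplified assumptions.

from collections import defaultdict

sessions = {}                    # session_id -> role assigned at login
history = defaultdict(list)     # session_id -> ordered request history

ADMIN_PATHS = ("/admin", "/config")   # resources requiring admin (assumed)

def login(session_id: str, role: str):
    sessions[session_id] = role

def track(session_id: str, path: str):
    """Record the request and flag privilege-escalation attempts."""
    history[session_id].append(path)
    role = sessions.get(session_id)
    if role is None:
        return f"ALERT: request from unknown session {session_id}"
    if role != "admin" and path.startswith(ADMIN_PATHS):
        return f"ALERT: {session_id} ({role}) requested {path}"
    return "ok"

login("s1", "user")
print(track("s1", "/orders"))        # ok
print(track("s1", "/admin/rules"))   # ALERT: privilege escalation attempt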

11. Conclusion

Intrusion detection systems are complex and present various obstacles to security experts. Earlier IDS research has generally centered on expanding the precision of these systems and giving experts help in dissecting potential security issues. Improved IDS usability is one area that has received insignificant consideration, yet present heuristics are not laid out for IDS systems and can act as a hindrance to utilization. An overview of usability assessment was provided. This project covered usability evaluations and the difficulties they encounter. In terms of computer programming, network and software interfaces, and the comparison of usability assessments, the analysis further added to the categorization of usability issues, taking the issues and inadequacies in this field into account. Moreover, the suggested heuristics for clients and IDS give the principal standards for creating and developing IDS interfaces capable of opposing security breaches.

Abbreviations

ADFA Australian Defense Force Academy


APIDS Application protocol-based intrusion detection system
CASI Cognitive analysis of software interface
HIDS Host intrusion detection system
HIDS Hybrid intrusion detection system
HTTP Hypertext transfer protocol
IDS Intrusion detection system
NIDS Network intrusion detection system
PIDS Protocol-based intrusion detection system



Author details

Ahmed MateenButtar* and MuhammadMajid


Department of Computer Science, University of Agriculture, Faisalabad, Pakistan

*Address all correspondence to: [email protected]

©  The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/.),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.


References

[] Usability Evaluation. The Encyclopedia of Human-Computer Interaction. 2nd ed. Available from: https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/usability-evaluation [Accessed: August , ]

[] Becker FG, et al. Available from: https://www.researchgate.net/publication/_What_is_governance/link/cfdcb/download; http://www.econ.upf.edu/~reynal/Civilwars_December.pdf; https://think-asia.org/handle/; https://www.jstor.org/stable/

[] Naqvi I, Chaudhary A, Kumar A. A systematic review of the intrusion detection techniques in VANETs. TEM Journal. ;():-. DOI: ./tem-

[] Phases of the System Development Life Cycle Guide. Available from: https://www.clouddefense.ai/blog/system-development-life-cycle [Accessed: August , ]

[] Lazarevic A, Kumar V, Srivastava J. Intrusion detection: A survey. Managing Cyber Threats. Massive Computing. ;:-. DOI: ./---_

[] Masood Butt S, Majid MA, Marjudi S, Butt SM, Onn A, Masood Butt M. CASI method for improving the usability of IDS. Science International (Lahore). ;():-

[] Best Intrusion Detection Software - IDS Systems - DNSstuff. Available from: https://www.dnsstuff.com/network-intrusion-detection-software [Accessed: August , ]

[] Preparing Simple Consolidated Financial Statements | F Financial Accounting | ACCA Qualification | Students | ACCA Global. Available from: https://www.accaglobal.com/my/en/student/exam-support-resources/fundamentals-exams-study-resources/f/technical-articles/preparing-simple-consolidated-financial-statements.html [Accessed: August , ]

[] Son LH, Pritam N, Khari M, Kumar R, Phuong PTM, Thong PH. Empirical study of software defect prediction: A systematic mapping. Symmetry. ;:. DOI: ./SYM

[] Available from: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber= [Accessed: August , ]

[] Sumaiya Thaseen I, Aswani Kumar C. Intrusion detection model using fusion of chi-square feature selection and multi class SVM. Journal of King Saud University - Computer and Information Science. ;():-. DOI: ./J.JKSUCI...

[] Khraisat A, Gondal I, Vamplew P, Kamruzzaman J. Survey of intrusion detection systems: Techniques, datasets and challenges. Cybersecurity. ;():-. DOI: ./s---

[] Khan A, Sohail A, Zahoora U, Qureshi AS. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review. ;:-. DOI: ./s---

[] Alghayadh F, Debnath D. A hybrid intrusion detection system for smart home security based on machine learning and user behavior. Advanced Internet of Things. ;:-. DOI: ./ait..

[] Kim S, et al. A critical function for the actin cytoskeleton in targeted exocytosis of prefusion vesicles during myoblast fusion. Developmental Cell. ;():-. DOI: ./J.DEVCEL...

[] Maseno EM, Wang Z, Xing H. A systematic review on hybrid intrusion detection system. Security and Communication Networks. ;:-. DOI: ./

[] Bangui H, Ge M, Buhnova B. Exploring big data clustering algorithms for Internet of Things applications. IoTBDS - Proceedings of the 3rd International Conference on Internet of Things, Big Data and Security. ;:-. DOI: ./

[] Amiri E, Keshavarz H, Heidari H, Mohamadi E, Moradzadeh H. Intrusion detection systems in MANET: A review. Procedia - Social and Behavioral Sciences. ;:-. DOI: ./J.SBSPRO...

[] Shona D, Kumar MS. Efficient IDS for MANET using hybrid firefly with a genetic algorithm. Proceedings of the International Conference on Inventive Research in Computing Applications (ICIRCA). ;:-. DOI: ./ICIRCA..

[] What Is an Intrusion Prevention System (IPS)? Available from: https://heimdalsecurity.com/blog/intrusion-prevention-system-ips/ [Accessed: August , ]

Chapter

Design of Low-Cost Reliable and Fault-Tolerant 32-Bit One Instruction Core for Multi-Core Systems

Shashikiran Venkatesha and Ranjani Parthasarathi

Abstract

Billions of transistors on a chip have led to the integration of many cores, bringing many challenges such as increased power dissipation, thermal dissipation, occurrence of faults in the circuits, and reliability issues. Existing approaches explore the usage of redundancy-based solutions for fault tolerance at the core level, thread level, micro-architectural level, and software level. Core-level techniques improve the lifetime reliability of multi-core systems with asymmetric cores (large and small cores) and have gained momentum and focus among a large number of researchers. Based on the above implications, a multi-core system using one instruction cores (MCS-OIC), factoring in these features, is proposed in this chapter. The MCS-OIC is an asymmetric multi-core architecture with a MIPS core as the conventional core and OICs as warm standby-redundant cores. The OIC executes only one instruction, named 'subleq - subtract if less than or equal to zero'. When one of the functional units (i.e., the ALU) of any conventional core fails, the opcode of the instruction is sent to the OIC. The OIC decodes the instruction opcode and emulates the faulty instruction by repeated execution of the 'subleq' instruction, thus providing fault tolerance. To evaluate the idea, the OIC is synthesized using ASIC and FPGA flows. Performance implications due to OICs at the instruction and application levels are evaluated. Yield is estimated for various configurations of a multi-core system using OICs.

Keywords: fault tolerance, reliability, one instruction core, multi-core, yield

1. Introduction

Researchers have predicted about an eight percent increase in soft-error rate per logic state bit in each technology generation [1]. According to the International Technology Roadmap for Semiconductors (ITRS) 2005 and 2011, reduction in dynamic power, increase in resilience to faults, and heterogeneity in computing architecture pose a challenge for researchers. According to the International Roadmap for Devices and Systems (IRDS) roadmap 2017, device scaling will touch its physical limits, with failure rates reaching one failure per hour, as shown in Figure 1. The soft error rate (SER) is the rate at which a device or system encounters or is predicted to encounter soft errors per unit of time, and it is typically expressed as failures-in-time (FIT). It can be seen from Figure 1 [2–4] that, at the 16 nm process node size, a chip with 100 cores could come across one failure every hour due to soft errors.

Figure 1.
SERs at various technology nodes.
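The failure-rate claim can be restated with the standard FIT convention (1 FIT = 1 failure per 10^9 device-hours). The per-core figure below is simply derived from the chapter's one-failure-per-hour, 100-core example, assuming failures split evenly across cores; it is not an independently measured value.

# 1 FIT = 1 failure per 1e9 device-hours.
failures_per_hour_chip = 1.0          # chapter's 16 nm, 100-core example
chip_fit = failures_per_hour_chip * 1e9
per_core_fit = chip_fit / 100         # assuming failures split evenly per core

print(f"chip FIT     = {chip_fit:.0f}")       # 1,000,000,000 FIT
print(f"per-core FIT = {per_core_fit:.0f}")   # 10,000,000 FIT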
This decrease in process node size and increase in integration density, as seen in Figure 1, have the following effects.

1. The number of cores per chip has increased, and with it the size of the last level cache (LLC). For example, NVIDIA's GT200 architecture GPU did not have an L2 cache, while the Fermi, Kepler, and Maxwell GPUs have 768 KB, 1536 KB, and 2048 KB LLCs respectively [5]. Similarly, Intel's 22 nm Ivytown processor has a 37.5 MB static random-access memory (SRAM) LLC (Rusu 2014) [6], and the 32 nm Itanium processor had a 32 MB SRAM LLC (Zyuban 2013) [7]. The consequence of larger cache sizes has been an exponential increase in SER.

2. Low-swing interconnect circuits are being used in CMOS transmission systems and have proved to be an energy-efficient signalling approach compared with conventional full-swing interconnect circuits. However, incorrect sampling of the signals in low-swing interconnect circuits, together with interference and noise sources, can induce transient voltages in wires or internal receiver nodes, resulting in an incorrect value being stored at the receiver output latch [8].

This scenario can be envisaged as a "fault wall". In order to surmount the fault-wall scenario, reliability has been identified as a primary parameter for future multi-core processor design [9, 10]. Similarly, ITRS 2005 and 2011 also identified increased resilience to faults as a major challenge for researchers. Hence, a number of researchers have started focusing on fault resilience and reliability enhancement in multi-core processors. This chapter focuses on providing fault tolerance solutions for processor cores in multi-core systems.

2. Motivation

As seen in Figure 1, the total FIT per chip increases as the number of cores per chip increases. In order to accommodate a higher number of cores per chip, (1) the total FIT per chip has to be held constant, and (2) the SER per core needs to be reduced. In present-day processor cores, the frontend of the core comprises the decode queue, instruction translation lookaside buffer, and latches. The backend of the core comprises the arithmetic logic unit, register files, data translation lookaside buffer, reorder buffers, memory order buffer, and issue queue. The SER from the backend and the frontend of the core is 74.48% and 25.22% respectively. In present processor cores, latches are hardened [11, 12], and caches and large memory arrays are protected using error-correcting codes (ECC) [13, 14]. The SER from the backend of the processor is higher than that from the frontend and is mainly due to the arithmetic logic unit. The FIT from the arithmetic logic unit of the processor core has started reaching levels that demand robust fault mitigation approaches for present and future processors. Hence, addressing the reliability issues of the core (the arithmetic logic unit in the backend) is significant in improving the reliability of the multi-core system [15, 16]. Conventional approaches to handling soft errors consume more power and area. Hence, the chapter focuses on using a heterogeneous model with low-cost ("low cost" denotes the low power and smaller area of OICs) fault-tolerant cores to improve the reliability of multi-core systems.

2.1 Chapter contributions

Contributions of the chapter are briefly presented below.

1. The microarchitecture, consisting of the control path and data path for the OIC, is designed. Four modes of operation of the 32-bit OIC, namely (a) baseline mode, (b) DMR mode, (c) TMR mode, and (d) TMR with self-checking subtractor (TMR + SCS), are introduced.

2. The microarchitecture of 32-bit OIC and multi-core system integrated


with 32-bit OIC are implemented using Verilog HDL. The design is
synthesized in Cadence Encounter (R) RTL Compiler RC14.28 –V14.20
(Cadence design systems 2004) using TSMC 90nm technology library
(tcbn90lphptc 150).

3. Dynamic power, area, critical path and leakage power for four modes of OIC are
estimated and compared.

4. Dynamic power and area of OIC and URISC++ are compared.

5. Area and power are estimated for multi-core system consisting of 32-bit OIC.

6. The OIC is synthesized using Quartus prime Cyclone IVE (Intel, Santa Clara,
CA) with device EP4CE115FE29C7. Number of logical elements and registers
are estimated.

7. Number of logical elements and registers in OIC and URISC++ are compared.

8. Using Weibull distribution, the reliability for the four modes of OIC are
evaluated and compared.

9. Using Weibull distribution, the reliability for OIC and URISC++ are evaluated
and compared.

10. Performance overhead at instruction level and application level is estimated.

11. Yield analysis for proposed multi-core system with OICs is presented.

2.2 Chapter organization

The remaining portion of the chapter is organized as follows. The section titled "3. An overview on 32-bit OIC" presents (a) an outline of the 32-bit OIC, (b) the one instruction set of the OIC, (c) the modes of operation of the OIC, (d) the microarchitecture of the OIC, (e) the microarchitecture of a multi-core system consisting of OICs, and (f) the instruction execution flow in the multi-core system using one instruction cores (MCS-OIC). The section titled "4. Experimental results and discussion" presents power, area, register, and logical-element estimates for the OIC, and power and area estimates for the MCS-OIC. The section titled "5. Performance implications in multi-core systems" presents performance implications at the instruction and application levels. The section titled "6. Yield analysis for MCS-OIC" presents yield estimates for the proposed MCS-OIC. The section titled "7. Reliability analysis of 32-bit OIC" presents reliability modeling of the OIC and its estimates in different operational modes. The conclusion of the chapter is presented in the section titled "8. Conclusion", and the relevant references are cited in the section titled "References".

3. An overview on 32-bit one instruction core

A 32-bit OIC [17] is designed to provide fault tolerance to a multi-core system with 32-bit integer instructions of conventional MIPS cores. The OIC is an integer processor. The terms "32-bit OIC" and "OIC" are used interchangeably in this chapter. The OIC executes only one instruction, namely, "subleq - subtract if less than or equal". The OIC has three conventional subtractors and an additional self-checking subtractor. A conventional core that detects faults in one of its functional units (i.e., the ALU) sends the opcode with operands to the OIC. In this chapter, the OIC is designed to support the instruction set of the 32-bit MIPS core. However, it can be designed to support the 32-bit x86/ARM instruction sets by making the necessary changes in the instruction decoder. The OIC emulates the instruction by repetitively executing the subleq instruction in a predetermined manner. There are four modes of operation in the OIC: (a) baseline mode, (b) DMR mode, (c) TMR mode, and (d) TMR + self-checking subtractor (SCS), or TMR + SCS, mode. TMR + SCS is the "high resilience" mode of the OIC. Baseline mode is invoked only when soft error detection and correction alone are required.

3.1 One instruction set

"Subleq - subtract if less than or equal" is the only instruction executed by the OIC. The syntactic construct of the subleq instruction is given below.
Subleq A, B, C ; Mem[B] = Mem[B] - Mem[A]; if (Mem[B] ≤ 0) go to C

ADD a, a, b:  1. Subleq a, z, 2    2. Subleq z, b, 3    3. Subleq z, z, 4    4. ret
INC a:        1. Subleq One, z, 2  2. Subleq z, a, 3    3. Subleq z, z, 4    4. ret
MOV a, b:     1. Subleq a, a, 2    2. Subleq b, z, 3    3. Subleq z, a, 4    4. Subleq z, z, 5    5. ret
RSB b, a, b:  1. Subleq a, b, 2    2. ret
DEC a:        1. Subleq one, a     2. ret

Table 1.
Sequences of synthesized subleq instructions.

It is interpreted as: "subtract the value at memory location A from the value at memory location B; store the result at memory location B; if the value at memory location B is less than or equal to zero, then jump to C." The subleq instruction is Turing complete. The instruction set of a core or processor is said to be Turing complete if, in principle, it can perform any calculation that any other programmable computer can. As an illustration, the equivalent synthesized subleq sequences for the ADD, INC, MOV, DEC, and RSB (reverse subtract) instructions are given in Table 1.
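To make Table 1 concrete, here is a minimal subleq interpreter in Python, with the table's ADD sequence run through it. The memory layout (named cells, a pre-zeroed scratch cell z) is an assumption of this sketch; the OIC realizes the same semantics in hardware.

def subleq_run(mem, program):
    """Interpret subleq triples (A, B, C): mem[B] -= mem[A];
    jump to instruction index C if the result is <= 0, else fall through.
    Execution halts when the program counter leaves the program."""
    pc = 0
    while 0 <= pc < len(program):
        a, b, c = program[pc]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 1
    return mem

# ADD (b += a) synthesized exactly as in Table 1, with scratch cell z == 0.
mem = {"a": 7, "b": 5, "z": 0}
add_prog = [
    ("a", "z", 1),   # z = z - a  (z now holds -a)
    ("z", "b", 2),   # b = b - z  (b now holds b + a)
    ("z", "z", 3),   # z = z - z  (scratch restored to 0; pc exits program)
]
print(subleq_run(mem, add_prog))   # {'a': 7, 'b': 12, 'z': 0}

Every branch target in the synthesized sequences is simply the next instruction, so the conditional jump reduces to straight-line execution here; the emulation cost is the extra subleq steps per emulated instruction, which Section 5 quantifies.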

3.2 Modes of operation

The OIC operates in four modes, as mentioned above: (a) baseline mode, (b) DMR mode, (c) TMR mode, and (d) TMR + self-checking subtractor (SCS), or TMR + SCS, mode.

a. Baseline mode: In this mode, only the self-checking subtractor is operational. The
results from the subtractor are verified by the self-checker. If the results differ,
the subtraction operation is repeated to correct the transient faults. Transient
faults are detected and corrected in this mode. If the results do not match again,
a permanent fault is detected.

b. DMR mode: In this mode, only two subtractors are operational. The results of
the two subtractors are compared using a comparator. If the results differ, the
subtraction operation is repeated to correct the transient faults. The transient
faults are detected and corrected in this mode. If one of the two subtractors
fails, a permanent fault is detected, and the OIC switches to baseline mode.

c. TMR mode: In this mode, all three subtractors are operational. The results from
the three subtractors are compared using three comparators. The voters check
the results from the comparators and perform majority voting. To correct
transient faults, the operations are repeated. If any one subtractor fails, the
faulty subtractor is disabled. In this mode, results from the redundant
subtractors are fed back on special interconnects to the inputs of the
multiplexer. The OIC then switches to DMR mode. It is assumed that two
subtractors do not fail simultaneously. The occurrence of one permanent fault is
detected and tolerated in this mode.

d. TMR + SCS mode: TMR + SCS mode is the initial mode of operation in OIC. In
this mode, all three subtractors and the SCS are operational. Both permanent and
transient faults are detected and corrected. The results of the three subtractors and
the SCS are compared using a comparator. If the results differ, then the entire operation
is repeated to correct the transient faults. If the results continue to differ, then the OIC
switches to TMR mode.

3.3 Micro-architecture of OIC

The micro-architecture of the OIC is given in Figure 2. It can be divided into two parts:
the control unit and the data-path unit. The control unit consists of a 12-bit program
counter (PC), an instruction decoder, a 12-bit control word memory register and control
word memory. The control memory is safeguarded by (12, 4) Hamming codes [18]; all
single-bit errors are detected and corrected by the Hamming codes. The data-path unit
consists of four multiplexers, one demultiplexer, three subtractors, one self-checking
subtractor (SCS), three comparators and one voter unit. Normally, register files occupy a
large die area in a core and are exposed to high-energy particles. In replication-based
schemes, register files also have high access latency and power overhead due to their
protection by ECC. The OIC does not have large register files that are likely to propagate
transient faults or soft errors to other subsystems; it uses very few registers. Once the
operands from the faulty core are admitted, they are stored in the registers. The results
computed by the subtractors are compared and fed back on a separate interconnect line
to the respective multiplexers. The intermediate results are not stored in the registers.

Figure 2.
Control unit and data path unit of 32-bit OIC.


3.4 Microarchitecture and instruction execution flow in MCS-OIC

A multi-core system comprising one 32-bit MIPS core and one 32-bit OIC, occupying
the upper half and lower half portions of the micro-architecture respectively, is shown
in Figure 3. The MIPS core is a five-stage pipelined scalar processor. Instruction Fetch
(IF), Instruction Decode (ID), Execution (EXE), Memory access (MEM) and Write
Back (WB) are the five stages in the MIPS pipeline. IF/ID, ID/EXE, EXE/MEM, and
MEM/WB are the pipeline registers. PC is the program counter, and LMD, Imm, A, B, IR,
NPC, Aluoutput, and Cond are temporary registers that hold state values between
clock cycles of one instruction. The fault detection logic (FDL) detects faults in all the
arithmetic instructions (except logical instructions) by concurrently executing the
instructions. The results of ID/EXE.Aluoutput and the FDL are compared to detect a
fault. If a fault is found, the pipeline is stalled. The IF/ID.opcode (in IR) and the
operands ID/EXE.A and ID/EXE.B are transferred to the OIC, as shown in Figure 4. The
IF/ID.opcode is decoded and, concurrently, the ID/EXE.A and ID/EXE.B values are loaded
into the OIC registers (X and Y). The OIC.PC is initialized and, simultaneously, the first
control word from memory is loaded into the control word register. During every clock
cycle, the control bits from the control word register are sent to the selection lines of the
multiplexers that control the input lines to the subtractors. At every clock cycle,
subtraction is performed to emulate the instruction sent from the MIPS core. Finally,

the computed result is loaded into MEM/WB.Aluoutput and the MIPS pipeline operation
is resumed. The sequence of events from fault detection to the loading of results into the
MEM/WB.Aluoutput register of the MIPS core is shown in Figure 4.

Figure 3.
Multi-core system consisting of one 32-bit MIPS core and one 32-bit OIC.

Figure 4.
Sequence of events from fault detection to loading of results into Mem/WB.Aluoutput register of MIPS core.
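To make this sequence concrete, the following Python sketch models the detect-transfer-emulate-resume handshake at a purely conceptual level. The toy fault model, the function names and the operand values are illustrative assumptions for this sketch, not the chapter's RTL.

# Conceptual model of the fault-handling sequence (not RTL): the FDL
# recomputes the ADD concurrently; on mismatch the pipeline stalls and the
# opcode/operands are handed to the OIC, which emulates ADD via subleq.
def faulty_alu_add(a, b):
    return (a + b) ^ 0x4            # toy permanent fault: one output bit flipped

def fdl_add(a, b):
    return a + b                    # fault detection logic's concurrent result

def oic_emulate_add(a, b):
    z = 0
    z -= a                          # 1. Subleq a, z  -> z = -a
    result = b - z                  # 2. Subleq z, b  -> b + a
    return result                   # 3./4. clear scratch and ret

a, b = 21, 14
if faulty_alu_add(a, b) != fdl_add(a, b):   # mismatch detected: stall pipeline
    result = oic_emulate_add(a, b)          # transfer opcode/operands to OIC
else:
    result = faulty_alu_add(a, b)
print(result)                               # 35, loaded into MEM/WB.Aluoutput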

4. Experimental results and discussion

The micro-architecture of the OIC is implemented using Verilog HDL and
synthesized on ASIC and FPGA platforms to estimate the hardware parameters (area,
critical path delay, leakage power, dynamic power) and the number of logical elements and register

usage, respectively. In Section 4.1, a comparison of the area, power, registers and number
of logical elements of OIC with the URISC approach proposed by [19] and URISC++
proposed by [20] is presented. Notably, URISC/URISC++ also implement a one instruction
set. URISC/URISC++, a co-processor for TigerMIPS, emulates instructions through
the execution of the subleq instruction. TigerMIPS performs static code insertion on both
control flow and data flow invariants so as to detect faults by performing repeated
executions of subleq within the co-processor. A comparative analysis of the hardware
parameters for the different modes of OIC is presented in Section 4.2.
ASIC simulation: The OIC given in Figure 2 and the multi-core system in Figure 3
have been implemented using Verilog HDL and then synthesized in Cadence Encounter
(R) RTL Compiler RC14.28-V14.20 (Cadence Design Systems 2004) using the TSMC
90 nm technology library (tcbn90lphptc 150). The area, power (dynamic, leakage,
net, internal) and critical path delay estimated for the OIC are tabulated in Table 2.
FPGA synthesis: The OIC is synthesized using Quartus Prime for a Cyclone IV E
device (EP4CE115FE29C7), and the results are presented in Tables 3 and 4.
Leakage power and dynamic power: The power dissipation shown in Table 2 is
understood as the sum of dynamic power and static power (or cell leakage). Static power
is consumed when gates are not switching; it is caused by current flowing through
transistors when they are turned off and is proportional to the size of the circuit.
Dynamic power is the sum of net switching power and cell internal power. The net
switching power is the power dissipated in the interconnects and the gate capacitance.

Block name | Area (μm²) | Leakage power (nW) | Internal power (nW) | Net power (nW) | Dynamic power (nW) | Critical path delay (ps)
Control path | 590 | 39.87 | 79,498.48 | 21,881.40 | 101,379.88 | —
Control path + data path | 8122 | 704.08 | 1,051,631.88 | 346,487.45 | 1,398,115.34 | 8608
Sub-blocks:
Subtractor | 581 | 67.98 | 41,676.83 | 6711.00 | 48,387.83 | —
Comparator | 615 | 67.04 | 42,457.83 | 9954.38 | 52,411.44 | —

Table 2.
Implementation results of the 32-bit OIC using 90 nm TSMC technology.

(A) Blocks | Logical elements | Dedicated registers
OIC (TMR + SCS) | 530 | 160
Subtractor (1) | 33 | —
Comparator (1) | 43 | —

(B) Modes | Logical elements
Baseline | 100
DMR | 303
TMR | 486

Table 3.
FPGA synthesis results for OIC.


Cores | Logical elements | Dedicated registers
OIC | 530 | 160
URISC | 15,019 | 5232
URISC++ | 15,081 | 5233

Table 4.
FPGA synthesis results comparison.

The cell internal power is the power consumed within a cell by charging and
discharging the cell's internal capacitances. The total power is the sum of the dynamic power
and the leakage power.
Multi2sim (version 5.0): Multi2sim supports emulation of 32-bit MIPS/ARM
binaries and simulation of 32-bit x86 architectures. It performs both functional and
timing simulations. The performance loss is estimated for compute-intensive and
memory-intensive micro-benchmarks using the Multi2sim simulator. The performance losses
for the micro-benchmarks listed in Table 6 are illustrated in Figures 6–11.

4.1 Comparative analysis: power, area, registers and logical elements

With the critical path delay at 8608 ps, the operating frequency of the circuit is 115
MHz with power supply at 1.2v. OIC is a low power core consuming 1.3 mW, with die
area of 8122 μm2. The die area of conventional MIPS core is 98,558 μm2 which is 14.2
larger than OIC core. The MIPS core consumes a total power of 1.153 W and the 32-bit
OIC consumes 1.39 mW; order of difference in powers of 10 is three. The registers in
OIC are PC and temporary registers which hold the operands. But they are not
designed and managed as a register file. Tables 3 and 4 provide the register count and
logical elements count for OIC and URISC++. The number of logical elements in OIC
is 3.51% and 3.52% of the logical elements in URISC and URISC++ respectively. The
number of registers in OIC is 3.05% of URISC++. URISC++ adds 62 logical elements
and one additional register to the architecture of URISC. The logical elements in
URISC++ consume 6.6 mW. URISC++ has 650 registers or 14.3% of registers in
TigerMIPS. URISC++ has two large register files. URISC++ altogether consumes
1.96 W. Thus, OIC consumes less power than URISC++.

4.2 Comparative analysis: four modes of OIC

The critical path delay, area, dynamic power and leakage power for the four modes
of OIC, namely baseline mode, DMR mode, TMR mode and TMR + SCS mode, are
normalized to the baseline mode and shown in Figure 5. The area overhead of TMR + SCS
mode is 68.43% of the baseline, the area overhead of TMR mode is 65.37% of the baseline,
and that of DMR mode is 51.4%. The comparators and subtractors occupy 22.71% and
28.6% of the TMR + SCS mode area respectively. The size of the voter is negligible in
TMR + SCS mode and TMR mode. In the critical path delay, a 10% increase is noticed
from the baseline to TMR + SCS mode. The critical path traverses from the subtractor
input to the comparator, then to the voter, passes through the select logic, and ends at
an input line. The delay does not differ much between TMR mode and TMR + SCS mode.
Both the dynamic power and leakage power for TMR mode and DMR mode
increase significantly due to redundant subtractors and comparators which are not in
the baseline. The dynamic power overhead of TMR mode and DMR mode is 60% and
73% of the baseline, and it is 75% for TMR + SCS mode. The static power or leakage power
is proportional to the size of the circuit. The TMR + SCS mode has 76% more leakage power
than the baseline, while the TMR and DMR modes have 72% and 50% more leakage power
than the baseline, respectively. From the FPGA synthesis results in Table 3, it is observed
that the number of logical elements in TMR + SCS mode and DMR mode is 79% and 66%
more than the baseline. From Tables 2 and 3, it is observed that TMR mode with the
additional self-checking subtractor (TMR + SCS mode) costs more than the baseline, but
TMR + SCS/OIC is still a suitable fault-tolerant core for a low-power embedded system.

Figure 5.
(a) Area, (b) critical path delay, (c) leakage power and (d) dynamic power (y-axis: values normalized to the baseline).

4.3 Power and area estimation for MCS-OIC

The area and power for the micro-architecture of the multi-core system (one MIPS
core with one OIC) shown in Figure 3 are estimated using ASIC simulation. The
multi-core system occupies a total area of 306,283 μm² and consumes a total power of
1.1554 W. The FDL occupies an area of 6203 μm², which is 2% of the total area
occupied by the system. The OIC occupies an area of 8122 μm², which is 2.6% of the
total area occupied by the system. The FDL consumes a power of 1.2 mW and the OIC
consumes a power of 1.4 mW, which are negligible when compared to the total power.
Redundancy-based core-level fault mitigation techniques such as Slipstream [21],
the dynamic core coupling (DCC) approach proposed by [22], configurable
isolation [23], and Reunion, a fingerprinting technique proposed by Smolens et al. [24],
have nearly 100% area overhead and an obviously larger power overhead.

5. Performance implications in MCS-OIC

For every instruction emulated on OIC, an additional three clock cycles are needed
for the transfer of opcodes and operands, and two clock cycles are needed to resume the
pipeline in the MIPS processor. The two terms defined below capture the latency
incurred in instruction execution, presented in the following subsection.

5.1 Performance overhead at instruction level

Definitions: (a) The instruction execution time by emulation (IETE) is defined
as the number of cycles needed to execute the instruction on OIC. (b) The total execution
time (TET) is defined as the sum of the IETE and the time (in clock cycles) to transfer
opcodes and operands (from MIPS to OIC) and results (from OIC to MIPS). In other
words, TET is the time in clock cycles between the pipeline stall and the resumption of
the pipeline; for example, ADD has an IETE of 4 cycles and a TET of 4 + 3 + 2 = 9 cycles.
The TET and IETE for the instructions are tabulated in Table 5.

5.2 Performance overhead at application level

The previous section discussed the performance loss at instruction level caused by the
transfer of operands and results back to the host core. This causes an accumulated loss
in the performance of an application, which is discussed in this section. The OIC supports
the 32-bit ISA of the MIPS R3000/4000 processor operating at a frequency of 250 MHz.
OIC operates at a frequency of 115 MHz, thereby incurring a performance loss while
emulating the instructions from a faulty functional unit in the MIPS core. Multi2sim,
a cycle-accurate simulator, together with a cross compiler,
mips-linux-gnu-gcc/mips-unknown-gnu-gcc, is used to estimate the simulated execution
time for a set of micro-benchmarks. Only the emulation of arithmetic instructions on OIC
is considered for estimating the performance loss, as they constitute nearly 60% of the
total instructions in integer application programs. The compute-intensive and
memory-intensive micro-benchmark programs considered are listed in Table 6.

5.2.1. Memory intensive micro-benchmarks

The performance losses for the memory-intensive micro-benchmark programs, namely
binary search, quicksort (using recursion), and radix sort, are given in Figures 6–8
respectively. The performance losses for the CPU-intensive micro-benchmark programs,
namely matrix multiplication, CPU scheduling, and sieve of Eratosthenes, are given
in Figures 9–11 respectively.
Instruction | IETE | TET | Clock cycles in MIPS/LEON 2FT/3FT
ADD | 4 | 9 | 1
MOV | 5 | 10 | 1
INC | 4 | 9 | 1
DEC | 1 | 5 | 1
SUB | 1 | 5 | 1
MUL | 7 (per iteration) | Min 12 | 6
DIV | 5 (per iteration) | Min 10 | 34

Table 5.
IETE and TET for instructions.


S. no | Micro-benchmark | CPU/memory intensive | Input form | Input size
1 | Matrix multiplication (single/multithreaded) | CPU | Matrix | [10 × 10], [100 × 100], [1000 × 1000] elements
2 | Binary search (single/multithreaded) | Memory | Array | 3000, 30,000, 300,000 elements
3 | Sieve of Eratosthenes | CPU | Array | 1000, 10,000, 100,000 prime number limit
4 | CPU scheduling | CPU | Array | 1000, 10,000, 100,000 processes
5 | Quicksort (recursion) | Memory | Array | Sorted 100, 1000, 10,000 elements for worst-case analysis
6 | Radix sort | Memory | Array | 1000, 10,000, 100,000 elements

Table 6.
CPU-intensive and memory-intensive micro-benchmarks.

Figure 6.
Performance overhead in binary search by emulating ADD using subleq instruction.

The baseline indicates the simulated execution time of micro-benchmarks with no
arithmetic instructions emulated on OIC. The performance loss is quantified for the
micro-benchmarks with respect to the simulated execution time of the baseline (with
varying input data sets/sizes).
As shown in Figure 6, binary search with emulation of ADD instructions incurs
performance losses of 1.77×, 3.59× and 4.59× for input sizes of 3000, 30,000 and
300,000 respectively, when compared to the baselines. A significant proportion of ADD
instructions is associated with incrementing or decrementing counters and effective
addresses. OIC does not fetch operands or store results directly to main memory, and
main memory latency is not taken into account for the performance loss estimation. The
number of ADD instructions executing as part of the algorithmic phase of program
execution does not increase exponentially with an increase in input data sets. Hence,
the performance loss impact is minimal in the algorithmic phase and is higher during the
fetching and storing of the input data sets. In the case of multithreaded binary search, a
multi-core setup consisting of two cores, core-0 and core-1, each with a single thread, is
used to estimate the performance loss. The performance loss is similar to that of the single-
threaded binary search, due to the fact that the majority of the ADD instructions are
associated with LOAD and STORE instructions.

Figure 7.
Performance overhead in Quicksort by emulating ADD using subleq instruction.

Figure 8.
Performance overhead in Radix sort by emulating ADD and DIV using subleq instruction.
Quicksort (with emulation of the ADD instruction), implemented using recursion for
sorted data elements (worst-case analysis), incurs performance losses of 3.85×, 6.31×,
and 6.99× for data sizes of 100, 1000 and 10,000 respectively, as shown in Figure 7.
For the best-case analysis of quicksort with 10,000 elements, the performance loss reduces
to 1.008×. Due to recursion, the majority of ADD instructions are associated with LOAD/
STORE instructions. In radix sort, ADD instructions occur more often than DIV
instructions. Since it is a memory-intensive method of sorting, a large number of
ADD instructions is used to increment counters and constants associated with LOAD/
STORE instructions. The performance loss due to emulation of ADD instructions for
radix sort is 2.45×, 4.79×, and 5.96× for input sizes of 1000, 10,000 and 100,000, as
shown in Figure 8. For DIV instructions, the performance loss is 1.4×, 2.05×, and 2.37×
for input sizes of 1000, 10,000 and 100,000 elements.
As shown in Figure 9, matrix multiplication with emulation of ADD and MUL
instructions executing in the algorithmic phase of the program incurs performance
losses of <1.56×, 4.09×, 4.0×> (for ADD) and <1.632×, 7.62×, 7.99×> (for MUL) for
input matrix sizes of 10 × 10, 100 × 100, and 1000 × 1000 respectively. In CPU
scheduling, emulation of ADD and SUB instructions incurs performance losses of <2.45×,
4.79×, 5.96×> and <1.4×, 2.05×, 2.3×> for input data sets of 1000, 10,000 and
100,000 processes respectively, as shown in Figure 10. In the sieve of Eratosthenes,
emulation of MUL and ADD instructions incurs performance losses of <1.89×, 5.03×,
7.63×> and <1.48×, 2.9×, 3.8×> for input data set sizes of 1000, 10,000 and 100,000
respectively, as shown in Figure 11.

Figure 9.
Performance overhead in matrix multiplication by emulating ADD and MUL using subleq instruction.

Figure 10.
Performance overhead in CPU scheduling by emulating ADD and SUB using subleq instruction.


Figure 11.
Performance overhead in Sieve of Eratosthenes by emulating MUL and ADD using subleq instruction.

For multithreaded matrix multiplication, a multi-core configuration consisting of
two cores, core-0 and core-1, with a single thread each is considered. The ADD and MUL
instructions of core-0 and core-1 are emulated on a single OIC due to failures in the adder
and multiplier units respectively. The performance loss is estimated as 2.04×, 10.07×,
and 10.99× for matrix sizes of 10 × 10, 100 × 100, and 1000 × 1000 respectively, as
shown in Figure 9. Since simultaneous access to a single OIC from two cores is not
permitted, the performance loss includes the waiting time between subsequent ADD and
MUL instructions emanating from core-0 and core-1. The waiting time alone is greater
than 45% of the performance loss. In this multi-core configuration, consisting of two
MIPS cores with a single OIC, the OIC bears the brunt of multiple functional unit failures in
two cores. An additional OIC would bring down the performance loss by 1.5× (for a
matrix size of 10 × 10) and 7× (for matrix sizes of 100 × 100/1000 × 1000) and
eliminate the need for instructions to wait for execution on OIC. On a 1:1 and 1:N basis,
i.e., one MIPS core with one or more OICs, the system can scale to 100 MIPS cores with
100 or more OICs.
It may be noticed that the performance loss does not vary when there is a change of
mode in OIC from TMR + SCS to TMR, or TMR to DMR, as the number of instructions
executed remains the same.

6. Yield analysis for MCS-OIC

This section examines the effect of the fault tolerance provided in MCS-OIC on the
yield. As discussed in the section that presents the design of OIC, it is assumed that two
subtractors do not fail simultaneously. In the TMR + SCS, TMR, and DMR modes,
OIC repeats the instruction execution if the results differ, to overcome transient failures.
The spatial and temporal redundancy to overcome permanent and transient faults in OIC
makes it defect tolerant. The arithmetic logic unit in MIPS is protected by the functional
support provided by OIC. The remaining portion of MIPS is hardened and protected
by ECC. The die yield for the proposed configurations of MCS-OIC is estimated
using the equations presented below.

6.1 Terms and parameters

a. Original die: It is the die consisting of MIPS cores only.

b. Fault tolerant die: It is the die consisting of MIPS cores and OICs.

c. Regular dies per wafer: It is the number of original dies per wafer, estimated using
Eq. (1):

Regular dies per wafer = π(diameter/2)²/Area − π·diameter/√(2·Area)   (1)

where diameter refers to the diameter of the wafer and Area refers to the area of the die.

d. Die yield: Ignoring full wafer damage, the yield for a single die is approximated
using the negative binomial approximation given in Eq. (2):

Die yield = (1 + (defect density · Area)/cp)^(−cp)   (2)

where cp denotes the cluster parameter or manufacturing complexity, and defect
density denotes the number of defects per unit area.

e. Regular working dies per wafer: It is the die yield times the regular dies per wafer,
estimated using Eq. (3):

Regular working dies per wafer = (1 + (defect density · Area)/cp)^(−cp) · (π(diameter/2)²/Area − π·diameter/√(2·Area))   (3)

f. Regular fault tolerant dies per wafer: The area of the fault tolerant die is expressed
as the sum of the area of the original die and the area of the OIC. If the area of the
OIC is expressed as δ (0 < δ < 1) times the area of the original design, then
(1 + δ) · Area of the original design denotes the area of the fault tolerant die.
Substituting (1 + δ) · Area into Eq. (1), the number of regular fault tolerant dies
per wafer can be estimated as given in Eq. (4):

Regular fault tolerant dies per wafer = π(diameter/2)²/((1 + δ)·Area) − π·diameter/√(2(1 + δ)·Area)   (4)

g. Regular working fault tolerant dies per wafer: It is the die yield times the regular
fault tolerant dies per wafer, estimated using Eq. (5):

Regular working fault tolerant dies per wafer = (1 + (defect density · Area)/cp)^(−cp) · (π(diameter/2)²/((1 + δ)·Area) − π·diameter/√(2(1 + δ)·Area))   (5)
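A small Python sketch of Eqs. (1)–(5) may help in reproducing such estimates. The function names and example parameters below are assumptions for illustration; the defect density must be expressed per the same area unit as the die area, and the yield is assumed to apply to the fault tolerant die area (1 + δ)·Area.

# A sketch of the yield model in Eqs. (1)-(5).
import math

def regular_dies_per_wafer(diameter, area):
    """Eq. (1): gross dies per wafer for a die of the given area."""
    return (math.pi * (diameter / 2) ** 2) / area \
        - (math.pi * diameter) / math.sqrt(2 * area)

def die_yield(defect_density, area, cp):
    """Eq. (2): negative binomial yield approximation."""
    return (1 + defect_density * area / cp) ** (-cp)

def working_ft_dies_per_wafer(diameter, area, delta, defect_density, cp):
    """Eq. (5): working fault tolerant dies per wafer with OIC overhead delta."""
    ft_area = (1 + delta) * area           # fault tolerant die area
    return die_yield(defect_density, ft_area, cp) \
        * regular_dies_per_wafer(diameter, ft_area)

# Illustrative numbers only (not the chapter's exact parameterization):
# a 300 mm wafer, a 1 mm^2 die, 2.6% OIC area overhead, cp = 4.0.
print(regular_dies_per_wafer(300.0, 1.0))
print(working_ft_dies_per_wafer(300.0, 1.0, 0.026, 0.001, 4.0))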

6.2 Parametric evaluation and discussion

The die yields of the original die and the fault tolerant die estimated for one MIPS core
with one/two/four OICs, two MIPS cores with one/two/four OICs, four MIPS cores
with one/two/four OICs, and eight MIPS cores with one/two/four/six OICs are tabulated
in Tables 7–10 respectively. The defect density is varied over 9.5, 5.0, and 1.0, and
the wafer diameter over 300 mm, 200 mm, and 100 mm, to estimate the die yield of the
original die and the fault tolerant die. The cp is fixed at 4.0. The die yields of the original
die at defect densities 1.0, 5.0, and 9.5 are 0.9971, 0.9855, and 0.9727 respectively.
The die yields for three fault tolerant dies, each consisting of one MIPS core, with one,
two, and four OICs respectively, for a 300 mm wafer with defect density 1.0 are
<0.9970/0.9969/0.9967>, as shown in Table 7, which is slightly lower than the yield of the
original die. The average of the differences between the yield of the original die and the
fault tolerant dies at defect density 1.0 is 0.0002, which is negligible. Similarly, the
averages of the differences between the yield of the original die and the fault tolerant dies
at defect densities 5.0 and 9.5 are 0.0009 and 0.0017 respectively. It is observed that an
increase in the defect density decreases the yield.

Wafer diameter | 100 mm (defect density 9.5 / 5.0 / 1.0) | 200 mm (9.5 / 5.0 / 1.0) | 300 mm (9.5 / 5.0 / 1.0)
Number of regular dies per wafer | 26,489 | 26,489 | 26,489 | 106,781 | 106,781 | 106,781 | 240,876 | 240,876 | 240,876
Die yield of original die | 0.9727 | 0.9855 | 0.9971 | 0.9727 | 0.9855 | 0.9971 | 0.9727 | 0.9855 | 0.9971
Number of regular working dies per wafer | 25,767 | 26,106 | 26,412 | 103,870 | 105,237 | 106,470 | 234,309 | 237,391 | 240,174
Number of regular fault tolerant dies per wafer, one OIC | 25,768 | 25,768 | 25,768 | 103,884 | 103,884 | 103,884 | 234,347 | 234,347 | 234,347
Number of regular fault tolerant dies per wafer, two OICs | 25,084 | 25,084 | 25,084 | 101,139 | 101,139 | 101,139 | 228,163 | 228,163 | 228,163
Number of regular fault tolerant dies per wafer, four OICs | 23,820 | 23,820 | 23,820 | 96,061 | 96,061 | 96,061 | 216,723 | 216,723 | 216,723
Die yield of fault tolerant die, one OIC | 0.9719 | 0.9851 | 0.9970 | 0.9719 | 0.9851 | 0.9970 | 0.9719 | 0.9851 | 0.9970
Die yield of fault tolerant die, two OICs | 0.9712 | 0.9847 | 0.9969 | 0.9712 | 0.9847 | 0.9969 | 0.9712 | 0.9847 | 0.9969
Die yield of fault tolerant die, four OICs | 0.9697 | 0.9839 | 0.9967 | 0.9697 | 0.9839 | 0.9967 | 0.9697 | 0.9839 | 0.9967
Number of regular working fault tolerant dies per wafer, one OIC | 25,046 | 25,385 | 25,691 | 100,974 | 102,340 | 103,573 | 227,784 | 230,864 | 233,645
Number of regular working fault tolerant dies per wafer, two OICs | 24,363 | 24,701 | 25,007 | 98,231 | 99,595 | 100,828 | 221,603 | 224,681 | 227,461
Number of regular working fault tolerant dies per wafer, four OICs | 23,100 | 23,437 | 23,743 | 93,157 | 94,518 | 95,750 | 210,170 | 213,243 | 216,021

Table 7.
Die yield for fault tolerant die consisting of one MIPS core with one/two/four OICs.


Wafer diameter | 100 mm (defect density 9.5 / 5.0 / 1.0) | 200 mm (9.5 / 5.0 / 1.0) | 300 mm (9.5 / 5.0 / 1.0)
Number of regular dies per wafer | 13,159 | 13,159 | 13,159 | 53,220 | 53,220 | 53,220 | 120,182 | 120,182 | 120,182
Die yield of original die | 0.9463 | 0.9713 | 0.9942 | 0.9463 | 0.9713 | 0.9942 | 0.9463 | 0.9713 | 0.9942
Number of regular working dies per wafer | 12,454 | 12,782 | 13,082 | 50,368 | 51,694 | 52,911 | 113,740 | 116,736 | 119,484
Number of regular fault tolerant dies per wafer, one OIC | 12,977 | 12,977 | 12,977 | 52,487 | 52,487 | 52,487 | 118,529 | 118,529 | 118,529
Number of regular fault tolerant dies per wafer, two OICs | 12,494 | 12,494 | 12,494 | 50,544 | 50,544 | 50,544 | 114,150 | 114,150 | 114,150
Number of regular fault tolerant dies per wafer, four OICs | 12,459 | 12,459 | 12,459 | 50,403 | 50,403 | 50,403 | 113,833 | 113,833 | 113,833
Die yield of fault tolerant die, one OIC | 0.9456 | 0.9709 | 0.9941 | 0.9456 | 0.9709 | 0.9941 | 0.9456 | 0.9709 | 0.9941
Die yield of fault tolerant die, two OICs | 0.9449 | 0.9705 | 0.9940 | 0.9449 | 0.9705 | 0.9940 | 0.9449 | 0.9705 | 0.9940
Die yield of fault tolerant die, four OICs | 0.9428 | 0.9693 | 0.9937 | 0.9428 | 0.9693 | 0.9937 | 0.9428 | 0.9693 | 0.9937
Number of regular working fault tolerant dies per wafer, one OIC | 12,272 | 12,600 | 12,900 | 49,636 | 50,962 | 52,177 | 112,091 | 115,085 | 117,830
Number of regular working fault tolerant dies per wafer, two OICs | 12,095 | 12,423 | 12,723 | 48,924 | 50,249 | 51,464 | 110,486 | 113,478 | 116,222
Number of regular working fault tolerant dies per wafer, four OICs | 11,592 | 11,919 | 12,219 | 46,900 | 48,222 | 49,435 | 105,923 | 108,908 | 111,650

Table 8.
Die yield for fault tolerant die consisting of two MIPS cores with one/two/four OICs.

The die yields of the fault tolerant dies, each consisting of two MIPS cores with
<one/two/four> OICs, at defect density 1.0 are <0.9941, 0.9940, 0.9937> respectively,
as shown in Table 8. The die yields of the original die at defect densities 1.0, 5.0, and
9.5 are 0.9942, 0.9713, and 0.9463, slightly higher than the yields of the fault tolerant dies.
The average of the differences between the yield of the original die and the fault tolerant
dies is 0.00026; it increases to 0.0009 and 0.0018 for defect densities 5.0 and 9.5 respectively.
The die yields of the original die at defect densities 1.0, 5.0, and 9.5 are 0.9884,
0.9436, and 0.8963 respectively. From Table 9, the die yields of the fault tolerant dies,
each consisting of four MIPS cores with <one/two/four> OICs, at defect density 1.0
are <0.9883, 0.9882, 0.9880> respectively. It is observed that the average of the
differences between the yield of the original die and the fault tolerant dies at varying
defect densities is similar to the other alternatives discussed above.
From Table 10, the die yields of the fault tolerant dies, each consisting of eight MIPS
cores with <one/two/four/six> OICs, at defect density 1.0 are <0.9769, 0.9767,
0.9765, 0.9764> respectively. The die yields of the original die at defect densities 1.0,
5.0, and 9.5 are 0.9769, 0.8912, and 0.8057 respectively. The average of the differences
between the original die and the fault tolerant dies at defect density 9.5 is 0.0031, the
highest among the averages. From this data, it is inferred that larger chips with
increasing redundancy widen the gap between the yield of the original dies and the
fault tolerant dies.

Wafer diameter | 100 mm (defect density 9.5 / 5.0 / 1.0) | 200 mm (9.5 / 5.0 / 1.0) | 300 mm (9.5 / 5.0 / 1.0)
Number of regular dies per wafer | 6519 | 6519 | 6519 | 26,489 | 26,489 | 26,489 | 59,910 | 59,910 | 59,910
Die yield of original die | 0.8963 | 0.9436 | 0.9884 | 0.8963 | 0.9436 | 0.9884 | 0.8963 | 0.9436 | 0.9884
Number of regular working dies per wafer | 5843 | 6152 | 6444 | 23,744 | 24,997 | 26,182 | 53,700 | 56,536 | 59,216
Number of regular fault tolerant dies per wafer, one OIC | 6474 | 6474 | 6474 | 26,305 | 26,305 | 26,305 | 59,495 | 59,495 | 59,495
Number of regular fault tolerant dies per wafer, two OICs | 6428 | 6428 | 6428 | 26,124 | 26,124 | 26,124 | 59,085 | 59,085 | 59,085
Number of regular fault tolerant dies per wafer, four OICs | 6340 | 6340 | 6340 | 25,768 | 25,768 | 25,768 | 58,282 | 58,282 | 58,282
Die yield of fault tolerant die, one OIC | 0.8956 | 0.9433 | 0.9883 | 0.8956 | 0.9433 | 0.9883 | 0.8956 | 0.9433 | 0.9883
Die yield of fault tolerant die, two OICs | 0.8949 | 0.9429 | 0.9882 | 0.8949 | 0.9429 | 0.9882 | 0.8949 | 0.9429 | 0.9882
Die yield of fault tolerant die, four OICs | 0.8929 | 0.9417 | 0.9880 | 0.8929 | 0.9417 | 0.9880 | 0.8929 | 0.9417 | 0.9880
Number of regular working fault tolerant dies per wafer, one OIC | 5798 | 6106 | 6398 | 23,561 | 24,814 | 25,998 | 53,288 | 56,122 | 58,800
Number of regular working fault tolerant dies per wafer, two OICs | 5753 | 6062 | 6353 | 23,381 | 24,633 | 25,817 | 52,881 | 55,713 | 58,391
Number of regular working fault tolerant dies per wafer, four OICs | 5623 | 5930 | 6221 | 22,855 | 24,104 | 25,286 | 51,694 | 54,520 | 57,195

Table 9.
Die yield for fault tolerant die consisting of four MIPS cores with one/two/four OICs.

Wafer diameter | 100 mm (defect density 9.5 / 5.0 / 1.0) | 200 mm (9.5 / 5.0 / 1.0) | 300 mm (9.5 / 5.0 / 1.0)
Number of regular dies per wafer | 3217 | 3217 | 3217 | 13,159 | 13,159 | 13,159 | 29,827 | 29,827 | 29,827
Die yield of original die | 0.8057 | 0.8912 | 0.9770 | 0.8057 | 0.8912 | 0.9770 | 0.8057 | 0.8912 | 0.9770
Number of regular working dies per wafer | 2592 | 2867 | 3143 | 10,603 | 11,728 | 12,856 | 24,034 | 26,584 | 29,141
Number of regular fault tolerant dies per wafer, one OIC | 3205 | 3205 | 3205 | 13,113 | 13,113 | 13,113 | 29,723 | 29,723 | 29,723
Number of regular fault tolerant dies per wafer, two OICs | 3194 | 3194 | 3194 | 13,068 | 13,068 | 13,068 | 29,620 | 29,620 | 29,620
Number of regular fault tolerant dies per wafer, four OICs | 3172 | 3172 | 3172 | 12,977 | 12,977 | 12,977 | 29,415 | 29,415 | 29,415
Number of regular fault tolerant dies per wafer, six OICs | 3150 | 3150 | 3150 | 12,888 | 12,888 | 12,888 | 29,214 | 29,214 | 29,214
Die yield of fault tolerant die, one OIC | 0.8051 | 0.8909 | 0.9769 | 0.8051 | 0.8909 | 0.9769 | 0.8051 | 0.8909 | 0.9769
Die yield of fault tolerant die, two OICs | 0.8040 | 0.8902 | 0.9767 | 0.8040 | 0.8902 | 0.9767 | 0.8040 | 0.8902 | 0.9767
Die yield of fault tolerant die, four OICs | 0.8028 | 0.8895 | 0.9765 | 0.8028 | 0.8895 | 0.9765 | 0.8028 | 0.8895 | 0.9765
Die yield of fault tolerant die, six OICs | 0.8016 | 0.8888 | 0.9764 | 0.8016 | 0.8888 | 0.9764 | 0.8016 | 0.8888 | 0.9764
Number of regular working fault tolerant dies per wafer, one OIC | 2581 | 2856 | 3131 | 10,559 | 11,683 | 12,810 | 23,933 | 26,481 | 29,037
Number of regular working fault tolerant dies per wafer, two OICs | 2559 | 2833 | 3109 | 10,470 | 11,592 | 12,719 | 23,732 | 26,277 | 28,831
Number of regular working fault tolerant dies per wafer, four OICs | 2537 | 2811 | 3087 | 10,382 | 11,503 | 12,629 | 23,535 | 26,075 | 28,628
Number of regular working fault tolerant dies per wafer, six OICs | 2516 | 2790 | 3065 | 10,296 | 11,415 | 12,541 | 23,340 | 25,877 | 28,428

Table 10.
Die yield for fault tolerant die consisting of eight MIPS cores with one/two/four/six OICs.


Thus, a trade-off exists between the die yield and the fault tolerance provided by the
design alternatives discussed above, which have redundancy ranging between 2% and 11%.

7. Reliability analysis of 32-bit OIC

In order to assess the endurance of the four modes of OIC, their reliability is evaluated
and compared. The reliability, denoted by R(t), is defined as the probability of survival
at least until time t. It is estimated using the Weibull distribution and can be determined
as follows:

R(t) = P(T > t) = e^(−λ·t^β)   (6)

where β is the shape parameter, T denotes the lifetime and λ denotes the failure rate
of a component. Defect-induced faults occur in the early stage of the lifetime, while
wear-out-induced faults increase at the tail end of the lifetime. β < 1 is used to model
infant mortality, a period of growing reliability and decreasing failure rate.
When β = 1, the R(t) of the Weibull distribution and the exponential distribution are
identical. β > 1 is used to model wear-out and the end of useful life, where the failure rate
is increasing. The initial failure rate is computed using the failure rate formula:

λ = (C₁·π_T·π_V + C₂·π_E)·π_Q·π_L   (7)

where C₁ and C₂ are complexity factors, and π_T, π_V, π_E, π_Q, π_L are the temperature,
voltage stress, environment, quality and learning factors respectively. The failure rate λ is
assumed to be a function of the number of logical elements in the micro-architectural
components.
The reliabilities of the four modes of OIC, given in Eqs. (8)–(11), are expressed
in terms of R_selectlogic(t), R_sub(t), R_subsc(t), R_comp(t) and R_voter(t), which denote
the reliabilities of the select logic, subtractor, SCS, comparator and voter logic respectively.
TMR + SCS mode reliability is expressed as:

R_TMR+SCS(t) = R_subsc(t)·R_selectlogic(t)·R_comp(t)·R_voter(t)·Σ_{i=2}^{4} C(4, i)·(R_sub(t))^i·(1 − R_sub(t))^(4−i)   (8)

TMR mode reliability is expressed as:

R_TMR(t) = R_selectlogic(t)·R_comp(t)·R_voter(t)·Σ_{i=2}^{3} C(3, i)·(R_sub(t))^i·(1 − R_sub(t))^(3−i)   (9)

DMR mode reliability is expressed as:

R_DMR(t) = R_selectlogic(t)·R_comp(t)·Σ_{i=1}^{2} C(2, i)·(R_sub(t))^i·(1 − R_sub(t))^(2−i)   (10)

Baseline mode reliability is expressed as:

R_baseline(t) = R_selectlogic(t)·R_subsc(t)   (11)

where C(n, i) denotes the binomial coefficient.
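The following Python sketch evaluates Eqs. (6) and (8)–(11); it is a sketch under assumed, illustrative failure rates and component reliabilities, not the chapter's fitted values.

# A sketch of the reliability model in Eqs. (6) and (8)-(11).
import math

def weibull_r(lam, t, beta):
    """Eq. (6): R(t) = exp(-lambda * t**beta)."""
    return math.exp(-lam * t ** beta)

def r_tmr_scs(r_sub, r_scs, r_sel, r_comp, r_voter):
    """Eq. (8): at least 2 of the 4 subtraction units must survive."""
    vote = sum(math.comb(4, i) * r_sub**i * (1 - r_sub)**(4 - i)
               for i in range(2, 5))
    return r_scs * r_sel * r_comp * r_voter * vote

def r_tmr(r_sub, r_sel, r_comp, r_voter):
    """Eq. (9): 2-of-3 majority voting over the subtractors."""
    vote = sum(math.comb(3, i) * r_sub**i * (1 - r_sub)**(3 - i)
               for i in range(2, 4))
    return r_sel * r_comp * r_voter * vote

def r_dmr(r_sub, r_sel, r_comp):
    """Eq. (10): at least 1 of the 2 subtractors must survive."""
    vote = sum(math.comb(2, i) * r_sub**i * (1 - r_sub)**(2 - i)
               for i in range(1, 3))
    return r_sel * r_comp * vote

def r_baseline(r_sel, r_scs):
    """Eq. (11)."""
    return r_sel * r_scs

# Illustrative evaluation in the wear-out phase (beta = 1.2, t = 120,000 h);
# the failure rate and the fixed component reliabilities are assumptions.
t, beta = 120_000, 1.2
r_sub = weibull_r(3e-7, t, beta)
print(r_tmr(r_sub, 0.99, 0.99, 0.999), r_dmr(r_sub, 0.99, 0.99))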

The reliabilities are plotted for the TMR + SCS, TMR, DMR and baseline modes in
Figure 12 for β = 0.9 and 1.0 (which denote the defect-induced fault phase) and in
Figure 13 for β = 1.1 and 1.2 (which denote the wear-out-induced fault phase). λ is a
function of the number of logical elements, as given in Table 3.

Figure 12.
Reliability vs. time for (a) β = 0.9 and (b) β = 1.0.

Figure 13.
Reliability vs. time for (a) β = 1.1 and (b) β = 1.2.

In all these cases, TMR + SCS mode is observed to have better failure tolerance than
all the other modes. For β = 1.2, the reliabilities of TMR mode and DMR mode are less
than that of TMR + SCS mode during the interval 3 × 10⁴ to 15 × 10⁴ hours, as
illustrated in Figure 13. The reliability of TMR mode declines far below the DMR and
baseline modes in the wear-out-induced fault phase because a single component
reliability drops below 0.5, and redundancy then has no merit in TMR mode. Table 11,
in which the reliability of the subtractor goes below 0.5 at t = 180,000 h (20.5 years)
and the reliability gap between TMR and DMR widens, endorses this argument.

7.1 Comparative analysis: OIC and URISC/URISC++

In this section, the reliability of OIC is compared with that of URISC++. The reliability
function of the Weibull distribution, with λ as a function of the number of logical elements,
is used to estimate the reliability of URISC/URISC++. The numbers of logical elements in
OIC and URISC++ are given in Table 4. In the defect-induced fault phase (β = 0.9 and
β = 1.0), a drastic fall in the URISC++ reliability is observed, as shown in Figures 14
and 15. OIC continues to maintain a reliability of 0.96, unlike URISC++, whose endurance
reaches 0.87 after 210,000 hours. In the wear-out-induced fault phase, the reliability gap
between the 32-bit OIC and URISC++ widens when β = 1.1 (Figure 16) after 60,000 hours,
or 6.84 years. For β = 1.2, the reliability levels of OIC fall below those of URISC++,
because a single component reliability reduces below 0.5 after 23.4 years, as shown in
Figure 17, and the redundancy in the OIC does not have any merit thereafter.

t (h) | R (subtractor) | R (comparator) | R (TMR) | R (DMR)
120,000 (13.7 years) | 0.6256 | 0.6948 | 0.16931 | 0.2112
150,000 (17.12 years) | 0.5417 | 0.6226 | 0.09060 | 0.1272
180,000 (20.5 years) | 0.4663 | 0.5545 | 0.04633 | 0.07706

Table 11.
Reliabilities of components in OIC for β = 1.2.

Figure 14.
β = 0.9: reliability vs. time (hours).

Figure 15.
β = 1.0: reliability vs. time (hours).

Figure 16.
β = 1.1: reliability vs. time (hours).

Figure 17.
β = 1.2: reliability vs. time (hours).

8. Conclusion

1. The power and area for OIC and for its contender URISC++ are
evaluated. OIC consumes less power and area than its contender. The
register count in OIC is significantly smaller than in URISC++. It is observed
that the two large register files in URISC++ consume more power, unlike OIC, which
does not maintain register files.

2. The performance overheads at instruction level and application level are
evaluated. Based on the analysis in Section 5, performance loss is incurred in
compute-intensive and memory-intensive micro-benchmarks mainly due to the
MUL and DIV instructions in the programs. However, the performance loss will
not be high in programs with the right mix of arithmetic instructions.

3. In the 1:1 configuration of a multi-core system with OICs, i.e., one conventional core
with one OIC, all the emulation requests from the conventional core are handled by
the OIC. In the 2:1 configuration (two cores and one OIC), simultaneous failures in
the two conventional cores result in a higher performance loss for the application
executing on the system. This performance loss can be reduced by augmenting
the multi-core configuration with an additional OIC. That is, the 1:1 model proves to
be a viable solution with minimal performance loss, as validated by the
simulation results presented in this chapter. On a 1:1 and 1:N basis, i.e., one MIPS
core with one or more OICs, the system can scale to 100 MIPS cores with 100 or
more OICs. Hence, the MCS-OIC model is a scalable design alternative.

4. As expected, it is observed from the reliability analysis of OIC that an increase in
the number of subtractors results in higher reliability. In other words, replication
of functional units improves the reliability of the OIC significantly. Hence,
TMR + SCS mode has higher reliability than the other modes.

5. The yield of the fault tolerant die is slightly lower than that of the original die for all
the design alternatives of MCS-OIC. It is inferred that larger chips with increasing
redundancy widen the gap between the yield of the original dies and the fault
tolerant dies. Thus, a trade-off exists between the die yield and the fault tolerance
provided by the design alternatives discussed above, which have redundancy
ranging between 2% and 11%.

6. The reliabilities of OIC and URISC++ are evaluated and compared. The evaluation
results indicate that OIC is more reliable than URISC++ both in the defect-induced
phase and in the wear-out-induced phase. This can be attributed to the level of
redundancy being significantly higher in OIC than in URISC++.


Author details

Shashikiran Venkatesha1 * and Ranjani Parthasarathi2

1 Vellore Institute of Technology, Vellore, Tamil Nadu, India

2 Department of Information Science and Technology, College of Engineering


Guindy, Anna University, Chennai, Tamil Nadu, India

*Address all correspondence to: [email protected]

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.

References

[1] Borkar S. Designing reliable systems from unreliable components: The challenges of transistor variability and degradation. IEEE Micro. 2005;25(6):10-16

[2] Shivakumar P, Kistler M, Keckler SW, Burger D, Alvisi L. Modeling the effect of technology trends on the soft error rate of combinational logic. In: Proceedings of International Conference on Dependable Systems and Networks. IEEE Explorer; 2002. pp. 389-398. DOI: 10.1109/DSN.2002.1028924

[3] Feng S, Gupta S, Ansari A, Mahlke S. Shoestring: Probabilistic soft error reliability on the cheap. ACM SIGARCH Computer Architecture News. 2010;38(1):385-396

[4] Li T, Ambrose JA, Ragel R, Parameswaran S. Processor design for soft errors: Challenges and state of the art. ACM Computing Surveys. 2016;49(3):1-44

[5] Mittal S. A survey of techniques for managing and leveraging caches in GPUs. Journal of Circuits, Systems, and Computers. 2014;23(08):1430002

[6] Rusu S, Muljono H, Ayers D, Tam S, Chen W, Martin A, et al. 5.4 Ivytown: A 22 nm 15-core enterprise Xeon® processor family. In: 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC). IEEE Explorer; 2014. pp. 102-103. DOI: 10.1109/ISSCC.2014.6757356

[7] Zyuban V, Taylor SA, Christensen B, Hall AR, Gonzalez CJ, Friedrich J, et al. IBM POWER7+ design for higher frequency at fixed power. IBM Journal of Research and Development. 2013;57(6):1-1

[8] Postman J, Chiang P. A survey addressing on-chip interconnect: Energy and reliability considerations. International Scholarly Research Notices. 2012;2012:1-9. Article ID: 916259. DOI: 10.5402/2012/916259

[9] Nassif SR, Mehta N, Cao Y. A resilience roadmap. In: 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010). IEEE Explorer; 2010. pp. 1011-1016. DOI: 10.1109/DATE.2010.5456958

[10] Karnik T, Tschanz J, Borkar N, Howard J, Vangal S, De V, et al. Resiliency for many-core system on a chip. In: 2014 19th Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE Explorer; 2014. pp. 388-389. DOI: 10.1109/ASPDAC.2014.6742921

[11] Gaisler J. A portable and fault-tolerant microprocessor based on the SPARC v8 architecture. In: Proceedings International Conference on Dependable Systems and Networks. IEEE Explorer; 2002. pp. 409-415. DOI: 10.1109/DSN.2002.1028926

[12] Lin S, Kim YB, Lombardi F. Design and performance evaluation of radiation hardened latches for nanoscale CMOS. IEEE Transactions on Very Large-scale Integration Systems. 2010;19(7):1315-1319

[13] Slayman CW. Cache and memory error detection, correction, and reduction techniques for terrestrial servers and workstations. IEEE Transactions on Device and Materials Reliability. 2005;5(3):397-404

[14] Pomeranz I, Vijaykumar TN. FaultHound: Value-locality-based soft-fault tolerance. In: Proceedings of the 42nd Annual International Symposium on Computer Architecture. ACM Digital Library; 2015. pp. 668-681. DOI: 10.1145/2749469.2750372

[15] Meaney PJ, Swaney SB, Sanda PN, Spainhower L. IBM z990 soft error detection and recovery. IEEE Transactions on Device and Materials Reliability. 2005;5(3):419-427

[16] Stackhouse B, Bhimji S, Bostak C, Bradley D, Cherkauer B, Desai J, et al. A 65 nm 2-billion transistor quad-core Itanium processor. IEEE Journal of Solid-State Circuits. 2008;44(1):18-31

[17] Venkatesha S, Parthasarathi R. 32-Bit one instruction core: A low-cost, reliable, and fault-tolerant core for multicore systems. Journal of Testing and Evaluation. 2019;47(6):3941-3962. DOI: 10.1520/JTE20180492. ISSN 0090-3973

[18] Hamming RW. Error detecting and error correcting codes. The Bell System Technical Journal. 1950;29(2):147-160

[19] Rajendiran A, Ananthanarayanan S, Patel HD, Tripunitara MV, Garg S. Reliable computing with ultra-reduced instruction set co-processors. In: DAC Design Automation Conference 2012. ACM Digital Library; 2012. pp. 697-702. DOI: 10.1145/2228360.2228485

[20] Ananthanarayan S, Garg S, Patel HD. Low-cost permanent fault detection using ultra-reduced instruction set co-processors. In: 2013 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE Explorer; 2013. pp. 933-938. DOI: 10.7873/DATE.2013.196

[21] Sundaramoorthy K, Purser Z, Rotenberg E. Slipstream processors: Improving both performance and fault tolerance. ACM SIGPLAN Notices. 2000;35(11):257-268

[22] LaFrieda C, Ipek E, Martinez JF, Manohar R. Utilizing dynamically coupled cores to form a resilient chip multiprocessor. In: 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN'07). IEEE Explorer; 2007. pp. 317-326. DOI: 10.1109/DSN.2007.100

[23] Aggarwal N, Ranganathan P, Jouppi NP, Smith JE. Configurable isolation: Building high availability systems with commodity multi-core processors. ACM SIGARCH Computer Architecture News. 2007;35(2):470-481

[24] Smolens JC, Gold BT, Falsafi B, Hoe JC. Reunion: Complexity-effective multicore redundancy. In: 2006 39th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO'06). IEEE Explorer; 2006. pp. 223-234. DOI: 10.1109/MICRO.2006.42
Chapter

Computer Vision-Based
Techniques for Quality Inspection
of Concrete Building Structures
Siwei Chang and Ming-Fung Francis Siu

Abstract

Quality performance of building construction is frequently assessed throughout
the construction life cycle. In Hong Kong, a quality management system must be
established before commencing new building works. Regular building inspections are
conducted in accordance with the code of practice for new building works. Quality
managers are deployed on construction sites to inspect and record any building
defects. Concrete cracks must be identified, usually followed by proposed
rectifications, in order to protect the public and occupants from danger. This chapter
is structured as follows: background information on concrete cracks is first given.
The traditional technique of conducting regular manual inspection is introduced, in
accordance with Hong Kong's code of practice “Building Performance Assessment
Scoring System (PASS)”. Then, an advanced technique for conducting crack inspection
intelligently based on computer vision is introduced. The procedures for defining,
training, and benchmarking the architecture of convolutional neural network models
are presented. The calculation steps are detailed and illustrated using a simple textbook
example. An experimental case study is used to compare the time and cost of
inspecting concrete cracks using both the manual and the advanced technique. The study
concludes with a presentation of the future vision of robot-human collaboration for
inspecting concrete cracks in building construction.

Keywords: building quality control, concrete crack, quality inspection, computer
vision, artificial intelligence

1. Introduction

Throughout the entire construction life cycle, quality assessment plays an important
role in ensuring the safety, economy, and long-term viability of construction
activities. Construction products that have been thoroughly inspected and certified
by quality inspectors are more likely to be chosen by developers and buyers.
Typically, structural work is considered an essential aspect of quality assessment
because structural problems directly influence construction stability and integrity.
Among construction structural forms, concrete structures are the most common and
basic. Therefore, exploring advanced technologies that enable effective concrete
defect inspection can be deemed a worthwhile endeavor.
Normally, the types of concrete defects include blistering, delamination, dusting,
etc. Among them, concrete cracks, usually caused by deformation, shrinkage,
swelling, or hydraulic pressure, appear most frequently in concrete components. Concrete
cracking is considered the first sign of deterioration. As reported by the BRE Group [1],
cracks up to 5 mm in width simply need to be re-decorated because they only affect the
appearance of the concrete. However, cracks with a width of 5–25 mm have the
potential to trigger structural damage to concrete structures [2]. A 40-year-old
oceanfront condo building collapsed on June 24, 2021, in Florida because of the neglect
of cracks. Experienced engineers noted that the cracked or crumbling concrete, the
interior cracks, and the cracks at the corners of windows and doors were the significant
and earliest signs of this tragedy. Therefore, in order to prevent potential failures that
may pose a loss to society, crack problems should be thoroughly examined and
resolved.
In general, construction works are divided into two categories: new building works
and existing building works. New works refer to a building that will be
constructed from scratch. Existing building works mean that a building has
existed for many years and residents are living inside. In Hong Kong, quality assurance
and control should be conducted by full-time quality managers on-site for both
new and existing buildings. Normally, the quality managers visually inspect the build
quality and appoint a score to the building's quality in accordance with the
Building Performance Assessment Scoring System (PASS) for new buildings, and the
Mandatory Building Inspection Scheme (MBIS) and the Mandatory Window Inspection
Scheme (MWIS) for existing buildings. Meanwhile, to ensure a continuous and
in-depth inspection, non-destructive testing (NDT) methods, e.g., eddy current testing
and ultrasonic testing, are also commonly applied in the quality inspection process.
Quality managers are commonly obliged to work 8 hours per day. Their salary
ranges from HKD 30,000 to HKD 50,000 per month. In PASS, more than 300 quality
assessment items are related to cracking-related problems. Cracks in all building
components, including floors, internal and external walls, ceilings, and others, are
required to be strictly inspected during both the structural and architectural engineering
stages. Therefore, both manual and NDT inspections are considered time-consuming,
costly, and dangerous, especially for large-scale and high-rise structures. To tackle this
issue, computer-vision techniques are increasingly introduced for automated crack
inspection. For example, various convolutional neural network (CNN) architectures
have been developed and implemented to increase the efficiency of manual crack
inspection [3, 4].
Considering the aforementioned context, computer-vision-based automated
crack inspection techniques were introduced by the authors in 2022. To achieve this,
the theoretical background of CNN networks is first explained in the context of
convolution, pooling, fully-connected, and benchmarking processes. AlexNet and
VGG16 models were then implemented and tested to detail and illustrate the
calculation steps. Meanwhile, a practical case study is used to compare the
difference between manual and computer-vision-based crack inspection. The future
directions of combining robotics and computer vision for automated crack
inspection are discussed. This study gives a comprehensive overview of, and solid
foundation for, a computer-vision-based automated crack inspection technique that
contributes to highly efficient, cost-effective, and low-risk quality assessment of
buildings.

2. Computer vision-based automated concrete crack inspection

The term computer vision is defined as an interdisciplinary field that enables
computers to recognize and interpret environments from digital images or videos [5].
Computer vision techniques are rapidly being used to detect, locate, and quantify
concrete defects to reduce the limitations of manual visual inspection. By
automatically processing images and videos, computer vision-based defect detection
technologies enable efficient, accurate, and low-cost concrete quality inspection. Various
techniques in the computer vision field, such as semantic segmentation and object
detection, have been developed and applied to date [6]. Among them, image
classification is considered the most basic computer vision technique and has been
introduced most frequently to predict and target concrete defects.
The motivation of image classification is to identify the categories of input images.
Different from human recognition, an image is first presented to a computer as a
three-dimensional array of numbers. The value of each number ranges from 0 (black) to
255 (white). An example is shown in Figure 1. The crack image is 256 pixels wide and 256
pixels tall, and has three color channels, RGB (Red, Green, and Blue). Therefore, this
image generates 256 × 256 × 3 = 196,608 input numbers.
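For instance, a minimal Python sketch of this dimensionality check, assuming Pillow and NumPy are available and using a hypothetical image file name, is:

# Quick check of the input dimensionality described above (a sketch;
# the file name "crack.jpg" is an assumption).
import numpy as np
from PIL import Image

img = np.asarray(Image.open("crack.jpg").convert("RGB").resize((256, 256)))
print(img.shape)   # (256, 256, 3)
print(img.size)    # 196608 numbers, each in the range 0 (black) to 255 (white)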
The input array is then computed using computer vision algorithms to transform
the numbers into a specific label that belongs to an assigned set of categories. One such
computer vision algorithm is the CNN, which has become dominant in image
classification tasks [7]. A CNN is a form of deep learning model for computing grid-shaped
data. The central idea of a CNN is to identify the image classification by capturing image
features using filters. The features are then mapped to a specific classification by a
trained weight and bias matrix.
There are three main modules included in a CNN model: convolution, pooling, and
fully connected layer. The convolution and pooling layers are used to extract image
features. The fully connected layer is used to determine the weight and biases matrix
and to map the extracted features into specific labels.
The convolution layer is the first processing block in a CNN. During the convolution
process, a set of convolution filters is used to compute the input array A = [a_ij]_(m×n),
where m, n ∈ [width_image, height_image]. After computing, a new image
A* = [a*_ij]_(n×n) is output and passed to the next processing layers.

Figure 1.
An example of the input number array.


The size of the output image can be calculated with Eq. (1), and the values of the
output image pixels can be calculated with Eq. (2). The output images are known as
convolution feature maps.

n = ((m − f + 2p)/s) + 1   (1)

Here, n refers to the size of the output image, m refers to the size of the input image, f
refers to the size of the convolution filter, p refers to the amount of padding, and s refers
to the stride of the convolution filter.

A*_o = f(Σ_k W_o · A_o + b_o)   (2)

Here, A*_o refers to the pixels of the output image, f refers to an applied non-linear
function, W_o refers to the values of the convolution filter matrix, k refers to the number
of convolution filters, A_o refers to the pixels of the input image, and b_o is a bias term
(an arbitrary real number).
An example of a convolution process is shown in Figure 2. In this example, both the width and height of the input image are 5, and the pixels of the image are shown in Figure 2. The convolution filter has a shape of 3 × 3, and only one filter is used. The initial values of the convolution filter are set randomly; the filter matrix is then adjusted and optimized in the subsequent backpropagation process. In this example, no non-linear function or padding is used, and the bias value b_o is set to 0. The stride of the convolution filter is set to 1, and the filter moves from left to right and from top to bottom. The size and values of the output feature map can be computed using Eqs. (1) and (2); the detailed calculation is shown in Table 1. As seen from Figure 2, the size of the input image is 5 and the size of the filter is 3, while the padding is 0 and the convolution stride is 1.
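The output size given by Eq. (1) can be verified with a convolution routine. The sketch below uses placeholder pixel and filter values, since the exact matrices of Figure 2 are shown only graphically; what matters here is the 5 × 5 input, the 3 × 3 filter, the stride of 1, and the absence of padding:

import torch
import torch.nn.functional as F

image = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)  # placeholder 5 x 5 input
kernel = torch.tensor([[[[-1., 0., 0.],
                         [ 0., 1., 0.],
                         [ 1., 0., 1.]]]])                         # placeholder 3 x 3 filter

# No non-linear function, no padding, bias of 0, stride of 1,
# matching the example above.
fmap = F.conv2d(image, kernel, bias=None, stride=1, padding=0)
print(fmap.shape)  # torch.Size([1, 1, 3, 3]); n = ((5 - 3 + 2*0)/1) + 1 = 3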
A pooling layer is used to refine the feature maps. After pooling, the dimensions of the feature maps are simplified. In doing so, the computation cost can be effectively decreased by reducing the number of learning parameters, whilst allowing only the essential information of the feature maps to be retained. Usually, pooling layers follow convolution layers. Average pooling and maximum pooling are the main pooling operations. Similar to convolution layers, pooling filters are used to refine the feature maps. For maximum pooling, the maximum value of the region in the feature map covered by the pooling filter is extracted; for average pooling, the average value of that region is computed. The pooling filters slide over the feature map from top to bottom and from left to right. The output of the pooling process is new feature maps that contain the most prominent features or average features. An example of maximum pooling and average pooling is shown in Figure 3.

Figure 2.
An example of convolution process.

Variable | Equation | Calculation process
Size of feature map | n = ((m − f + 2p)/s) + 1 | ((5 − 3 + 2 × 0)/1) + 1 = 3
Value of feature map | A*_o = f(Σ_k W_o · A_o + b_o) | nine sums of element-wise products, one per output pixel, e.g., (−1) × 1 + 0 × 3 + 0 × 2 + 0 × 2 + 1 × 3 + 0 × 2 + 1 × …

Table 1.
Detailed calculation process of feature map value and size.

Figure 3.
An example of max pooling and average pooling.
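Both pooling operations are easy to reproduce; the sketch below (arbitrary feature map values, chosen only for illustration) applies 2 × 2 pooling filters with a stride of two:

import torch
import torch.nn.functional as F

fmap = torch.tensor([[[[1., 3., 2., 4.],
                       [5., 6., 1., 2.],
                       [7., 2., 8., 3.],
                       [4., 9., 0., 1.]]]])

print(F.max_pool2d(fmap, kernel_size=2, stride=2))  # maximum of each 2 x 2 region
print(F.avg_pool2d(fmap, kernel_size=2, stride=2))  # average of each 2 x 2 region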
After extracting image features, the fully connected layers are applied to map these features to classification labels. The relationship between input feature maps and output classifications is calculated using an artificial neural network (ANN). The ANN is structured into input layers, hidden layers, and output layers, each containing a group of neurons. The neurons connect to one another through a weight matrix; the weights represent the importance of the input feature maps to the classification labels. Therefore, the relationships between inputs and outputs can be obtained by calculating a weight matrix that connects image feature neurons and classification neurons.
To achieve this, the cube-shaped feature maps are first flattened into one-dimensional vectors. The values of the transformed vectors represent the values of the input neurons. Then Eq. (3) is applied to calculate the values of the new neurons connected to the input neurons. The initial weights and bias values are chosen at random.
y_j(x) = f(Σ_{i=1}^{n} w_j x_i + b)   (3)

Here: y_j refers to the value of output neuron j, w_j refers to the weights that connect different neurons, x_i refers to the values of the input neurons, and b refers to the bias.
A back-propagation (BP) algorithm is commonly used to train and modify the weights and biases. BP updates weights and biases by computing the gradient of a loss function. In doing this, the optimal weight and bias matrix that minimizes the loss between model outputs and actual values is identified. To date, various loss functions have been developed and applied. For example, the mean square error (MSE), shown in Eq. (4), is one of the most frequently used loss functions. Stochastic gradient descent (SGD) is then applied to determine the updated weights and biases using the gradient of the loss function, as shown in Eq. (5).

Loss = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²   (4)

Here: Loss refers to the loss between the output neuron and the actual value, n refers to the number of neurons that connect to one specific output neuron, y refers to the actual value, and ŷ refers to the value of one output neuron.

w′ = w − η (∂L/∂w)   (5)
b′ = b − η (∂L/∂b)

Here: w′ and b′ refer to the updated weights and biases, w and b refer to the former weights and biases, η refers to the learning rate, and ∂L/∂w and ∂L/∂b refer to the partial derivatives of the loss function with respect to the weights and biases, respectively.
An example of feature map updating using BP follows. Figure 4 depicts an example of a fully connected process. The initial weights and biases in this process are determined randomly. Suppose the values of w11, w12, w21, w22, w5, and w6 are 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6, respectively, and the values of x1, x2, and the actual output are 5, 1, and 0.24. The detailed calculation of the updated weights, biases, and feature map is shown in Table 2.
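The arithmetic of this example can be reproduced in a few lines of Python. The sketch below implements the forward pass, the loss of Eq. (4), and the update rule of Eq. (5); a learning rate of 0.1 is assumed, consistent with the calculations in Table 2:

# Forward pass, loss, and one SGD update for the network of Figure 4.
w11, w12, w21, w22, w5, w6 = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6
x1, x2, y_actual, lr = 5.0, 1.0, 0.24, 0.1  # lr = 0.1 is an assumption

h1 = w11 * x1 + w21 * x2          # 0.8
h2 = w12 * x1 + w22 * x2          # 1.4
y = w5 * h1 + w6 * h2             # 1.24
loss = 0.5 * (y_actual - y) ** 2  # 0.5

g = y - y_actual                  # dLoss/dy = 1.0
w5_new = w5 - lr * g * h1         # 0.5 - 0.1 * 0.8 = 0.42
w6_new = w6 - lr * g * h2         # 0.6 - 0.1 * 1.4 = 0.46
w11_new = w11 - lr * g * w5 * x1  # 0.1 - 0.1 * 2.5 = -0.15
w12_new = w12 - lr * g * w6 * x1  # 0.2 - 0.1 * 3.0 = -0.10
w21_new = w21 - lr * g * w5 * x2  # 0.3 - 0.1 * 0.5 = 0.25
w22_new = w22 - lr * g * w6 * x2  # 0.4 - 0.1 * 0.6 = 0.34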
In conclusion, during the convolution and pooling processes in a CNN, the features of the input image are extracted first. The pooled feature maps are then flattened and treated as input neurons in the fully connected process. After several training periods, the appropriate weights and biases can be determined using BP. The classifications of input images can then be predicted automatically and reliably using the optimal weights and biases.
A confusion matrix is a table structure that permits the viewing of CNN performance [8]. Each row of the matrix records the number of images from actual classes, while each column records the number of images from predicted classes. There are four types of indicators in the matrix: (1) true positive (TP) represents the images that are correctly predicted as the actual class; (2) false positive (FP) represents the images that are wrongly predicted as that class; (3) true negative (TN) represents the images that are correctly predicted as another actual class; and (4) false negative (FN) represents the images that are wrongly predicted as another actual class. TP, FP, TN, and FN can be expressed in a 2 × 2 confusion matrix, shown in Figure 5.
Based on TP, FP, FN, and TN, four typical CNN performance evaluation indexes, namely accuracy, precision, recall, and F1-score, can be calculated using Eqs. (6)–(9). For the crack inspection problem, accuracy shows how many images are predicted correctly. Precision shows the percentage of actual cracked images among all images predicted as cracked; a CNN with a high precision score has a better inspection ability for cracked images. Recall shows the ratio of correctly predicted cracked images to all actual cracked images; a CNN with a high recall score has a better capacity to distinguish cracked from uncracked images. The F1-score shows the comprehensive performance of precision and recall; a CNN with a high F1-score indicates stronger robustness.

Figure 4.
An example of a fully connected process.

Variable | Equation | Calculation process
h1 | h1 = w11 × x1 + w21 × x2 | 5 × 0.1 + 1 × 0.3 = 0.8
h2 | h2 = w12 × x1 + w22 × x2 | 5 × 0.2 + 1 × 0.4 = 1.4
y | y = w5 × h1 + w6 × h2 | 0.8 × 0.5 + 1.4 × 0.6 = 1.24
Loss | Loss = 0.5 × (y_actual − y_output)² | 0.5 × (0.24 − 1.24)² = 0.5
w5′ | ∂L/∂w5 = (y_output − y_actual) × h1; w5′ = w5 − η × ∂L/∂w5 | 0.5 − 0.1 × 0.8 = 0.42
w6′ | ∂L/∂w6 = (y_output − y_actual) × h2; w6′ = w6 − η × ∂L/∂w6 | 0.6 − 0.1 × 1.4 = 0.46
w11′ | ∂L/∂w11 = (y_output − y_actual) × w5 × x1; w11′ = w11 − η × ∂L/∂w11 | 0.1 − 0.1 × 2.5 = −0.15
w12′ | ∂L/∂w12 = (y_output − y_actual) × w6 × x1; w12′ = w12 − η × ∂L/∂w12 | 0.2 − 0.1 × 3 = −0.1
w21′ | ∂L/∂w21 = (y_output − y_actual) × w5 × x2; w21′ = w21 − η × ∂L/∂w21 | 0.3 − 0.1 × 0.5 = 0.25
w22′ | ∂L/∂w22 = (y_output − y_actual) × w6 × x2; w22′ = w22 − η × ∂L/∂w22 | 0.4 − 0.1 × 0.6 = 0.34
Updated feature map | h1′ = y_output × w5′ | 1.24 × 0.42 = 0.5208
 | h2′ = y_output × w6′ | 1.24 × 0.46 = 0.5704
 | x1′ = h1′ × w11′ + h2′ × w12′ | 0.5208 × (−0.15) + 0.5704 × (−0.1) = −0.1352
 | x2′ = h1′ × w21′ + h2′ × w22′ | 0.5208 × 0.25 + 0.5704 × 0.34 = 0.3241

Table 2.
Detailed calculation process of feature map updating using BP.

Figure 5.
An example of a 2 × 2 confusion matrix.


Accuracy = (TP + TN)/(TP + TN + FP + FN) × 100   (6)

Precision = TP/(TP + FP) × 100   (7)

Recall = TP/(TP + FN) × 100   (8)

F1-score = 2 × (Precision × Recall)/(Precision + Recall) × 100   (9)

For example, suppose the prepared dataset contains 10,000 photos, with 3000 cracked surface images and 7000 uncracked surface images. After CNN processing, 2700 images are correctly predicted as cracked surfaces, and 300 images out of the 3000 real cracked surfaces are wrongly predicted as uncracked surfaces; 6500 images are correctly predicted as uncracked surfaces, and 500 images out of the 7000 uncracked surfaces are wrongly predicted as cracked surfaces. Then, based on the above-mentioned concepts, the values of TP, FN, FP, and TN are 2700, 300, 500, and 6500, respectively. Table 3 shows the details of the accuracy, precision, recall, and F1-score calculations.

Variable | Equation | Calculation process
Accuracy | (TP + TN)/(TP + TN + FP + FN) × 100 | (2700 + 6500)/(2700 + 300 + 500 + 6500) × 100 = 92%
Precision | TP/(TP + FP) × 100 | 2700/(2700 + 500) × 100 = 84.375%
Recall | TP/(TP + FN) × 100 | 2700/(2700 + 300) × 100 = 90%
F1-score | 2 × (Precision × Recall)/(Precision + Recall) × 100 | 2 × ((0.84375 × 0.9)/(0.84375 + 0.9)) × 100 = 87.1%

Table 3.
Detailed calculation process of accuracy, precision, recall, and F1-score.
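The same calculation is straightforward to program; the short function below (a minimal sketch of Eqs. (6)–(9)) reproduces the worked example from its confusion-matrix counts:

def cnn_metrics(tp, fn, fp, tn):
    # Accuracy, precision, recall, and F1-score (all in percent),
    # following Eqs. (6)-(9).
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    precision = tp / (tp + fp) * 100
    recall = tp / (tp + fn) * 100
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

print(cnn_metrics(tp=2700, fn=300, fp=500, tn=6500))
# (92.0, 84.375, 90.0, 87.09...)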


3. Example of concrete crack inspection using CNN

3.1 Textbook example of crack inspection using CNN

This chapter provides an example of how the convolution, pooling, and fully connected operations, together with benchmarking, are applied in real-world concrete crack inspection using a CNN. The above-mentioned calculations were carried out using the Python programming language and the PyTorch package.

3.1.1 Dataset

In this example, the input images were gathered from Kaggle, the world's best-known data science community. Kaggle provides access to thousands of public datasets covering a wide range of topics, including medicine, agriculture, and construction [9]. Searching for "concrete crack" in the Kaggle datasets module returns 12 datasets. The "SDNET2018" dataset was chosen from among them since it comprises sufficient and clean concrete surface images with and without cracks [10]. In "SDNET2018", 56,096 images were captured on the Utah State University campus using a 16-megapixel Nikon digital camera, covering 54 bridge decks, 72 walls, and 104 pavements. In this example, only images of walls and pavements were used to demonstrate the comparison between manual inspection and CNN-based automatic inspection. Therefore, 42,472 images were used as the training and testing dataset. Among them, 6459 cracked concrete surfaces are considered the positive class; the captured cracks are as narrow as 0.06 mm and as wide as 25 mm. The 36,013 uncracked concrete surfaces are considered the negative class. Images in this dataset contain a range of impediments, such as shadows, surface roughness, scaling, edges, and holes. The diverse photographing backgrounds help ensure the robustness of the designed CNN architecture. At a ratio of 80/20, the cracked and uncracked concrete photos were randomly separated into training and testing datasets. The input images' pixels were standardized to 227 × 227 × 3 for AlexNet and 224 × 224 × 3 for VGG16. Table 4 shows the details of the input images. Figure 6 shows examples of the input images.
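A typical way to prepare such a dataset in PyTorch is sketched below. The directory layout (cracked/ and uncracked/ subfolders) and the use of random_split are assumptions that mirror the description above, not the exact preprocessing scripts used for SDNET2018:

import torch
from torchvision import datasets, transforms

# Assumed layout: sdnet2018/cracked/*.jpg, sdnet2018/uncracked/*.jpg
tfm = transforms.Compose([
    transforms.Resize((227, 227)),  # 227 x 227 x 3 for AlexNet (224 for VGG16)
    transforms.ToTensor(),
])
full = datasets.ImageFolder("sdnet2018", transform=tfm)

n_train = int(0.8 * len(full))      # random 80/20 train/test split
train_set, test_set = torch.utils.data.random_split(
    full, [n_train, len(full) - n_train])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64)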

3.1.2 CNN architecture

In this section, two pre-trained CNN networks, AlexNet and VGG16, were introduced to illustrate the CNN computation process. AlexNet was designed as an eight-layer architecture, while VGG16 has a depth twice that of AlexNet. According to [11, 12], the depth of a CNN network has a significant impact on model performance.

 | Total dataset | Training dataset | Testing dataset
Total images | 42,472 | 33,978 | 8494
Cracked images | 6227 | 4986 | 1238
Non-cracked images | 36,245 | 28,992 | 7256
Image pixels | AlexNet: 227 × 227 × 3; VGG16: 224 × 224 × 3

Table 4.
Details of prepared dataset.


Figure 6.
Examples of cracked and non-cracked surface.

Therefore, by training and testing the prepared dataset with AlexNet and VGG16, the influence of network depth on prediction performance and computation cost can be further highlighted.

1. AlexNet architecture

The AlexNet architecture, developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton in 2012, is considered one of the most influential CNN architectures [13]. AlexNet consists of five convolution layers and three fully-connected layers. Max-pooling layers follow the first, second, and fifth convolution layers. AlexNet was designed to predict 1000 object classifications; 1.2 million images with a pixel size of 224 × 224 × 3 were used as input. As a result, 60 million parameters and 650,000 neurons are included in the computation process. The details of the AlexNet architecture are shown in Figure 7.
In the first convolution stage, 96 convolution filters with a size of 11 × 11 are applied; they move with a stride of four pixels. The size of the pooling filters is 3 × 3, and they move with a stride of two. It is worth noticing that the error rate can be reduced by applying the overlapping pooling technique (the stride of the pooling filters is smaller than their size). In the second convolution stage, the convolution filters become smaller, from 11 × 11 to 5 × 5, while their number grows from 96 to 256. The convolution filters in the third and fourth convolution stages keep shrinking, from 5 × 5 to 3 × 3, while their number keeps increasing from 256 to 384. In the last convolution stage, the size of the convolution filters remains 3 × 3, and their number returns to 256. The size and stride of the pooling filters also remain the same in the second and fifth convolution stages. Finally, 4096 neurons are included in both the first and second fully-connected layers. The final fully-connected layer contains 1000 neurons to output the probabilities of the 1000 classifications; these neurons are activated by the softmax function.
Figure 7.
Details of AlexNet architecture.

The outputs of each convolution and fully-connected layer are activated by a non-linear function, namely the Rectified Linear Unit (ReLU) [14]. It was shown in AlexNet that using ReLU instead of other activation functions effectively accelerates training and improves computation efficiency, especially for larger architectures trained on larger datasets. The local response normalization (LRN) technique [15] is also applied following the ReLUs to reduce the error rate. Moreover, to avoid overfitting, the drop-out technique [16] is applied in the first two fully-connected layers, with the dropout ratio set at 0.5.
AlexNet was trained using SGD. The batch size, momentum [17], and weight decay [18] were set as 128, 0.9, and 0.0005, respectively, and the learning rate was set as 0.00001. AlexNet was trained for roughly 90 epochs on NVIDIA GTX 580 3GB GPUs. As a result, the top-1 and top-5 error rates of AlexNet on the test set reached 37.5% and 17.0%, around 10 percentage points lower than the best-performing architecture at that time.

2. VGG16 architecture

VGG16, designed by Karen Simonyan and Andrew Zisserman in 2015, was developed to investigate the influence of convolution network depth on prediction accuracy in larger datasets [19]. Therefore, VGG16 was designed as a deep architecture with 16 weight layers, including 13 convolution layers and three fully-connected layers. The convolution layers in VGG16 are organized into five convolution blocks. The details of the VGG16 architecture are shown in Figure 8.

Figure 8.
Details of VGG16 architecture.

As seen from Figure 8, there are two convolution layers in each of the first two convolution blocks and three convolution layers in each of the following three blocks. The size of all convolution filters is uniformly 3 × 3, and all filters move with a stride of one. The number of convolution filters increases gradually from 64 to 128, 256, and 512 across the five convolution blocks. To preserve information about image boundaries as completely as possible, spatial padding is applied [20]. As with AlexNet, ReLU is applied as the non-linearity for the convolution and fully-connected outputs. However, unlike in AlexNet, LRN is not used in VGG16 because the authors stated that LRN has no influence on model performance while increasing memory consumption and computation time.
Five max-pooling layers follow the last convolution layer of each block. The max-pooling filters are uniform, with a size of 2 × 2 and a stride of two. As with AlexNet, the first two fully-connected layers have 4096 neurons each, and the final layer has 1000 output neurons activated by softmax. To avoid overfitting, the drop-out technique is again applied in the first two fully-connected layers, with the dropout ratio set at 0.5. It can be concluded that the most important novelties of VGG16 compared with AlexNet are: (1) the deep architecture; (2) the uniform, small convolution filters.
In the training process, the batch size, momentum, weight decay, and learning rate were set as 256, 0.9, 0.0005, and 0.0001, respectively. As a result, the top-1 and top-5 errors of VGG16 reached 24.4% and 7.2%, which is 13 and 9.8 percentage points lower than AlexNet. The results proved that the deep architecture and small convolution filters have positive influences on CNN performance.
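For the two-class crack problem examined here, the 1000-neuron output layers of both networks have to be replaced with two-neuron layers. A hedged torchvision sketch is given below; whether the chapter's experiments started from ImageNet-pretrained weights is not stated, so pretrained=True is an assumption:

import torch.nn as nn
from torchvision import models

alexnet = models.alexnet(pretrained=True)  # pretrained weights assumed
vgg16 = models.vgg16(pretrained=True)

# Both architectures end with a Linear(4096, 1000) layer; replace it
# with a two-class head (cracked vs. uncracked).
alexnet.classifier[6] = nn.Linear(4096, 2)
vgg16.classifier[6] = nn.Linear(4096, 2)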

3.1.3 Training and benchmarking

Finally, the prepared dataset mentioned in Section 3.1.1 was used to train and test AlexNet and VGG16, respectively. The training and testing process was conducted in Kaggle kernels [21]. A Kaggle kernel, provided by the Kaggle community, is a virtual environment equipped with an NVIDIA Tesla K80, a dual-GPU design, and 24GB of GDDR5 memory. This high computing performance enables training and testing processes 5-10 times faster than CPU-only devices. Both AlexNet and VGG16 were trained using SGD. The batch size and learning rate were set as 64 and 0.0001, respectively. To avoid overfitting, dropout was applied at the fully-connected stage, with the dropout probability set as 0.5.
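A minimal training loop matching this setup (SGD, batch size 64, learning rate 0.0001) might look as follows; the names alexnet and train_loader follow the earlier sketches and are assumptions, not the authors' actual scripts:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = alexnet.to(device)  # or vgg16; see the earlier sketch
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)

for epoch in range(60):  # 60 epochs, as discussed below
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()   # back-propagation
        optimizer.step()  # SGD weight update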
Python was used to program the computing process, and the PyTorch library was imported. The whole computation time was roughly 2 h for AlexNet and 4 h for VGG16. The models' performance on the training and testing datasets is shown in Figures 9 and 10, respectively. The training and testing loss and accuracy values are represented on the vertical axis, while the processing epochs are represented on the horizontal axis. Since the loss and accuracy variation remained consistent after the 60th epoch, and overtraining the model could lead to an overfitting problem [22], the number of training epochs was set to 60.

Figure 9.
Training loss and accuracy of AlexNet and VGG16.

Figure 10.
Testing loss and accuracy of AlexNet and VGG16.
As shown in Figure 9, both AlexNet and VGG16 converged successfully. The training loss for AlexNet reduced steadily from 0.43 to 0.05 by the 58th epoch and then remained constant in subsequent epochs. Similarly, at the 58th epoch, AlexNet's training accuracy increased from 0.85 to 0.98. At the 35th epoch, the training loss for VGG16 dropped from 0.42 to 0.01 and subsequently stayed steady at approximately 0.008-0.01 in following epochs. At the 34th epoch, the training accuracy of VGG16 increased from 0.85 to 0.99 and then remained at 0.99. The results revealed that VGG16 performed better during the training procedure: its convergence speed is roughly two times that of AlexNet, its minimum training loss is 0.04 lower than AlexNet's, and its maximum accuracy is 0.01 higher. It is observed that deeper CNN designs assist in the faster processing of larger datasets, which contributes to producing more trustworthy weight and bias matrices. These results are in accordance with those proposed by [23].
Figure 10 shows the loss and accuracy variations of AlexNet and VGG16 on the testing dataset. The testing loss and accuracy follow the fluctuation tendency of the training loss and accuracy, indicating that neither AlexNet nor VGG16 had overfitting or underfitting problems. VGG16 also outperformed AlexNet in the testing process. AlexNet and VGG16 have minimum testing losses of 0.01 and 0.00003, respectively. AlexNet's maximum accuracy was 0.98, and VGG16's was 0.99. On the testing dataset, VGG16 converges at the 34th epoch, which is nearly two times faster than AlexNet.
The confusion matrices of AlexNet and VGG16 are shown in Table 5. It can be seen that the accuracy scores of AlexNet and VGG16 are nearly identical, indicating that AlexNet and VGG16 have similar prediction abilities for cracked and uncracked concrete surfaces. VGG16 has a precision and recall of 96.5% and 89.6%, respectively, which is nearly 1% and 5% greater than AlexNet. The results show that VGG16 outperforms AlexNet for the predicted positive class (cracked surfaces); meanwhile, more cracked images from the actual dataset can be correctly identified by applying VGG16. AlexNet and VGG16 have F1-scores of 89.6% and 92.9%, respectively, indicating that the VGG16 model is more robust.

 | AlexNet | VGG16
TP | 5242 | 5579
FN | 985 | 648
TN | 36,007 | 36,040
FP | 238 | 205
Accuracy | 0.971204558 | 0.97991618
Precision | 0.956569343 | 0.9645574
Recall | 0.84181789 | 0.895937048
F1-score | 0.895532587 | 0.928981767

Table 5.
Confusion matrix of AlexNet and VGG16.


In conclusion, VGG16 demonstrates better performance. Since it is important to avoid missing any cracked surfaces, the model with the highest recall and F1-score is more worthwhile. Meanwhile, AlexNet is also a preferable option when the numbers of cracked and uncracked images are balanced, because it shows an accuracy score similar to VGG16 with a lower computation cost.

3.2 Comparison of CNN and manual inspection

During the on-site construction quality management process, quality control managers (QCMs) or registered inspectors (RIs) are responsible for personally inspecting and reporting quality problems with forms, reports, and photocopies. According to the Mandatory Building Inspection Scheme (MBIS) and related contract regulations, QCMs and RIs are obliged to examine cracks and other defects in building components visually or with non-destructive equipment [24], for example: (1) cracks on structural components, e.g., structural beams and columns; (2) cracks on external finishes, e.g., tiling, rendering, and cladding; (3) cracks on fins, grilles, windows, and curtain walls.
When using computer-vision-based inspection techniques, by contrast, there is no need for QCMs and RIs to conduct the aforementioned inspection tasks on-site. Instead, their primary responsibilities may shift to (1) taking photos or videos of building components and (2) inputting the images and videos into pre-trained CNN models. To highlight the differences between manual and computer-vision-based crack inspection, an experiment was set up to calculate and compare inspection time and cost.
The layout of the experiment is shown in Figure 11. Suppose the experiment case is a 15 m × 15 m × 2 m residential building located in San Bernardino. The inspection items include cracks on the slab, internal walls, and external walls. According to Dohm [25], the total manual inspection time for a 1600-2600 ft² home in San Bernardino is around 13.65 h, covering inspection items such as the building slab and shear walls. The manual inspection service cost is around $85.9 per hour.
Referring to the computer-vision-based inspection process described above, the total inspection time includes the time for taking images or videos and for CNN processing. Assume that the input videos are obtained with handheld camera devices while the QCMs or RIs walk through the building. The time for taking videos can then be considered as the walking time.

Figure 11.
Layout of the experiment case.


Figure 12.
Walking path of the inspectors.

 | Manual inspection | Computer-vision-based inspection
Time | 13.65 × 3600 = 49,140 s | Taking video: (1/0.1 × 15) × 15 + 1/0.1 × 14 = 2390 s; CNN processing: (2390 × 24)/100 = 573.6 s
Cost | (85.9/3600) × 49,140 = $1172.5 | (2390 + 573.6) × (85.9/3600) = $70.7

Table 6.
Time and cost of manual and computer-vision based crack inspection.

Normally, the average walking speed for people aged 20-49 is around 1.42 m/s [26]. Considering the time delays of taking videos, the walking speed here is taken as 0.1 m/s. Suppose the walking path follows an S-curve, as shown in Figure 12. According to [27], the universally accepted frame rate is 24 frames per second (FPS). Suppose the inspector begins to record video while taking the first step; the duration of the captured video then equals the walking time. The number of input images converted from the captured video can be calculated as 2390 s × 24 FPS = 57,360. According to the testing time of the textbook example mentioned in Section 3.1 and the study outcomes of [28], the speed of CNN processing is around 100 images per second. The CNN processing time can therefore be calculated as 57,360/100 = 573.6 s, and the cost of computer-vision-based crack inspection as (2390 s + 573.6 s) × (85.9/3600) = $70.7.
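The whole comparison condenses into a few lines; the sketch below simply reproduces the arithmetic summarized in Table 6:

# Reproducing the time and cost figures of Table 6.
rate = 85.9 / 3600                  # inspection service cost, $ per second

manual_time = 13.65 * 3600          # 49,140 s
manual_cost = rate * manual_time    # ~$1172.5

video_time = (1 / 0.1 * 15) * 15 + (1 / 0.1) * 14  # S-curve walk at 0.1 m/s: 2390 s
frames = video_time * 24            # 24 FPS -> 57,360 input images
cnn_time = frames / 100             # ~100 images/s -> 573.6 s
cv_cost = (video_time + cnn_time) * rate           # ~$70.7

print(round(manual_cost, 1), round(cv_cost, 1))    # 1172.5 70.7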
Table 6 summarizes the calculation of the time and cost of manual and computer-vision-based crack inspection. It can be seen that using the CNN-based technique effectively reduces inspection time and cost: the inspection time decreases from 13.65 to 0.8 h in total, and the inspection cost decreases from $1172.5 to $70.7.

4. Conclusion

To facilitate automatic building quality inspection and management, this study introduced a computer-vision-based automated concrete crack inspection technique. In order to demonstrate the computing and benchmarking process, the mathematical underpinnings of one of the most essential computer vision algorithms, the convolutional neural network, were first detailed.
The theoretical foundation was then explained using a textbook example. In this case, the input dataset "SDNET2018" was obtained from the Kaggle community; its 56,096 photos were acquired on the Utah State University campus with a digital camera. To train on the input images, two classic CNN architectures, AlexNet and VGG16, were chosen. The PyTorch library was used to carry out the training process in a Kaggle kernel. The models' performance was evaluated using a confusion matrix. The results revealed that the prediction accuracy of AlexNet and VGG16 is nearly identical. However, VGG16's precision and recall are higher than AlexNet's, indicating that VGG16 has a stronger capacity to identify cracked surfaces. VGG16's F1-score is also greater than AlexNet's, signifying that VGG16 is more robust. VGG16 is deemed preferable since it has higher precision, recall, and F1-score, which is crucial when distinguishing cracked and uncracked surfaces. When the ratio of cracked to uncracked images is almost balanced, however, AlexNet is a feasible alternative because of its high accuracy score and low computation cost. It is worth noting that deeper and broader CNN architectures outperform shallow ones on larger datasets.
Next, an experimental case was designed to compare manual and computer-vision-based crack inspection in terms of time and cost. The results showed that efficiency and cost-effectiveness can be effectively improved by adopting computer-vision-based techniques: the inspection time and cost of the designed case decrease from 13.65 to 0.8 h and from $1172.5 to $70.7, respectively.
The findings help to demonstrate the computer-vision-based quality inspection technique in both theory and practice. Although the recently developed computer-vision-based technology improves the efficiency, cost-effectiveness, and safety of human quality inspection, it still relies primarily on the quality of the collected images. Some concrete surface images are difficult to capture in real-life situations, including, among others, high-rise buildings, component corners, and buildings in extremely harsh environments. To address this issue, robotics techniques are growing rapidly as a means of upgrading computer-vision-based quality inspection [29]. Previous research has begun to use mobile robots such as UAVs to gather surface images [30-32], and some studies have focused on exploring robotic inspection systems to raise the automation level of quality inspection [33, 34]. Therefore, merging robotics and computer vision approaches may be considered a worthwhile future research direction to improve the efficiency and accuracy of quality control and management.

Acknowledgements

The authors highly appreciate the full support funding of the full-time PhD
research studentship under the auspice of the Department of Building and Real Estate,
The Hong Kong Polytechnic University, Hong Kong. The authors would like to
express their deepest gratitude to Prof. Heng Li for his guidance. Finally, the authors
would like to acknowledge the research team members (Mr. King Chi Lo, Mr. Qi Kai)
and anyone who provided help and comments to improve the content of this article.

Conflict of interest

All authors declare that they have no conflicts of interest.



List of abbreviations

PASS Building Performance Assessment Scoring System


MBIS Mandatory Building Inspection Scheme
MWIS Mandatory Window Inspection Scheme
NDT Non-Destructive Testing
CNN Convolutional Neural Network
RGB Red, Green, and Blue
ANN Artificial Neural Network
BP Back-Propagation
MSE Mean Square Error
SGD Stochastic Gradient Descent
ReLU Rectified Linear Units
LRN Local Response Normalization
QCM Quality Control Managers
RI Registered Inspectors

Author details

Siwei Chang and Ming-Fung Francis Siu*


The Hong Kong Polytechnic University, Hung Hom, Hong Kong

*Address all correspondence to: [email protected]

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.

References

[1] Driscoll R. Assessment of Damage in Low-Rise Buildings, with Particular Reference to Progressive Foundation Movement. Digest 251. London: H.M.S.O.; 1981

[2] Chitte CJ, Sonawane YN. Study on causes and prevention of cracks in building. International Journal for Research in Applied Sciences and Engineering Technology. 2018;6(3):453-461. DOI: 10.22214/ijraset.2018.3073

[3] Kim B, Cho S. Image-based concrete crack assessment using mask and region-based convolutional neural network. Structural Control and Health Monitoring. 2019;26:e2381. DOI: 10.1002/stc.2381

[4] Rao A, Nguyen T, Palaniswami M, Ngo T. Vision-based automated crack detection using convolutional neural networks for condition assessment of infrastructure. Structural Health Monitoring. 2020;20. DOI: 10.1177/1475921720965445

[5] Vandoni CE. Computer vision: Evolution and promise. In: 19th CERN School of Computing. Geneva: CERN; 1996. pp. 21-25. DOI: 10.5170/CERN-1996-008.21. ISBN 978-9290830955

[6] Feng X, Jiang Y, Yang X, Du M, Li X. Computer vision algorithms and hardware implementations: A survey. Integration. 2019;69:309-320. DOI: 10.1016/j.vlsi.2019.07.005

[7] Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: An overview and application in radiology. Insights into Imaging. 2018;9(4):611-629. DOI: 10.1007/s13244-018-0639-9

[8] Stehman SV. Selecting and interpreting measures of thematic classification accuracy. Remote Sensing of Environment. 1997;62(1):77-89. DOI: 10.1016/S0034-4257(97)00083-7

[9] Banachewicz K, Massaron L. Data Analysis and Machine Learning with Kaggle: How to Compete on Kaggle and Build a Successful Career in Data Science. Birmingham, United Kingdom: Packt Publishing Limited; 2021. Available from: https://www.bookdepository.com/Data-Analysis-Machine-Learning-with-Kaggle-Konrad-Banachewicz/9781801817479

[10] Dorafshan S, Thomas RJ, Maguire M. SDNET2018: An annotated image dataset for non-contact concrete crack detection using deep convolutional neural networks. Data in Brief. 2018;21:1664-1668. DOI: 10.1016/j.dib.2018.11.015

[11] O'Shea A, Lightbody G, Boylan G, Temko A. Investigating the impact of CNN depth on neonatal seizure detection performance. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 18-21 July 2018; USA. New York: IEEE; 2018. pp. 5862-5865

[12] Pasupa K, Sunhem W. A comparison between shallow and deep architecture classifiers on small dataset. In: 2016 8th International Conference on Information Technology and Electrical Engineering (ICITEE); 5-6 October 2016; Indonesia. New York: IEEE; 2016. pp. 1-6

[13] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems. 2012;25:1097-1105. DOI: 10.1145/3065386

[14] Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning; 21-24 June 2010; Israel. International Machine Learning Society; 2010. pp. 417-425

[15] Kim GB, Jung KH, Lee Y, Kim HJ, Kim N, Jun S, et al. Comparison of shallow and deep learning methods on classifying the regional pattern of diffuse lung disease. Journal of Digital Imaging. 2018;31(4):415-424. DOI: 10.1007/s10278-017-0028-9

[16] Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research. 2014;15(1):1929-1958. DOI: 10.5555/2627435.2670313

[17] Smith LN. A Disciplined Approach to Neural Network Hyper-Parameters: Part 1: Learning Rate, Batch Size, Momentum, and Weight Decay. arXiv preprint arXiv:1803.09820. 2018

[18] Gnecco G, Sanguineti M. The weight-decay technique in learning from data: An optimization point of view. Computational Management Science. 2008;6(1):53-79. DOI: 10.1007/s10287-008-0072-5

[19] Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556. 2014

[20] He K, Zhang X, Ren S, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2015;37(9):1904-1916. DOI: 10.1109/TPAMI.2015.2389824

[21] Banachewicz K. Data Analysis and Machine Learning with Kaggle. Birmingham: Packt Publishing Limited; 2021

[22] Al Haque AF, Rahman MR, Al Marouf A, Khan MAAA. A computer vision system for Bangladeshi local mango breed detection using convolutional neural network (CNN) models. In: 4th International Conference on Electrical Information and Communication Technology (EICT); 20-22 December 2019; Bangladesh. New York: IEEE; 2019. pp. 1-6

[23] Zheng Z, Yang Y, Niu X, Dai H, Zhou Y. Wide and deep convolutional neural networks for electricity-theft detection to secure smart grids. IEEE Transactions on Industrial Informatics. 2018;14(4):1606-1615. DOI: 10.1109/TII.2017.2785963

[24] Buildings Department. Code of Practice for Mandatory Building Inspection Scheme and Mandatory Window Inspection Scheme. Hong Kong: Hong Kong Government; 2012. Available from: https://www.bd.gov.hk/doc/en/resources/codes-and-references/code-and-design-manuals/CoP_MBIS_MWISe.pdf and https://www.bd.gov.hk/en/safety-inspection/mbis/index.html

[25] Dohm JC. Building Inspection Fee Analysis. Theses Digitization Project. San Bernardino, California, United States: California State University; 2007. Available from: https://scholarworks.lib.csusb.edu/etd-project/3249

[26] Mohler BJ, Thompson WB, Creem-Regehr SH, Pick HL, Warren WH. Visual flow influences gait transition speed and preferred walking speed. Experimental Brain Research. 2008;181(2):221-228. DOI: 10.1007/s00221-007-0917-0

[27] Apple Inc. Final Cut Pro User Guide. Cupertino, CA, United States: Apple Inc.; 2021. Available from: https://support.apple.com/en-hk/guide/final-cut-pro/ver917522c9/mac

[28] Ma D, Fang H, Wang N, Xue B, Dong J, Wang F. A real-time crack detection algorithm for pavement based on CNN with multiple feature layers. Road Materials and Pavement Design. 2021:1-17. DOI: 10.1080/14680629.2021.1925578

[29] Chang S, Siu MFF, Li H, Luo X. Evolution pathways of robotic technologies and applications in construction. Advanced Engineering Informatics. 2022;51:101529

[30] Seo J, Duque L, Wacker J. Drone-enabled bridge inspection methodology and application. Automation in Construction. 2018;94:112-126. DOI: 10.1016/j.autcon.2018.06.006

[31] Humpe A. Bridge inspection with an off-the-shelf 360° camera drone. Drones. 2020;4(4):67. DOI: 10.3390/drones4040067

[32] Liu Y, Nie X, Fan J, Liu X. Image-based crack assessment of bridge piers using unmanned aerial vehicles and three-dimensional scene reconstruction. Computer-Aided Civil and Infrastructure Engineering. 2020;35(5):511-529. DOI: 10.1111/mice.12501

[33] La H, Gucunski N, Dana K, Kee S. Development of an autonomous bridge deck inspection robotic system. Journal of Field Robotics. 2017;34(8):1489-1504. DOI: 10.1002/rob.21725

[34] Montero R, Menendez E, Victores JG, Balaguer C. Intelligent robotic system for autonomous crack detection and characterization in concrete tunnels. In: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC); 26-28 April 2017; Portugal. New York: IEEE; 2017. pp. 316-321
Chapter

Development and Usage of


Electronic Teaching Technologies
for the Economic Training of
Students in a Technical University
ValeryiSemenov

Abstract

In this chapter, the experience of the Department of Economic Theory in the


development and use of electronic technologies in teaching economic theory for
students of technical directions is described. The necessity of electronic testing in
the context of the concept of practice-oriented teaching has been substantiated. The
stages of development and structure of electronic testing are presented. The process
of forming the base of test tasks is described. The structure of the software is stated.
The experience of approbation and application of testing technology is presented.
The influence of electronic testing technology on teaching methods is shown. The
issues of electronic support of business games are considered. Electronic technologies
are considered as a necessary and essential element in the organization and imple-
mentation of business games developed at the department. An assessment of the
impact of electronic testing and electronic support of business games on the quality
of the educational process is given.

Keywords: economic theory, students of technical specialties, practice-oriented teaching,


business games, electronic support, electronic testing technology, quality control of the
educational process

1. Introduction

The development of the system of economic education for students in technical


areas involves the following in particular: taking into account the requirements of
employers, compliance of domestic standards with foreign ones, creativity in teach-
ing, and the use of electronic learning technologies.
The subject "Economic theory" is included in the block of humanitarian and socioeconomic disciplines that provide students of technical streams with the necessary competencies. There are two main problems in the teaching of economic theory in a technical university:


• Uninterested perception of the subject as "not basic" and not relevant to the disciplines of the specialty

• A small number of classroom hours

To solve these problems, the concept of practice-oriented teaching has been


implemented at the Department of Economic Theory in recent years [].
A necessary element in the implementation of the concept of practice-oriented
learning is the use of electronic learning technologies.

2. The structure of a practice-oriented approach to teaching economic theory

The approach consists in the obligatory consideration of concrete, real data in lectures and seminars on all the topics under study, alongside questions of a theoretical nature. At the same time, the main methodological principle is the maximum possible use of examples corresponding to the students' streams of training.
The practice-oriented approach in teaching the subject "Economic theory" is implemented in the following directions:

1. Development of presentations of lectures containing real data. This may include, but is not limited to, practical applications addressing the following topics:

• Automotive market

• Oil and gas market

• Information engineering and technology market

The choice of the automotive market as a topic for practical application is justified by the following circumstances: its widespread use, the abundance of information, the ability to analyze historically the evolution of the market and the structure of competition, the tracking of the effects of mass production, and the availability of data on prices, production volumes, and technologies. In addition, this topic correlates with the use of the business game "Formation of the automotive market" in practical classes.
The subject of the oil and gas market is related to the peculiarities of the Russian economy and is characterized by the availability of various data for analysis, making it possible to examine the activities of monopolies and oligopolies.
The choice of the information technology market as a topic for practical application is due to its relevance and to the widespread use and development of digital technologies. This topic corresponds to the areas of training of one of the faculties of the university and correlates with the business game "Digital Economy" developed within the department.

2. Change of the methodology for conducting practical classes.

As far as the change in teaching methodology is concerned, the technology of preparing and presenting reports by students was introduced first. The structure of the report should contain a brief summary of the theoretical content of the key aspect of the topic (– min), as well as statistical and other data on the topic (– min). The recommended subjects of reports on all topics coincide with the directions chosen by the department.
In addition to these topics, students are offered other areas in accordance with their specialty.

3. Development and application of business games and specific situations for analysis in the educational process.

The department has developed a number of business games and specific situations
for analysis [].
Thus, when studying the topic of supply and demand, the business game "Demand and Supply in the Automotive Market" is used. In a playful way, students analyze the formation of the automotive market in the United States, supply and demand factors, market structure, and the strategies of competing firms.
When reviewing the topic of the production factors market, the business game "Real estate: rent or buy" is used. The game deals with supply and demand in the labor market for engineers, wage dynamics, and housing market data. Students analyze the possibility of buying or renting an apartment depending on the level and dynamics of their future income, other life criteria, and real estate market factors.
The study of the topic "Cost theory of the firm" is also conducted using a business game. Imitating small businesses, students are divided into subgroups and create small enterprises in the field of catering. In doing so, they analyze the main costs, their structure, and dynamics.
When studying the topics "Fiscal policy of the state" and "Monetary policy of the state," the business game "State regulation of the automotive market" is used. The actions of the "AvtoVAZ Bank," the state policy of supporting the car industry, and AvtoVAZ's efforts to attract investments and implement its production program are considered.
When studying the topic "Economic functions of the state," the business game "Digital Economy" is used. During the business game, four teams of students interact, representing the positions of the state, business, consumers, and experts, and assessing the socioeconomic consequences of implementing programs for the development of the digital economy.

. Araction of students of technical faculties for participation in conferences with


reports on economic theory of practical orientation.

In the context of the practical orientation of teaching, students of technical faculties are involved in participating in conferences. They present reports on economic topics drawing on specific materials. Some of the students present their own reports, while others work in collaboration with the teachers of the department.
The experience of students participating in conferences held at the university showed that students readily present reports on specific topics.
It is worth noting at this point that at the Russian scientific and practical conference of students and postgraduate students "Modern problems of management" (),  reports by students were presented by the department (which accounted for more than  of all the reports presented by the four departments of the Faculty of Economics and Management).
In particular, students presented reports on the following topics:

• Demand and supply in the Russian electronics market

• Competition in the electronics market in Russia

• Domestic market of laser equipment and laser technology

• On the state of the instrument making and measuring equipment market

• Russian market of medical technologies and medical equipment

• On the development of the market for ultrasonic non-destructive testing

• Effect of scale in the development of the automotive industry

• On factors affecting the oil market

• Demand and supply in the world oil market

• Structure of the information technology market in Russia

• Change in supply and demand in the information technology market

5. Development of technologies for electronic testing and support of business games.

Electronic teaching technologies are considered in the context of the practice-oriented approach. The technology of electronic testing saves classroom time, increasing the possibilities for meaningful study, and improves the quality of testing in assessing students' mastery of the main theoretical material. The electronic support of business games, in turn, is an essential element of their conduct.

6. Development of online courses and use of other forms of distance learning.

3. Structure of the score-rating system for assessing students’ knowledge

At the Department of Economic Theory, a point-rating system for assessing


students’ knowledge has been used for a long time [].
At present, this system has the following structure:

Basic controls.

1. Test papers:

• Two for “Microeconomics” and two for “Macroeconomics.”

• The maximum number of points for each control work is .



• Final examination: the maximum number of points is .

• The relevant control works are carried out in the form of test tasks and include
tests and tasks.

2. Report

• This is rated from  to  points.

• Each student has the right to make only one report during the semester.

• Preparation of one report by a group of two students is allowed.

3. Variably assessed forms of educational work.

• Solving problems and test tasks at the seminar. Points allocated range from 
to  points.

• Speeches: evaluated from  to  points.

• Answers to questions for discussion and control questions, participation in the discussion of the report, and participation in the debate: evaluated for each type of activity at between  and  points.

The approximate structure of time spent in a practical lesson (one topic is considered for 40–45 min):

• report—no more than  min.

• discussion of the report:  min.

• speeches on the topic of the report or the topic of the seminar:  min.

• discussion of the issues of the topic and answers to control questions:  min.

• problem solving:  min.

• Total:  min.

The final rating is formed as follows: the mark "satisfactory" is set if the student scores – points, "good" is awarded between  and  points, and "excellent" from  points upwards.
If a student does not score  points, he/she receives an unsatisfactory grade and must pass the exam by performing test tasks.

4. Stages of development and structure of the testing technology

1. Urgency of development

The urgency of development is related, first of all, to the reduction of class hours. The working program on economic theory provides for the study of  topics, and  h are allocated to lectures and seminars. Practically, for each topic, both the lecture and the practical lesson have one academic hour ( min). Such a structure of classes requires a significant change and improvement in the methods of lecturing and conducting practical classes.
Before the introduction of the electronic testing system, knowledge control was carried out only during practical classes, using paper-based test tasks.
The following shortcomings of this system were revealed, and it was primarily at eliminating them that the development of the system of electronic tests was aimed:

• excessive time expenditure,

• a limited number of variants,

• cribbing,

• replication of the right answers,

• difficulty in controlling the independent work of students due to lack of time in practical classes.

2. Formation of the base of test tasks.

As the experience of other developers of similar methods and our own estimates have shown, in order to avoid repetition of questions, it was necessary to form at least  test tasks for each topic. Taking into account that all the topics of the course were subject to testing, the total number of generated test tasks was more than . The author used previously proven test tasks, newly developed ones, and tasks taken from other sources, in particular from the websites attnica.ru, i-exam.ru, and fepo.ru; this was time-consuming and laborious work. The base of test tasks was formed by a group of three people.

. Soware.

To develop computer tests and questions, the database management system Question Mark Perception was used, which allows test sessions to be stored and reproduced, as well as generating various forms of reports for data analysis.
The database management system Question Mark Perception supports different types of questions:

• open-ended questions, in which the test-taker types the text of the answer;

• single-choice questions: the subject must choose one correct answer from several;

• multiple-choice questions: the subject must choose at least one correct answer from those proposed;

• Likert-scale questions: the subject chooses one of several values on the scale;

• fill-in-the-blank questions: the subject must enter the missing word in a paragraph of text;

• selection of words from a drop-down list: the subject must choose the correct answer from the drop-down list;

• questions about the details of a text;

• number-entry questions: the answer is entered in numeric form;

• object-transfer questions: the subject must reposition a set of markers on an image;

• graphic-choice questions: the subject must reposition a marker on an image;

• matrix questions: a table in each row of which the subject must choose one answer (column);

• ordering questions: the subject must put the options in the correct order.

When setting rules for checking answers, many options are also possible, for example, correct answer, wrong answer, partially correct answer, etc. For each individual answer option, one needs to create a separate rule. It is also possible to set up feedback, for example, a message with comments on the answer.
The evaluation of each question is also configured by the author, and the points awarded for the answer depend on the specified conditions.
Tests can be taken on any device that has access to the Internet. This allows testing not only in the classroom but also assigning homework in the form of independent work.
For testing within the disciplines of the Department of Economic Theory, single-choice questions, number-entry questions, and word selection from a drop-down list were mainly used. Evaluation is carried out according to the system wherein a correct answer to a simple question earns one point and a correct answer to a problem earns two to four points (depending on the level of complexity).

4. Approbation of the technology.

The technology of electronic testing was proved during practical classes in eco-
nomic theory at the Faculty of Economics and Management, the Humanities Faculty,
and a number of groups of other technical faculties.
Testing was carried out for each topic of the course of economic theory in com-
puter labs during the practical classes immediately after passing the corresponding
topic. For a number of groups, testing on the same topics was conducted on an
extracurricular basis (online).
The results of testing were analyzed on the parameters noted above, and the answers
of students in computer classes and at home were compared. The analysis revealed
slight repetition of the questions and some unevenness in the complexity of some
tests, and a number of test tasks were adjusted accordingly.

It turned out that the percentage of failure to complete tests in extracurricular time
for various reasons (including technical ones) was about . This made it possible
to compile a realistic timetable for retesting during the examination week and to
estimate the scope of retesting with a larger number of students.
The total number of students tested in each academic year was more than a thou-
sand people. On average, about – of students tested outside the classroom had
problems during testing for technical reasons. They were promptly retested.
At present, the system is fully debugged.

. Application of testing technology and changing teaching methods.

The technology of electronic testing for students of technical faculties has been
introduced starting from the / academic year. In this regard, the system of monitoring
was changed. Four tests are carried out in the test form (two under the section
“Microeconomics” and two under the section “Macroeconomics”). The test works
included test tasks and exercises on all topics of the course. For each section, one
control work was done in a computer lab (or in the classroom, in written form), and
the other in extracurricular time (online). A test schedule was prepared, students
were given logins and passwords, and consultations were conducted on the testing
technology. As previously mentioned, the total number of students tested in each
academic year was more than a thousand people. On average, about – of students
tested in extracurricular time had problems in testing for technical reasons; retesting
was promptly arranged for them. Currently, the system is completely debugged.

5. Electronic support of business games

A business game is conducted in the form of an analysis of a specific situation
using the role method. The business game is directly related to the lecture material
and the topic of the practical class.
The goal of the business game is to help students assimilate the learning material, to
teach the skills of applying knowledge of economic theory to the analysis of a real
situation, and to develop the ability to work in small groups.
The business game is based on the careful study of a specific situation by students,
searching for and analyzing additional materials, organizing individual and group
work, preparing a report in accordance with questions and tasks for the business
game, presenting the report, and discussing it at the seminar.
Electronic technologies are an essential component in the preparation, organiza-
tion, and implementation of business games developed at the department.
This is determined by the following specific circumstances:

• The need to present materials in the form of presentations and texts,

• The training schedule, according to which the practical classes are held within a
week, so that high efficiency is required in the preparation for the conduct of a
business game.

The time for conducting a business game is one to two academic hours; therefore,
strict requirements are imposed on the representativeness, volume, and quality of the
information provided.

Introductory instructive explanations by the lecturer on the organization of the
business game are accompanied by the issuance of a task, a scenario, an algorithm for
conducting the game, and the necessary materials in electronic form. A schedule is
established for students to present their reports to the lecturer. In the process of
preparing for a business game, consultations take place and necessary adjustments
are made to the submitted materials. During the business game itself, presentations
of the final report and other materials are given.

6. Development of online courses and the use of distance technologies

The departments are focused on the development of online courses and other
products using the university’s technological and organizational resources.
The St. Petersburg Electrotechnical University “LETI” has developed a strategy in
the field of e-learning and distance learning technologies.
The strategy is a formalized set of approaches consistent with the develop-
ment priorities of the university, on the basis of which an action plan is imple-
mented to saturate the educational process with information and communication
technologies [].
The technical infrastructure comprises:

• computer network, including wireless access equipment;

• computer equipment, telecommunication and communication devices, presentation
and video equipment;

• mobile devices for access to digital resources;

• systems for monitoring and managing access to resources, alarm, and video
surveillance systems.

The information infrastructure consists of the following systems:

• intrauniversity SPOC platform on open edX;

• automated e-learning system (LMS);

• interactive media library;

• system for online conferences;

• electronic library platforms;

• the “Electronic Dean’s Office” system for the implementation of educational
programs of all forms of education.

The information infrastructure is implemented in the form of digital resources and
services of the corporate information environment. A single attribute for accessing
university resources is the personal ID of students and staff.

The Center for New Educational Technologies and Distance Learning and the
Department of Educational Programs are responsible for the development of the
online education system at LETI [].
During the implementation, the following organizational technology was used:

• At the beginning of the semester, consultative meetings were held with all the
students (in groups and on streams) on the organization of the educational
process. Students were provided with information about the procedure for
registering for online courses, the training schedule, types of classes, reporting,
and the assessment system.

• During the semester, several times a week, mailings were sent with information
about the opening of new course materials and deadlines for completing tests and
handing in practical assignments.

• On a weekly basis, the Dean’s offices were provided with detailed information
on students’ progress through the online course. At the end of the semester,
all the students passed the final certification for their courses in the format of
offline proctoring.

The following models of embedding distance learning technologies are assumed []:

• full distance learning;

• express distance learning;

• full or partial reduction of lecture classes;

• partial replacement of classroom hours with in-depth study of the material;

• flipped learning;

• automated issuance of individual homework assignments and carrying out of
control activities in the form of computer tests;

• controlled independent work on discipline.

Newly developed online courses must undergo expert review. The university
has a system of peer review that includes the following two stages:

1. Preliminary technical expertise: it checks the availability and performance of
the online course components.

2. Comprehensive expertise: the subjects of a comprehensive examination are:

• assessment of the compliance of the course structure and its content with the
goals and objectives in the development of the academic discipline for which
the online course is being created;

• assessment of the presentation of text and presentation materials, and various
audiovisual aspects of the online course;

• evaluation of the control materials used in the online course.

Before creating the online course “Economics,” the following main issues were
analyzed:

• the educational objectives of the course,

• the audience of the course,

• technical capabilities.

Depending on the objectives of the course, the audience, and the technical capabilities,
the format was chosen and the components of the course were determined, i.e.:

• who will be involved in creating the course,

• what the length of the course and its volume will be,

• whether the course provides additional classroom lessons,

• whether a situational assessment with instant feedback is needed.

As a result, it was decided that the course would be recorded by an employee of the
department who would work together with the author and producer of the course. In
the case described herein, the producer was also a member of the department.
The online course is seen as an addition to the classroom.
The course contains  topics and corresponds to the program of the discipline
“Economic theory.” The course structure includes video content and testing based on
the results of mastering. The online course developed at the department is an integral
part of the educational process and can be used as an independent material at the
same time.
It should be noted that the development of the course required a significant
amount of time. In addition, there was a need to master new competencies in content
design and lecture recording.

7. Conclusion and further work

Electronic teaching technologies are considered in the context of a practice-oriented
approach.
The score-rating system used for assessing students’ knowledge has become even
more effective. Furthermore, the developed technology of electronic testing has made
it possible to substantially improve the methodology of conducting practical classes.
The quality of testing has significantly improved. More time is now available for
discussing the issues of each topic, presenting reports, and solving problems. Control
over students’ independent work has also improved.
Experience has shown that electronic support of business games is an indispensable
element of their conduct. The developed technology of electronic support has made it
possible to considerably increase the effectiveness of the business games conducted.

As a result of the introduction of electronic testing and business game support
technologies, students’ perception of the subject has been noticeably enhanced, and
the quality and effectiveness of training have advanced.
A short online course developed at the Department of Economic Theory is
included in the system of the educational process and is its important component.
According to the results of a sociological survey [, ], the main advantages of
online learning as named by students were:

• the ability to independently organize the process of mastering the academic
discipline;

• the rational use of time in the development of the course;

• the use of video materials in the educational process;

• the possibility of returning to lecture materials and restudying these.

The most important aspects of online learning for students were: the clarity and
consistency of the presentation of the educational material, the usefulness of the
course for the specialty, and the pleasure of attending the course.
Main directions of further work may include but not be limited to the following
areas:

• inclusion in the online course of business games and case studies;

• creation of a number of webinars and forums on the problems of economic
theory;

• organization of project work in groups using remote technologies;

• creating a massive open online course (MOOC) in economics for engineering students.

Author details

ValeryiSemenov
Department of the Economic Theory, Saint Petersburg Electrotechnical University
“LETI”, SaintPetersburg, Russia

*Address all correspondence to: vps@mail.ru

©  The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/.),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.

References

[] Semenov VP. Practical-oriented approach in the teaching of economic theory. Sovremennoe obrazovanie: soderzhanie, tekhnologii, kachestvo. Sbornik materialov XXI Mezhdunarodnoy nauchno-prakticheskoy konferentsii [Modern education: content, technology, quality. Information package of the XXI International Scientific and Practical Conference]. Vol. . Saint Petersburg Electrotechnical University; . pp. -. (In Russian)

[] Semenov VP. Experience of business games in the teaching of economic theory for students of technical streams. Sovremennoe obrazovanie: soderzhanie, tekhnologii, kachestvo. Sbornik materialov XXI Mezhdunarodnoy nauchno-prakticheskoy konferentsii [Modern education: content, technology, quality. Information package of the XXI International Scientific and Practical Conference]. Vol. . Saint Petersburg Electrotechnical University; . pp. -. (In Russian)

[] Semenov VP, Nikishin VM, Baranova LY. Experience in the development and use of technologies for teaching economic theory for students of technical areas. Modern education: content, technology, quality. Collection of materials of the XXI international scientific-practical conference. Vol. . St. Petersburg Electrotechnical University; . pp. -. (In Russian)

[] Strategy for the development of e-learning and distance learning technologies in St. Petersburg Electrotechnical University “LETI” for -. Available from: https://etu.ru/assets/files/university/normativnye-dokumenty/strategiya-razvitiya-eoidot.pdf [Accessed: --]

[] Detailed typology of electronic educational resources. . Available from: https://etu.ru/ru/on-line-learning/digest-e-learning/such-different-online-courses

[] Regulations on the organization of the educational process at St. Petersburg Electrotechnical University “LETI” using distance learning and distance technologies. . Available from: https://etu.ru/assets/files/ru/universitet/normativnye-dokumenty/Prikaz OD/-poryadok-primeneniya-eo-i-dot.pdf [Accessed: --]

[] Kustov TV, Timofeev AV, Room PV. Experience of using online courses in the implementation of the main educational programs of the Electrotechnical University. In: Ram VN, editor. Distance Educational Technologies. Proceedings of the IV All-Russian Scientific and Practical Conference (with the participation of RIAC), dedicated to the th anniversary of the GPA. St. Petersburg: GPA; . pp. -. (In Russian)

[] Strogetskaya EV, Pashkovsky EA, Kazarinova NV, Betiger IB, Timofeev AV. The experience of teaching students in the new digital paradigm of education. Discourse. ;5():-. (In Russian)

Chapter

Exploring the Effects of Learning Capability and Innovation on
Quality Management-Organizational Performance Relationship
Mohsen Modarres

Abstract

Management scholars should further study the scientific area concerning the
contingent effects of learning capability and organizational innovations on the
relationship between quality management and organizational performance. This chapter
capability and innovations on organizational performance. Indeed, it may be argued
that within quality management theory and methodology, the need to consider the
contingency approach may result in an in-depth understanding of how the intersec-
tion of constituent elements associated with quality management influences organi-
zational performance. Results revealed that the interaction of quality management
and learning capability explained higher variance in organizational performance than
the direct effect of quality management on performance. Similarly, interactions
between quality management and innovations explained more significant variance in
organizational performance than the direct effect of quality management on perfor-
mance. Outcomes showed that quality management might not directly impact orga-
nizational performance. Findings underscore the importance of interactive effects of
innovation and organizational learning capability with quality management in
explaining the relationship between quality management and organizational
performance.

Keywords: strategy, integrated quality management, contingency theory, innovation,
learning capability

1. Introduction

Organizations competing in dynamic industries are required to be cognizant of
challenges and complexity in maintaining a balance between initiating changes
through innovations and maintaining stability in their existing processes.
Unpredictability within dynamic competitive markets creates a paradox between
replicating stable processes or re-allocating resources toward innovation [1]. Hence,
organizational success tends to be contingent on organizational commitment and
capability to continuously explore a new way of doing things and exploit existing
competencies [2]. Markets in dynamic industries tend to exert more significant pres-
sure on competing firms to sense and respond to cues in their environment by creating
flexible and adaptable core capabilities. The recent trends toward the adoption and
implementation of total quality management have been indicative of competitive
challenges in dynamic industries. As competitive advantage tends to erode at an
accelerated pace [1], organizations that are responsive to intra-organizational cues and
shifts in elements within the immediate organizational environment may have a better
chance of success and prosperity [3, 4]. A healthy competitive position in the mar-
ketplace requires managers to coordinate among various internal processes, such as
continuous improvements, innovations, and efficiency, through enhanced organiza-
tional learning capability. Moreover, internal coordination among process improve-
ment, innovation, and organizational learning may lead to equilibria between
continuous changes in various constituents in quality management and maintaining
stability in existing processes. Integrated total quality management strategies enable
managers to explore and implement a novel way of doing things and maintain stable
and standard processes by repetition and duplication of high-performing processes. A
number of researchers have posited that the performance outcome of quality man-
agement strategies tends to be contingent on the managerial capability to coordinate
among timely innovations, investment in human capital, enhanced learning capabil-
ity, and knowledge collaboration among organizational members and subunits [5–8].
Moreover, integrated quality management enables organizations to exploit the
existing core capabilities and channel organizational knowledge into individual and
team cognitive energy to gain competitive advantage and enhance organizational
performance (e.g., [8, 9]) and organizational excellence [10]. In addition, integrated
quality management provides a window of opportunity for managers to detect and
adapt to external environment contingencies in a timely fashion [11]. The causal
linkage between desired performance outcomes [12, 13] and integrated quality
management strategies and practices at the operational level remains inconsistent
[14, 15]. Past studies have shown inconsistent results in the relationship
between performance and integrated quality management. For example, research by
Powell [16] and Westphal et al. [17] revealed no statistical significance between
performance and total quality management. In contrast, few researchers have
reported a direct and positive association [18, 19] or a mediated relationship between
organizational performance and quality management. Previous researchers have
parsed and identified various components of integrated quality management and
investigated each component’s relationship with performance.
In this body of work, financial measures of organizational success [8], human
resource capability [20], and research and development [9] were explored as
firm-specific capabilities. Furthermore, integrated total quality management draws upon firm-
specific resources and capabilities and coordinates a strategic balance between
exploring new ideas and exploiting existing firm-specific capabilities [9, 21]. Such
capabilities developed within integrated quality management tend to be non-imitable
and sources of competitive advantage and higher performance [22, 23]. The causal
ambiguity in the relationship between quality management and performance led to
failures in the implementation of quality management [16]. Furthermore, causal
ambiguity in the quality management-performance relationship has refocused
research studies on the interrelationship between constituent elements of quality

management and organizational performance. For instance, research by Modarres
and Pezeshk indicated that the relationship between total quality management
and organizational performance is mediated by organizational learning and
innovation performance. Similarly, Huang et al. [6] argued that innovation
performance under the quality management method is mediated by individual
interactions and by the degree of team learning that may result from team member
interactions.
Another body of research centered on the interrelationship between investment
in human capital and success in the implementation outcome of quality manage-
ment [7]. Other researchers have discussed that the quality management-performance
relationship tends to be contingent on creating a culture of dyadic trust among
organizational members and promoting knowledge sharing among the organizational
members [6]. Both dyadic trust and knowledge sharing create an internal organiza-
tional environment that generates enhanced cognitive learning. Furthermore, knowl-
edge sharing allows accumulated knowledge by members of the organization to
become the basis for diverse ideas and explorations of novel routines. Within this
body of research, the relationship between quality management and performance
tends to be contingent on a culture of employee empowerment within organizations
[24]. Such a culture promotes an environment of learning and interaction, mutual
trust, and information sharing among organizational members that may lead to the
introduction of new products and services and the implementation of new codes in
the organization.
Parsing quality management into its constituent parts and examining their synthetic
roles within quality management have partially contributed to our understanding of
the performance-quality management relationship. However, previous researchers
have provided little information about the interactive effects of quality management
with organizational learning and innovations in explaining performance variations
within corporations. This chapter draws on contingency theory, an approach
neglected in recent quality management studies, to examine the interactions between
quality management and two important variables, organizational learning and
innovation, in explaining variations in organizational performance.
The proposed model (Figure 1) and hypotheses tested both direct and interaction
effects between quality management, organizational learning, and innovation on var-
ious organizational performance levels. In contrast to parsing the constituent parts
and their synthetic roles within quality management, the present research proposes
that the interactions between quality management and learning capability and inno-
vation tend to positively impact organizational performance. The present research
views quality management as an integrated, gestalt, and adaptive method capable of
continuously learning [25] and innovating novel routines and new core competencies.
Furthermore, present research argues that integrated quality management allows for
incremental modifications and radical reengineering of existing operations and
enables managers to be flexible and enable the transformation and enhancement of
internal capabilities.

2. Interaction effects of quality management with learning capability

Integrated quality management practices promote cross-functional communication
and frequent exchanges of complex information among individuals and teams.

Figure 1.
Macro model.

Interaction between quality management and learning capability across subunits is
likely to result in a novel way of doing things. Such knowledge creation commits top
executives to allocate resources to employees’ education, expression of new ideas, and
team learning.
Furthermore, the managerial challenge in establishing a stable and reliable process
tends to be contingent on creating an organizational culture. Such culture focuses on
creating new knowledge and continuous organizational learning and the existing
experience curve accumulated through information flow across subunits [6]. Such a
seamless flow of information across subunits allows organizational members and
managers to explore novel routines and exploit existing knowledge. Integrated quality
management enables top managers to invest in continuous education and learning
through employees’ interactions. Over time, the accumulated education and learning
become the basis for organizational learning capability [25, 26] and the flexibility to
explore new routines and continuous process improvement [27]. According to Jerez-
Gomez et al. [28], the interactions between top management commitment to
employees’ education and employee involvement in strategic directions of the orga-
nization enhance learning as one of the organization’s core competencies. Moreover,
higher levels of learning and education tend to lead to better implementation of
quality management, greater innovation, higher quality of products and services, and
higher organizational performance [26].
Moreover, high levels of learning capability within quality management enhance
organizational awareness and ability to absorb new knowledge and transform the col-
lective organizational know-how into new products and competitive advantage [9]. In
contrast, low adaptive learning and low organizational performance tend to be attrib-
uted to parochial organizational practices and the inability to absorb new knowledge
[29]. Similarly, the interaction between quality management and organizational inno-
vations is likely to allow exploration for the opportunity to develop new products and
services. Innovation tends to be among the success factors that contribute to high


corporate performance [9, 22]. Previous researchers have argued that a positive associ-
ation between innovation and organizational performance tends to be contingent on the
flexible structural design that facilitates subunits innovations and interconnectedness,
decentralized decision-making, and accumulated organizational learning [13, 30–32].
According to Singh and Smith [33], quality management practices promote an organic
environment within organizations that is conducive to innovation and high levels of
learning. Such organic structural design promotes employee interactions and
cross-functional links. Furthermore, the organic structural design creates greater
flexibility [34] that facilitates the speed and extent of innovations and timely
adaptation to changes in the firm’s industry environment.
Moreover, quality management practices that promote the timely introduction of
products and services to the marketplace can lead to competitive advantage and
high organizational performance [8]. Similarly, entrepreneurial mindset within
organizations tends to be a key factor in technological and product innovations.
Furthermore, entrepreneurial mindset enables managers to respond to environmental
changes by reallocating valued resources within the organization toward new
products and services and enhancing corporate performance [22, 30, 35, 36].
Finally, quality management creates a culture of collaborations and exchanges of
new ideas as employees interact within each function and cross-functionally.
Researchers must identify the interrelationship among quality management, learning
capability, and innovations to realize a deeper understanding of how employee
interaction may lead to higher organizational learning capability and innovations.
Furthermore, research studies should explore the interactive effects of quality
management, learning capability, and innovation on organizational performance.
Given the above, this study hypothesizes the main and interaction effects between
integrated quality management, organizational learning, and innovations in the
following manner:
H1: There will be a positive and significant relationship between quality
management and organizational performance.
H1a: There will be a positive relationship between quality management and
organizational learning.
H1b: There will be a positive relationship between quality management and
innovation.
H2: There will be a positive relationship between organizational learning and
organizational performance.
H3: There will be a positive relationship between innovation and organizational
performance.
H4: The interactions between quality management and organizational learning
positively influence the relationship between quality management and
organizational performance.
H5: The interactions between quality management and innovation positively
influence the relationship between quality management and organizational
performance.
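As a concrete illustration of how H4 and H5 could be tested, product terms can be added to a regression of performance on the composite scores, comparing the explained variance with and without the interaction term. The sketch below uses Python with statsmodels; the column names (QM, OL, INP, PERF) are hypothetical placeholders for the study's composite scales, and mean-centering the predictors before forming products is an assumed (though common) preprocessing step.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per respondent with composite scale scores:
# QM (quality management), OL (organizational learning), INP (innovation),
# and PERF (organizational performance).
df = pd.read_csv("survey_scores.csv")

for col in ["QM", "OL", "INP"]:
    df[col] = df[col] - df[col].mean()   # mean-center before forming products

direct = smf.ols("PERF ~ QM", data=df).fit()
h4 = smf.ols("PERF ~ QM + OL + QM:OL", data=df).fit()     # QM x learning interaction
h5 = smf.ols("PERF ~ QM + INP + QM:INP", data=df).fit()   # QM x innovation interaction

# H4/H5 receive support if the product term is significant and the interaction
# models explain more variance than the direct-effect model.
print(direct.rsquared, h4.rsquared, h5.rsquared)
print(h4.pvalues["QM:OL"], h5.pvalues["QM:INP"])
```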

3. Methodology

3.1 Sample and data

Data. The data used in this study were collected by the survey method. The survey
was carried out during the year 2015 and provided information on Iran’s food business
environment, quality management, organizational learning, innovation performance,


and organizational performance. Top executives and senior managers represent the
most appropriate sources of information for this study. The population of top execu-
tives and managers was determined to be 400. A questionnaire and cover letter were
mailed to the managing director or chief executive officer of each company from the
Food Industry in Iran. A total of 37% of the 400 mailed surveys were completed and
returned, yielding a sample of 148; all 148 completed surveys were used in this
investigation. Given the population of N = 400, the Cochran sample size formula
indicated that a sample of n = 148 allows the study to draw correct inferences about
the population.
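For reference, Cochran's formula with the finite-population correction is shown below. The chapter does not report the margin of error, so the value used here (e ≈ 0.064, together with the conventional z = 1.96 and p = q = 0.5) is an assumption chosen so that the formula reproduces n ≈ 148 for N = 400.

```latex
n_0 = \frac{z^2 \, p \, q}{e^2}
    = \frac{1.96^2 \times 0.5 \times 0.5}{0.064^2} \approx 234.5,
\qquad
n = \frac{n_0}{1 + \dfrac{n_0 - 1}{N}}
  = \frac{234.5}{1 + \dfrac{233.5}{400}} \approx 148.
```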

3.2 Measurement of variables

A survey method was used for all the variables in the present study. Respondents
were asked to indicate their levels of agreement with descriptive statements using a 5-
point Likert scale (range, 1 = strongly disagree to 5 = strongly agree).
Quality management. To measure the effectiveness of integrated quality management,
following the study by Vanichchinchai and Igel [37] and Coyle-Shapiro [38],
the present research employed the following six variables:

• top management support

• employee involvement

• continuous improvement

• customer focus

• education and training

• supply management

Organizational learning capability. Based on the study by Jerez-Gomez et al. [28],
learning capability was operationalized as top executive commitment, system
perspective, organizational experimentation, and knowledge transfer initiatives.
Organizational Innovations. Exploring new ways of doing things in the organization
requires managerial decisions on innovations and the reallocation of valued resources
toward new processes, products, and services [9, 35]. Following the study by Prajogo
et al., innovation performance was operationalized as product/service innovation,
process innovation, and overall organizational innovation.
Organizational Performance. Organizational performance can be defined as the
desired outcome within organizations. Performance is multi-dimensional and may be
measured as such. Following the study by Santos and Brito [39], the present research
operationalized performance as employee satisfaction, response to environmental
changes, sustainability, customer satisfaction, and projected revenue from new
products/services.

3.3 Procedures and design

Congruent with the previous research in contingency theory [8], the present
research considers quality management as an integrated organizational strategy. As
such, the study used structural equation modeling to explore the independent and
interaction effects of integrated quality management, innovations, and organizational
learning on organizational performance. For parsimony, and to reduce the number of
relationships, a hierarchical component model was created. Model I (Table 1,
Figure 1) shows the results of the structural equation modeling analysis of the
higher-order component model, with standardized regression weights showing the
association of integrated quality management with organizational learning capability,
product and service innovations, and organizational performance. The hierarchical
analysis of Model I also shows the relationship between each of the four constructs
in this study and their sub-constructs.
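A minimal sketch of how such a hierarchical structural model could be specified in Python with the semopy package is given below. This is an illustration under assumptions rather than the authors' actual code: the indicator names are hypothetical shorthand for the sub-construct scores, and the data file name is a placeholder.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical indicator names standing in for the sub-construct scores.
desc = """
QM   =~ EDT + TMS + CII + SM + CF + EEN
OL   =~ MC + SP + OEX + KTI
INP  =~ PS + PR + OOI
PERF =~ EMP_SAT + CUST_SAT + SUSTAIN + FIN_EXP

OL   ~ QM
INP  ~ QM
PERF ~ QM + OL + INP
"""

df = pd.read_csv("survey_scores.csv")  # assumed file of respondent scores
model = Model(desc)
model.fit(df)
print(model.inspect())    # parameter estimates, standard errors, p-values
print(calc_stats(model))  # fit indices such as GFI, AGFI, and RMSEA
```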

4. Analysis

4.1 Main constructs, sub-constructs, and variables

Integrated quality management. According to the results shown in Model I
(Table 1), integrated quality management is positively and
significantly associated with continuous education and employee training
(B = 0.90), and continuous long-term employee support after the implementation
of quality management (B = 0.72). Furthermore, continuous improvement
programs (B = 0.61), managing the supplier relationship (B = 0.55), customer
relations and satisfaction (B = 0.45), and employee involvement in the decision-
making processes (B = 0.41) were positively and significantly associated with quality
management.
Organizational learning capability. As shown in Model I (shown in Table 1), the
organizational learning capability construct has a positive and significant relationship
with long-term management commitment to involve employees in decision-making
processes (B = 0.84). Results also indicated that within and inter-subunit knowledge
transfer significantly influenced and enhanced organizational learning capability
(B = 0.83). Furthermore, organizational learning capability is positively and signifi-
cantly associated with the exploration of new ideas, and exploitation of the existing
process to enhance further process improvements (B = 0.66). Results also indicated
that subunits independently set divisional strategies and goals and were responsible
for co-aligning their strategies and goals with the overall organizational strategies, goals, and
mission (B = 0.71).
Innovation. Results (shown in Table 1) revealed that top managers
encouraged and permitted exploration of new products and services (B = 0.92)
and process innovation (B = 0.78). Furthermore, overall organizational innovations
were significantly related to operational cost reductions and revenue generations
(B = 0.79).
Organizational Performance. The results (shown in Table 1) indicated that the
organizational performance assessments instituted with the implementation of
integrated quality management were associated with continuous monitoring of the
competitive dynamics in the marketplace (B = 0.63). Furthermore, results indicated
that the performance construct has a significant relationship with employee work
satisfaction (B = 0.75), employee participation in decision-making processes (B = 0.63),
customer expectations and satisfaction (B = 0.59), projected financial performance
after integrated quality management implementation (B = 0.65), and implementation
of environmental sustainability programs (B = 0.92).

Path | Standardized regression weight | Standardized bias | t-value

Quality management ➔ Organizational learning | 0.95 | 0.08 | 13.41*
Quality management ➔ Innovation performance | 0.91 | 0.08 | 12.41*
Quality management ➔ Organizational performance | 0.43 | 0.08 | 1.13
Quality management ➔ Education and training | 0.90 | 0.08 | 14.20*
Quality management ➔ Total management support | 0.72 | 0.08 | 10.41*
Quality management ➔ Continuous improvement | 0.61 | 0.08 | 8.60*
Quality management ➔ Supply chain management | 0.55 | 0.08 | 7.67*
Quality management ➔ Customer focus | 0.45 | 0.08 | 6.29*
Quality management ➔ Employee involvement | 0.41 | 0.08 | 6.21*
Organizational learning ➔ Management commitment | 0.84 | — | —
Organizational learning ➔ System perspective | 0.71 | 0.08 | 10.34*
Organizational learning ➔ Organizational experiment | 0.66 | 0.08 | 9.31*
Organizational learning ➔ Knowledge transfer | 0.83 | 0.08 | 8.21*
Organizational learning ➔ Organizational performance | 0.58 | 0.08 | 6.89*
Innovation performance ➔ Product/service | 0.92 | — | —
Innovation performance ➔ Process innovation | 0.78 | 0.08 | 12.24*
Innovation performance ➔ Overall organizational innovation | 0.79 | 0.08 | 12.57*
Innovation performance ➔ Organizational performance | 0.62 | 0.08 | 9.17*
Organizational performance ➔ Post-TQM financial expectation | 0.65 | — | —
Organizational performance ➔ Employee participation | 0.63 | — | —
Organizational performance ➔ Customer satisfaction | 0.59 | 0.08 | 7.79*
Organizational performance ➔ Employee satisfaction | 0.75 | 0.08 | 8.57*
Organizational performance ➔ Sustainability | 0.92 | 0.08 | 13.16*

Chi-square = 247.24; df = 114; GFI = 0.92; AGFI = 0.86; RMSEA = 0.08.
*p < .05.

Table 1.
Results of structural equation modeling, Model I.

To check the accuracy of the constructed model and to ensure that the data provide
an accurate and reliable representation of the population under study, the
Kolmogorov-Smirnov (KS) test was performed [40, 41]. Table 2 shows that the data
for all four variables are normally distributed.


 | TQM | OLC | INP | OP
N | 149 | 149 | 149 | 149
Normal parameters(a): Mean | 3.62 | 3.37 | 3.34 | 3.38
Normal parameters(a): Std. deviation | 0.60 | 0.65 | 0.67 | 0.66
Most extreme differences: Absolute | 0.059 | 0.059 | 0.076 | 0.057
Most extreme differences: Positive | 0.059 | 0.051 | 0.075 | 0.039
Most extreme differences: Negative | 0.054 | 0.059 | 0.079 | 0.057
Kolmogorov-Smirnov Z | 0.751 | 0.721 | 0.926 | 0.691
Asymp. sig. (2-tailed) | 0.687 | 0.676 | 0.358 | 0.726

(a) Test distribution is normal.

Table 2.
One-sample Kolmogorov-Smirnov test.
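The normality checks in Table 2 can be reproduced with a one-sample KS test against a normal distribution whose mean and standard deviation are estimated from each scale. The sketch below uses scipy, with assumed column and file names; strictly speaking, estimating the parameters from the sample calls for the Lilliefors correction, so the plain KS p-values are only approximate.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_scores.csv")  # assumed file of composite scores

for col in ["TQM", "OLC", "INP", "OP"]:
    x = df[col].dropna()
    # One-sample KS test against N(mean, sd) estimated from the data.
    z, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{col}: KS Z = {z:.3f}, p = {p:.3f}")  # p > .05 -> no evidence against normality
```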

5. Explanation of latent constructs

In this section of the chapter, complex hierarchical constructs, sub-constructs, and
related subset variables are disentangled and discussed.

5.1 Integrated quality management

Table 3 presents the result of an orthogonal (VARIMAX) rotation of the factor
matrix underlying the quality management items. Based on the six-factor solution
suggested by the eigenvalue pattern (i.e., eigenvalues greater than 1.0), 25 items were
identified, each of which loaded cleanly on only one of the six factors. A cut-off of
0.50 was used for item-scale selection. These factors accounted for over 78% of the
variance in the quality management scale items. Following an inspection of the
factor loadings, the six factors were subsequently labeled:

• total management support

• customer focus

• education and training

• continuous improvement and innovation

• supply chain management

• employee participation

Table 4 shows the Kaiser-Meyer-Olkin measure of sampling adequacy, which
suggested that the sample was factorable. The results reasonably describe each set of
items as indicative of an underlying factor for quality management (KMO = 0.833;
χ2 = 3485; df = 300; sig. 0.000).
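The sequence just described (Bartlett's test of sphericity, the KMO measure, and a six-factor VARIMAX solution with eigenvalues above 1.0 and a 0.50 loading cut-off) can be sketched with the Python factor_analyzer package; the item DataFrame and file name below are assumptions.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

items = pd.read_csv("qm_items.csv")  # assumed file of the 25 item responses

chi2, p = calculate_bartlett_sphericity(items)  # factorability of the correlation matrix
kmo_per_item, kmo_total = calculate_kmo(items)  # sampling adequacy (compare KMO = 0.833)

fa = FactorAnalyzer(n_factors=6, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
clean = loadings.where(loadings.abs() >= 0.50)  # apply the 0.50 cut-off
eigenvalues, _ = fa.get_eigenvalues()           # retain factors with eigenvalue > 1.0
print(clean.round(3))
```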


Derived factors(c)

Quality management item(b) | EEN(b1) | TMS(b2) | SM(b3) | CII(b4) | CF(b5) | EDT(b6)

TMS1 0.219 0.850 0.201 0.077 0.057 0.188

TMS2 0.140 0.794 0.200 0.310 0.245 0.074

TMS3 0.217 0.848 0.194 0.115 0.113 0.176

TMS4 0.251 0.822 0.147 0.194 0.199 0.141

CF5 0.037 0.141 0.196 0.051 0.859 0.136

CF6 0.147 0.271 0.116 0.113 0.821 0.002

CF7 0.187 0.072 0.069 0.099 0.887 0.141

EDT8 0.110 0.413 0.359 0.347 0.297 0.568

EDT9 0.057 0.131 0.222 0.287 0.065 0.791

EDT10 0.176 0.363 0.384 0.404 0.207 0.526

EDT11 0.231 0.334 0.321 0.169 0.165 0.670

CII12 0.109 0.108 0.209 0.828 0.206 0.188

CII13 0.002 0.193 0.156 0.818 0.15 0.158

CII14 0.063 0.207 0.227 0.845 0.096 0.177

SM15 0.021 0.210 0.867 0.144 0.111 0.072

SM16 0.010 0.303 0.793 0.204 0.004 0.146

SM17 0.016 0.031 0.836 0.148 0.159 0.130

SM18 0.084 0.165 0.820 0.151 0.153 0.267

EEN19 0.674 0.103 0.062 0.223 0.003 0.021

EEN20 0.899 0.199 0.035 0.020 0.135 0.009

EEN21 0.908 0.150 0.054 0.033 0.061 0.037

EEN22 0.719 0.015 0.050 0.042 0.051 0.349

EEN23 0.785 0.025 0.126 0.199 0.154 0.018

EEN24 0.780 0.109 0.038 0.093 0.029 0.197

EEN25 0.806 0.276 0.064 0.015 0.099 0.027

Eigenvalue 9.73 3.99 1.84 1.60 1.56 1.09

Variance explained 19.35 14.78 14.09 11.40 10.61 8.67


(a) A VARIMAX orthogonal rotation is performed on the initial factor matrix.
(b) Factors derived from quality management: (b1) EEN = employee involvement; (b2) TMS = total management support; (b3) SM = supply management; (b4) CII = continuous improvements; (b5) CF = customer focus; (b6) EDT = education and training.
(c) Loadings above 0.50 are in boldface.

Table 3.
Factor analysis of quality management scales.(a)


Kaiser-Meyer-Olkin Measure of Sampling Adequacy 0.833


Bartlett’s Test of Sphericity Approx. Chi-Square 3485

DF 300

Sig 0.000

Table 4.
KMO and Bartlett's test of quality management variable.

Results of the second-order confirmatory factor analysis (Table 3) show that the scale
reliability of the quality management dimensions reached statistical significance.
This indicates that the criteria had a significant correlation with the appropriate
dimensions and that the scales had convergent validity [42].
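Scale reliability of this kind is commonly summarized with Cronbach's alpha alongside the confirmatory factor analysis. A brief sketch using the pingouin package follows; the item column names are hypothetical, mirroring Table 3's labels.

```python
import pandas as pd
import pingouin as pg

items = pd.read_csv("qm_items.csv")  # assumed file of item responses

# Alpha for one sub-scale, e.g., the four total-management-support items.
alpha, ci95 = pg.cronbach_alpha(data=items[["TMS1", "TMS2", "TMS3", "TMS4"]])
print(f"TMS: alpha = {alpha:.2f}, 95% CI = {ci95}")
```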
Association of the latent constructs and quality management.
Findings (shown in Table 5) also indicated that integrated quality management is
positively and significantly associated with human resource development through
continuous education and training (B = 0.94). Findings also indicated executives'
commitment to coordinating and supporting continuous improvements after quality
management implementation (B = 0.73) and employee involvement in implementation
decision making (B = 0.33). Furthermore, findings indicated that top managers
encouraged exploring new ideas and innovation (B = 0.70). Results revealed that
managers were cognizant of immediate factors in the organization's industry
environment, managing supplier relationships (B = 0.70) and focusing on customer
relations (B = 0.54).
Analysis of subset variables and their relationship with quality management.
Education and training. Further analysis of the subset variables shows that long-term
quality management training programs (B = 0.96) and employee know-how about
developments and changes in the industry (B = 0.95) were the most important
variables. Work conditions and environment (B = 0.76) and customer relations
(B = 0.67) were also important factors in the implementation of quality management.
Top management support. As shown in Table 5, top managers' strategy after quality
management implementation focused on continued investment in quality management
programs (B = 0.91), co-alignment of quality management strategies with changes
in the industry (B = 0.98), and employee involvement in the implementation
process (B = 0.88).
Continuous improvements. Top managers encouraged employee input both on new
products and on existing product improvements (B = 0.94), as well as research and
development activities focusing on product and service improvements (B = 0.75).
Furthermore, employees were encouraged to participate and suggest work environment
improvements (B = 0.88).
Managing supplier relations. Results revealed that top managers included supplier
relations in their long-term strategic plans (B = 0.86). Such strategic planning was
based on information sharing with the suppliers (B = 0.88) and on assessment of the
supply chain based on the long-term trend in the quality of the services and products
the organization received (B = 0.86).
Customer focus. Top managers' decision-making processes prioritized customer
expectations (B = 0.83), contentment with the quality of the product (B = 0.84), and
the importance of customers to the organization (B = 0.88).
Employee involvement in decision-making processes. According to the results,
employees were encouraged to form improvement circles and teams (B = 0.96) and

Items | First-order standardized loading | t-value | Second-order standardized loading | t-value

Total Quality Management-QM


Education and Training

1. Top managers’ commitment to training employees in 0.96 /a 0.94


quality management
2. Top managers training in best conduct with 0.67 10.36* 13.32 *
employees and customers

3. Employees knowledge about food industry 0.95 15.66*


4. Managers’ commitment to providing employees 0.76 13.27*
essential needs at work

Top management support


1. Top managers’ commitment to post-implementation 0.86 /a 0.73 8.64*
of quality management

2. Top managers’ commitment to long-term 0.91 15.65 *


investment in quality management

3. Top managers’ support of employee involvement in 0.88 14.51*


quality management implementation

4. Top managers’ strategic co-alignment of quality 0.98 16.49*


management with changes in market
Continuous improvement and innovation

1. Employees are encouraged to make suggestions 0.88 /a 0.70 8.23*


about work condition improvements
2. Employees are encouraged to research to improve 0.75 11.21 *
products and services

3. Manager’s consideration of suggestions for product/ 0.94 15.44*


services improvement

Supply Management

1. Coordination with the critical supplier through 0.88 /a 0.70 8.27*


information sharing

2. Enhance the quality of suppliers post quality 0.86 13.84*


management implementation

3. Establish a win-win relation with suppliers 0.78 11.76 *

4. Strategic view on managing supply-chain 0.86 13.83 *

Customer Focus

1. Center firm activities based on customer satisfaction 0.84 /a .54 6.02*


2. Customer satisfaction and expectation as a top goal 0.83 11.52 *

3. Importance of customers in top managers’ decisions 0.88 12.25 *

Employee Involvement
1. Employee training and encouragement to participate 0.57 /a 0.33 3.50*
in company programs

2. Creation of work improvement teams 0.96 7.09*


Items First-order t-value Second- t-value


order

Standardized Standardized

loading loading
*
3. Employees suggestions about improving supply- 0.96 7.99
chain

4. Employees responsibility to inspect work outcome 0.66 6.47*

5. Creation of quality circles to assist staff in problem- 0.70 6.71*


solving

6. Employee participation in management quality 0.75 7.00*


programs

7. Establishing a reward program for novel suggestions 0.82 7.36*


by employees
Chi-square = 670.02 (p < 0.001); df = 269; GFI = 0.93; AGFI = 0.88; RMSEA = 0.100.
/a Fixed parameter.
*p < 0.001.

Table 5.
Results of the first-order and second-order confirmatory factor analysis of integrated quality management.

provide input about the supplier selection based on the quality of services and prod-
ucts (B = 0.96).

5.2 Organizational learning capability

Results of an orthogonal (VARIMAX) rotation of the factor matrix (Table 6)
indicate the items underlying organizational learning capability. Based on the
four-factor solution suggested by the eigenvalue pattern (i.e., eigenvalues greater
than 1.0), 15 items were identified, each of which loaded cleanly on only one of the
four factors. A cut-off of 0.50 was used for item-scale selection. These factors
accounted for over 75% of the variance in the organizational learning capability scale
items. Following an inspection of the factor loadings, the four factors were
subsequently labeled “management commitment,” “system perspective,”
“organizational experiment,” and “knowledge transfer initiative.” After the initial
component analysis, the number of items was reduced to 15, which explained the
highest variation in organizational learning.
Table 7 shows the Kaiser-Meyer-Olkin measure and Bartlett test of sphericity
utilized for the four organizational learning dimensions, with each dimension
measured by responses to several items. The results reasonably describe each set of
items as indicative of an underlying factor for learning capability (KMO = 0.818;
χ2 = 1843; df = 120; sig. 0.000).
Results of the second-order confirmatory factor analysis (Table 6) show that the
scale reliability of the organizational learning dimensions reached statistical
significance. This indicates that the criteria had a significant correlation with the
dimensions and that the scales had convergent validity [42].
Furthermore, results (shown in Table 8) indicated that organizational learning
capability has a positive and significant relationship with management commitment
to long-term investment in human resources development and organizational learning

Derived factors(c)

Organizational learning capability item(b) | MC(b1) | SP(b2) | OEX(b3) | KTI(b4)


MC1 0.667 0.286 0.335 0.132

MC2 0.734 0.174 0.317 0.152

MC3 0.714 0.351 0.135 0.222

MC4 0.852 0.059 0.154 0.098


MC5 0.771 0.295 0.269 0.080

SP6 0.237 0.850 0.162 0.035

SP7 0.255 0.797 0.247 0.168

SP8 0.199 0.867 0.226 0.199


OEX9 0.230 0.290 0.845 0.053

OEX10 0.186 0.374 0.789 0.087

OEX11 0.344 0.052 0.800 0.246

OEX12 0.502 0.086 0.634 0.211


KTI13 0.166 0.296 0.162 0.765

KTI14 0.296 0.164 0.010 0.838

KTI15 -0.079 -0.119 0.221 0.726

Eigenvalue 9.73 1.76 1.50 1.24

Variance explained 19.35 18.57 18.26 15.89


(a) A VARIMAX orthogonal rotation is performed on the initial factor matrix.
(b) Factors derived from organizational learning capability: (b1) MC = management commitment; (b2) SP = system perspective; (b3) OEX = organizational experimentation; (b4) KTI = knowledge transfer initiative.
(c) Loadings above 0.50 are in boldface.

Table 6.
Factor analysis of organizational learning scales.(a)

Kaiser-Meyer-Olkin Measure of Sampling Adequacy 0.818


Bartlett’s Test of Sphericity Approx. Chi-Square 1843

Df 120

Sig 0.000

Table 7.
KMO and Bartlett's test of organizational learning variable.

(B = 0.88). Moreover, to enhance learning capability at all levels of the organization,
top managers promoted a culture of information sharing and knowledge transfer
(B = 0.63). Results showed that top managers encouraged individuals and teams to
explore new ideas through open experimentation (B = 0.80). Findings also indicated
that subunits were encouraged to adopt a system perspective as it relates to
understanding organizational goals and strategic orientation (B = 0.72).


Items | First-order standardized loading | t-value | Second-order standardized loading | t-value

Organizational Learning Capability

Management commitment

1. Employee participation in management decision 0.81 /a 0.88 0.930 *


making

2. Invest in employee learning 0.78 10.36 *

3. Embracing change to adapt to changing business 0.73 9.66*


environment

4. Employee learning as a key success factor in 0.77 10.33 *


company

5. Rewarding novel ideas 0.86 11.89*

Open experimentation

1. Job expansion through creativity and 0.85 /a 0.80 9.0*


experimentation

2. Adopting best practices in competitive field .84 12.60*

3. Considering expert views outside company to 0.85 12.64*


improve learning

4. Creating a culture of accepting ideas generated by 0.76 10.84*


employees
System perspective

1. Employee knowledge about the strategic direction of 0.83 /a 0.72 7.96*


company

2. Divisional participation in company goals 0.88 13.19*

3. Communication among company divisions/ 0.94 14.30*


departments

Knowledge Transfer Initiative

1. Discussion about shortcomings and mistakes at all 0.82 /a 0.63 6.66*


levels
2. Discussions about ideas, programs, and activities 0.83 10.43*
among employees
3. Culture of teamwork 0.44 5.14 *

4. Maintenance of work process documentation 0.78 9.90*


/a Fixed parameter.
Chi-square = 235.64 (p < 0.001); df = 100; GFI = 0.95; AGFI = 0.86; RMSEA = 0.095.
*p < 0.001.

Table 8.
Results of the first-order and second-order confirmatory factor analysis of organization learning.

Analysis of subset variables and their relationship with quality management.


Management commitment. Results shown in Table 8 revealed that investment in
human capital through learning programs (B = 0.78) is considered a key success
factor in the organization (B = 0.77). Furthermore, the analysis indicated that

employee participation in the management decision-making process is important
(B = 0.81) and can contribute to decisions on how to adapt to a changing industry
environment (B = 0.73). Management also implemented a program to reward novel
ideas from individuals and teams (B = 0.86).
Knowledge sharing and cross-functional transfer. Knowledge sharing within a subunit
and among various subunits contributes to the generation of new ideas among
employees (B = 0.83) and to proper documentation of work processes (B = 0.78); it
creates a culture of teamwork (B = 0.44) and also generates productive discussions
about the shortcomings of subunits and top management (B = 0.82).
System perspective. Establishing a system perspective requires a lateral and flexible
organizational structure. Results of the data analysis showed that top executives
implemented integrated quality management by designing a lateral organizational
structure, which enabled departments and divisions to participate in the strategic
goal-setting process of the organization (B = 0.88). Furthermore, the lateral structural
design facilitated more effective cross-functional communication to co-align division
objectives and goals (B = 0.94) and conferences to educate employees about the
organizational strategic direction (B = 0.83).
Exploration and open experimentation. According to March [2], organizations
engage in exploration to find new ways of doing things and of creating products and
services. The data analysis in the present research indicated that organizations
pursued both an internal strategy and external monitoring to explore and experiment
with novel ideas. Data analysis showed that the organization created a culture of
welcoming and accepting new ideas from employees (B = 0.76); in addition, through
job expansion, employees were enabled to explore and experiment with new ideas
(B = 0.85). Within the business environment, results indicated that organizations
monitored and adopted best practices (B = 0.84) and consulted with experts in the
field outside the organization to improve learning capability (B = 0.85).

5.3 Organizational innovation

Table 9 presents the result of an orthogonal (VARIMAX) rotation of the factor
matrix underlying the organizational innovation items. Based on the three-factor
solution suggested by the eigenvalue pattern (i.e., eigenvalues greater than 1.0),
17 items were identified, each of which loaded cleanly on only one of the three
factors. A cut-off of 0.50 was used for item-scale selection. These factors accounted
for over 74% of the variance in the organizational innovation scale items. Following
an inspection of the factor loadings, the three factors were subsequently labeled
“product/service innovation,” “process innovation,” and “overall organizational
innovation.”
Table 10 shows the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity. The results reasonably describe each set of items as being indicative of underlying factors for organizational innovation (KMO = 0.891; χ2 = 2.418E3; df = 136; Sig. = 0.000). Furthermore, the results are indicative of a relationship among the innovation components "product innovation," "process innovation," and "organizational innovation."
Table 11 shows the results of the second-order confirmatory factor analysis and the scale reliability of the organizational innovation dimensions, which reached statistical significance. This indicates that the criteria had a significant correlation with the dimensions and that the scales had convergent validity [42]. Results (shown in Table 11) were also indicative of a significant and positive correlation between innovation and the introduction of new products and services (B = 0.92).

Derived factors/c

Item                       PS/b1     PR/b2     OOI/b3
PS1                        0.760     0.229     0.303
PS2                        0.795     0.219     0.346
PS3                        0.818     0.413     0.180
PS4                        0.550     0.339     0.447
PS5                        0.781     0.400     0.218
PS6                        0.659     0.246     0.439
PR7                        0.459     0.718     0.155
PR8                        0.277     0.730     0.381
PR9                        0.141     0.805     0.121
PR10                       0.170     0.757     0.358
PR11                       0.337     0.738     0.253
PR12                       0.430     0.757     0.169
OOI13                      0.398     0.169     0.736
OOI14                      0.299     0.261     0.784
OOI15                      0.415     0.298     0.687
OOI16                      0.103     0.249     0.855
OOI17                      0.241     0.162     0.797
Eigenvalue                 9.84      1.66      1.18
Variance explained (%)     25.57     25.09     24.01

/a A VARIMAX orthogonal rotation is performed on the initial factor matrix.
/b Factors derived from organizational innovation.
/c Loadings above 0.50 are in boldface.

Table 9.
Factor analysis of organizational innovation scales/a.

Kaiser-Meyer-Olkin Measure of Sampling Adequacy      0.891
Bartlett's Test of Sphericity   Approx. Chi-Square   2.418E3
                                df                   136
                                Sig.                 0.000

Table 10.
KMO and Bartlett's test of the innovation variable.

Moreover, top managers allocated resources for continuous process innovation (B = 0.78). Findings also revealed that top managers coordinated subunits' efforts to enhance overall organizational innovation (B = 0.77).
Analysis of subset variables and their relationship with quality management.
Items                                                          First-order    t-value    Second-order    t-value
                                                               loading                   loading
Overall Organizational Innovation
Product and service innovation
1. Higher rate of innovation in comparison to competitors      0.79           /a         0.92            9.88*
2. Higher production improvement in comparison to
   competitors                                                 0.82           11.32*
3. Faster acquisition of innovative ideas compared with
   competitors                                                 0.94           13.55*
4. Knowledge and skill improvement through R&D                 0.72           9.56*
5. Production of products that better fit customer needs       0.92           13.21*
6. Introduction of new products to customers faster than
   competitors                                                 0.95           10.10*
Performance innovation
1. Utilizing novel ideas to improve product quality and
   speed of delivery                                           0.88           /a         0.78            10.25*
2. Utilizing quality resources in the production process       0.80           12.61*
3. Flexibility in resource allocation                          0.68           9.72*
4. Cost reduction through efficient resource allocation        0.78           11.86*
5. Adoption of human resources management                      0.81           12.86*
6. Flexibility in org-structure compared with competitors
   that allows innovation                                      0.89           15.42*
Overall organization innovation
1. Best use of organizational resources to implement
   quality management                                          0.77           /a         0.77            8.28*
2. Unit cost reduction after implementation of quality
   management                                                  0.84           11.08*
3. Financial improvement after quality management
   implementation                                              0.81           10.61*
4. Increased employee productivity after quality
   management implementation                                   0.79           10.18*

/a Fixed parameter.
Chi-square = 244.89 (p < 0.001); df = 116; GFI = 0.91; AGFI = 0.81; RMSEA = 0.086.
*p < 0.001.

Table 11.
Results of the first-order and second-order confirmatory factor analysis of organization innovation.

Products and services innovation. Results for the subset variables of the innovation dimension (Table 11) reveal that executives place strategic importance on the first-mover advantage and on faster generation of new products and services compared with other rivals (B = 0.94). Furthermore, the first-mover advantage enabled the organization to present customers with products and services that best served their needs compared with other rivals in the marketplace (B = 0.92), at a higher rate of market presentation of innovative products compared with other rivals (B = 0.79). Results also indicated that, as a first-mover strategy, top managers placed strategic emphasis on R&D and allocated greater resources toward research and development (B = 0.72). Congruent with the results presented in the learning capability segment, flexible and lateral structural design and greater cross-functional communication and knowledge sharing reduced the process costs associated with higher production improvements and efficiency compared with other competitors (B = 0.82) and supported the generation of new products and services for customers (B = 0.75).
Innovation performance. Findings reveal that designing a lateral, flexible organizational structure was highly correlated with innovations in the organization (B = 0.89) and enabled subunits to transform novel ideas into products and services and present them to the marketplace in a timely fashion (B = 0.88). Moreover, resources could be allocated and reallocated cross-functionally (B = 0.68), at lower cost and with more efficiency (B = 0.78). Findings also indicated that top managers focused on human resource development and management (B = 0.81) and acquired high-quality resources for the production processes (B = 0.80).
Organizational innovation. The results of the analysis of innovation showed that there are two important aspects of organizational innovation. The financial aspect indicated that innovation leads to a reduction in costs per unit (B = 0.84). Moreover, innovation enhances employee productivity (B = 0.79), efficient cross-functional resource allocation (B = 0.77), and prospects of healthier finances (B = 0.79).

5.4 Organizational performance

Table 12 presents the result of an orthogonal (VARIMAX) rotation of the factor matrix underlying the organizational performance items. Based on the four-factor solution suggested by the eigenvalue pattern (i.e., eigenvalues greater than 1.0), 16 items were retained, each loading cleanly on only one of the four factors. A cut-off of 0.50 was used for item-scale selection. These factors accounted for over 77% of the variance in the organizational performance scale items. Following an inspection of the factor loadings, the four factors were subsequently labeled "customer satisfaction," "employee satisfaction," "environmental performance," and "environmental sustainability."
The Kaiser-Meyer-Olkin measure and Bartlett's test of sphericity (shown in Table 13) were utilized to assess the four organizational performance dimensions, with each dimension being measured by responses to several items. Results (shown in Table 13) reasonably describe each set of items as being indicative of an underlying factor for organizational performance (KMO = 0.862; χ2 = 1.971E3; df = 120; Bartlett's test of sphericity significant at 0.000, which is less than 0.05).
Table 14 shows the results of the second-order confirmatory factor analysis and the scale reliability of the organizational performance dimensions, which reached statistical significance. This indicates that the criteria had a significant correlation with the dimensions and that the scale had convergent validity [42].
Results of the path analysis indicated that the top echelon focused on reducing the turnover rate by instituting a high remuneration policy and promoting employee satisfaction (B = 0.75). Moreover, the data analysis indicated that customer contentment with products and services was high, with little or no defect returns (B = 0.59). Findings also indicated that top managers monitored the industry environment and continuously selected best practices (B = 0.63).

Derived factors/c

Item                       CUS/b1    EMS/b2    SOR/b3    ENP/b4
EMS1                       0.306     0.731     0.288     0.279
EMS2                       0.211     0.810     0.297     0.210
EMS3                       0.301     0.844     0.195     0.120
EMS4                       0.400     0.689     0.089     0.319
CUS5                       0.668     0.436     0.285     0.016
CUS6                       0.819     0.133     0.198     0.226
CUS7                       0.680     0.090     0.542     0.095
CUS8                       0.797     0.337     0.088     0.038
CUS9                       0.876     0.219     0.139     0.138
ENP10                      0.163     0.157     0.202     0.836
ENP11                      0.078     0.543     0.158     0.680
ENP12                      0.024     0.184     0.306     0.784
SOR13                      0.310     0.362     0.724     0.203
SOR14                      0.218     0.014     0.738     0.307
SOR15                      0.015     0.470     0.689     0.173
SOR16                      0.270     0.267     0.756     0.196
Eigenvalue                 8.20      1.90      1.30      1.05
Variance explained (%)     22.62     22.06     18.32     14.36

/a A VARIMAX orthogonal rotation is performed on the initial factor matrix.
/b Factors derived from organizational performance.
/c Loadings above 0.50 are in boldface.

Table 12.
Factor analysis of organizational performance scales/a.

Kaiser-Meyer-Olkin Measure of Sampling Adequacy      0.862
Bartlett's Test of Sphericity   Approx. Chi-Square   1.971E3
                                df                   120
                                Sig.                 0.000

Table 13.
KMO and Bartlett's test of the organizational performance variable.

Furthermore, top managers were cognizant of the organization's impact on the environment and its negative externalities, and they pursued a sustainability strategy as a priority after implementing integrated quality management (B = 0.93).
Analysis of subset variables and their relationship with quality management.

Items                                                          First-order    t-value    Second-order    t-value
                                                               loading                   loading
Organizational Performance
Employee satisfaction
1. Employee satisfaction                                       0.86           /a         0.75            10.88*
2. Ample remuneration for employees                            0.87           14.12*
3. Reducing turnover after quality management
   implementation                                              0.88           14.37*
4. Reduction of absenteeism after quality management
   implementation                                              0.83           12.97*
Customer satisfaction
1. Customer satisfaction                                       0.86           /a         0.75            10.88*
2. Introduction of new products and services                   0.80           10.91*
3. Reduction of product defect returns after quality
   management implementation                                   0.75           10.03*
4. Strategies to maintain customer base                        0.82           11.41*
5. Higher profitability after quality management
   implementation                                              0.88           11.74*
6. Reducing customer complaints after quality
   management implementation                                   0.89           12.65*
Sustainability and environmental
1. Consideration of environmental projects after
   implementation of quality management                        0.74           /a         0.63            7.41*
2. Sustainability/reducing production pollution                0.85           9.31*
3. Reducing complaints about environmental pollution           0.75           8.56*
Social responsibility performance
1. Sustainability                                              0.89           /a         0.93            10.04*

/a Fixed parameter.
Chi-square = 232.06 (p < 0.001); df = 100; GFI = 0.93; AGFI = 0.84; RMSEA = 0.094.
*p < 0.001.

Table 14.
Results of the first-order and second-order confirmatory factor analysis of organizational performance.

Human resource management. Analysis of the subset variables (shown in Table 14) revealed that executives place strategic importance on employee retention (B = 0.88) and on reducing absenteeism (B = 0.83) by offering employees competitive remuneration (B = 0.87), together with overall employee satisfaction with their jobs (B = 0.86).
Customer contentment. Results of the data analysis indicated that investment in the introduction of new and high-quality products and services (B = 0.80) tends to reduce the rate of defective products (B = 0.75) and consumer complaints (B = 0.89), maintain the market share (B = 0.82), and assure that consumers are contented with the products and services (B = 0.81).
Monitoring environmental conditions and sustainability strategy. The analysis outcome also revealed that top executives were cognizant of the company's reputation, maintaining sustainability by considering environmental renewable-energy projects (B = 0.74) and reducing the negative externalities caused by production pollution (B = 0.85). Integrating the sustainability strategy with quality management enhanced the company's legitimacy and reputation for social responsibility through planning for environmentally friendly projects and sustainability (B = 0.89).

6. Results and discussion

Macro model. As shown in Table 1 (Figure 1), the standardized regression weights for the overall model indicated positive and significant relationships among the main variables: quality management, organizational learning, and innovation. According to the results, organizational integrated quality management is positively and significantly associated with organizational learning capability (B = 0.95, p < 0.05). Similarly, results showed a positive and significant relationship between innovation performance and integrated quality management (B = 0.91, p < 0.05). Results indicated that when parsing the main effects of learning capability and innovation performance, the association between quality management and organizational performance remains positive but statistically non-significant (B = 0.43, n.s.) and does not explain significant variance (R2 = 0.18) in organizational performance. A detailed analysis revealed that organizational learning capability is positively and significantly associated with organizational performance (B = 0.58, p < 0.05). Furthermore, innovation performance, according to the findings, is also positively and significantly associated with organizational performance (B = 0.62, p < 0.05). Findings are congruent with hypotheses H1a and H1b; they are, however, only partially congruent with hypothesis H1.
H1: There will be a positive and significant relationship between quality management and organizational performance.
H1a: There will be a positive relationship between quality management and organizational learning.
H1b: There will be a positive relationship between quality management and innovation.
Findings also supported hypotheses H2 and H3.
H2: There will be a positive relationship between organizational learning and
organizational performance.
H3: There will be a positive relationship between innovation and organizational
performance.
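A minimal sketch of how hypotheses H1 through H3 might be tested as a path model in the open-source semopy package follows; the composite variable names (QM, OL, INNOV, PERF) and the data file are hypothetical placeholders, not the study's actual measures:

```python
import pandas as pd
from semopy import Model

data = pd.read_csv("macro_composites.csv")  # placeholder composite scores

# H1a/H1b: QM -> learning and QM -> innovation; H2/H3: learning and
# innovation -> performance, with QM's direct effect parsed out (H1)
desc = """
OL ~ QM
INNOV ~ QM
PERF ~ OL + INNOV + QM
"""

model = Model(desc)
model.fit(data)
print(model.inspect(std_est=True))  # standardized path coefficients
```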

6.1 Interaction effects

As managers attempt to identify factors that influence organizations' performance, this research argued that it is important to gain a deeper understanding of how the interaction effects of quality management, learning capability, and innovation matter in influencing organizational performance. Hypothesis H4 specified that organizational performance would be affected by an interactive effect of quality management and organizational learning capability. Hypothesis H5 specified that organizational performance would be affected by an interactive effect of quality management and innovation. To test these hypotheses, I employed structural equation modeling analysis to reduce the number of variables and to capture the interrelations of measured variables and latent constructs, as suggested by Tarka [43]. Results indicated that, compared with the effect of quality management on organizational performance (B = 0.43, n.s., R2 = 0.18), the multiplicative interaction term for quality management and organizational learning capability significantly increased the explanatory variance in organizational performance (R2 = 0.34, p < 0.05), with (0.95 × 0.58) = 0.55 (p < 0.05). Similarly, the multiplicative term between quality management and innovation significantly increased the variance (R2 = 0.38, p < 0.05), with (0.91 × 0.62) = 0.56 (p < 0.05). Results of the analysis were congruent with H4 and H5.
H4: The interactions between quality management and organizational learning
positively influence the relationship between quality management and
organizational performance.
H5: The interactions between quality management and innovation positively
influence the relationship between quality management and organizational
performance.
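To illustrate the moderation logic of H4 and H5 in a simpler, manifest-variable form (not the latent-variable procedure used in the study), the sketch below adds a multiplicative term to an ordinary least-squares regression on standardized composites; the column and file names (qm, lc, perf, composites.csv) are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("composites.csv")  # placeholder standardized composites

base = smf.ols("perf ~ qm", data=df).fit()
moderated = smf.ols("perf ~ qm * lc", data=df).fit()  # qm + lc + qm:lc

# Compare explained variance with and without the interaction term
print(f"R2 base = {base.rsquared:.2f}; R2 moderated = {moderated.rsquared:.2f}")
print(moderated.params["qm:lc"], moderated.pvalues["qm:lc"])
```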

7. Discussion

Several important theoretical and practical implications emerge from this research. Findings underscore the importance of the interaction of quality management elements. Over the past decade, researchers have systematically underplayed the interaction effects of quality management elements. The present research showed that the dominant impact on organizational performance, beyond external resource considerations, is the intersection of forces associated with quality management, organizational learning capability, and innovation within these organizations. It was argued earlier that, within quality management theory and methodology, considering the contingency approach might result in an in-depth understanding of the strategic allocation of resources and of managing and coordinating the interrelated constituent elements within quality management. Results suggested that organizational performance is positively influenced by the interaction of quality management with innovation and learning capability at organizational levels. It is also clear that there are distinct differences between parsed and integrated constituents within quality management with respect to explaining variations in organizational performance. This finding is of some theoretical significance.
As a strategy, quality management appears to face coordination challenges associated with learning capability and with the application of such learning to innovations in new products and services. This study found that organizational performance is significantly impacted by the interaction between quality management and learning capability. Similarly, findings indicated that the interaction between innovation and quality management positively and significantly influences organizational performance. The strength of these findings, particularly in light of incorporating external environmental factors such as sustainability considerations, points to the potential importance of revitalizing the contingency theory perspective pertaining to integrated quality management. Such a revival would not necessarily imply that researchers "pit" internal elements influencing performance against external forces. Instead, more direct integration of contingency variables within quality management is suggested to better balance internal and external perspectives on organizational performance.

Nevertheless, any resurrection of this perspective within quality management theory and methodology may require changes in how contingency theory is employed (e.g., Pfeffer 1997). This study did not limit its focus to examining the main effects of organizational learning capability, innovations, and quality management on performance. As I argued in the theoretical development, one cannot easily specify the nature of these main effects. Instead, what may be as important, if not more so, is the interaction of these variables, as previous organizational researchers have argued that the internal and external characteristics of organizations and their members may cluster together in predictable patterns to explain a variety of micro- to macro-level organizational processes and relationships [44]. Congruent with Meyer et al.'s findings, the results on organizational learning capability showed top managerial commitment to implementing a complex set of policies on the development of human resources. Such policies included learning based on system perspectives, learning associated with experimentation and exploration of novel ways of doing things, and knowledge transfer at the various levels of individuals, teams, and organizational subunits. Furthermore, findings revealed managerial efforts to coordinate and co-align subunits' strategies with the organization at the macro level. Similarly, findings on innovation showed managerial commitment to implementing flexible resource allocation strategies for subunits to explore novel processes and ideas. Findings were congruent with the notion that integrating the interrelated constituents of quality management at the micro and macro levels requires greater structural flexibility and high levels of coordination among organizational activities. While the explicit consideration of interactive variables in quality management theory adds complexity to the understanding and application of contingency theory, this type of complexity is what managers must face. Rarely is there the luxury of focusing exclusively on one aspect of quality management, as has been the theme of previous research, in isolation from the others. For contingency theory to develop as a theoretical perspective and be relevant to the practical concerns of managers and executives, researchers may need to pay further attention to how constructs in quality management and their subset variables interact to influence organizational performance over time.
Employing contingency theory to conduct future research in the quality management field will also require making more direct connections between the results of studies and the organizational design concerns of managers. One important vehicle for doing this is to consider how quality management research findings can be connected to process considerations at various levels of the organization. It is often organizational processes that are of most direct concern to managers adopting quality management practice. Perhaps the most direct implication relates to the enhanced importance of managing and integrating complex processes within and between each constituent of quality management.
Therefore, it is critical for an organization adopting quality management to develop an organizational capability or competence for managing internal, complex, and interrelated process models. Without this capability, managerial policies and efforts can become misguided and create greater conflict, thereby undermining the effectiveness of coordination efforts among complex processes to achieve timely policy and strategy adjustments. Successful corporations such as Boeing and the major car manufacturers recognized the need to employ quality management and, as the corporation evolved, developed organizational capabilities to manage complex processes. Future researchers may wish to create a matrix that examines the contingent effects of long-term variations in learning capability on innovations and assesses variations in long-term innovations on organizational performance.

Author details

Mohsen Modarres
Management and Technology Consulting, Kirkland, WA, USA

*Address all correspondence to: [email protected]

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.

References

[1] Raynor EM. The Strategy Paradox. London: Currency Doubleday; 2007

[2] March J. Exploration and exploitation in organization learning. Organization Science. 1991;2(1):71-87

[3] Ireland DR, Hoskisson RE, Hitt MA, Loomis CJ. Can anyone run Citigroup? Fortune. 2008:80-91

[4] Bartlett CA, Ghoshal A, Birkinshaw J. Transnational Management: Texts, Cases and Readings in Cross-Border Management. San Francisco: Irwin-McGraw-Hill; 2004

[5] Douglas TJ, Judge JQ. Total quality management and competitive advantage: The role of structural control and exploration. Academy of Management Journal. 2001;44(1):158-169

[6] Hung RYY, Lien BYH, Yang B, Wu CM, Kuo YM. Impact of TQM and organizational learning on innovation performance in the high-tech industry. International Business Review. 2011;20(2):213-225

[7] Luthans F. Organizational Behavior. 7th ed. New York, NY: McGraw-Hill; 1995

[8] Joiner TA. Total quality management and performance: The role of organization support and co-worker support. International Journal of Quality and Reliability Management. 2006;21(6):617-627

[9] Hana U. Competitive advantage achievement through innovation and knowledge. Journal of Competitiveness. 2013;5(11):82-96

[10] Senge PM. The Fifth Discipline: The Art & Practice of the Learning Organization. New York: Currency Doubleday; 1990

[11] Corredor P, Goñi S. TQM and performance: Is the relationship so obvious? Journal of Business Research. 2011;64:830-838

[12] Porter ME. What is strategy? Harvard Business Review. 1996;74(6):61-78

[13] Wheelen T, Hunger D, Hoffman A, Bamford C. Strategic Management and Business Policy: Globalization, Innovation and Sustainability. 14th ed. USA: Pearson; 2015

[14] Gomez-Gras JM, Verdu-Jover AJ. TQM, structural and strategic flexibility and performance: An empirical research study. Total Quality Management and Business Excellence. 2005;16(7):841-860

[15] Hasan M, Kerr RM. The relationship between TQM practices and organizational performance in service organizations. The TQM Magazine. 2003;15(4):286-291

[16] Powell T. Total quality management as competitive advantage: A review and empirical study. Strategic Management Journal. 1995;16(1):15-37

[17] Westphal JD, Gulati R, Shortell SM. The institutionalization of total quality management: The emergence of normative TQM adoption and the consequences for organizational legitimacy and performance. Academy of Management Proceedings; August 1996. pp. 249-253

[18] Demirbag M, Tatoglu E, Tekinkus M, Zaim S. An analysis of the relationship between TQM implementation and organizational performance: Evidence from Turkish SMEs. Journal of Manufacturing Technology Management. 2006;17(6):829-847

[19] Salaheldin I. Problems, success factors and benefits of QCs implementation: A case of QASCO. The TQM Journal. 2009;21:87-100

[20] Jaafreh AB, Al-Abedallat AZ. The effect of quality management practices on organizational performance in Jordan: An empirical study. International Journal of Financial Research. 2012;4(1):93

[21] Winter S. Knowledge and competence as strategic assets. In: Teece D, editor. The Competitive Challenge: Strategies for Industrial Innovation and Renewal. New York: Harper and Row; 1987. pp. 159-184

[22] Schumpeter JA. The Theory of Economic Development. Cambridge, MA: Harvard University Press; 1934

[23] Penrose E. The Theory of the Growth of the Firm. New York: Wiley; 1959

[24] Conner K, Prahalad C. A resource-based theory of the firm: Knowledge versus opportunism. Organization Science. 1996;7:477-501

[25] Cyert R, March J. A Behavioral Theory of the Firm. Englewood Cliffs: Prentice-Hall; 1963

[26] Kim DH. Link between individual and organizational learning. In: Klien D, editor. Strategic Management of Intellectual Capital. 1998. pp. 41-62

[27] Modarres M, Beheshtian M. Enterprise information systems: Integrating decision support systems, executive information systems, and simulation technology. International Journal of Technology Management. 2005;31(1-2):116-128

[28] Jerez-Gomez P, Cespedes-Lorente J, Valle-Cabrera R. Organizational learning capability: A proposal of measurement. Journal of Business Research. 2005;58:715-725

[29] Jimenes D, Sans-Valle R. Innovation, organizational learning and performance. Journal of Business Research. 2011;64:408-417

[30] Atalay M, Anafarta N, Sarvan F. The relationship between innovation and firm performance: An empirical evidence from Turkish automotive supplier industry. Procedia - Social and Behavioral Sciences. 2013;75:226-235

[31] Günday G, Ulusoy G, Kılıç K, Alpkan L. Effects of innovation types on firm performance. International Journal of Production Economics. 2011;133(2):662-676

[32] Hoang D, Igel B, Laosirihongthong T. The impact of total quality management on innovation: Findings from a developing country. International Journal of Quality and Reliability Management. 2006;23(9):1092-1117

[33] Singh PJ, Smith AJ. Relationship between TQM and innovation: An empirical study. Journal of Manufacturing Technology Management. 2004;15(5):394-401

[34] Holbeche L. Designing More Agile Organizational Structure and Ways of Working. CIPD Annual Conference and Exhibition; 2017

[35] Luecke R, Katz R. Managing Creativity and Innovation. Boston, MA: Harvard Business School Press; 2003

[36] Tushman ML, Anderson P. Technological discontinuities and organizational environments. Administrative Science Quarterly. 1986;31(3):439-465

[37] Vanichchinchai A, Igel B. The impact of quality management on supply chain management and firm's supply performance. International Journal of Production Research. 2011;49(11):3405-3424

[38] Coyle-Shapiro J. A psychological contract perspective on organizational citizenship behavior. Journal of Organizational Behavior. 2002;23:927-946

[39] Santos JB, Briot LAL. Toward a subjective measurement model for firm performance. Brazilian Administration Review (BAR). 2012;9(6):95-117

[40] Fields A. Discovering Statistics Using SPSS. 3rd ed. London: Sage Publications; 2009

[41] Royston P. Constructing time-specific reference ranges. Statistics in Medicine. 1991;10(5):675-690

[42] Anderson JC, Gerbing DW. Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin. 1988;103(3):411-423

[43] Tarka P. An overview of structural equation modeling: Its beginnings, historical development, usefulness and controversies in the social sciences. Quality & Quantity. 2018;52(1):313-354. DOI: 10.1007/s11135-017-0469-8

[44] Modarres M. Interactive Effects of Organizational Size and Structural Differentiation on Administrative Reorganization. Research in Progress; 2019
Chapter

Artificial Intelligence Deployment to Secure IoT in Industrial Environment

Shadha ALAmri, Fatima ALAbri and Tripti Sharma

Abstract

Performance enhancement and cost-effectiveness are critical factors for most industries. Performance and cost matrices vary across industrial sectors; however, cybersecurity must be maintained, since most of the 4th industrial revolution (4IR) is based on technology. Internet of Things (IoT) technology is one of the 4IR pillars that support enhancing performance and cost. Like most Internet-based technologies, IoT has some security challenges, mostly related to access control and exposed services. Artificial intelligence (AI) is a promising approach that can enhance cybersecurity. This chapter explores industrial IoT (IIoT) from the business view and the security requirements. It also provides a critical analysis of the security challenges faced by IoT systems. Finally, it presents a comparative study of the advisable AI categories to be used in mitigating IoT security challenges.

Keywords: artificial intelligence, Internet of Things, cybersecurity, industry, industrial IoT (IIoT), 4th industrial revolution

1. Introduction

The 4th industrial revolution (4IR) is the current era, in which industry is driven by technology. It encourages cooperation between scientific knowledge and experience on the one hand and the business mindset and requirements on the other. The key technologies that sustain 4IR are additive manufacturing techniques, autonomous and collaborative robotics, the Industrial Internet of Things (IIoT), Big data analytics, and cloud manufacturing techniques [1]. Current scenarios show the benefits of IIoT in improving quality of service (QoS) in industries, from predictive maintenance, through remote control of assets, to deployment of the Digital Twin concept, which allows virtualizing the operations environment and permits the owner to be proactive when any anomalies are detected [2]. Even though IIoT adds value to the traditional industry, there should be a balance between the operational benefits and the security level.
Aims and objectives

• To study and compare the existing IoT architectures

• To explore industrial IoT (IIoT) from the business point of view

• To analyze various IIoT threats and security challenges and existing mitigation techniques

• To perform a comparative study of the different AI categories and their applicability in IIoT security

• To recommend the most convenient AI techniques for mitigation of IIoT security challenges

This chapter is designed to be used as a reference for studying the effectiveness of artificial intelligence (AI) in enhancing the security techniques that mitigate the threats faced by IIoT deployment. Section 2.1 discusses IoT architecture, and Section 2.2 demonstrates the IoT security challenges. Section 2.3 describes the main AI categories and their subcategories; it also points out the appropriate and relevant situations in which to employ AI categories based on the available data and the type of intelligence needed. Section 3 explores IIoT details, its significant business models, and the added values. Section 4 focuses on IoT security in terms of threat model, threat classification, and common IoT security mitigations. The chapter ends with a comparative study of AI categories used to mitigate IIoT security challenges in Section 5.

2. Background

2.1 IoT architecture

Internet of Things (IoT) is a service-oriented paradigm that is built on the involvement of several technologies. Therefore, its architecture consists of layers, starting from sensors and reaching constructive data displayed on the system analyzer's screen.
In references [3–5], the main IoT architecture consists of devices that have sensors; edge computing, which includes embedded devices; fog computing, such as gateways and servers; cloudlets, such as base stations; and, as the last component, cloud computing, which can be any cloud platform. Table 1 shows some IoT architectures, with variations in the number of layers, based on five different references. In general, there are three main layers: devices, network, and cloud computing. However, the device layer can be divided into two sub-layers based on type and functionality: the first sub-layer comprises end-user devices that contain sensors, and the second sub-layer comprises devices that support machine-to-machine communication, such as an Arduino platform.

Ref    Number of layers    Layer titles
[3]    5 layers            Devices, edge computing, fog computing, cloudlets, cloud computing
[4]    3 layers            IoT layer, fog layer, cloud layer
[5]    3 layers            EoT (Ecosystem of Things) layer, edge layer, cloud layer
[6]    4 layers            Fog network consisting of (IoT layer, mist, cloudlet/edge layer), cloud
[7]    4 layers            Sensors and systems layer, far-edge layer, near-edge layer, cloud layer

Table 1.
IoT ecosystem architecture comparison.

The network layer can likewise be divided into two sub-layers based on communication characteristics such as speed and bandwidth: fog computing and cloudlets. The third layer is the cloud computing layer. Figure 1 illustrates the authors' insights into IoT architecture after studying the literature: layer one consists of IoT devices, layer two covers all networking-related technologies and devices, and layer three consists of cloud computing and related data analytics technologies.
The IoT layers are connected through networking media using wireless or wired connections. However, the evolution of wireless technology is critical to extending IoT deployment, as the complexity of energy impact and processing capacity worsens at the sensor layer [6]. The emergence of 5G in wireless communication adds an advantage to IoT architecture, since it improves performance by allowing the transfer of more data in less time, which technically reduces service latency and enhances real-time access to data [6, 8].

2.2 IoT security challenges

The growing use of Internet of Things (IoT) technology in the industrial sector has posed new issues for device and data security. Based on various world statistics, the number of devices connected to IoT networks is rapidly increasing. This expansion leads to different levels of vulnerabilities, which may, in turn, cause an increase in security threats and challenges. Security may be regarded as a big threat that limits the deployment of IoT systems. As a result thereof, it is the authors' view that effective security practices may become more vital in the IoT industry.
The National Institute of Standards and Technology (NIST) designed programs to boost cybersecurity involvement in IoT [9]. This initiative promotes the development and implementation of cybersecurity standards, guidelines, and tools for IoT products, connected devices, and their deployment environment.

• Security challenges: Some common challenges posed by the security requirements for IoT systems are given as follows:

◦ Because IoT involves various and diverse technologies, determining and understanding security needs is more complicated.

Figure 1.
IoT architecture.

◦ IoT networks typically consist of resource-constrained devices. Therefore, these devices become the weakest link for cyberattacks.

◦ The Internet of Things (IoT) may include mobile devices that demand adaptability, posing security vulnerabilities.

◦ IoT also generates a vast amount of data, which is referred to as Big data. The latter has its own set of security and management concerns.

• Security requirements: Because of the varied nature of IoT applications, security requirements may also differ. Based on the scenarios from a specific industry and the infrastructure to which IoT is being applied, the requirements and the consequent security measures may be changed or adjusted. Nevertheless, the common security requirements [10–13] of IoT systems can be summarized as given in Table 2.

Satisfying all the above-mentioned requirements is a huge challenge because of the limitations and constraints associated with IoT devices in terms of their capability and capacity to deploy conventional security solutions.
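The sketch below gives a minimal illustration of two of the mitigation techniques listed in Table 2, hash-based integrity checking and lightweight origin authentication, using only Python's standard library; the pre-shared key and the sensor payload are hypothetical examples, not part of any cited system:

```python
import hashlib
import hmac
import os

# Hypothetical pre-shared key provisioned to a constrained sensor node
key = os.urandom(32)

def tag_reading(key: bytes, payload: bytes) -> bytes:
    # HMAC-SHA256 provides integrity plus origin authentication at a cost
    # low enough for devices that cannot run a full TLS stack
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_reading(key: bytes, payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking timing information
    return hmac.compare_digest(tag_reading(key, payload), tag)

msg = b'{"sensor":"temp-01","value":21.7}'
tag = tag_reading(key, msg)
print(verify_reading(key, msg, tag))         # True: message intact
print(verify_reading(key, msg + b"x", tag))  # False: tampering detected
```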

2.3 Artificial intelligence categories

When it comes to artificial intelligence (AI), several philosophical groundworks have been laid. As per Russel [14], there are two types of AI: weak AI, where the machine can act intelligently, and strong AI, where the machine can really think. However, when hybrid mechanisms are used, the deployment of AI system features is enhanced.
Artificial intelligence (AI) can be divided into two main categories according to the mechanisms used to reach intelligence through data processing [14–16].

Confidentiality
Mitigation examples: encryption.
Requirement: only authorized entities should be able to read the data, to ensure data protection.

Integrity
Mitigation examples: hash generation.
Requirement: the data should be checked to ensure that it has not been tampered with.

Authentication, Authorization, and Access control (AAA)
Mitigation examples: implementation of policies, security credentials, firewalls, authentication servers, digital signatures, etc.
Requirement: identification of devices and users; special rights or privileges for authorized users; access to resources and data should be restricted.

Availability
Mitigation examples: fault-tolerance mechanisms, clustering and high-availability architecture, etc.
Requirement: the ability to be accessed and used by an authorized entity on demand.

Non-repudiation
Mitigation examples: digital signature.
Requirement: securing information transmission by supplying confirmation of delivery and identification to both sender and receiver, so that neither can later deny processing it; it ensures data origin and integrity.

Table 2.
IoT security attributes, techniques, and requirements.

The first category is knowledge-based, in which the main component is the existence of an inference engine; this category is known as the expert system (ES). The second category is machine learning (ML), where different algorithms are used to allow the machine to learn from the dataset. Table 3 illustrates the main AI categories. The core element is knowledge engineering, needed to build either the dataset for ML or the fact database for ES. The data preparation phase needs to make use of other technologies such as data mining and Big data techniques. The ML sub-categories are supervised learning, reinforcement learning, and un-supervised learning. The ES types of systems are rule-based, fuzzy-logic, and frame-based.

• Machine learning (ML) types: The intelligence behind ML is the ability to learn. ML involves adaptive mechanisms; therefore, it is considered the basis of adaptive systems. In this context, ML detects and extrapolates patterns by adapting to new circumstances. This learning process can be based on experience, examples, or analogy. Therefore, ML has three sub-categories, as follows:

◦ Supervised learning: learning from examples. This type is the easiest ML type in terms of mathematical complexity. The machine learns from labeled behavior (labels).

◦ Reinforcement learning: learning from the environment based on experience. This type relies on an agent that can learn from a reward signal. The machine learns from its mistakes.

◦ Un-supervised learning: learning based on analogy, to find a pattern in a dataset. This type is used when there are no examples to learn from and no reward signal to obtain feedback.

Figure 2 shows examples of mechanisms for each ML sub-category.
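To make the supervised versus un-supervised distinction concrete, the sketch below applies both to the same toy data with scikit-learn; the two network-flow features and all numeric values are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy flows: [packets/s, mean packet size]; normal traffic vs. flood-like attack
normal = rng.normal([100, 500], [10, 50], size=(200, 2))
attack = rng.normal([900, 60], [30, 10], size=(200, 2))
X = np.vstack([normal, attack])
y = np.array([0] * 200 + [1] * 200)  # labels available -> supervised learning

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[880, 70]]))  # learns the attack pattern from examples

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # no labels
print(np.bincount(km.labels_))   # un-supervised: the two patterns emerge
```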

• Expert system (ES): The expert system deals with uncertain knowledge and reasoning. A rule-based ES consists of five basic components, shown in Figure 3: the knowledge base, the database, the inference engine, the explanation facility, and the user. ES intelligence resembles the way a human expert applies knowledge and intelligence to solve a problem in a narrow domain. An ES processes knowledge in the form of rules and uses symbolic reasoning to solve the problem. The main difference between an ES and a conventional program (CP) is that the CP processes data using algorithms with well-defined operations to solve a problem in a general domain.

Expert system (ES)        Machine learning (ML)
Rule-based                Supervised learning
Fuzzy logic               Reinforcement learning
Frame-based               Un-supervised learning

Table 3.
Artificial intelligence (AI) main categories.

Figure 2.
Examples of ML sub-category mechanisms.

Figure 3.
Expert system (ES), rule-based; adapted from [15].

Examples of ES are as follows:

◦ Rule-based: based on logical rules. Its disadvantages stem from an ineffective search strategy and the inability to learn.

◦ Fuzzy logic: centered on logic that describes fuzziness. It models the common sense of a human. Tuning is the most ponderous stage of building a fuzzy system.

◦ Frame-based: constructed by structuring knowledge based on object attributes. It usually uses pattern matching, but it is limited in making decisions about the hierarchical structuring during knowledge engineering.

3. Artificial intelligence in industrial IoT

3.1 The significance of AI in IIoT

Artificial intelligence (AI) deployment in industrial IoT (IIoT) systems is very convenient due to the huge amount of data generated by the IoT system. AI approaches are used to infer knowledge and support data analytics. The main areas requiring exploration and proposed solutions for intelligent IIoT systems are threat hunting and intelligence, blockchain, edge computing (including cloud computing), and privacy preservation [17].
The big data generated by IIoT result from real-time computation, and the risk increases when the communicated data are critical and sensitive; therefore, AI can support the need for big data analysis with low latency [2]. Designing security and privacy solutions requires identifying business processes and operations. However, this task is complex in a regular industrial system, and it becomes more sophisticated in IIoT [18]. AI technology deployment has several implementations, including the computing paradigm and security; however, inter-operability issues are regarded as a critical challenge [3].
The Internet of Things (IoT) has grown from a concept used in research laboratories and technology companies to a reality in everyday life. IoT has become embedded in the operations of some companies, enterprises, and governments [19]. Emerging IoT applications are spread across all domains, and IoT has affected a variety of industries. Figure 4 illustrates examples of IoT technology applications, which include Smart Homes, Smart Health, Intelligent Transportation, Smart Cities, Smart Agriculture, and Factory Automation [3].
Indeed, the report by McKinsey & Company cited above [19] identifies the top five sectors where IoT adds the most economic value: factories, which include all standardized production environments, followed by human health, work sites, cities, and the retail environment. It has been estimated in this report that IoT could add a value of $5.5 trillion to $12.6 trillion by 2030, with the most value created in B2B types of applications.

3.2 The IoT business model

The term business model describes how an organization creates, delivers, and captures value [20]. The adoption of IoT technologies in an organization will most certainly affect the business relationships and the business model of that organization. In this section, the common business models used will be discussed.

Figure 4.
Example of industry utilizing IoT technology [3].

One of the early initiatives to develop an IoT business model was published in 2015 [21]. The research focused on identifying the relevant building blocks that can fit in IoT business models, as well as the types and importance of those building blocks. This framework identified the value proposition as the most important building block for IoT business models. The entities "customer relationships" and "key partnerships" followed suit in terms of importance.
Another conceptual IoT business model is the AIC (Aspiration, Implementation and Contribution) model presented in [22], which focuses on context-specific implementation of IoT. This model consists of three interconnected phases: Aspiration, Implementation, and Contribution. The first phase, Aspiration, focuses on defining and predicting the value creation through adoption of IoT. The second phase, Implementation, includes strategy development, in which an organization should investigate how IoT will improve the business by gaining competitive advantage or creating enhanced products or services. In the third phase, Contribution, an organization opting for IoT should study the practicality of the approach and the capabilities and resources available to the organization to implement IoT; in other words, does the organization own the knowledge and skills needed to succeed in implementing IoT?
Four types of IoT-enabled servitized business models were classified in [23]. Each business model was analyzed from three perspectives: the role of IoT, the firm's benefits, and the inhibiting factors. Table 4, adapted from the study, presents the four types of IoT business models and compares them based on the stated three perspectives. The four business models share some features: the common role of IoT is adaptation, the common benefit is reducing operation costs, and the common inhibiting factor is the need for a close relationship between the different stakeholders in the network.
IoT business models vary based on the type of deployment. Therefore, each industry has a different model that fits its value proposition. Seven IoT business models were reviewed by the researchers in [24]. Based on their analysis, six characteristics of the IoT business model were identified (listed after Table 4):

Add-on business model
Role of IoT: innovation, adaptation, smoothing.
Firm's benefits: improve product-service offerings; extend the firm's business; reduce operation costs.
Inhibiting factors: privacy concerns; data security; requires a close relationship between different stakeholders in the network.

Usage-based business model
Role of IoT: adaptation, smoothing.
Firm's benefits: extend the firm's business; generate steady income; reduce operation costs.
Inhibiting factors: requires expertise in data management; requires a close relationship between different stakeholders in the network.

Sharing business model
Role of IoT: adaptation, smoothing.
Firm's benefits: improve service offerings; increase resource utilization; reduce operation costs.
Inhibiting factors: requires new ways of interacting with customers; requires a close relationship between different stakeholders in the network.

Solution-oriented business model
Role of IoT: innovation, adaptation.
Firm's benefits: extend the firm's business; gain competitive advantage; reduce operating costs.
Inhibiting factors: developing servitized offerings that align with customers' needs; requires a close relationship between different stakeholders in the network.

Table 4.
Business model categorization based on role, benefits, and inhibiting factors.

• The ability to capture the transition between different business models.

• The possibility to connect IoT elements to the business model components.

• The ability to view the relation between business-centric and network-centric approaches.

• The ability to map the value flows that involve revenue, costs, and assets.

• The possibility to include the model patterns of digital business.

• The ability to balance between the actions and widening the rational thinking.

3.3 Analytical study of how IoT adds value to the industry

Given the potential impact and the prevalence and ubiquity of IoT devices, one needs to understand how to leverage IoT technologies to realize the value-deriving benefits associated with them. For example, IoT can be used in the factory setting to make various processes more efficient. IoT applications have noteworthy potential for value creation in terms of operation optimization and predictive maintenance. This can be achieved by monitoring, remotely tracking, and adjusting the machinery based on sensor data from different parts of the factory. It has been estimated that IoT has the potential to create value of $1.2 trillion to $3.7 trillion per year in 2025 by optimizing factory settings. This improvement in working efficiency using IoT may also induce some security and privacy issues [25]. Moreover, technology does not automatically bring added convenience or value unless firms carefully consider the context into which it is introduced and how to derive any practical or monetary benefits. Mostly, added value is related to performance enhancement. The latter can be improved through a variety of factors such as time saving, cost saving, and low processing overhead, to name but a few.
Table 5 shows some recent empirical research [26–31] on how to mitigate security challenges in an IoT industrial environment, along with the different added values. AI approaches are used more in access control, which relates mostly to the network layer of IoT. Access control is a critical part of the system: it acts as a door for the factory, controlling authorized access to the resources and the level of privileges. Due to the heterogeneous and dynamic nature of IoT networks, it is significant to use AI approaches to enhance access control.
The IoT added value is constrained by several challenges and barriers. These can be categorized into three groups based on their domain, as follows:

• Human limitations:

◦ lack of social acceptance and knowledge

◦ lack of skilled workforce and technical knowledge

• Technology limitations:

◦ the absence of technical accountability and regulation

◦ challenges related to data management and data mining

◦ privacy, security, and uncertainty

◦ the immaturity of IoT innovations

◦ integration among networks and no standardization of regulations

• Business limitations:

◦ difficulty in designing business models for the IoT due to a multitude of different types of connected products

◦ ecosystems are unstructured, since it is too early to identify stakeholders and their roles

Ref [26]. IoT layer: network. Mitigation approach: graph theory. Performance (add-value): cost is not evaluated; performance varies with the number of nodes. AI used: no.
Ref [27]. IoT layer: data (access control). Mitigation approach: conditional proxy re-encryption primitive. Performance (add-value): low overhead. AI used: no.
Ref [28]. IoT layer: data (access control). Mitigation approach: context-aware analysis. Performance (add-value): improved detection ratio. AI used: no.
Ref [29]. IoT layer: network. Mitigation approach: deep learning and blockchain-empowered security framework. Performance (add-value): standard measures of latency, accuracy, and security. AI used: yes.
Ref [30]. IoT layer: gateways. Mitigation approach: flexible rule-based control strategies. Performance (add-value): cost and time saving. AI used: no.
Ref [31]. IoT layer: network. Mitigation approach: a deep-learning methodology for detecting cyberattacks. Performance (add-value): improved detection accuracy of IDS. AI used: yes.

Table 5.
Examples of AI usage in security mitigation approaches based on IoT layer.


Uncertainty about how IoT will impact existing business models, organizational strategies, and return on investment means that business models are considered significant barriers to implementation, where the added value should be clearly identified.

4. Critical analysis of IoT security

4.1 Threat modeling

A threat model is an essential approach to defining security requirements. The goal of threat modeling is to understand how an attacker would be able to compromise a system, and then to ensure that proper mitigation techniques are in place to prevent such attacks. Threat modeling pushes the design team to consider mitigations during the process of system creation, before deployment. In general, the threat modeling process consists of four steps:

• Step 1: Model the application

• Step 2: Recognize and enumerate threats

• Step 3: Use countermeasures to mitigate threats

• Step 4: Verify and validate the mitigations

The most critical step is step 2 aimed at exposing the vulnerabilities and security
challenges of the IoT systems. After properly classifying the threats, it will be possible
to explore the mitigation techniques. For classifying threats in an information system,
Microsoft introduced the STRIDE (Spoofing, Tampering, Repudiation, Information
disclosure, Denial of Service and Elevation of privilege) threat model [32] Counter-
measures are recommended and evaluated for each threat. The application of STRIDE
for threat modeling in Industrial IoT (IIoT) has been studied before as discussed in
[33, 34]. It also describes the adaptation of STRIDE for the Azure IoT reference
architecture. After discovering threats, these should be rated according to their sever-
ity using some tools. The use of the DREAD (Damage, Reproducibility, Exploitability,
Affected Users, Discoverability) model as one of commonly used tools to assign
ratings to threats is mentioned in [35] .
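To make the STRIDE-plus-DREAD workflow concrete, the short Python sketch below labels each threat with a STRIDE category and averages the five DREAD factors into a severity score used to prioritize mitigation. The threat names, scores, and ranking logic are illustrative assumptions, not values taken from [32–35].

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    stride: str  # STRIDE category, e.g. "Spoofing", "Tampering"
    # DREAD factors, each rated 0 (negligible) to 10 (severe)
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def dread_score(self) -> float:
        """Average of the five DREAD factors; higher means more severe."""
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# Hypothetical threats for a three-layer IoT system (illustrative only).
threats = [
    Threat("RFID tag cloning at perception layer", "Spoofing", 6, 8, 7, 5, 6),
    Threat("Sensor data tampering in transit", "Tampering", 8, 6, 5, 7, 4),
    Threat("Gateway flooded with bogus requests", "Denial of Service", 7, 9, 8, 9, 9),
]

# Rank threats so the highest-severity ones are mitigated first (Step 3).
for t in sorted(threats, key=Threat.dread_score, reverse=True):
    print(f"{t.dread_score():4.1f}  [{t.stride}] {t.name}")
```

Ranking by the averaged score gives the design team a simple, repeatable way to decide which mitigations to implement first.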
Generally, each IoT system has a multi-layered architecture, and these layers make
use of diversified technologies, which introduces a plethora of challenges and security
threats. As a result, the architecture of the IoT system plays a significant role in
identifying threats and attacks. However, there is no standard architecture, because
most IoT solutions are application-specific and developed with particular
technologies, resulting in heterogeneous and fragmented architectures.
A secured IoT network architecture using Software Defined Networks (SDN) to
identify threats was proposed in [36], which also summarizes how IoT network
security can be achieved in a more effective and flexible way with SDN. Furthermore,
several existing IoT architectures were studied, reviewed, and analyzed, and a new
architecture was proposed based on them [37]. This new architecture incorporates
many of the key elements of the other architectures while fostering a high degree of
interoperability across diverse assets and platforms. Among the several IoT
architectures reviewed in [38], the four-layer architecture (Application, Transport,
Network, and Perception layers) is the one most often considered by researchers to
address security challenges and solutions at each layer. Moreover, the most widely
used IoT architectures are three-tier/layer systems comprising a perception/hardware
layer, a network and communication layer, and an application interfaces and services
layer. Additionally, the Open Web Application Security Project (OWASP) [39]
identified attack vectors using the three layers of an IoT system: the hardware,
communication links, and interfaces/services layers. Thus, as shown in Figure 5, IoT
security mitigation techniques should be implemented within a security architecture
covering all layers of the IoT architecture [40].
According to the IoT security architecture, there are security issues and concerns at
each of the three IoT layers. Because of its position in the architecture, each layer has
its own set of security needs. However, because the layers are all interconnected, if
one is compromised, the others may suffer as well. The goal of IoT security is to
protect customers' privacy, confidentiality, data integrity, infrastructure, and IoT
device security, as well as the availability of services. The following subsection
discusses the IoT security issues and threats at each layer.

Figure 5.
IoT security architecture [40].


4.2 Classification of IoT threats and attacks with solutions

Like any other system, IoT has general security goals and requirements:
confidentiality, integrity, AAA, availability, and non-repudiation, as already stated in
the previous sub-section. This section discusses some of the most frequent threats and
attacks at each IoT layer that might affect at least one of these criteria. Table 6
provides an overview of the classification of the threats at each IoT layer along with
some proposed solutions corresponding to these threats [41–44].

4.3 State-of-the-art IoT security mitigations

The primary goal of implementing security mitigation is to ensure the privacy,
confidentiality, and security of IoT users, infrastructures, data, and devices, as well as
the availability of the services provided by an IoT ecosystem. As a result, mitigations
and countermeasures are often implemented in accordance with the traditional threat
vectors.
In the above sub-section, some empirically based solutions were listed in Table 6
corresponding to each given threat or attack. Based on the studies performed in
[11, 45–47], it is observed that ubiquitous state-of-the-art technologies such as
Blockchain, Fog Computing, Edge Computing, SDN, and Artificial Intelligence can be
used to enhance security in an IoT environment. These technologies are vital and
have enormous potential for addressing the IoT ecosystem's security concerns.
Blockchain (BC): A blockchain is a special kind of database that differs from a
standard database in the way it saves data: records are stored in a series of blocks
that are linked together to form a chain. IoT devices capture data from sensors in real
time, and BC provides data security by establishing a distributed, de-centralized, and
shared ledger [48]. Due to its critical operational properties, such as distributed
functionality, de-centralized behavior, encrypted communication, embedded
cryptography, and authorized access, it provides security solutions against a variety
of threats across the different IoT layers, such as disclosure of critical information,
device compromise, malicious data injection, tag cloning, node cloning, unauthorized
access, software modification, data manipulation, spoofing, session hijacking, false
data injection, and brute-force attacks.
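As a minimal sketch of the "series of linked blocks" idea, assuming nothing beyond Python's standard library, the code below chains hypothetical sensor readings by embedding each block's hash in its successor, so tampering with stored data is detectable. It is a toy model: the consensus protocol, peer network, and signatures of a real distributed ledger are omitted.

```python
import hashlib
import json
import time

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers its payload and the previous hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any modified block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Chain three hypothetical sensor readings together.
chain = [make_block({"sensor": "temp-01", "value": 21.5}, prev_hash="0" * 64)]
chain.append(make_block({"sensor": "temp-01", "value": 21.7}, chain[-1]["hash"]))
chain.append(make_block({"sensor": "temp-01", "value": 21.6}, chain[-1]["hash"]))

print(verify_chain(chain))        # True
chain[1]["data"]["value"] = 99.9  # malicious data injection
print(verify_chain(chain))        # False: tampering is detected
```

Because each block's hash depends on its predecessor's hash, altering any stored reading invalidates every later block, which is the property that makes data manipulation and false data injection detectable.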
Fog computing (FC): Fog computing moves processing, storage, and intelligent
control close to the data devices themselves. Its vast capabilities for processing,
storing, and managing voluminous data near its source can help prevent threats and
attacks such as hardware failures, eavesdropping, device compromise, disclosure or
leakage of critical information, node tampering, node capture attacks, node
replication, battery drainage attacks, illegal access, DoS and DDoS, and MITM.
Edge Computing (EC): In edge computing, data are processed within the network or
on the device itself. Data movement is reduced compared with fog computing, which
alleviates security concerns. Real-time services such as intrusion detection, identity
recognition, and access management enable edge computing to strengthen security
against a variety of threats and attacks, including battery drain, hardware failure,
eavesdropping, node capture, DoS and DDoS, SQL injection, jamming, malicious
attacks, virtualization attacks, data integrity violations, cloud flooding attacks, and
illegal access.
SDN: Software-defined networking is the preferred method of managing network
security in a variety of application domains, including smart homes, businesses, and
e-health care systems. The control plane and data plane refer to the two primary tasks
of switches/routers: the control plane determines where traffic should be routed,
whereas the data plane forwards traffic to a specific destination. The two planes are
coupled in conventional networking but separated in an SDN architecture: the data
plane runs on hardware, while the control plane runs in software and is logically
centralized. SDN is capable of monitoring the network and detecting harmful
activity, identifying compromised nodes and separating them from the rest of the
network. Flow statistics in SDN architectures have been employed to detect a variety
of anomalies, including DDoS attacks, port scanning, and worm spreading [49].

Layer | Threat/attack | Description | Solution
Perception layer | Eavesdropping | Also known as a sniffing or spying attack; an intruder attempts to steal information sent by the devices. | Deploying an intrusion detection system.
Perception layer | Replay attack | An intruder listens to the transmission between sender and receiver and steals legitimate data from the sender. | Using one-time passwords, session keys, and timestamps.
Perception layer | RF jamming | RFID tags may be exploited via a DoS attack in which the RF transmission is disrupted by excessive noise signals. | Encryption and authentication.
Perception layer | Node capture | An attacker takes control of a key node, such as a gateway node, to use its resources. | Authentication and access control.
Perception layer | Fake node and malicious data injection | An attacker modifies the system by adding a node and injecting bogus data. The created node drains vital energy from genuine nodes and may gain control of them, thus destroying the network. | Authentication and access control.
Network layer | Sybil attack | The attacker controls and changes a node so that it presents multiple identities, compromising a large portion of the system and producing misleading information about redundancy. | Trusted certificates based on a central certification authority.
Network layer | Sinkhole attack | The attacked node presents itself as a strong node, so nearby nodes and devices prefer it for communication or as a forwarding node for data routing; it thus acts as a sinkhole, attracting all traffic. | Intrusion detection system and strong authentication techniques.
Network layer | Denial of Service (DoS) attack | The targeted system's resources are exhausted by an attacker's flood of useless traffic, rendering the network inaccessible to its users. | Configuring a firewall that denies ping requests, or using AES encryption.
Network layer | Man-in-the-Middle attack | The attacker pretends to be the original sender, making the recipient believe the message came from the legitimate source. | Using high-level encryption and digital signatures.
Network layer | RFID spoofing | Attacks designed to transfer malicious data into the system by gaining access to it; RFID spoofing and IP spoofing are examples of spoofing attacks in IoT systems. | Authentication protocols.
Network layer | Unauthorized access | An unauthorized person may gain access to an IoT device over the network. | Authentication and access control.
Application layer | Malicious code attacks | The attacker first inserts malicious code or scripts into the system and then steals user data by executing them. | Run-time inspection by a firewall.
Application layer | Cross-site scripting | Client-side scripts, such as JavaScript, may be injected into a trusted website by an attacker, who can then alter the application's content and illegally use original data. | Validating user input and the input rendered by the web page.
Application layer | Phishing attack | The attacker spoofs legitimate users' data to obtain usernames, email addresses, and passwords by creating a fake e-mail or website where legitimate users log in. | Using anti-phishing prevention techniques.
Application layer | Botnet | A hacker takes over a network of devices and controls them from a single access point. | Using a secure router encryption protocol, such as WPA2.
Application layer | SQL injection | SQL script is used to log into IoT devices and applications. | Programming the login page using parameterized statements.

Table 6.
Common IoT threats, descriptions, and solutions.
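As an illustration of the parameterized-statement mitigation listed for SQL injection in Table 6, the sketch below contrasts unsafe string concatenation with a parameterized query. It uses Python's built-in sqlite3 module as a stand-in for an IoT application's database; the table, column names, and payload are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

name, password = "admin", "' OR '1'='1"  # classic injection payload

# UNSAFE: the payload rewrites the WHERE clause and bypasses the check.
unsafe = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
          % (name, password))
print(conn.execute(unsafe).fetchall())  # returns the admin row!

# SAFE: placeholders bind the payload as data, never as SQL syntax.
safe = "SELECT * FROM users WHERE name = ? AND password = ?"
print(conn.execute(safe, (name, password)).fetchall())  # returns []
```

With placeholders, the driver treats the attacker-supplied string purely as a value, so the login check fails as it should.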
Artificial intelligence (AI): The use of artificial intelligence is growing in
cybersecurity because it can help protect systems from cyber threats in a more
dynamic way. AI is most frequently employed in cybersecurity for intrusion
detection, which involves studying traffic patterns and looking for activities
indicative of a threat. With the growth of IoT technology, AI has received
considerable attention. As a result of this expansion, AI techniques such as machine
learning, support vector machines, decision trees, linear regression, and neural
networks have been integrated into IoT cybersecurity applications to detect threats
and prospective attacks. AI is viable for IoT security, particularly for four critical
risks: intrusion detection, defense against DoS/DDoS attacks, device authentication,
and virus detection [50]. The following section discusses the role of AI techniques
and their comparative study for IoT security.

5. Comparative study of AI categories used to mitigate industrial IoT security

AI is a promising approach that can be employed to mitigate the security challenges
faced by autonomous IoT systems. As per [51], secure solutions can be improved
through AI approaches that predict future threats. The researchers point out
generative adversarial networks (GANs), which use a generator and a discriminator:
the generator adds synthetic samples to the real data, whereas the discriminator's
purpose is to filter the fake samples out of the original data. The suggested AI-based
solutions are of the data-driven type: support vector machines (SVM), neural
networks (NN), artificial neural networks (ANN), and recurrent neural networks
(RNN).
A framework with an AI-based reaction agent is proposed in [52]. The security
enhancement combines two intrusion detection systems: knowledge-based and
anomaly-based. For network pattern analysis, Weka is used as the data-mining tool,
the NSL-KDD dataset as the data source, and a distributed JRip algorithm as the
machine-learning component. For the anomaly-based IDS, the dataset is collected
from real sensor data and the model is built with the Python library scikit-learn.
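A minimal sketch of such an anomaly-based IDS is given below, assuming scikit-learn and synthetic sensor features; the actual dataset and model of [52] are not reproduced here. An Isolation Forest is trained on mostly benign traffic and then flags outlying observations.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" sensor features, e.g. packet rate and payload size.
normal = rng.normal(loc=[100.0, 512.0], scale=[10.0, 50.0], size=(500, 2))

# Train on (assumed mostly benign) historical observations.
ids = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
ids.fit(normal)

# New observations: two normal readings and one flood-like burst.
new = np.array([[98.0, 500.0], [105.0, 530.0], [900.0, 4000.0]])
print(ids.predict(new))  # 1 = normal, -1 = anomaly (the burst)
```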
The main finding of [53] is that AI can be used for IoT security mostly in intrusion
detection systems (IDS), in order to analyze the traffic and learn the characteristics of
an attack. The Naïve Bayes algorithm is most often used to classify attack data, under
the assumption that the features originate from independent events.
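The sketch below illustrates this idea with scikit-learn's Gaussian Naïve Bayes, which models each feature independently per class, exactly the independence assumption noted above. The traffic features and labels are invented for illustration only.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy traffic records: [connection duration (s), bytes sent, failed logins]
X = np.array([
    [0.2,  300, 0], [0.3,  420, 0], [0.1,  280, 1],   # benign
    [9.0, 9000, 6], [8.5, 8800, 5], [9.8, 9500, 7],   # attack
])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = benign, 1 = attack

clf = GaussianNB().fit(X, y)  # each feature modeled independently per class

print(clf.predict([[0.25, 350, 0], [9.2, 9100, 6]]))  # -> [0 1]
```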
A two-tier framework is proposed by [54] for embedded systems such as IoT systems.
The security mitigation improves the traditional host-based IDS. The machine-
learning approach is a pipeline method involving a set of algorithms, which allows
the flexibility of adjusting the ML processing and the links between the tiers.
A comprehensive survey published by [55] found that high-level encryption
techniques are not advisable for implementation in IoT systems due to resource
limitations. Therefore, the AI approach is a very strong candidate to enhance security
in IoT systems, in addition to the other existing network security protocols. Owing to
the layered nature of the IoT architecture, each layer has its specific security threats.
It has also been noticed that machine-learning approaches are more widely adopted
than knowledge-based expert systems.

Reference | Expert system | Machine learning | Security mechanism to be enhanced
[51] | ✓ | ✓ | DoS, Sybil detection, intrusion detection, MITM, malicious node
[52] | ✓ | ✓ | Intrusion detection system (IDS)
[53] | ✓ | ✓ | Mostly used in IDS and MITM
[54] |  | ✓ | Host-based IDS
[55] | ✓ | ✓ | Different layers of IoT system threats
[56] | ✓ | ✓ | IDS
[45] | ✓ | ✓ | Authentication mechanisms (access control) and detection systems
[57] | ✓ | ✓ | False data detection
[58] | ✓ | ✓ | IIoT trustworthiness (safety, security, privacy, reliability, and resilience)

Table 7.
AI branches used in IoT security solutions.


Another study published by [56] suggests that machine-learning-based security
approaches are used mostly to enhance the detection mechanism of IDS. The only
approaches that provide mitigation features are based on techniques that utilize deep
learning, such as Gaussian mixture models, SNN, FNN, and RNN, or on supervised
machine learning such as SVM. Table 7 [45, 51–58] shows that machine learning is
the AI branch most used in IoT security mechanisms, as there are huge amounts of
data to learn from.
As per the literature, AI-based methods are recommended for enhancing protection
against IoT attacks. However, most of them are not yet commercialized due to the
difficulty of their implementation. The focus of proposing different IoT security
mitigations is to introduce high-performance approaches with low cost in a real-time
environment. Moreover, dataset preparation is a critical factor that affects the
accuracy and efficiency of machine-learning approaches.

6. Conclusions

As discussed in this chapter, industries have deployed IoT technology to develop
industrial applications that add value to their businesses and consumers in terms of
performance and cost. Different business models were also reviewed to show that
standardizing the IoT business model is very difficult due to the different types of
industries and their varied requirements. As such, it is critical for industries to ensure
confidentiality, data integrity, and availability in order to guarantee data privacy and
the security of the system. However, maintaining privacy and security has emerged
as a challenge in IIoT because of the sophistication of IoT systems. This chapter
considered the most widely used three-layer IoT architecture to study and review the
various possible threats and attacks and their conventional mitigation techniques.
Conventional security mechanisms have limitations in IIoT, particularly in predicting
attacks.
State-of-the-art technologies such as Blockchain, Fog computing, Edge computing,
SDN, and AI have also been discussed as means to enhance the security levels of IIoT
systems. Artificial intelligence (AI) in particular has emerged as a promising approach
to securing IIoT-based systems because of its ability to learn from big data; it
furthermore supports data analysis and enhances security mechanisms. AI techniques
such as SVM, NN, ANN, and RNN have been reviewed and recommended to design
and improve countermeasures such as IDS. Data engineering is a critical phase in
preparing the datasets required for machine learning; it is therefore highly
recommended to consider this phase in order to achieve an effective AI deployment.
Based on the analysis presented herein, it is the authors' view that enhancing security
mechanisms through AI-based mitigation techniques remains an open challenge.

Acknowledgements

We would like to extend our appreciation to the Ministry of Higher Education,
Research and Innovation for funding this research through the block funding
program. This chapter aims to contribute to and further foster the quality of research
at the University of Technology and Applied Sciences in Oman. We extend our
gratitude to the reviewers for their insights on the submitted manuscript, which
greatly improved the chapter.

Glossary of terms

AI Artificial Intelligence
ML Machine Learning
QoS Quality of Service
ES Expert System
B2B Business to Business
STRIDE Spoofing, Tampering, Repudiation, Information disclosure, Denial
of Service and Elevation of privilege
DREAD Damage, Reproducibility, Exploitability, Affected Users,
Discoverability
SDN Software-Defined Networks
OWASP Open Web Application Security Project
Digital Twin A virtual representation of an object or system that spans its
lifecycle, is updated from real-time data, and uses simulation,
machine learning, and reasoning to help decision making.

Author details

Shadha ALAmri*, Fatima ALAbri and Tripti Sharma


University of Technology and Applied Sciences, Muscat, Oman

*Address all correspondence to: [email protected]

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.

References

[1] Bécue A, Praça I, Gama J. Artificial intelligence, cyber-threats and industry 4.0: Challenges and opportunities. Artificial Intelligence Review. 2021;54(5):3849-3886. DOI: 10.1007/s10462-020-09942-2

[2] Shojafar M, Mukherjee M, Piuri V, Abawajy J. Guest editorial: Security and privacy of federated learning solutions for industrial IoT applications. IEEE Transactions on Industrial Informatics. 2022;18(5). Available from: https://ieeexplore-ieee-org.masader.idm.oclc.org/document/9619939/ [Accessed: February 11, 2022]

[3] Swamy SN, Kota SR. An empirical study on system level aspects of Internet of Things (IoT). IEEE Access. 2020;8:188082-188134. DOI: 10.1109/ACCESS.2020.3029847

[4] Whaiduzzaman M, Mahi MJN, Barros A, Khalil MI, Fidge C, Buyya R. BFIM: Performance measurement of a blockchain based hierarchical tree layered fog-IoT microservice architecture. IEEE Access. 2021;9:106655-106674. DOI: 10.1109/ACCESS.2021.3100072

[5] Chao L, Peng X, Xu Z, Zhang L. Ecosystem of things: Hardware, software, and architecture. Proceedings of the IEEE. 2019;107(8):1563-1583. DOI: 10.1109/JPROC.2019.2925526

[6] Oteafy SMA, Hassanein HS. IoT in the fog: A roadmap for data-centric IoT development. IEEE Communications Magazine. 2018;56(3):157-163. DOI: 10.1109/MCOM.2018.1700299

[7] Arena F, Pau G. When edge computing meets IoT systems: Analysis of case studies. China Communications. 2020;17(10):50-63. DOI: 10.23919/JCC.2020.10.004

[8] Mishra D, Zema NR, Natalizio E. A high-end IoT devices framework to foster beyond-connectivity capabilities in 5G/B5G architecture. IEEE Communications Magazine. 2021;59(1):55-61. DOI: 10.1109/MCOM.001.2000504

[9] Fagan M, Marron J, Brady KG Jr, Cuthill BB, Megas KN, Herold R, et al. IoT device cybersecurity guidance for the Federal Government. NIST. 2021;800:213. Available from: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-213A.pdf [Accessed: February 9, 2022]

[10] Almtrafi S, Alkhudadi B, Sami G, Alhakami W. Security threats and attacks in Internet of Things (IoTs). International Journal of Computer Science & Network Security. 2021;21(1):107-118. DOI: 10.22937/IJCSNS.2021.21.1.15

[11] Krishna RR, Priyadarshini A, Jha AV, Appasani B, Srinivasulu A, Bizon N. State-of-the-art review on IoT threats and attacks: Taxonomy, challenges and solutions. Sustainability. 2021;13(16):9463. DOI: 10.3390/SU13169463

[12] Azrour M, Mabrouki J, Guezzaz A, Kanwal A. Internet of Things security: Challenges and key issues. Security and Communication Networks. 2021;2021. DOI: 10.1155/2021/5533843

[13] Dorsemaine B, Gaulier JP, Wary JP, Kheir N, Urien P. A new approach to investigate IoT threats based on a four layer model. In: 13th International Conference on New Technologies for Distributed Systems (NOTERE 2016). IEEE; 2016. DOI: 10.1109/NOTERE.2016.7745830

[14] Russell S, Norvig P. Artificial Intelligence: A Modern Approach. Prentice Hall; 2010

[15] Negnevitsky M. Artificial Intelligence: A Guide to Intelligent Systems. 3rd ed. Addison Wesley/Pearson; 2011

[16] Lantz B. Machine Learning with R: Expert Techniques for Predictive Modeling. Birmingham, UK: Packt Publishing Ltd; 2019

[17] Moustafa M, Choo KKR, Abu-Mahfouz AM. Guest editorial: AI-enabled threat intelligence and hunting microservices for distributed industrial IoT system. IEEE Transactions on Industrial Informatics. 2022;18(3). Available from: https://ieeexplore.ieee.org/document/9536391/ [Accessed: February 7, 2022]

[18] Gebremichael T, et al. Security and privacy in the industrial Internet of Things: Current standards and future challenges. IEEE Access. 2020;8:152351-152366. DOI: 10.1109/ACCESS.2020.3016937

[19] Chui M, Collins M, Patel M. McKinsey Report: The Internet of Things Catching up to an Accelerating Opportunity. Fluxus; 2021. Available from: https://fluxus-prefab.com/mckinsey-report-the-internet-of-things-catching-up-to-an-accelerating-opportunity/ [Accessed: February 11, 2022]

[20] Osterwalder A, Pigneur Y. Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers (The Strategyzer Series). John Wiley and Sons; 2010

[21] Dijkman RM, Sprenkels B, Peeters T, Janssen A. Business models for the Internet of Things. International Journal of Information Management. 2015;35(6):672-678. DOI: 10.1016/J.IJINFOMGT.2015.07.008

[22] Cranmer EE, Papalexi M, tom Dieck MC, Bamford D. Internet of Things: Aspiration, implementation and contribution. Journal of Business Research. 2022;139:69-80. DOI: 10.1016/J.JBUSRES.2021.09.025

[23] Suppatvech C, Godsell J, Day S. The roles of Internet of Things technology in enabling servitized business models: A systematic literature review. Industrial Marketing Management. 2019;82:70-86. DOI: 10.1016/J.INDMARMAN.2019.02.016

[24] Mansour H, Presser M, Bjerrum T. Comparison of seven business model innovation tools for IoT ecosystems. In: IEEE World Forum on Internet of Things. 2018. pp. 68-73. DOI: 10.1109/WF-IOT.2018.8355219

[25] Manyika J, Chui M, Bisson P, Woetzel J, Dobbs R. The Internet of Things: Mapping the Value beyond the Hype. McKinsey Global Institute; 2015

[26] George G, Thampi SM. A graph-based security framework for securing industrial IoT networks from vulnerability exploitations. IEEE Access. 2018;6:43586-43601. DOI: 10.1109/ACCESS.2018.2863244

[27] Fang L, Zhang H, Li M, Ge C, Liu L, Liu Z. A secure and fine-grained scheme for data security in industrial IoT platforms for smart city. IEEE Internet of Things Journal. 2020;7(9). Available from: https://ieeexplore.ieee.org/document/9104725/ [Accessed: February 6, 2022]

[28] Tariq U, Aseeri AO, Alkatheiri MS, Zhuang Y. Context-aware autonomous security assertion for industrial IoT. IEEE Access. 2020;8:191785-191794. DOI: 10.1109/ACCESS.2020.3032436

[29] Rathore S, Park JH, Chang H. Deep learning and blockchain-empowered security framework for intelligent 5G-enabled IoT. IEEE Access. 2021;9:90075-90083. DOI: 10.1109/ACCESS.2021.3077069

[30] El Kaed C, Khan I, Van Den Berg A, Hossayni H, Saint-Marcel C. SRE: Semantic rules engine for the industrial Internet-of-Things gateways. IEEE Transactions on Industrial Informatics. 2018;14(2). Available from: https://ieeexplore.ieee.org/document/8091285/ [Accessed: February 7, 2022]

[31] Iwendi C, Rehman SU, Javed AR, Khan S, Srivastava G. Sustainable security for the Internet of Things using artificial intelligence architectures. ACM Transactions on Internet Technology. 2021;21(3):73. DOI: 10.1145/3448614

[32] Microsoft. IoT Security Architecture. Microsoft Docs; 2021. Available from: https://docs.microsoft.com/en-us/azure/iot-fundamentals/iot-security-architecture [Accessed: February 11, 2022]

[33] Schrecker S, et al. Industrial Internet of Things Volume G4: Security Framework. Industrial Internet Consortium; 2016

[34] Microsoft. Azure IoT Reference Architecture. Microsoft; 2021. Available from: https://azure.microsoft.com/en-us/resources/microsoft-azure-iot-reference-architecture/ [Accessed: February 12, 2022]

[35] Aufner P. The IoT security gap: A look down into the valley between threat models and their implementation. International Journal of Information Security. 2020;19(1):3-14. DOI: 10.1007/s10207-019-00445-y

[36] Flauzac O, Gonzalez C, Nolot F. New security architecture for IoT network. Procedia Computer Science. 2015;52(1):1028-1033. DOI: 10.1016/J.PROCS.2015.05.099

[37] ENISA. Baseline Security Recommendations for IoT. ENISA; 2017

[38] Alshohoumi F, Sarrab M, AlHamadani A, Al-Abri D. Systematic review of existing IoT architectures security and privacy issues and concerns. International Journal of Advanced Computer Science and Applications. 2019;10(7):232-251. DOI: 10.14569/IJACSA.2019.0100733

[39] OWASP. OWASP IoT Security Verification Standard. OWASP Foundation; 2022. Available from: https://owasp.org/www-project-iot-security-verification-standard/ [Accessed: February 12, 2022]

[40] Zhang J, Jin H, Gong L, Cao J, Gu Z. Overview of IoT security architecture. In: 2019 IEEE 4th International Conference on Data Science in Cyberspace (DSC). 2019. pp. 338-345. DOI: 10.1109/DSC.2019.00058

[41] Sharma N, Prakash R, Rajesh E. Different dimensions of IoT security. International Journal of Recent Technology and Engineering. 2020;8(5):2277-3878. DOI: 10.35940/ijrte.E5893.018520

[42] Ahanger TA, Aljumah A. Internet of Things: A comprehensive study of security issues and defense mechanisms. IEEE Access. 2019;7:11020-11028. DOI: 10.1109/ACCESS.2018.2876939

[43] Litoussi M, Kannouf N, El Makkaoui K, Ezzati A, Fartitchou M. IoT security: Challenges and countermeasures. Procedia Computer Science. 2020;177:503-508. DOI: 10.1016/J.PROCS.2020.10.069

[44] Ávila K, Sanmartin P, Jabba D, Gómez J. An analytical survey of attack scenario parameters on the techniques of attack mitigation in WSN. Wireless Personal Communications. 2022;122(4):3687-3718. DOI: 10.1007/s11277-021-09107-6

[45] Restuccia F, D'Oro S, Melodia T. Securing the Internet of Things in the age of machine learning and software-defined networking. IEEE Internet of Things Journal. 2018;5(6):4829-4842. DOI: 10.1109/JIOT.2018.2846040

[46] Hassija V, Chamola V, Saxena V, Jain D, Goyal P, Sikdar B. A survey on IoT security: Application areas, security threats, and solution architectures. IEEE Access. 2019;7:82721-82743. DOI: 10.1109/ACCESS.2019.2924045

[47] Noor MM, Hassan WH. Current research on Internet of Things (IoT) security: A survey. Computer Networks. 2019;148:283-294. DOI: 10.1016/J.COMNET.2018.11.025

[48] Miller D. Blockchain and the Internet of Things in the industrial sector. IT Professional. 2018;20(3):15-18. DOI: 10.1109/MITP.2018.032501742

[49] Giotis K, Argyropoulos C, Androulidakis G, Kalogeras D, Maglaris V. Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechanism on SDN environments. Computer Networks. 2014;62:122-136. DOI: 10.1016/J.BJP.2013.10.014

[50] Wu H, Han H, Wang X, Sun S. Research on artificial intelligence enhancing Internet of Things security: A survey. IEEE Access. 2020;8:153826-153848. DOI: 10.1109/ACCESS.2020.3018170

[51] Puthal D, Mishra A, Sharma S. AI-driven security solutions for the internet of everything. IEEE Consumer Electronics Magazine. 2021;10(5):70-71. DOI: 10.1109/MCE.2021.3071676

[52] Bagaa M, Taleb T, Bernabe JB, et al. A machine learning security framework for IoT systems. IEEE Access. 2020;8:114066-114077. Available from: https://ieeexplore.ieee.org/abstract/document/9097876/ [Accessed: February 4, 2022]

[53] Kuzlu M. Role of artificial intelligence in the Internet of Things (IoT) cybersecurity. Discover Internet of Things. 2021;1(1):1-14. DOI: 10.1007/s43926-020-00001-4

[54] Liu M, Xue Z, He X. Two-tier intrusion detection framework for embedded systems. IEEE Consumer Electronics Magazine. 2021;10(5):102-108. DOI: 10.1109/MCE.2020.3048314

[55] Zaman S, et al. Security threats and artificial intelligence based countermeasures for Internet of Things networks: A comprehensive survey. IEEE Access. 2021;9:94668-94690. Available from: https://ieeexplore-ieee-org.masader.idm.oclc.org/document/9456954/ [Accessed: February 5, 2022]

[56] Jayalaxmi P, Saha R, Kumar G, Kumar N, Kim TH. A taxonomy of security issues in industrial Internet-of-Things: Scoping review for existing solutions, future implications, and research challenges. IEEE Access. 2021;9:25344-25359. DOI: 10.1109/ACCESS.2021.3057766

[57] Aboelwafa MMN, Seddik KG, Eldefrawy MH, Gadallah Y, Gidlund M. A machine-learning-based technique for false data injection attacks detection in industrial IoT. IEEE Internet of Things Journal. 2020;7(9). Available from: https://ieeexplore-ieee-org.masader.idm.oclc.org/document/9084134/ [Accessed: February 11, 2022]

[58] Hassan MM, Gumaei A, Huda S, Almogren A. Increasing the trustworthiness in the industrial IoT networks through a reliable cyberattack detection model. IEEE Transactions on Industrial Informatics. 2020;16(9). Available from: https://ieeexplore-ieee-org.masader.idm.oclc.org/document/8972480/ [Accessed: February 11, 2022]
