Abstract
Network security ensures that essential and accessible network assets are protected
from viruses, key loggers, hackers, and unauthorized access. The intrusion detection
system (IDS) is one of the most widely used tools for network security management.
However, it has been shown that current IDSs are challenging for network professionals
to use. The interface is a crucial aspect that influences the effectiveness of an IDS,
because security software such as an IDS is only effective if users can evaluate and act
on what it reports. Usability testing is essential for supporting users in successful
interaction with and utilization of IDS, because users find it difficult to assess and use
the quality of its output. Usability engineers are responsible for the majority of
usability evaluations. Software engineers in small and large businesses must master
multiple usability paradigms, which is more difficult than teaching usability engineers
how to write software. The Cognitive Analysis of Software Interface (CASI) technology
was created as a solution for software engineers. This system aids software engineers in
evaluating IDS based on user perception and evaluation perspectives. This study also
discusses a large body of research on software interfaces and assessment procedures to
evaluate novel heuristics for IDS. Finally, additional interface challenges and new ways
of evaluating software usability are discussed.
Topic Subject Areas: intrusion detection system (IDS), usability.
Introduction
The Internet has evolved rapidly in recent years, and users have been confronted with
network security issues. Many firms are concerned about protecting their valuable and
private data from threats inside and outside the organization. Research shows that human
and organizational factors have an impact on network security. Security is a challenge
for network practitioners. As a result, they employ specific tools, such as intrusion
detection systems, firewalls, antivirus software, and Nmap, among others, to reduce
or completely eliminate intrusions. An intrusion detection system (IDS) is critical for
detecting malicious behavior quickly and supporting real-time attack response.
However, many intrusion detection systems are challenging to use, and users cannot take
advantage of all of their functions. These issues must be addressed to boost IDS
efficiency. One option is to create an effective solution that may assist network
administrators in controlling security. Usability is a critical factor that has a
significant impact on security management. Software developers acknowledge that the
software interface is critical to its success, and this success can be measured in terms
of software usability. Usability describes the quality of a user's experience when
interacting with products or systems, including websites, software, devices, or
applications. Usability is an essential term in the human-computer interaction (HCI)
discipline. One option to overcome the issues of IDS is to create a user-friendly
interface to assist network experts in effectively managing security.
Usability
The way businesses and people interact has changed since the creation of services such
as Twitter; in the same way, usability has become a crucial aspect of software quality.
ISO describes it as the degree to which specified users can use a product to achieve
specified goals with effectiveness, efficiency, and satisfaction in a specified context
of use. It is also the capacity of the product to be understood, learned, and enjoyed by
the user when used under the endorsed conditions []. These definitions emphasize
usability as a key property of software that enables users to do tasks quickly and
without any issues. Nielsen lists five characteristics essential to usability:
learnability, efficiency, memorability, errors, and satisfaction.
From the user's point of view, usability ensures that the output produced is easy to
measure, use, and remember, and that the objectives of effectiveness, efficiency,
safety, utility, learnability, and memorability are met. The focus of HCI has grown,
and the task-focused usability paradigm has expanded to include a refined and hedonic
user experience (UX) paradigm.
Various methodologies assess the usability of software. There are two families of
usability testing techniques: usability evaluation methods and usability testing
methods. In usability evaluation, usability issues are identified by usability
professionals. In usability testing methods, however, usability issues are found from
users' perceptions of how they use the system and interact with the software interface.
Heuristic evaluation
Testing applications is an essential step in making them better. Heuristic evaluation
is a well-known, low-cost approach to usability testing. According to some authors [],
heuristics and recommendations can be used interchangeably, and a large share of the
usability flaws can be identified in this way []. However, a collection of heuristics
has never been designed expressly for evaluating security-related applications. The
project's objective at this stage is to create criteria for assessing usability for this
particular problem space. These strategies are used to evaluate the quality of existing
products and to discover demands that products can meet. Snort was selected as the
candidate application for the heuristic evaluation. Snort is a simple yet popular
intrusion detection system that can track and record IP traffic. Because it is a
command line-based tool, a web-based front end created by Silicon Defense was used
for the evaluation.
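Because Snort's native output is a stream of terse, command-line alert records, part of the usability burden falls on how a front end presents them. The following Python sketch is purely illustrative (the alert line, its "fast" format layout, and the field names are assumptions, not material from this chapter); it shows how raw alert text could be turned into labelled fields that a web interface can display:

```python
import re

# Example line in Snort's "fast" alert format (illustrative only).
ALERT = ('08/15-10:32:01.123456  [**] [1:1000001:1] Possible web attack [**] '
         '[Priority: 1] {TCP} 192.168.1.10:51432 -> 10.0.0.5:80')

# Regular expression for the assumed fast-alert layout.
PATTERN = re.compile(
    r'(?P<time>\S+)\s+\[\*\*\]\s+\[(?P<sid>[^\]]+)\]\s+(?P<msg>.*?)\s+\[\*\*\]'
    r'.*?\{(?P<proto>\w+)\}\s+(?P<src>\S+)\s+->\s+(?P<dst>\S+)')

def parse_alert(line):
    """Turn one raw alert line into a dictionary that a GUI could render."""
    match = PATTERN.search(line)
    return match.groupdict() if match else {'raw': line}

if __name__ == '__main__':
    print(parse_alert(ALERT))
```

A front end built this way can sort, filter, and colour-code alerts instead of forcing the analyst to read raw log lines, which is exactly the kind of output-usability problem the heuristics target.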
Usability testing can be done in various ways, including cognitive walkthroughs,
formal usability inspections, heuristic evaluations, and pluralistic walkthroughs.
Heuristic evaluation was additionally used to assess the usability of IDS, with
heuristics developed specifically for IDS. Heuristic evaluation entails a small group
of usability specialists looking through the system and comparing it against usability
standards. Users can assess the ease of use of IDS and identify and address usability
matters more successfully by employing the new heuristics.
However, given that assessment can be expensive in terms of time, money, and
human effort, semi-automated or fully automated evaluation is a viable option to
improve current assessment approaches. Additionally, research reveals [] the
significance of a dedicated framework in facilitating usability assessment.
For software projects, using an automated or semi-automated review framework is
essential to guarantee the project's effectiveness, especially when the deadline is
tight. To guarantee project success, one option is to further improve manual evaluation
using automation or semi-automation. This will help assessors follow guided processes
and catch more mistakes significantly faster. Finally, the assessment's findings are
summarized and presented to the design team, along with suggestions for improvement.
Figure 1.
IDS architectural data flow diagram.
A protocol-based IDS (PIDS) is a system that is frequently placed at the front end of
the server, monitoring and interpreting the communication between the client or device
and the server. It attempts to protect the web server by consistently examining the
HTTPS protocol stream and accepting the related HTTP protocol. This system would need
to sit at this interface for HTTPS to be monitored, because HTTPS traffic is not
decrypted until it reaches the web presentation layer.
Figure 2.
Life cycle or system flow diagram.
Figure 3.
Internal life cycle model.
IDPS methodologies
IDPSs utilize various approaches to detect changes in the systems they monitor.
External attacks or internal staff misuse can cause these changes. Four techniques
stand apart among the numerous others and are ordinarily utilized, including the
following:
• Signature-based,
• Anomaly-based,
• Hybrid-based.
Each methodology follows a similar broad framework; the main variations lie in how
they analyze data from the observed environment to determine whether a policy
violation has happened, as explained in Table [].
Table 1.
Best intrusion detection software tools and features.
Signature-based IDPSs are not difficult to evade because they depend on existing
attacks and require new signatures before they can identify new ones. Attackers can
easily escape signature-based detection if they modify known attacks and target
systems that have not been updated with new signatures that identify the alteration.
Signature-based techniques demand significant resources to maintain awareness of the
potentially endless number of changes to known risks. On the other hand, systems based
on signatures are easier to modify and enhance, since the signatures or rules used by
them can still be changed.
1. With the advancing variety of attacks, the two classic IDSs referenced above cannot fully safeguard our information systems. New strategies for combining different intrusion detection systems to further improve their effectiveness have been designed. Research has shown that combined algorithms perform well compared with single algorithms [].
2. Based on a C decision tree classifier and a one-class support vector machine (OC-SVM), researchers developed a hybrid detection model. Two key components made up the model []. The first component, the misuse detection model, was developed using the decision tree classifier. The second component was developed with OC-SVM for anomaly detection. The NSL-KDD and Australian Defence Force Academy (ADFA) datasets were used by the experts to evaluate the model, and the results revealed that the hybrid model performed better than single-based models.
3. For smarter home security, the experts [] suggested a hybrid intrusion detection model. The model was divided into two parts. The first part used machine learning algorithms to recognize intrusions in real time; random forest, XGBoost, decision tree, and K-nearest neighbor algorithms were used in this part. The misuse intrusion detection approach was used in the second part to find known attacks. Both the CSE-CIC-IDS and NSL-KDD datasets were used to test the model's performance. For the detection of both network intrusions and client-based anomalies in smart homes, the model delivered an impressive performance.
4. The authors [] developed an intrusion detection framework by combining the firefly and Hopfield neural network (HNN) algorithms. The analysts utilized the firefly algorithm to identify denial-of-sleep attacks through node clustering and authentication.
5. The researchers [] proposed a hybrid detection framework for the vehicular ad hoc network (VANET). The model comprises two parts: the researchers deployed a classification algorithm in the first part and a clustering algorithm in the second part. In the first stage, they utilized random forest to identify known attacks through classification. For the next step, the detection of anomalous intrusions, they applied a weighted K-means computation. The most recent dataset, the CICIDS dataset, was used to evaluate the model. The experts suggested conducting additional testing of the model under real-world conditions. In another work, they also combined the random forest algorithm with an unsupervised clustering algorithm based on coresets.
6. The author [] proposed a hybrid detection model based on a genetic algorithm and an artificial immune system (GAAIS) for intrusion detection on the ad hoc on-demand distance vector-based mobile ad hoc network (AODV-based MANET). The model was assessed using different routing attacks. Compared with other models, it had superior detection rates and reduced false alarm rates.
7. The scientists [] integrated the firefly algorithm with a genetic algorithm for feature selection in MANETs. To classify the selected features in the first phase of the model as either intrusion or normal, the specialists utilized a replicator neural network for classification. The model's performance was compared with that of a fuzzy-based IDS. The model beat the fuzzy-based IDS in accuracy as well as precision.
The objective of the literature study is to look into IDS and usability to find answers
to the research questions. Users will perform a qualitative analysis of IDS usability
to identify any usability challenges and determine the best course of action. To
advance the usability of IDS, as the figure below shows, users will also need to
ascertain the present state of the art and methods [].
To improve usability, users want to identify and study the IDSs that are used the most
frequently, such as Snort, KFSensor, and EasyIDS. KFSensor is a viable host-based
intrusion detection system that acts as a honeypot to attract and detect hackers by
simulating weak systems. A few fundamental aspects of IDS are examined during this
study, including user types, usability issues, and user interaction with IDS.
It is critical to understand who the actual IDS users are to gain meaningful user input
in defining the heuristics for IDS. In addition, this will aid in identifying IDS
usability issues and determining ways to improve IDS usability based on user
perceptions.
Figure 4.
Selection and study of IDS.
Based on the responses to the survey questionnaire, the problems users have when using
IDS are determined. This will support the creation of fresh IDS heuristics. The
heuristics are broken down into various groups, including:
a. Installation heuristics.
b. Interface heuristics.
c. Output heuristics.
d. Customization heuristics.
e. Help heuristics.
After the heuristics have been designed, it is time to scrutinize them in the lab. The
good thing about CASI is that the user may use the provided algorithms at any point in
the IDS process, including the output and customization phases. This study aims to
evaluate CASI's performance in identifying and fixing usability flaws compared with
conventional heuristics.
Following lab testing, the proposed heuristics are ready for empirical testing, in
which network experts can participate in IDS interface assessment challenges and the
outcomes can be collected. At the same time, another IDS interface mock-up is assembled
and tried out for assessment depending on the experimental outcomes. Assuming that
network experts find the interface engaging and easy to use, it will ultimately
supplant the previous IDS interface.
Observer assessment of IDS can be achieved via CASI and Nielsen's [] usability
heuristics applied to IDS, to decide the number of usability flaws found and eliminated
from the IDS interface. Nielsen's heuristics were picked because they are the most
routinely used. The objective is to contrast how CASI functions compared with the
researchers' heuristics. A few factors should be considered in this comparison,
including the number of usability defects identified, time, reliability, efficiency,
and accuracy.
Information is sent from the client to the server through a web request. The data is
sent using HTTP request header fields or request parameters. The request header fields
contain client request control data, while the request parameters contain additional
client data required by server-side programs to perform an activity. GET and POST are
the two standard methods for passing parameters to the server. Parameter values are
provided in the query string of the URL in a GET request, and these values are carried
in the request body in a POST request. The client program typically defines the header
fields. However, the parameter values are either given by the user or previously set by
server-side programs, for example, through cookies and hidden fields. The underlying
challenge with web application security is that client input can be truly variable and
similarly complex, making it hard to match it against a legitimate set of values.
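To make the GET/POST distinction concrete, here is a minimal sketch using only Python's standard library (the URL and parameter names are invented for illustration); it builds the same parameters once as a query string and once as a request body:

```python
from urllib.parse import urlencode
from urllib.request import Request

params = {"item": "router", "quantity": "2"}   # example request parameters

# GET: parameter values travel in the query string of the URL.
get_request = Request("http://example.com/buy?" + urlencode(params), method="GET")

# POST: the same values travel in the request body instead.
post_request = Request("http://example.com/buy",
                       data=urlencode(params).encode("utf-8"),
                       headers={"Content-Type": "application/x-www-form-urlencoded"},
                       method="POST")

print(get_request.full_url)          # parameters visible in the URL
print(post_request.data.decode())    # parameters carried in the body
```

The two requests deliver identical data; what changes is where an IDS has to look for it.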
These values exist in a restricted range and can be either generic, that is, common to
all applications, or tailored to the application's business logic. The first group
contains an assortment of common values, for example, the header fields Accept,
Accept-Charset, and Accept-Language. Since these values are regularly similar across
applications, they can be checked against a signature-based IDS (SIDS) allow list. The
last group of parameters contains values for HTML controls, such as drop-down lists
and checkboxes. These controls assist users in choosing values from a restricted set of
choices. However, the business case for the application leaves the value set of these
controls open. Because of an assortment of factors, keeping up the whitelist to
evaluate such parameter values can become a tedious activity for SIDS. First, the
whitelist becomes excessively specific to the set of values that match the business
logic. Second, this list may be huge depending on how many controls an application has.
Third, keeping it up to date is troublesome since the permissible set of values could
shift rapidly as business logic changes. However, learning-based assistance can be
beneficial in this situation, as it allows the system to become familiar with the valid
values of parameters.
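A minimal sketch of the allow-list idea for common header values follows (the header names are real HTTP headers, but the permitted values and the checking function are assumptions made for illustration, not the SIDS implementation discussed here):

```python
# Allow list of common request header values (illustrative values only).
ALLOWED_HEADERS = {
    "Accept": {"text/html", "application/json", "*/*"},
    "Accept-Charset": {"utf-8", "iso-8859-1"},
    "Accept-Language": {"en", "en-US", "fr"},
}

def check_headers(request_headers):
    """Return a list of (header, value) pairs that fall outside the allow list."""
    violations = []
    for name, allowed in ALLOWED_HEADERS.items():
        value = request_headers.get(name)
        if value is not None and value not in allowed:
            violations.append((name, value))
    return violations

suspicious = check_headers({"Accept": "text/html", "Accept-Language": "zz"})
print(suspicious)   # [('Accept-Language', 'zz')]
```

The tediousness the text describes shows up here as the cost of keeping ALLOWED_HEADERS current for every application and control.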
Application values
This class covers values given by server-side programs that should not be changed on
the client side. Developers use cookies, hidden fields, and query strings to store a
range of significant information, for example, item price and quantity, or the session
ID. IDS should check that these values match those set by the application.
Signature-based IDSs cannot detect changed values because they need an attack
signature, and changed values frequently resemble real information. Anomaly-based
systems, on the other hand, can be used to learn which parameters should not be
changed on the client side. Parameter-tampering attacks were found in the research
described.
Web applications typically have a large number of users with varying levels of
privileges. These privileges are supervised by the authorization process, which ensures
that the user is only performing legal actions. Applications follow each client-server
connection and map each request to a specific user before deciding whether to handle
it. Every time a user logs in to the program, a session ID is assigned the
responsibility of identifying that user's requests in the request pool and appending
them to the user.
Utilizing detection systems allows the application to provide various users with
distinct privilege sets. IDS should initially have the ability to follow user sessions
to relate user requests to the suitable session. IDS should also observe resource
utilization and user actions during a session. Unauthorized access can be acquired
with a well-crafted privilege escalation attack. This capability helps the IDS in
monitoring the state of a single session. Finally, a fully stateful strategy can
associate the sequence of requests with a given user, while a stateless IDS treats
each request independently and does not track them. Systems that lack the means to
connect the current request to previously received requests will probably not
recognize state maintenance and authorization violations.
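The stateful behaviour described above can be sketched as follows (session IDs, privilege levels, and action names are hypothetical; the point is only that each request is tied back to a tracked session instead of being judged in isolation):

```python
# Privilege level required for each action (hypothetical values).
REQUIRED_LEVEL = {"view_report": 1, "edit_rule": 2, "delete_user": 3}

# Session table populated at login time: session ID -> (user, privilege level).
sessions = {"a1b2c3": ("alice", 2), "d4e5f6": ("bob", 1)}

def authorize(session_id, action):
    """Stateful check: tie the request to its session before allowing the action."""
    if session_id not in sessions:
        return False, "unknown session"            # cannot relate request to a user
    user, level = sessions[session_id]
    if level < REQUIRED_LEVEL.get(action, 99):
        return False, f"possible privilege escalation by {user}"
    return True, f"{user} allowed to {action}"

print(authorize("d4e5f6", "delete_user"))   # flagged: bob lacks the privilege
print(authorize("a1b2c3", "edit_rule"))     # allowed
```

A stateless IDS sees only the individual request and has no equivalent of the sessions table, which is why it misses state maintenance and authorization violations.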
Conclusion
Abbreviations
Author details
© The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/.),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
References
[] Phases of the System Development Life Cycle Guide. Available from: https://www.clouddefense.ai/blog/system-development-life-cycle [Accessed: August]
[] Lazarevic A, Kumar V, Srivastava J. Intrusion detection: A survey. In: Managing Cyber Threats. Massive Computing.
[] Masood Butt S, Majid MA, Marjudi S, Butt SM, Onn A, Masood Butt M. CASI method for improving the usability of IDS. Science International (Lahore).
[] Best Intrusion Detection Software - IDS Systems - DNSstuff. Available from: https://www.dnsstuff.com/
[] Sumaiya Thaseen I, Aswani Kumar C. Intrusion detection model using fusion of chi-square feature selection and multi class SVM. Journal of King Saud University - Computer and Information Sciences.
[] Khraisat A, Gondal I, Vamplew P, Kamruzzaman J. Survey of intrusion detection systems: Techniques, datasets and challenges. Cybersecurity.
[] Khan A, Sohail A, Zahoora U, Qureshi AS. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review.
Chapter
Abstract
1. Introduction
Researchers have predicted about an eight percent increase in soft-error rate per
logic state bit in each technology generation [1]. According to the International
Technology Roadmap for Semiconductors (ITRS) 2005 and 2011, reduction in
dynamic power, increase in resilience to faults and heterogeneity in computing archi-
tecture pose a challenge for researchers.
Figure 1.
SERs at various technology nodes.
According to the International Roadmap for Devices and Systems (IRDS) 2017, device
scaling will touch the physical limits, with failures reaching one failure per hour,
as shown in Figure 1. The soft error rate
(SER) is the rate at which a device or system encounters or is predicted to encounter
soft errors per unit of time, and is typically expressed as failures-in-time (FIT). It can
be seen, from Figure 1 [2–4] that, at 16 nm process node size, a chip with 100 cores
could come across one failure every hour due to soft errors.
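As a rough worked illustration (assuming the standard definition of FIT as failures per 10^9 device-hours; these figures are not taken from the cited roadmaps): a chip that fails once per hour exhibits

1 \text{ failure/hour} = 10^{9} \text{ FIT at the chip level}, \qquad \text{FIT per core} \approx \frac{10^{9}}{100} = 10^{7},

so each of the 100 cores would have to be budgeted at roughly 10^7 FIT, which conveys how quickly the per-core soft-error budget is consumed at small process nodes.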
This decrease in process node size and increase in integration density as seen in
Figure 1, has the following effects.
1. The number of cores per chip has increased. Due to the increase in the number of cores, the size
of the last level cache (LLC) has increased. For example, NVIDIA's GT200
architecture GPU did not have an L2 cache, while the Fermi, Kepler, and Maxwell
GPUs have 768 KB, 1536 KB, and 2048 KB LLCs, respectively [5].
Similarly, Intel's 22 nm Ivytown processor has a 37.5 MB static random-access
memory (SRAM) LLC (Rusu 2014) [6] and the 32 nm Itanium processor had a
32 MB SRAM LLC (Zyuban 2013) [7]. The consequence of larger cache sizes has been an
exponential increase in SER.
2. Low swing interconnect circuits are being used in CMOS transmission systems.
They have proved to be an energy-efficient signalling option compared to
conventional full swing interconnect circuits. However, incorrect sampling of
the signals in low swing interconnect circuits, together with interference and
noise sources, can induce transient voltages in wires or internal receiver nodes,
resulting in an incorrect value being stored at the receiver output latch [8].
This scenario can be envisaged as a "fault wall". In order to surmount the fault wall
scenario, reliability has been identified as a primary parameter for future multi-core
processor design [9, 10]. Similarly, ITRS 2005 and 2011, have also identified increase
in resilience to faults as a major challenge for researchers. Hence, a number of
researchers have started focusing on resilience to faults and reliability enhancement in
multi-core processors. The chapter focuses on providing fault tolerance solutions for
processor cores in multi-core systems.
2. Motivation
As seen in Figure 1, the total FIT per chip increases as the number of cores per chip
increases. In order to accommodate a higher number of cores per chip, (1) the total FIT
per chip has to be kept constant (or unchanged), and (2) the SER per core needs to
be reduced. In present-day processor cores, the frontend of the core comprises the
decode queue, instruction translation lookaside buffer, and latches. The backend of
the core comprises the arithmetic logic unit, register files, data translation lookaside
buffer, reorder buffers, memory order buffer, and issue queue. The SER from the backend
and the frontend of the core is 74.48% and 25.22%, respectively. In present processor
cores, latches are hardened [11, 12], and caches and large memory arrays are protected
using error correcting codes (ECC) [13, 14]. The SER from the backend of the processor
is higher than that from the frontend and is mainly due to the arithmetic logic unit.
The FIT from the arithmetic logic unit of the processor core has started reaching
levels that need robust fault mitigation approaches for present and future processors.
Hence, addressing the reliability issues of the core (the arithmetic logic unit in the
backend) is significant in improving the reliability of the multi-core system [15, 16].
Conventional approaches to handling soft errors consume more power and area. Hence,
the chapter focuses on using a heterogeneous model with low-cost ("low cost" denotes
the low power and smaller area of OICs) fault-tolerant cores to improve the reliability
of multi-core systems.
1. The microarchitecture consisting of control and data path for OIC is designed.
Four modes of operation in 32-bit OIC namely (a) baseline mode (b) DMR
mode (c) TMR mode and (d) TMR with self-checking subtractor (TMR + SCS)
are introduced.
3. Dynamic power, area, critical path and leakage power for four modes of OIC are
estimated and compared.
5. Area and power are estimated for multi-core system consisting of 32-bit OIC.
6. The OIC is synthesized using Quartus prime Cyclone IVE (Intel, Santa Clara,
CA) with device EP4CE115FE29C7. Number of logical elements and registers
are estimated.
7. Number of logical elements and registers in OIC and URISC++ are compared.
8. Using Weibull distribution, the reliability for the four modes of OIC are
evaluated and compared.
9. Using Weibull distribution, the reliability for OIC and URISC++ are evaluated
and compared.
11. Yield analysis for proposed multi-core system with OICs is presented.
The remaining portion of the chapter is organized as follows: Section titled “3. An
Overview on 32-bit OIC” presents (a) an outline of 32-bit OIC (b) one instruction set of
OIC (c) modes of operation of OIC (d) microarchitecture of OIC (e) microarchitecture
of multi-core system consisting of OIC (f) instruction execution flow in multi-core
system using one instruction cores (MCS-OIC); Section titled “4. Experimental results
and discussion” presents power, area, register and logical elements estimation for OIC,
and power, area estimation for MCS-OIC; Section titled “5. Performance implications in
multi-core systems” presents performance implications at instruction level and applica-
tion level; Section titled “6. Yield analysis for MCS-OIC” presents yield estimates for the
proposed MCS-OIC; Section titled “7. Reliability analysis of 32-bit OIC” presents reli-
ability modelling of OIC and its estimate in different operational modes; the conclusion
of the chapter is presented in the Section titled “8. Conclusion”; the relevant references
are cited in the Section titled “References”.
A 32-bit OIC [17] is designed to provide fault tolerance to a multi-core system with
32-bit integer instructions of conventional MIPS cores. OIC is an integer processor.
The terms “32-bit OIC” and “OIC” are used interchangeably in this chapter. OIC exe-
cutes only one instruction, namely, “subleq – subtract if less than or equal”. The OIC
has three conventional subtractors and an additional self-checking subtractor. A con-
ventional core that detects faults in one of the functional units (i.e., ALU) sends the
opcode with operands to the OIC. In this chapter, the OIC is designed to support the
instruction set of the 32-bit MIPS core. However, it can be designed to support the 32-bit x86/
ARM instruction set by making necessary changes in the instruction decoder. The OIC
emulates the instruction by repetitively executing the subleq instruction in a
predetermined manner. There are four modes of operation in OIC and they are (a)
baseline mode (b) DMR mode (c) TMR mode and (d) TMR + Self Checking
Subtractor (SCS) or TMR + SCS mode. TMR + SCS is the “high resilience mode” of
OIC. Baseline mode is invoked only when soft error detection and correction alone are
required.
“Subleq – subtract if less than or equal” is the only instruction executed by the OIC.
The syntactic construct of the subleq instruction is given below.
Subleq A, B, C; Mem [B] = Mem [B] – Mem [A]
; If (Mem [B] ≤ 0) go to C;
Table 1.
Sequence of synthesized Subleq instruction.
It is interpreted as: “subtract the value at the memory location A from the value at the
memory location B; store the result at the memory location B; If the value at the memory
location B is less than or equal to zero, then jump to C.” The subleq instruction is Turing
complete. The instruction set of a core or processor is said to be Turing complete, if in
principle, it can perform any calculation that any other programmable computer can. As
an illustration, the equivalent synthesized subleq instructions for ADD, INC, MOV, DEC
and RSB (reverse subtract) instructions are given in Table 1.
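As an illustration of how such sequences work, the Python sketch below emulates subleq and synthesizes ADD from it using a scratch memory word Z; the three-instruction sequence shown is the textbook construction and is only assumed to correspond to the ADD entry of Table 1:

```python
mem = {"A": 7, "B": 5, "Z": 0}   # Z is a scratch word initialised to zero

def subleq(a, b):
    """Mem[b] = Mem[b] - Mem[a]; return True if the result is <= 0 (branch taken)."""
    mem[b] -= mem[a]
    return mem[b] <= 0

# ADD synthesised from subleq: B = B + A, leaving Z back at zero.
subleq("A", "Z")   # Z = -A
subleq("Z", "B")   # B = B - (-A) = B + A
subleq("Z", "Z")   # Z = 0 again

print(mem["B"])    # 12
```

The OIC control unit walks through such predetermined sequences in hardware; the point here is only that each emulated instruction reduces to a fixed, short chain of subleq steps.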
The OIC operates in four modes as mentioned above. They are (a) baseline mode
(b) DMR mode (c) TMR mode and (d) TMR + Self Checking Subtractor (SCS) or
TMR + SCS mode.
a. Baseline mode: In this mode, only the self-checking subtractor is operational. The
results from the subtractor are verified by the self-checker. If the results differ,
the subtraction operation is repeated to correct the transient faults. Transient
faults are detected and corrected in this mode. If the results do not match again,
a permanent fault is detected.
b. DMR mode: In this mode, only two subtractors are operational. The results of
the two subtractors are compared using a comparator. If the results differ, the
subtraction operation is repeated to correct the transient faults. The transient
faults are detected and corrected in this mode. If one of the two subtractors
fails, a permanent fault is detected, and the OIC switches to baseline mode.
c. TMR mode: In this mode, all three subtractors are operational. The results from
the three subtractors are compared using three comparators. The voters check
the results from the comparators and perform majority voting. To correct the
transient faults, the operations are repeated. If any one subtractor fails, the
faulty subtractor is disabled. In this mode, results from the redundant
subtractors are fed back on special interconnects to the inputs of the
multiplexer. OIC then switches to DMR mode. It is assumed that two
subtractors do not fail simultaneously. Occurrence of one permanent fault is
detected and tolerated in this mode.
d. TMR + SCS mode: TMR + SCS mode is the initial mode of operation in OIC. In
this mode, all three subtractors and SCS are operational. Both permanent and
transient faults are detected and corrected. The results of three subtractors and
SCS are compared using a comparator. If the results differ, then the entire operation
is repeated to correct the transient faults. If the results continue to differ, then OIC
switches to TMR mode (a simplified sketch of this mode-degradation logic follows this list).
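The following Python sketch condenses the compare/repeat/degrade behaviour of the modes described above (the retry limit and the way subtractor results are obtained are assumptions made purely for illustration; they are not taken from the OIC control unit):

```python
def vote(results):
    """Majority vote over subtractor outputs; None if no majority exists."""
    for candidate in set(results):
        if results.count(candidate) > len(results) // 2:
            return candidate
    return None

def tmr_step(compute_subtractors, retries=1):
    """One TMR-style step: repeat on disagreement (transient), then degrade (permanent)."""
    for attempt in range(retries + 1):
        results = compute_subtractors()          # e.g. returns [r0, r1, r2]
        majority = vote(results)
        if majority is not None and results.count(majority) == len(results):
            return majority, "all agree"         # clean result
        if majority is None:
            continue                              # total disagreement: just retry
        if attempt < retries:
            continue                              # assume a transient fault and retry
        # Disagreement persisted: treat the outvoted subtractor as permanently faulty.
        return majority, "one subtractor faulty: switch to DMR mode"
    return None, "no majority: unrecoverable in this mode"

# Example: subtractor 2 keeps producing a wrong result.
print(tmr_step(lambda: [2, 2, 9]))
```

Repeating the operation filters out transients, while persistent disagreement triggers the same TMR+SCS to TMR to DMR to baseline degradation path described in the text.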
Figure 2.
Control unit and data path unit of 32-bit OIC.
A Multicore system comprising one 32-bit MIPS core and one 32 bit OIC occupying
the upper half and lower half portions respectively in the micro-architecture, is shown
in Figure 3. The MIPS core is a five-stage pipelined scalar processor. Instruction Fetch
(IF), Instruction Decode (ID), Execution (EXE), Memory access (MEM) and Write
Back (WB) are the five stages in the MIPS pipeline. IF/ID, ID/EXE, EXE/MEM, and
MEM/WB are the pipeline registers. PC is a program counter and LMD, Imm, A, B, IR,
NPC, Aluoutput, and Cond are temporary registers that hold state values between
clock cycles of one instruction. The fault detection logic (FDL) detects faults in all the
arithmetic instructions (except logical instructions) by concurrently executing the
instructions. The results of ID/EXE.Aluoutput and FDL are compared to detect the
fault. If a fault is found then the pipeline is stalled. The IF/ID.opcode (in IR) and
operands ID/EXE.A and ID/EXE.B are transferred to OIC as shown in Figure 4. The
IF/ID.opcode is decoded and concurrently ID/EXE.A and ID/EXE.B values are loaded
into the OIC registers (X & Y). The OIC.PC is initialized and simultaneously first
control word from memory is loaded into its control word register. During every clock
cycle, the control bits from control word register are sent to the selection lines of the
multiplexer that control the input lines to the subtractors. At every clock cycle,
subtraction is performed to emulate the instruction sent from the MIPS core. Finally,
the computed result is loaded into MEM/WB.Aluoutput and the MIPS pipeline operation is
resumed. The sequence of events from fault detection to results loaded into the
MEM/WB.Aluoutput register of the MIPS core is shown in Figure 4.
Figure 3.
Multi-core system consisting of one 32-bit MIPS core and one 32-bit OIC.
Figure 4.
Sequence of events from fault detection to loading of results into MEM/WB.Aluoutput register of MIPS core.
usage respectively. In the Section 4.1, comparison of area, power, registers and number
of logical elements of OIC with an approach named URISC proposed by [19] and URISC
++ proposed by [20] is presented. Notably, URISC/URISC++ implement one instruction
set. The URISC/URISC++, a co-processor for TigerMIPS, emulates instructions through
the execution of subleq instruction. TigerMIPS performs static code insertion in both
control flow and data flow invariants so as to detect faults by performing repeated
executions of subleq within the co-processor. Comparative analysis on hardware
parameters for different modes of OIC are discussed in Section 4.2.
ASIC simulation: The OIC given in Figure 2 and the multi-core system in Figure 3
have been implemented using Verilog HDL and then synthesized in Cadence Encounter
(R) RTL Compiler RC14.28 –V14.20 (Cadence design systems 2004) using TSMC
90 nm technology library (tcbn90lphptc 150). The area, power (dynamic, leakage,
net, internal) and critical path delay are estimated for the OIC and tabulated in
Table 2.
FPGA synthesis: The OIC is synthesized using Quartus prime Cyclone IVE with
device EP4CE115FE29C7 and the results are illustrated in Tables 3 and 4.
Leakage power and dynamic power: Power dissipation shown in Table 2 is
understood as sum of dynamic power and static power (or cell leakage). Static power
is consumed when gates are not switching. It is caused by current flowing through
transistors when they are turned off and is proportional to the size of the circuit.
Dynamic power is a sum of net switching power and cell internal power. The net
switching power is the power dissipated in the interconnects and the gate capacitance.
(Table columns: block name, area (μm2), leakage power (nW), internal power (nW), net power (nW), dynamic power (nW), and critical path delay (ps), listed for each sub-block of the OIC.)
Table 2.
Implementation results of the 32-bit OIC using 90 nm TSMC technology.
(A) Sub-blocks, logical elements: Subtractor (1): 33; Comparator (1): 43.
(B) Modes, logical elements: Baseline: 100; DMR: 303; TMR: 486.
Table 3.
FPGA synthesis results for OIC.
Table 4.
FPGA synthesis results comparison.
The cell internal power is the power consumed within a cell by charging and
discharging cell internal capacitances. The total power is a sum of the dynamic power
and the leakage power.
Multi2sim (version 5.0): Multi2sim supports emulation of 32-bit MIPS/ARM
binaries and simulation of 32-bit x86 architectures. It performs both functional and
timing simulations. The performance loss is estimated for compute intensive and
memory intensive micro-benchmarks using a Multi2sim simulator. Performance loss
for micro-benchmarks listed in Table 6 are illustrated in Figures 6–11.
With the critical path delay at 8608 ps, the operating frequency of the circuit is 115
MHz with a power supply of 1.2 V. OIC is a low-power core consuming 1.3 mW, with a die
area of 8122 μm2. The die area of the conventional MIPS core is 98,558 μm2, which is
14.2 times that of the OIC core. The MIPS core consumes a total power of 1.153 W and
the 32-bit OIC consumes 1.39 mW; the difference is three orders of magnitude. The registers in
OIC are PC and temporary registers which hold the operands. But they are not
designed and managed as a register file. Tables 3 and 4 provide the register count and
logical elements count for OIC and URISC++. The number of logical elements in OIC
is 3.51% and 3.52% of the logical elements in URISC and URISC++ respectively. The
number of registers in OIC is 3.05% of URISC++. URISC++ adds 62 logical elements
and one additional register to the architecture of URISC. The logical elements in
URISC++ consume 6.6 mW. URISC++ has 650 registers or 14.3% of registers in
TigerMIPS. URISC++ has two large register files. URISC++ altogether consumes
1.96 W. Thus, OIC consumes less power than URISC++.
The critical path delay, area, dynamic power and leakage power for the four modes
of OIC namely baseline mode, DMR mode, TMR mode and TMR + SCS mode are
normalized to baseline mode and shown in Figure 5. The area overhead of TMR + SCS
mode is 68.43% of the baseline, area overhead of TMR mode is 65.37% of the baseline
and for DMR mode it is 51.4%. The comparators and subtractors occupy 22.71% and
28.6% of TMR + SCS mode area respectively. The size of the voter is negligible in
TMR + SCS mode and TMR mode. In the critical path delay, 10% increase is noticed
from the baseline to TMR + SCS mode. The critical path traverses from the subtractor
input to the comparator, and then to the voter, passing through select logic and ends at
an input line. Delay would not differ much between TMR mode and TMR + SCS mode.
Both the dynamic power and leakage power for TMR mode and DMR mode
increase significantly due to redundant subtractors and comparators which are not in
the baseline. The dynamic power overhead of TMR mode and DMR mode is 60% and
Figure 5.
(a) Area, (b) critical path delay, (c) leakage power and (d) dynamic power (y-axis—normalized values to
baseline).
73% of the baseline. It is 75% for TMR + SCS mode. The static power or leakage power
is proportional to the size of the circuit. The TMR + SCS mode has leakage power
which is 76% more than the baseline. The TMR and DMR mode have leakage power
which is 72% and 50% more than the baseline. In Table 4 which depicts FPGA
synthesis results, it is observed that the number of logical elements in TMR + SCS
mode and DMR mode is 79% and 66% more than the baseline. From Tables 2 and 4, it
is observed that TMR mode with additional self-checking subtractor in TMR + SCS
mode costs more than the baseline, but still TMR + SCS/OIC will be a suitable fault
tolerant core for a low power embedded system.
The area and power for the micro-architecture of multi-core system (one MIPS
core with one OIC) shown in Figure 3, are estimated using ASIC simulation. The
multi-core system occupies a total area of 306,283 μm2 and consumes a total power of
1.1554 W. The FDL occupies an area of 6203 μm2 which is 2% of the total area
occupied by the system. The OIC occupies an area of 8122 μm 2 which is 2.6% of the
total area occupied by the system. The FDL consumes a power of 1.2 mW and OIC
consumes a power of 1.4 mW which are negligible when compared to the total power.
The redundancy-based core-level fault mitigation techniques/approaches such as Slipstream [21],
the dynamic core coupling (DCC) approach proposed by [22], configurable
isolation [23], and Reunion, a fingerprinting technique proposed by Smolens et al. [24],
have nearly 100% area overhead and obviously larger power overhead.
For every instruction emulated on OIC, an additional three clock cycles are needed
for transfer of opcodes and operands, and two clock cycles are needed to resume the
pipeline in the MIPS processor. The two terms defined below highlight the latency
incurred in instruction execution, as presented in the following subsection.
MOV 5 10 1
INC 4 9 1
DEC 1 5 1
SUB 1 5 1
Table 5.
IETE and TET for instructions.
Table 6.
CPU intensive and memory intensive micro-benchmarks.
Figure 6.
Performance overhead in binary search by emulating ADD using subleq instruction.
Figure 7.
Performance overhead in Quicksort by emulating ADD using subleq instruction.
Figure 8.
Performance overhead in Radix sort by emulating ADD and DIV using subleq instruction.
threaded binary search. It is due to the fact that majority of the ADD instructions are
associated with LOAD and STORE instructions.
Quicksort (with emulation of ADD instruction), implemented using recursion for
sorted data elements (worst case analysis) incurs performance loss of 3.85, 6.31,
and 6.99 for data size of 100, 1000 and 10,000 respectively as shown in Figure 7.
For best case analysis of quicksort for 10,000 elements, performance loss reduces to
1.008. Due to recursion, majority of ADD instructions are associated with LOAD/
STORE instructions. In radix sort, occurrence of ADD instructions is more compared
to DIV instructions. Since it is memory intensive method of sorting, large number of
ADD instructions is used to increment counters and constants associated with LOAD/
STORE instructions. The performance loss due to emulation of ADD instructions for
radix sort is 2.45, 4.79 , and 5.96 for input sizes of 1000, 10,000 and 10,000 as
shown in Figure 8. For DIV instructions, performance loss is 1.4, 2.05, and 2.37
for input sizes of 1000, 10,000 and 10,000 elements.
As shown in Figure 9 , matrix multiplication with emulation of ADD and MUL
instructions executing in the algorithmic phase of the program, incurs a performance
loss of < 1.56, 4.09, 4.0> (for ADD) and < 1.632, 7.62, 7.99> (for MUL), for
input matrix sizes of 10 × 10, 100 × 100, and 1000 × 1000 respectively. In CPU
scheduling, ADD and SUB instructions emulation incur a performance loss of < 2.45,
4.79, 5.96> and < 1.4, 2.05, 2.3> for input data set of 1000, 10,000 and
100,000 processes respectively as shown in Figure 10. In sieve of Eratosthenes,
emulation of MUL and ADD instructions incur a performance loss of < 1.89, 5.03,
7.63> and < 1.48, 2.9, 3.8> for input data set size of 1000, 10,000 and 100,000
respectively as shown in Figure 11.
Figure 9.
Performance overhead in matrix multiplication by emulating ADD and MUL using subleq instruction.
Figure 10.
Performance overhead in CPU scheduling by emulating ADD and SUB using subleq instruction.
Figure 11.
Performance overhead in Sieve of Eratosthenes by emulating MUL and ADD using subleq instruction.
This section examines the effect of fault tolerance provided in MCS-OIC on the
yield. As discussed in the section which presents design of OIC, it is assumed that two
subtractors do not fail simultaneously. In the TMR + SCS, TMR, and DMR modes,
OIC repeats the instruction execution if the results differ, to avoid transient failures.
The spatial and temporal redundancy used to avoid permanent and transient faults in OIC
makes it defect tolerant. The arithmetic logic unit in MIPS is protected by the functional
support provided by OIC. The remaining portions of MIPS are hardened and protected
by ECC. The die yield for the proposed different configurations of MCS-OIC is estimated
using the equations presented below.
b. Fault tolerant die: It is the die consisting of MIPS cores and OICs.
c. Regular dies per wafer: It is the number of original dies per wafer. The number of
regular dies per wafer is estimated using the Eq. (1).
\text{Regular dies per wafer} = \frac{\pi\,(\text{diameter}/2)^2}{\text{Area}} - \frac{\pi \times \text{diameter}}{\sqrt{2 \times \text{Area}}} \qquad (1)
Where diameter refers to the diameter of the wafer, Area refers to the area of the die.
d. Die yield: Ignoring full wafer damage, the yield for single die is approximated
using negative binomial approximation as given in the Eq. (2).
\text{Die yield} = \left(1 + \frac{\text{defect density} \times \text{Area}}{c_p}\right)^{-c_p} \qquad (2)
e. Regular working dies per wafer: It is die yield times the regular dies per wafer. It is
estimated using the Eq. (3).
\text{Regular fault tolerant dies per wafer} = \frac{\pi\,(\text{diameter}/2)^2}{(1+\delta)\,\text{Area}} - \frac{\pi \times \text{diameter}}{\sqrt{2\,(1+\delta)\,\text{Area}}} \qquad (4)
g. Regular working fault tolerant dies per wafer: It is die yield times the regular fault
tolerant die. It is estimated using the Eq. (5).
\text{Regular working fault tolerant dies per wafer} = \left(1 + \frac{\text{defect density} \times \text{Area}}{c_p}\right)^{-c_p} \times \left[\frac{\pi\,(\text{diameter}/2)^2}{(1+\delta)\,\text{Area}} - \frac{\pi \times \text{diameter}}{\sqrt{2\,(1+\delta)\,\text{Area}}}\right] \qquad (5)
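A small numerical sketch of Eqs. (1) to (5) follows (the wafer diameter, die area, redundancy overhead δ, defect density, and cp values below are illustrative choices, not the figures used for Tables 7 to 10):

```python
import math

def regular_dies_per_wafer(diameter, area):
    """Eq. (1): gross die count for a given die area (consistent length units)."""
    return (math.pi * (diameter / 2) ** 2) / area \
           - (math.pi * diameter) / math.sqrt(2 * area)

def die_yield(defect_density, area, cp):
    """Eq. (2): negative binomial approximation of the yield of a single die."""
    return (1 + defect_density * area / cp) ** (-cp)

# Illustrative parameters only (units: cm, cm^2, defects per cm^2).
diameter = 30.0     # 300 mm wafer
area = 0.003        # original die area
delta = 0.026       # fractional area added by the OICs
defects = 1.0       # defect density
cp = 4.0            # cluster parameter

gross = regular_dies_per_wafer(diameter, area)                        # Eq. (1)
gross_ft = regular_dies_per_wafer(diameter, (1 + delta) * area)       # Eq. (4)
print(round(gross * die_yield(defects, area, cp)))                    # Eq. (3)
print(round(gross_ft * die_yield(defects, (1 + delta) * area, cp)))   # Eq. (5)
```

The fault-tolerant die pays twice for its extra area: fewer gross dies fit on the wafer, and each die's yield drops slightly, which is the trade-off quantified in the tables that follow.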
The die yield for the original die and the fault-tolerant die is estimated for one MIPS
core with one/two/four OICs, two MIPS cores with one/two/four OICs, four MIPS cores
with one/two/four OICs, and eight MIPS cores with one/two/four/six OICs, and is
tabulated in Tables 7–10, respectively. The defect density is varied from 9.5 and 5.0
to 1.0, and the wafer diameters are varied from 300 mm and 200 mm to 100 mm to estimate
the die yield of the original die and the fault-tolerant die. The cp is fixed at 4.0.
The die yields of the original die at defect densities 1.0, 5.0, and 9.5 are 0.9971,
0.9855, and 0.9727, respectively. The die yields for three fault-tolerant dies, each
consisting of one MIPS core, the first with one OIC, the second with two OICs, and the
third with four OICs, for a 300 mm wafer with defect density 1.0 are <0.9970/0.9969/0.9967>
respectively, as shown in Table 7, which is slightly less than the yield of the original
die. The average of the differences between the yield of the original die and the
fault-tolerant dies at defect density 1.0 is 0.0002, which is a negligible value.
Similarly, the averages of the differences between the yield of the original die and
the fault-tolerant dies at defect densities 5.0 and 9.5 are 0.0009 and 0.0017,
respectively. It is observed that an increase in the defect density decreases the yield.
(Columns: 100 mm, 200 mm, and 300 mm wafers; within each wafer size, defect densities 9.5 / 5.0 / 1.0.)
Number of regular dies per wafer: 26,489 (100 mm); 106,781 (200 mm); 240,876 (300 mm).
Die yield of original die: 0.9727 / 0.9855 / 0.9971 (same for all wafer sizes).
Number of regular working dies per wafer: 25,767 / 26,106 / 26,412 (100 mm); 103,870 / 105,237 / 106,470 (200 mm); 234,309 / 237,391 / 240,174 (300 mm).
Number of regular fault-tolerant dies per wafer, one OIC: 25,768 (100 mm); 103,884 (200 mm); 234,347 (300 mm); two OICs: 25,084; 101,139; 228,163; four OICs: 23,820; 96,061; 216,723.
Die yield of fault-tolerant die, one OIC: 0.9719 / 0.9851 / 0.9970; two OICs: 0.9712 / 0.9847 / 0.9969; four OICs: 0.9697 / 0.9839 / 0.9967.
Number of regular working fault-tolerant dies per wafer, one OIC: 25,046 / 25,385 / 25,691 (100 mm); 100,974 / 102,340 / 103,573 (200 mm); 227,784 / 230,864 / 233,645 (300 mm); two OICs: 24,363 / 24,701 / 25,007; 98,231 / 99,595 / 100,828; 221,603 / 224,681 / 227,461; four OICs: 23,100 / 23,437 / 23,743; 93,157 / 94,518 / 95,750; 210,170 / 213,243 / 216,021.
Table 7.
Die yield for a fault-tolerant die consisting of one MIPS core with one/two/four OICs.
(Columns: 100 mm, 200 mm, and 300 mm wafers; within each wafer size, defect densities 9.5 / 5.0 / 1.0.)
Number of regular dies per wafer: 13,159 (100 mm); 53,220 (200 mm); 120,182 (300 mm).
Die yield of original die: 0.9463 / 0.9713 / 0.9942 (same for all wafer sizes).
Number of regular working dies per wafer: 12,454 / 12,782 / 13,082 (100 mm); 50,368 / 51,694 / 52,911 (200 mm); 113,740 / 116,736 / 119,484 (300 mm).
Number of regular fault-tolerant dies per wafer, one OIC: 12,977 (100 mm); 52,487 (200 mm); 118,529 (300 mm); two OICs: 12,494; 50,544; 114,150; four OICs: 12,459; 50,403; 113,833.
Die yield of fault-tolerant die, one OIC: 0.9456 / 0.9709 / 0.9941; two OICs: 0.9449 / 0.9705 / 0.9940; four OICs: 0.9428 / 0.9693 / 0.9937.
Number of regular working fault-tolerant dies per wafer, one OIC: 12,272 / 12,600 / 12,900 (100 mm); 49,636 / 50,962 / 52,177 (200 mm); 112,091 / 115,085 / 117,830 (300 mm); two OICs: 12,095 / 12,423 / 12,723; 48,924 / 50,249 / 51,464; 110,486 / 113,478 / 116,222; four OICs: 11,592 / 11,919 / 12,219; 46,900 / 48,222 / 49,435; 105,923 / 108,908 / 111,650.
Table 8.
Die yield for a fault-tolerant die consisting of two MIPS cores with one/two/four OICs.
The die yield of the fault-tolerant dies, each consisting of two MIPS cores with
<one/two/four> OICs, at defect density 1.0 is <0.9941, 0.9940, 0.9937> respectively,
as shown in Table 8. The die yield of the original die at defect densities 1.0, 5.0,
and 9.5 is 0.9942, 0.9713, and 0.9463, slightly higher than the yield of the
fault-tolerant dies. The average of the differences between the yield of the original
die and the fault-tolerant dies is 0.00026. The average of the differences increases by
0.0009 and 0.0018 for defect densities 5.0 and 9.5, respectively.
The die yields of the original die at defect densities 1.0, 5.0, and 9.5 are 0.9884,
0.9436, and 0.8963, respectively. From Table 9, the die yields of the fault-tolerant
dies, each consisting of four MIPS cores with <one/two/four> OICs, at defect density
1.0 are <0.9883, 0.9882, 0.9880> respectively. It is observed that the average of the
differences between the yield of the original die and the fault-tolerant dies at
varying defect densities is similar to the other alternatives discussed above.
From Table 10, the die yield of the fault-tolerant dies, each consisting of eight MIPS
cores with <one/two/four/six> OICs, at defect density 1.0 is <0.9769, 0.9767, 0.9765,
0.9764> respectively. The die yield of the original die at defect densities 1.0, 5.0,
and 9.5 is 0.9769, 0.8912, and 0.8057, respectively. The average of the differences
between the original die and the fault-tolerant dies at defect density 9.5 is 0.0031,
the highest among the averages. From this data, it is inferred that larger chips with
increasing redundancy widen the gap between the yield of the original dies and the
fault-tolerant dies.
(Columns: 100 mm, 200 mm, and 300 mm wafers; within each wafer size, defect densities 9.5 / 5.0 / 1.0.)
Number of regular dies per wafer: 6519 (100 mm); 26,489 (200 mm); 59,910 (300 mm).
Die yield of original die: 0.8963 / 0.9436 / 0.9984 (same for all wafer sizes).
Number of regular working dies per wafer: 5843 / 6152 / 6444 (100 mm); 23,744 / 24,997 / 26,182 (200 mm); 53,700 / 56,536 / 59,216 (300 mm).
Number of regular fault-tolerant dies per wafer, one OIC: 6474 (100 mm); 26,305 (200 mm); 59,495 (300 mm); two OICs: 6428; 26,124; 59,085; four OICs: 6340; 25,768; 58,282.
Die yield of fault-tolerant die, one OIC: 0.8956 / 0.9433 / 0.9883; two OICs: 0.8949 / 0.9429 / 0.9882; four OICs: 0.8929 / 0.9417 / 0.9880.
Number of regular working fault-tolerant dies per wafer, one OIC: 5798 / 6106 / 6398 (100 mm); 23,561 / 24,814 / 25,998 (200 mm); 53,288 / 56,122 / 58,800 (300 mm); two OICs: 5753 / 6062 / 6353; 23,381 / 24,633 / 25,817; 52,881 / 55,713 / 58,391; four OICs: 5623 / 5930 / 6221; 22,855 / 24,104 / 25,286; 51,694 / 54,520 / 57,195.
Table 9.
Die yield for a fault-tolerant die consisting of four MIPS cores with one/two/four OICs.
(Columns: 100 mm, 200 mm, and 300 mm wafers; within each wafer size, defect densities 9.5 / 5.0 / 1.0.)
Number of regular dies per wafer: 3217 (100 mm); 13,159 (200 mm); 29,827 (300 mm).
Die yield of original die: 0.8057 / 0.8912 / 0.9770 (same for all wafer sizes).
Number of regular working dies per wafer: 2592 / 2867 / 3143 (100 mm); 10,603 / 11,728 / 12,856 (200 mm); 24,034 / 26,584 / 29,141 (300 mm).
Number of regular fault-tolerant dies per wafer, one OIC: 3205 (100 mm); 13,113 (200 mm); 29,723 (300 mm); two OICs: 3194; 13,068; 29,620; four OICs: 3172; 12,977; 29,415; six OICs: 3150; 12,888; 29,214.
Die yield of fault-tolerant die, one OIC: 0.8051 / 0.8909 / 0.9769; two OICs: 0.8040 / 0.8902 / 0.9767; four OICs: 0.8028 / 0.8895 / 0.9765; six OICs: 0.8016 / 0.8888 / 0.9764.
Number of regular working fault-tolerant dies per wafer, one OIC: 2581 / 2856 / 3131 (100 mm); 10,559 / 11,683 / 12,810 (200 mm); 23,933 / 26,481 / 29,037 (300 mm); two OICs: 2559 / 2833 / 3109; 10,470 / 11,592 / 12,719; 23,732 / 26,277 / 28,831; four OICs: 2537 / 2811 / 3087; 10,382 / 11,503 / 12,629; 23,535 / 26,075 / 28,628; six OICs: 2516 / 2790 / 3065; 10,296 / 11,415 / 12,541; 23,340 / 25,877 / 28,428.
Table 10.
Die yield for a fault-tolerant die consisting of eight MIPS cores with one/two/four/six OICs.
Thus, a trade-off exists between the die yield and the fault tolerance pro-
vided by the design alternatives (discussed above) having redundancy ranging
between 2% and 11%.
In order to assess the endurance for the four modes of OIC, reliability is evaluated
and compared. The reliability, denoted by R(t), is defined as the probability of its
survival at least until time t, which is estimated using Weibull distribution and can be
determined in the following manner:
R(t) = P(T > t) = e^{-\lambda t^{\beta}} \qquad (6)
where β is the shape parameter, T denotes the lifetime and λ denotes the failure rate
of a component. Defect induced faults occur in the early stage of the lifetime, but the
wear-out induced faults increase in the tail end of the lifetime. β < 1, is used to model
infant mortality and it is a period of growing reliability and decreasing failure rate.
When β = 1, the R(t) of Weibull distribution and exponential distribution are identi-
cal. β > 1, is used to model wear out and the end of useful life where failure rate is
increasing. The initial failure rate is computed using the failure rate formula:
R_{\text{TMR+SCS}}(t) = R_{\text{subsc}}(t)\, R_{\text{select logic}}(t)\, R_{\text{comp}}(t)\, R_{\text{voter}}(t) \sum_{i=2}^{4} \binom{4}{i} \left(R_{\text{sub}}(t)\right)^{i} \left(1 - R_{\text{sub}}(t)\right)^{4-i} \qquad (8)

R_{\text{TMR}}(t) = R_{\text{select logic}}(t)\, R_{\text{comp}}(t)\, R_{\text{voter}}(t) \sum_{i=2}^{3} \binom{3}{i} \left(R_{\text{sub}}(t)\right)^{i} \left(1 - R_{\text{sub}}(t)\right)^{3-i} \qquad (9)

R_{\text{DMR}}(t) = R_{\text{select logic}}(t)\, R_{\text{comp}}(t) \sum_{i=1}^{2} \binom{2}{i} \left(R_{\text{sub}}(t)\right)^{i} \left(1 - R_{\text{sub}}(t)\right)^{2-i} \qquad (10)
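A compact sketch of Eq. (6) and the 2-out-of-3 voting term of Eq. (9) is shown below (the per-logical-element failure rate and the lumped size of the voting logic are assumed values for illustration, not the chapter's calibration):

```python
import math
from math import comb

def weibull_r(t, lam, beta):
    """Eq. (6): R(t) = exp(-lambda * t**beta)."""
    return math.exp(-lam * t ** beta)

def r_tmr(t, lam_sub, lam_other, beta):
    """Eq. (9): select logic, comparators and voter in series with 2-of-3 subtractors."""
    r_sub = weibull_r(t, lam_sub, beta)
    series = weibull_r(t, lam_other, beta)      # lumped select logic + comparators + voter
    majority = sum(comb(3, i) * r_sub**i * (1 - r_sub)**(3 - i) for i in range(2, 4))
    return series * majority

# Illustrative parameters: failure rate proportional to logical-element count.
PER_ELEMENT = 1e-9          # assumed failure rate per logical element per hour
lam_sub = 33 * PER_ELEMENT  # one subtractor (33 logical elements, Table 3)
lam_oth = 55 * PER_ELEMENT  # remaining voting/select logic (assumed size)

for hours in (1e4, 1e5, 2e5):
    print(hours, round(r_tmr(hours, lam_sub, lam_oth, beta=1.0), 4))
```

The same pattern extends to Eqs. (8) and (10) by changing the binomial term and the series factors, which is how the curves in the following figures are generated.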
The reliabilities are plotted for the TMR + SCS, TMR, DMR, and baseline modes in
Figure 12 for β = 0.9 and 1.0 (which denote the defect induced fault phase) and in
Figure 13 for β = 1.1 and 1.2 (which denote the wear-out induced fault phase). λ is a
function of the number of logical elements as given in Table 3.
In all these cases, TMR + SCS mode is observed to have a better failure tolerance
when compared to all other modes. For β = 1.2, the reliabilities of TMR mode and
DMR mode are less than that of TMR + SCS mode during the interval 3 104 to
15 104 hours, as illustrated in Figure 13. The levels of reliability of TMR modes
decline far below DMR, and baseline modes in wear out induced fault phase due to the
Figure 12.
Reliability vs. time for (a) β = 0.9 and (b) β = 1.0.
Figure 13.
Reliability vs. time for (a) β = 1.1 and (b) β = 1.2
fact that the reliability of a single component is below 0.5, so the redundancy has no merit in the TMR mode. In Table 11, the reliability of the subtractor falls below 0.5 at t = 180,000 h (20.5 years), and the widening reliability gap between TMR and DMR endorses the above argument.
In this section, reliability of OIC is compared with that of URISC++. The reliability
function of Weibull distribution with λ as a function of number of logical elements is
used to estimate the reliability of URISC/URISC++. The number of logical elements in
Table 11.
Reliabilities of components in OIC for β = 1.2.
Figure 14.
β = 0.9 reliability vs. time (hours).
Figure 15.
β = 1.0 reliability vs. time (hours).
OIC and URISC++ are given in Table 4. In the defect-induced fault phase (β = 0.9 and β = 1.0), a drastic fall in the URISC++ reliability is observed, as shown in Figures 14 and 15. OIC maintains a reliability of 0.96, whereas the endurance of URISC++ drops to 0.87 after 210,000 hours. In the wear-out-induced fault phase, the
Figure 16.
β = 1.1 reliability vs. time (hours).
Figure 17.
β = 1.2 Reliability vs. time (hours).
reliability gap widens between the 32-bit OIC and URISC++ when β = 1.1 (Figure 16) after 60,000 hours (6.84 years). The reliability of OIC falls below that of URISC++ because the single-component reliability reduces below 0.5 after 23.4 years, as shown in Figure 17, and the redundancy in the OIC has no merit thereafter.
8. Conclusion
1. Power, area and total power for OIC and for its contender URISC++ are
evaluated. OIC consumes less power and area compared to its contender. The
registers count in OICs is significantly less compared to URISC++. It is observed
that two large register files in URISC++ consume more power, unlike OIC which
does not maintain register files.
3. In the 1:1 configuration of a multi-core system with OICs, i.e., one conventional core with one OIC, all emulation requests from the conventional core are handled by the OIC. In the 2:1 configuration (two cores and one OIC), simultaneous failures in two conventional cores result in a higher performance loss for the application executing in the system. This performance loss can be reduced by augmenting the multi-core configuration with an additional OIC. That is, the 1:1 model proves to be a viable solution with minimal performance loss. This is validated by the simulation results presented in this chapter. On a 1:1 and 1:N basis, i.e., one MIPS core with one or more OICs, the design can scale to 100 MIPS cores with 100 or more OICs. Hence, the MCS-OIC model is a scalable design alternative.
5. The yield of the fault-tolerant die is slightly lower than that of the original die for all the design alternatives of MCS-OIC. It is inferred that larger chips with increasing redundancy widen the gap between the yield of the original dies and the fault-tolerant dies. Thus, a trade-off exists between the die yield and the fault tolerance provided by the design alternatives (discussed above), whose redundancy ranges between 2% and 11%.
6. The reliabilities of OIC and URISC++ are evaluated and compared. Evaluation results indicate that OIC is more reliable than URISC++ in both the defect-induced phase and the wear-out-induced phase. This is because the level of redundancy is significantly lower in URISC++ than in OIC.
Author details
© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
Computer Vision-Based
Techniques for Quality Inspection
of Concrete Building Structures
Siwei Chang and Ming-Fung Francis Siu
Abstract
1. Introduction
Throughout the entire construction life cycle, quality assessment plays an important role in ensuring the safety, economy, and long-term viability of construction activities. Construction products that have been thoroughly inspected and certified by quality inspectors are more likely to be chosen by developers and buyers. Typically, structural work is considered an essential aspect of quality assessment because structural problems directly influence the stability and integrity of a building. Among construction structural forms, concrete structures are the most common and basic type. Therefore, exploring advanced
technologies that enable effective concrete defect inspection can be deemed a worth-
while endeavor.
Normally, the types of concrete defects include blistering, delamination, dusting, etc. Among them, concrete cracks, usually caused by deformation, shrinkage, swelling, or hydraulic pressure, appear most frequently in concrete components. Concrete cracking is considered the first sign of deterioration. As reported by the BRE Group [1], cracks up to 5 mm in width simply need to be re-decorated because they only affect the appearance of the concrete. However, cracks with a width of 5-25 mm can trigger structural damage to concrete structures [2]. A 40-year-old oceanfront condo building collapsed on June 27, 2021, in Florida because of neglected cracks. Experienced engineers noted that cracked or crumbling concrete, interior cracks, and cracks at the corners of windows and doors were the earliest and most significant warning signs of this tragedy. Therefore, in order to prevent potential failures that may pose a loss to society, crack problems should be thoroughly examined and resolved.
In general, construction works are divided into two categories: new building works and existing building works. New works refer to a building that will be constructed from scratch. Existing building works refer to a building that has existed for many years and in which residents are living. In Hong Kong, quality assurance and control should be conducted by full-time quality managers on-site for both new and existing buildings. Normally, the quality managers visually inspect build quality and assign a score to the building's quality in accordance with the Building Performance Assessment Scoring System (PASS) for new buildings, and the Mandatory Building Inspection Scheme (MBIS) and the Mandatory Window Inspection Scheme (MWIS) for existing buildings. Meanwhile, to ensure a continuous and in-depth inspection, non-destructive testing (NDT) methods, e.g., eddy current testing and ultrasonic testing, are also commonly applied in the quality inspection process.
Quality managers are commonly obliged to work 8 hours per day. Their salary ranges from HKD 30,000 to HKD 50,000 per month. In PASS, more than 300 quality assessment items are related to cracking problems. Cracks in all building components, including floors, internal and external walls, ceilings, and others, are required to be strictly inspected during both the structural and architectural engineering stages. Therefore, both manual and NDT inspections are considered time-consuming, costly, and dangerous, especially for large-scale and high-rise structures. To tackle this issue, computer-vision techniques are increasingly being introduced for automated crack inspection. For example, various convolutional neural network (CNN) architectures have been developed and implemented to increase the efficiency of manual crack inspection [3, 4].
Considering the aforementioned context, computer-vision-based automated crack inspection techniques were introduced by the authors in 2022. To achieve this, the theoretical background of CNNs is first explained in the context of the convolution, pooling, fully-connected, and benchmarking processes. AlexNet and VGG16 models are then implemented and tested to detail and illustrate the calculation steps. Meanwhile, a practical case study is used to compare the difference between manual and computer-vision-based crack inspection. The future directions of combining robotics and computer vision for automated crack inspection are discussed. This study gives a comprehensive overview of and a solid foundation for a computer-vision-based automated crack inspection technique that contributes to highly efficient, cost-effective, and low-risk quality assessment of buildings.
The term computer vision is defined as an interdisciplinary field that enables com-
puters to recognize and interpret environments from digital images or videos [5].
Computer vision techniques are rapidly being used to detect, locate, and quantify
concrete defects to reduce the limitations of manual visual inspection. By automati-
cally processing images and videos, computer vision-based defect detection technolo-
gies enable efficient, accurate, and low-cost concrete quality inspection. Various
techniques in the computer vision field, such as semantic segmentation and object
detection, have been developed and applied to date [6]. Among them, image classifi-
cation is considered the most basic computer vision technique and has been intro-
duced most frequently to predict and target concrete defects.
The motivation of image classification is to identify the categories of input images. Different from human recognition, an image is first presented to a computer as a three-dimensional array of numbers. The value of each number ranges from 0 (black) to 255 (white). An example is shown in Figure 1. The crack image is 256 pixels wide, 256 pixels tall, and has three color channels, RGB (Red, Green, and Blue). Therefore, this image generates 256 × 256 × 3 = 196,608 input numbers.
The input array is then computed using computer vision algorithms to transform
the numbers to a specific label that belongs to an assigned set of categories. One of the
computer vision algorithms is CNN, which has become dominant in image classifica-
tion tasks [7]. CNN is a form of a deep learning model for computing grid-shaped
data. The central idea of CNN is to identify the image classification by capturing its
features using filters. The features are then output to a specific classification by a
trained weight and biases matrix.
There are three main modules included in a CNN model: convolution, pooling, and
fully connected layer. The convolution and pooling layers are used to extract image
features. The fully connected layer is used to determine the weight and biases matrix
and to map the extracted features into specific labels.
The convolution layer is the first processing block in a CNN. During the convolution process, a set of convolution filters is used to compute the input array A = [a_ij]_{m×n}, with m, n ∈ {width_image, height_image}. After computing, a new image A* = [a*_ij]_{n×n} is
Figure 1.
An example of the input number array.
output and passed to the next processing layers. The size of the output image can be calculated with Eq. (1). The values of the output image pixels can be calculated with Eq. (2). The output images are known as convolution feature maps.

n = (m − f + 2p)/s + 1    (1)

Here: n refers to the size of the output image, m refers to the size of the input image, f refers to the size of the convolution filter, p refers to the padding, and s refers to the stride of the convolution filter.

A*_o = f( Σ_k (W_o · A_o) + b_o )    (2)

Here: A*_o refers to the pixels of the output image, f refers to an applied non-linear function, W_o refers to the values of the convolution filter matrix, k refers to the number of convolution filters, A_o refers to the pixels of the input image, and b_o is an arbitrary real number (the bias).
An example of a convolution process is shown in Figure 2. In this example, both the width and the height of the input image are 5. The pixels of the image are shown in Figure 2. The convolution filter has a shape of 3 × 3. In this example, only one filter is used. The initial values of the convolution filter are set randomly. The filter matrix is adjusted and optimized in the subsequent backpropagation process. In this example, the non-linear function and the padding are not used, and the bias value b_o is set to 0. The stride of the convolution filter is set to 1. The convolution filter moves from left to right and from top to bottom. The size and values of the output feature map can be computed using Eqs. (1) and (2). The detailed calculation process of the example feature map values and size is shown in Table 1. As seen from Figure 2, the size of the input image and the size of the filter are 5 and 3, respectively, and the padding and the convolution stride are 0 and 1, respectively.
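As a sketch of this procedure, the feature map can be computed in a few lines of NumPy; the pixel and filter values below are placeholders rather than the exact numbers of Figure 2.

```python
import numpy as np

# 5 x 5 input image and 3 x 3 filter (placeholder values)
image = np.array([[3, 2, 1, 3, 2],
                  [1, 2, 2, 1, 4],
                  [2, 3, 2, 4, 1],
                  [3, 2, 1, 1, 3],
                  [2, 1, 4, 3, 2]])
kernel = np.array([[-1, 0, 0],
                   [ 0, 1, 0],
                   [ 1, 0, 1]])
stride, padding = 1, 0

# Eq. (1): n = (m - f + 2p)/s + 1  ->  (5 - 3 + 0)/1 + 1 = 3
n = (image.shape[0] - kernel.shape[0] + 2 * padding) // stride + 1
feature_map = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        region = image[i * stride:i * stride + 3, j * stride:j * stride + 3]
        feature_map[i, j] = np.sum(kernel * region)   # Eq. (2) with f = identity, b_o = 0
print(feature_map)
```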
A pooling layer is used to refine the feature maps. After pooling, the dimensions of the feature maps are reduced. In doing so, the computation cost can be effectively decreased by reducing the number of learning parameters, whilst allowing only the essential information of the feature maps to be retained. Usually, pooling layers follow convolution layers. Average pooling and maximum pooling are the main pooling operations. Similar to convolution layers, pooling filters are used to refine the feature maps. For maximum pooling, the maximum value of the region in the feature map covered by the pooling filter is extracted. For average pooling, the average value of the region in the feature map covered by the pooling filter is computed. The pooling filters slide over the feature map from top to bottom and from left to right. The output of the pooling process is new feature maps that contain the most essential information.
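A similar sketch of 2 × 2 maximum and average pooling with a stride of two is given below; the feature-map values are placeholders, not those of Figure 3.

```python
import numpy as np

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 9, 3],
                 [4, 8, 6, 5]], dtype=float)

pooled_max = np.zeros((2, 2))
pooled_avg = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        window = fmap[2 * i:2 * i + 2, 2 * j:2 * j + 2]
        pooled_max[i, j] = window.max()    # maximum pooling keeps the largest value
        pooled_avg[i, j] = window.mean()   # average pooling keeps the mean value
print(pooled_max)
print(pooled_avg)
```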
Figure 2.
An example of convolution process.
(Each entry of the 3 × 3 feature map is obtained by multiplying the 3 × 3 filter element-wise with the corresponding 3 × 3 region of the input image and summing the nine products; Table 1 lists this arithmetic for each output value.)
Table 1.
Detailed calculation process of feature map value and size.
Figure 3.
An example of max pooling and average pooling.
Here: y_j refers to the values of the output neurons, w_j refers to the weights that connect different neurons, x_i refers to the values of the input neurons, and b refers to the biases.
A back-propagation (BP) algorithm is commonly used to train and modify the weights and biases. BP updates the weights and biases by computing the gradient of the loss function. In doing this, the optimal weight and bias matrices that minimize the loss between the model outputs and the actual values are identified. To date, various loss functions have been developed and applied. For example, the mean square error (MSE), shown in Eq. (4), is one of the most frequently used loss functions to calculate the loss value. Stochastic gradient descent (SGD) is then used to determine the updated weights and biases from the gradient of the loss function, as shown in Eq. (5).

Loss = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2    (4)

Here: Loss refers to the loss between the output neuron and the actual value, n refers to the number of neurons connected to one specific output neuron, y refers to the actual value, and ŷ refers to the value of one output neuron.
w′ = w − η ∂L/∂w,    b′ = b − η ∂L/∂b    (5)

Here: w′ and b′ refer to the updated weights and biases, w and b refer to the former weights and biases, η refers to the learning rate, and ∂L/∂w and ∂L/∂b refer to the partial derivatives of the loss function with respect to the weights and biases, respectively.
An example of feature map updating using BP is explained below. Figure 4 depicts an example of a fully connected process. The initial weights and biases in this process are determined randomly. Suppose the values of w11, w12, w21, w22, w5, and w6 are 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6, respectively. The values of x1 and x2 are 5 and 1, and the actual output value is 0.24. The detailed calculation of the updated weights, biases, and feature map is shown in Table 2.
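The numbers of this example can be reproduced with the following minimal NumPy sketch, assuming the identity activation, zero biases, and the ½(y_actual − y_output)² loss used above; it is an illustration of the BP update, not the authors' implementation.

```python
import numpy as np

# Inputs, target and initial parameters from the worked example
x = np.array([5.0, 1.0])                 # x1, x2
w_in = np.array([[0.1, 0.2],             # w11, w12 (x1 -> h1, h2)
                 [0.3, 0.4]])            # w21, w22 (x2 -> h1, h2)
w_out = np.array([0.5, 0.6])             # w5, w6
target, lr = 0.24, 0.1                   # actual output value and learning rate (eta)

# Forward pass (identity activation, zero biases)
h = x @ w_in                             # hidden values h1, h2
y = h @ w_out                            # network output
loss = 0.5 * (target - y) ** 2

# Backward pass (chain rule)
dL_dy = -(target - y)
dL_dw_out = dL_dy * h                    # gradients for w5, w6
dL_dw_in = np.outer(x, dL_dy * w_out)    # gradients for w11, w12, w21, w22

# SGD update: w' = w - eta * dL/dw (Eq. (5))
w_out -= lr * dL_dw_out
w_in -= lr * dL_dw_in
print(y, loss)       # forward output and loss
print(w_out, w_in)   # updated weights
```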
In conclusion, during the convolution and pooling processes in CNN, the features
of the input image are extracted first. The pooled feature maps are then flattened and
considered as input neurons in fully connected process. After several training periods,
the appropriate weights and biases can be determined using BP. The classifications of
input images can be predicted automatically and reliably using the optimal weights
and biases.
A confusion matrix is a table structure that permits the viewing of CNN performance [8]. Each row of the matrix records the number of images from the actual classes, while each column records the number of images from the predicted classes. There are four types of indicators in the matrix: (1) true positive (TP) represents the images that are correctly predicted as the actual (positive) class; (2) false positive (FP) represents the images that are wrongly predicted as that class; (3) true negative (TN) represents the images that are correctly predicted as another actual class; (4) false negative (FN) represents the images of the class that are wrongly predicted as another class. TP, FP, TN, and FN can be expressed in a 2 × 2 confusion matrix, as shown in Figure 5.
Based on TP, FP, FN, and TN, four typical CNN performance evaluation indexes:
accuracy, precision, recall, and F1-score can be calculated using Eqs. (6)–(9). For the
crack inspection problem, accuracy shows how many images can be predicted cor-
rectly. The percentage of actual cracked photos to all predicted cracked images is
shown by precision. CNNs with a high precision score indicate a better inspection
ability of cracked images. Recall shows the ratio of predicted cracked images to all
actual cracked images. CNNs with a high recall score indicate a better distinguishing
capacity between cracked and uncracked images. F1-score shows the comprehensive
Figure 4.
An example of a fully connected process.
(For each weight, Table 2 applies the chain rule, e.g., ∂L/∂w5 = ∂L/∂y · ∂y/∂w5 = (y_output − y_actual) · h1 = 0.8, and then updates the weight as w′ = w − η · ∂L/∂w with learning rate η = 0.1, e.g., w5′ = 0.5 − 0.1 × 0.8 = 0.42; the weights w6, w11, w12, w21, and w22 are updated in the same way.)
Table 2.
Detailed calculation process of feature map updating using BP.
Figure 5.
An example of a 2 × 2 confusion matrix.
performance of precision and recall. A CNN with a high F1-score indicates stronger
robustness.
Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100    (6)

Precision = TP / (TP + FP) × 100    (7)

Recall = TP / (TP + FN) × 100    (8)

F1-score = 2 × (Precision × Recall) / (Precision + Recall) × 100    (9)
For example, suppose the prepared dataset contains 10,000 photos, with 3000 cracked surface images and 7000 uncracked surface images. After CNN processing, 2700 images are correctly predicted as cracked surfaces, and 300 images out of the 3000 real cracked surfaces are wrongly predicted as uncracked surfaces. 6500 images are correctly predicted as uncracked surfaces, and 500 images out of the 7000 uncracked surfaces are wrongly predicted as cracked surfaces. Then, based on the above-mentioned concepts, the values of TP, FN, FP, and TN are 2700, 300, 500, and 6500, respectively. Table 3 shows the details of the accuracy, precision, recall, and F1-score calculations.
Accuracy: (TP + TN)/(TP + TN + FP + FN) × 100 = (2700 + 6500)/(2700 + 300 + 500 + 6500) × 100 = 92%
Precision: TP/(TP + FP) × 100 = 2700/(2700 + 500) × 100 = 84.38%
Recall: TP/(TP + FN) × 100 = 2700/(2700 + 300) × 100 = 90%
F1-score: 2 × (Precision × Recall)/(Precision + Recall) × 100 = 2 × ((0.84375 × 0.9)/(0.84375 + 0.9)) × 100 ≈ 87.1%
Table 3.
Detailed calculation process of accuracy, precision, recall, and F1 score.
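The same calculation can be written directly in Python using the example counts above:

```python
tp, fn, fp, tn = 2700, 300, 500, 6500

accuracy = (tp + tn) / (tp + tn + fp + fn) * 100            # Eq. (6)
precision = tp / (tp + fp) * 100                            # Eq. (7)
recall = tp / (tp + fn) * 100                               # Eq. (8)
f1_score = 2 * precision * recall / (precision + recall)    # Eq. (9)

print(f"accuracy={accuracy:.1f}% precision={precision:.2f}% "
      f"recall={recall:.1f}% F1={f1_score:.1f}%")
# accuracy=92.0% precision=84.38% recall=90.0% F1=87.1%
```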
3.1.1 Dataset
In this example, the input images were gathered from Kaggle, the world’s most
well-known data science community. Kaggle allows access to thousands of public
datasets covering a wide range of topics, including medical, agriculture, and con-
struction [9]. By searching “concrete crack” in Kaggle datasets module, 12 datasets
were found. The “SDNET2018” dataset was chosen from among them since it com-
prises sufficient and clean concrete surface images with and without cracks [10]. In
“SDNET2018”, 56,096 images were captured in the Utah State University Campus
using a 16-megapixel Nikon digital camera, including 54 bridge decks, 72 walls, and
104 pavements. In this example, only images of walls and pavements were used to
demonstrate the comparison analysis between manual inspection and CNN-based
automatic inspection. Therefore, 42,472 images were used as training and testing
dataset. Among them, 6459 cracked concrete surfaces are considered as positive class.
The captured cracks are as narrow as 0.06 mm and as wide as 25 mm, while 36,013
uncracked concrete surfaces are considered as negative class. Images in this dataset
contain a range of impediments, such as shadows, surface roughness, scaling, edges,
and holes. The diverse photographing backgrounds contribute to ensuring the robust-
ness of the designed CNN architecture. At a ratio of 80/20, the cracked and uncracked concrete photos were randomly separated into training and testing datasets. The input images' pixels were standardized to 227 × 227 × 3 for AlexNet and 224 × 224 × 3 for VGG16. Table 4 shows the details of the input images. Figure 6 shows the examples
of the input images.
In this section, two pre-trained CNN networks, AlexNet and VGG16, are introduced to illustrate the CNN computation process. AlexNet was designed as an eight-layer architecture. VGG16 has a depth that is twice that of AlexNet. According to [11, 12], the depth of a CNN network has a significant impact on model performance.
Table 4.
Details of prepared dataset.
Figure 6.
Examples of cracked and non-cracked surface.
Therefore, by training and testing the prepared dataset with AlexNet and VGG16, the
comparison of network depth to prediction performance and computation cost can be
further highlighted.
1. AlexNet architecture
Figure 7.
Details of AlexNet architecture.
In AlexNet, using ReLU instead of other activation functions effectively alleviates the overfitting problem and improves computation efficiency, especially for larger architectures trained on larger datasets. The local response normalization (LRN) technique [15] is also applied following the ReLUs to reduce the error rate. Moreover, to avoid overfitting, the drop-out technique [16] is applied in the first two fully-connected layers. The dropout ratio was set at 0.5.
AlexNet was trained using SGD. The batch size, momentum [17], and weight decay [18] were set as 128, 0.9, and 0.0005, respectively. The learning rate was set as 0.00001. AlexNet was trained for roughly 90 epochs on NVIDIA GTX 580 3 GB GPUs. As a result, the top-1 and top-5 error rates of AlexNet on the test set reached 37.5% and 17.0%, which was about 10% lower than the best previously reported CNN architecture at that time.
2. VGG16 architecture
VGG16, designed by Karen Simonyan and Andrew Zisserman in 2015, was devel-
oped to investigate the influence of convolution network depth on prediction accu-
racy in larger datasets [19]. Therefore, VGG16 was designed as a deep architecture
with 16 weight layers, including 13 convolution layers and three fully-connected
layers. Convolution layers in VGG16 are presented as five convolution blocks. The
details of the VGG16 architecture are shown in Figure 8.
Figure 8.
Details of VGG16 architecture.
As seen from Figure 8, there are two convolution layers in each of the first two convolution blocks and three convolution layers in each of the following three convolution blocks. The size of all convolution filters is uniformly 3 × 3. All the convolution filters move with a stride of one. The number of convolution filters increases gradually from 64 to 128, 256, and 512 across the five convolution blocks. To preserve information about image boundaries as completely as possible, spatial padding is applied [20]. As with AlexNet, ReLU is applied as the non-linearity function for the convolution and fully-connected outputs to avoid overfitting problems. However, unlike in AlexNet, LRN is not used in VGG16 because the authors stated that LRN has no influence on model performance and increases memory consumption and computation time.
A max-pooling layer follows the last convolution layer in each of the five blocks. The max-pooling filters have a uniform size of 2 × 2 and a stride of two. As with AlexNet, the first two fully-connected layers have 4096 neurons each, and the final layer has 1000 output neurons. The output neurons are activated by softmax. To avoid overfitting problems, the drop-out technique is also applied in the first two fully-connected layers. The dropout ratio is set at 0.5. It can be concluded that the most important novelties of VGG16 compared with AlexNet are: (1) the deeper architecture; (2) the uniform, small convolution filters.
In the training process, the training batch size, momentum, weight decay, and learning rate were set as 256, 0.9, 0.0005, and 0.0001, respectively. As a result, the
top-1 and top-5 errors of VGG16 achieved 24.4% and 7.2%, which is 13% and 9.8%
lower than AlexNet. The result proved that the deep architecture and small convolu-
tion filters have positive influences on CNN performance.
Finally, the prepared dataset mentioned in Section 3.1.1 was used to train and test AlexNet and VGG16, respectively. The training and testing processes were conducted in Kaggle kernels [21]. A Kaggle kernel, provided by the Kaggle community, is a virtual environment equipped with an NVIDIA Tesla K80, a dual-GPU design with 24 GB of GDDR5 memory. This high computing performance enables training and testing processes that are 5-10 times faster than on CPU-only devices. Both AlexNet and VGG16 were trained using SGD. The batch size and learning rate were set as 64 and 0.0001, respectively. To avoid the overfitting problem, dropout was applied at the fully-connected stage, with the dropout probability set as 0.5.
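A minimal PyTorch sketch of this training set-up is shown below. It is not the authors' script: the dataset folder layout, image transforms, and the use of ImageFolder are assumptions, while the optimizer, batch size, learning rate, number of epochs, and the two-class (cracked/uncracked) output follow the values stated in this section.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed folder layout: SDNET2018/train/{cracked,uncracked}/...
transform = transforms.Compose([
    transforms.Resize((224, 224)),   # 227 x 227 for AlexNet, 224 x 224 for VGG16
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("SDNET2018/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = models.vgg16(pretrained=True)    # or models.alexnet(pretrained=True)
# Replace the 1000-class output layer with a cracked/uncracked classifier;
# the default dropout probability in the fully-connected stage is 0.5.
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)   # SGD, learning rate 0.0001

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.train()

for epoch in range(60):                  # training was stopped at 60 epochs
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```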
Python was used to program the computing process. Pytorch library was imported.
The whole computation time of AlexNet was roughly 2 h, and 4 h for VGG16. The
model’s performance in the training and testing datasets is shown in Figures 9 and 10,
Figure 9.
Training loss and accuracy of AlexNet and VGG16.
Figure 10.
Testing loss and accuracy of AlexNet and VGG16.
respectively. The training and testing loss and accuracy values are represented on the vertical axis, while the processing epochs are represented on the horizontal axis. Since the loss and accuracy variation remained consistent after the 60th epoch, and overtraining the model could lead to an overfitting problem [22], the number of training epochs was set to 60.
As shown in Figure 9, both AlexNet and VGG16 converged successfully. The training loss for AlexNet reduced steadily from 0.43 to 0.05 by the 58th epoch and then remained constant in subsequent epochs. Similarly, by the 58th epoch, AlexNet's training accuracy increased from 0.85 to 0.98. By the 35th epoch, the training loss for VGG16 dropped from 0.42 to 0.01 and subsequently stayed steady at approximately 0.008-0.01 in the following epochs. By the 34th epoch, the training accuracy of VGG16 increased from 0.85 to 0.99 and then remained at 0.99. The results revealed that VGG16 performed better during the training procedure. VGG16's convergence speed is roughly two times that of AlexNet. VGG16's minimum training loss is 0.04 lower than AlexNet's, while its maximum accuracy is 0.01 higher. It is observed that deeper CNN designs assist in the faster processing of larger datasets, which contributes to producing more trustworthy weight and bias matrices. These results are in accordance with those proposed by [23].
Figure 10 shows the loss and accuracy variations of AlexNet and VGG16 on the testing dataset. The testing loss and accuracy follow the fluctuation tendency of the training loss and accuracy, indicating that neither AlexNet nor VGG16 had overfitting or underfitting problems. VGG16 also outperformed AlexNet in the testing process. AlexNet and VGG16 have minimum testing losses of 0.01 and 0.00003, respectively. AlexNet's maximum accuracy was 0.98, and VGG16's was 0.99. On the testing dataset, VGG16 converges at the 34th epoch, which is nearly 2 times faster than AlexNet.
The confusion matrix of AlexNet and VGG16 is shown in Table 5. It can be shown
that the accuracy scores of AlexNet and VGG16 are nearly identical, indicating that
AlexNet and VGG16 have similar prediction abilities for cracked and uncracked con-
crete surfaces. VGG16 has a precision and recall of 96.5% and 89.6%, respectively,
which is nearly 1% and 5% greater than AlexNet. The results show that VGG16 out-
performs AlexNet for predicted positive variables (cracked surfaces). Meanwhile,
more cracked images from actual datasets can be correctly identified by applying
VGG16. AlexNet and VGG16 have F1-scores of 89.6% and 92.9%, respectively, indi-
cating that the VGG16 model is more robust.
AlexNet VGG16
TP 5242 5579
FN 985 648
TN 36,007 36,040
FP 238 205
Accuracy 0.971204558 0.97991618
Table 5.
Confusion matrix of AlexNet and VGG16.
Figure 11.
Layout of the experiment case.
Figure 12.
Walking path of the inspectors.
Manual inspection time: 13.65 h × 3600 = 49,140 s. Computer-vision-based inspection: taking video (1/0.1 × 15) × 15 + 1/0.1 × 14 = 2390 s; CNN processing (2390 × 24)/100 = 573.6 s.
Table 6.
Time and cost of manual and computer-vision based crack inspection.
Normally, the average walking speed of people between the ages of 20 and 49 is around 1.42 m/s [26]. Considering the time delays of taking videos, the walking speed can be taken as 0.1 m/s. Suppose the walking path follows an S-curve, as shown in Figure 12.
According to [27], the universally accepted frame rate is 24 frames per second (FPS). Suppose the inspector begins to record video while taking the first step. Then the duration of the captured video equals the walking time. The number of input images converted from the captured video can be calculated as 2390 s × 24 FPS = 57,360. According to the testing time of the textbook examples mentioned in Section 3.1, and the study outcomes of [28], the speed of CNN processing is around 100 images per second. Then, the CNN processing time can be calculated as 57,360/100 = 573.6 s. Therefore, the cost of computer-vision-based crack inspection can be calculated as (2390 s + 573.6 s) × ($85.9/3600 s) = $70.7.
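These figures can be checked with a few lines of Python; the walking speed, path lengths, frame rate, processing speed, and hourly rate are the assumed values quoted above.

```python
walking_speed = 0.1        # m/s while recording video
video_time = (1 / walking_speed * 15) * 15 + 1 / walking_speed * 14   # 2390 s along the S-shaped path
frames = video_time * 24   # video converted to images at 24 FPS -> 57,360 images
cnn_time = frames / 100    # CNN processes about 100 images per second -> 573.6 s
hourly_rate = 85.9         # inspector cost per hour

cv_cost = (video_time + cnn_time) * hourly_rate / 3600   # about $70.7
manual_cost = 13.65 * hourly_rate                        # about $1172.5
print(video_time, cnn_time, round(cv_cost, 1), round(manual_cost, 1))
```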
Table 6 summarizes the calculation of the time and cost of manual and computer-vision-based crack inspection. It can be seen that using the CNN-based technique can effectively reduce inspection time and cost. The inspection time decreases from 13.65 to 0.8 h in total, and the inspection cost decreases from $1172.5 to $70.7.
4. Conclusion
Acknowledgements
The authors highly appreciate the full support funding of the full-time PhD
research studentship under the auspice of the Department of Building and Real Estate,
The Hong Kong Polytechnic University, Hong Kong. The authors would like to
express their deepest gratitude to Prof. Heng Li for his guidance. Finally, the authors
would like to acknowledge the research team members (Mr. King Chi Lo, Mr. Qi Kai)
and anyone who provided help and comments to improve the content of this article.
Conflict of interest
List of abbreviations
Author details
© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
References
[20] He K, Zhang X, Ren S, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2015;37(9):1904-1916
[26] Mohler BJ, Thompson WB, Creem-Regehr SH, Pick HL, Warren WH. Visual flow influences gait transition speed and preferred walking speed. Experimental Brain Research. 2007;181(2):221-228
Abstract
1. Introduction
• Uninterested perception of the subject as “not basic” and not relevant to the
disciplines in the specialty
This approach consists of the fact that, along with questions of a theoretical nature, it is obligatory to consider concrete and real data in lectures and seminars on all the topics under study. At the same time, the main methodological principle is the maximum possible use of examples corresponding to the streams of student training.
A practically oriented approach in teaching the subject “Economic theory” is
implemented in the following streams:
• Automotive market
The choice of the automotive market as a topic for practical application is justified by the following circumstances: its widespread use, the abundance of available information, the ability to analyze historically the evolution of the market and the structure of competition, the possibility of tracking the effects of mass production, and the availability of data on prices, production volumes, and technologies.
In addition, this topic correlates with the use of the business game “Formation of
the automotive market” in practical classes.
The subject of the oil and gas market is related to the peculiarities of the Russian
economy and is characterized by the possibility of obtaining various data for analysis,
thus making it possible to analyze the activities of monopolies and oligopolies.
The choice of the information technology market as a topic for practical application is due to its relevance and the widespread use and development of digital technologies. This topic corresponds to the areas of training of one of the
faculties of the university and correlates with the business game “Digital Economy”
developed within the department.
structure of the report should contain a brief summary of the theoretical content of the key aspect of the topic (– min), as well as statistical and other data on the topic (– min). The recommended subjects of reports on all topics coincide with the directions chosen by the department.
In addition to this topic, students are offered other areas in accordance with their
specialty.
The department has developed a number of business games and specific situations
for analysis [].
Therefore, when studying the topic of supply and demand, the business game
“Demand and Supply in the Automotive Market” is used. In a playful way, students
analyze the formation of the automotive market in the United States, supply and
demand factors, market structure, and the strategy of competing firms.
When reviewing the topic of the production factors market, the business game
“Real estate: rent or buy” is used. The game deals with supply and demand in the
labor market of engineers, wage dynamics, and housing market data. Students
analyze the possibility of buying or renting an apartment depending on the level
and dynamics of their future income, other life criteria, and factors of the real estate
market.
The study of the topic “Cost theory of the firm” is also conducted using a business
game. By imitating small businesses, students are divided into subgroups, creating
small enterprises in the field of catering. In doing so, they analyze the main costs,
their structure, and dynamics.
When studying the topics “Fiscal policy of the state” and “Monetary policy of the state,” the business game “State regulation of the automotive market” is used. The game considers the actions of the “AvtoVAZ Bank” as well as the state policy on supporting the car industry and AvtoVAZ’s efforts to attract investments and implement its production program.
When studying the topic “Economic functions of the state,” the business game
“Digital Economy” is used. During the business game, four teams of students inter-
act, representing the positions of the state, business, consumers, and experts, assess-
ing the socioeconomic consequences of implementing programs for the development
of the digital economy.
accounted for more than of all the reports presented by four departments of the
Faculty of Economics and Management).
In particular, students presented reports on the following topics:
Basic controls.
. Test papers:
• The relevant control works are carried out in the form of test tasks and include
tests and tasks.
. Report
• Each student has the right to make only one report during the semester.
• Solving problems and test tasks at the seminar. Points allocated range from
to points.
• speeches on the topic of the report or the topic of the seminar: min.
• discussion of the issues of the topic and answers to control questions: min.
• Total: min.
. Urgency of development
The lectures and seminars are allocated for h. Practically, for each topic, both the
lecture and the practical lesson have one academic hour ( min). Such a structure
of classes assumes a significant change and improvement in the methods of lecturing
and conducting practical classes.
Before the introduction of the electronic testing system, knowledge control was carried out only during practical classes, using paper-based test tasks.
The following shortcomings of this system were revealed:
• limited variants,
• cribbing,
The development of a system of electronic tests was primarily aimed at eliminating these problems.
As the experience of other developers of similar methods and our own estimates have shown, in order to avoid repetition of the questions, it was necessary to form at least test tasks for each topic. Taking into account that all the topics of the course were subject to testing, the total number of generated test tasks was more than . For this purpose, the author of this research work used previously proven test tasks, newly developed ones, as well as tasks taken from other sources, in particular from the websites attnica.ru, i-exam.ru, and fepo.ru, which was time-consuming and laborious work. The formation of the base of test tasks was carried out by a group of three people.
. Software.
To develop the computer tests and questions, the database management system Question Mark Perception was used, which allows questions to be stored and reproduced, as well as results to be generated in various forms of reports for data analysis.
The database management system Question Mark Perception supports different types of questions:
• question with one choice—the subject must choose one correct answer from
several;
• multiple choice questions—the subject must choose at least one correct answer from the proposed ones;
• a question with a Likert scale—the subject chooses one of several values based on
the scale;
• filling in the void—the subject must enter the missing word in a paragraph of the
text;
• selection of words from the drop-down list—the subject must choose the correct
answer from the drop-down list;
• a question about the transfer of objects—the subject must reposition a set of markers on the image;
• a question with a graphic choice—the subject must reposition a marker on the image;
• matrix question—a table in each row of which the subject must choose one answer (column);
• ordering question—the subject must put the options in the correct order.
When setting rules for checking answers, many options are also possible, for example, correct answer, wrong answer, partially correct answer, etc. For each individual answer option, a separate rule needs to be created. It is also possible to set up feedback, for example, a message with comments on the answer.
The evaluation of each question is also configured by the author, and the points
awarded for the answer depend on the specified conditions.
Test events can be taken on any device that has access to the Internet. This allows testing
not only in the classroom, but also to give homework in the form of independent work.
For testing within the disciplines of the Department of Economic Theory, single-choice questions, questions requiring a number to be entered, and questions requiring a word to be chosen from a drop-down list were mainly used. Evaluation is carried out according to the following system: a correct answer to a simple question earns one point, and a correct answer to a problem earns two to four points (depending on the level of complexity).
. Approbation of technology.
The technology of electronic testing was trialed during practical classes in economic theory at the Faculty of Economics and Management, the Humanities Faculty, and in a number of groups of other technical faculties.
Testing was carried out for each topic of the course of economic theory in com-
puter labs during the practical classes immediately after passing the corresponding
topic. For a number of groups, testing on the same topics was conducted on an
extracurricular basis (online).
The analysis of the results of testing on the parameters noted above was carried
out, and the answers of students in computer classes and at home were compared.
A number of test tasks have been adjusted.
There was a slight repetition of the questions and some unevenness in the com-
plexity of some tests.
It turned out that the percentage of failure to perform tests in extracurricular time
for various reasons (including technical ones) was about . This made it possible
to compile a real timetable for retesting on the examination week and to estimate the
scope of retesting with a larger number of students.
The total number of students tested in each academic year was more than a thou-
sand people. On average, about – of students tested outside the classroom had
problems during testing for technical reasons. They were promptly retested.
At present, the system is fully debugged.
• The training schedule, according to which the practical classes are held in a week,
and therefore, a high efficiency is required in the preparation for the conduct of a
business game
The time for conducting a business game is one to two academic hours; therefore,
strict requirements are imposed on the representativeness, volume, and quality of the
information provided.
The departments are focused on the development of online courses and other
products using the university’s technological and organizational resources.
The St. Petersburg Electrotechnical University “LETI” has developed a strategy in
the field of distance learning and distance learning technologies.
The strategy is a formalized set of approaches consistent with the develop-
ment priorities of the university, on the basis of which an action plan is imple-
mented to saturate the educational process with information and communication
technologies [].
The technical infrastructure is:
• systems for monitoring and managing access to resources, alarm, and video
surveillance systems.
The Center for New Educational Technologies and Distance Learning and the
Department of Educational Programs are responsible for the development of the
online education system at LETI [].
During the implementation, the following organizational technology was used:
• At the beginning of the semester, consultative meetings were held with all the
students (in groups and on streams) on the organization of the educational
process. Students were provided with information about the procedure for
registering for online courses, the training schedule, types of classes, reporting,
and the assessment system.
• During the semester, several times a week, mailing lists were sent with informa-
tion about the opening of new course materials and deadlines for completing
tests and handing in practical assignments.
• On a weekly basis, the Dean’s offices were provided with detailed information
on the development of the online course by students. At the end of the semester,
all the students passed the final certification for their courses in the format of
offline proctoring.
The following models of embedding distance learning technologies are assumed []:
• flipped learning;
When developing online courses, there is a need for their expertise. The university
has the following system of peer review, which includes the following two stages:
• assessment of the compliance of the course structure and its content with the
goals and objectives in the development of the academic discipline for which
the online course is being created;
Before creating the online course “Economics,” the following main issues were
analyzed:
• technical capabilities.
• what the length of the course and its volume will be,
As a result, it was decided that the course would be recorded by an employee of the
department who would work together with the author and producer of the course. In
the case described herein, the producer was also a member of the department.
The online course is seen as an addition to the classroom.
The course contains topics and corresponds to the program of the discipline
“Economic theory.” The course structure includes video content and testing based on
the results of mastering. The online course developed at the department is an integral
part of the educational process and can be used as an independent material at the
same time.
It should be noted that the development of the course required a significant
amount of time. In addition, there was a need to master new competencies in content
design and lecture recording.
The most important aspects of online learning for students were: the clarity and
consistency of the presentation of the educational material, the usefulness of the
course for the specialty, and the pleasure of attending the course.
Main directions of further work may include but not be limited to the following
areas:
• technologies;
Author details
Valeryi Semenov
Department of the Economic Theory, Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg, Russia
© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
References
Chapter
Abstract
Management scholars should further study the scientific area concerning the
contingent effects of learning capability and organizational innovations on the
relationship between quality management and organizational performance. This chapter
examines the interactive effects of quality management with organizational learning
capability and innovations on organizational performance. Indeed, it may be argued
that within quality management theory and methodology, the need to consider the
contingency approach may result in an in-depth understanding of how the intersec-
tion of constituent elements associated with quality management influences organi-
zational performance. Results revealed that the interaction of quality management
and learning capability explained higher variance in organizational performance than
the direct effect of quality management on performance. Similarly, interactions
between quality management and innovations explained more significant variance in
organizational performance than the direct effect of quality management on perfor-
mance. Outcomes showed that quality management might not directly impact orga-
nizational performance. Findings underscore the importance of interactive effects of
innovation and organizational learning capability with quality management in
explaining the relationship between quality management and organizational
performance.
1. Introduction
Figure 1.
Macro model.
corporate performance [9, 22]. Previous researchers have argued that a positive association between innovation and organizational performance tends to be contingent on a flexible structural design that facilitates subunit innovations and interconnectedness, decentralized decision-making, and accumulated organizational learning [13, 30–32]. According to Singh and Smith [33], quality management practices promote an organic environment within organizations that is conducive to innovation and high levels of learning. Such an organic structural design promotes employee interactions and cross-functional links. Furthermore, the organic structural design creates greater flexibility [34], which facilitates the speed and extent of innovations and timely adaptation to changes in the firm’s industry environment.
Moreover, quality management practices that promote the timely introduction of products and services to the marketplace can lead to competitive advantage and high organizational performance [8]. Similarly, an entrepreneurial mindset within organizations tends to be a key factor in technological and product innovations. Furthermore, an entrepreneurial mindset enables managers to respond to environmental changes by reallocating valued resources within the organization toward new products and services, enhancing corporate performance [22, 30, 35, 36].
Finally, quality management creates a culture of collaborations and exchanges of
new ideas as employees interact within each function and cross-functionally.
Researchers must identify the interrelationship among quality management, learning
capability, and innovations to realize a deeper understanding of how employee
interaction may lead to higher organizational learning capability and innovations.
Furthermore, research studies should explore the interactive effects of quality
management, learning capability, and innovation on organizational performance.
Given the above, this study hypothesizes the main and intersection effects between
integrated quality management, organizational learning, and innovations in the
following manner:
H1: There will be a positive and significant relationship between quality management and organizational performance.
H1a: There will be a positive relationship between quality management and organizational learning.
H1b: There will be a positive relationship between quality management and innovation.
H2: There will be a positive relationship between organizational learning and
organizational performance.
H3: There will be a positive relationship between innovation and organizational
performance.
H4: The interactions between quality management and organizational learning
positively influence the relationship between quality management and
organizational performance.
H5: The interactions between quality management and innovation positively influ-
ence the relationship between quality management and organizational performance.
3. Methodology
Data. The data used in this study were collected by the survey method. The survey
was carried out during the year 2015 and provided information on Iran’s food business
A survey method was used for all the variables in the present study. Respondents
were asked to indicate their levels of agreement with descriptive statements using a 5-
point Likert scale (range, 1 = strongly disagree to 5 = strongly agree).
Quality management. To measure the effectiveness of integrated quality manage-
ment, following the study by Vanichchinchai and Igel [37] and Coyle-Shapiro [38],
the present research employed the following seven variables:
• employee involvement
• continuous improvement
• customer focus
• supply management
Congruent with the previous research in contingency theory [8], the present
research considers quality management as an integrated organizational strategy. As
such, the study used structural equation modeling to explore the independent and
interaction effects of integrated quality management, innovations, and organizational
learning on organizational performance. For parsimony, and to reduce the number of
relationships, a hierarchical component model was created. Model I (Table 1,
Figure 1) shows the results of the structural equation modeling analysis of the hierarchical component model, with standardized regression weights showing the association of integrated quality management with organizational learning capability, product and service innovations, and organizational performance. The hierarchical analysis of Model I also shows the relationship of each of the four constructs in this study with its sub-constructs.
4. Analysis
Table 1.
Results of structural equation modeling-model I.
For the accuracy of the constructed model, and to make sure the data provide an accurate and reliable representation of the population under study, the Kolmogorov-Smirnov (KS) test was performed [40, 41]. Table 2 shows that the data for all four variables are normally distributed.
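A hedged SciPy sketch of this normality check is shown below; the variable names, sample size, and generated data are placeholders, since the original survey responses are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder 5-point-scale style scores for the four constructs
scores = {
    "quality_management": rng.normal(3.8, 0.6, 250),
    "learning_capability": rng.normal(3.6, 0.7, 250),
    "innovation": rng.normal(3.5, 0.7, 250),
    "performance": rng.normal(3.7, 0.6, 250),
}
for name, x in scores.items():
    # One-sample KS test against a normal distribution fitted to the sample
    stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{name}: KS={stat:.3f}, p={p:.3f}")   # p > 0.05 -> normality not rejected
```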
Table 2.
One-sample Kolmogorov-Smirnov analysis.
• customer focus
• employee participation
c
Derived factors
Table 3.
Factor analysis of quality management scales.a
Table 4.
KMO and Bartlett's test of the quality management variable (df = 300, sig. = 0.000).
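For readers who wish to reproduce this type of sampling-adequacy check, a short sketch with the Python factor_analyzer package is shown below; the item file and its column layout are assumptions, not the study's actual data.

```python
# Sketch using the factor_analyzer package (assumed item-level DataFrame `items`)
# to compute the kind of KMO and Bartlett's sphericity statistics reported in Table 4.
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("quality_management_items.csv")  # hypothetical item-response matrix

chi_square, p_value = calculate_bartlett_sphericity(items)  # tests whether the correlation matrix is an identity matrix
kmo_per_item, kmo_overall = calculate_kmo(items)            # sampling adequacy; values above about 0.6 are usually considered acceptable

print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.2f}")
```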
Table 5.
Results of the first-order and second-order confirmatory factor analysis of integrated quality management (standardized loadings for the supply management, customer focus, and employee involvement items; e.g., employee training and encouragement to participate in company programs, 0.57; employee suggestions about improving the supply chain, 0.96).
provide input about the supplier selection based on the quality of services and prod-
ucts (B = 0.96).
Table 6.
Factor analysis of organizational learning scales.
Table 7.
KMO and Bartlett's test of the organizational learning variable (df = 120, sig. = 0.000).
Table 8.
Results of the first-order and second-order confirmatory factor analysis of organizational learning (standardized loadings for the management commitment and open experimentation sub-constructs).
Table 9.
Factor analysis of organizational innovation scales.
Table 10.
KMO and Bartlett's test of the innovation variable (sig. = 0.000).
continuous process innovation (B = 0.78). Findings also revealed that top managers
coordinated subunits' efforts to enhance overall organizational innovations (B = 0.77).
Analysis of subset variables and their relationship with quality management.

Table 11.
Results of the first-order and second-order confirmatory factor analysis of organizational innovation (e.g., utilizing novel ideas to improve product quality and speed of delivery, standardized loadings 0.88 and 0.78).

Products and services innovation. Results for the subset variables of the innovation
dimension (Table 11) reveal that executives place strategic importance on the first-mover
advantage and faster generation of new products and services compared to other
rivals (B = 0.94). Furthermore, the first-mover advantage enabled the organization to
present customers with products and services that best served their needs compared
to other rivals in the marketplace (B = 0.92), at a higher rate of market presentation of
innovative products compared to other rivals (B = 0.79). Results also indicated that as
a first-mover strategy, top managers placed strategic emphasis on R&D and allocated
greater resources toward research and development (B = 0.72). Congruent with results
presented in the learning capability segment, a flexible and lateral structural design and
greater cross-functional communication and knowledge sharing reduced the process
costs associated with higher production improvements and efficiency compared with
other competitors (B = 0.82) and supported generating new products and services for
customers (B = 0.75).
Innovation performance. Findings reveal that designing a lateral, flexible organizational
structure was highly correlated with innovations in the organization (B = 0.89) and
enabled subunits to transform novel ideas into products and services and present
them to the marketplace in a timely fashion (B = 0.88). Moreover, resources were
allocated and reallocated cross-functionally (B = 0.68), with lower costs and more
efficiency (B = 0.78). Findings also indicated that top managers focused on human
resource development and management (B = 0.81) and acquired high-quality resources
for the production processes (B = 0.80).
Organizational innovation. The results of the analysis of innovation showed that
there are two important aspects of organizational innovation. The financial aspect
indicated that innovation leads to a reduction in costs per unit (B = 0.84).
Moreover, innovation enhances employee productivity (B = 0.79), efficient
cross-functional resource allocation (B = 0.77), and prospects of healthier finances
(B = 0.79).
Table 12.
Factor analysis of organizational performance scales.
Table 13.
KMO and Bartlett's test of the organizational performance variable (sig. = 0.000).
Table 14.
Results of the first-order and second-order confirmatory factor analysis of organizational performance (standardized loadings for the employee satisfaction and customer satisfaction sub-constructs).
Macro model. As shown in Table 1 (Figure 1), the standard regression weight for
the overall model indicated a positive and significant relationship between main vari-
ables, quality management, organizational learning, and innovations. According to the
results, organizational integrated quality management is positively and significantly
associated with organizational learning capability (B = 0.95, p < 0.05). Similarly, results
showed a positive and significant relationship between innovation performance and
integrated quality management (B = 0.91, p < 0.05). Results indicated that when parsing the main effects of
learning capability and innovation performance, the association between quality
management and organizational performance remains positive but statistically non-
significant (B = 0.43, n.s.) and does not explain significant variance (R2 = 0.18) in
organizational performance. A detailed analysis revealed that organizational learning
capability is positively and significantly associated with organizational performance
(B = 0.58, p < 0.05). Furthermore, innovation performance, according to findings, is
also positively and significantly associated with organizational performance (B = 0.62,
p < 0.05). Findings are congruent with hypotheses H1a and H1b; however, they are
only partially congruent with hypothesis H1.
H1: There will be a positive and significant relationship between quality
management and organizational performance.
H1a: There will be a positive relationship between quality management and
organizational learning.
H1b: There will be a positive relationship between quality management and
innovation.
Findings also supported hypotheses H2 and H3.
H2: There will be a positive relationship between organizational learning and
organizational performance.
H3: There will be a positive relationship between innovation and organizational
performance.
7. Discussion
Author details
Mohsen Modarres
Management and Technology Consulting, Kirkland, WA, USA
© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
References
[1] Raynor EM. The Strategy Paradox. London: Currency Doubleday; 2007

[2] March J. Exploration and exploitation in organization learning. Organization Science. 1991;2(1):71-87

[3] Ireland DR, Hoskisson RE, Hitt MA, Loomis CJ. Can anyone run Citigroup? Fortune. 2008:80-91

[4] Bartlett CA, Ghoshal A, Birkinshaw J. Transnational Management: Texts, Cases and Readings in Cross-Border Management. San Francisco: Irwin-McGraw-Hill; 2004

[5] Douglas TJ, Judge JQ. Total quality management and competitive advantage: The role of structural control and exploration. Academy of Management Journal. 2001;44(1):158-169

[6] Hung RYY, Lien BYH, Yang B, Wu CM, Kuo YM. Impact of TQM and organizational learning on innovation performance in the high-tech industry. International Business Review. 2011;20(2):213-225

[7] Luthans F. Organizational Behavior. 7th ed. New York, NY: McGraw-Hill; 1995

[8] Joiner TA. Total quality management and performance: The role of organization support and co-worker support. International Journal of Quality and Reliability Management. 2006;21(6):617-627

[9] Hana U. Competitive advantage achievement through innovation and knowledge. Journal of Competitiveness. 2013;5(11):82-96

[10] Senge PM. The Fifth Discipline: The Art & Practice of the Learning Organization. New York: Currency Doubleday; 1990

[11] Corredor P, Goñi S. TQM and performance: Is the relationship so obvious? Journal of Business Research. 2011;64:830-838

[12] Porter ME. What is strategy? Harvard Business Review. 1996;74(6):61-78

[13] Wheelen T, Hunger D, Hoffman A, Bamford C. Strategic Management and Business Policy: Globalization, Innovation and Sustainability. 14th ed. USA: Pearson; 2015

[14] Gomez-Gras JM, Verdu-Jover AJ. TQM, structural and strategic flexibility and performance: An empirical research study. Total Quality Management and Business Excellence. 2005;16(7):841-860

[15] Hasan M, Kerr RM. The relationship between TQM practices and organizational performance in service organizations. The TQM Magazine. 2003;15(4):286-291

[16] Powell T. Total quality management as competitive advantage: A review and empirical study. Strategic Management Journal. 1995;16(1):15-37

[17] Westphal JD, Gulati R, Shortell SM. The institutionalization of total quality management: The emergence of normative TQM adoption and the consequences for organizational legitimacy and performance. Academy of Management Proceedings; August 1996. pp. 249-253

[18] Demirbag M, Tatoglu E, Tekinkus M, Zaim S. An analysis of the relationship between TQM
Artificial Intelligence Deployment to Secure IoT in Industrial Environment

Abstract
Performance enhancement and cost-effectiveness are critical factors for most
industries. Performance and cost metrics vary across industrial sectors; however,
cybersecurity must be maintained, since most of the 4th industrial revolution (4IR) is
technology-based. Internet of Things (IoT) technology is one of the 4IR pillars that
supports enhancing performance and reducing cost. Like most Internet-based
technologies, IoT has some security challenges, mostly related to access control and
exposed services. Artificial intelligence (AI) is a promising approach that can enhance
cybersecurity. This chapter explores industrial IoT (IIoT) from the business view and
the security requirements. It also provides a critical analysis of the security challenges
faced by IoT systems. Finally, it presents a comparative study of the advisable AI
categories to be used in mitigating IoT security challenges.
1. Introduction
The 4th Industrial Revolution (4IR) is the current era in which industry is driven by
technology. It encourages cooperation between scientific knowledge and experience
on the one hand and the business mindset and requirements on the other. The key
technologies that sustain 4IR are additive manufacturing techniques, autonomous and
collaborative robotics, the Industrial Internet of Things (IIoT), big data analytics, and
cloud manufacturing techniques [1]. Current scenarios show the benefits of IIoT in
improving quality of service (QoS) in industries, starting from predictive maintenance,
reaching remote control of assets, and deploying the Digital Twin concept, which
virtualizes the operations environment and permits the owner to be proactive when
any anomalies are detected [2]. Even though IIoT adds value to the traditional
industry, there should be a balance between the operational benefits and the security level.
Aims and objectives
• To analyze various IIoT threats and security challenges and existing mitigation
techniques
2. Background
[3] 5 layers: devices, edge computing, fog computing, cloudlets, cloud computing
[6] 4 layers: fog network consisting of the IoT layer, mist, and cloudlet/edge layer, plus the cloud
[7] 4 layers: sensors and systems layer, far-edge layer, near-edge layer, cloud layer

Table 1.
IoT ecosystem architecture comparison.
as an Arduino platform. The network layer can be divided as well into two sub-layers
based on the communication characteristics such as the speed and bandwidth: fog
computing and cloudlet. The third layer is the cloud computing layer. Figure 1 illus-
trates the authors’ insights into IoT architecture after studying the literature. Layer
one consists of IoT devices, layer two covers all networking related technologies and
devices, and the third layer consists of cloud computing and related data analytics
technologies.
The IoT layers are connected through networking media using wireless or wired
connections. However, the evolution of wireless technology is critical to extending IoT
deployment, as energy and processing-capacity constraints become more severe at the
sensor layer [6]. The emergence of 5G in wireless communication adds an advantage to
the IoT architecture, since it improves performance by allowing more data to be
transferred in less time, which reduces service latency and enhances real-time access
to data [6, 8].
The growing use of Internet of Things (IoT) technology in the industrial sector has
posed new issues for device and data security. Based on different world statistics,
the number of devices connected to IoT networks is rapidly increasing. This expansion
exposes different levels of vulnerabilities, which may in turn cause an increase in
security threats and challenges. Security may be regarded as a major threat that limits
the deployment of IoT systems. As a result, it is the authors' view that effective
security practices may become more vital in the IoT industry.
The National Institute of Standards and Technology (NIST) designed programs to
boost cybersecurity involvement in IoT [9]. This initiative promotes the development
and implementation of cybersecurity standards, guidelines, and tools for IoT prod-
ucts, connected devices, and their deployment environment.
Figure 1.
IoT architecture.
◦ The Internet of Things (IoT) may include mobile devices that demand
adaptability, posing security vulnerabilities.
◦ IoT also generates a vast amount of data, which is referred to as Big data. The
latter has its own set of security and management concerns.
Table 2.
IoT security attributes, techniques and requirements.
• Machine Learning (ML): The intelligence behind ML is the ability to learn.
ML involves adaptive mechanisms; therefore, it is considered the basis of
adaptive systems. In this context, ML detects and extrapolates patterns by
adapting to new circumstances. This learning process can be based on experience,
examples, or analogy; therefore, ML has three sub-categories, as follows:
• Expert System (ES): The Expert System (ES) deals with uncertain knowledge
and reasoning. A rule-based ES consists of five basic components, shown in
Figure 3: the knowledge base, the database, the inference engine, the explanation
facility, and the user interface. ES intelligence resembles the way expert humans apply
their knowledge and intelligence to solve problems in a narrow domain. An ES
processes knowledge in the form of rules and uses symbolic reasoning to solve the
problem. The main difference between an ES and conventional programs (CP) is
that a CP processes data using algorithms with well-defined operations to solve a
problem in a general domain. Examples of ES are as follows:
Table 3.
Artificial intelligence (AI) main categories.
Figure 2.
Examples of ML sub-categories mechanisms.
Figure 3.
Rule-based expert system (ES), adapted from [15].
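To make the rule-based ES components above concrete, a minimal sketch in Python is shown below; the rules and facts are invented for illustration, and the explanation facility and user interface are omitted.

```python
# Minimal sketch of a rule-based expert system: a knowledge base of rules,
# a database of observed facts, and a forward-chaining inference engine.
# Rules and facts are illustrative only.
STRIDE_RULES = [
    # (conditions that must all hold, conclusion added to the fact base)
    ({"failed_logins_high", "new_device"}, "suspicious_access"),
    ({"suspicious_access", "off_hours"}, "raise_alert"),
]

def infer(facts):
    """Forward chaining: keep firing rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in STRIDE_RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Example "database" of observed facts for one access attempt
facts = {"failed_logins_high", "new_device", "off_hours"}
print(infer(facts))  # includes 'suspicious_access' and 'raise_alert'
```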
The term business model describes how an organization creates, delivers, and
captures value [20]. The adoption of IoT technologies in an organization will most
certainly affect the business relationships and the business model for that organiza-
tion. In this section, the common business models used will be discussed.
One of the early initiatives to develop an IoT business model was published in 2015
[21]. The research focused on identifying the relevant building blocks that can fit in
IoT business models, as well as the types and importance of the building blocks. This
framework identified value proposition as the most important building block for IoT
business models. The entities "customer relationships" and "key partnerships"
followed suit in terms of importance.

Figure 4.
Example of industry utilizing IoT technology [3].
Another conceptual IoT Business Model is the AIC (Aspiration, Implementation
and Contribution) model presented in [22], which focuses on context-specific imple-
mentation of IoT. This model consists of three interconnected phases: Aspiration,
Implementation, and Contribution. The first phase, Aspiration, focuses on defining
and predicting the value creation through adoption of IoT. The second phase,
Implementation, includes strategy development, in which an organization should
investigate how IoT will improve the business by gaining competitive advantage or
creating enhanced products or services. In the third phase, Contribution, an organization
opting for IoT should study the practicality of the approach and the capabilities and
resources available to implement IoT. In other words, does the organization own the
knowledge and skills needed to succeed in implementing IoT?
Four types of IoT-enabled servitized business models were classified in [23]. Each
business model was analyzed from three perspectives: the role of IoT, the firm's
benefits, and the inhibiting factors. Table 4, adapted from that study, presents the four
types of IoT business models and compares them based on these three perspectives.
The four business models share some features: the common role for IoT is adaptation,
the common benefit is reducing operation cost, and the common inhibiting factor is
the need for a close relationship between the different stakeholders.
IoT business models vary based on the type of deployment. Therefore, each
industry has a different model that will fit with its value proposition. Seven IoT
business models were reviewed by the researchers in [24]. Based on their analysis, six
characteristics of the IoT business model were identified:
• The ability to map the value flows that involve revenue, costs, and assets
• The ability to balance between the actions and widening the rational thinking

Sharing business model (example row): the role of IoT is adaptation and smoothing; the firm's benefits are improved service offerings, increased resource utilization, and reduced operation costs; the inhibiting factors are the need for new ways of interacting with customers and for a close relationship between different stakeholders in the network.

Table 4.
Business model categorization based on role, benefits, and inhibiting factors.
Given the potential impact and IoT devices’ prevalence and ubiquity, one needs to
understand how to leverage IoT technologies to realize the value-deriving benefits
associated with them. For example, IoT can be used in the factory setting to make
various processes more efficient. The IoT applications have noteworthy potential in
value creation in terms of operation optimization and predictive maintenance. This
can be achieved by monitoring, remotely tracking and adjusting the machineries,
based on sensor data from different parts of the factory. It has been estimated that IoT
has a potential to create value of $1.2 trillion to $3.7 trillion per year in 2025 by
optimizing factory settings. This improvement in the working efficiency using IoT
may also induce some security and privacy issues [25]. Moreover, technology does not
automatically bring added convenience or value unless firms carefully consider the
context into which it is introduced and how to derive any practical or monetary
benefits. Mostly, add-value is related to performance enhancement. The latter can be
improved through a variety of factors such as time saving, cost saving, and low
processing overhead, to name but a few.
Table 5 shows some recent empirical research [26–31] on how to mitigate security
challenges in an IoT industrial environment and the different add-value. AI approaches
are used most in access control, which relates mostly to the network layer of IoT.
Access control is a critical part of the system; it acts as a door for the factory,
controlling authorized access to the resources and the level of privileges. Due to the
heterogeneous and dynamic nature of IoT networks, it is beneficial to use AI
approaches to enhance access control, as sketched below.
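One hedged illustration of AI-assisted access control is the sketch below, which trains a small classifier to decide allow/deny from request context; the features, training data, and labels are assumptions for illustration only, not taken from any of the cited systems.

```python
# Sketch: a supervised classifier as one way AI may support access control
# in a heterogeneous IoT network. Feature names and data are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# features per request: [hour_of_day, device_trust_score, requests_last_minute, is_known_subnet]
X_train = [
    [9, 0.9, 3, 1],
    [14, 0.8, 5, 1],
    [3, 0.2, 40, 0],
    [2, 0.1, 80, 0],
]
y_train = ["allow", "allow", "deny", "deny"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(clf.predict([[4, 0.3, 25, 0]]))  # context-aware decision for a new request
```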
The IoT add-value is constrained by several challenges and barriers. These can be
categorized into groups based on their domain as follows:
• Human limitations
• Technology limitations
[31] Network layer: a deep learning methodology for detecting cyberattacks; improves the detection accuracy of IDS.

Table 5.
Examples of AI usage in security mitigation approaches based on IoT layer.

• Business limitations
• difficulty in designing business models for the IoT due to a multitude of different
types of connected products
• ecosystems are unstructured since it is too early to identify stakeholders and their
roles
Uncertainty about how IoT will impact existing business models, organizational
strategies, and return on investment is considered a significant barrier to
implementation, where the add-value should be clearly identified.
The most critical step is step 2, aimed at exposing the vulnerabilities and security
challenges of the IoT systems. After properly classifying the threats, it will be possible
to explore the mitigation techniques. For classifying threats in an information system,
Microsoft introduced the STRIDE (Spoofing, Tampering, Repudiation, Information
disclosure, Denial of Service and Elevation of privilege) threat model [32]. Counter-
measures are recommended and evaluated for each threat. The application of STRIDE
for threat modeling in Industrial IoT (IIoT) has been studied before, as discussed in
[33, 34], which also describe the adaptation of STRIDE for the Azure IoT reference
architecture. After discovering threats, these should be rated according to their
severity using appropriate tools. The use of the DREAD (Damage, Reproducibility,
Exploitability, Affected Users, Discoverability) model, one of the commonly used tools
to assign ratings to threats, is mentioned in [35].
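A minimal sketch of how a STRIDE-classified threat might be rated with DREAD is shown below; the threat entry, the 1-3 scale, and the simple averaging rule are illustrative assumptions rather than the models' official tooling.

```python
# Sketch: classifying a threat under STRIDE and rating it with DREAD factors.
# The threat and its scores are invented for illustration.
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average the five DREAD factors (here on a 1-3 scale) into one severity rating."""
    return (damage + reproducibility + exploitability + affected_users + discoverability) / 5

threat = {
    "asset": "IIoT gateway",
    "category": "Denial of service",  # one of the STRIDE categories
    "severity": dread_score(damage=3, reproducibility=2, exploitability=2,
                            affected_users=3, discoverability=2),
}
assert threat["category"] in STRIDE
print(threat)  # severity 2.4 on the 1-3 scale -> prioritize mitigation
```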
Generally, each IoT system has a multi-layered architecture. These layers make use of
diversified technologies, which introduce a
plethora of challenges and security threats. As a result, the architecture of the IoT
system plays a significant role in identifying the threats and attacks. However, there is
no specific standard architecture because most of the IoT solutions are application-
specific developed with explicit technologies, resulting thus in heterogeneous and
fragmented architectures.
A secured IoT network architecture that uses Software-Defined Networks (SDN) to
identify threats was proposed in [36]. The work also summarizes how IoT network
security can be achieved in a more effective and flexible way using SDN.
Furthermore, studies, reviews, and analyses were conducted on some existing IoT
architectures, and a new architecture was proposed based on them [37]. This new
architecture includes many of the key elements of the other architectures, while
fostering a high degree of interoperability across diverse assets and platforms.
Among the several IoT architectures reviewed in [38], the four-layer architecture
(Application, Transport, Network, and Perception layers) is the one most often
considered by researchers to address security challenges and solutions at each layer.
Moreover, the most used IoT architectures are often three-tier/layer systems,
including a perception/hardware layer, a network and communication layer, and an
application interfaces and services layer. Additionally, the Open Web Application
Security Project (OWASP) [39] identified attack vectors using the three layers of an
IoT system: hardware, communication links, and interfaces/services layers. Thus, as
shown in Figure 5, the implementation of IoT security mitigation techniques should
include a security architecture that covers all layers of the IoT architecture [40].
According to the IoT security architecture, there are security issues and concerns at
each of the three IoT layers. Because of their relative positions in the architecture,
each of these layers has its own set of security needs. However, because they are all
interconnected, if one is compromised, the others may suffer as well. The goal of IoT
security is to protect customers' privacy, confidentiality, data integrity, infrastructure,
and IoT device security, as well as the availability of the services. The following
subsection discusses the IoT security issues and threats at each layer.
Figure 5.
IoT security architecture [40].
As in any other system, confidentiality, integrity, AAA, availability, and non-
repudiation are some general security goals and requirements, as already stated in the
previous sub-section. This section discusses some of the most frequent threats and
attacks at each IoT layer that might affect at least one of these criteria. Table 6
provides an overview of the classification of the threats at each IoT layer and some
proposed solutions corresponding to these threats [41–44].
Node Capture: An attacker takes control over a key node, such as a gateway node, to use its resources. Proposed solution: authentication and access control.

Fake Node and Malicious Data: The attacker modifies the system by adding a node and injecting bogus data. The created node drains vital energy from genuine nodes and may gain control of them, thus destroying the network. Proposed solution: authentication and access control.

Network layer:

Sybil Attack: The attacker controls and changes a node so that it presents multiple identities, compromising a large portion of the system and producing misleading information about redundancy. Proposed solution: trusted certificates based on a central certification authority.

Sinkhole Attack: The attacked node appears as a strong node, so other nearby nodes and devices prefer it for communication or as a forwarding node for data routing; it thus acts as a sinkhole, attracting all traffic. Proposed solution: intrusion detection system and strong authentication techniques.

Denial of Service (DoS) Attack: The targeted system's resources are exhausted by an attacker's flood of useless traffic, rendering the network inaccessible to its users. Proposed solution: configuring a firewall that denies ping requests, or using AES encryption.

Man-in-the-Middle Attack: The attacker pretends to be the original sender, making the recipient believe that the message came from the legitimate party. Proposed solution: high-level encryption and digital signatures.

RFID Spoofing: These attacks transfer malicious data into the system by gaining access to the IoT system; RFID spoofing, IP spoofing, and other spoofing attacks in IoT systems are examples. Proposed solution: RFID authentication protocols.

Application layer:

Malicious Code Attacks: The attacker executes malicious scripts or code, first inserting the malicious code into the system and then stealing data from the user by executing these scripts. Proposed solution: firewall with run-time inspection.

Cross-Site Scripting: Client-side scripts, such as JavaScript, may be injected into a trusted website by an attacker, who can then alter the application's content and illegally use original data. Proposed solution: validating user input and the input accepted by the web page.

Botnet: Using a botnet, the hacker may take over a network of devices and control them from a single access point. Proposed solution: using a secure router encryption protocol, such as WPA2.

SQL Injection: SQL script is used to log into the IoT devices and applications. Proposed solution: programming the login page using parameterized statements.

Table 6.
Common IoT threats, description, and solutions.
improved through AI approaches to predict future threats. The researchers point out
generative adversarial networks (GANs), which use a generator and a discriminator.
The generator adds synthetic samples to the real data, whereas the discriminator's
purpose is to separate the fake samples from the original data. The suggested AI-based
solutions are of the data-driven type: support vector machines (SVM), neural
networks (NN), artificial neural networks (ANN), and recurrent neural networks
(RNN).
A framework has been proposed in which an AI-based reaction agent is introduced [52].
The security enhancement combines two intrusion detection systems: knowledge-based
and anomaly-based. For network pattern analysis, Weka was used as the data mining
tool, NSL-KDD as the dataset source, and the distributed JRip algorithm as the machine
learning method for security enhancement. For the anomaly-based IDS, the dataset is
collected from real sensor data and the model uses the Python scikit-learn library.
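The anomaly-based side can be sketched with scikit-learn, as the framework mentions; the sensor features, data, and contamination rate below are assumptions for illustration, not the setup of [52].

```python
# Sketch: anomaly-based IDS over sensor readings using an Isolation Forest.
# Features (temperature, current) and the contamination rate are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=[25.0, 1.2], scale=[1.0, 0.1], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_readings)

new_batch = np.array([[25.3, 1.18],   # looks like the learned profile
                      [48.0, 3.90]])  # far outside the learned profile
print(detector.predict(new_batch))    # 1 = normal, -1 = flagged as anomalous
```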
The main finding of [53] is that AI can be used for IoT security mostly in intrusion
detection systems (IDS), to analyze the traffic and learn the characteristics of the
attack. The Naïve Bayes algorithm is commonly used to classify attack data, under the
assumption that the features originate from independent events.
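A small sketch of this Naïve Bayes idea is given below, with made-up traffic features rather than any dataset used in [53].

```python
# Sketch: Gaussian Naive Bayes treats features as conditionally independent
# when classifying traffic flows. Feature values and labels are invented.
from sklearn.naive_bayes import GaussianNB

# features per flow: [packets_per_second, mean_packet_size, distinct_dest_ports]
X_train = [[20, 500, 2], [15, 450, 3], [900, 60, 1], [1200, 64, 1], [30, 300, 45]]
y_train = ["benign", "benign", "dos", "dos", "port_scan"]

nb = GaussianNB().fit(X_train, y_train)
print(nb.predict([[1000, 62, 1], [25, 480, 2]]))  # expected: dos, then benign
```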
A two-tier framework is proposed by [54] for embedded systems such as an IoT
system. The security mitigation improves the traditional host-based IDS. The machine
learning approach used is a pipeline method in which a set of algorithms is involved,
allowing flexibility in adjusting the ML processing and the link between the different
tiers.
From a comprehensive survey published by [55], it has been found that high-level
encryption techniques are not advisable to implement in IoT systems due to resource
limitations. Therefore, the AI approach is a strong candidate to enhance security in
IoT systems in addition to the other existing network security protocols. Owing to the
nature of the IoT layered architecture, each layer has its specific security threats. It
has been noticed that machine learning approaches are more widely adopted than
knowledge-based expert systems.
Table 7.
AI branches used in IoT security solutions.
Another study published by [56] suggests that machine learning-based security
approaches are used mostly to enhance the detection mechanism of IDS. The only
approaches that provide mitigation features are based on techniques that utilize deep
learning, such as Gaussian mixture, SNN, FNN, and RNN, or supervised machine
learning such as SVM. Table 7 [45, 51–58] shows that machine learning is the branch
most used in IoT security mechanisms, as there is a huge amount of data to learn from.
As per the literature, AI-based methods are recommended to enhance protection
against IoT attacks. However, most of them are not yet commercialized due to the
difficulty of their implementation. The focus of proposing different IoT security
mitigations is to introduce high-performance, low-cost approaches for real-time
environments. Moreover, dataset preparation is a critical factor that affects the
accuracy and efficiency of machine learning approaches.
6. Conclusions
Acknowledgements
We would like to extend our appreciation to the Ministry of Higher Education,
Research and Innovation for funding this research through the block funding program.
This chapter aims to contribute to and further foster the quality of research at the
University of Technology and Applied Sciences in Oman. We extend our gratitude to
the reviewers for their insights on the submitted manuscript, which greatly improved
the chapter.
Glossary of terms
AI Artificial Intelligence
ML Machine Learning
QoS Quality of Service
ES Expert System
B2B Business to Business
STRIDE Spoofing, Tampering, Repudiation, Information disclosure, Denial
of Service and Elevation of privilege
DREAD Damage, Reproducibility, Exploitability, Affected Users,
Discoverability
SDN Software-Defined Networks
OWASP Open Web Application Security Project
Digital Twin: A digital twin is a virtual representation of an object or system that
spans its lifecycle, is updated from real-time data, and uses simula-
tion, machine learning and reasoning to help decision making.
Author details
© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
References
[1] Bécue A, Praça I, Gama J. Artificial intelligence, cyber-threats and industry 4.0: Challenges and opportunities. Artificial Intelligence Review. 2021;54(5):3849-3886. DOI: 10.1007/S10462-020-09942-2/FIGURES/3

[4] Whaiduzzaman M, Mahi MJN, Barros A, Khalil MI, Fidge C, Buyya R. BFIM: Performance measurement of a blockchain based hierarchical tree layered fog-IoT microservice architecture. IEEE Access. 2021;9:106655-106674. DOI: 10.1109/ACCESS.2021.3100072

[5] Chao L, Peng X, Xu Z, Zhang L. Ecosystem of things: Hardware, software, and architecture. Proceedings of the IEEE. 2019;107(8):1563-1583. DOI: 10.1109/JPROC.2019.2925526

[6] Oteafy SMA, Hassanein HS. IoT in the fog: A roadmap for data-centric IoT development. IEEE Communications Magazine. 2018;56(3):157-163. DOI: 10.1109/MCOM.2018.1700299

[7] Arena F, Pau G. When edge computing meets IoT systems: Analysis of case studies. China Communications. 2020;17(10):50-63. DOI: 10.23919/JCC.2020.10.004

[8] Mishra D, Zema NR, Natalizio E. A high-end IoT devices framework to foster beyond-connectivity capabilities in 5G/B5G architecture. IEEE Communications Magazine. 2021;59(1):55-61. DOI: 10.1109/MCOM.001.2000504

[11] Krishna RR, Priyadarshini A, Jha AV, Appasani B, Srinivasulu A, Bizon N. State-of-the-art review on IoT threats and attacks: Taxonomy, challenges and solutions. Sustainability. 2021;13(16):9463. DOI: 10.3390/SU13169463

[12] Azrour M, Mabrouki J, Guezzaz A, Kanwal A. Internet of Things security: Challenges and key issues. Security and Communication Networks. 2021;2021. DOI: 10.1155/2021/5533843

[13] Dorsemaine B, Gaulier JP, Wary JP, Kheir N, Urien P. A new approach to investigate IoT threats based on a four layer model. In: 13th International Conference on New Technologies for Distributed Systems (NOTERE 2016). IEEE; 2016. DOI: 10.1109/NOTERE.2016.7745830

[14] Russell S, Norvig P. Artificial Intelligence: A Modern Approach. Prentice Hall; 2010

[21] Dijkman RM, Sprenkels B, Peeters T, Janssen A. Business models for the Internet of Things. International Journal of Information Management. 2015;35(6):672-678. DOI: 10.1016/J.IJINFOMGT.2015.07.008

[22] Cranmer EE, Papalexi M, tom Dieck MC, Bamford D. Internet of

[28] Tariq U, Aseeri AO, Alkatheiri MS, Zhuang Y. Context-aware autonomous security assertion for industrial IoT. IEEE Access. 2020;8:191785-191794. DOI: 10.1109/ACCESS.2020.3032436

[29] Rathore S, Park JH, Chang H. Deep learning and blockchain-empowered security framework for intelligent 5G-

… Personal Communications. 2022;122(4):3687-3718. DOI: 10.1007/S11277-021-09107-6/FIGURES/19

[45] Restuccia F, D'Oro S, Melodia T. Securing the Internet of Things in the age of machine learning and software-defined networking. IEEE Internet of Things Journal. 2018;5(6):4829-4842. DOI: 10.1109/JIOT.2018.2846040

[46] Hassija V, Chamola V, Saxena V, Jain D, Goyal P, Sikdar B. A survey on IoT security: Application areas, security threats, and solution architectures. IEEE Access. 2019;7:82721-82743. DOI: 10.1109/ACCESS.2019.2924045

[47] Noor MM, Hassan WH. Current research on Internet of Things (IoT) security: A survey. Computer Networks. 2019;148:283-294. DOI: 10.1016/J.COMNET.2018.11.025

[48] Miller D. Blockchain and the Internet of Things in the industrial sector. IT Professional. 2018;20(3):15-18. DOI: 10.1109/MITP.2018.032501742

[49] Giotis K, Argyropoulos C, Androulidakis G, Kalogeras D, Maglaris V. Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechanism on SDN environments. Computer Networks. 2014;62:122-136. DOI: 10.1016/J.BJP.2013.10.014

[50] Wu H, Han H, Wang X, Sun S. Research on artificial intelligence enhancing Internet of Things security: A survey. IEEE Access. 2020;8:153826-153848. DOI: 10.1109/ACCESS.2020.3018170

[51] Puthal D, Mishra A, Sharma S. AI-driven security solutions for the internet of everything. IEEE Consumer Electronics Magazine. 2021;10(5):70-71. DOI: 10.1109/MCE.2021.3071676

[52] Bagaa M, Taleb T, Bernabe JB, et al. A machine learning security framework for IoT systems. IEEE Access. 2020;8:114066-114077. Available from: https://ieeexplore.ieee.org/abstract/document/9097876/ [Accessed: February 04, 2022]

[53] Kuzlu M. Role of artificial intelligence in the Internet of Things (IoT) cybersecurity. Discover Internet of Things. 2021;1(1):1-14. DOI: 10.1007/S43926-020-00001-4

[54] Liu M, Xue Z, He X. Two-tier intrusion detection framework for embedded systems. IEEE Consumer Electronics Magazine. 2021;10(5):102-108. DOI: 10.1109/MCE.2020.3048314

[55] Zaman S et al. Security threats and artificial intelligence based countermeasures for Internet of Things networks: A comprehensive survey. IEEE Access. 2021;9:94668-94690. Available from: https://ieeexplore-ieee-org.masader.idm.oclc.org/document/9456954/ [Accessed: February 05, 2022]

[56] Jayalaxmi P, Saha R, Kumar G, Kumar N, Kim TH. A taxonomy of security issues in industrial Internet-of-Things: Scoping review for existing solutions, future implications, and research challenges. IEEE Access. 2021;9:25344-25359. DOI: 10.1109/ACCESS.2021.3057766

[57] Aboelwafa MMN, Seddik KG, Eldefrawy MH, Gadallah Y, Gidlund M. A machine-learning-based technique for false data injection attacks detection in industrial IoT. IEEE Internet of Things Journal. 2020;7(9). Available from: https://ieeexplore-ieee-org.masader.idm.oclc.org/document/9084134/ [Accessed: February 11, 2022]

[58] Hassan MM, Gumaei A, Huda S, Almogren A. Increasing the trustworthiness in the industrial IoT