Article
A Multi-Robot-Based Architecture and a Trust Model for
Intelligent Fault Management and Control Systems
Atef Gharbi 1,2 and Saleh M. Altowaijri 1, *
1 Department of Information Systems, Faculty of Computing and Information Technology, Northern Border
University, Rafha 91911, Saudi Arabia
2 LISI Laboratory, National Institute of Applied Sciences and Technology (INSAT), University of Carthage,
Carthage 1054, Tunisia
* Correspondence: [email protected]
Abstract: One of the most important challenges in robotics is the development of a Multi-Robot-
based control system in which the robot can make intelligent decisions in a changing environment.
This paper proposes a robot-based control approach for dynamically managing robots in such a
widely distributed production system. A Multi-Robot-based control system architecture is presented,
and its main features are described. Such architecture facilitates the reconfiguration (either self-
reconfiguration ensured by the robot itself or distributed reconfiguration executed by the Multi-Robot-
based system). The distributed reconfiguration is facilitated through building a trust model that is
based on learning from past interactions between intelligent robots. The Multi-Robot-based control
system architecture also addresses other specific requirements for production systems, including
fault flexibility. Any out-of-control fault occurring in a production system results in the loss of
production time, resources, and money. In these cases, robot trust is critical for successful job
completion, especially when the work can only be accomplished by sharing knowledge and resources
among robots. This work introduces research on the construction of trust estimation models that
experimentally calculate and evaluate the trustworthiness of robots in a Multi-Robot system where the
robot can choose to cooperate and collaborate exclusively with other trustworthy robots. We compare
our proposed trust model with other models described in the literature in terms of performance based on four criteria, which are time steps analysis, RMSD evaluation, interaction analysis, and variation of total feedback. The contribution of the paper can be summarized as follows: (i) the Multi-Robot-based Control Architecture; (ii) how the control robot handles faults; and (iii) the trust model.

Keywords: multi-robot system; reconfigurable architecture; fault tolerant control; trust model; cloud and edge computing

Citation: Gharbi, A.; Altowaijri, S.M. A Multi-Robot-Based Architecture and a Trust Model for Intelligent Fault Management and Control Systems. Electronics 2023, 12, 3679. https://doi.org/10.3390/electronics12173679

Academic Editors: Alexander Gegov, Ashwin Ashok, Femi Isiaq, Raheleh Jafari and Kalin Penev
2. Related Works
SPORAS is a reputation mechanism proposed by [26] for a weakly connected envi-
ronment where agents have a common goal. With this methodology, the reputation value
is calculated by combining user opinions. Ratings are collected from the two agents involved in the most recent interaction. In addition, this model offers a recursive reputation scoring procedure that depends on previous reputation scores at a given point in time, where more recent ratings carry greater weight. However, this approach has two major shortcomings. Firstly, SPORAS only retains the most recent rating between any two users. Secondly, after each update, users with extremely high reputation scores experience much smaller score changes than users with low scores, so a reputation-based trust value adapts slowly to their recent behavior.
In [27], the authors presented a reliability–reputation model called TRR, which inte-
grates agent societies and evaluates agent reputation based on trustworthiness. In this
approach, an agent’s reputation is determined by ratings given by other agents who
have interacted with them before, as well as the trustworthiness of those rating agents.
Trusted agents receive higher scores, while less trusted agents have a lower impact on the
reputation score.
The REGRET model [28] derives ontological interactions from diverse sources. Conse-
quently, each element of the reputation score must be evaluated separately, considering
individual or societal dimensions. These reputation values are subsequently consolidated
to form the ontological reputation. An advantage of the REGRET model is that it calculates
reputation by considering the number of agents and the frequency of interactions among
rating agents.
Based on a review of the literature, each model used distinct variables to evaluate an
agent’s reputation. For instance, the TRR model emphasized the importance of considering
the reliability of rating agents, where a trustworthy evaluator’s presented value was
deemed accurate. In contrast, TRR assigned equal weight to all interactions, overlooking
the significance of recent contacts that reflect an agent’s recent behavior. On the other hand,
SPORAS incorporated the passage of time in reputation assessment and placed greater
emphasis on recent interactions. The REGRET model, however, pointed out that both TRR
and SPORAS overlooked the impact of the number of agents rating a single agent and
the frequency of interactions among rating agents. Consequently, each model prioritizes
different variables in the calculation of an agent’s reputation.
In [29], the authors created the trust score model, which calculates trustworthiness
in the form of a direct trust score using a combined logic and q-learning rule framework.
In [30], the authors examined the use of a trust framework in Multi-Robot soccer to see
how trust affects action choice. The choice of timing analysis to include in the confidence
assessment, particularly in real-world scenarios, is one of the most common errors found
in the trust assessment model literature [31]. A majority of trust evaluation techniques rely
on a statistical or heuristic approach to develop a trust evaluation algorithm, which may
not be appropriate for analyzing the behavior of complex agents.
In [32], the authors proposed a trust estimation model in which an agent’s trust in
MAS is experimentally evaluated. To estimate trust, the proposed approach uses temporal
difference learning, which involves the concept of Markov games and heuristics. In [33],
the authors developed a trust-based Multi-Robot control system to increase the efficiency of
robot collaboration through recognition and cooperation only with trusted robots. In [34],
the authors proposed a trust-based navigation control algorithm that determines a robot’s
trustworthiness based on its service and uses this information to determine the robot’s
waypoint to avoid deadlocks. In a mobile and collaborative smart factory, [35] proposed a
trust-based team-building paradigm for efficient automated guided vehicle (AGV) teams.
In [36], the authors’ approach is founded on introducing a basic trust model within a com-
petitive multi-agent system, assuming that an agent’s trust measure reflects their expertise,
under the condition that an ample number of competitive interactions are executed. In [37],
the authors proposed several agents possessing specific expertise, represented as a real
value ranging from 0 to 1. To simplify the simulation, each agent has only one type of
service. The agent is asked to provide a service of type T, where T represents one of the available service types.

The work described in this article is built on previous work reported in [27,36] by enhancing efficiency and offering an innovative model that addresses the problems of the previous model. The proposed trust model provides several contributions to reduce uncertainty and enhance performance, as described below:
- The proposed trust model and algorithm offer a viable approach to reducing uncertainty in heterogeneous robot networks.
- The trust model formalizes reputation in self-governing networks of robots.
- It demonstrates that reputation can be effectively utilized even in the absence of a central authority or external calibration.
- The simulation results showcase the model's essential features, including improved performance compared to the reference models.

3. The Multi-Robot-Based Control Architecture

Every Intelligent Robot existing in our Multi-Robot system has its specific Goal to accomplish and can produce a plan (which is constituted of tasks) related to this goal. This strategy permits the Intelligent Robot to choose a suitable plan of tasks to be performed.

In Figure 1, we represent a subset of the ontology modeling the Goal, Task Planner, and Fault Handling (only the concepts and roles related to behavior composition are shown). Each Intelligent Robot is assumed to have its own Goals. To ensure a defined Goal, a set of Plans can be proposed. A Plan is related to a set of Tasks. Each Task has some inputs that are Events and uses some Resources. An Intelligent Robot can subscribe to a specific Task. An Intelligent Robot can send or receive a message. The different classes of planning, perception, treating faults, object, space, context, and reconfiguration, as well as how these classes are related to one another, are referred to as semantics for robot knowledge.
Figure 1. A subset of the ontology modeling the Goal, Task Planner, Trust Model, and Fault Handling.
In Figure 1, the ontology classes and their hierarchies are drawn in rectangles, while
relationships between classes are connected with black solid arrows, where:
• Goal: represents the high-level objectives or desired outcomes that the intelligent robot
aims to achieve in the manufacturing system. Based on the state of the environment,
the intelligent robot can execute the goal with the acheiveGoal method or abort it with
cancelGoal method (in case of major modification on the environment). The intelligent
robot can check the state of execution through the getStateExecution method. As usual,
the isConsistentWith method permits checking the consistency between two or more Goals, and the getUtility method allows selecting the most convenient Goal.
• Task Planner: represents the sequences of actions and decisions designed to accomplish
specific goals. They provide a structured framework for the robot to follow to achieve
its objectives. Task plans outline the steps, dependencies, and priorities involved
in executing the required tasks. The intelligent robot arranges task sequences and
executes tasks following a specific plan that permits it to achieve a specific goal.
• Fault_Handler: The intelligent robot has to control the system and discover any
fault that can occur based on its symptoms to determine its type and save the occur-
rence time associated with this fault. To accomplish this, the intelligent robot uses
various sensors, monitoring devices, and data analysis techniques. It continuously
gathers data from the system, such as sensor readings, machine outputs, or envi-
ronmental parameters, and analyzes them to detect anomalies or deviations from
expected behavior.
• Trust_Model: used to establish and evaluate trust relationships among different entities
within the intelligent system. It defines the criteria and factors that contribute to
trustworthiness, such as reliability, credibility, past performance, or reputation. Each
robot assesses the trustworthiness of the other robot based on various factors such as
past interactions, observed behavior, and reliability.
• Reconfig_Controller: whenever a fault occurs, the intelligent robot has to decide the
right reconfiguration to ensure that the whole system is still working right.
• Task: a robot’s suitable action selection process.
• Event: the intelligent robot receives notifications about any event that occurs in the environment.
• Resource: each intelligent robot needs some resources to execute a task (for example,
workpiece).
• Sensor: a robot uses its sensors to perceive objects and models the world in which
it lives.
• Position: refers to a certain environmental circumstance that surrounds robots and can
reveal information about a robot’s suitable movement selection process. To support
such cognitive capacities, our model is provided by the context, which consists of
three classes of knowledge: Situation, TemporalPosition, SpatialPosition. The basic
knowledge that an intelligent robot must access for localization and navigation is
represented by context, temporal position, and spatial position. They represent a
perceived environment.
• Actuator: the flexible robot can execute the task using the actuator. For each actuator,
we must propose a behavior to judge the requests to it.
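A minimal sketch of how the Goal/Plan/Task part of this ontology could be encoded as plain Python classes; the class and method names follow Figure 1, but the fields, signatures, and the consistency check are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A Task has input Events and uses Resources."""
    name: str
    events: list = field(default_factory=list)
    resources: list = field(default_factory=list)

@dataclass
class Plan:
    """A Plan is related to a set of Tasks."""
    tasks: list = field(default_factory=list)

@dataclass
class Goal:
    """A Goal can be ensured by one of several candidate Plans."""
    name: str
    plans: list = field(default_factory=list)
    utility: float = 0.0

    def get_utility(self) -> float:
        # getUtility: used to select the most convenient Goal
        return self.utility

    def is_consistent_with(self, other: "Goal") -> bool:
        # isConsistentWith: placeholder consistency test (assumption)
        return self.name != other.name
```

Selecting the most convenient goal then reduces to `max(goals, key=Goal.get_utility)`.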
3.3. Model
In our scenario, we have a collection of R robots located in a two-dimensional square
grid environment. Each robot has four actions it can take, specifically moving up, down,
left, or right. The robots all start from the same location and are outfitted with a radio
transceiver that allows them to communicate with other robots within a certain range of
their current position.
The parameters used by our model are:
- R: set of robots
- r: a robot (r ∈ R)
- Ac_r: set of actions of a robot r, Ac_r = {up, left, down, right}
- G_r: set of goals to be achieved by a robot r
- Ac: set of joint actions of the robots in R, Ac = ×_{r∈R} Ac_r
- ac_i: the i-th joint action, ac_i ∈ Ac
- Ps_r: starting point of robot r
- Pi_r: location of robot r at time i
- Pg_r: goal position of robot r
- KindT: the kind of trustworthiness of a robot, which can be usually trustworthy, often trustworthy, often untrustworthy, or usually untrustworthy. KindT = {UT, OT, OU, UU}
- Prob_r: probability of lying assigned to robot r
- Th_r: threshold for a robot r to consider another robot as trustworthy (below this value, the other robot is considered untrustworthy)
The process aims to minimize coverage time and redundancy while ensuring that the
robots fully achieve their goals. The robots aim to determine the most efficient path from
the origin to the destination point while avoiding obstacles. This can be expressed as an
objective function:
min f(Pr_s, Pr_g) = d(Pr_s, Pr_1) + ∑_{i=1}^{n−1} d(Pr_i, Pr_{i+1}) + d(Pr_n, Pr_g)    (1)
The fitness function, denoted by Equation (1), evaluates the solution domain, where Pr_s represents the starting point, Pr_g is the goal point, Pr_i is the i-th point that the robot moves to, and d(Pr_i, Pr_{i+1}) represents the Euclidean distance between Pr_i and Pr_{i+1}.
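Equation (1) is straightforward to evaluate; the sketch below assumes points are (x, y) tuples and uses the Euclidean distance named in the text (function names are ours).

```python
import math

def dist(a, b):
    """Euclidean distance d(., .) between two points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def path_cost(start, waypoints, goal):
    """Fitness of Equation (1): the total length of the path
    start -> waypoint_1 -> ... -> waypoint_n -> goal."""
    pts = [start] + list(waypoints) + [goal]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
```

For example, `path_cost((0, 0), [(0, 3)], (4, 3))` evaluates to 7.0.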
Figure 2. Fault Tree.

Table 1. The typical faults related to Figure 2.

Event                 Meaning
T                     Fault alarm
M1                    Actuator fault
M2                    Behavior fault
M3                    Sensor fault
A1                    Communication fault
A2 (resp. A3)         Actuator A2 (resp. A3) is broken
A4 (resp. A5, A6)     Behavior A4 (resp. A5, A6) is incorrect
A7 (resp. A8, A9)     Sensor A7 (resp. A8, A9) is broken
➢ We consider the following faults that concern the behavior of an Intelligent Robot:
• Action fault: the Intelligent Robot does not execute the action well. In this case, the robot's belief about its ability to perform the action correctly might be misaligned with reality, leading to faulty behavior.
• Plan fault: the Intelligent Robot generates a plan that does not reach the goal.
• Unexpected condition: the Intelligent Robot faces an abnormal condition. In such cases, the robot's beliefs about its environment might not align with the actual conditions, leading to unanticipated behavior.
NB: The process of deciding on faulty intelligent robots is beyond the scope of this paper.
➢ The different faults that concern the actuator:
• Blocked off: the actuator does not execute any request.
• Blocked on: the actuator is always activated even without a request.
In these cases, the robot's belief about the actuator's state might not match the actual state, leading to incorrect actions or a lack of actions.
4.2. Faults Handling Using Self-Reconfiguration

For each kind of fault, we save data about the fault regarding the fault type, the occurrence time, and the treatment time. Therefore, we have a queue denoted by Queue_Agent to save the faults affecting the behavior, the perception, and the execution of Intelligent Robots (where N_A represents the number of all faults saved in Queue_Agent).

The various steps involved in the robot's self-reconfiguration for fault handling are illustrated in Figure 3.
1: Fault message()
2: Requires recovery()
3: Requires reconfiguration()
5: Reconfiguration()
7: Fault handled()
8: Fault recovered()
Figure 3. UML sequence diagram for fault handling.
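The fault records and the Queue_Agent described in Section 4.2 can be sketched as follows; the stored fields (fault type, occurrence time, treatment time) come from the text, while the class and attribute names are illustrative assumptions.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaultRecord:
    fault_type: str                      # e.g. "action", "plan", "blocked_on", "blocked_off"
    occurred_at: float                   # occurrence time
    treated_at: Optional[float] = None   # treatment time, filled once the fault is handled

class FaultQueue:
    """Queue_Agent: faults affecting the behavior, perception, and execution."""

    def __init__(self):
        self._queue = deque()

    def record(self, fault: FaultRecord) -> None:
        self._queue.append(fault)

    @property
    def n_a(self) -> int:
        # N_A: the number of all faults saved in Queue_Agent
        return len(self._queue)
```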
4.3. Order of Reconfigurability

The reconfigurability level (RL) is deployed to represent the different kinds of reconfigurable layers in an automated, self-configured Multi-Robot-based control system. RL can be used as a reconfiguration complexity indicator. A higher RL leads to more reconfigurability, which costs more in terms of the time and resources required for the reconfiguration.

A Multi-Robot-based control system can be reconfigured at three levels:

RL1: At the task level, the Multi-Robot-based control system can only change the task due to a simple fault. For example, instead of moving right, move left due to an obstacle.

RL2: At the plan level, the Multi-Robot-based control system includes all the reconfigurations of the first level, plus reconfiguring the Task planner, which means it is possible to change the planning for a specific fault occurring.

RL3: At the goal level, the Multi-Robot-based control system further changes the goal. This is accomplished only if the goal cannot be achieved due to some circumstances (some faults happen and do not allow the goal to be ensured).
Running Example. The Multi-Robot-based control system considers many scenarios in case faults happen to physical components such as sensors, actuators, or machines (Figure 4). In the first level, we consider three goals (Goal 1, Goal 2, and Goal 3). In the second level, there are some task planners related to each goal. Let us say three task planners (TaskPlanner1, TaskPlanner2, and TaskPlanner3). In the third level, there are some tasks defined for each task planner. We consider, for example, five tasks (Task1, ..., Task5).
Figure 4. Different configurations applied by the intelligent robot.
(ii) nPlan i the number of possible plans related to a specific goal Ri , and let Ri,j (where
i ∈ [1, nGoal ] and j ∈ [1, nPlan i ]) be a state of Ri representing a particular Plan.
(iii) nTask i,j the number of all possible tasks related to Ri,j (i denotes a specific goal and
j represents a specific plan).
We denote Ri,j,k where i represents the goal, j represents the Plan, and k represents the
Task (i ∈ [1, nGoal ], j ∈ [1, nPlan i ] and k ∈ [1, nTask i j ]). If we move from Ri,j,k to Ri,j,l that
means the reconfiguration concerns only the task level. If we move from Ri,m,k to Ri,n,p that
means the reconfiguration concerns two levels, meaning the plan level as well as the task
level. If we move from Ri,m,k to Rj,n,p that means the reconfiguration is applied on three
levels (goal, plan, and task).
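Under this notation, the reconfigurability level needed to move between two states can be read off by comparing the (goal, plan, task) indices; a minimal sketch (the function name is ours):

```python
def reconfiguration_level(src, dst):
    """Return the RL of moving between two states R_{i,j,k},
    each given as a (goal, plan, task) index triple."""
    goal_a, plan_a, task_a = src
    goal_b, plan_b, task_b = dst
    if goal_a != goal_b:
        return 3        # RL3: goal, plan, and task levels all affected
    if plan_a != plan_b:
        return 2        # RL2: plan level (and possibly the task level)
    if task_a != task_b:
        return 1        # RL1: task level only
    return 0            # no reconfiguration needed
```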
Running Example
In this running example, we consider three robots RB1, RB2, and RB3. RB1 is desig-
nated as the leader but has a sensor flaw that causes it to misclassify zones as blue. The
environment used was a 6 × 12 grid cell world where every cell measured 20 cm × 20 cm.
The grid cell was partitioned into a blue zone and a green zone, with the blue zone rep-
resenting the objective locations that the robots needed to find, while the green zone was
intended to mistake the agents. The robots were aware of the positions of the zones in
the grid cell but not the color distribution, and they were required to determine the blue
zones’ positions.
The general goal involves robots finding blue zones as targets within a resource
constraint of ten locations. It follows a game of tag format, with RB1 as the leader and
each subsequent robot verifying information and updating the trustworthiness estimates
Electronics 2023, 12, 3679 12 of 21
of the previous robot. RB1 has a sensor flaw programmed into it, and RB2 and RB3 must
evaluate RB1’s trustworthiness to determine if it should continue to lead. RB1 sends
location information to RB2, which verifies the information and updates its trustworthiness
estimate of RB1. In this case, two scenarios are possible: the first one, where robot RB1
sends the correct information to robot RB2, and the second scenario where robot RB1 sends
the wrong information to robot RB2. If robot RB2 relies on the information provided by
RB1 and goes to the target location, it might end up finding nothing there. This results in a
waste of time and resources. Therefore, the robots need to establish a level of trust with
each other to improve the decision-making process.
RB2 then sends the information to RB3, which investigates and assesses the credibility
of both RB1 and RB2. After verifying RB1’s misclassification, RB2 moves on to explore the
closest target area that RB1 has not yet explored. RB3 reached the same conclusion as RB2 that RB1 cannot be trusted after a certain number of mistakes. Once RB1 was shown to be unreliable, RB2 and RB3 excluded it and dynamically rearranged the exploration teams.
Trust_i(P, Q) = α ∗ Trust_{i−1}(P, Q) + (1 − α) ∗ (∑_{j=1}^{N} FB_j) / N    (2)
where Trusti−1 (P, Q) is the previous value of trust assigned by P to Q, and α is a real
value in the range [0, 1] reflecting the significance given to previous reliability evaluations
concerning the current evaluation. α indicates the weight of the past evaluations when
updating the trust (it indicates the significance given to the past concerning the current
moment). N is used to denote the number of interactions or requests that have occurred
between robot P and robot Q over time, and it helps contextualize the trust-building process
between the two robots. If it is the first interaction of P with Q, no previous evaluation
may be used for updating the trust. The feedback includes an assessment of the response’s
quality, which informs P about the quality of the contributions made by the robots who
were contacted to generate the response.
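Equation (2) translates directly into code; α and the feedback values FB_j are as defined above, and the guard for a first interaction (no previous evaluation to draw on) follows the text. The names and the default α are our assumptions.

```python
def update_trust(prev_trust, feedbacks, alpha=0.5):
    """Equation (2): Trust_i(P,Q) = alpha * Trust_{i-1}(P,Q)
    + (1 - alpha) * mean of the N feedback values."""
    if not feedbacks:
        # first interaction of P with Q: no evidence to update with
        return prev_trust
    mean_fb = sum(feedbacks) / len(feedbacks)
    return alpha * prev_trust + (1 - alpha) * mean_fb
```

With α close to 1 the robot weights its memory heavily; with α close to 0 it follows the latest feedback.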
The mean of all the trust values Trust(C, Q) that every robot C (different from P and Q) assigns to Q is used to calculate the reputation Reputation(P, Q) that a robot P attributes to another robot Q. In other words, we suppose that the trust that C has in Q is reflected
by the recommendation that each robot C makes to robot P regarding robot Q. The trust
measure Trust(P, C) is used to weight this suggestion from C, taking into account P's
trust in C. In the present stage, the intelligent robot P receives some recommendations from
the other robots in response to past recommendation requests.
The function Reputation(P, Q) is calculated as follows:
Reputation(P, Q) = [ ∑_{C≠P,Q} Trust(P, C) ∗ Trust(C, Q) ] / [ ∑_{C≠P,Q} Trust(P, C) ]    (3)
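A direct rendering of Equation (3), assuming the pairwise trust values are kept in a dictionary keyed by (truster, trustee) pairs; the function and variable names are ours.

```python
def reputation(p, q, robots, trust):
    """Equation (3): trust-weighted mean of Trust(C, Q) over all C != P, Q.
    `trust` maps (truster, trustee) pairs to values in [0, 1]."""
    raters = [c for c in robots if c not in (p, q)]
    num = sum(trust[(p, c)] * trust[(c, q)] for c in raters)
    den = sum(trust[(p, c)] for c in raters)
    return num / den if den > 0 else 0.0
```

A recommender C that P trusts more contributes proportionally more of its own opinion Trust(C, Q).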
Step 2: Trust
We presume that each community robot can have its own reliability model, independent of the other robots, and we will not go into depth about this model here.
The authors suggest that the coefficient α is not constant, but rather depends on (i) the
number of encounters that P and Q have had in the past about the category’s services, and
(ii) P’s expertise in evaluating the category’s services. To make things easier, we will treat
the coefficient α as a constant.
Furthermore, when Equation (3) is taken into consideration, Formula (4) above becomes:
Trust_i(P, Q) = α ∗ Trust_{i−1}(P, Q) + (1 − α) ∗ [ ∑_{C≠P,Q} Trust(P, C) ∗ Trust(C, Q) ] / [ ∑_{C≠P,Q} Trust(P, C) ]    (5)
Figure 5. Different reconfigurations applied by Q to robot P.
Step 3: Trust(P, Q): when P calculates the whole trust score to assign to Q, it considers both the contributions of reconfiguration reliability RecfgR(P, Q) and the previous reputation Trust_{i−1}(P, Q). The percentage of importance given to the reconfiguration reliability is represented by the value (1 − α) ∗ RecfgR(P, Q).
α is a real value in the range [0, 1] that represents the relevance that P assigns to previous reliability evaluations relative to the existing evaluation. In other words, α permits the assessment of the value placed on memory (trust) in comparison to the present reconfiguration reliability RecfgR.
Moreover, we take into consideration Equation (6), and Formula (7) above becomes

Trust_i(P, Q) = α ∗ Trust_{i−1}(P, Q) + (1 − α) ∗ [ ∑_{rec∈Reconfig(P)} Trust(P, Q) ∗ FeedB(rec, P, Q) ] / |Reconfig(P, Q)|    (8)
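Equation (8) can be sketched the same way: the current trust value weights each piece of reconfiguration feedback, and the result is blended with the previous trust through α. The function and parameter names are our assumptions.

```python
def update_trust_with_reconfig(prev_trust, trust_pq, reconfig_feedbacks, alpha=0.5):
    """Equation (8): blend the previous trust Trust_{i-1}(P,Q) with the
    trust-weighted mean feedback over P's reconfigurations involving Q."""
    if not reconfig_feedbacks:
        return prev_trust
    weighted = sum(trust_pq * fb for fb in reconfig_feedbacks)
    return alpha * prev_trust + (1 - alpha) * weighted / len(reconfig_feedbacks)
```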
6. Experimental Results
We consider three robots called RB1, RB2, and RB3 that must explore space and gather
a specific number of colored balls. The robots must swiftly find all the target-colored cells
that have been assigned to them. In particular, RB1 (resp. RB2, RB3) must gather ten
blue (resp. red, yellow) balls. The agents are aware of where the balls are. They must
scavenge the surroundings to locate their balls. A robot will inform the appropriate robot
if it comes across a ball that belongs to another robot. For instance, if RB1 discovers a
red ball belonging to RB2, then RB1 will inform RB2 about the location of the ball. The
given scenario describes a situation where robot RB1 can mistakenly provide inaccurate
information to robot RB2 while exploring an environment to collect specific-colored balls.
This could happen due to faulty or damaged sensors, high uncertainty, or a highly dynamic
environment that differs from initial observations.
The robots in the environment are working individually to find their cells, but they
can collaborate to speed up the process. This collaboration involves sharing information
about cells found by other robots. For instance, if Robot RB1 finds a cell that belongs to
Robot RB3, it will share the cell’s location information with Robot RB3. Robot RB3 will
then verify the cell color by exploring the location shared by Robot RB1. If the cell’s color
matches Robot RB3’s objectives, the interaction is successful.
At the beginning, each trustee robot is randomly assigned one of five profiles. The
robots explore the environment to achieve their objectives of finding their respective cells
and may work collaboratively to reduce the time it takes to complete their objectives. If a
robot shares correct information, the trust in that robot increases; if it repeatedly
shares incorrect information, the trust decreases over time. The generated trust values range
from 0.0 to 1.0, with 0.5 representing neutral trust. A decrease of up to 10% from the neutral
trust is considered an acceptable reduction in trust level before a robot’s trustworthiness is
questioned, and the minimum trust value is set to 0.2 for the simulation.
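The trust bounds described above can be sketched as a few constants and helpers; the names, and the reading of the 10% acceptable reduction as an absolute threshold of 0.45, are assumptions:

```python
NEUTRAL_TRUST = 0.5
MIN_TRUST = 0.2       # simulation floor for trust values
MAX_TRUST = 1.0
ACCEPTABLE_DROP = 0.10  # 10% below neutral before trust is questioned

def clamp_trust(value):
    """Keep a trust value inside the simulation bounds [0.2, 1.0]."""
    return max(MIN_TRUST, min(MAX_TRUST, value))

def is_questionable(trust):
    """A robot's trustworthiness is questioned once its trust drops more
    than 10% below the neutral level of 0.5 (i.e., below 0.45)."""
    return trust < NEUTRAL_TRUST * (1 - ACCEPTABLE_DROP)
```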
Since cooperation and communication are allowed, if one robot comes across a colored
cell that belongs to another robot, the location data can be communicated. However, there
is a chance that a sensor error will cause incorrect cell location information to be passed
to the other robot. The robots are unaware of the possibility of sensor malfunction, which
is modeled probabilistically. The estimated trust levels for each robot range from 0.2
to 1.0, with higher values indicating greater trustworthiness. The objective of the robots
is to reach their goals within a limited number of steps while avoiding collisions. In case of
a collision, the robots are returned to their previous positions. The game ends after all
robots have reached their goals. Reaching a goal yields a reward of 100, while a collision results
in a penalty of −10. In all other cases, the reward is set at −1 to incentivize taking
fewer steps.
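The reward scheme above can be captured in a small helper; the function name and the boolean flags are illustrative assumptions:

```python
def step_reward(reached_goal, collided):
    """Reward scheme used in the simulation: +100 for reaching the goal,
    -10 for a collision, and -1 otherwise to incentivize shorter paths."""
    if reached_goal:
        return 100
    if collided:
        return -10
    return -1
```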
The effectiveness of the established trust model tReconf is assessed by comparing its
performance to two models previously discussed in the literature, namely tFeedB [36] and
TRR [27]. These three models were selected for the comparative study because they employ
similar principles in evaluating trustworthiness.
The simulation involves a set of trustor and trustee robots that engage in interactions
for a duration of 85 rounds. Each robot in the simulation is programmed with a probability
of experiencing sensor malfunction, which causes it to incorrectly identify the color of
the grid cell. This probability of malfunction can be associated with the probability of
lying. Table 2 presents the probabilities of the sensor malfunction occurring, which were
determined using arithmetic reasoning based on three different levels of robot credibility:
fully trustworthy, fully untrustworthy, and partially trustworthy.
Table 2. The probability of a sensor’s failure to correctly identify the color of a cell.

Robot    Sensor Malfunction Probability    Trustworthiness
RB1      0.1                               0.9
RB2      0.4                               0.6
RB3      0                                 1
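The malfunction probabilities in Table 2 can be used to sample a possibly incorrect color report, as in this sketch (the color set, function name, and wrong-color selection are assumptions, not the paper's implementation):

```python
import random

# Sensor malfunction probabilities from Table 2: the probability that a
# robot misidentifies the color of a cell.
MALFUNCTION_PROB = {"RB1": 0.1, "RB2": 0.4, "RB3": 0.0}

def report_color(robot, true_color, colors=("blue", "red", "yellow"),
                 rng=random):
    """With the robot's malfunction probability, report a wrong color;
    otherwise report the true one."""
    if rng.random() < MALFUNCTION_PROB[robot]:
        wrong = [c for c in colors if c != true_color]
        return rng.choice(wrong)
    return true_color
```

RB3, with probability 0, always reports correctly; RB2 misreports roughly 40% of the time.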
In the following section, we compare tReconf with tFeedB [36] and TRR [27] based
on four criteria, which are time steps analysis, RMSD evaluation, interaction analysis, and
variation of total feedback.
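The RMSD criterion measures how closely a model's estimated trust values track the robots' actual trustworthiness; a minimal sketch:

```python
import math

def rmsd(estimated, actual):
    """Root-mean-square deviation between the trust values estimated by a
    model and the robots' actual trustworthiness (lower is better)."""
    assert len(estimated) == len(actual) and estimated
    return math.sqrt(sum((e - a) ** 2 for e, a in zip(estimated, actual))
                     / len(estimated))
```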
[Figure: time steps (y-axis, 0–12,000) versus trial number (x-axis, 5–85) for the compared models.]
Figure 6. Time steps criterion comparison between models based on sensor malfunction probability.
Table 3. Comparison between tReconfig, tFeedB, and TRR trust models based on RMSD values.
In comparison to the two models from the literature, the results in Table 3 demonstrate that
our method tReconf has the lowest RMSD values, indicating superior accuracy in trustworthiness
estimation. In contrast, tFeedB [36] performs worst when evaluating entirely
trustworthy robots.
Table 4. Comparison between tReconfig, tFeedB, and TRR trust models based on the average number
of interactions.
- The tReconf model operates as anticipated, with more interactions being seen with
the RB1 and RB3 trustworthy robots and fewer interactions being seen with the RB2
robot. Since RB3 is more reliable than RB1, there are more interactions with RB3.
- The TRR model also functions as expected, exhibiting a higher number of interactions
with the trustworthy robots RB1 and RB3, while fewer interactions are observed with
the RB2 robot.
- With the tFeedB model, however, there are fewer interactions with RB1 and RB3 than
anticipated because trustworthiness cannot be reliably determined by tFeedB.
[Figure: total feedback (y-axis) versus percentage of unreliable agents (x-axis, 10–90%).]
Figure 7. Variation of total feedback versus percentage of unreliable robots P at a population size of N = 150.
The results of tFeedB, TRR, and tReconf versus the percentage of unreliable robots P are
shown in Figure 7.
Based on Figure 7, we conclude that the tFeedB approach reaches its maximum total feedback
for P = 20%, and the performance of tFeedB for other P values drops. The reasons for
this are (i) the tFeedB robot is not able to correctly distinguish unreliable robots, and (ii)
it suffers unnecessary costs while asking for recommendations when the population is
reliable (P < 50%). In contrast to the tFeedB approach, TRR gradually learns to distinguish
trusted robots, which reduces the cost of referrals. Furthermore, reliability in TRR is
determined by the number of interactions between the trustor and the trustee.
Moreover, Figure 7 shows that the performance of TRR is not significantly affected by
the presence of unreliable robots. The TRR approach is a good solution as it provides good
results in all cases (when all robots are reliable as well as when most of them are
unreliable). This can be explained by the fact that TRR is based on the two parameters
of reliability and reputation. Whenever the reputation is weak, the choice is based on
reliability; whenever reliability is reduced, the decision is based on reputation. Another
advantage of this approach is that the reputation is calculated over the whole community,
not only restricted between two robots. The major disadvantage of TRR is that it does not
take into consideration the quality of the solution ensured by the chosen robot Q, as our
solution tReconf does.
If we consider the tReconf approach, it starts with acceptable results and gradually
surpasses the other solutions, especially when the percentage of unreliable robots is high
(more than 50%). In our approach tReconf, we give more importance to the past interactions
between robots P and Q as well as the trust that robot P has in Q. That is why, even when
most robots are unreliable, the choice made by robot P is based on its own experience, which
means that P chooses the robot that previously interacted with it most successfully (by
calculating the updated feedback that P has on Q). Therefore, the solution is not affected by
a high percentage of unreliable robots in this case. The only disadvantage of our approach
tReconf is that robot P does not contact all the other robots to decide on the best robot
to ensure the reconfiguration. In some cases, this is not sufficient, and robot P needs to
consider the reputation of robot Q in the whole community, not only as calculated by
robot P itself.
7. Conclusions
In this paper, we consider treating faults by endogenous Multi-Robots in a distributed
and open framework with online task planning. The essential objective of this research
is to build a decision-making framework for a Multi-Robot system that is established
on the concept of trust, where robots coordinate and collaborate only with other
reliable robots. Because of the faults that may occur, trust is a key factor in any
collaboration, particularly in the highly dynamic and unpredictable environments in which
robots are expected to work. We present our contribution, a trust model
that discovers and evaluates the reliability of robots in a Multi-Robot system, where a
robot can choose to cooperate and team up exclusively with other dependable robots.
Our main contributions comprise defining (i) the Multi-Robot-based Control Architecture
by presenting in detail the main components and methods. Such architecture facilitates
the reconfiguration (either self-reconfiguration ensured by the robot itself or distributed
reconfiguration executed by the Multi-Robot-based system). For this first contribution, we
use the finite state machine to represent the architecture. (ii) The Multi-Robot-based control
system architecture also addresses other specific requirements for production systems,
including fault flexibility. (iii) Trust Model: The distributed reconfiguration is facilitated
through building a trust model tReconf that is based on learning from past interactions
between intelligent robots. It should be noted that this paper focuses on proposing a new
trust model tReconf and discussing its potential benefits rather than providing specific
formulas or algorithms for calculating trust. We compare tReconf with tFeedB [36] and
TRR [27] based on four criteria, which are time steps analysis, RMSD evaluation, interaction
analysis, and variation of total feedback. Our proposed model outperformed the trust
models described in the literature in terms of performance.
Our future work will include the following. Our methodology can be expanded to include
human-computer interaction. Our Multi-Robot-based control system can be improved
to allow robots to participate in multiple collaborations at the same time. This work is
part of our work on cloud, fog, and edge computing (see, for example, [47]). The problem
considered in this paper is a typical candidate for application of cloud, fog, and edge
computing because the robots need to maintain the knowledge about the other robots,
the environment, the knowledge about trust, etc., and make decisions in real-time. This
requires maintaining global and local knowledge and decision-making with trade-offs
between the decision time and accuracy. Future research will consider these aspects of
robot-based intelligent control systems.
Author Contributions: Conceptualization, A.G. and S.M.A.; formal analysis, A.G. and S.M.A.;
validation, A.G.; writing—original draft preparation, A.G. and S.M.A.; writing—review and editing,
S.M.A. and A.G.; supervision, S.M.A. and A.G.; project administration, S.M.A. and A.G. All authors
have read and agreed to the published version of the manuscript.
Funding: The authors gratefully acknowledge the approval and the support of this research study
by the grant No NBU-FFR-2023-0086, from the Deanship of Scientific Research at Northern Border
University, Arar, Kingdom of Saudi Arabia.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Wu, J.; Jin, Z.; Liu, A.; Yu, L.; Yang, F. A survey of learning-based control of robotic visual servoing systems. J. Frankl. Inst. 2022,
359, 556–577. [CrossRef]
2. Di Lillo, P.; Pierri, F.; Antonelli, G.; Caccavale, F.; Ollero, A. A framework for set-based kinematic control of multi-robot systems.
Control Eng. Pract. 2021, 106, 104669. [CrossRef]
3. Peng, D.; Smith, W.A.; Randall, R.B.; Peng, Z. Use of mesh phasing to locate faulty planet gears. Mech. Syst. Signal Process. 2019,
116, 12–24. [CrossRef]
4. Dai, Y.; Qiu, Y.; Feng, Z. Research on faulty antibody library of dynamic artificial immune system for fault diagnosis of chemical
process. Comput. Aided Chem. Eng. 2018, 44, 493–498.
5. Li, H.; Xiao, D.Y. Fault diagnosis using pattern classification based on one-dimensional adaptive rank-order morphological filter.
J. Process Control 2012, 22, 436–449. [CrossRef]
6. Jia, X.; Jin, C.; Buzza, M.; Di, Y.; Siegel, D.; Lee, J. A deviation based assessment methodology for multiple machine health patterns
classification and fault detection. Mech. Syst. Signal Process. 2018, 99, 244–261. [CrossRef]
7. Dhimish, M.; Holmes, V.; Mehrdadi, B.; Dales, M. Comparing Mamdani Sugeno fuzzy logic and RBF ANN network for PV fault
detection. Renew. Energy 2018, 117, 257–274.
8. Calderon-Mendoza, E.; Schweitzer, P.; Weber, S. Kalman filter and a fuzzy logic processor for series arcing fault detection in a
home electrical network. Int. J. Electr. Power Energy Syst. 2019, 107, 251–263. [CrossRef]
9. Sarazin, A.; Bascans, J.; Sciau, J.B.; Song, J.; Supiot, B.; Montarnal, A.; Lorca, X.; Truptil, S. Expert system dedicated to condition-
based maintenance based on a knowledge graph approach: Application to an aeronautic system. Expert Syst. Appl. 2021, 186,
115767. [CrossRef]
10. Bartmeyer, P.M.; Oliveira, L.T.; Leão, A.A.S.; Toledo, F.M.B. An expert system to react to defective areas in nesting problems.
Expert Syst. Appl. 2022, 209, 118207. [CrossRef]
11. Han, T.; Jiang, D.; Sun, Y.; Wang, N.; Yang, Y. Intelligent fault diagnosis method for rotating machinery via dictionary learning
and sparse representation-based classification. Measurement 2018, 118, 181–193. [CrossRef]
12. Jia, F.; Lei, Y.; Lin, J.; Zhou, X.; Lu, N. Deep neural networks: A promising tool for fault characteristic mining and intelligent
diagnosis of rotating machinery with massive data. Mech. Syst. Signal Process. 2016, 72, 303–315. [CrossRef]
13. Arrichiello, F.; Marino, A.; Pierri, F. A decentralized fault tolerant control strategy for multi-robot systems. IFAC Proc. 2014, 47,
6642–6647. [CrossRef]
14. Frommknecht, A.; Kuehnle, J.; Effenberger, I.; Pidan, S. Multi-sensor measurement system for robotic drilling. Robot. Comput.
-Integr. Manuf. 2017, 47, 4–10. [CrossRef]
15. Cai, Y.; Choi, S.H. Deposition group-based toolpath planning for additive manufacturing with multiple robotic actuators. Procedia
Manuf. 2019, 34, 584–593. [CrossRef]
16. Byeon, S.; Mok, S.H.; Woo, H.; Bang, H. Sensor-fault tolerant attitude determination using two-stage estimator. Adv. Space Res.
2019, 63, 3632–3645. [CrossRef]
17. Lei, R.H.; Chen, L. Adaptive fault-tolerant control based on boundary estimation for space robot under joint actuator faults and
uncertain parameters. Def. Technol. 2019, 15, 964–971. [CrossRef]
18. Miller, O.G.; Gandhi, V. A survey of modern exogenous fault detection and diagnosis methods for swarm robotics. J. King Saud
Univ. Eng. Sci. 2021, 33, 43–53.
19. Glorieux, E.; Riazi, S.; Lennartson, B. Productivity/energy optimisation of trajectories and coordination for cyclic multi-robot
systems. Robot. Comput.-Integr. Manuf. 2018, 49, 152–161. [CrossRef]
20. Jin, L.; Li, S.; La, H.M.; Zhang, X.; Hu, B. Dynamic task allocation in multi-robot coordination for moving target tracking: A
distributed approach. Automatica 2019, 100, 75–81. [CrossRef]
21. Shen, H.; Pan, L.; Qian, J. Research on large-scale additive manufacturing based on multi-robot collaboration technology. Addit.
Manuf. 2019, 30, 100906. [CrossRef]
22. de Almeida, J.P.L.S.; Nakashima, R.T.; Neves, F., Jr.; de Arruda, L.V.R. Bio-inspired on-line path planner for cooperative exploration
of unknown environment by a Multi-Robot System. Robot. Auton. Syst. 2019, 112, 32–48. [CrossRef]
23. Katliar, M.; Olivari, M.; Drop, F.M.; Nooij, S.; Diehl, M.; Bülthoff, H.H. Offline motion simulation framework: Optimizing motion
simulator trajectories and parameters. Transp. Res. Part F Traffic Psychol. Behav. 2019, 66, 29–46. [CrossRef]
24. Ljasenko, S.; Ferreira, P.; Justham, L.; Lohse, N. Decentralised vs partially centralised self-organisation model for mobile robots in
large structure assembly. Comput. Ind. 2019, 104, 141–154. [CrossRef]
25. Leottau, D.L.; Ruiz-del-Solar, J.; Babuška, R. Decentralized reinforcement learning of robot behaviors. Artif. Intell. 2018, 256,
130–159. [CrossRef]
26. Zacharia, G. Collaborative Reputation Mechanisms for Online Communities. Master’s Thesis, Massachusetts Institute of
Technology, Cambridge, MA, USA, 1999.
27. Rosaci, D.; Sarné, G.M.; Garruzzo, S. Integrating trust measures in multiagent systems. Int. J. Intell. Syst. 2012, 27, 1–15. [CrossRef]
28. Sabater, J.; Sierra, C. REGRET: Reputation in gregarious societies. In Proceedings of the Fifth International Conference on Autonomous
Agents; Association for Computing Machinery: New York, NY, USA, 2001; pp. 194–195.
29. Aref, A.; Tran, T. Using fuzzy logic and Q-learning for trust modeling in multi-agent systems. In Proceedings of the 2014
Federated Conference on Computer Science and Information Systems, Warsaw, Poland, 7–10 September 2014; pp. 59–66.
30. Bianchi, R.; Lopez de Mantaras, R. Should I Trust My Teammates? An Experiment in Heuristic Multiagent Reinforcement
Learning. In Proceedings of the IJCAI’09, W12: Grand Challenges for Reasoning from Experiences, Los Angeles, CA, USA,
11 July 2009.
31. Das, R.; Dwivedi, M. Multi agent dynamic weight based cluster trust estimation for hierarchical wireless sensor networks.
Peer–Peer Netw. Appl. 2022, 15, 1505–1520. [CrossRef]
32. Rishwaraj, G.; Ponnambalam, S.G.; Loo, C.K. Heuristics-based trust estimation in multiagent systems using temporal difference
learning. IEEE Trans. Cybern. 2016, 47, 1925–1935. [CrossRef]
33. Rishwaraj, G.; Ponnambalam, S.G.; Kiong, L.C. An efficient trust estimation model for multi-agent systems using temporal
difference learning. Neural Comput. Appl. 2017, 28, 461–474. [CrossRef]
34. Rao, D.C.; Kabat, M.R. A trust based navigation control for multi-robot to avoid deadlock in a static environment using improved
Krill Herd. In Proceedings of the 2018 International Conference on Inventive Research in Computing Applications (ICIRCA),
Coimbatore, India, 11–12 July 2018; pp. 810–817.
35. Fortino, G.; Messina, F.; Rosaci, D.; Sarné, G.M.; Savaglio, C. A trust-based team formation framework for mobile intelligence in
smart factories. IEEE Trans. Ind. Inform. 2020, 16, 6133–6142. [CrossRef]
36. Buccafurri, F.; Comi, A.; Lax, G.; Rosaci, D. A trust-based approach to clustering agents on the basis of their expertise. In Agent
and Multi-Agent Systems: Technologies and Applications: Proceedings of the 8th International Conference KES-AMSTA 2014 Chania,
Greece, June 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 47–56.
37. Carbo, J.; Molina-Lopez, J.M. An extension of a fuzzy reputation agent trust model (AFRAS) in the ART testbed. Soft Comput.
2010, 14, 821–831. [CrossRef]
38. Khalastchi, E.; Kalech, M. Fault detection and diagnosis in multi-robot systems: A survey. Sensors 2019, 19, 4019. [CrossRef]
39. Kheirandish, M.; Yazdi, E.A.; Mohammadi, H.; Mohammadi, M. A fault-tolerant sensor fusion in mobile robots using multiple
model Kalman filters. Robot. Auton. Syst. 2023, 161, 104343. [CrossRef]
40. Al Hage, J.; El Najjar, M.E.; Pomorski, D. Multi-sensor fusion approach with fault detection and exclusion based on the Kullback–
Leibler Divergence: Application on collaborative multi-robot system. Inf. Fusion 2017, 37, 61–76. [CrossRef]
41. Ma, H.J.; Yang, G.H. Simultaneous fault diagnosis for robot manipulators with actuator and sensor faults. Inf. Sci. 2016, 366,
12–30. [CrossRef]
42. Zhang, F.; Wu, W.; Song, R.; Wang, C. Dynamic learning-based fault tolerant control for robotic manipulators with actuator faults.
J. Frankl. Inst. 2023, 360, 862–886. [CrossRef]
43. Crestani, D.; Godary-Dejean, K.; Lapierre, L. Enhancing fault tolerance of autonomous mobile robots. Robot. Auton. Syst. 2015, 68,
140–155. [CrossRef]
44. He, Y.; Yu, Z.; Li, J.; Ma, G.; Xu, Y. Fault correction of algorithm implementation for intelligentized robotic multipass welding
process based on finite state machines. Robot. Comput.-Integr. Manuf. 2019, 59, 28–35. [CrossRef]
45. Urrea, C.; Kern, J.; López, R. Fault-tolerant communication system based on convolutional code for the control of manipulator
robots. Control. Eng. Pract. 2020, 101, 104508. [CrossRef]
46. Khalili, M.; Zhang, X.; Gilson, M.A.; Cao, Y. Distributed fault-tolerant formation control of cooperative mobile robots. IFAC-
PapersOnLine 2018, 51, 459–464. [CrossRef]
47. Altowaijri, S.M. Workflow Scheduling and Offloading for Service-based Applications in Hybrid Fog-Cloud Computing. Int. J.
Adv. Comput. Sci. Appl. 2021, 12, 726–735. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.