Module-5 chapter 10

Chapter 10 discusses the interplay between artificial intelligence (AI) and natural intelligence (NI) in decision-making, emphasizing the importance of human context, ethics, and subjective factors in optimizing business processes. While AI enhances productivity through data analysis, it lacks the reasoning and emotional understanding inherent in human decision-making, making human oversight crucial. The chapter highlights the challenges of AI, including ethical considerations and the limitations of deep learning, advocating for a balanced approach that integrates human intuition and experience with AI capabilities.

Chapter 10

Natural intelligence and social aspects of AI-based decisions

THE “ARTIFICIAL” IN AI

Artificial intelligence (AI) imitates natural intelligence (NI); it is called “artificial” because it is not the same as NI. Humans are able to contextualize decisions, apply ethics and morals, and feel the joy of successful outcomes. These subjective factors make a huge difference in the outcomes. It is therefore important to explore the human aspect of AI-based decision-making in optimizing business processes. This chapter explores the “soft” aspects of business optimization.
AI optimizes business processes and increases productivity but leads
to social, ethical, and moral challenges in decision-making. The machine
learning (ML) algorithms mimic and augment human thinking processes
but they are unable to explain the reasons behind the insights generated.
The lack of reasoning or explainability is an important consideration in AI
adoption. Deep learning (DL), in particular, is multilayered and complex,
making it impossible to ascertain the reasons behind the insights generated.
Explanations, however, are important from both development and usage
viewpoints. Philosophically, Agrawal et al.1 ask “Is AI an existential threat
to humanity itself?” This question is asked across business, social, politi-
cal, and various other sectors. Long ago, Ada Lovelace2 clarified the impossibility of AI taking over humans entirely. The discussion in this chapter
underscores the importance of human inputs in decision-making because
AI does not have the same reasoning, contextualization, and sensitivity as
that in human decision-making.
In discussing AI impact beyond business, Agrawal et al.3 talk about three
tradeoffs: productivity versus distribution, innovation versus competition,
and performance versus privacy. In each of these three tradeoffs, there is a
substantial element of intelligence that is beyond AI. As a result, AI cannot
take over issues associated with ethics and morality that transcend the algo-
rithmic or legal frameworks. Neither can AI take over innovation, which remains in the human purview. Instead, AI is best used to support decisions in a balanced manner.

234 Artificial Intelligence for Business Optimization

AI supporting human decision-making is qualitatively different from AI taking over decision-making altogether. Whether it is a credit decision, a disease diagnosis, or a job candidacy interview in a human resources department, AI without any human intervention can be potentially catastrophic. The extent to which humanization of optimized business processes should occur is, however, a subjective decision. This is where the challenge of the soft factors in business comes into play in implementing business optimization (BO). Humans essentially provide the cognition in AI systems. The volume of data and the speed of computing are such that once the algorithms are coded, machines can execute and humans cannot keep up. This can lead to a loss of control over AI systems. AI-based systems should not make automated decisions in situations where human oversight of those decisions is important. Therefore, in this discussion on BO, NI is given substantial importance. Humanization is the act of balancing AI-based decision-making with NI.

Subjective customer thinking


Faster and more accurate decision-making leads to greater customer satis-
faction and therefore greater customer value. Optimizing the value based
on data analytics and predictions resulting from AI is, however, a subjective
process. Customers and other users (e.g., staff) are humans whose needs
and underlying context can change depending on myriad subjective factors.
AI cannot ascertain all those factors beforehand and, therefore, is unable
to sufficiently code the emotions, perceptions, impressions, and judgments
made by humans.
Customers may make their decisions based on available options and, perhaps, not always on rationality. AI algorithms are not designed to ascertain these subjective decisions and, therefore, need humanization. Humanization is the introduction of subjective elements into AI-based decision-making. This subjectivity is not limited to business decisions; it also extends to purchase and recommendation decisions by the customer. The subjective factors in providing customer value are:

• Is the decision providing something more worthwhile to the customer than the ability of the system to measure it?
• Is the decision ethically and morally appropriate to the given situation?
• Is the source of the data understood by the business? Are there pos-
sibilities of unethical sourcing of data and its use in analytics?
• What is the possibility of data bias? Is the input data skewed because
of previous results but the current context has changed?
• Is it possible to weed out data bias by the use of ML algorithms?
• Is the decision in accordance with the law of the land? And have the
legal considerations been included in the AI code?

• Is the decision right in terms of time and location? Timing corresponding to a situation can also be subjective (e.g., slow service is preferred in a fine-dining restaurant for an anniversary dinner, whereas fast service is expected in a fast-food one).
• Is the data and decision made with full respect to the security and
privacy of the customer?
• Is there a balance maintained between the corporate profit goals and
the customer’s well-being? Pursuing profit goals alone, no matter how
legal, can backfire in terms of customer sentiments.
• Is the AI a black box with no opportunity of seeing what is inside?
How is the situation of providing possible explanations addressed?

Most of the above evaluators of decisions are not quantifiable. These are subjective constraints leading to subjective customer value. Optimizing these decisions requires agility in decision-making. Agile characteristics in AI enable iterative and incremental decisions. These iterations facilitate the incorporation of consequences into subsequent decisions. ML has its inherent limitations, as it provides analytical results based on extensive correlations. Furthermore, the depth and complexity of AI-based systems result in them becoming a black box. Human experience, intuition, knowledge, and expertise judge whether the decision to be made is right, is in the interest of society, and is ethically and legally sound. These judgments are the input for the next iteration of decisions.

AI complements NI
AI is a tool for business. AI processes vast amounts of data using machine power, leading to an understanding of patterns in the data. AI does not reflect the empathy and understanding of humans. AI can only analyze that which can
be encoded. AI uses machines with learning capabilities which are them-
selves coded. The architecture of AI solutions has multiple layers of patterns
(DL) that attempt to replicate human thinking once a pathway into that
thinking is established. AI augments natural (human) intelligence, but does
not replace it. AI is a software tool whose limitations can be mitigated and
complemented by NI.
NI relates to adaptive learning by experience. NI has multiple layers
in its depths that can handle complex and delicate decision-making. DL
algorithms lack common sense. While AI can recognize patterns in vast
datasets, there is no understanding of the meaning behind the pattern. A
trend in weather (based on temperatures) and a corresponding trend in the
temperature of a factory furnace are both datasets for AI. The analytics
executed to make predictions are not interpreted by the system in their con-
text. Providing checks and balances in AI is crucial and one way of doing
that is to let ML algorithms identify multiple “what-if” scenarios that can
be made to reason with each other.
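The idea of letting ML generate multiple “what-if” scenarios that check each other can be sketched in a few lines. The scenario generator below perturbs a base input, collects the model’s prediction for every variant, and treats disagreement as a cue to defer to NI; the model, feature values, and function names are hypothetical illustrations, not part of the chapter.

```python
import itertools

def what_if_scenarios(base_inputs, deltas):
    """Generate 'what-if' variants of a base input by applying every
    combination of per-feature perturbations (a simple scenario grid)."""
    return [
        [x + c for x, c in zip(base_inputs, combo)]
        for combo in itertools.product(*[(0, d) for d in deltas])
    ]

def scenarios_agree(model, scenarios):
    """Check whether the model's prediction is stable across all scenarios;
    disagreement flags the decision for human (NI) review."""
    predictions = {model(s) for s in scenarios}
    return len(predictions) == 1, predictions

# Hypothetical demand model: 'high' if the feature sum crosses a threshold.
model = lambda s: "high" if sum(s) > 10 else "low"
scenarios = what_if_scenarios([4, 5], deltas=[1, 2])
stable, preds = scenarios_agree(model, scenarios)
print(stable, sorted(preds))  # → False ['high', 'low']: the scenarios disagree
```

When scenarios disagree like this, the decision is routed to a human rather than automated, which is the check-and-balance role the text assigns to NI.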

AI is useful when it leverages uniquely human skills rather than attempting to replace them. Human skills and values are brought to the fore when judiciously combined with AI. For example, AI does not replace creativity and leadership, which are still essential for business. Humans make decisions based on context. AI-based business processes need programming based on context. But the context keeps changing based on customer sentiments and needs. Therefore, NI and AI are both needed for a healthy business decision-making scenario.

KNOWN–UNKNOWN MATRIX FOR AI vs NI

DL mimics the human brain with its multiple levels and depths. As a result,
DL algorithms are able to recognize speech (e.g., “Hey Google! Hey Siri!”)
and images (e.g., face recognition to open a cellphone). NLP and DL handle
situations that are defined from the “known” aspect of business reality.
Creative thinking and problem-solving are essential human traits that
can be augmented by AI but cannot be replaced. It is the “unknown–
unknown” quadrant that is the most challenging to handle.
The following is an explanation of the known–unknown matrix shown in Figure 10.1.

Automation: Hard, mono-dimensional data


Successful automation happens only with simple and linear processes work-
ing on mono-dimensional data. Data is produced by human activities. The
IoT sensors capture data based on configurations by humans. AI algorithms
identify patterns in these datasets based on the instructions.

                          Artificial / System (AI)
                          Known                  Unknown
Natural /     Known       Automation             Experience
Human (NI)                (Hard, mono-           (Soft, inter-
                          dimensional data)      disciplinary)
              Unknown     Prediction             Intuition
                          (Fuzzy, multi-
                          dimensional data)

Figure 10.1 Known–unknown matrix for AI versus NI.

Machine-automated processes are isolated from the surroundings. ML algorithms
do not need to understand the surroundings, nor the context in which they
are operating. The automated processes faithfully operate as small compo-
nents or packages, without any knowledge or understanding of how their
functionality contributes to or supports the entire system or assembly. The
knowns in the human arena are taken over by AI to automate repeated and
monotonous tasks. Intelligence in machines reduces the onus of conducting
repetitive tasks. Chatbots, robots, and digital trains are AI-driven technologies that can undertake many routine tasks. Automation has a higher
speed of execution and accuracy. ML algorithms learn by storing the results
of their decisions and, when presented with the same input parameters,
arriving at the decisions much faster than humans. Nevertheless, these are
human-like decisions and not human decisions.
Manual and routine tasks that are well defined are subject to automation.
Automation needs humanization during execution. Optimization also needs
humanization which has to be incorporated during design. For example,
chatbots at various levels can answer queries on flights, play music, and
provide initial health diagnoses. Learning by machines results in time and
accuracy advantages. The design of the optimized processes incorporates
these AI advantages but keeps provisions for human inputs. Besides, the con-
sequences of decisions are evaluated by humans and fed back to the decision
engine to enhance its data and code to be able to handle a similar context.
AI can be confused by new experiences. Emotions that cannot be coded or
that change slightly from the ones that can be coded can throw an AI-based
system off balance. Furthermore, unstructured data (which is a character-
istic of Big Data) needs to be brought in some structured form before it can
be analyzed. Coding the human experience relieves humans from solving the same problems repeatedly, provided it is the same experience occurring again. While ML is meant to “learn” from experience, it is still the algorithm that provides the instructions to learn. Deciphering completely new experiences is outside the scope of AI.

Experience: Soft, inter-disciplinary


AI helps in identifying sales patterns and prioritizing the resources needed to bring about a transaction. Prioritization of effort based on the likelihood of a particular outcome reduces human effort dramatically. The decision-maker needs vigilance in using the analytics because the prioritization provided by AI is based on past data. If the data is skewed for any reason and the AI logic is not able to figure it out, then the entire identification of patterns and the ensuing recommendations could be analytically correct but realistically wrong. These kinds of situations need human experience.
Humans accumulate experience over time as they deal with a task. This experience helps them relate several disparate tasks, understand their context, and generate creative solutions to problems.

Prediction: Fuzzy, multi-dimensional data
Machines are speedier in crunching large quantities of data enabling them
to spot trends and make predictions. Machines take in multidimensional
data, run algorithms through a large number of cycles, and dig out pat-
terns in the data which are impossible for humans to identify. Extracting information and knowledge from vast multidimensional data with DL algorithms is beyond natural intelligence. ML (especially unsupervised) can
help businesses ask the right questions. When the context is stable, ML can
identify KPIs to help focus human decisions. But with changing context, NI
is invaluable in arriving at the right decisions.

Intuition
Intuition is an outstanding feature of humans. Intuition, which comes from
knowledge, long years of practice, and experience, is a crucial ingredient
for NI. Intuition leading to solving problems is subtle. It cannot be precisely defined and is often unknown even to humans themselves. People can come up
with completely new ideas and, at times, arrive at conclusions much faster
than machines. ML by its very definition cannot reason abstractly and
generalize. Physicians, artists, and musicians, for example, perform their
art intuitively. Business decision-making has to make provision for intu-
ition. Thus, the decisions can be initially made by NI and then scaled up
accurately by AI. Alternatively, AI suggests a decision that is ratified by NI
before it is scaled up. Constant cross-checking of the context is also man-
dated by respecting intuition in decision-making. The AI advantage is limited if it is not combined with NI. People add valuable insights to decisions.

ADDITIONAL CHALLENGES IN DECISION-MAKING

These additional challenges form the basis for the need to superimpose NI
over AI in decision-making. These challenges start with the DL architec-
ture, which is a part of AI. This is followed by ethical, legal, and user
experience challenges.

Deep learning (DL) challenges


DL, as discussed in earlier chapters, classifies data and identifies trends and patterns within that data. DL architecture (inputs, outputs,
nodes, and layers) is a neural network that reflects the human brain and
its multilayered decision-making capabilities. Similar to the brain, the DL
backpropagation4 algorithm assigns different weights to nodes in analyz-
ing speech, images, and translations. As a result, DL provides insights
beyond human capacity to enhance customer experience, speech and face
recognition, driving autonomous vehicles, computer vision, and so on.
DL’s advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. However, classification is not the same as human intuition, cognition, and contextualization.
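The weight-assignment role of backpropagation mentioned above can be illustrated with a toy network: the output error is pushed backwards through the layers, and every weight is nudged in proportion to its share of that error. This is a minimal sketch (the XOR dataset, network size, and learning rate are illustrative assumptions), not the architecture the chapter describes.

```python
import math
import random

random.seed(0)

# Toy dataset: XOR, the classic pattern a single linear layer cannot learn.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 8  # hidden units
W1 = [[random.gauss(0, 1) for _ in range(H)] for _ in range(2)]
B1 = [0.0] * H
W2 = [random.gauss(0, 1) for _ in range(H)]
B2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(x[0] * W1[0][j] + x[1] * W1[1][j] + B1[j]) for j in range(H)]
    out = sigmoid(sum(h[j] * W2[j] for j in range(H)) + B2)
    return h, out

LR = 0.5
for _ in range(10000):
    for x, target in DATA:
        h, out = forward(x)
        # Backpropagation: the output error flows backwards, and each
        # weight is adjusted in proportion to its contribution to it.
        d_out = (out - target) * out * (1 - out)
        for j in range(H):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])
            W2[j] -= LR * d_out * h[j]
            B1[j] -= LR * d_h
            W1[0][j] -= LR * d_h * x[0]
            W1[1][j] -= LR * d_h * x[1]
        B2 -= LR * d_out

preds = [forward(x)[1] for x, _ in DATA]
print([round(p, 2) for p in preds])  # predicted probability for each XOR input
```

With these settings the network typically learns the four cases, yet nothing in the trained weights explains *why* in human terms — the explainability gap the chapter keeps returning to.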
This inability of DL to contextualize presents interesting challenges.
“Teaching machines to use data to learn and behave intelligently raises a
number of difficult issues for society.”5 Since AI is more than automating
the existing tasks, there is storage of “learnings” from an experience of an
interaction with a customer or solving a business problem. Machines can
continue to incrementally learn to a level where the logic behind the learn-
ing becomes so deep as to be unexplainable. This is the situation where DL
needs NI input.
DL is considered resource hungry, unexplainable, and easily broken.6 This is so because DL needs huge training datasets that consume phenomenal resources, it is unexplainable due to its deep multilayers, and it breaks because it does not fully understand the context in which the decisions are being made. For example, “A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch.”7 It is not the lack of training and provisioning of depth that is the issue but the fact that DL (as a part of AI) cannot contextualize a situation which is continuously changing.

Ethical challenges of AI-based decisions


The aforementioned challenge of lack of contextualization leads to
situations where the ethics and morality of decisions come into play.
Straightforward predictions based on a clean set of data are most helpful
in decision-making. These are the type of decisions that are automated.
However, the uncertainty of the context in which the decisions are made
presents a risk to automating AI-based decisions. Fully automated decisions
which leave humans completely out of the loop and which are meant to
provide customer value are risky. Customers have a subjective interpreta-
tion of their needs and the changing nature of their values. Full automation
especially in customer-facing decisions may even be detrimental to business
goals and to society in general.

The ethical challenges in AI-based decisions arise because of potential biases. AI systems make decisions based on the data provided and the algorithms coded – both are subject to biases. While data is usually considered objective, it can still be biased since it incorporates the beliefs, purposes, biases, and pragmatics of those who designed the data collection systems.
Data is not a singular record but a collection of many records of observa-
tions. Therefore, the potential exists that the beliefs of the observers have
colored the meaning of the data.8 Sample bias, prejudicial bias, exclusion bias, measurement bias, noise bias, and accidental bias are examples of data-specific biases9 that influence the models built upon the data. Decisions to buy, sell, promote, and cut production are all sensitive to biases. Biases in the data and the opacity of the algorithms used to learn from the biased data are the central issues in AI and Big Data ethics.10
AI models can potentially code prejudices and beliefs. To find those biases
requires careful auditing of the models, which only an NI superimposed on
AI can handle (Figure 10.2). Appropriate checks and balances need to be
put in place to prevent misuse of decision-making systems that rely on ML.11

Legal issues in unexplained AI


A human understandable explanation for an AI decision is also imperative
from a legal perspective. Explainable AI provides a reason or justification
for the analytics generated. The need to demonstrate the reasons for the
analytical insights arises from the need to prove that the insights are not
violating the legal frameworks of the region. The analytics are based on the
relationship of data within the AI-based system. These systems are designed
and owned by the developers. The algorithms are coded to enable them to
traverse large datasets and establish correlations. There is no onus on the

­Figure 10.2 NI superimposing on the AI learning process in order to improve decision-making.



system to explain its decisions. An understanding of the data features and the high-level system architecture may still not be enough to explain or
justify a particular recommendation.
These legal situations can have serious repercussions on BO. Disparate
impact resulting from the decisions can lead to legal wrangling and court
suits. Agrawal et al. recommend examining the results from the analytics: “Do men get different results than women? Do Hispanics get different
results than others? What about the elderly or the disabled? Do these differ-
ent results limit their opportunities?”12
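The group-comparison questions in the quotation above lend themselves to a simple audit script: compute per-group selection rates from the system’s decisions and flag groups whose rate falls well below the best-off group’s. The data and the 0.8 cut-off (the “four-fifths” heuristic used in US employment auditing) are illustrative assumptions, not figures from the chapter.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection (approval) rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate -- a signal for human (NI) review, not a verdict."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit data: (group, loan approved?)
decisions = ([("men", True)] * 60 + [("men", False)] * 40
             + [("women", True)] * 40 + [("women", False)] * 60)
rates = selection_rates(decisions)
flags = disparate_impact_flags(rates)
print(rates)  # → {'men': 0.6, 'women': 0.4}
print(flags)  # → {'men': False, 'women': True}: 0.4 / 0.6 ≈ 0.67 < 0.8
```

A flag here does not prove illegality; it marks exactly the kind of disparate result that, as the text argues, needs NI to examine in context.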
Incorporating NI in decisions is a way to ameliorate the impact of legally
poor decisions.

Interfacing with humans


An important “soft” issue with AI-based systems is the way in which they interface with humans. Human–Computer Interface (HCI) is a discipline in its own right, encompassing interface designs, presentations, communication channels, and the growing expertise of the user. A static user interface will not “grow” with the expertise of the user and, therefore, may hinder her use of the system. Sight, sound, and touch are the basics of user interface design.
Interfacing with humans is an important element of successful BO.13 As
businesses relate to the users through multiple channels, the design of a
website or a mobile app that provides analytics to the users needs to incor-
porate user experience.
Understanding the purpose of the customer’s interaction with the busi-
ness, modeling the processes, and reviewing multiple aspects of a user’s
(and user group’s) relationship with a business help in improving the value
of AI to the customer. User experience design is a specialist business analy-
sis (BA) activity that makes provision for the incorporation of NI at all
levels of the customer’s interaction with the business.

SUPERIMPOSING NI ON AI

The nexus between NI and AI yields balanced intelligence in optimization. A judicious superimposition of NI on every stage of the AI/ML pipeline is imperative for value-based decisions. Figure 10.2 shows the AI pipeline with four phases: data collection, ML, prediction, and decision-making. The first three phases are relatively easy to automate based on current AI technologies, as they can be defined. The fourth phase is not easy to automate. The limitations of AI are handled by superimposing NI throughout the design, development, and implementation of the solution. The following outlines the role of NI in each phase:

• Data collection: choosing the right kind of data for a given ML prob-
lem and filtering the varied types of possible biases from the data
• ML: allocating the right kind of ML algorithm
• Prediction: opening the ML black box to explain causal relationships
among inputs and prediction
• Decision-making: fully engaging in decision-making

Quality decisions, which are also ethical decisions, include humans in the
decision-making loop. Humans are capable of considering the consequences
of decisions vis-à-vis their quality and ethical ramifications. NI provides
invaluable insights, after inspecting the consequences of decisions, by
considering ethics and values. These NI-based insights are superimposed
on the learning algorithm (as shown in Figure 10.2). The feedback loop
illustrated in Figure 10.2 then tweaks the historical data, learning model,
and new data to filter possible sources of error and bias and retrains the
model. The learning–correction–relearning cycle is repeated multiple times
to enable the system to continue to learn and improve its performance.
Eventually, after multiple iterations, the model shown in Figure 10.2
arrives at ethically sound decisions that produce adequate customer value.
The caveat to keep in mind is that in earlier iterations of this model, NI
makes the actual decision, whereas in later iterations, AI learns from NI
and stores those insights.
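The iterative superimposition described above can be sketched as a small human-in-the-loop wrapper: the model proposes a decision, a human ratifies or overrides low-confidence cases, and overrides are stored as feedback for the next retraining iteration. The class name, the placeholder scoring rule, and the 0.8 confidence threshold are hypothetical illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopPipeline:
    """Model proposes, NI disposes: low-confidence decisions are deferred
    to a human reviewer, and overrides are kept as retraining feedback."""
    threshold: float = 0.8                      # below this, defer to NI
    feedback: list = field(default_factory=list)

    def predict(self, features):
        # Placeholder model: a real system would call a trained ML model.
        score = sum(features) / len(features)
        decision = "approve" if score >= 0.5 else "reject"
        confidence = abs(score - 0.5) * 2       # 0 = unsure, 1 = certain
        return decision, confidence

    def decide(self, features, human_review):
        decision, confidence = self.predict(features)
        if confidence < self.threshold:         # low confidence: ask NI
            final = human_review(features, decision)
            if final != decision:               # record override as feedback
                self.feedback.append((features, final))
            return final
        return decision

# Usage: a cautious reviewer who rejects borderline approvals.
pipeline = HumanInTheLoopPipeline()
reviewer = lambda feats, proposed: "reject"
print(pipeline.decide([0.9, 0.9], reviewer))   # → approve (confident, automated)
print(pipeline.decide([0.55, 0.5], reviewer))  # → reject (borderline, NI decided)
print(len(pipeline.feedback))                  # → 1 override stored for retraining
```

In early iterations the human makes the actual call, as the text notes; over time the stored overrides become the data from which the model learns.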
At an organizational level, as business processes are reengineered, a suite
of principles related to ethics and morals can be adopted by the developers
of the solutions. Visibility of the solutions through a walkthrough of mod-
els explaining the decision-making process, safeguarding the ingestion of
data and its usage, and enabling judicious mixing of NI (humanization) in
the decision-making process can go a long way in building trust in AI-based
decision-making.
In Table 10.1, row 1 addresses the bias issue, rows 2 and 3 address the inexplainability of AI models, and rows 4–7 address the inability of AI models to make decisions that have subjective value.
Biological neural network models14 are discussed to help understand
autonomous adaptive intelligence.

AGILE ITERATIONS ENHANCE VALUES

Critical thinking and problem-solving with AI


Critical thinking and problem-solving are human traits that are supported by rich data and analytics. AI ranges from general-purpose analytics (e.g., historical, descriptive) to specific, fine-granular analytics (predictive). In each case, AI needs as well as supports critical thinking and problem-solving in business. AI can be used to simulate business scenarios. “Digital twins”

Table 10.1 AI limitations and NI superimposition over AI limitations for intelligent automation

1. Biases in AI models
   Description: AI models can only be as good as the data fed to them and the algorithms coded. Biases in models can crop up on the basis of observations and data, and on the basis of the developer's viewpoints.
   NI superimposition: NI challenges the data and algorithm biases, mainly because NI is not limited to data and algorithms.

2. Inexplainability of AI models
   Description: AI models are a "black box" into which a large amount of data is fed and out of which results come. Feedback in AI models is also made objective.
   NI superimposition: NI helps in understanding the underlying causes of decisions.

3. Complexity
   Description: AI models are extremely complex and difficult to troubleshoot.
   NI superimposition: NI brings in intuition, experience, expertise, and associated knowledge.

4. Performance-driven metrics
   Description: AI models base their successes on performance-driven metrics. This leads to ongoing optimization that may not care for value.
   NI superimposition: NI seizes the opportunity to vary the performance-driven metrics based on the needs of the time.

5. Ethics and morals not codable
   Description: AI models can only encode well-defined processes, and they can only analyze data that is available.
   NI superimposition: NI can superimpose ethical and moral values based on the context of the situation.

6. Values are context-driven
   Description: AI models can understand the context only to the extent that the context can be coded. If the change in context is not describable or visible to the AI models, that context is lost to the model.
   NI superimposition: NI is in a position to understand the context much better than AI, because NI is capable of absorbing contradictions and misalignments in values.

7. Sequential vs agile
   Description: AI models are sequential, moving from manual to automated to optimized processes.
   NI superimposition: NI, superimposed on AI, can make processes increasingly agile.

used in simulating dams, human bodies, and hurricanes can also be used to
simulate the trends and pathways of the business.
Critical thinking is undertaken in an iterative and incremental man-
ner within BO. Critical thinking approaches a problem in a disciplined
manner. Critical thinking starts by conceptualizing a problem, followed
by analyzing it. AI-based analytics are immensely helpful in the analysis
of the problem as they provide insights that complement NI. NI supports
critical thinking by enabling an understanding of the changing subject
matter or the context in which a problem is occurring. Both the problem

and the solution are subject to this changing context which AI may not be
able to decipher.
An important development to support critical thinking is the Hex-E protocol,15 which explores machine learning models in an iterative and incremental way. Superimposition of NI on Hex-E is facilitated by AI. For example, Hex-E facilitates a backpropagation algorithm through its automated protocols that can also be explained. The Hex-E protocol, with automated correlations, can enable and support creativity, help solve problems by recombining ideas, and develop fundamentally new interface primitives.

Decision–action–decision–feedback cycle
Agility in business decision-making is based on iterations and increments. NI plays an important role in these iterative and incremental decisions because it provides input on the consequences of the decisions.

Table 10.2 The decision–action–decision–feedback cycle for optimization of business processes with inputs from NI

Automate
   Design and develop: Create a model that replicates exactly what humans do, with no variation, and code that.
   Execute and make decisions: Let machines execute the algorithms with varying data. Machines can only vary the execution based on the parameters and the data input.
   Iterate after examining consequences: The slightest variation in input can potentially change the way the machine understands it and throw the results out of balance/into chaos.

Optimize
   Design and develop: Reengineer business processes by questioning each activity for its contribution to the overall goal of the process. Model and code with flexibility and encapsulation in mind.
   Execute and make decisions: Machines execute code only to the extent there is no unexpected variation in the input. Humans oversee and provide relevant inputs to the business process execution.
   Iterate after examining consequences: Agile characteristics of iterations and increments are incorporated; decisions and their consequences are evaluated based on human values (ethics, morality, and legality) and fed back into the system.
