JIMS-S-24-02257


Journal of Intelligent Manufacturing

A Comprehensive Study of Forecasting, Prevention, Mitigation Techniques, and Guidelines for Security Risks in Vulnerable AI Systems
--Manuscript Draft--

Manuscript Number:

Full Title: A Comprehensive Study of Forecasting, Prevention, Mitigation Techniques, and Guidelines for Security Risks in Vulnerable AI Systems

Article Type: Original Research

Keywords: Artificial Intelligence; AI security; malicious use of AI; AI threats; mitigating AI threats

Corresponding Author: J Boobalan
Kumaraguru College of Technology
INDIA

Corresponding Author Secondary Information:

Corresponding Author's Institution: Kumaraguru College of Technology

Corresponding Author's Secondary Institution:

First Author: J Boobalan

First Author Secondary Information:

Order of Authors: J Boobalan
Krithika R
Umamaheswari S
M Malleswaran

Order of Authors Secondary Information:

Funding Information:


A Comprehensive Study of Forecasting, Prevention, Mitigation Techniques, and Guidelines for Security Risks in Vulnerable AI Systems

Boobalan J1, Umamaheswari S2, Malleswaran M3

1 Assistant Professor, Department of ECE, Kumaraguru College of Technology, Coimbatore, Tamilnadu, India.
2 Associate Professor, Department of ECE, Kumaraguru College of Technology, Coimbatore, Tamilnadu, India.
3 Assistant Professor, Department of ECE, University College of Engineering Kancheepuram, Tamilnadu, India.

Corresponding author: [email protected].
Abstract:

Artificial Intelligence (AI) and Machine Learning (ML) technologies are rapidly advancing, opening a world of possibilities in various domains. From medical image analysis to language translation, AI has proven to be a powerful tool. However, as with any technological advancement, there is a need to consider the potential for malicious use. This article explores the landscape of security threats stemming from the malicious use of AI and proposes strategies for forecasting, preventing, and mitigating these threats. AI has become a vital part of modern business operations, offering automation and optimization of various tasks. However, as businesses increasingly rely on AI tools, it is crucial to understand and address the security risks associated with these technologies. This comprehensive survey explores the potential security challenges AI systems pose and provides strategies to safeguard businesses against the vulnerabilities of AI.

Keywords: Artificial Intelligence, AI security, malicious use of AI, AI threats, mitigating AI threats.
1. Introduction

The field of AI is expanding at an unprecedented rate, with new applications being developed and expected to emerge in the long term. While the positive impact of AI is widely recognized, relatively little attention has been given to the potential for malicious use. As AI capabilities continue to evolve, addressing the security risks associated with its misuse becomes crucial. Artificial intelligence has revolutionized business operations, offering efficiency, productivity, and innovation. However, as AI technologies become more prevalent, it is essential to recognize the potential security risks they bring. This paper aims to give enterprises a thorough grasp of the security issues related to AI systems and useful advice on how to successfully reduce such risks. Another area of concern noted by policy leaders from various nations is employment and the future of labor. For instance, looking at the application of AI in driverless vehicles, the US 2016 Report considered whether current regulation is sufficient to handle risk or whether adaptation is required. The UK outlines four major problems in its policy paper, and the Pan-Canadian AI Strategy aims to establish international thought leadership on the consequences of AI advancements for law, politics, ethics, and the economy [1].
Figure 1. Types of concerns and ethical challenges
Figure 1 illustrates the five issues that might lead to failures involving numerous organizational, technological, and human agents. Due to the combination of technical and human actors, there are challenging issues surrounding accountability and liability for the effects of AI behaviors [2].
1.1. Types of Concerns and Ethical Challenges

Inconclusive evidence: Algorithms produce probable but inherently uncertain knowledge when they draw inferences from data using machine learning techniques and/or inferential statistics. The characterization and quantification of this uncertainty are fundamental to both computational and statistical learning theories. Significant correlations can be found using statistical approaches; however, correlations alone often cannot establish causation, making it difficult to act based only on this knowledge. The notion of an "actionable insight" effectively conveys the degree of uncertainty associated with statistical correlations and the normative implications of acting upon them.
Unjustified actions: Data mining and algorithmic decision-making rely heavily on inductive knowledge and on connections discovered in a dataset. It is common to regard correlations based on "sufficient" data as reliable enough to guide action without requiring the establishment of causality. Two distinct types of issues can arise from acting on correlations. First, spurious correlations may be discovered instead of actual causal knowledge. Second, actions with significant personal impact are directed toward individuals even though the underlying correlations or causal information may hold only at the population level.
Inscrutable evidence: When data are used as evidence for a conclusion, it is reasonable to expect that the relationship between the data and the conclusion should be understandable and open to examination. It is imperative to ensure the intelligibility and monitoring of AI systems due to their vast complexity and scale. The inability to access datasets and the inherent challenge of mapping the variety of data and attributes that an AI system analyses result in outputs and conclusions that have both conceptual and practical constraints.
Opacity: This is the "black box" issue with AI: the reasoning from input to output may be fundamentally opaque or incomprehensible, or it may be hidden from affected stakeholders. Complex code, variable decision-making logic, and high data dimensionality all contribute to opacity in machine learning algorithms. In general, transparency and comprehensibility are desirable, since poorly predictable or interpretable algorithms are hard to manage, monitor, and fix. People often mistakenly believe that transparency alone will solve the ethical problems raised by emerging technologies.

Misguided evidence: Because algorithms deal with data, they are governed by the same restriction as all other data processing methods: the output can never exceed the input. This phenomenon and its importance are captured by the informal "garbage in, garbage out" principle, which states that conclusions can only be as reliable (and as neutral) as the data they are based on.
Bias: The supposed lack of bias in AI and algorithms is often used as justification for automating human decision-making. This is untenable; biased conclusions are inevitable with AI systems. If only to the extent that a specific design is favored as the ideal or optimal choice, a system's functionality and design reflect the ideals of its creator and intended usage. The development path is not impartial or straight. Thus, "the author's values, intentionally or unintentionally, are frozen into the code, effectively institutionalizing those values." Hence, addressing implicit biases requires inclusivity and equity in AI design and application. According to Friedman and Nissenbaum, bias originates from emergent features of a usage context, technological limitations, and ingrained societal norms contained in the "social institutions, practices, and attitudes" from which the technology originates [2].
Unfair outcomes: Algorithmically driven actions can be analyzed against various ethical frameworks, standards, and principles. Because these criteria are observer-dependent, it is possible to evaluate an action's ethical appropriateness and consequences without considering its epistemic quality. Even when a decision is made based on convincing, verifiable, and well-founded facts, an action may be deemed discriminatory if it affects a protected class of individuals.
Discrimination: Biases in AI systems can discriminate against individuals and groups. Discriminatory analytics can exacerbate stigmatization and self-fulfilling prophecies, which can impair the autonomy and social involvement of targeted groups. There is no universally accepted definition of discrimination; however, legal systems worldwide have a rich history of tackling diverse forms of discrimination, articulating the objectives of equality legislation, and setting suitable benchmarks for the allocation of outcomes among different groups. Against this background, it is challenging to build fairness and nondiscrimination concerns into AI systems. Depending on how discrimination appears in each circumstance, it may be possible to instruct algorithms not to take into account certain features that lead to discrimination, such as gender or ethnicity. But proxies for protected qualities are difficult to find or anticipate, especially when methods depend on relevant datasets.
Transformative effects: The effects of AI systems cannot always be attributed to ethical or epistemic failings. When there is no apparent harm, a large portion of their effects may at first seem morally neutral. A different category of effects, sometimes called transformative effects, concerns subtle changes in the way the world is conceived and structured.
Autonomy: Algorithms that make value-laden judgments can also pose a danger to autonomy. In this respect, the personalization of content by AI systems such as recommender systems is particularly difficult. Personalization can be understood as the construction of choice architectures that vary from person to person. Through information filtering, by eliminating content that is judged unnecessary or in opposition to the user's values or preferences, personalization lessens the diversity of information that users encounter. This is problematic, since diversity of information is thought to be a requirement for autonomy. When the preferred option prioritizes the interests of third parties over those of the individual, the subject's autonomy in making decisions is violated.
Data Privacy: Algorithms also alter our understanding of privacy. Reactions against discriminatory practices, personalization, and the loss of autonomy frequently invoke data privacy, or the right of individuals to "protect their personal information from unauthorized access." Data privacy refers to a person's ability to control their data and the effort required by other parties to obtain it. A right to identity grounded in informational privacy implies that it is problematic for a third party to engage in opaque or clandestine profiling; this might include insurance firms, consumer technology companies, and remote care providers. Opaque decision-making prevents scrutiny and well-informed choices about data sharing. Data subjects cannot set privacy rules that apply to all data types in a general sense, because the value of, or insight carried by, data is determined only through analysis.
Traceability: AI systems frequently involve multiple agents, which might include the systems and models themselves as well as human developers and users, manufacturers, and deploying organizations. Because of their complexity, speed, and scale, AI systems can also communicate directly, forming multi-agent networks with swift behaviors that elude human oversight and comprehension. As software artifacts used in data processing, algorithms absorb the moral challenges associated with developing and releasing new technologies and with managing enormous volumes of personal and other data [3], [32]. Because of all these variables, it can be challenging to identify risks, determine what caused them, and determine who is to blame when AI systems behave unexpectedly. Problems with any of the five categories of concern stated above may therefore give rise to a related problem of traceability, which requires determining both the source of poor behavior and who is responsible for it.
Moral responsibility and distributed responsibility: Insofar as they can articulate the general architecture and operation of the machine to an outsider, developers and software engineers have historically had "control of the machine's behavior in every detail." The conventional understanding of software design responsibility presupposes that the developer can consider the technology's possible consequences and malfunctions and make decisions that select the best possible outcomes while adhering to functional specifications. When a technology malfunctions, blame and sanctions must be assigned [32].
Automation bias: AI raises issues around the diffusion of users' feelings of accountability and responsibility, together with a corresponding propensity to trust system outputs because of their perceived objectivity, accuracy, or complexity. Artificial intelligence decision-making can appear to absolve human decision-makers of some duties. When stakeholders from many fields work together on algorithms, it can happen, for instance, that each party believes the others should accept ethical responsibility for the algorithm's actions. The additional layer of complexity that machine learning introduces between algorithm-driven activities and designers may legitimately lessen the weight of blame placed on the latter.
Safety and resilience: The need to assign accountability is particularly evident when algorithms break down. Algorithms that lack ethics can be compared to broken software artifacts that do not work the way they are supposed to. It is useful to distinguish between malfunction and dysfunction (failure to operate as intended, and the presence of unwanted side effects, respectively), as well as between types and tokens of such faults of operation.
Ethical auditing: In systems where human review is expected, auditing can produce an ex-post procedural record of complex automated decision-making, making it possible to examine and identify discriminatory practices, other harms, and incorrect judgments.
1.2. Understanding the Threat Landscape

Analyzing how AI may affect the threat environment across many domains is crucial to properly mitigating the risks posed by the malicious use of AI. The three main security domains covered in this paper are political, physical, and digital security. Figure 2 presents the flow chart for an AI system that ensures security and privacy; the flow chart conveys the objectives and scope of this research and the types of security mechanisms that should be addressed in AI systems.
Figure 2. Flow chart for an AI system that ensures security and privacy.
Digital Security

Artificial intelligence can automate cyberattack tasks, considerably influencing the scope and effectiveness of these operations. For instance, AI can streamline labor-intensive cyberattacks like spear phishing, making them more widespread and effective. Additionally, novel attacks exploiting human vulnerabilities, existing software vulnerabilities, or even AI system vulnerabilities may emerge [5].
Physical Security

The automation enabled by AI can also extend to physical systems, such as drones and autonomous weapons. As AI is increasingly deployed in these areas, there is a risk of expanding threats associated with physical attacks. Subverting cyber-physical systems or orchestrating attacks involving large numbers of autonomous drones are among the potential risks.
Political Security

AI's automation capabilities can be harnessed for surveillance, persuasion, and deception, posing threats to political security. Misuse of AI can occur when it is employed for targeted propaganda, video tampering, or the analysis of large-scale data collections. These actions can compromise privacy, manipulate public opinion, and undermine the integrity of democratic processes.
In this paper, vulnerabilities, risks, and challenges in AI systems are carefully investigated from the recent literature and real-time systems. Based on the literature review, this paper discusses suitable mitigation techniques, guidelines, and policies for the ethical usage of AI systems. The remainder of this paper is organized as follows: Section 2 discusses the related works and presents a detailed investigation of the recent literature. Section 3 describes suitable recommendations for AI researchers and stakeholders. Section 4 explores the priority research areas. Section 5 discusses how to develop technological and policy solutions. Section 6 navigates the risks and challenges. Section 7 presents ethical AI and its responsibilities, and Section 8 presents the conclusion and future work.
2. Related Works

Artificial intelligence has become an essential technology in recent years, spreading its wings across many dimensions such as production, maintenance, and quality assurance in manufacturing. In the financial sector, chatbots increase the efficiency of the customer experience, and by protecting customers from fraud, cheating, and money laundering, AI provides strong security for customer safety at every instance. In agriculture, AI addresses inadequate demand prediction, protects crops by recommending suitable pesticides and fertilizers [8], and predicts crop prices to inform sowing practices based on real-time advisories. AI also plays a vital role in minimizing power consumption, preventing accidents, and assisting differently-abled persons [9]; early identification of potential diseases and telemedicine are among the most critical use cases in healthcare.
AI in the educational system is concerned with improving the learning experience through personal training, analyzing the facts and minimizing the number of dropouts, and recommending vocational training and self-assessment for students and teachers. Furthermore, AI can help improve the efficacy of organic fruit testers [10], bio-waste segregation [11], leukaemia detection [12], and social media analysis [13]. A common case study (credit default prediction) has been used to demonstrate popular XAI methods; such work uses XAI as a platform to analyze business advantages from several angles, provide insightful information on quantifying explainability, and chart paths toward sustainable AI [18]. The social impact of AI has been reported alongside statistics on incidents and locations; certain applications of AI have been uncovered using the online AI Incident Database, with top-ranked applications including autonomous driving, intelligent robots, computer vision, and language models [19]. A fresh ontological framework covering fourteen implications of digital ethics for the employment of AI across seven archetypes of digital technology was developed [20].
There is a huge discrepancy between moral values and modern technology. It is not easy to translate complicated social notions into technical rulesets, even when this gap is recognized and attempts are made to operationalize principles [21]. A systematized scoping methodology was used to search several databases for healthcare applications [22]. Building on the findings of Morley and associates, a thorough examination of over 100 frameworks, process models, solutions, and instruments suggested to facilitate the required transition from theory to practice validates that the suggested methods place a significant emphasis on a small number of ethical concerns, including accountability, privacy, equity, and explainability [23]. In AI ethical standards, the political and economic ramifications of AI business operations are notably absent [24]. Effectiveness, validity, and bias in data are all part of information ethics: excessive dependence on ChatGPT can lower patient adherence and promote self-diagnosis, and biased training data can produce biased output [25]. The advancement of Artificial Intelligence in Education (AIED) has the potential to transform the field of education and influence all parties concerned. However, the application of AIED has raised concerns and hazards around several ethical matters, such as learner independence and personal data [26].
Researchers have investigated racial bias in widely used cutting-edge facial expression detection techniques including Deep Emotion, Self-Cure Network, ResNet50, InceptionV3, and DenseNet121 [27]. The ChatGPT language model is an example of how modern technology can completely transform the educational system [28]. In addition to the gaps in processes toward new opportunities and risks in identifying robust fundamentals, accountability, and AI's complex and responsible ramifications, the structural foundation, fundamentals, and cardinal righteous protestations have also been closely examined [29]. A healthcare framework was developed that complements the "principle-based" guidelines emphasizing adherence to ethical values [30].
To handle varied backgrounds and fairness concerns during the design stage, the Fairness in Design (FID) framework was developed to help designers surface and explore difficult fairness-related issues [31]. Adversarial examples are intentionally created inputs that force a machine-learning model to make a mistake. They are usually created by taking actual data, like a spam advertisement, and purposefully altering it to trick the algorithm that will interpret it [33].
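To make the notion concrete, the following minimal PyTorch sketch crafts an adversarial example with the Fast Gradient Sign Method (FGSM); the model, the input tensor, and the epsilon budget are illustrative assumptions, not details taken from [33].

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        # Perturb input x in the direction that increases the loss, so the
        # model is more likely to misclassify it; epsilon bounds the change.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

A perturbation of this size is typically imperceptible to a human observer yet can flip the model's prediction.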
However, recent research has revealed that adversarial attacks can be launched against machine learning classification models. One proposed adversarial attack detection architecture for an XAI-based IDS operates in two stages: initialization and detection. During the initialization stage, LIME (Local Interpretable Model-agnostic Explanations) is used to extract the most informative features from the dataset and to train an IDS based on an SVM classification model. During the detection phase, the trained IDS analyses the classification results explanation by explanation to identify an adversarial attack [34].
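A minimal sketch of that initialization stage, assuming a tabular IDS dataset and the open-source scikit-learn and lime packages; the file names, feature names, and class names are hypothetical placeholders, and [34] should be consulted for the actual pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from lime.lime_tabular import LimeTabularExplainer

    # Hypothetical pre-extracted IDS features and labels.
    X_train = np.load("ids_features.npy")
    y_train = np.load("ids_labels.npy")

    clf = SVC(probability=True).fit(X_train, y_train)  # SVM-based IDS

    explainer = LimeTabularExplainer(
        X_train,
        feature_names=[f"f{i}" for i in range(X_train.shape[1])],
        class_names=["benign", "attack"],
    )

    def explain(sample):
        # Detection stage: inspect each classification's explanation; an
        # explanation dominated by atypical features can flag a possible
        # adversarial input for closer review.
        exp = explainer.explain_instance(sample, clf.predict_proba, num_features=5)
        return exp.as_list()  # [(feature condition, weight), ...]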
AI models are no longer adequately resistant to adversarial attacks on neural networks, which have become more deadly and aggressive in recent times. In one assessment approach, six popular CNN models for image classification were tested against thirteen different kinds of adversarial attacks; the impartially computed resilience of the models can serve as a benchmark for future development [35]. Facing increasingly complex neural network models, other work surveys adversarial attacks on images, text, and malicious code, focusing on adversarial attack classifications [36]. Further development of AI has relied more and more on strengthening AI systems' resilience against adversarial attacks. Adversarial defense strategies such as altering data, altering models, and employing auxiliary tools, as well as the applications of adversarial attack technologies in computer vision, natural language processing, cyberspace security, and the real world, have been thoroughly examined [37]. Current machine learning-based systems are very accurate and performant, yet they are susceptible to small perturbations that can have disastrous effects in security-related situations. When applications run in a hostile environment, the threat increases. Therefore, developing strong learning strategies resistant to adversarial attacks has become essential [38].
When assessing hand strength for musculoskeletal disorders (MSDs) and other occupational duties, gripping and pinching measurements are commonly utilized [39]. Utilizing straightforward binary and mathematical operations, a novel self-configuration method for humanized Cyber-Physical Systems (CPS) was created to speed up convergence, increase scalability, and handle the dynamics that people bring to CPS [40]. Machining terminals of all kinds make up the machine and control levels; their primary duty is to oversee and manage equipment to provide the best possible solutions for Cloud Terminals Based Cyber-Physical Systems (CTCPS) [41].
One of the primary technologies connected to the cyber-physical system is the Digital Twin (DT), which has been used to investigate a Product Manufacturing Digital Twin (PMDT) concentrating on the manufacturing stage of a smart shop floor for smart manufacturing [42]. Through numerous blogs and communities, social media gives many people a platform to discuss their experiences with cancer; when analyzing massive amounts of data in a prospective flow, a distributed framework using an LSTM neural network offers an alternative to traditional sentiment analysis techniques [43]. To predict the closing price of the Indian energy exchange, an LSTM neural network model based on enhanced particle swarm optimization has been employed [44]. Optimizing driving behavior can save energy, lower the risk of traffic accidents, and enhance passenger comfort; an LSTM and a time cycle neural network were employed to assess bus rider comfort in real time and offer driving recommendations [45]. A hybrid deep learning strategy combining a Generative Adversarial Network (GAN) and an LSTM model was developed as part of a new photovoltaic generation forecast method based on future cloud picture prediction, to improve the accuracy of weather information prediction [46]. Finding anomalies in wind turbines is a crucial but difficult task; while deep learning shows promise, its complicated structure frequently makes it difficult to interpret, especially for wide error distributions. A Composite Quantile Regression Long Short-Term Memory Network with Group Lasso (CQR-LSTM-GL) was created, ensuring that the LSTM remains valid regardless of the error distribution [47]. Because of its robustness in handling very long input sequences and its ability to fix the vanishing and exploding gradient problems that RNNs encounter, the LSTM is often employed as a base neural network [48].
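As a generic illustration of the LSTM-based predictors recurring in these studies, the sketch below fits a one-step time-series forecaster in Keras; the window length, layer width, and the sine-wave stand-in data are assumptions for demonstration, not parameters from the cited works.

    import numpy as np
    from tensorflow import keras

    window = 24  # illustrative look-back length
    model = keras.Sequential([
        keras.layers.Input(shape=(window, 1)),
        keras.layers.LSTM(64),   # gated memory counters vanishing/exploding gradients
        keras.layers.Dense(1),   # next-step prediction
    ])
    model.compile(optimizer="adam", loss="mse")

    series = np.sin(np.linspace(0, 100, 1000))  # stand-in for a real series
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
    y = series[window:]
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)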
Game theory has been suggested to explain the conceivable attack-defense dynamics for an autonomous microgrid: attackers operate in two stages, disruption and incursion, and defenses can perform better by upgrading or strengthening the cyber component [49]. Precise time series prediction has been acknowledged as a crucial undertaking in numerous application fields. On most datasets, one proposed method outperforms all benchmark approaches on real-world time series in terms of Symmetric Mean Absolute Percentage Error (SMAPE); real-world data frequently include complicated, non-linear patterns that make it difficult for conventional forecasting algorithms to produce correct forecasts, and the method also attains the best Average Rank (AR) of all the applied techniques [50].
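For reference, SMAPE is commonly defined as follows, where $A_t$ is the actual value and $F_t$ the forecast over $n$ time steps (this is the usual convention; [50] may use a slight variant):

    \mathrm{SMAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \frac{|F_t - A_t|}{\left(|A_t| + |F_t|\right)/2}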
A highly anticipated application of CPS is intelligent transportation. Any unwanted or unauthorized entrant to a vehicular network can seriously harm other networked authorized cars. To guard against intrusions into the VANET, a safe and private signRecryption protocol was devised, which embeds group signatures within a full authentication mechanism [51].
The focus of healthcare is changing as a result of developments in machine learning and sensor technologies, as well as the growing use of cell phones. Using sensors and mobile devices, CPSs can create efficient mobile health solutions and offer complex new mechanisms for real-time monitoring of an individual [52]. A manufacturing cyber-physical system (MCPS) driven by digital twins has been proposed for mass customization, allowing parallel control of smart workshops; decentralized digital twin models can be used to develop cyber-physical links that enable different manufacturing resources to function as dynamic autonomous systems that collaborate to co-produce customized products [53]. Sufficient security measures must be implemented against cyberattacks for CPS to function securely. Nonetheless, increasing security and optimizing energy efficiency are two different but equally important needs. The best course of action for CPS in terms of lowering energy usage is to dynamically initialize the security mechanism at the first sign of cyberattacks; when there are no attacks, CPS can disable the security feature to reduce energy usage [54].
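A toy sketch of that dynamic policy, with the attack detector and security module left as abstract placeholders; this only illustrates the idea in [54], not its actual implementation.

    import time

    def security_controller(detect_attack_signal, security, poll_seconds=5):
        # Enable the energy-hungry security mechanism only while attack
        # indicators are present; otherwise keep it off to save energy.
        while True:
            if detect_attack_signal():
                security.enable()
            else:
                security.disable()
            time.sleep(poll_seconds)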
3. Recommendations for AI Researchers and Stakeholders

Working together, governments, technical experts, and other stakeholders can successfully address the vulnerabilities in AI. The following high-level suggestions can direct the actions taken to stop and lessen the harmful usage of artificial intelligence.

Collaboration: Policymakers and researchers should collaborate to recognize and prepare for potential malicious uses of AI.

Dual-Use Considerations: AI scientists must be preemptive in considering the potential misuse of their work and prioritize security measures.

Learn from Other Disciplines: Best practices from disciplines like computer security can inform strategies to address the dual-use risks of AI.

Expanding Stakeholder Engagement: It is vital to involve a wide range of stakeholders and domain experts to effectively tackle the challenges posed by the malicious use of AI.
4. Exploring Priority Research Areas

In addition to the high-level recommendations, further research is needed in several priority areas to develop effective solutions. These include developing technology and policy solutions, fostering a culture of accountability, investigating other openness models, and learning from the cybersecurity community.

Learning from the Cybersecurity Community

Collaborating with the cybersecurity community can yield important insights into countering attacks related to artificial intelligence. Security can be enhanced through methods such as formal verification, red teaming, responsible vulnerability disclosure, and secure hardware.

Exploring Different Openness Models

Rethinking institutions and norms around research transparency is essential as AI's dual-use nature becomes increasingly obvious. Risks can be reduced with the use of pre-publication risk assessments, central access licensing models, and sharing policies that put security and safety first.
Promoting a Culture of Responsibility

AI organizations and researchers have a special chance to influence how the security environment develops in a world where AI is used. Raising awareness of the possible repercussions of AI misuse and cultivating a culture of accountability can be facilitated through training, ethical norms, and clear expectations.
5. Developing Technological and Policy Solutions

It is necessary to investigate technical developments and policy initiatives in addition to the aforementioned fields. Legislative and regulatory measures, tracking of resources for AI, coordinated use of AI for public-good security, and guaranteeing privacy are some ways to lessen the risks associated with AI misuse.
5.1. Popular AI Tools for Businesses

AI tools have become widely accessible to businesses of all sizes, offering capabilities in content generation, image processing, data analysis, and more. Some popular AI tools used by businesses include ChatGPT, Copy.ai, Imagen, Supertone, and Jitter. These tools automate various tasks and streamline business operations. It is imperative, though, to acknowledge the possible security hazards linked to their utilization.
5.2. Data Breaches and Privacy Risks

One of the primary concerns with AI systems is the potential for data breaches and privacy risks. AI tools often collect, store, and process substantial amounts of data, including sensitive data. To safeguard against data breaches and privacy risks, businesses must implement robust security measures. This includes using secure file-sharing systems, utilizing antivirus software, and implementing encryption protocols to protect sensitive data.
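As one hedged illustration of such an encryption protocol, the sketch below uses symmetric encryption from the Python cryptography package (Fernet) to protect a sensitive record at rest; key management details are omitted here but matter greatly in practice.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice, store in a secrets manager
    f = Fernet(key)
    token = f.encrypt(b"customer-record: ...")  # ciphertext safe to store or share
    assert f.decrypt(token) == b"customer-record: ..."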
5.3. Model Poisoning and Attacks

AI systems are vulnerable to model poisoning, where malicious actors manipulate the training data or inject malicious code into the system. This can lead to the production of erroneous or malicious results. Preventing and detecting model poisoning attacks requires implementing robust security measures during the development and deployment of AI systems. Regularly auditing and monitoring the models can help identify potential vulnerabilities.
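One simple auditing heuristic, sketched below under the assumption that a small trusted subset of the data exists: score incoming training samples against a model fit on the trusted subset, and flag samples whose labels the trusted model strongly disputes for manual review. The synthetic arrays stand in for real data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_trusted = rng.normal(size=(500, 8))
    y_trusted = (X_trusted[:, 0] > 0).astype(int)   # vetted, curated labels
    X_new = rng.normal(size=(200, 8))
    y_new = rng.integers(0, 2, 200)                 # incoming, possibly poisoned

    trusted_model = LogisticRegression().fit(X_trusted, y_trusted)
    proba = trusted_model.predict_proba(X_new)[np.arange(len(y_new)), y_new]
    suspect = np.where(proba < 0.05)[0]  # labels the trusted model finds very unlikely
    print(f"{len(suspect)} samples flagged for manual review")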
5.4. Plagiarism and Copyright Infringement

AI-generated content, such as text, poses a risk of plagiarism and copyright infringement. AI tools can generate text quickly, but the output may lack originality. Businesses using AI-generated content should carefully review and edit the output to ensure uniqueness and avoid potential plagiarism issues. Additionally, businesses should be cautious about using AI systems that source data from the internet to generate video and sound clips, in order to prevent copyright infringement.
5.5. Adversarial Attacks on AI Systems

Adversarial attacks exploit vulnerabilities in AI algorithms to manipulate or deceive AI systems. These attacks can lead to the misclassification of data or the evasion of security controls. To enhance the robustness of AI systems against adversarial attacks, businesses should implement techniques such as adversarial training, input sanitization, and anomaly detection.
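A minimal sketch of input sanitization via anomaly detection, assuming a history of known-good inputs: fit an IsolationForest on benign traffic and drop requests it scores as outliers before they reach the model. The synthetic data and contamination rate are illustrative choices.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    X_clean = rng.normal(size=(1000, 16))          # historical benign inputs
    detector = IsolationForest(contamination=0.01, random_state=1).fit(X_clean)

    def sanitize(batch):
        keep = detector.predict(batch) == 1        # +1 inlier, -1 outlier
        return batch[keep]                         # pass only plausible inputs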
5.6. Bias and Ethical Concerns in AI Systems

AI systems can inadvertently perpetuate biases present in the training data, leading to discriminatory outcomes. It is crucial for businesses to address bias in AI algorithms and to ensure fairness and ethical considerations in their AI applications. This includes conducting regular audits of training data, diversifying data sources, and incorporating ethical guidelines into the development process.
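One concrete audit that fits here is a group-fairness check; the sketch below computes the demographic parity difference (the gap in positive-prediction rates across a protected attribute) on synthetic decisions. This is only one of several fairness criteria.

    import numpy as np

    rng = np.random.default_rng(2)
    y_pred = rng.integers(0, 2, 1000)  # model decisions (stand-in)
    group = rng.integers(0, 2, 1000)   # protected attribute, two groups (stand-in)

    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    print(f"demographic parity difference: {abs(rate_a - rate_b):.3f}")  # near 0 is fairer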
5.7. Supply Chain Attacks and AI-based Malware

Supply chain attacks can compromise AI systems by injecting malicious code or manipulating the development process. Additionally, AI-based malware can exploit vulnerabilities in AI systems to gain unauthorized access or disrupt business operations. To protect against these threats, businesses should implement stringent security measures throughout the supply chain, including secure development practices and regular vulnerability assessments.
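A small sketch of one such supply-chain safeguard: verifying a downloaded model artifact against a checksum published out of band before loading it. The file name and expected digest here are hypothetical.

    import hashlib

    EXPECTED_SHA256 = "..."  # digest published by the model provider

    def verify(path):
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                h.update(chunk)
        if h.hexdigest() != EXPECTED_SHA256:
            raise RuntimeError(f"integrity check failed for {path}")

    verify("model.bin")  # raises unless the digest matches; never load tampered artifacts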
5.8. Securing AI Training Data and Models

Securing AI training data and models is vital to maintaining the safety and security of the AI system. Businesses should implement data protection measures such as access controls, encryption, and anonymization techniques. Additionally, ensuring the security of AI models involves protecting the model's parameters, validating inputs, and implementing runtime defenses against attacks.
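As a hedged illustration of one anonymization technique, the sketch below pseudonymizes a direct identifier with a salted hash so records remain linkable for training without exposing the raw value; the column name and salt handling are assumptions for the example.

    import hashlib
    import pandas as pd

    SALT = b"rotate-me"  # keep secret and separate from the data

    def pseudonymize(value: str) -> str:
        return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

    df = pd.DataFrame({"email": ["user@example.com"], "feature": [0.7]})
    df["email"] = df["email"].map(pseudonymize)  # identifier no longer readable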
5.9. Continuous Observation and Incident Response

Continuous monitoring is essential to detect and respond to potential security incidents in AI systems. Implementing security monitoring systems can help identify anomalous behavior and potential threats. Additionally, companies should create a strong incident response strategy that details what to do in the event of a security problem, including containment, investigation, and recovery.
5.10. Ensuring Compliance with Regulations

AI systems must comply with privacy regulations and legislation such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR). Businesses should understand the regulatory requirements and incorporate privacy and security compliance into their AI systems. Regular evaluations and audits can help ensure compliance and avert possible legal repercussions.

6. Navigating the Risks and Challenges

In today's digital landscape, artificial intelligence is revolutionizing various industries, including cybersecurity. But as AI is incorporated more and more into our systems and procedures, new security threats and privacy issues arise. This section examines the challenges associated with AI security and privacy and provides actionable strategies to mitigate these risks.

6.1. Understanding the Benefits and Risks of AI in Cyber Security

Artificial intelligence has the potential to greatly enhance cybersecurity efforts. By leveraging machine learning algorithms, AI systems can analyze massive amounts of data, identify patterns, and detect potential threats in real time. This helps companies react to new intrusions quickly and efficiently. AI can also automate repetitive processes, freeing up human resources to concentrate on more difficult security problems.

However, along with these benefits come significant risks. Hackers are increasingly leveraging AI technology to develop sophisticated cyber-attacks. AI-powered attacks can exploit vulnerabilities in systems, generate convincing phishing emails, deploy malware, and create realistic deepfake videos. As AI continues to advance, cybercriminals will undoubtedly find new ways to exploit its capabilities.

Furthermore, biases in AI systems can lead to discriminatory outcomes, particularly in areas such as facial recognition. Because AI systems can only be as good as the data they are trained on, incomplete or biased training sets can produce biased actions and judgments. Additionally, the lack of human oversight in AI decision-making processes raises concerns about accountability and ethical decision-making.
6.2. Best Practices for AI Security and Privacy

To effectively navigate the risks and challenges associated with AI security and privacy, organizations should adopt a proactive and multi-faceted approach. Here are some best practices to consider:
1. Implement Robust Security Measures

Organizations should set up robust governance procedures and carry out frequent risk assessments to safeguard AI systems against changing cyberthreats. This involves making certain that AI models are well-tested and secure and that extensive security measures are in place to safeguard private information. Organizations should also keep up with the most recent security flaws and apply updates and patches on time.
2. Ensure Data Privacy and Ethical Use of AI

Privacy should be a top priority when developing and deploying AI systems. Organizations must save only the necessary personal data, put methods in place to safeguard users' identities within the data, and make their privacy policies clear to users. Conducting privacy audits and giving users control over their data are also essential steps. Moreover, organizations should be mindful of potential biases in AI systems and work towards eliminating discriminatory outcomes.
3. Enable Human Oversight and Accountability

While AI can automate many tasks, human oversight and accountability are crucial in the decision-making process. Organizations should establish policies and procedures that ensure human involvement in critical decisions, especially those with significant consequences. This includes defining clear roles and responsibilities for human operators and developing mechanisms to review and audit AI-generated decisions.
4. Foster Collaboration and Knowledge Sharing

Because AI security and privacy are continuously evolving, a cooperative approach is necessary. To keep abreast of the most recent dangers and mitigation techniques, organizations should actively participate in industry forums, exchange knowledge, and work with specialists. By promoting a collaborative culture, organizations can jointly address the issues raised by AI in cybersecurity and develop strong solutions.
5. Invest in Continuous Education and Training

As AI technology advances, it is essential for cybersecurity professionals to stay updated with the latest developments and acquire the necessary skills to address emerging threats. Organizations should invest in continuous education and training programs that focus on AI security and privacy. This includes providing opportunities for professionals to enhance their understanding of AI technologies, data protection regulations, and ethical considerations.
6. Integrating Privacy

The fundamental principle of privacy by design guarantees that privacy considerations are ingrained in the system from the beginning. A useful resource for integrating privacy safeguards into AI development is the OWASP AI Security and Privacy Guide [6]. Furthermore, Google's Secure AI Framework prioritizes security procedures designed specifically for AI systems, addressing privacy, integrity, confidentiality, and availability issues [7].
6.3. Mitigating Cybersecurity Risks: A Priority

Threats to AI privacy and security are a modern technological concern because of massive data collection and weaknesses in AI systems. To guarantee proper deployment and robust privacy and security in AI systems, it is essential to conduct in-depth risk assessments and reinforce security measures. Implementing a better AI system should incorporate the following strategies to attain optimum performance.

i. Monitoring and security evaluations: These help the system stay alert to possible threats, spot risks, and take protective measures before serious harm is done.

ii. Adversarial training while building models: this method, sketched after this list, aids the model in identifying and fending off possible manipulations.

iii. Addressing the hazards: Implementing access restrictions, user monitoring systems, and language filters is essential. These steps can successfully cut down on harmful activity and protect users from potential dangers.
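A minimal PyTorch sketch of adversarial training (strategy ii), assuming a generic differentiable classifier; the FGSM perturbation, the epsilon value, and the equal loss weighting are illustrative choices rather than prescriptions.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        # Craft FGSM-perturbed copies of the batch on the fly.
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

        # Train on clean and adversarial inputs together so the model
        # learns to resist small malicious changes.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()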
7. Ethical AI and its Responsibilities

Responsible AI and ethics are critical factors in the development, deployment, and use of artificial intelligence systems. As AI technologies become increasingly integrated into various aspects of society, addressing ethical concerns and ensuring responsible AI practices are of paramount importance. Here are key aspects of responsible AI and ethics:

(i). Transparency: Transparency should be maintained in the design and implementation of AI systems. This entails providing clear documentation, explanations, and guidance on the operation of the AI system, its decision-making procedures, and the data it uses. Such transparency is essential for accountability and trust.
(ii). Fairness: It is important to work towards removing prejudice and bigotry from AI systems. It is the responsibility of developers to make sure that AI systems do not unjustly favor or discriminate against specific people or groups based on characteristics such as gender, age, race, or socioeconomic status. This calls for meticulous data collection and selection and careful model validation to detect and lessen bias.

(iii). Privacy: Respecting individuals' privacy is crucial. AI systems should adhere to data protection regulations and ethical principles. Data collection and processing should be transparent, and user consent should be obtained when necessary. Data anonymization and encryption techniques should be employed to protect personal information.
(iv). Accountability and Liability: Accountability should be in place for the creation and application of AI systems. Developers, organizations, and other stakeholders should be held accountable for the decisions and actions of AI systems. Legal frameworks ought to handle liability concerns when AI systems injure people or make bad choices.

(v). Explainability and Interpretability: AI systems must be designed to be understandable. Users and stakeholders should be able to comprehend how the system arrives at its decisions. Techniques for model interpretability and explainability can help provide insights into AI system behavior.
(vi). Beneficence: AI should be developed and used to foster the safety of individuals and organizations. Ethical considerations should prioritize positive impacts and avoid harm. This involves attention to the societal and ethical consequences of AI applications.

(vii). Oversight and Regulation: Governments and regulatory bodies play a critical role in determining the guidelines for AI. Responsible AI practices can be enforced through legal frameworks, standards, and audits to ensure compliance.

(viii). Inclusivity: AI should be developed with an inclusive mindset. This means considering diverse perspectives and involving an extensive range of stakeholders in the development process to prevent bias and discrimination.

(ix). Data Quality and Security: High-quality data is essential for responsible AI. Data should be accurate, representative, and secure. Adequate security measures should be in place to protect the data from breaches.

(x). Continuous Monitoring and Improvement: AI systems should be subject to continuous monitoring and evaluation for ethical considerations, bias, and performance. Iterative improvements should be made to address any identified issues.

(xi). Ethical Training: Developers and AI practitioners should be educated and trained in responsible AI and ethics. Ethics courses and guidelines can help foster a culture of ethical AI development.

(xii). Public Engagement and Ethical Debate: Public engagement and ethical discussions around AI technologies are crucial. It is important to involve the public and encourage debate on AI applications, regulations, and ethical concerns.
(xiii). International Collaboration: Ethical AI is a global concern. International collaboration and agreements can help ensure consistent standards and principles for responsible AI across borders.

Responsible AI and ethics are ongoing considerations and involve a blend of technical, legal, and ethical approaches to ensure that AI benefits society while minimizing potential risks and harms.
7.1. Research Objectives for Unbiased Artificial Intelligence Systems

Research objectives for unbiased artificial intelligence (AI) systems are crucial to addressing the challenges associated with bias and fairness in AI. The goal of unbiased AI is to create systems that do not discriminate against people or groups based on protected traits like age, gender, color, or other attributes. Here are some research objectives in the field of unbiased AI.
1. Bias Detection and Measurement: Develop robust methods for detecting and quantifying biases in AI systems, including both overt and subtle biases that may emerge from biased training data, biased algorithms, or biased user interactions.

2. Bias Mitigation Techniques: Research and develop techniques to mitigate bias in AI systems, such as retraining models, data preprocessing, and algorithmic adjustments. Explore ways to make AI systems more robust against different types of bias.

3. Fairness Metrics: Define and refine fairness metrics and criteria to evaluate AI systems' performance in terms of fairness and equity, considering various dimensions of fairness, including group fairness, individual fairness, and intersectional fairness.

4. Algorithmic Fairness: Investigate novel algorithms and models designed to reduce or eliminate bias in AI systems, and assess their effectiveness in real-world applications.

5. Data Collection and Labeling: Develop guidelines and best practices for collecting, curating, and labeling training data to minimize biases. Explore methods for obtaining diverse and representative data.

6. Explainability and Transparency: Investigate ways to improve the transparency and interpretability of AI systems so that users can comprehend the reasoning behind particular decisions, thereby assisting in the identification and remediation of biases.

7. User-Centered Design: Designers of AI systems should take user feedback and preferences into consideration to make sure that AI applications uphold social ideals and advance justice.

8. Ethical Considerations: Investigate the ethical implications of AI systems, considering issues such as consent, accountability, and the impact of AI on marginalized communities.

9. Bias in Reinforcement Learning: Address biases that can arise in reinforcement learning systems, which may lead to unintended consequences or reinforce existing biases.

10. Cross-Cultural and Global Perspectives: Consider cultural and geographical variations in fairness definitions and biases, ensuring that AI systems are designed to be fair and unbiased in a global context.

11. Policy and Regulation: Collaborate with policymakers and stakeholders to develop regulations and guidelines for the responsible deployment of AI systems, with a focus on fairness and non-discrimination.

12. Bias in NLP and Computer Vision: Investigate bias in natural language processing (NLP) and computer vision systems, which are commonly used in applications like recommendation systems and facial recognition.

13. Long-Term Monitoring: Provide techniques for ongoing auditing and monitoring of AI systems in practical settings so that biases can be identified and corrected as soon as they arise.

14. Education and Awareness: Promote education and awareness about AI bias and fairness among developers, users, and the public to encourage responsible AI development and deployment.

15. Collaborative Research: To address prejudice and fairness concerns comprehensively, encourage interdisciplinary collaboration among researchers, ethicists, legislators, and industry stakeholders.
8. Conclusion and Future Work

As AI technologies continue to advance, it is crucial to address their vulnerabilities and potential for malicious use. By collaborating and taking proactive measures, policymakers, technical researchers, and stakeholders can work together to forecast, prevent, and mitigate the threats associated with the malicious use of AI. Also, as businesses increasingly integrate AI systems into their operations, it is crucial to recognize and address the security risks associated with these technologies. By understanding the security challenges and adopting best practices, businesses can leverage the benefits of AI while mitigating the associated risks. Through a combination of research, responsible practices, and policy interventions, the security risks of AI can be effectively managed, ensuring a safer future in the AI-enabled world. As artificial intelligence continues to transform the cybersecurity landscape, organizations must remain vigilant in addressing the security and privacy risks associated with AI systems. By implementing robust security measures, prioritizing data privacy and the ethical use of AI, enabling human oversight, fostering collaboration, and investing in continuous education, organizations can navigate the challenges and leverage the benefits of AI responsibly and securely. By staying proactive and adaptive, we can harness the power of AI while safeguarding our systems and protecting user privacy.

In the future, addressing bias and promoting fairness in AI systems will remain an ongoing and multifaceted challenge that requires a combination of technical, ethical, and societal efforts. Researchers should work collaboratively to advance these objectives and create AI systems that are more equitable and unbiased.