XAI Major Project
BACHELOR OF TECHNOLOGY
Session 2023-2024
CERTIFICATE
This is to certify that the Project Synopsis entitled, “Explainable Artificial Intelligence” submitted
by Kartik Gupta (20csu340), Ananya Sharma (20csu014), and Yash (20csu361) to The
NorthCap University, Gurugram, India, is a record of bona fide synopsis work carried out by
them under my supervision and guidance and is worthy of consideration for the partial fulfillment
of the degree of Bachelor of Technology in Computer Science and Engineering of the
University.
<Signature of supervisor>
<Name and designation of supervisor>
Date: ………………
INDEX
1. Abstract
2. Introduction
2.1.1 Description
2.1.2 Background Study
2.1.3 Feasibility Study
3. Problem Statement
3.1 Objective
3.2 Methodology
3.2.1 LIME
4. Conclusion
5. References
1. ABSTRACT
Explainable artificial intelligence (XAI), which bridges the gap between sophisticated
machine learning methods and human comprehension, has become crucial to the
development of AI systems. This abstract summarizes the main ideas, challenges, and
recent developments of XAI.
It also emphasizes the necessity of interdisciplinary cooperation among computer
scientists, ethicists, psychologists, and policymakers to ensure the development of XAI that
not only offers transparent and understandable models but also aligns with human values
and expectations.
2. INTRODUCTION
In recent years, artificial intelligence (AI) has advanced remarkably, revolutionizing sectors,
automating complex tasks, and augmenting human capabilities. However, as AI systems
become more pervasive in our lives, they also raise significant concerns about accountability,
transparency, and trust. Explainable Artificial Intelligence (XAI) is an emerging
field that aims to provide clear justifications for the decisions made by AI systems. The
promise of XAI is to close the gap between the "black box" nature of many sophisticated AI
algorithms and the need for people to understand the logic behind AI-driven results.
The need for explainability becomes critical when AI systems are used in high-stakes fields
such as healthcare, finance, autonomous vehicles, and criminal justice. It is not enough for
AI to make decisions; it is also crucial that these decisions can be understood and justified.
Users and stakeholders must be able to understand why a decision was made when a
self-driving car makes a split-second choice to brake or accelerate, or when a medical AI
system recommends a course of treatment.
2.1 Core Principles of XAI
Making AI systems clearer and easier to understand is the central goal of XAI. This calls for a
variety of methods and strategies, such as:
• Feature Attribution: Determining which input features contributed most to a model's decision.
• Rule-Based Models: Developing human-readable rules to guide AI decision-making.
• Interpretable Machine Learning Algorithms: Building predictive models, such as decision trees or
linear regression, that are inherently easier to interpret (see the sketch after this list).
• Local and Global Explanations: Providing explanations for individual predictions (local) and
the overall behavior of the model (global).
• User-Friendly Interfaces: Designing interfaces that present explanations in an understandable way.
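As a simple illustration of feature attribution with an inherently interpretable model, the following
minimal Python sketch trains a shallow decision tree with scikit-learn and prints its most influential
features. The dataset and model choice are illustrative assumptions, not part of this project's design.

    # Minimal sketch: global feature attribution from an interpretable model.
    # The benchmark dataset is an assumption used only for illustration.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier

    data = load_breast_cancer()                      # small tabular benchmark
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(data.data, data.target)

    # Global explanation: which input features the tree relied on most.
    importances = sorted(
        zip(data.feature_names, model.feature_importances_),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, score in importances[:5]:
        print(f"{name}: {score:.3f}")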
2.2 Challenges of XAI
XAI has its share of difficulties. There is a persistent trade-off between explainability and
the effectiveness of AI algorithms: highly complex models frequently lack transparency,
while highly interpretable models may sacrifice predictive accuracy. Standardized
evaluation metrics, benchmark datasets, and reliable evaluation protocols are needed to
assess the efficacy of XAI techniques. Furthermore, the ethical ramifications of AI
decision-making and the possibility of biased explanations remain ongoing concerns.
2.3 Recent Developments
XAI has undergone interesting changes in recent years. Techniques such as neural network
visualization, counterfactual explanations, and natural-language explanations have made AI
systems easier to understand and trust. Through interdisciplinary cooperation, researchers
are also investigating how to guarantee fairness, accountability, and transparency in
machine learning (FAT/ML).
The rapid development and widespread adoption of artificial intelligence and machine
learning technologies over the past few decades form the background of Explainable
Artificial Intelligence (XAI). The emergence and growing significance of XAI are largely
due to several important factors:
1. AI Model Complexity: Deep neural networks in particular have grown more complex,
processing massive quantities of data and producing predictions from high-dimensional
inputs. Although these models frequently perform admirably, it has become more difficult
to understand how they operate internally. This "black box" nature has raised questions
about the dependability and credibility of AI systems.
2. Ethical and Legal Issues: As AI systems are used to make crucial judgments in sensitive
domains such as hiring, lending, criminal justice, and healthcare, there is an increasing
awareness of the ethical and legal ramifications of these decisions. Concerns about bias,
discrimination, and fairness have highlighted the need for AI transparency and accountability.
3. User Trust and Adoption: Users are less likely to trust and use these technologies if it is
unclear how AI systems reach their conclusions. XAI is viewed as a way to increase user
acceptance and confidence, which will hasten the adoption of AI in a variety of fields.
4. Research and Industry Initiatives: In response to the demand for XAI, the research
community and industry have created a variety of techniques and tools that make AI
systems easier to understand. These initiatives include model visualization tools, rule-
based models, feature attribution approaches, and guidelines for ethical AI development.
In summary, XAI is a response to the changing AI landscape, in which the advantages of AI
systems must be balanced against the need for accountability, transparency, and ethical
safeguards. It is a crucial step towards ensuring that AI technology upholds ethical standards,
abides by legal requirements, and has the support of users and society at large. XAI will
continue to be a focus of research, development, and ethical debate as AI develops further.
A feasibility study for Explainable Artificial Intelligence (XAI) would evaluate the
practicality and viability of applying XAI approaches in various real-world contexts. An
example of how such a feasibility study might be organized is given below:
1. Objectives and Scope:
• Specify the precise goals of the feasibility study, such as assessing the viability of
deploying XAI in a particular industry (e.g., banking, healthcare, or autonomous
vehicles).
• Clarify the scope by specifying the AI systems, models, or applications under
consideration.
2. Market Research:
• Determine the market potential and demand for XAI in the selected domain.
• Examine competitors and existing XAI solutions in the market.
• Ascertain whether XAI adoption aligns with market trends and regulations.
3. Economic Feasibility:
• Estimate the costs of implementing XAI, including:
• XAI method development and integration costs.
• The cost of gathering and preparing data.
• The cost of training and retraining AI models.
• Software and hardware requirements.
• Compare these costs against potential benefits such as greater market share, reduced legal risk, or
enhanced user trust.
4. Legal and Ethical Considerations:
• Examine the legal and ethical ramifications of XAI in the selected field. This could entail:
• Reviewing relevant legislation and compliance requirements.
• Analyzing potential legal risks, such as liability concerns.
• Identifying ethical issues around bias, fairness, and transparency.
5. Stakeholder Analysis:
• Identify and engage with important stakeholders, such as users, clients, regulators,
and domain experts.
• Gather stakeholder feedback on the feasibility and value of implementing XAI.
6. Risk Assessment:
• Identify potential risks and challenges associated with deploying XAI.
3. PROBLEM STATEMENT
3.1 Objective
The main goal of this research is to use Explainable Artificial Intelligence (XAI) approaches
to create a reliable and efficient fake news detection system. This approach seeks to improve
the accuracy and transparency of fake news identification while offering concise and
understandable justifications for its decisions. Important goals include:
1. Model Accuracy: To create and use machine learning and deep learning models that
can accurately distinguish between authentic and fabricated news articles, social
media posts, and other types of content.
2. Robustness: To ensure the detection system works reliably when faced with textual,
visual, and multimedia fake news, and adapts to new fake news techniques.
3. Ethical Considerations: To incorporate ethical factors into the design of the system in
order to reduce bias, respect user privacy, and adhere to applicable laws.
4. User-Centric Design: To include stakeholders and end users in the development process
in order to collect input, preferences, and requirements for the XAI-based fake news
detection system.
5. Raising Public Awareness: To raise awareness and educate the public about fake news
by explaining how the detection system operates and the criteria used to determine
whether a piece of information is genuine or fabricated.
6. Deployment and Integration: To investigate ways to integrate the XAI-based fake news
detection system into various platforms and applications, such as web browsers, social
media platforms, and news aggregators.
7. Ongoing Monitoring and Improvement: To create a framework for ongoing monitoring
and enhancement of the fake news detection system based on user feedback and
evolving fake news strategies.
By accomplishing these goals, this research aims to advance the field of fake news detection
by utilizing XAI's abilities to deliver accurate classifications as well as clear and
understandable explanations, ultimately promoting more informed and critical media
consumption in the digital age.
3.2. Methodology
Methodology for Fake News Detection Using Explainable Artificial Intelligence (XAI):
1. Data Collection and Preprocessing:
• Compile a diverse collection of relevant content, such as news articles or social media
posts, both genuine and fake.
• Clean the text, remove stopwords, tokenize, and convert it to a suitable format for
analysis (a minimal sketch follows this step).
• Label each instance in the collection as either fake or genuine.
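The following is a minimal, hypothetical preprocessing sketch in Python; the toy documents and
labels are illustrative assumptions standing in for a real labelled fake news corpus.

    # Minimal sketch: lowercase, tokenize, and remove English stopwords.
    import re
    from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

    documents = [
        "Scientists confirm the new vaccine passed clinical trials",   # genuine (assumed)
        "Shocking secret cure that doctors do not want you to know",   # fake (assumed)
    ]
    labels = [0, 1]  # 0 = genuine, 1 = fake (illustrative labels)

    def preprocess(text):
        """Lowercase the text, keep word tokens, and drop common stopwords."""
        tokens = re.findall(r"[a-z']+", text.lower())
        return [t for t in tokens if t not in ENGLISH_STOP_WORDS]

    cleaned = [" ".join(preprocess(doc)) for doc in documents]
    print(cleaned)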
2. Feature Extraction:
• Extract relevant features from the text, such as word embeddings, TF-IDF vectors, or other
domain-specific features (a minimal TF-IDF sketch follows this step).
• If applicable, take into account additional features such as metadata (e.g., publication date,
source) and social network data (e.g., user interactions).
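As an illustration of one option named above, the sketch below extracts TF-IDF features with
scikit-learn; the cleaned documents and labels are assumed to come from the preprocessing step.

    # Minimal sketch: TF-IDF feature extraction over preprocessed text.
    from sklearn.feature_extraction.text import TfidfVectorizer

    cleaned = [
        "scientists confirm new vaccine passed clinical trials",
        "shocking secret cure doctors want know",
    ]
    labels = [0, 1]  # 0 = genuine, 1 = fake (illustrative labels)

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=5000)
    X = vectorizer.fit_transform(cleaned)        # sparse document-term matrix
    print(X.shape)                               # (n_documents, n_features)
    print(vectorizer.get_feature_names_out()[:10])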
3. Model Selection:
• Select suitable machine learning or deep learning models for detecting fake news, such as:
• Natural Language Processing (NLP) models for text analysis, such as LSTM, BERT, or
GPT.
• Ensemble models, such as Gradient Boosting or Random Forest.
• To ensure interpretability, consider XAI-compatible models or modifications of existing
models (a minimal baseline sketch follows this step).
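The sketch below trains a simple TF-IDF plus logistic regression pipeline as a stand-in baseline for
the heavier models listed above; the toy texts, labels, and model choice are assumptions for
illustration only.

    # Minimal sketch: a baseline text classifier standing in for LSTM/BERT/ensembles.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "scientists confirm new vaccine passed clinical trials",
        "government releases official inflation figures for march",
        "shocking secret cure doctors do not want you to know",
        "celebrity reveals miracle diet that melts fat overnight",
    ]
    labels = [0, 0, 1, 1]  # 0 = genuine, 1 = fake (illustrative labels)

    # A pipeline keeps vectorization and classification together, so the model
    # accepts raw text; this also simplifies LIME integration later.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    print(model.predict(["new study confirms trial results"]))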
4. XAI Integration:
• Integrate XAI methods with the chosen model to produce explanations for its predictions.
Possible approaches include:
• LIME (Local Interpretable Model-agnostic Explanations), described further in Section 3.2.1.
• SHAP (SHapley Additive exPlanations); a minimal sketch follows this step.
• Rule-based models that offer clear decision-making rules.
• Attention mechanisms that highlight key textual elements.
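As one hedged illustration of the SHAP option, the sketch below computes SHAP values for a random
forest trained on TF-IDF features; the toy data, the random forest baseline, and the feature setup are
assumptions rather than this project's final design.

    # Minimal sketch: SHAP attributions for a tree ensemble on TF-IDF features.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer

    texts = [
        "scientists confirm new vaccine passed clinical trials",
        "shocking secret cure doctors do not want you to know",
    ]
    labels = [0, 1]  # 0 = genuine, 1 = fake (illustrative labels)

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(texts).toarray()   # dense array for TreeExplainer

    forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

    # SHAP attributes each prediction to individual TF-IDF features (words).
    explainer = shap.TreeExplainer(forest)
    shap_values = explainer.shap_values(X)
    # The exact array layout varies across shap versions; printing the shape
    # is enough to show that per-feature attributions were produced.
    print(np.array(shap_values).shape)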
5. Model Assessment:
• Evaluate the trained model's performance on the test dataset using standard metrics for
fake news detection, such as accuracy, precision, recall, and F1-score (a minimal sketch
follows this step).
• Assess the quality of the XAI explanations to ensure they are understandable, useful, and
informative for end users.
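The sketch below computes standard classification metrics with scikit-learn; the true and predicted
labels are illustrative placeholders for real test-set results.

    # Minimal sketch: standard evaluation metrics on a held-out test split.
    from sklearn.metrics import accuracy_score, classification_report, f1_score

    y_true = [0, 1, 1, 0, 1, 0]   # assumed gold labels (0 = genuine, 1 = fake)
    y_pred = [0, 1, 0, 0, 1, 0]   # assumed model predictions on the test set

    print("Accuracy:", accuracy_score(y_true, y_pred))
    print("F1 (fake class):", f1_score(y_true, y_pred, pos_label=1))
    print(classification_report(y_true, y_pred, target_names=["genuine", "fake"]))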
By following this methodology, one can create and deploy a fake news detection system that
uses the capabilities of XAI to deliver accurate, understandable, and ethically grounded
responses to the persistent problem of fake news in the digital age.
3.2.1 LIME
Local Interpretable Model-agnostic Explanations (LIME) is a powerful method in the XAI
toolset that is particularly useful for fake news detection. The following steps describe how
LIME can be used to explain the predictions of a fake news detection model:
1. Choose Instances for Explanation: From the dataset, select a group of instances
(news articles or posts) for which explanations should be produced. These instances
should include a mixture of genuine and fake news.
2. Generate Perturbed Data Samples: For each instance chosen in step 1, LIME creates
perturbed data samples by randomly modifying the instance's features (for text, typically
by removing words) and recording the black-box model's predictions on these perturbed
samples. An interpretable surrogate model is then trained on them.
3. Train a Surrogate Model: Using the perturbed data samples and the corresponding
predictions of the fake news detection model, train a simple interpretable model
(such as linear regression or a decision tree). This surrogate model is trained to
approximate the complex model's behavior in the neighborhood of the instance.
4. Explain Predictions: Use the surrogate model to create a local explanation for each
instance of interest. LIME assigns a weight to each feature (word or phrase) in the
input text to show how strongly it influenced the prediction.
5. Evaluation and Validation: Test and validate the user experience to see how well the
LIME-generated explanations increase user confidence in, and comprehension of, the
fake news detection system.
These steps make it possible to use LIME to provide understandable justifications for the
predictions of a fake news detection model. Providing users and stakeholders with insight
into why particular decisions are reached improves the transparency of, and confidence in,
an XAI-based fake news detection system. A minimal end-to-end sketch of these steps is
given below.
4. CONCLUSION
It's important to note that the feasibility study is a critical step in determining whether XAI is both
practically achievable and economically viable in a specific context. It helps organizations make
informed decisions about the adoption of XAI and ensures that the technology aligns with their
objectives and values while complying with legal and ethical considerations.
In conclusion, the use of Local Interpretable Model-agnostic Explanations (LIME) to identify fake
news marks a significant advancement in the direction of transparency and interpretability in the
field of artificial intelligence. LIME gives us the ability to decipher complicated machine-learning
models and reveal how they make decisions.
By following the steps outlined above, we can improve our fake news detection models and also
give users clear, reliable justifications for each classification. To effectively combat the
persistent problem of fake news in our digital age, a combination of predictive accuracy and
human-centered transparency is essential.
Furthermore, the iterative and user-centered methodology ensures that LIME's explanations are not
only technically accurate but also connect with subject-matter expertise and human intuition.
Ongoing user feedback-based development enables optimization and fine-tuning, thus increasing
the usefulness of the explanations.
The inclusion of LIME-generated explanations in fake news detection systems ultimately
stimulates critical media consumption, promotes informed decision-making, and advances the
overarching objectives of Explainable Artificial Intelligence (XAI). Building trust, reducing bias,
and encouraging responsible AI applications in the battle against fake news are all made possible by
this combination of AI-driven accuracy and insights that are understood by humans.
5. REFERENCES
Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik,
Alberto Barbado, Salvador García et al. "Explainable Artificial Intelligence (XAI): Concepts,
taxonomies, opportunities and challenges toward responsible AI." Information Fusion 58 (2020): 82-
115.
Das, A., and P. Rad. "Opportunities and challenges in explainable artificial intelligence (XAI): A
survey." arXiv preprint arXiv:2006.11371 (2020).
Speith, Timo. "A review of taxonomies of explainable artificial intelligence (XAI) methods."
In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp.
2239-2250. 2022.
Ali, Sajid, Tamer Abuhmed, Shaker El-Sappagh, Khan Muhammad, Jose M. Alonso-Moral, Roberto
Confalonieri, Riccardo Guidotti, Javier Del Ser, Natalia Díaz-Rodríguez, and Francisco Herrera.
"Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial
Intelligence." Information Fusion 99 (2023): 101805.
Mehta, Harshkumar, and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable
Artificial Intelligence (XAI)." Algorithms 15, no. 8 (2022): 291.
Samtani, Sagar, Hsinchun Chen, Murat Kantarcioglu, and Bhavani Thuraisingham. "Explainable
artificial intelligence for cyber threat intelligence (XAI-CTI)." IEEE Transactions on Dependable and
Secure Computing 19, no. 4 (2022): 2149-2150.
Al-Asadi, Mustafa A., and Sakir Tasdemir. "Using artificial intelligence against the phenomenon of
fake news: a systematic literature review." Combating Fake News with Computational Intelligence
Techniques (2022): 39-54.
Akhtar, Pervaiz, Arsalan Mujahid Ghouri, Haseeb Ur Rehman Khan, Mirza Amin ul Haq, Usama
Awan, Nadia Zahoor, Zaheer Khan, and Aniqa Ashraf. "Detecting fake news and disinformation
using artificial intelligence and machine learning to avoid supply chain disruptions." Annals of
Operations Research 327, no. 2 (2023): 633-657.
Wang, Bin, Wenbin Pei, Bing Xue, and Mengjie Zhang. "A multi-objective genetic algorithm to
evolving local interpretable model-agnostic explanations for deep neural networks in image
classification." IEEE Transactions on Evolutionary Computation (2022).
Yang, Mao, Chuanyu Xu, Yuying Bai, Miaomiao Ma, and Xin Su. "Investigating black-box model for
wind power forecasting using local interpretable model-agnostic explanations algorithm: Why should
a model be trusted?" CSEE Journal of Power and Energy Systems (2023).