
FAKE NEWS DETECTION USING XAI

MIDTERM VII SEMESTER SYNOPSIS REPORT

Submitted in partial fulfillment of the requirement of the degree of

BACHELOR OF TECHNOLOGY
to

The NorthCap University

by

Kartik Gupta (20csu340)

Yash Gulati (20csu361)

Ananya Sharma (20csu014)

Under the supervision of

Ms. Monica Lamba

Department of Computer Science and Engineering

School of Engineering and Technology

The NorthCap University, Gurugram- 122001, India

Session 2023-2024
CERTIFICATE

This is to certify that the Project Synopsis entitled, “Fake News Detection Using XAI” submitted
by Kartik Gupta (20csu340), Ananya Sharma (20csu014), and Yash Gulati (20csu361) to The
NorthCap University, Gurugram, India, is a record of bona fide synopsis work carried out by
them under my supervision and guidance and is worthy of consideration for the partial fulfillment
of the degree of Bachelor of Technology in Computer Science and Engineering of the
University.

<Signature of supervisor>
<Name and designation of supervisor>

Date: ………………
INDEX

1. Abstract

2. Introduction

2.1.1 Description
2.1.2 Background Study
2.1.3 Feasibility Study

3. Problem Statement

3.1 Objective
3.2 Methodology
3.2.1 LIME

4. Conclusion

5. References
1. ABSTRACT

The field of explainable artificial intelligence (XAI), which bridges the gap between
sophisticated machine learning methods and human comprehension, has become crucial to
the development of AI systems. This abstract summarises the main ideas, challenges, and
recent developments of XAI.

Explainability is crucial for establishing confidence and transparency in AI systems,
particularly as they become more interwoven into our daily lives and influence decisions in
healthcare, finance, and the criminal justice system, among other areas. This abstract covers
the fundamental ideas of XAI, with particular emphasis on feature attribution, rule-based
models, and interpretable machine learning algorithms. The trade-off between model
complexity and explainability, the need for standardized evaluation measures, and the
ethical ramifications of AI decision-making are explored as key challenges in XAI. Recent
developments in the area are also discussed, including the visualization of neural networks,
the generation of counterfactual explanations, and natural-language explanations.

The abstract also emphasizes the necessity of interdisciplinary cooperation among computer
scientists, ethicists, psychologists, and policymakers to ensure the development of XAI that
not only offers transparent and understandable models but also is in line with human values
and expectations.

2. INTRODUCTION

In recent years, artificial intelligence (AI) has advanced remarkably, revolutionizing sectors,
automating difficult activities, and enhancing human capabilities. However, as AI systems
become more pervasive in our lives, they also raise significant issues of accountability,
transparency, and trust. Explainable Artificial Intelligence (XAI) is an emerging field that
aims to offer clear justifications for the decisions made by AI systems. The promise of XAI
is to close the gap between the "black box" nature of many sophisticated AI algorithms and
the need for people to understand the logic behind AI-driven results.

2.1.1. Description of XAI:

1. The Need for Explainability:

The necessity for explainability becomes vital when AI systems are used in critical fields
including healthcare, finance, autonomous vehicles, and criminal justice. It is not enough for
AI to make decisions; it is also crucial that these decisions can be understood and justified.
When a self-driving car decides in a split second to brake or accelerate, or when a medical
AI system suggests a course of therapy, users and stakeholders must be able to comprehend
why those decisions were made.
2. Core Principles of XAI:

Making AI systems clearer and easier to understand is the central goal of XAI. This calls for a
variety of methods and strategies, such as:
• Feature Attribution: Determining which input attributes influenced a model's choice the most.
• Rule-Based Models: Developing rules that are comprehensible to humans to direct AI decision-
making.
• Interpretable Machine Learning Algorithms: Using inherently interpretable predictive models,
such as decision trees or linear regression.
• Local and Global Explanations: Giving justifications for both specific predictions (local) and
the overall behavior of the model (global).
• User-Friendly Interfaces: Creating user interfaces with understandable explanations.

3. XAI Challenges:

XAI has its share of difficulties. A perpetual trade-off exists between the need for
explainability and the effectiveness of AI algorithms: very complicated models frequently
lack transparency, while highly interpretable models may forfeit predictive accuracy. To
evaluate the efficacy of XAI techniques, standardized evaluation measures, benchmark
datasets, and reliable evaluation methods are required. Furthermore, the moral ramifications
of AI decision-making and the possibility of biased explanations remain persistent concerns.

4. Recent Developments:

XAI has seen interesting developments in recent years. Techniques such as neural network
visualization, counterfactual explanations, and natural-language explanations have made AI
systems easier to understand and trust. Through interdisciplinary cooperation, researchers
are also looking into how to guarantee fairness, accountability, and transparency in machine
learning (FAT/ML).

5. The Interdisciplinary Approach of XAI

Collaboration between computer scientists, ethicists, psychologists, attorneys, policymakers,
and domain experts is essential for XAI, since the field is fundamentally interdisciplinary.
The creation and deployment of explainable AI systems must prioritize societal implications,
legal requirements, and ethical issues. XAI is a societal necessity as much as a technical
challenge.
2.1.2. BACKGROUND STUDY

The foundation of Explainable Artificial Intelligence (XAI) is the rapid development and
widespread adoption of artificial intelligence and machine learning technologies over the
previous few decades. The emergence and increasing significance of XAI are largely due to
several important factors:

1. AI Model Complexity: AI models, deep neural networks in particular, have grown more
complicated, digesting massive quantities of data and producing high-dimensional
predictions. Although these models frequently perform admirably, it has grown more
difficult to understand how they operate internally. This "black box" nature has raised
questions about the dependability and credibility of AI systems.

2. Critical Decision-Making: AI is being included in applications where the outcomes of its
decisions have a substantial impact on the real world. For instance, AI is used in finance for
investment strategies, in autonomous vehicles for driving decisions, and in healthcare for
diagnosis and treatment recommendations. In such domains it is critical for decision-makers,
end users, and regulators to comprehend why AI systems take particular actions.

3. Ethical and Legal Issues: As AI systems are used to make crucial judgments in sensitive
fields such as hiring, lending, criminal justice, and healthcare, there is an increasing
awareness of the ethical and legal ramifications of these choices. Concerns about bias,
discrimination, and fairness have highlighted the need for AI transparency and accountability.

4. User Trust and Adoption: Users are less likely to trust and use these technologies if it is
unclear how AI systems reach their conclusions. XAI is viewed as a way to increase user
acceptance and confidence, which in turn accelerates the adoption of AI in a variety of
fields.

5. Regulatory and Compliance Requirements: Governments and regulatory organizations
have begun to acknowledge the significance of AI accountability and transparency. Laws
such as the European Union's General Data Protection Regulation (GDPR) incorporate
provisions relating to AI explainability in specific areas. The creation and use of XAI
techniques are necessary for compliance with such laws.

6. Interdisciplinary Collaboration: The demand for XAI has sparked interdisciplinary
cooperation among policymakers, computer scientists, ethicists, and psychologists. This
interdisciplinary approach acknowledges that addressing the issues of AI explainability
requires skills beyond typical machine learning research.

7. Research and Industry Initiatives: In response to the demand for XAI, the research
community and industry have created a variety of techniques and tools that make AI
systems easier to understand. These initiatives comprise model visualization tools, rule-
based models, feature attribution approaches, and guidelines for ethical AI development.

In summary, XAI is a response to the changing AI landscape, where the advantages of AI
systems must be weighed against the need for accountability, transparency, and ethical
safeguards. It is a crucial step towards ensuring that AI technology upholds ethical standards,
abides by legal requirements, and has the support of users and society at large. XAI will
continue to be a focus of research, development, and ethical debate as AI develops further.

2.1.3. FEASIBILITY STUDY

A feasibility study for Explainable Artificial Intelligence (XAI) evaluates the practicality
and viability of applying XAI approaches in various real-world contexts. An example of
how such a study may be organised is given below:

1. Project Goals and Scope:

• Specify the precise goals of the feasibility study, such as assessing the viability of
deploying XAI in a certain industry (such as banking, healthcare, or autonomous
vehicles).
• Specify the AI systems, models, or applications that are being taken into consideration
to clarify the scope.

2. Market Research:

• Determine the market potential and demand for XAI in the selected domain.
• Examine competitors and available XAI solutions in the market.
• Ascertain whether XAI adoption is in line with market trends and laws.

3. Economic Feasibility:

• Estimate the expenses of implementing XAI, including:
• XAI method development and integration costs.
• The cost of gathering and preparing data.
• The cost of training and retraining AI models.
• Hardware and software requirements.
• Compare these expenses against potential gains such as greater market share, reduced legal risk,
or enhanced trust.

4. Considerations of Law and Morality:

 Examine the ethical and legal ramifications of XAI in the selected field. This
could entail:
 Examining pertinent laws and compliance standards.
 Analysing possible legal risks, such as liability concerns.
 Identifying moral issues with bias, fairness, and transparency.
5. Considerations of Law and Morality:
• Consider XAI's ethical and legal implications for the chosen field. This can entail: •
Analysing relevant legislation and compliance requirements.
• Examining potential legal issues, such as worries about liability.

• Recognising moral dilemmas with transparency, fairness, and bias.

5. Analysis of Stakeholders:

• Identify and interact with important stakeholders, such as users, clients, regulators,
and sector experts.

• Collect opinions and suggestions from stakeholders on the viability and benefit of
implementing XAI.

6. Risk Assessment:

• Recognise possible risks and difficulties related to deploying XAI. These might consist of:
• Technical risks, such as limitations in model performance.
• Regulatory and legal risks.
• Market adoption risks.
• Create plans for reducing or managing these risks.

7. Prototype or Proof of Concept:

• If practical, consider building a small-scale prototype or proof of concept to demonstrate
how well XAI works in the selected domain.
• Use this prototype to solicit further feedback from interested parties.

8. Concluding Remarks and Recommendations:

• Write a summary of the feasibility study's results.
• Offer concise advice on how to proceed with XAI, including whether to move forward,
change the strategy, or abandon the effort.
• Include a broad implementation plan with timetables and milestones.
9. Reporting:

• Create a thorough report for stakeholders, investors, and decision-makers using the
findings of the feasibility study.

10. Decision-Making:

• Make an informed decision on whether to move forward with the implementation of
XAI in the selected domain.
It's important to note that the feasibility study is a critical step in determining whether
XAI is both practically achievable and economically viable in a specific context. It helps
organizations make informed decisions about the adoption of XAI and ensures that the
technology aligns with their objectives and values while complying with legal and ethical
considerations.

3. FAKE NEWS DETECTION USING XAI

3.1. Objective

The main goal of this project is to use Explainable Artificial Intelligence (XAI) approaches
to create a reliable and efficient fake news detection system. The system seeks to improve
the accuracy and transparency of fake news identification while offering clear and
understandable explanations for its decisions. Important goals include:

1. Model Accuracy: To create and use deep learning and machine learning models that
can accurately tell the difference between authentic and inauthentic news articles, social
media posts, and other types of content.

2. Interpretability: To incorporate XAI approaches into the detection model so that it
produces clear explanations of why a given piece of content is classified as fake or
authentic. These explanations will increase user confidence and make subsequent analysis
easier.

3. Robustness: To make sure the detection system works reliably when faced with
textual, visual, and multimedia fake news, and adapts to new fake news techniques.

4. Scalability: To provide a system that can process a significant amount of content in
real time or near real time, making it appropriate for use in social media platforms,
news organizations, and online discussion boards.

5. Ethical Considerations: To incorporate ethical factors into the design of the system in
order to reduce bias, respect user privacy, and adhere to applicable laws.

6. Evaluation Metrics: To create a comprehensive set of evaluation metrics that
accurately assess the system's performance, including accuracy, precision, recall,
F1-score, and interpretability.

7. User-Centric Design: To include stakeholders and end users in the development process
in order to collect input, preferences, and requirements for the XAI-based fake news
detection system.

8. Raising Public Awareness: To raise awareness and educate the public about fake news
by offering explanations of how the detection system operates and the criteria used to
determine whether a piece of information is real or fake.

9. Deployment and Integration: To investigate ways to integrate the XAI-based fake news
detection system into various platforms and applications, such as web browsers, social
media platforms, and news aggregators.

10. Ongoing Monitoring and Improvement: To create a framework for ongoing monitoring
and enhancement of the fake news detection system based on user feedback and
changing fake news strategies.

By accomplishing these goals, this research aims to advance the field of fake news detection
by utilizing XAI's abilities to deliver accurate classifications as well as clear and
understandable explanations, ultimately promoting more informed and critical media
consumption in the digital age.

3.2. Methodology

Methodology for Fake News Detection Using Explainable Artificial Intelligence (XAI):

1. Data Collection and Preprocessing:
• Compile a diverse collection of relevant content, such as social media posts or news
articles, both real and fake.
• To prepare the data for analysis, clean the text, eliminate stopwords, tokenize, and convert
it to an appropriate format.
• Label each instance in the collection as either fake or real.

2. Feature Extraction:
• Extract relevant features from the text, such as word embeddings, TF-IDF vectors, or other
domain-specific features.
• If applicable, also take into account features such as metadata (e.g., publication date,
source) and social network data (e.g., user interactions).
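As a rough illustration of steps 1 and 2, the sketch below assumes scikit-learn and pandas are available and that the dataset is a CSV file with hypothetical columns named "text" and "label" (1 = fake, 0 = real); the cleaning rules and file name are placeholders to adapt to your own data.

```python
# A minimal sketch of steps 1-2, assuming scikit-learn and pandas are installed and
# that the dataset is a CSV with hypothetical columns "text" and "label".
import re

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

def clean_text(text: str) -> str:
    """Lowercase the text, strip URLs and non-letter characters."""
    text = text.lower()
    text = re.sub(r"http\S+", " ", text)    # remove URLs
    text = re.sub(r"[^a-z\s]", " ", text)   # keep letters only
    return re.sub(r"\s+", " ", text).strip()

df = pd.read_csv("news_dataset.csv")        # hypothetical file name
df["clean_text"] = df["text"].astype(str).apply(clean_text)

# TF-IDF features; scikit-learn's built-in English stopword list is used here,
# but a curated list (e.g. from NLTK) could be substituted.
vectorizer = TfidfVectorizer(stop_words="english", max_features=20000, ngram_range=(1, 2))
X = vectorizer.fit_transform(df["clean_text"])
y = df["label"].values
print(X.shape)  # (n_documents, n_features)
```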

3. Model Selection:
• Select suitable deep learning or machine learning models for detecting fake news, such as:
• Natural Language Processing (NLP) models for text analysis, such as LSTM, BERT, or
GPT.
• Ensemble models, such as Gradient Boosting or Random Forest.
• To ensure interpretability, consider XAI-compatible models or modifications of existing
models.
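The sketch below shows one way to compare classical candidate models with cross-validation; it assumes the X and y produced by the earlier feature-extraction sketch and leaves deep models such as LSTM or BERT aside, since they require framework-specific training code.

```python
# A rough sketch of comparing candidate classical models with 5-fold cross-validation.
# X and y are assumed to come from the previous feature-extraction sketch.
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "gradient_boosting": GradientBoostingClassifier(random_state=42),
}

for name, model in candidates.items():
    # Cross-validated F1 score as a quick, comparable yardstick for model selection.
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```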

4. XAI Integration:
• Include XAI methods in the chosen model to produce explanations for its predictions, for
example:
• LIME (Local Interpretable Model-agnostic Explanations).
• SHAP (SHapley Additive exPlanations).
• Rule-based models that offer clear decision-making guidelines.
• Attention mechanisms that highlight key textual elements.
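As an illustration of the SHAP option (LIME is covered in detail in section 3.2.1), the hedged sketch below assumes the shap package is installed and reuses the X, y, and vectorizer objects from the earlier sketches; the exact shape of the returned SHAP values can differ between package versions, as noted in the comments.

```python
# A hedged sketch of the SHAP option, assuming the `shap` package is installed and
# that X, y, and `vectorizer` come from the earlier sketches.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=300, random_state=42).fit(X, y)

# TreeExplainer works with tree ensembles; a small dense slice keeps the example light.
explainer = shap.TreeExplainer(model)
X_sample = X[:50].toarray()
shap_values = explainer.shap_values(X_sample)

# Depending on the SHAP version, `shap_values` is a list with one array per class
# or a single 3-D array; either way we take the values for the "fake" class (label 1).
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Show the five terms that pushed the first sample most strongly towards "fake".
feature_names = np.array(vectorizer.get_feature_names_out())
top = np.argsort(-vals[0])[:5]
for name, contribution in zip(feature_names[top], vals[0][top]):
    print(f"{name}: {contribution:+.4f}")
```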

5. Training and Validation:
• Split the dataset into training, validation, and test sets.
• Use the training set to train the model and adjust the hyperparameters.
• Verify the model's effectiveness on the validation set, optimizing for metrics such as
accuracy, precision, recall, F1-score, and interpretability.
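A minimal sketch of this step is given below; it assumes the X and y from the feature-extraction sketch and uses a logistic regression with a small manual hyperparameter sweep purely for illustration.

```python
# A minimal sketch of step 5, assuming X and y from the earlier feature-extraction
# sketch. A logistic regression stands in for whichever model step 3 selects.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# 70% train, 15% validation, 15% test, stratified to keep class balance.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=42, stratify=y_tmp)

# Simple hyperparameter sweep (regularisation strength C) scored on the validation set.
best_model, best_f1 = None, -1.0
for C in (0.1, 1.0, 10.0):
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    score = f1_score(y_val, model.predict(X_val))
    print(f"C={C}: validation F1 = {score:.3f}")
    if score > best_f1:
        best_model, best_f1 = model, score
```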

6. Interpretation of XAI Explanations:
• Analyze the explanations produced by the XAI approaches to determine which features or
other criteria contribute to a classification decision.
• Determine whether the explanations are consistent with domain knowledge and human
intuition.

7. Model Assessment:
• Use established evaluation metrics for fake news detection to assess the trained model's
performance on the test dataset.
• Evaluate the quality of the XAI explanations to make sure they are understandable, useful,
and informative for end users.
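For the quantitative part of this assessment, a short sketch using scikit-learn's built-in metrics is shown below; it assumes the best_model and held-out test split from the previous sketch, while the quality of the explanations themselves still requires human review.

```python
# A short sketch of step 7, assuming `best_model` and the test split from the
# previous sketch. Explanation quality is assessed separately, by human reviewers.
from sklearn.metrics import classification_report, confusion_matrix

y_pred = best_model.predict(X_test)
print(classification_report(y_test, y_pred, target_names=["real", "fake"]))
print(confusion_matrix(y_test, y_pred))
```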

8. Ethical Considerations:
• Address ethical issues such as bias mitigation, fairness, privacy, and adherence to laws
such as the GDPR.
• Put measures in place to prevent the fake news detection system from being abused.

9. User Interface Design (optional):
• If necessary, create a user-friendly interface that allows users to interact with the fake
news detection system.
• Communicate XAI-generated explanations in an understandable way.

10. Implementation and Integration:
• Integrate the fake news detection system into existing platforms or applications, such as
social media sites, news websites, or content moderation systems, to guarantee a seamless
user experience.

By following this methodology, you can create and deploy a fake news detection system that
uses the capabilities of XAI to deliver accurate, understandable, and ethically sound
responses to the persistent problem of fake news in the digital age.
3.2.1. LIME

Local Interpretable Model-agnostic Explanations (LIME) is a powerful method in the
Explainable Artificial Intelligence (XAI) toolset that is particularly useful for fake news
detection. Here is a step-by-step guide to using LIME to explain the predictions made by
your fake news detection model:

1. Choose Instances for Explanation: From your dataset, select a group of instances
(news articles or posts) for which you wish to produce explanations. These instances
should include a mixture of real and fake news.

2. Create a LIME Explainer: If you are working in Python, import the LIME library or
package. Create a LIME explainer object and initialize it by specifying attributes such as
the explainer type (for example, text or tabular data) and the number of samples to use for
explanations.

3. Generate Perturbed Data Samples: For each instance chosen in step 1, the LIME
explainer creates perturbed data samples. LIME accomplishes this by randomly perturbing
the instance's features (for text, removing words) and recording your model's predictions
on the perturbed samples. An interpretable surrogate model is trained using these samples.

4. Train a Surrogate Model: Using the perturbed data samples and the corresponding
predictions of your fake news detection model, train a simple interpretable model
(such as linear regression or a decision tree). This surrogate model is trained to
approximate your complex model's behavior locally.
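To make steps 3 and 4 concrete, the sketch below hand-rolls a simplified version of what the LIME library automates internally: perturb one text, query the black-box model, and fit a weighted linear surrogate. The pipeline object is assumed to be any trained model that exposes predict_proba() over raw strings (for example a TfidfVectorizer plus classifier Pipeline); the kernel width and sample count are illustrative.

```python
# A conceptual, hand-rolled illustration of steps 3-4. The LIME library automates
# all of this; `pipeline` is assumed to be a trained model with predict_proba()
# that accepts a list of raw strings.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(text, pipeline, num_samples=500, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    words = text.split()
    # Binary masks: 1 keeps a word, 0 removes it; the first row is the original text.
    masks = rng.integers(0, 2, size=(num_samples, len(words)))
    masks[0] = 1
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    # Black-box predictions (probability of the "fake" class) on the perturbed texts.
    probs = pipeline.predict_proba(perturbed)[:, 1]
    # Weight samples by similarity to the original: fewer removed words = higher weight.
    distances = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # Interpretable surrogate: a weighted linear model over word presence/absence.
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    # Words sorted by the magnitude of their local contribution.
    return sorted(zip(words, surrogate.coef_), key=lambda wc: -abs(wc[1]))

# Usage (hypothetical): explain_locally("breaking miracle cure discovered", pipeline)[:5]
```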

5. Explain Predictions: Use the surrogate model to create local explanations for each
instance of interest. LIME assigns a weight to each feature (word or phrase) in the input
text to show how much it contributed to the prediction.

6. Visualise and Present Explanations: Make the explanations easy to understand by
using visuals. To make an explanation clear and instructive, you can highlight keywords
or phrases in the text, offer an ordered list of influential features, or use other
visualization approaches.
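The sketch below shows how steps 2, 5, and 6 might look with the lime package's LimeTextExplainer; the pipeline object, example article, and output file name are assumptions, and the classifier function only needs to map a list of strings to an array of class probabilities.

```python
# A minimal sketch of steps 2, 5 and 6 with the `lime` package (pip install lime),
# assuming `pipeline` is a trained scikit-learn Pipeline (vectorizer + classifier)
# whose predict_proba() accepts a list of raw strings. The file name is illustrative.
from lime.lime_text import LimeTextExplainer

class_names = ["real", "fake"]
explainer = LimeTextExplainer(class_names=class_names)

article = "BREAKING: scientists reveal miracle cure hidden by the government"
explanation = explainer.explain_instance(
    article,
    pipeline.predict_proba,   # classifier function: list[str] -> class probabilities
    num_features=10,          # number of words to include in the explanation
    num_samples=1000,         # number of perturbed samples LIME generates
)

# Step 5: feature weights for the predicted class, as (word, weight) pairs.
for word, weight in explanation.as_list():
    print(f"{word:>20s}  {weight:+.3f}")

# Step 6: visual presentation — an HTML report with highlighted words.
explanation.save_to_file("lime_explanation.html")
```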

7. Examine and Confirm Explanations: Analyse the LIME-generated explanations with
care. Make sure they make sense to people and fit the rationale behind each prediction
that was made. Check whether the explanations line up with the model's behavior and
domain expertise; this ensures that the explanations are accurate and not meaningless.

8. Iteration and Improvement: Based on user and stakeholder feedback, continuously
refine and enhance the LIME setup. Consider adjusting the LIME explainer's parameters
or experimenting with different surrogate models for better interpretability.
9. Integration into Your Fake News Detection System: Integrate LIME-generated
explanations into the user interface or reporting mechanism of your fake news detection
system. Users should have easy access to these explanations so they can learn why a
specific news item was labelled as fake or real.

10. Evaluation and Validation: Test and validate user experiences to see how well
LIME-generated explanations work to increase user confidence in, and comprehension of,
the fake news detection system.

These steps will help you use LIME to provide understandable explanations for the
predictions made by your fake news detection model. Giving users and stakeholders
insight into why particular judgements are reached improves the transparency of, and
confidence in, your XAI-based fake news detection system.

4. CONCLUSION

In conclusion, the use of Local Interpretable Model-agnostic Explanations (LIME) to identify fake
news marks a significant advancement in the direction of transparency and interpretability in the
field of artificial intelligence. LIME gives us the ability to decipher complicated machine-learning
models and reveal how they make decisions.

We can improve our fake news detection models by following the steps in this guide, and we can
also give users clear, reliable explanations for each classification. To effectively combat the
persistent problem of fake news in our digital age, a combination of predictive accuracy and
human-centered transparency is essential.

Furthermore, the iterative and user-centered methodology ensures that LIME's explanations are not
only technically accurate but also connect with subject-matter expertise and human intuition.
Ongoing user feedback-based development enables optimization and fine-tuning, thus increasing
the usefulness of the explanations.

The inclusion of LIME-generated explanations in fake news detection systems ultimately
stimulates critical media consumption, promotes informed decision-making, and advances the
overarching objectives of Explainable Artificial Intelligence (XAI). This combination of AI-driven
accuracy and human-understandable insights makes it possible to build trust, reduce bias, and
encourage responsible AI applications in the battle against fake news.

5. REFERENCES
 Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik,
Alberto Barbado, Salvador García et al. "Explainable Artificial Intelligence (XAI): Concepts,
taxonomies, opportunities and challenges toward responsible AI." Information fusion 58 (2020): 82-
115.
 Das, A. and Rad, P., 2020. Opportunities and challenges in explainable artificial intelligence (xai): A
survey. arXiv preprint arXiv:2006.11371.
 Speith, Timo. "A review of taxonomies of explainable artificial intelligence (XAI) methods."
In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp.
2239-2250. 2022.
 Ali, Sajid, Tamer Abuhmed, Shaker El-Sappagh, Khan Muhammad, Jose M. Alonso-Moral, Roberto
Confalonieri, Riccardo Guidotti, Javier Del Ser, Natalia Díaz-Rodríguez, and Francisco Herrera.
"Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial
Intelligence." Information Fusion 99 (2023): 101805.
 Mehta, Harshkumar, and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable
Artificial Intelligence (XAI)." Algorithms 15, no. 8 (2022): 291
 Samtani, Sagar, Hsinchun Chen, Murat Kantarcioglu, and Bhavani Thuraisingham. "Explainable
artificial intelligence for cyber threat intelligence (XAI-CTI)." IEEE Transactions on Dependable and
Secure Computing 19, no. 4 (2022): 2149-2150.
 Al-Asadi, Mustafa A., and Sakir Tasdemir. "Using artificial intelligence against the phenomenon of
fake news: a systematic literature review." Combating Fake News with Computational Intelligence
Techniques (2022): 39-54.
 Akhtar, Pervaiz, Arsalan Mujahid Ghouri, Haseeb Ur Rehman Khan, Mirza Amin ul Haq, Usama
Awan, Nadia Zahoor, Zaheer Khan, and Aniqa Ashraf. "Detecting fake news and disinformation
using artificial intelligence and machine learning to avoid supply chain disruptions." Annals of
Operations Research 327, no. 2 (2023): 633-657.
 Wang, Bin, Wenbin Pei, Bing Xue, and Mengjie Zhang. "A multi-objective genetic algorithm to
evolving local interpretable model-agnostic explanations for deep neural networks in image
classification." IEEE Transactions on Evolutionary Computation (2022).
 Yang, Mao, Chuanyu Xu, Yuying Bai, Miaomiao Ma, and Xin Su. "Investigating black-box model for
wind power forecasting using local interpretable model-agnostic explanations algorithm: Why should
a model be trusted?." CSEE Journal of Power and Energy Systems (2023).
