Deepfake Video Detection
A Project Report
Submitted
In Partial Fulfillment of the Requirements
For the Degree of
Bachelor of Technology in Information Technology
by
Ritik Singh (2101920100232)
Ambrish Yadav (2101920100048)
We hereby declare that the project work presented in this report entitled “Deepfake
Video Detection”, in partial fulfilment of the requirement for the award of the degree
of Bachelor of Technology in Information Technology, submitted to A.P.J. Abdul
Kalam Technical University, Lucknow, is based on our own work carried out at the
Department of Information Technology, G.L. Bajaj Institute of Technology &
Management, Greater Noida. The work contained in this report is true and original to
the best of our knowledge, and the work reported herein has not been submitted by us
for the award of any other degree or diploma.
Signature:
Signature:
Signature:
Signature:
Date:
Certificate
This is to certify that the project report entitled “Deepfake Video Detection”, done by
Ritik Singh (2101920100232) and Ambrish Yadav (2101920100048), is an original
work carried out by them at the Department of Information Technology, G.L. Bajaj
Institute of Technology & Management, Greater Noida, in partial fulfilment of the
requirements for the award of the degree of Bachelor of Technology in Information
Technology.
Date:
Acknowledgement
The merciful guidance bestowed upon us by the Almighty helped us see this project
through to a successful end. We humbly pray with sincere hearts for His guidance to
continue forever.
We thank our project guide, Dr. Anshika Chaudhary, who gave us guidance and
direction throughout this project. Her versatile knowledge helped us through the
critical moments during the span of this project.
We pay special thanks to our Head of Department, Dr. Sansar Singh Chauhan, who
has always been present as a support and helped us in every possible way during this
project.
We also take this opportunity to express our gratitude to all those who have been
directly or indirectly associated with us during the completion of the project.
We want to thank our friends, who have always encouraged us during this project.
Last but not least, thanks to all the faculty members of the Department of
Information Technology who provided valuable suggestions during the course of the
project.
Abstract
In the age of advanced digital media, deepfake videos have emerged as a serious
threat. Created with techniques such as generative adversarial networks (GANs) and
other deep learning methods, deepfakes can make individuals appear to say or do
things they never did, enabling misinformation, reputational damage, and fraud. As
these forgeries become increasingly realistic, reliable automated detection has become
essential for preserving trust in digital content.
This project presents a deepfake video detection system. Videos are split into frames,
which are pre-processed (center-cropped, resized to 224x224 pixels, and normalized)
and passed through a feature-extraction stage based on OpenCV operations and a
pre-trained InceptionV3 backbone. A recurrent component (RNN/GRU) models
temporal inconsistencies across frames, and a sigmoid-activated classifier labels each
video as real or fake together with a confidence score. The system is evaluated on
benchmark datasets such as FaceForensics++ and the Celeb datasets, and is exposed to
users through a lightweight Flask web interface. The results show that the system
distinguishes manipulated from authentic videos with reasonable accuracy, while the
discussion identifies class imbalance and model capacity as the main directions for
further improvement.
TABLE OF CONTENTS
Declaration.....................................................................................................................ii
Certificate......................................................................................................................iii
Acknowledgement........................................................................................................iv
Abstract..........................................................................................................................v
Table of Content............................................................................................................vi
List of Figures.............................................................................................................viii
Chapter 1. Introduction...............................................................................................9
1.1 Preliminaries.....................................................................................................................9
1.2 Problem Analysis............................................................................................................10
1.3 Motivation.......................................................................................................................10
1.4 Objectives.......................................................................................................................10
Chapter 2. Literature Survey....................................................................................11
2.1 Introduction.....................................................................................................................11
2.2 Existing System..............................................................................................................12
Key Contribution............................................................................................................14
Challenge Addressed......................................................................................................15
Relevance to the Current Study......................................................................................16
2.3 Benefits of the Project....................................................................................................22
Chapter 3. Proposed Methodology...........................................................................24
3.1 Problem Formulation......................................................................................................24
3.2 System Analysis & Design.............................................................................................24
3.3 Proposed Work...............................................................................................................24
Chapter 4. Implementation.......................................................................................26
4.1 Introduction.....................................................................................................................26
4.2 Implementation Strategy (Flowchart, Algorithm, etc.)..................................................26
4.2.1 Algorithms Used in the Framework......................................................................29
4.3 Tools/Hardware/Software Requirements........................................................................31
Chapter 5. Result & Discussion................................................................................33
5.1 Results.............................................................................................................................33
Left Graph: Training and Validation Accuracy................................................................34
Right Graph: Training and Validation Loss.....................................................................35
Key Takeaways for the Report:........................................................................................35
5.2 Discussion.......................................................................................................................40
Chapter 6. Conclusion & Future Scope...................................................................43
6.1 Conclusion......................................................................................................................43
6.2 Future Scope...................................................................................................................44
References...................................................................................................................45
LIST OF FIGURES
Chapter 1
Introduction
1.1 Preliminaries
In the age of advanced technology, digital media plays a vital role in our everyday
lives. However, with the rapid development of artificial intelligence (AI), a new threat
has emerged: deepfake videos. Deepfake videos are created using cutting-edge AI
techniques, including generative adversarial networks (GANs) and advanced neural
networks, to manipulate or synthesize video content. These technologies allow the
creation of hyper-realistic fake videos, where individuals appear to say or do things
they never actually did.
For instance, GANs use a two-part system comprising a generator, which creates fake
content, and a discriminator, which evaluates its authenticity. This iterative process
improves the quality of deepfakes to the point where they become challenging to
differentiate from real footage. Other techniques involve autoencoders and deep
learning frameworks that analyse and recreate facial expressions, voice, and body
movements in a highly convincing manner.
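The generator-discriminator loop described above can be summarized in a short TensorFlow/Keras sketch. This is purely illustrative: the toy image size, layer widths, and optimizer settings are assumptions, and GANs are discussed here only to explain how deepfakes are generated, not as part of the detection framework developed later.

import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100  # size of the random noise vector fed to the generator

# Generator: noise -> synthetic image (toy 28x28 grayscale output for illustration)
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28, 1)),
])

# Discriminator: image -> probability that the image is real
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_pred = discriminator(real_images, training=True)
        fake_pred = discriminator(fakes, training=True)
        # Discriminator learns to label real images 1 and generated images 0.
        d_loss = bce(tf.ones_like(real_pred), real_pred) + \
                 bce(tf.zeros_like(fake_pred), fake_pred)
        # Generator learns to make the discriminator label its fakes as real.
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss

Repeating this step drives the iterative improvement described above: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more realistic output.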
Understanding and detecting deepfakes has become an urgent task to prevent misuse,
such as spreading misinformation, damaging reputations, or conducting fraudulent
activities. This report delves into the problem of deepfake detection, providing a
thorough exploration of methodologies, implementations, and outcomes aimed at
ensuring digital trust and safeguarding against this rising technological challenge.
1.2 Problem Analysis
Deepfake videos can:
• Spread misinformation and fake news, impacting social and political stability.
• Damage personal and professional reputations, leading to psychological and
financial harm.
• Be used for illegal activities such as blackmail, identity theft, or fraud.
The accessibility of AI tools has made creating and sharing deepfake content easier,
raising concerns about digital security and trust. Current detection systems face
challenges due to the increasing sophistication of deepfakes, making it imperative to
develop innovative and robust solutions.
1.3 Motivation
The motivation to study deepfake detection stems from the need to:
• Safeguard digital trust by preventing the spread of misinformation and fake news.
• Protect individuals and organizations from the reputational, psychological, and
financial harm caused by manipulated media.
• Counter the misuse of increasingly accessible AI tools for blackmail, identity theft,
and fraud.
1.4 Objectives
The key objectives of this project are to:
• Collect and pre-process a dataset of real and fake videos from public repositories.
• Extract spatial and temporal features that expose manipulation artifacts.
• Train a deep learning classifier that labels videos as real or fake.
• Evaluate the system using metrics such as precision, recall, and F1-score.
Chapter 2
Literature Survey
2.1 Introduction
The literature survey aims to provide an overview of existing research in the field of
deepfake detection, focusing on state-of-the-art methodologies, challenges, and
advancements. Over the past few years, researchers have explored various
approaches, ranging from traditional computer vision techniques to advanced deep
learning-based models, to identify and differentiate fake content from genuine media.
Through this literature review, the goal is to identify the strengths, weaknesses, and
limitations of existing approaches, which serve as a foundation for developing
improved and more reliable deepfake detection frameworks. The reviewed studies
also highlight emerging challenges, such as dealing with compressed videos, detecting
complex manipulations, and ensuring model generalization across multiple deepfake
generation techniques.
The literature survey explores existing research and systems related to deepfake
detection. It provides insights into the strengths and limitations of current methods
and helps identify gaps for improvement. This section covers state-of-the-art
technologies and algorithms employed in the detection of manipulated content.
[1]. Leandro A. Passos and Danilo Jodas (2024) - "A Review of Deep Learning-
based Approaches for Deepfake Content Detection"
In their comprehensive study, Passos and Jodas (2024) provide an in-depth analysis
of deep learning-based techniques for detecting deepfake content. The paper focuses
on the growing threat posed by deepfakes and the advancements in AI-driven methods
to counteract these synthetic forgeries. Key highlights of the research include the
categorization of existing approaches, challenges faced in detection, and future
research directions.
Key Contributions:
1. Categorization of Detection Approaches:
o Image-based detection: Models like Convolutional Neural Networks
(CNNs) analyze spatial artifacts, such as inconsistent textures, face
boundary mismatches, or pixel-level anomalies.
o Video-based detection: Temporal models, such as Recurrent Neural
Networks (RNNs) and Temporal Convolutional Networks (TCNs),
detect inconsistencies in facial movements, blinking patterns, and
frame-to-frame artifacts.
2. Popular Architectures
The review covers prominent deep learning architectures, including:
o XceptionNet and EfficientNet for image-based detection.
o Long Short-Term Memory (LSTM) networks and 3D-CNNs for
temporal and motion analysis. These architectures are proven to
achieve state-of-the-art performance in detecting subtle manipulations.
3. Challenges in Deepfake Detection
The authors highlight major challenges, such as:
o The diminishing visibility of artifacts in high-quality deepfakes.
o Generalization issues where models fail to detect deepfakes generated
by unseen algorithms.
o Real-world factors like low lighting, noise, and compression, which
often obscure detection cues.
4. Evaluation Metrics and Datasets
5. Future Directions
The paper underscores the need for:
o More robust models that generalize across multiple deepfake
generation techniques.
o Real-time detection systems for practical deployment.
o Hybrid approaches combining spatial and temporal analysis for
enhanced detection accuracy.
Relevance to the Current Study:
The review's categorization of image-based and video-based detection methods,
together with its discussion of generalization, compression, and real-world challenges,
directly informs the design of the video-based framework developed in this project.
[2]. Kanwal et al. (2023) - Detection of AI-Generated Deepfake Images Using a
Siamese Network with Triplet Loss
In their 2023 study, Kanwal et al. propose a novel deep learning-based approach to
detect AI-generated deepfake images using a Siamese Network architecture
combined with Triplet Loss. This method focuses on capturing minute visual
inconsistencies between authentic and deepfake images, making it highly effective for
identifying synthetic manipulations.
Key Contributions:
Triplet Loss trains the network to minimize the distance between the anchor and a
positive (authentic) sample while maximizing the distance between the anchor and the
negative sample (deepfake image).
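A minimal sketch of this triplet-loss objective in TensorFlow is shown below; it is an illustration of the concept rather than the authors' exact implementation, and the margin value is an assumption.

import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor, positive: embeddings of authentic images;
    # negative: embedding of a deepfake image.
    # The loss pulls anchor and positive together while pushing the
    # anchor away from the negative by at least `margin`.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))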
Challenges Addressed:
Generalization to Unseen Data: The use of Triplet Loss enables the model to
generalize well across different datasets and deepfake generation methods.
Handling Fine-Grained Artifacts: The network is particularly effective at
identifying fine-grained pixel-level artifacts that may go unnoticed by
conventional classifiers.
The approach presented by Kanwal et al. aligns with the need for robust and efficient
deepfake detection methods. The use of a Siamese Network with Triplet Loss
highlights the importance of feature-based learning for distinguishing real and
manipulated content. In this project, while focusing on video-based detection, the
insights from this paper provide a foundation for understanding how deep learning
techniques can be employed to detect artifacts in AI-generated media. The concept of
minimizing similarity between real and deepfake samples can inspire further
enhancements in temporal deepfake detection frameworks.
[3]. Preeti et al. - GAN-Based Adversarial Training for Deepfake Detection
Key Contributions:
Challenges Addressed:
Relevance to the Current Study:
The work by Preeti et al. highlights the importance of adversarial training for
improving deepfake detection accuracy. The use of a GAN-based approach aligns
with the need to develop models that can adapt to evolving deepfake generation
techniques. This paper is particularly relevant to the current project, as it provides
insights into handling low-quality videos and real-world scenarios, similar to the
challenges posed in detecting manipulated video content. The findings inspire further
exploration of GAN-based frameworks for enhancing the robustness and reliability of
deepfake detection systems in practical applications.
[4]. Venkatachala et al. (2022) - Sparse Autoencoder with Graph Capsule Dual
Graph Convolutional Neural Network for Deepfake Detection
In this 2022 study, Venkatachala et al. present an innovative deepfake detection
framework that combines a Sparse Autoencoder with a Graph Capsule Dual
Graph Convolutional Neural Network (Graph CNN) to enhance the accuracy and
efficiency of detecting manipulated videos and images.
Key Contributions:
4. Performance Evaluation:
o The framework is evaluated on well-known benchmark datasets,
including FaceForensics++, Celeb-DF, and custom deepfake datasets.
o The results show that the proposed method outperforms existing state-
of-the-art models in terms of precision, recall, and overall accuracy.
o It also demonstrates improved generalization capabilities across unseen
data and different deepfake generation techniques.
5. Efficiency in Resource Utilization:
o The integration of the sparse autoencoder reduces the computational
overhead, making the system suitable for real-world deployment
scenarios.
o The dual graph CNN enhances the detection performance without
requiring excessive computational resources.
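As a minimal illustration of the sparse-autoencoder idea referred to in point 5 (not the authors' architecture), an L1 activity penalty on the bottleneck layer encourages sparse, compact encodings; the input and bottleneck dimensions below are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

INPUT_DIM = 2048  # e.g., a flattened per-frame feature vector (assumed)

inputs = tf.keras.Input(shape=(INPUT_DIM,))
# The L1 activity regularizer drives most encoder activations toward zero,
# which is what keeps the learned representation sparse and cheap to process.
encoded = layers.Dense(128, activation="relu",
                       activity_regularizer=regularizers.l1(1e-5))(inputs)
decoded = layers.Dense(INPUT_DIM, activation="linear")(encoded)

sparse_autoencoder = tf.keras.Model(inputs, decoded)
sparse_autoencoder.compile(optimizer="adam", loss="mse")
# After training, the 128-dimensional encoder output serves as the compressed
# feature representation passed to the downstream (graph-based) classifier.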
Challenges Addressed:
The study by Venkatachala et al. introduces a novel integration of sparse learning and
graph-based convolutional networks, offering a robust and efficient approach for
deepfake detection. This paper is highly relevant to the current project as it highlights
the importance of advanced feature extraction methods (sparse autoencoders) and
graph-based analysis for detecting spatial inconsistencies. These insights can inspire
further enhancements in video-based detection frameworks, particularly for
identifying subtle and complex artifacts in manipulated videos.
[5]. U. A. Ciftci, I. Demir, L. Yin (2020) - "Deepfake Source Detection via
Interpreting Residuals with Biological Signals"
In this 2020 study, Ciftci et al. propose a novel deepfake detection approach that
leverages biological signals and residual features to identify deepfake content and
detect its source. Unlike traditional deepfake detection methods that rely primarily on
visual artifacts, this method focuses on subtle biological inconsistencies that arise
during the deepfake generation process.
Key Contributions:
4. Robustness to Video Compression:
o The model demonstrates robustness to video compression, which is a
common challenge when detecting deepfakes, especially on platforms
like social media where compression often occurs.
5. Performance Evaluation:
o The proposed method is evaluated on widely-used datasets such as
FaceForensics++ and custom datasets that include videos generated
with various deepfake techniques.
o The results highlight the superior performance of the biological signal-
based detection framework, particularly in identifying subtle
manipulations.
Challenges Addressed:
Relevance to the Current Study:
Furthermore, the study’s emphasis on source detection aligns with the broader goal
of identifying the origins of manipulated content, which is essential for understanding
and mitigating the spread of deepfake videos. The insights gained from this paper can
inspire the integration of biological signal analysis into video-based detection
systems, providing a more robust and reliable solution for detecting sophisticated
deepfakes in real-world scenarios.
By employing these diverse methods, existing systems tackle deepfake detection from
multiple angles, but challenges like scalability and adaptability to evolving deepfake
techniques remain areas of ongoing research.
2.3 Benefits of the Project
Deepfakes pose a significant threat to the safety and authenticity of online
interactions. By reducing the spread of harmful or misleading media, this
project contributes to the creation of a secure and trustworthy digital
ecosystem. A safer online environment not only protects individuals and
organizations but also fosters greater confidence in digital communications.
Through robust detection systems and public education, this project addresses
the vulnerabilities introduced by deepfake technology and works toward
enhancing the overall resilience of the digital landscape.
Chapter 3
Proposed Methodology
3.1 Problem Formulation
Deepfake detection is challenging due to the high quality of fake videos and the vast
amount of content generated daily. The goal is to design a system that can:
1. Data Collection: Gather a comprehensive dataset of real and fake videos from
public repositories.
2. Preprocessing: Enhance video quality, extract key frames, and ensure data
consistency.
3. Feature Extraction: Utilize AI models to identify unique patterns in videos,
such as facial movements, audio-visual sync, and temporal inconsistencies.
4. Classification: Implement deep learning models, such as LSTMs and
transformers, to classify videos as real or fake with high accuracy.
5. Performance Evaluation: Use metrics like precision, recall, and F1-score to
assess system performance.
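As a small sketch of step 5, the standard metrics can be computed with scikit-learn once ground-truth and predicted labels are available; the label arrays below are illustrative only.

from sklearn.metrics import precision_score, recall_score, f1_score

# y_true: ground-truth labels (1 = fake, 0 = real); y_pred: model predictions
y_true = [1, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1]

print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:", recall_score(y_true, y_pred))         # TP / (TP + FN)
print("F1-score:", f1_score(y_true, y_pred))           # harmonic mean of precision and recall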
3.3 Proposed Work
Chapter 4
Implementation
4.1 Introduction
Flowchart: Pre-processing steps → Feature Extraction → Training → Prediction
(Real / Fake).
The methodology for detecting real or fake videos has several stages. Each stage
involves several steps, ensuring a systematic approach to training, testing, and
evaluating the model's performance.
1. Dataset Preparation:
2. Pre-Processing Steps:
3. Feature Extraction:
Relevant features are extracted from the processed frames. This can
involve techniques like convolutional feature extraction or specific
domain-related feature selection, depending on the type of videos
being analyzed.
4. Model Creation:
7. Prediction:
When a new video is provided, its frames are pre-processed and passed
through the model to generate predictions. Each frame is classified as
"real" or "fake" with the help of a sigmoid activation (a simplified sketch of
this step follows this list).
Sigmoid activation: σ(x) = 1 / (1 + e^(−x)), which maps the model output to a
probability between 0 and 1.
8. Conclusion:
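As referenced in step 7, the following is a simplified sketch of the prediction step. It assumes the trained model ends in a single sigmoid unit, that frames have already been pre-processed as described in Section 4.2.1, and that frame-level scores are averaged into a video-level verdict with a 0.5 threshold; the aggregation rule, the threshold, and the convention that a score near 1 means "fake" are assumptions.

import numpy as np

def predict_video(model, frames):
    # frames: pre-processed NumPy array of shape (num_frames, 224, 224, 3)
    frame_scores = model.predict(frames, verbose=0).ravel()  # sigmoid outputs in [0, 1]
    video_score = float(np.mean(frame_scores))               # aggregate the frame scores
    label = "Fake" if video_score >= 0.5 else "Real"         # assumed convention: 1 = fake
    return label, video_score                                # classification plus confidence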
Algorithm: OpenCV
The framework uses OpenCV for feature extraction from video frames.
OpenCV’s image processing capabilities, such as edge detection (e.g., Canny
Edge Detection), histogram analysis, or keypoint detection (e.g., SIFT or
ORB), are employed to capture spatial patterns and visual characteristics.
These features help identify inconsistencies and anomalies introduced by
deepfake generation techniques.
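A hedged sketch of this kind of OpenCV-based feature extraction is given below; the specific operators, parameter values (Canny thresholds, histogram bins, ORB feature count), and the way the features are combined are illustrative assumptions rather than the framework's exact configuration.

import cv2
import numpy as np

def extract_frame_features(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Edge map: Canny edge detection highlights boundary and blending artifacts.
    edges = cv2.Canny(gray, 100, 200)
    # Intensity histogram: a coarse description of lighting and texture statistics.
    hist = cv2.calcHist([gray], [0], None, [32], [0, 256]).ravel()
    # Keypoints: ORB captures local structure, e.g. around facial regions.
    orb = cv2.ORB_create(nfeatures=200)
    keypoints = orb.detect(gray, None)
    # Combine a few summary values into a single feature vector for the classifier.
    return np.concatenate([[edges.mean()], hist, [len(keypoints)]])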
Algorithm: Recurrent Neural Networks (RNN), Gated Recurrent Units
(GRU).
Purpose:
This component ensures that temporal patterns and inconsistencies introduced
during the creation of deepfakes are effectively modeled, providing an
additional layer of analysis beyond spatial features.
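A minimal sketch of such a temporal model in Keras is shown below; the sequence length, the per-frame feature dimensionality (e.g., InceptionV3 features), and the layer sizes are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, FEAT_DIM = 20, 2048  # frames per video and features per frame (assumed)

temporal_model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, FEAT_DIM)),
    layers.Masking(mask_value=0.0),          # ignore zero-padded frames
    layers.GRU(64),                          # model frame-to-frame dynamics
    layers.Dropout(0.4),
    layers.Dense(1, activation="sigmoid"),   # probability that the video is fake
])
temporal_model.compile(optimizer="adam",
                       loss="binary_crossentropy",
                       metrics=["accuracy"])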
4. Preprocessing Pipeline
Algorithms Used:
o Center Cropping: Extracts a square region from the center of each
video frame to standardize input dimensions.
o Resizing: Rescales frames to a uniform size of 224x224 pixels for
compatibility with the InceptionV3 model.
o Pixel Normalization: Uses the preprocess_input function from
TensorFlow's InceptionV3 application to normalize pixel values,
ensuring consistency with the pre-trained model's requirements.
Purpose:
Preprocessing ensures that all video frames are uniformly formatted,
minimizing noise and discrepancies in the input data.
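A sketch of the pre-processing steps described above (center cropping, resizing to 224x224, and InceptionV3 pixel normalization with preprocess_input) follows; details such as the BGR-to-RGB conversion are assumptions.

import cv2
import numpy as np
from tensorflow.keras.applications.inception_v3 import preprocess_input

IMG_SIZE = 224

def crop_center_square(frame):
    h, w = frame.shape[:2]
    side = min(h, w)
    y0, x0 = (h - side) // 2, (w - side) // 2
    return frame[y0:y0 + side, x0:x0 + side]

def preprocess_frame(frame_bgr):
    frame = crop_center_square(frame_bgr)            # square center crop
    frame = cv2.resize(frame, (IMG_SIZE, IMG_SIZE))  # uniform 224x224 input
    frame = frame[:, :, ::-1].astype(np.float32)     # OpenCV BGR -> RGB
    return preprocess_input(frame)                   # scale pixels to [-1, 1]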
While GANs are not directly used in this detection framework, the algorithms
target artifacts and temporal inconsistencies introduced by GAN-based
deepfake generation methods.
Hardware Requirements:
• RAM: 4 GB
• High-Speed SSDs: For large dataset storage and quick access.
• Multi-core CPUs: For preprocessing and handling tasks.
Software Requirements:
TensorFlow and Keras: TensorFlow and Keras are used for building, training, and
fine-tuning deep learning models, enabling efficient implementation of neural
networks for video analysis.
OpenCV: OpenCV facilitates video processing tasks such as frame extraction and
manipulation, ensuring data consistency before feeding into the models.
Flask: Flask is utilized to develop the backend, providing a lightweight and flexible
framework for handling video uploads and processing requests. HTML/CSS are
employed for designing the frontend, ensuring a user-friendly and visually appealing
interface for users to interact with the system.
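A minimal sketch of such a Flask backend is given below. It assumes a hypothetical helper predict_video(path) that extracts frames, runs the model, and returns a label with a confidence score; the endpoint name, form field, and upload folder are likewise assumptions.

import os
from flask import Flask, request, jsonify

app = Flask(__name__)
UPLOAD_DIR = "uploads"
os.makedirs(UPLOAD_DIR, exist_ok=True)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a video file in the "video" form field of the upload request.
    video = request.files.get("video")
    if video is None:
        return jsonify({"error": "no video uploaded"}), 400
    path = os.path.join(UPLOAD_DIR, video.filename)
    video.save(path)
    label, confidence = predict_video(path)  # hypothetical helper: frames -> model -> verdict
    return jsonify({"prediction": label, "confidence": confidence})

if __name__ == "__main__":
    app.run(debug=True)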
Expected Outcome: A working system that accepts an uploaded video and returns a
Real/Fake classification along with a confidence score through a simple web interface.
Chapter 5
Result & Discussion
5.1 Results
This section presents a detailed analysis of the performance of the proposed hybrid
deep learning model for deepfake video detection. The model is evaluated on diverse
datasets, including FaceForensics++, CelebV1, and CelebV2, and its results are
compared with previous state-of-the-art research to assess its credibility and
efficiency. The assessment aims to provide insight into the model's ability to
discriminate between genuine and manipulated content under varying scenarios.
The datasets chosen for the experimental work pose a variety of challenges, such as
different manipulation techniques, diverse subjects, and varied environmental
conditions. The discussion that follows builds a deeper understanding of the proposed
model's capabilities and limitations on this evaluation basis.
This bar chart illustrates the distribution of labels in the training dataset used for the
deepfake video detection project. It demonstrates a significant imbalance between the
number of fake and real video samples.
Fake Videos: The majority of the training set consists of fake videos, with
their count exceeding 300 samples. This indicates that a substantial portion of
the dataset focuses on training the model to recognize and classify fake
content accurately.
Real Videos: In contrast, real videos are underrepresented, with fewer than
100 samples. This imbalance could pose a challenge to the model, as it might
become biased toward detecting fake videos while potentially
underperforming in identifying real ones.
This figure illustrates the training and validation accuracy (left) and loss (right) trends
over 30 epochs for the deepfake video detection model. The key observations are
summarized below.
Validation Accuracy: The validation accuracy stabilizes at approximately
81.2% from the first epoch onward. This slight improvement over training
accuracy could be an artifact of the class distribution in the validation set.
Observation: The flat trends indicate that the model does not show significant
improvement after the first epoch, potentially suggesting insufficient model
capacity or early convergence.
Training Loss: The training loss decreases steadily over 30 epochs, indicating
that the model is minimizing its error on the training data.
Validation Loss: The validation loss follows a similar decreasing trend as the
training loss, showing no significant divergence, which implies that the model
generalizes reasonably well to unseen data.
Observation: While the loss decreases, the stagnation in accuracy suggests
that the model might struggle to make meaningful progress in correctly
predicting challenging samples.
1. Model Performance: The consistent accuracy and loss trends indicate that the
model is stable but has limited capacity to learn beyond its initial performance.
2. Potential Issues:
o Class Imbalance: The dataset's imbalance (as shown in the previous
chart) might cause the model to favor the majority class (fake videos)
over the minority class (real videos).
o Underfitting: The lack of improvement in accuracy despite decreasing
loss could mean the model architecture or training strategy is not fully
leveraging the data.
3. Recommendations:
o Address Class Imbalance: Use techniques such as oversampling,
undersampling, or weighted loss functions.
o Enhance Model Complexity: Experiment with deeper architectures or
pre-trained models like ResNet or EfficientNet.
o Early Stopping and Regularization: Introduce early stopping or
regularization to prevent overfitting while enhancing performance.
This analysis emphasizes the need for further optimization to achieve better detection
performance.
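As a sketch of how the recommendations above could be applied in Keras, class weights can counter the label imbalance and an early-stopping callback can halt training once the validation loss stops improving; the sample counts, patience, and other parameter values are illustrative assumptions.

import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

# Illustrative labels reflecting the imbalance shown earlier (1 = fake, 0 = real).
train_labels = np.array([1] * 300 + [0] * 80)
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=train_labels)
class_weight = {0: weights[0], 1: weights[1]}   # up-weights the minority (real) class

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              patience=5,
                                              restore_best_weights=True)

# model.fit(train_x, train_labels,
#           validation_split=0.2,
#           epochs=30,
#           class_weight=class_weight,
#           callbacks=[early_stop])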
Figure 5.3: Results of the proposed model on the FF++ dataset, identifying a fake face
through eye movement.
The figure above presents six representative frames from different video samples
labeled as abofeumbvv.mp4, bqkdbcqjvb.mp4, cdyakrxkia.mp4, cycacemkmt.mp4,
czmqpxrqoh.mp4, and dakqwktlbi.mp4. These videos were analyzed as part of our
deepfake detection framework, with the goal of identifying synthetic manipulations
and distinguishing authentic videos from manipulated ones.
Key Observations:
A comparison of frames demonstrates the model's ability to identify temporal
inconsistencies and subtle pixel-level artifacts.
Final Result:
Figure 5.5: Prediction as Fake Video
The deepfake video detection system was tested using video inputs to determine
whether the uploaded content is Real or Fake. The detection mechanism provides both
the classification result and a corresponding confidence score, reflecting the
certainty of the model's prediction.
A lower confidence score indicates that the system is relatively less confident about a
classification, possibly due to subtle characteristics that may resemble synthetic
features.
Conclusion:
The results confirm that the implemented deepfake detection system effectively
identifies manipulated videos with reasonable confidence levels. The combination of
OpenCV for feature extraction and a classification model using sigmoid activation
successfully achieves binary classification (Real or Fake).
5.2 Discussion
1. Correct Classification: The system was able to distinguish between real and
fake videos based on extracted features.
2. Confidence Scores: The confidence scores reflect the degree of certainty of
the predictions. Lower scores indicate borderline cases, while higher scores
provide stronger assurance of the results.
3. Performance: The system performs well in identifying anomalies, which are
often introduced in deepfake videos during synthetic generation. These
anomalies are captured using visual feature extraction techniques (e.g., edge
detection, key points) and processed through the classification model.
The discussion delves into the key findings of the proposed deepfake detection
system, their implications for combating synthetic media, and how the system
addresses existing challenges in the field. Additionally, it outlines a roadmap for
future enhancements to improve the accuracy, efficiency, and applicability of the
system in real-world scenarios.
The results of the system demonstrate its ability to successfully differentiate between
Real and Fake videos with reasonable accuracy and confidence. By leveraging
computer vision techniques for feature extraction and deep learning-based binary
classification, the system was able to identify subtle inconsistencies introduced during
deepfake video generation. Key findings include:
3. Implications for Misinformation Mitigation: The system contributes to the
fight against misinformation by providing a tool to verify the authenticity of
digital media. In domains such as journalism, security, and content moderation
on social media, deploying such systems can help identify and prevent the
spread of fake content.
4. Real-World Applications: The proposed system can serve as a valuable
solution for detecting deepfake content in various industries, including law
enforcement, politics, entertainment, education, and online platforms. Its
modular nature allows for easy integration into existing workflows.
The implications of these findings are significant, as they highlight the importance of
AI-driven solutions for enhancing trust in digital media. With the growing
sophistication of deepfake technologies, tools like the one presented in this project
play a critical role in safeguarding the integrity of online content.
Chapter 6
Conclusion & Future Scope
6.1 Conclusion
The ability of the system to process uploaded video inputs, analyse individual frames,
and produce accurate predictions makes it a
viable tool for real-world applications. Such applications include combating
misinformation on social media, verifying digital content for news agencies,
safeguarding privacy, and preventing misuse of synthetic media in sensitive domains
like education, politics, and entertainment.
6.2 Future Scope
Future work on this system includes:
• Enhancing the system to detect not only video but also audio deepfakes, leveraging
advanced audio analysis techniques and synchronization checks.
• Integrating detection tools into widely used platforms, such as social media and
video-sharing websites, enabling seamless verification of uploaded content.
• Expanding the dataset to include multi-lingual and culturally diverse content,
ensuring the system is robust across various demographics and linguistic nuances.
• Collaborating with international organizations and regulatory bodies to establish
global standards for deepfake detection and video authentication.
• Developing real-time detection systems that can be integrated into video
conferencing tools, providing instant verification during live sessions.
• Educating the public and stakeholders about deepfake detection techniques through
awareness campaigns, workshops, and accessible online resources to promote digital
literacy and trust.
References
[1] Md Shohel Rana, Mohammad Nur Nobi, Beddhu Murali, and Andrew H. Sung.
(2022). Deepfake Detection: A Systematic Literature Review.
[3] Ahmed, I., Ahmad, M., Rodrigues, J. J., and Jeon, G. (2021). Edge computing-
based person detection system for top view surveillance: Using CenterNet with
transfer learning.
[4] U. A. Ciftci, I. Demir, and L. Yin. (2020). How Do the Hearts of Deep Fakes Beat?
Deep Fake Source Detection via Interpreting Residuals with Biological Signals.
[10] Anusha O. Vaidya, Monika Dangore, Vishal Kisan Borate, Nutan Raut, Yogesh
Kisan Mali, and Ashvini Chaudhari. (2024). Deep Fake Detection for Preventing
Audio and Video Frauds Using Advanced Deep Learning Techniques.
[12] Yeoh, P.S.Q., Lai, K.W., and Goh, S.L. (2021). Emergence of Deep Learning in
Knee Osteoarthritis Diagnosis. Computational Intelligence and Neuroscience.