
PROtek: Jurnal Ilmiah Teknik Elektro, Volume 11, No. 2, May 2024
https://ejournal.unkhair.ac.id/index.php/protk/index    e-ISSN: 2527-9572 / ISSN: 2354-8924

A Deep Learning Approach for Recognizing the Noon Rule for Reciting Holy Quran

Hanaa Mohammed Osman, Computer Science Dept., College of Computer Science and Mathematics, University of Mosul, 41002 Mosul, Iraq, [email protected]
Ban Sharief Mustafa, Computer Science Dept., College of Computer Science and Mathematics, University of Mosul, 41002 Mosul, Iraq, [email protected]
Basim Mahmood, Computer Science Dept., College of Computer Science and Mathematics, University of Mosul, 41002 Mosul, Iraq; ICT Research Unit, Computer Center, University of Mosul, [email protected]

Abstract – Ahkam Al-Tajweed represents a most precious religious heritage that is in critical need of being preserved for the next generations. This study tackles the challenge of learning Ahkam Al-Tajweed by developing a model that considers one of the rules experienced by early learners of the Holy Quran. The proposed model focuses, specifically, on the "Hukm Al-Noon Al-Mushaddah," which pertains to the proper pronunciation of the letter "Noon" when it is accompanied by a Shaddah symbol in Arabic. By incorporating this rule into the proposed model, learners will benefit from the model because it improves their Tajweed skills and facilitates the learning process for those who do not have access to private tutors or experts. The proposed approach involved three models, namely, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Random Forest, in the context of a classification task. The models were evaluated based on their validation accuracy, and the results indicate that the CNN model achieved the highest validation accuracy of 0.8613. The other contribution of this work is collecting a novel dataset for this kind of study. The findings also show that the Random Forest model outperformed the other models on some of the test Verses.

Keywords: Artificial Intelligence, Deep Learning, Quran Recitation, Ahkam Al-Tajweed, Hukm Al-Noon Al-Mushaddah.

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

I. INTRODUCTION
This section provides a comprehensive description and background of Quran recitation as well as Ahkam Al-Tajweed. It also states the related works of this study and presents the problem considered alongside the contributions of this work.

A. Overview
The Holy Quran is the main sacred book of Islam, composed of 30 parts and 6236 Verses grouped into 114 chapters called "Surahs." Correct pronunciation during recitation is called "Tajweed," and its rules must be followed to ensure the correct meaning is delivered. However, there are many issues in teaching Ahkam Al-Tajweed, including the requirement of an expert for private tutoring, called "Talqeen," which is not always available [1]. To tackle this issue, researchers have turned to Machine Learning (ML) techniques, aiming to develop computerized systems that check the proper application of Ahkam Al-Tajweed based on audio recordings. However, existing systems are limited in the rules they consider or the Quran Verses they cover [2]. Moreover, to detect errors in the rule considered here and in other rules, we utilized state-of-the-art deep learning techniques, including Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), to analyze audio recordings of Quran recitations.

Furthermore, one of the remarkable aspects of the Quran is its linguistic structure and the way its Verses are composed. The study of the phonetics of the Quran, also known as the science of Tajweed, involves the analysis of the sounds, rhythm, and melody of the Quranic text [3]. The Quranic text is written in Arabic, a language that is highly complex and rich in its phonetic and linguistic features. The proper recitation of the Quranic text is considered to be of great importance in Islam, and a skilled reciter of the Quran is highly respected and admired within the Muslim community. The phonetics of the Quran are, therefore, an essential part of Islamic scholarship, and the study of Tajweed is highly valued by Muslim scholars and laypeople alike [3]. The phonetics of the Quran are also of interest to linguists and scholars of Arabic language and literature. The rhythmic and melodic patterns of the Quranic text have been the subject of much study and analysis, and the beauty and complexity of these patterns continue to fascinate scholars today. The phonetics of the Quran also offer insights into the development of the Arabic language and its evolution over time [4].

On the other hand, there are several audio feature extraction techniques, such as MFCCs, Spectral Contrast, Chroma STFT, and Spectrograms, for speech and music analysis. These features have been shown to be effective in capturing the spectral content of

https://doi.org/10.33387/protk.v1i2.7026

audio signals and can provide valuable information for tasks such as speech recognition, music genre classification, and emotion recognition. The features and their effectiveness in improving the performance of the system are described as follows [5]:
- MFCC (Mel-Frequency Cepstral Coefficients): MFCCs are widely used in speech and music analysis. They are computed by first applying a filter bank to the power spectrum of an audio signal to obtain the mel-scaled power spectrum. This is followed by taking the logarithm of the mel-scaled power spectrum and then performing the Discrete Cosine Transform (DCT) on the resulting coefficients. The first few coefficients are typically the most informative, capturing the overall spectral shape of the signal, while higher coefficients capture more detailed spectral information [6].
- Spectral Contrast: a feature that captures the differences in energy between adjacent frequency bands in a spectrogram. It is calculated by dividing the spectrum into sub-bands and computing the difference in energy between the highest and lowest frequencies in each sub-band. Spectral Contrast can be useful for speech and music classification, as it can capture the distinctive spectral features of different types of sounds [7].
- Chroma STFT: Chroma features are a way of summarizing the pitch content of an audio signal. They are computed by first calculating the short-time Fourier transform (STFT) of the signal and then projecting the magnitude spectrum onto a set of pitch classes. Each pitch class corresponds to a particular musical note, and the value of the chroma feature for each pitch class is the sum of the magnitudes of the corresponding spectral components. Chroma features are often used in music information retrieval tasks, such as genre classification or chord recognition [8].
- Spectrogram: a visual representation of the frequency content of an audio signal over time. It is computed by dividing the signal into overlapping frames, computing the magnitude of the Fourier transform for each frame, and then plotting the resulting magnitudes as a function of frequency and time. Spectrograms can be useful for visualizing the spectral content of an audio signal and can also be used as input to machine learning models for tasks such as speech or music classification [9].

B. Literature Review
The literature includes several studies that aimed to develop an intelligent model for recognizing the rules of Holy Quran recitation and tracking reading errors using automatic speech recognition techniques. One of the early studies was that of Muhammad et al. [10], who developed an intelligent system called "E-Hafiz" for tracking Tajweed rules. The system recognized the recitation of 10 different reciters, including men, women, and children, and could identify mistakes at both the Verse and word levels. However, it has a limitation in that the phonetic rules it was based on are not provided. Another study, called "Makhraj," was introduced in [11] to make the recitation of the Holy Quran less dependent on expert reciters. The authors used MFCC for feature extraction and tested the system in two modes: one-to-one and one-to-many. The system achieved a 98% accuracy in the one-to-one mode, which is not considered very accurate due to the utilization of a simple matching technique. Moreover, the authors in [12] introduced an intelligent tutoring system for teaching Tajweed. It was evaluated by recitation teachers and students, and the results were promising. However, the system was limited to teaching Tafkhim and Tarqiq in Tajweed for Holy Quran recitation with the Rewaya of Hafs from 'Aasem. Another intelligent recognition model was proposed in [13] to recognize Qira'ah from the corresponding Holy Quran acoustic wave. The model was built upon three phases: 1) feature extraction, 2) training the SVM learning model, and 3) recognizing Qira'ah based on the trained model. The SVM-based recognition model achieved a success rate of 96%.

More studies are available in the literature in this area. For instance, the authors in [14] used traditional audio processing techniques for feature extraction and classification on an in-house dataset of thousands of audio recordings covering all occurrences of the rules under consideration in the entire Holy Quran. The work showed how to enhance the classification accuracy to surpass 97.7% by incorporating deep learning techniques. The researchers in [15] proposed a machine learning approach for recognizing the reciter of the Holy Quran. The system achieved an excellent accuracy of 97.62% for chapter 18 and 96.7% for chapter 36 using the ANN classifier. In [16], the authors addressed the problem of identifying the correct usage of Ahkam Al-Tajweed in the entire Quran, focusing on eight Ahkam Al-Tajweed faced by early learners of recitation. The results showed that the highest accuracy achieved was 94.4%, which was obtained when bagging was applied to SVM with all features except the LPC features. Finally, the work proposed in [17] suggested a deep learning model using MFCCs to distinguish between trustworthy and fraudulent reciters of the Qur'an. It compared the deep learning approach to machine learning methods and determined the optimal segment length and number of features. The proposed model achieved high accuracy and outperformed other models, while a future direction includes creating a dataset encompassing the entire Qur'an for further research on recitation rules using deep learning techniques.
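The MFCC computation described under the feature list above (mel filter bank applied to the power spectrum, logarithm, then DCT) can be sketched with plain NumPy. This is a minimal illustration, not the paper's implementation: the frame length, hop size, and filter counts below are assumed defaults.

```python
import numpy as np

def hz_to_mel(f):
    # Convert frequency in Hz to the mel scale.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_bank(n_filters, n_fft, sr):
    # Triangular filters with centers evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # 1) Frame the signal and apply a Hann window to each frame.
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # 2) Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3) Mel-scaled power spectrum via the filter bank, then logarithm.
    fb = mel_filter_bank(n_filters, n_fft, sr)
    log_mel = np.log(power @ fb.T + 1e-10)
    # 4) DCT-II over the filter axis; keep only the first few coefficients,
    #    which capture the overall spectral shape.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1) / (2.0 * n_filters)))
    return log_mel @ basis.T  # shape: (n_frames, n_coeffs)

sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440.0 * t)  # synthetic stand-in for a recitation clip
feats = mfcc(tone, sr)
print(feats.shape)
```

One MFCC vector is produced per frame; in practice the per-frame vectors are either fed to a sequence model or pooled into a fixed-length vector per recording.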


C. Problem Statement and Contributions


As mentioned in the previous section, there are many issues in teaching Ahkam Al-Tajweed, including the requirement of an expert for private tutoring, called "Talqeen," who is not always available, which is a critical issue when learning Ahkam Al-Tajweed. According to the literature, there is still a lack of stable and reliable models for Ahkam Al-Tajweed. Hence, this study tries to fill this gap and tackles the challenge of learning Ahkam Al-Tajweed by developing a model that takes into account one of the rules experienced by early learners of the Holy Quran. The proposed approach involves three AI models, CNN, RNN, and Random Forest, and focuses, specifically, on the "Hukm Al-Noon Al-Mushaddah," which pertains to the proper pronunciation of the letter "Noon" when it is accompanied by a Shaddah symbol in Arabic. By incorporating this rule into the proposed model, learners will benefit from the model because it improves their Tajweed skills and facilitates the learning process for those who do not have access to private tutors or experts.
The proposed model considers many aspects of learners' benefit, such as the economic aspect, in case a learner cannot find a tutor in the local area and it is difficult for the learner to travel; another beneficial aspect is the case of distance learning. It is important to mention that the development of the proposed model represents the beginning of an integrated learning system that relies on machine learning techniques and can improve and develop itself to distinguish correct recitations. The other contribution of this work is building a novel dataset that has not previously been available in the literature. The dataset is considered comprehensive and appropriate for research purposes.
The organization of this document is summarized as follows: Section 2 describes the dataset used and the details of the proposed model. Section 3 demonstrates the experimental results obtained using the proposed and benchmarking models, and Section 4 concludes this work.

II. RESEARCH METHOD
This section describes the dataset used in this work and the method followed in performing this research. Figure 1 demonstrates the flow of the proposed method step by step.

[Figure 1: Flowchart of the proposed approach: Start → Dataset Collection → Algorithm Selection → (CNN | RNN | RF) → Validation Accuracy Evaluation → Best Model → End.]

A. Dataset Creation
A dataset of over 6000 audio files representing readings of different Verses by volunteers was collected. The dataset was designed to capture the variability in speech patterns and acoustic characteristics across different speakers and Verses and was collected with the aim of building a system for recognizing correct recitation. Each audio file was annotated with metadata such as the speaker ID, Verse ID, and recording conditions, in order to facilitate analysis and model training. This section describes the process of building the dataset, including data collection, annotation, and preprocessing. Summary statistics and an analysis of the dataset are also provided, highlighting its main characteristics and the potential challenges for model training.
The collected dataset contains recitations of Quranic Verses recited according to the rules related to the Hukm Al-Noon Al-Mushaddah, and it includes six different Verses. This type of rule includes specific instructions for reciting Al-Noon Al-Mushaddah and presents a challenge for learners to master correct recitation. The Ghunnah must be shown in Hukm Al-Noon Al-Mushaddah when it is pronounced with emphasis through two movements; this ruling is called an "accentuated Ghunnah" letter. The dataset can be used to develop techniques for Quranic recitation based on the rules and to improve the accuracy of the final recitation. The training dataset includes the following Verses:


- {إِنَّكَ أَنتَ عَلَّٰمُ الْغُيُوبِ} [Surah Al-Ma'idah: 109]
- {وَالنَّازِعَاتِ غَرْقًا} [Surah An-Nāziʿāt: 1]
- {وَالنَّاشِطَاتِ نَشْطًا} [Surah An-Nāziʿāt: 2]
- {مِن شَرِّ الْوَسْوَاسِ الْخَنَّاسِ} [Surah An-Nās: 4]
- {الَّذِي يُوَسْوِسُ فِي صُدُورِ النَّاسِ} [Surah An-Nās: 5]
- {مِنَ الْجِنَّةِ وَالنَّاسِ} [Surah An-Nās: 6]

And the testing dataset includes the following Verses:
- {وَلَا الظُّلُمَاتُ وَلَا النُّورُ} [Surah Fāṭir: 20]
- {النَّجْمُ الثَّاقِبُ} [Surah Aṭ-Ṭāriq: 3]
- {عَنِ النَّبَإِ الْعَظِيمِ} [Surah An-Nabaʾ: 2]
- {وَجَنَّاتٍ وَعُيُونٍ} [Surah Ash-Shuʿarāʾ: 134]
- {النَّارِ ذَاتِ الْوَقُودِ} [Surah Al-Burūj: 5]
- {وَالنَّاشِرَاتِ نَشْرًا} [Surah Al-Mursalāt: 3]

The dataset comprises more than 6000 audio recordings of Quranic recitations, performed by more than 750 individuals of varying ages and genders. The dataset is divided into two parts, one for training and the other for testing. The training dataset consists of audio in the form of (.wav) files for six Verses, and the test dataset consists of audio files for another six Verses, all of which contain Al-Noon Al-Mushaddah, along with a label indicating whether each recording was recited correctly or not. These recordings were captured using online platforms such as WhatsApp and Telegram channels. The QDAT2 dataset is an updated version of the previously published QDAT dataset available on Kaggle. Each audio sample consists of the recitation of one of the six Verses containing Al-Noon Al-Mushaddah, along with other related features such as age, gender, and the accuracy of the recitation of the Ghunnah rule. All audio files are provided in (.wav) format, and their corresponding links are included in a (.csv) file. The dataset is valuable for analyzing the proper pronunciation and intonation of Al-Noon Al-Mushaddah in Quranic recitation. The readers who participated in the recordings are from the southern regions of Iraq and Syria. The recitation rule of Al-Noon Al-Mushaddah in Quranic recitation is commonly known as (Al-Ghunnah). This rule pertains to the correct pronunciation of the letter (Noon) in Arabic when it appears with a Shaddah and is considered a crucial aspect of Quranic recitation. Regarding the phonological distribution, Table 1 presents the provisions of the Verses in our dataset.

Table 1: The provisions of the Verses. Each Verse is additionally annotated with a letter-by-letter phonetic segmentation marking the position of the Ghunnah.
Provision  | Verse
Al_Gunna   | وَالنَّازِعَاتِ غَرْقًا
Al_Gunna   | وَالنَّاشِطَاتِ نَشْطًا
Al_Gunna   | مِن شَرِّ الْوَسْوَاسِ الْخَنَّاسِ
Al_Gunna   | الَّذِي يُوَسْوِسُ فِي صُدُورِ النَّاسِ
Al_Gunna   | مِنَ الْجِنَّةِ وَالنَّاسِ
Al_Gunna   | وَلَا الظُّلُمَاتُ وَلَا النُّورُ
Al_Gunna   | النَّجْمُ الثَّاقِبُ
Al_Gunna   | عَنِ النَّبَإِ الْعَظِيمِ
Al_Gunna   | وَجَنَّاتٍ وَعُيُونٍ
Al_Gunna   | النَّارِ ذَاتِ الْوَقُودِ
Al_Gunna   | وَالنَّاشِرَاتِ نَشْرًا

B. Proposed Model
Several models were experimented with, using different features and input representations. Specifically, CNN [18], RNN [19], and Random Forest with K-Fold training [20] models were trained. For the CNN model, the audio files were first converted into spectrogram image files using the Mel spectrogram representation; a neural network architecture with multiple convolutional and pooling layers, followed by dense layers, was used, and the model was trained using a batch size of 64 and a learning rate of 0.0001. For the RNN model, a neural network architecture with multiple layers was used, and the model was trained using a batch size of 32 and a learning rate of 0.001. For the Random Forest model, K-Fold cross-validation with a k value of 10 was applied; different kernel functions were also experimented with, including radial basis function (RBF), polynomial, and linear.

III. RESULTS AND DISCUSSIONS
Table 2 summarizes the accuracy results for each model. As can be seen, the CNN model achieved the highest accuracy, followed by the RNN model and the Random Forest model. Overall, our results suggest that CNN models with Mel spectrogram features are effective for recognizing the correct recitation of Quranic Verses with the "Al Gunna" rule.

Table 2: Accuracy Results for Different Models.
Model Type     | Accuracy | Validation Accuracy
CNN            | 0.9117   | 0.8613
RNN            | 0.8749   | 0.8121
Random Forest  | -------- | 0.8085

In current practice, predictive models are evaluated on a given test set that was not utilized in the model training or validation process. In this context, we have employed six distinct test Verses, each characterized by the Al Gunna recitation rule. However, due to the phonetic variability of the Noon sound across different Verses, it is observed that the test accuracy varies between them. Specifically, Verses 3 and 4 from the test set exhibit a similar phonetic distribution and are therefore considered to
phonetic distribution and are therefore considered to


be the most closely related. Table 3 shows the per-Verse test accuracy of the three models (CNN, RNN, and Random Forest) for each Verse in the test dataset.
Furthermore, the accuracy of the RNN model on both the training and test data is depicted in Figure 2 (A and B). Figure 3 shows the accuracy and the evaluation accuracy of the CNN model. On the other hand, Figure 4 demonstrates the accuracy of the SVM model using K-Fold Cross-Validation. In addition, Table 4 shows the validation details of the Random Forest model.

Figure 2: Random Forest model on both the training and testing data, where (A) is the accuracy and (B) is the loss.

According to the aforementioned results, the per-Verse accuracies provide evidence that the Random Forest model outperformed the other models for Verse 3 and Verse 4, achieving scores of 0.7063 and 0.6956, respectively. However, for Verse 1, Verse 2, and Verse 6, the CNN model outperformed both the RNN and Random Forest models, with accuracy scores of 0.5200, 0.5547, and 0.6167, respectively. The RNN model, in turn, achieved an accuracy of 0.6782 for Verse 4.
The results suggest that the performance of the models varies across different Verses, indicating that the phonetic distribution of the Noon sound plays a crucial role in determining the accuracy of the models. As discussed earlier, the phonetic variability of the Noon sound within each Verse needs close attention when evaluating the performance of predictive models in Arabic recitation. The findings also highlight the importance of carefully selecting test Verses to ensure that they are representative of the broader phonetic variation in the recitation rules being modeled. Finally, the results provide insights into the challenges of accurately modeling Arabic recitation.

Table 3: Per-Verse test accuracy for each of the three models.
Model          | Verse 1 | Verse 2 | Verse 3 | Verse 4 | Verse 5 | Verse 6
CNN            | 0.5200  | 0.5547  | 0.6349  | 0.5217  | 0.6935  | 0.6167
RNN            | 0.5900  | 0.6250  | 0.7063  | 0.6956  | 0.6854  | 0.6000
Random Forest  | 0.6016  | 0.6328  | 0.6507  | 0.6782  | 0.6129  | 0.6105
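The per-Verse breakdown in Table 3 amounts to grouping test predictions by Verse ID and computing accuracy within each group. A minimal sketch (the IDs and labels below are toy values, not the paper's data):

```python
import numpy as np

def per_verse_accuracy(verse_ids, y_true, y_pred):
    # Accuracy of correct/incorrect-recitation labels, grouped by Verse ID.
    verse_ids = np.asarray(verse_ids)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return {int(v): float(np.mean(y_true[verse_ids == v] == y_pred[verse_ids == v]))
            for v in np.unique(verse_ids)}

# Toy example: two Verses, binary labels (1 = recited correctly).
ids   = [1, 1, 1, 1, 2, 2, 2, 2]
truth = [1, 0, 1, 1, 0, 0, 1, 1]
pred  = [1, 0, 0, 1, 0, 1, 1, 1]
print(per_verse_accuracy(ids, truth, pred))  # {1: 0.75, 2: 0.75}
```

Reporting accuracy per Verse rather than only in aggregate is what exposes the phonetic-variability effect discussed above.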

Table 4: Random Forest validation details.
Test Accuracy | Validation Accuracy | Validation Loss
0.6119        | 0.8304              | 5.8595
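The Random Forest evaluation with 10-fold cross-validation described in Section II.B can be sketched with scikit-learn. The synthetic feature matrix below stands in for the extracted audio features, and the forest size (`n_estimators=100`) is an assumption, since the paper does not report it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

# Stand-in for per-recording feature vectors (e.g. averaged MFCCs)
# with a binary correct/incorrect recitation label.
X = rng.normal(size=(200, 13))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # learnable toy labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)  # k = 10, as in the paper
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")

print(round(float(scores.mean()), 3))  # mean accuracy over the 10 folds
```

Averaging the fold scores gives a validation accuracy comparable to the Random Forest figure in Table 2; the held-out test Verses are then scored separately, as in Table 4.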

Figure 3: (A) CNN model accuracy, and (B) evaluation accuracy.


Figure 4: Accuracy of the SVM model on both the training and test data.

According to the obtained results in Figures 2, 3, and 4, it can be observed that CNN is better for achieving the purpose of this work, while the RNN, which is designed for sequential data processing, underperformed CNN. Also, since Random Forest is a general-purpose algorithm for classification and regression tasks, it also underperformed CNN. This is because CNN uses convolutional layers to efficiently extract features from the data, while RNN uses recurrent layers to maintain a memory of the previous inputs and outputs. Random Forest, on the other hand, uses decision trees to make predictions. However, it is observed that Random Forest is a simpler algorithm that can be used for smaller datasets or when computational resources are limited [21-22].

IV. CONCLUSIONS
This work addressed the issue of learning Ahkam Al-Tajweed by developing a model that considers one of the rules experienced by early learners of the Holy Quran. The proposed model focuses, specifically, on the "Hukm Al-Noon Al-Mushaddah," which pertains to the proper pronunciation of the letter "Noon" when it is accompanied by a Shaddah symbol in Arabic. By incorporating this rule into the proposed model, learners will benefit from the model because it improves their Tajweed skills and facilitates the learning process for those who do not have access to private tutors or experts. The proposed approach involved three AI models, namely, CNN, RNN, and Random Forest. The other contribution of this work is collecting a novel dataset for this kind of study, and the collected data was used by the three models. The findings show that the CNN model outperformed the other models in terms of validation accuracy. Also, the test accuracy varies from Verse to Verse across the models, with some test Verses showing more promising results than others. Finally, this study is ongoing and will continue until it covers all the available aspects of Ahkam Al-Tajweed.
Furthermore, this work is considered an approach to cultural and religious heritage preservation, which contributes to having sustainable communities, in line with the United Nations goals of promoting sustainable communities. Future works can build upon these findings to develop more effective predictive models that account for the phonetic variability of Arabic recitation, ultimately facilitating the preservation and transmission of this rich cultural heritage.

ACKNOWLEDGMENT
We would like to thank the Computer Science Department, College of Computer Science and Mathematics, University of Mosul, and the ICT Research Unit, Computer Center, University of Mosul, Iraq, for their support. We are also grateful to the participants who volunteered to help us in collecting the dataset.

DATA AVAILABILITY
The data used in the study is available from the corresponding author upon request.

REFERENCES
[1] Samara G, Al-Daoud E, Swerki N, Alzu'bi D. The Recognition of Holy Qur'an Reciters Using the MFCCs' Technique and Deep Learning. Advances in Multimedia. 2023 Mar 21;2023. https://doi.org/10.1155/2023/2642558
[2] Al-Ayyoub M, Damer NA, Hmeidi I. Using deep learning for automatically determining correct application of basic quranic recitation rules. Int. Arab J. Inf. Technol. 2018 Apr;15(3A):620-5.
[3] Nasallah MK. The Importance of Tajweed in the Recitation of the Glorious Qur'an: Emphasizing Its Uniqueness as a Channel of Communication Between Creator and Creations. IOSR Journal of Humanities and Social Science (IOSR-JHSS). 2016;21(2):55-61.
[4] Ahmad M. Literary Miracle of the Quran. Ar-Raniry: International Journal of Islamic Studies. 2020 Jul 28;3(1):205-20.
[5] Nogueira AF, Oliveira HS, Machado JJ, Tavares JM. Sound Classification and Processing of Urban Environments: A Systematic Literature Review. Sensors. 2022 Nov 8;22(22):8608. doi: 10.3390/s22228608
[6] Ayvaz U, et al. Automatic speaker recognition using mel-frequency cepstral coefficients through machine learning. CMC-Computers Materials & Continua. 2022;71(3).
[7] Su Y, et al. Performance analysis of multiple aggregated acoustic features for environment sound classification. Applied Acoustics. 2020;158:107050.
[8] Kumar N, et al. CNN based approach for Speech Emotion Recognition Using MFCC, Croma and STFT Hand-crafted features. 2021 3rd International Conference on Advances in Computing, Communication Control and Networking (ICAC3N). IEEE, 2021.
[9] Gong Y, et al. SSAST: Self-Supervised Audio Spectrogram Transformer. Proceedings of the AAAI Conference on Artificial Intelligence. 2022;36(10).

[10] Muhammad A, ul Qayyum Z, Tanveer S, Martinez-Enriquez A, Syed AZ. E-hafiz: Intelligent system to help muslims in recitation and memorization of Quran. Life Science Journal. 2012 Oct;9(1):534-41.
[11] Akkila AN, Abu-Naser SS. Teaching the right letter pronunciation in reciting the holy Quran using intelligent tutoring system.
[12] Nahar KM, Al-Khatib RM, Al-Shannaq MA, Barhoush MM. An efficient holy Quran recitation recognizer based on SVM learning model. Jordanian Journal of Computers and Information Technology (JJCIT). 2020 Dec 1;6(04):394-414.
[13] Nahar KM, Al-Khatib RM, Al-Shannaq MA, Barhoush MM. An efficient holy Quran recitation recognizer based on SVM learning model. Jordanian Journal of Computers and Information Technology (JJCIT). 2020 Dec 1;6(04):394-414. doi: 10.5455/jjcit.71-1593380662
[14] Al-Ayyoub M, Damer NA, Hmeidi I. Using deep learning for automatically determining correct application of basic quranic recitation rules. Int. Arab J. Inf. Technol. 2018 Apr;15(3A):620-5.
[15] Alkhateeb JH. A machine learning approach for recognizing the Holy Quran reciter. International Journal of Advanced Computer Science and Applications. 2020;11(7).
[16] Damer NA, Al-Ayyoub M, Hmeidi I. Automatically determining correct application of basic quranic recitation rules. In Proceedings of the International Arab Conference on Information Technology. 2017 Dec 22.
[17] Samara G, Al-Daoud E, Swerki N, Alzu'bi D. The Recognition of Holy Qur'an Reciters Using the MFCCs' Technique and Deep Learning. Advances in Multimedia. 2023 Mar 21;2023.
[18] Demir F, Abdullah DA, Sengur A. A new deep CNN model for environmental sound classification. IEEE Access. 2020;8:66529-66537.
[19] Ou Z, Qu K, Liu C. Estimation of sound speed profiles using a random forest model with satellite surface observations. Shock and Vibration. 2022;2022.
[20] Park JS, et al. A machine learning approach to the development and prospective evaluation of a pediatric lung sound classification model. Scientific Reports. 2023;13(1):1289.
[21] Al-Ameri MAA, et al. Unsupervised Forgery Detection of Documents: A Network-Inspired Approach. Electronics. 2023;12(7):1682.
[22] Sultan A, Mahmood B. A Novel Network Model for Artificial Intelligence Algorithms. 2022 8th International Conference on Contemporary Information Technology and Mathematics (ICCITM). IEEE, 2022.
