
Development of a Machine Learning System for Traffic Sign Detection Using CNN

Budi Nurohman 1, Hendra Marcos 2

1,2 Department of Informatics, Faculty of Computer Science, Universitas Amikom Purwokerto

Abstract. The goal of this study is to create a traffic sign detection system based on convolutional neural networks (CNNs), which are used to process images. The system seeks to automatically recognize and categorize traffic objects in images, including traffic lights, signs, and vehicles. The procedure employed in this study involves gathering traffic image datasets, pre-processing the data to improve its quality, designing the CNN architecture, and training and validating the model. To increase tolerance for differences in lighting conditions and shooting angles, the CNN model was trained on a variety of traffic image datasets. Experimental results show that the trained CNN model can recognize and classify traffic objects with high accuracy. The model performs well on traffic identification tasks when evaluated against criteria such as accuracy, precision, and recall. This system has the potential to increase traffic safety and can be utilized in automated traffic monitoring applications.
Purpose: In the context of contemporary transportation, the objective of this research is to create a machine learning system based on convolutional neural networks (CNNs) that can be used for traffic sign detection. This study is crucial for a number of reasons:
Traffic Safety: Traffic safety is a major concern around the world. Efficient and accurate identification of traffic signs is a crucial step in enhancing road safety; an effective system enables both human drivers and autonomous vehicles to react to existing signs correctly.
Autonomous Vehicles: Traffic sign detection is one of the components required for the development of autonomous cars. Since autonomous vehicles must be able to read traffic signs and react appropriately, the results of this study contribute to safer and more intelligent driverless vehicles.
Transportation Efficiency: Beyond safety, traffic sign detection can boost transportation efficiency, because traffic sign data can be employed to manage traffic better and prevent congestion.
Relevance to Artificial Intelligence: This research is also highly pertinent to the advancement of AI. The machine learning techniques and models employed here can be applied in a range of object detection settings, including security surveillance and other computer vision fields.
Convolutional neural networks, which have proven particularly effective in visual recognition tasks, provide a potent method for traffic sign detection that can be applied in a number of circumstances. The goal of this research is to create a sophisticated system for recognizing traffic signs that improves the safety, efficiency, and growth of intelligent transportation; our key findings offer practical guidance and contributions to the advancement of this technology.
Methods: Convolutional neural networks (CNNs) are used in this study to process image matrices and enhance image detail so that traffic signs can be detected more accurately, producing images that highlight the objects within them. Accuracy is the primary outcome measure. The German Traffic Signs dataset, obtained from a BitBucket repository, is used in this study.

Keywords: machine learning, convolutional neural networks, traffic sign detection, autonomous vehicles, artificial
intelligence.
Received May 2020 / Revised November 2020 / Accepted March 2021

This work is licensed under a Creative Commons Attribution 4.0 International License.

INTRODUCTION
The usage of traffic signs in large cities is inseparable from their role in enhancing road user safety. By offering helpful sign information, traffic signs are intended to assist drivers in safely arriving at their destination. However, when road users fail to correctly interpret the information on traffic signs, accidents may occur, creating a new driving safety problem. To reduce this risk, technology can be developed to identify traffic signs automatically, serving as a backup solution to increase driving safety.

Beyond that, the development of contemporary transportation technology has given rise to significant concerns
about autonomous vehicles, commonly referred to as driverless automobiles. Artificial intelligence is becoming
more and more important in handling dynamic traffic situations as a result of technological revolution and the
concept of autonomous vehicles [1]. In addition to changing how people travel, autonomous cars also need to be
highly accurate in order to avoid accidents. They also need to be able to read traffic signs and react quickly. This
emphasizes how urgent it is to create a traffic sign detection system that can be trusted to work in a range of
conditions and significantly improve sign identification as well as autonomous vehicles' interactions with their
surroundings.

Autonomous technology has the potential to transform how we travel, increase the effectiveness of public
transportation, and—most importantly—increase traffic safety [2]. One of the crucial aspects of autonomous vehicle
driving is the monitoring of traffic signs. Traffic signs provide information that drivers and autonomous vehicle
systems rely on to respond appropriately to shifting traffic conditions.

The issue of traffic sign detection has been addressed in a number of earlier studies, utilizing a variety of strategies. Thanks to advances in computer vision, convolutional neural networks (CNNs) have emerged as the preferred method for object detection, including traffic signs. Despite the outstanding outcomes of several studies, issues remain to be resolved [3]. M. Akbar's study succeeded in recognizing traffic signs in Indonesia with an accuracy of 97.33%; the best training parameters were a learning rate of 0.005 and 48 filters, which produced an error of 0.107. This CNN method outperformed approaches such as SVM, KNN, template matching, and shape detection, and achieved higher accuracy than CNN methods in previous studies [4].

The literature describing earlier research, however, has some serious gaps. Although studies have shown that traffic signs can be recognized, handling varied lighting conditions and diverse road backgrounds remains difficult. This research is conducted to close this knowledge gap and increase our understanding of traffic sign identification in the context of autonomous cars [5].

In the existing literature, not many researchers have worked on traffic sign detection in different lighting conditions
and with complicated road backgrounds. Previous research on this topic has been limited to one area. The purpose of
this study is to fill in this gap by developing a method that can deal with these problems and produce a reliable
system for traffic sign detection [6].
As part of this study, we design and test a machine learning system using convolutional neural networks (CNNs) that can read traffic signs across a range of road conditions and lighting types. Our goal is to provide longer-lasting and more effective solutions that help autonomous vehicles handle difficult traffic situations better [7].

The primary research challenge within this paradigm is to design a system that can reliably and accurately identify
traffic signs in a variety of lighting conditions and complicated road backgrounds. In the context of autonomous
vehicles, this research will also take into account a number of other factors, such as processing efficiency, response
time, and practical applications [8].

METHODS
The research stages are shown in Figure 1. Each step in Figure 1 is explained in detail in the following sections.
Figure 1. Flowchart of the proposed method

Convolutional neural networks (CNNs) are applied experimentally in this research to recognize traffic signs against various road backgrounds and lighting scenarios. The first stage is data collection: gathering photos of traffic signs that cover various sign types and lighting conditions, in both urban and rural areas. The next step is data pre-processing, which prepares the photos for model training and removes noise; it includes image cleaning, data augmentation, and normalization.
The CNN model is then trained on the pre-processed dataset, using architectures such as ResNet or VGG16, which excel at object recognition [9]. For testing and validation, the dataset is divided into subsets for training and testing, and the model's sensitivity, specificity, and accuracy in identifying traffic signs are evaluated across scenarios [10]. When necessary, model optimization methods such as transfer learning or hyperparameter tuning are applied to improve the model. The model is compared with other existing approaches, where available, to assess its effectiveness and response time, and the experimental findings are analyzed to determine the advantages and disadvantages of the model created [11].
These steps were taken to create a reliable traffic sign detection system that can be applied to autonomous cars and help improve traffic safety and transportation efficiency.

Data Collections
The dataset used in this research is the "german-traffic-signs" dataset from a BitBucket repository. The dataset consists of 42 classes, with a total of 216 images across all classes. An example of the dataset can be seen in Figure 2.
Figure 2. Dataset Sample

Variable Values
The variable value in this study aims to divide and display the value into 3 sections, the title section "Training
Dataset Distribution", variable x "Class number", and variable y "Number of images".can be seen in figure 3.

Figure 3. Training Dataset Distribution

Data Processing
Data processing here is a process where the image is resized from its original dimensions of 200 x 200 pixels to 32 x
32 pixels and 128 x 128 pixels. The purpose of image resizing is to make the image have the same pixel size, this is
because the training process must input an image with a predetermined size [22]. In addition to making the same
image size, this stage is also intended to lighten the dataset so that it does not require a very large computation
required. An illustration of the image resize process can be seen in Figure 4.
Figure 4. Data Processing
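The resize step above can be sketched as a nearest-neighbour down-sampling. This is only an illustration of the idea; the paper does not name its resizing library (OpenCV's `cv2.resize` or a Keras utility would be typical), and the function name here is hypothetical:

```python
import numpy as np

def resize_nearest(img, size):
    # Map each target pixel back to the closest source pixel
    # (nearest-neighbour interpolation), producing a size x size image
    # so every training input has identical dimensions.
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]
```

For example, a 200 x 200 source image becomes a 32 x 32 (or 128 x 128) input, which also lightens the computation as noted above.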

Image To Grey
In a convolutional neural network (CNN), grayscale conversion serves several purposes that can enhance the effectiveness and performance of the model. Each pixel in a color image typically has three color channels (red, green, and blue), so convolution on color images requires more computing power than on grayscale images with a single channel. Converting photos to grayscale decreases the dimensionality of the data and expedites model training. The conversion can be seen in Figure 5.

Figure 5. Image to Grey
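The conversion can be sketched as a weighted channel sum. The weights below are the common ITU-R BT.601 luminosity coefficients, an assumption on our part, since the paper does not state which conversion it uses:

```python
import numpy as np

def to_grayscale(rgb):
    # Collapse the three color channels into one luminance channel,
    # weighting each by its perceived brightness. This cuts the input
    # dimensionality (and convolution cost) roughly threefold.
    return rgb @ np.array([0.299, 0.587, 0.114])
```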

Image Histogram Equalization


The goal of histogram equalization, an image processing method, is to enhance the contrast and intensity distribution of a picture's pixels. In the context of a machine learning system for traffic sign detection using a CNN, histogram equalization can boost contrast in a picture by distributing pixel intensity uniformly over the whole value range. This may improve the clarity and recognizability of picture elements such as traffic signs. The effect can be seen in Figure 6.
Figure 6. Histogram Equalization
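The equalization described above can be sketched in a few lines. OpenCV's `cv2.equalizeHist` performs the same operation; this NumPy version is only to show the mechanics:

```python
import numpy as np

def equalize_histogram(gray):
    # Build the cumulative distribution (CDF) of pixel intensities and
    # use it as a lookup table, stretching intensities over 0-255 so a
    # low-contrast sign occupies the full value range.
    gray = gray.astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.ma.masked_equal(hist.cumsum(), 0)   # ignore unused bins
    lut = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return np.ma.filled(lut, 0).astype(np.uint8)[gray]
```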

Batch Data
Training machine learning models, particularly convolutional neural networks (CNNs), on the complete dataset at once can require a lot of processing power. By updating the model with a small quantity of data at each iteration, batch processing saves computing time and memory. Because the model is updated every time a batch of data is processed, batching also enables more frequent model updates over the course of an epoch (one cycle through the complete dataset). This helps the model adapt its parameters to the training set more rapidly and effectively, as can be seen in Figure 7.

Figure 7. Batch Data
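The batching scheme above can be sketched as a Keras-style generator. This is a minimal sketch; the batch size of 32 is an illustrative assumption, not a figure from the paper:

```python
import numpy as np

def batch_generator(images, labels, batch_size=32):
    # Yield the dataset in small shuffled chunks so each gradient update
    # touches only one batch, saving memory and allowing many parameter
    # updates per epoch instead of one.
    n = len(images)
    while True:  # generators passed to model.fit() loop indefinitely
        order = np.random.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            yield images[idx], labels[idx]
```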

Training Model CNN


Developing a machine learning system for traffic sign detection requires a crucial step called CNN model training.
Teaching a CNN model to identify patterns and characteristics connected to traffic signs is the primary objective of
model training. CNN is specifically made to recognize traffic signs with a high degree of accuracy by extracting and
comprehending complicated aspects from picture input. The CNN model's parameters are specified and modified
throughout training in order to match the traits and patterns of the training dataset. Through this process, the model's
ability to categorize photos of traffic signs becomes more precise and efficient. An illustration can be seen in Figure
8.
Figure 8. Training Model CNN
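The model described above might look as follows in Keras. Only the existence of the `conv2d` and `conv2d_1` layers comes from the paper; the filter counts, dense layer size, and optimizer are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes=42, input_shape=(32, 32, 1)):
    # Two convolution blocks (conv2d and conv2d_1) extract visual
    # features; the dense head classifies them into sign classes.
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, (5, 5), activation="relu"),   # conv2d
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),   # conv2d_1
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                            # curbs overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training then calls `model.fit(...)` on the batched, pre-processed dataset.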

Training Matrix Graph


Using training and validation data, graphs of metrics like accuracy and loss can be used to spot overfitting or underfitting. Underfitting happens when a model is too simple to capture patterns in the data, whereas overfitting happens when the model over-learns the training set and is unable to generalize to new data. Metric graphs also help with tuning model hyperparameters: for example, developers can find the ideal settings to speed up model convergence by examining the graph of loss against learning rate or number of epochs. In this study, graphs for loss, accuracy, and val_accuracy were generated, and the object representing the neural network model, built from the layers conv2d and conv2d_1, was returned. An illustration can be seen in Figure 9.

Figure 9. Training Matrix graph

CNN Evaluation Model


Evaluation of the CNN (Convolutional Neural Network) model is a crucial step in the creation of the machine
learning system for traffic sign detection. The goal and advantages of evaluating the CNN model include
determining how well the model can carry out tasks including the identification of traffic signs. The evaluation
process aids in determining the model's accuracy, or how well it predicts traffic signs. A crucial metric for assessing
the model's classification performance on the test dataset's images is accuracy. A batch data generator with 10 epochs and 2000 steps per epoch is used to carry out the evaluation, as can be seen in Figure 10.
Figure 10. CNN Evaluation Model

Calculate Evaluation Matrix


The model's correctness is one of the primary goals of producing evaluation metrics. Metrics like accuracy provide
insight into the model's accuracy in predicting traffic signs. Ensuring the model's ability to yield dependable
outcomes in detecting tasks is crucial. The types of errors the model makes are analyzed with the aid of evaluation
metrics. For instance, it is feasible to determine where and why the model tends to make mistakes by examining
False Positive, which is a positive classification that should be negative, and False Negative, which is a negative
classification that should be positive, as can be seen in Figure 11.

Figure 11. Calculate Evaluation Matrix
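The counts described above translate into metrics as follows. This is a self-contained sketch; scikit-learn's `classification_report` computes the same quantities:

```python
def evaluation_metrics(y_true, y_pred, positive_class):
    # Tally the four confusion-matrix cells for one class, then derive
    # accuracy, precision, and recall from them.
    tp = fp = tn = fn = 0
    for truth, pred in zip(y_true, y_pred):
        if pred == positive_class:
            if truth == positive_class:
                tp += 1   # correctly detected sign
            else:
                fp += 1   # positive classification that should be negative
        else:
            if truth == positive_class:
                fn += 1   # negative classification that should be positive
            else:
                tn += 1
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```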

Testing Data
Evaluating the model's performance on data that was not encountered during training is the primary goal of testing
data. It provides a sense of how well the model can generalize and make accurate predictions in scenarios that may
not exactly match the training set. Developers can determine whether a model is underfitting (has not learned enough from the training data) or overfitting (fits the training data too closely) by utilizing testing data. The model's accuracy in making predictions on previously unseen data can be determined by evaluating its performance on test data. In this study, internet-sourced photos of traffic signs were used as testing data, as can be seen

Figure 12. Testing Data
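The Methods section mentions dividing the dataset into training and testing subsets; a minimal hold-out split can be sketched as below. The 80/20 ratio is an illustrative assumption, as the paper does not state its split:

```python
import numpy as np

def train_test_split(images, labels, test_fraction=0.2, seed=0):
    # Shuffle once, then hold out a fraction the model never sees during
    # training, so the test score measures generalization rather than
    # recall of the training set.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))
    cut = int(len(images) * (1 - test_fraction))
    train, test = order[:cut], order[cut:]
    return images[train], labels[train], images[test], labels[test]
```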


Prediction Sign
In this case, the primary goal of prediction is to identify the presence of traffic signs in pictures or videos. To support the detection system, the trained CNN model is intended to identify objects in the image as traffic signs. When the prediction is accurate, the technology can help increase traffic efficiency and safety: accurate recognition of traffic signs aids transportation systems and drivers in responding properly to rules and changes in traffic circumstances. This can be seen in Figure 13.

Figure 13. Prediction Sign
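Prediction on a single pre-processed image can be sketched as below. The 32 x 32 input size follows the pre-processing described earlier; the function name and `class_names` argument are hypothetical:

```python
import numpy as np

def predict_sign(model, image, class_names):
    # `image` is a pre-processed 32 x 32 grayscale array. Add the batch
    # and channel axes the model expects, then take the class with the
    # highest softmax probability.
    batch = image.reshape(1, 32, 32, 1).astype("float32") / 255.0
    probs = model.predict(batch)[0]
    best = int(np.argmax(probs))
    return class_names[best], float(probs[best])
```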

Accuracy
The accuracy test provides a sense of the model's ability to classify data correctly. This evaluation metric is frequently employed to assess model performance in traffic sign detection: accuracy directly measures the proportion of correct predictions out of all predictions made. This involves classifying traffic signs as either positive or negative, and a high accuracy means the model produces the desired outcome. The accuracy results for traffic signs in this study can be seen in Figure 14.

Figure 14. Accuracy


RESULTS AND DISCUSSION
CONCLUSION

REFERENCES

[1] M. Hidayah, A. N. I. Irfansyah, and D. Purwanto, "Deteksi Objek Pada Mobil Otonom dengan Kamera Termal Infra Merah," J. Tek. ITS, vol. 11, no. 3, pp. A204–A209, Dec. 2022, doi: 10.12962/j23373539.v11i3.94793.
[2] K. N. Ramadhani, M. S. Mubarok, and A. D. Palit, "Deteksi dan Rekognisi Rambu-Rambu Lalu Lintas dengan Menggunakan Metode Support Vector Machine," J. Ilm. Teknol. Informasi Terap., vol. 3, no. 2, Apr. 2017, doi: 10.33197/jitter.vol3.iss2.2017.131.
[3] A. D. A. Wibisono, A. W. Widodo, and M. A. Rahman, "Deteksi Rambu Lalu Lintas menggunakan Algoritma Moore Neighbour Contour Following dan Simplifikasi Poligon dalam HSV Color Space."
[4] M. Akbar, "Traffic sign recognition using convolutional neural networks," J. Teknol. Dan Sist. Komput., vol. 9, no. 2, pp. 120–125, Apr. 2021, doi: 10.14710/jtsiskom.2021.13959.
[5] A. Sumarudin, A. Suheryadi, A. Puspaningrum, E. Prasetyo, and Y. N. Azis, "Implementation Assistance Driver System for Public Transportation Based on Embedded System," J. Ilm. SAINS, vol. 20, no. 2, p. 64, Jun. 2020, doi: 10.35799/jis.20.2.2020.28261.
[6] D. Y. Sutrisno and J. D. Setiawan, "Model Deteksi Rambu untuk Sistem Navigasi Prototype AGV," vol. 9, no. 2, 2021.
[7] S. Rahman and H. Dafitri, "Pengembangan Convolutional Neural Network untuk Klasifikasi Ketersediaan Ruang Parkir," Explorer (Hayward), vol. 2, no. 1, pp. 1–6, Jan. 2022, doi: 10.47065/explorer.v2i1.148.
[8] R. T. Nursetyawan and F. Utaminingrum, "Pengembangan Sistem Rekognisi Rambu Kecepatan Menggunakan Circle Hough Transform dan Convolutional Neural Network."
[9] O. R. Sitanggang, H. Fitriyah, and F. Utaminingrum, "Sistem Deteksi dan Pengenalan Jenis Rambu Lalu Lintas Menggunakan Metode Shape Detection Pada Raspberry Pi."
[10] N. C. Kuncoro, S. A. Wibowo, and K. Usman, "Analisis Kinerja Prototipe Traffic Sign Recognition untuk Sistem Autonomous Car Menggunakan You Only Look Once."
[11] A. S. G. Raharjo and E. Sugiharti, "Alphabet Classification of Sign System Using Convolutional Neural Network with Contrast Limited Adaptive Histogram Equalization and Canny Edge Detection," Sci. J. Inform., vol. 10, no. 3, pp. 239–250, Jun. 2023, doi: 10.15294/sji.v10i3.44137.