
International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 14 (2018), pp. 11582-11588
© Research India Publications. http://www.ripublication.com

Driver Fatigue Detection with Time-Feedback Neural Network


Robinson Jiménez
Auxiliary Professor, Department of Mechatronics Engineering,
Militar Nueva Granada University, Bogotá, Colombia.

Oscar F. Avilés S
Titular Professor, Department of Mechatronics Engineering,
Militar Nueva Granada University, Bogotá, Colombia.

Mauricio Mauledoux
Assistant Professor, Department of Mechatronics Engineering,
Militar Nueva Granada University, Bogotá, Colombia.

Abstract

This article presents a monitored alert system for detecting driver fatigue using artificial intelligence techniques: image processing for the extraction of visual features, and neural networks for classification. The visual detection of fatigue focuses on eye-closure states, nodding and yawning; these parameters are fed to a neural network with time-feedback of states, which generates alert or critical outputs when classifying the fatigue state. Almost 91% detection accuracy is obtained by evaluating 10 videos of users in front of the steering wheel who exhibit fatigue characteristics.

Keywords: Machine intelligence, fatigue detection, neural network, pattern recognition, computer vision.

INTRODUCTION

According to the World Health Organization, traffic accidents are declared a public health problem, with rates that allow estimating more than one million deaths around the world [1]. Human lives are not the only loss: material damage to vehicles, road infrastructure and buildings adds to the list of tragedies derived from this problem, evidencing its great magnitude.

The analyses derived from these accidents become investigations that seek to mitigate the problem, emphasizing effects such as behavior when driving in states of anger, as presented in [2], and offering a critical point of view on the causes that originate a vehicular accident. Faced with this problem, vehicles equipped with technological systems make it possible to address solutions that support the driver's work from different fronts. For example, motivated by the decrease in reaction capacity that occurs with age, [3] presents a vehicle with advanced technologies oriented to support elderly people in the 60-85 age range, generating safety schemes for driving assistance.

Other research related to the aforementioned problems establishes the relationships that most affect drivers and co-drivers in vehicular accidents, as discussed in [4]. In summary, there is a strong interest in providing research solutions that make the driving task less costly in human lives.

Many pattern recognition tools have been developed in engineering, so distinguishing patterns of risk in driving can be addressed with such tools. The present investigation therefore addresses the problem of vehicular accidents by distinguishing patterns of fatigue in the driver, generating alerts that allow the driver to become aware of the implicit risk of continuing to drive in the detected state.

The remainder of the article describes the algorithms used for identifying the fatigue state. Section 2 reviews the state of the art; section 3 presents the proposed work and the image processing algorithms that produce the inputs of the neural network; section 4 presents the training and architecture parameters of the neural network, together with the results obtained; and section 5 presents the conclusions.

RELATED RESEARCH REVIEW

In the last decades, computer vision algorithms have enabled multiple applications in the field of process automation. Among these fields of application are developments in robotic systems [5], which allow them to see and take actions to move in an environment [6], [7]. The applications of such algorithms also extend to fields involving interaction with people, for example for medical [8] or security purposes [9].

Within this latter field, many image processing algorithms focused on detecting fatigue states in drivers have been addressed, given their evident incidence on the protection of human life. Developments such as those presented in [9] and [10] offer a solution to this problem, but from a point of view that is invasive for the driver, since sensors capturing electroencephalographic information must be placed on the user, an impractical aspect in conventional driving systems. Because of this, it is necessary to look for alternatives for safe driving based on driver fatigue [11].


This is where image processing systems become relevant: in [12] a computer vision system is presented for discriminating the fatigue state in drivers, where the system includes not only image processing algorithms but also pattern recognition algorithms, as in the case presented in [13]. However, these systems need to take into account the temporal perception of the fatigue state. For example, in analysis based on recognizing the state of the eye [14], a person who initiates a sleep episode starts to close the eyes; blinking, however, is a natural and necessary action of ocular lubrication, which should not generate confusion or false alarms [15]. It is for this reason that the present article proposes a neural architecture with time-feedback as a support to the image processing techniques, based on extracting features at a particular moment of capture of the scene of the driver behind the steering wheel, without losing the temporal relation of the state the driver was exhibiting.

Within image recognition schemes, those that encode information corresponding to the characteristics of a particular class allow the identification process to be optimized. Such is the case of Haar classifiers, which encode the existence of oriented contrasts between regions of an image. For example, a set of these characteristics can be used to encode the contrasts exhibited by a human face and their spatial relationships, in order to extract the features of fatigue. P. Viola and M. Jones [16] implemented a fast object detection method based on a cascade algorithm using Haar descriptors, which can be computed through an intermediate representation called the integral image. This method yields a face classifier that is highly efficient and converges quickly once trained; it is the basis of this development and is already implemented in multiple software tools, so it will not be delved into here.

Driver sleep recognition systems increasingly rely on intelligent algorithms equipped in vehicles [17][18], so that they can be embedded in the on-board computer [19], support real-time execution [20], and be robust enough that it does not matter whether the driver wears glasses [21], a factor that at some point becomes a limitation for eye recognition by machine vision algorithms. Although developments in the detection of this state of drowsiness are extensive [22][23][24], the detection modalities present variations in which some algorithms complement others, and basic techniques such as the aforementioned Viola-Jones algorithm remain the basis of detection [25].

Semi-assisted or intelligent vehicles must have the ability to discriminate the conditions that lead to decisions ensuring the integrity of drivers and passengers. In this field, the detection of driver fatigue plays a fundamental part [26][27], both integrated in the vehicle and in other mobile support systems [28].

PROPOSED WORK

FEATURE EXTRACTION

Once the face region has been identified using the Viola-Jones algorithm, it is feasible to segment the regions of the face based on studies of facial anthropometry reported in the literature [29], [30], which present the standard measures of the face shown in Figure 1, setting two regions of interest (ROI) corresponding to the eyes and the mouth. From these regions a state of fatigue can be determined through basic characteristics such as eye closure, nodding and yawning.

Figure 1. Face Geometry

On the ROI of the eyes, shown in Figure 2 in the conventional color scale with R, G and B channels, a grayscale conversion is applied by Eq. (1), followed by a histogram-based thresholding whose cut varies between the left eye and the right eye, due to the incidence of non-uniform light on the face, as seen in Figure 3.

I = 0.299·R + 0.587·G + 0.114·B    (1)

Figure 2. ROI of the eye

Figure 3. Thresholding of the ROI of the eye.

The initial classification of the opening/closing state of the eye can be performed by setting the significant points of the eye contour, taken as: left end (Pl), right end (Pr), upper central point (Ps) and lower central point (Pi), as illustrated in Figure 4. The distance |Ps − Pi| allows discriminating the opening/closing ratio of each eye according to Eq. (2), as an average of the opening value of both eyes relative to the calculated maximum aperture.

Figure 4. Points of interest of the eye
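As a minimal sketch of this preprocessing, the grayscale mapping of Eq. (1) and the Cr mapping of Eq. (3), used later for the mouth ROI, can be written directly over NumPy arrays. The paper only states that the eye binarization is histogram-based and varies per eye; Otsu's classic histogram method is assumed here for illustration, and all function names are ours, not the authors':

```python
import numpy as np

def to_gray(roi_rgb):
    """Luminance conversion of Eq. (1): I = 0.299*R + 0.587*G + 0.114*B
    (channels in the 0-255 range)."""
    r, g, b = roi_rgb[..., 0], roi_rgb[..., 1], roi_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def cr_component(roi_rgb_01):
    """Cr mapping of Eq. (3) in its BT.601 form, with R, G, B normalized
    to [0, 1]; applied to the mouth ROI to emphasize lip redness."""
    r, g, b = roi_rgb_01[..., 0], roi_rgb_01[..., 1], roi_rgb_01[..., 2]
    return 128.0 + 112.0 * r - 93.786 * g - 18.214 * b

def otsu_threshold(gray):
    """Histogram-based threshold (Otsu's method, assumed here), computed
    separately for each eye to cope with non-uniform lighting."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = float((hist * np.arange(256)).sum())
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic 4x4 eye ROI: a dark 2x2 "pupil" block and a bright top row.
roi = np.zeros((4, 4, 3))
roi[0, :] = [200.0, 200.0, 200.0]
roi[1:3, 1:3] = [10.0, 10.0, 10.0]
gray = to_gray(roi)
mask = (gray <= otsu_threshold(gray)).astype(np.uint8)  # dark pixels -> 1
```

Note that for a neutral gray input the Cr mapping returns 128 (112 − 93.786 − 18.214 = 0), which is why reddish lip pixels stand out against skin on that channel.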

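The opening/closing decision on the contour points can be sketched as follows (a minimal illustration: the per-eye vertical apertures are taken as plain floats, the 0.4 factor is the ratio used in Eq. (2), and the helper name is ours, not the paper's):

```python
def eye_closed(ap_left, ap_right, ap_max, ratio=0.4):
    """Eq. (2): report the eyes closed (1) when the mean vertical aperture
    |Ps - Pi| of both eyes falls below `ratio` times the maximum aperture
    recorded for the user, and open (0) otherwise."""
    return 1 if (ap_left + ap_right) / 2.0 < ratio * ap_max else 0

# With a maximum aperture of 20 px: a mean aperture of 5 px reads as
# closed, while 12 px reads as open.
closed = eye_closed(5, 5, 20)    # -> 1
opened = eye_closed(12, 12, 20)  # -> 0
```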

Ec = { 1,  ( |Ps − Pi|L + |Ps − Pi|R ) / 2  <  0.4 · Amax
     { 0,  otherwise                                          (2)

where |Ps − Pi| is the vertical aperture between the upper and lower central points of each eye contour, the subscripts L and R denote the left and right eye, and Amax is the maximum aperture calculated for the user.

For yawn detection, a color space transformation from RGB to YCbCr is applied to the ROI of the mouth by Eq. (3), in order to emphasize the pronounced reddish characteristics of the mouth within the face through the Cr component. On this component an adaptive threshold is applied, due to the changes in intensity level produced by the different skin pigmentations of each person [31], as can be seen in Fig. 4.

Cr = 128 + 112·R − 93.786·G − 18.214·B    (3)

with R, G and B normalized to the range [0, 1].

To determine the yawn, the mouth opening is set as the distance between the upper and lower lip contours. The different openings obtained in the test images were averaged in order to obtain a fixed threshold for yawn discrimination; in this case the threshold resulted in 14 pixels.

Fig. 4. Red Distance Detection.

The detection of nodding associated with a fatigue state relates to head-settling movements, in which the relative position of the head changes abruptly downwards. For this reason it is not feasible to validate the change frame to frame, since the variation of the position would not be significant and could be confused with another movement.

Nodding detection is determined from a pixel marking the initial relative position of the head, formed by the coordinates x and y given by the mean point between the ROIs of the two eyes. Starting from a frame at t = 0, a frame count of one second's duration is performed, so that the new position of the head is Pt. If the difference Pt − P0 exceeds a given threshold th, which depends on the resolution of the camera and the distance to the driver, the presence of a nodding is considered, as indicated in Eq. (4).

N = { 1,  Δ(Pt − P0) > th, evaluated at t = 1 s
    { 0,  otherwise                                (4)

CLASSIFICATION STAGE

In order to avoid alarm conditions caused by the detection of closed eyes during normal blinking, whose typical duration is 100 to 130 ms, an eye closure of more than 200 ms is considered a state of alert. For the system to give an early but assertive response, it must take the previous states into account as much as possible. To determine the number of states to be considered: operating at a processing speed of 50 ms per frame for feature extraction, it is possible to validate up to 3 states without reaching the 200 ms threshold, which corresponds to the current frame plus the responses of the two previous frames.

Hence it is necessary to generate a training base in which these characteristics are included as part of the inputs of the neural network with feedback. As a first step, the current frame is characterized according to Table 1, which presents the possible cases to be detected. For example, yawning, which derives from fatigue and arises as a response to stress [59], is typically accompanied by eye closure; a speech state mistakenly classified as yawning may coincide with a blink, generating a critical false positive. Therefore only cases 3.1 and 3.2 are treated as truly critical (see Table 1), while case 2.3 generates an alert. The feedback states provide the accumulation of one or two alarm states that can lead to an alert or a critical state; a decision depending on a single frame would either generate false positives or fail to generate the most adequate alarm condition. Similarly, the accumulation of a warning state such as condition 2.2 will maintain an alert condition for up to two frames, but will generate a critical condition with a greater accumulation of this condition, evidencing the importance of the temporal perception.

Table 1. Classification conditions at a time t

CASE   Closed Eye   Yawning   Nodding   Fatigue (output)
1.1    No           No        No        No
1.2    No           No        Yes       No
1.3    No           Yes       No        No
2.1    No           Yes       Yes       Alert
2.2    Yes          No        No        Alert
2.3    Yes          Yes       No        Alert
3.1    Yes          No        Yes       Critical
3.2    Yes          Yes       Yes       Critical

In Table 2 some of these characteristic states are presented as a function of the feedback, according to their relevance; the three feature columns describe the current state (t).

Table 2. Classification conditions with feedback at times t, t − 1 and t − 2

Example   State t−2    State t−1    Closed Eye   Yawning   Nodding   Output
1         No fatigue   Alert        Yes          No        No        Alert
2         Alert        Alert        Yes          No        No        Critical
3         Critical     Alert        No           No        No        Alert
4         Alert        Alert        No           No        No        No
5         Critical     No fatigue   Yes          No        Yes       Critical
6         Alert        Alert        No           Yes       Yes       Critical
7         Critical     No fatigue   Yes          No        Yes       Critical
8         Critical     Critical     No           No        No        Alert
9         Alert        Critical     No           No        No        Alert

In example 1, the current frame (classified by Table 1) sets a state of alert, which is reinforced by the previous alert at state t − 1; the state at t − 2 does not influence the output response. In example 2, the same alert appears in the current frame, but the accumulation generates a critical output; this case can be produced by a pronounced eye closure tending to microsleep, or by a yawn with continuously closed eyes, and represents a refinement of the training vector with respect to a fatigue state that would not be possible without feedback. Example 3 represents a state in which, after a critical fatigue episode such as a pronounced nod, the loss of the face location, and therefore of the eye and nodding measurements, is feasible; the network then retains a state of alarm, avoiding a false response due to lack of data. Example 4 sets a fatigue recovery effect, or more simply a somewhat pronounced eye closure with no incidence on fatigue. Example 5 presents a non-coherent situation in which a critical state is detected, followed by a state of non-fatigue, and again a critical one in the current frame; this would indicate a loss of measurement or a classification error, so the network is trained to sustain the critical state. Example 6 again generates a critical alarm based on the accumulation of previous alert states. Example 7 replicates the condition of Example 5, under the same type of alarm in the current frame but with respect to a different condition. Examples 8 and 9 show the transition of previous alarm states with a current non-fatigue state; because the previous state (t − 1) reports a critical alarm, the result of the current frame is smoothed, generating the intermediate alert state.

The structure of the neural network with state feedback is a three-layer multilayer perceptron: input, hidden and output layers. The input layer is composed of six neurons, given the number of characteristics used to set the fatigue state: eye opening, mouth opening, magnitude of head movement and angle of movement (calculated by the Pythagorean theorem), plus the feedback of the two previous states. These six parameters determine the training (60%), validation (20%) and prediction (20%) input vectors. The output layer determines each of the defined states, i.e. normal, alert and critical; to simplify the structure of the network, the output is coded according to Table 3, so that 2 neurons are required in the output layer.

Table 3. Final classification codification

Codification   State
0 0            Does not register
0 1            Normal
1 0            Alert
1 1            Critical

For the hidden layer, the number of neurons is set according to the performance obtained by iterating over the training and validation database. Performance reaches a critical point at 60 neurons and does not present an appreciably better result until 140 neurons. Since the second option more than doubles the number of neurons for only a 7% improvement in the error, it is not justified to unnecessarily increase the complexity of the network, which would also increase the final prediction time, so 60 neurons were chosen for the hidden layer. The neural architecture employed has the form illustrated in Figure 5.

Figure 5. Neural architecture used.

The number of iterations or epochs required to reach an adequate training, according to the presented error rate, was achieved after 200 iterations, at which point the validation error no longer changes significantly.

SIMULATION AND ANALYSIS

In order to validate the performance of the chosen structure of the neural classifier, confusion matrices were computed to evaluate the performance of the system on a group of 10 short videos (approximately 3 minutes each), for 10 different users in front of the steering wheel. Tables 4 to 7 present the most representative cases of Table 1 based on the time-feedback, where TP corresponds to the true positives, i.e. the cases in which a state is present and is aptly detected, FP corresponds to the false positives, i.e. the cases in which a state is not present but is detected erroneously, and ACC is the accuracy reached.
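As an illustration of the classification logic being evaluated, the per-frame coding of Table 1 and the temporal fusion of Table 2 can be written as explicit rules. These rules are not the trained network itself: the system learns this mapping through the time-feedback MLP, and the escalation/smoothing rule below is inferred from Examples 1-9, so it is a sketch of the intended behavior only:

```python
def frame_state(closed_eye, yawning, nodding):
    """Single-frame coding of Table 1 from the three binary features."""
    if closed_eye:
        return "Critical" if nodding else "Alert"  # cases 3.1-3.2 / 2.2-2.3
    if yawning and nodding:
        return "Alert"                             # case 2.1
    return "No"                                    # cases 1.1-1.3

def fused_state(t2, t1, current):
    """Temporal fusion over the states at t-2, t-1 and t, following the
    nine examples of Table 2 ('No' stands for no fatigue)."""
    if current == "Critical":
        return "Critical"
    if current == "Alert":
        # Examples 2 and 6: alarms at both t-1 and t-2 escalate the output.
        return "Critical" if t1 != "No" and t2 != "No" else "Alert"
    # current == "No" (Examples 3, 4, 8, 9): a recent Critical is smoothed
    # down to Alert instead of being dropped, which covers a momentary loss
    # of the face location after a pronounced nod.
    return "Alert" if "Critical" in (t2, t1) else "No"

# Example 2 of Table 2: alerts at t-2 and t-1 plus a closed eye now.
out = fused_state("Alert", "Alert", frame_state(True, False, False))  # -> "Critical"
```

Checking the rules against all nine rows of Table 2 reproduces the listed outputs, which is what makes the accumulation behavior learnable as a supervised target.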

Table 4. State of fatigue reported for case 2.1

Video   1     2     3     4      5     6     7     8     9     10    TOTAL
TP      2     3     1     2      1     2     3     4     2     3     23
FP      0     1     0     0      0     0     1     1     0     0     3
ACC     100   75    100   66.6   100   100   75    80    100   100   88.4

Table 5. State of fatigue reported for case 2.2

Video   1      2      3      4     5      6      7      8      9      10     TOTAL
TP      20     26     21     24    25     21     19     21     26     22     225
FP      2      1      2      1     1      3      2      2      3      2      19
ACC     90.9   96.3   91.3   96    96.1   87.5   90.4   91.3   89.6   91.6   92.21

Table 6. State of fatigue reported for case 3.1

Video   1      2     3     4     5      6      7      8       9     10     TOTAL
TP      6      6     5     5     6      6      5      5       7     6      57
FP      1      0     0     0     1      1      1      2       0     1      7
ACC     85.7   100   100   100   85.7   85.7   83.3   71.42   100   85.7   89

Table 7. State of fatigue reported for case 3.2

Video   1     2     3     4     5     6     7     8     9     10    TOTAL
TP      3     2     4     2     4     2     5     4     2     1     29
FP      0     0     0     0     0     1     0     1     1     0     3
ACC     100   100   100   100   100   80    100   83    80    100   90.6

It is observed that the general accuracy is 90%, highlighting the fact that the critical cases are found in their entirety, and that the system response to these critical cases improves and is more consistent with the state presented by the driver. Although false alarms are detected, this does not work against the performance of the algorithm with regard to alerting the driver.

Figure 6 shows the graphical result of the feedback network in the algorithm for video 1. In the first image there is a nodding sequence that starts with sporadic eye closures until the nod is reached, where it can be noted that excessive alarms are not generated. In the second, there are prolonged eye closures that lead to a critical state. The third image shows a prolonged non-fatigue state with an initial blink detection lasting at least two consecutive frames. The final image shows several consecutive nodding episodes, where red represents critical and purple alert.

Figure 6. Graphical response of the network.

Table 8 illustrates the overall performance of the network for all alert and critical cases, where it is observed that the final precision reached with the test videos is 90%.

Table 8. Final network accuracy

Neural network with time-feedback
Video      1      2      3     4      5      6      7      8       9       10      TOTAL
Total TP   39     48     42    41     43     37     45     48      43      38      424
Total FP   4      3      2     3      3      6      4      8       5       4       42
ACC        90.7   94.1   95    93.3   93.1   87.7   90.2   84.91   90.57   91.49   90.99

CONCLUSIONS

It was possible to implement an algorithm for identifying fatigue states in a driver with high efficiency, taking as reference the time dimension that the analysis of this state requires. This evidences the benefits of the contribution against the developments found in the state of the art, through the use of recurrent neural networks, whose performance increases compared to recognition techniques based on conventional neural networks, in favor of reducing false alarms while still detecting the states of interest.

Given the various tasks of the algorithm (face detection, feature extraction and classification), it works at 20 frames per second, below the rate of a conventional video (30 fps), without affecting the final result with regard to the visually detectable fatigue times that may be vital to prevent an accident. This processing time was obtained on non-dedicated computing equipment, so it allows projecting the replication of these algorithms to embedded systems or on-board computers in semi-assisted vehicles with real-time operation.

The performance of the algorithm is subject to daylight conditions with little variation, which limits its application. The tests performed are demarcated to daytime slots in the range of 7 a.m. to 5 p.m. with normal daylight conditions. The algorithm requires a complement in order to generalize to any driving


condition, a problem that could be addressed with detection algorithms that are more robust to image conditions, such as convolutional neural networks, based on image acquisition with infrared light.

ACKNOWLEDGMENT

The research for this paper was supported by the Davinci research group of Nueva Granada Military University.

REFERENCES

[1] Consejo Colombiano de Seguridad (CCS), "Diariamente se presentan en Colombia 90 accidentes viales". Consulted in September, 2017. [Online]. Available: http://ccs.org.co/salaprensa/index.php?option=com_content&view=article&id=516.

[2] L. Precht, A. Keinath and J. F. Krems, "Effects of driving anger on driver behavior – Results from naturalistic driving data," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 45, 2017, pp. 75-92, ISSN 1369-8478.

[3] J. Gish, B. Vrkljan, A. Grenier and B. Van Miltenburg, "Driving with advanced vehicle technology: A qualitative investigation of older drivers' perceptions and motivations for use," Accident Analysis & Prevention, vol. 106, 2017, pp. 498-504, ISSN 0001-4575. doi: 10.1016/j.aap.2016.06.027.

[4] Y. Peng, X. Wang, S. Peng, H. Huang, G. Tian and H. Jia, "Investigation on the injuries of drivers and copilots in rear-end crashes between trucks based on real world accident data in China," Future Generation Computer Systems, 2017, ISSN 0167-739X. doi: 10.1016/j.future.2017.07.065.

[5] N. Franceschini, "Small Brains, Smart Machines: From Fly Vision to Robot Vision and Back Again," Proceedings of the IEEE, vol. 102, no. 5, pp. 751-781, May 2014.

[6] A. Mohammed, L. Wang and R. X. Gao, "Integrated Image Processing and Path Planning for Robotic Sketching," Procedia CIRP, vol. 12, 2013, pp. 199-204, ISSN 2212-8271.

[7] M. A. Montironi, P. Castellini, L. Stroppa and N. Paone, "Adaptive autonomous positioning of a robot vision system: Application to quality control on production lines," Robotics and Computer-Integrated Manufacturing, vol. 30, no. 5, October 2014, pp. 489-498, ISSN 0736-5845.

[8] H. Jiang, B. S. Duerstock and J. P. Wachs, "A Machine Vision-Based Gestural Interface for People With Upper Extremity Physical Impairments," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 44, no. 5, pp. 630-641, May 2014.

[9] R. Chai et al., "Driver Fatigue Classification With Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-Based System," IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 3, pp. 715-724, May 2017. doi: 10.1109/JBHI.2016.2532354.

[10] J. Yin, J. Hu and Z. Mu, "Developing and evaluating a mobile driver fatigue detection network based on electroencephalograph signals," Healthcare Technology Letters, vol. 4, no. 1, pp. 34-38, 2017.

[11] S. Naz, A. Ahmed, Q. ul ain Mubarak and I. Noshin, "Intelligent driver safety system using fatigue detection," 2017 19th International Conference on Advanced Communication Technology (ICACT), Bongpyeong, 2017, pp. 89-93. doi: 10.23919/ICACT.2017.7890063.

[12] G. F. Zhao and A. X. Han, "Method of Detecting Logistics Driver's Fatigue State Based on Computer Vision," 2015 International Conference on Computer Science and Applications (CSA), Wuhan, 2015, pp. 60-63. doi: 10.1109/CSA.2015.70.

[13] C. Zhang, F. Cong and H. Wang, "Driver fatigue analysis based on binary brain networks," 2017 Seventh International Conference on Information Science and Technology (ICIST), Da Nang, 2017, pp. 485-489. doi: 10.1109/ICIST.2017.7926809.

[14] F. Zhang, J. Su, L. Geng and Z. Xiao, "Driver Fatigue Detection Based on Eye State Recognition," 2017 International Conference on Machine Vision and Information Technology (CMVIT), Singapore, 2017, pp. 105-110. doi: 10.1109/CMVIT.2017.25.

[15] Y. Chellappa, N. N. Joshi and V. Bharadwaj, "Driver fatigue detection system," 2016 IEEE International Conference on Signal and Image Processing (ICSIP), Beijing, 2016, pp. 655-660. doi: 10.1109/SIPROCESS.2016.7888344.

[16] P. Viola and M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," IEEE Conference on Computer Vision and Pattern Recognition, 2001.

[17] K. Vasudevan, A. P. Das, Sandhya B and Subith P, "Driver drowsiness monitoring by learning vehicle telemetry data," 2017 10th International Conference on Human System Interactions (HSI), Ulsan, 2017, pp. 270-276. doi: 10.1109/HSI.2017.8005044.

[18] Gobhinath S., Aparna V and Azhagunacchiya R, "An automatic driver drowsiness alert system by using GSM," 2017 11th International Conference on Intelligent Systems and Control (ISCO), Coimbatore, 2017, pp. 125-128. doi: 10.1109/ISCO.2017.7855966.

[19] B. Reddy, Y. H. Kim, S. Yun, C. Seo and J. Jang, "Real-Time Driver Drowsiness Detection for Embedded System Using Model Compression of Deep Neural Networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, 2017, pp. 438-445.

[20] F. Rohit, V. Kulathumani, R. Kavi, I. Elwarfalli, V. Kecojevic and A. Nimbarte, "Real-time drowsiness


detection using wearable, lightweight brain sensing headbands," IET Intelligent Transport Systems, vol. 11, no. 5, pp. 255-263, 2017. doi: 10.1049/iet-its.2016.0183.

[21] W. C. Li, W. L. Ou, C. P. Fan, C. H. Huang and Y. S. Shie, "Near-infrared-ray and side-view video based drowsy driver detection system: Whether or not wearing glasses," 2016 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), Jeju, 2016, pp. 429-432. doi: 10.1109/APCCAS.2016.7803994.

[22] Z. A. Haq and Z. Hasan, "Eye-blink rate detection for fatigue determination," 2016 1st India International Conference on Information Processing (IICIP), Delhi, 2016, pp. 1-5. doi: 10.1109/IICIP.2016.7975348.

[23] O. Khunpisuth, T. Chotchinasri, V. Koschakosai and N. Hnoohom, "Driver Drowsiness Detection Using Eye-Closeness Detection," 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Naples, 2016, pp. 661-668. doi: 10.1109/SITIS.2016.110.

N. N. Sari and Y. P. Huang, "A two-stage intelligent model to extract features from PPG for drowsiness detection," 2016 International Conference on System Science and Engineering (ICSSE), Puli, 2016, pp. 1-2. doi: 10.1109/ICSSE.2016.7551597.

[24] M. K. Hasan, S. M. H. Ullah, S. S. Gupta and M. Ahmad, "Drowsiness detection for the perfection of brain computer interface using Viola-Jones algorithm," 2016 3rd International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, 2016, pp. 1-5. doi: 10.1109/CEEICT.2016.7873106.

[25] D. Tran, E. Tadesse, W. Sheng, Y. Sun, M. Liu and S. Zhang, "A driver assistance framework based on driver drowsiness detection," 2016 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), Chengdu, 2016, pp. 173-178. doi: 10.1109/CYBER.2016.7574817.

[26] Y. Chellappa, N. N. Joshi and V. Bharadwaj, "Driver fatigue detection system," 2016 IEEE International Conference on Signal and Image Processing (ICSIP), Beijing, 2016, pp. 655-660. doi: 10.1109/SIPROCESS.2016.7888344.

[27] J. He, W. Choi, Y. Yang, J. Lu, X. Wu and K. Peng, "Detection of driver drowsiness using wearable devices: A feasibility study of the proximity sensor," Applied Ergonomics, vol. 65, 2017, pp. 473-480, ISSN 0003-6870. doi: 10.1016/j.apergo.2017.02.016.

[28] A. M. Martinez and R. Benavente, "The AR face database," Purdue University frontal face database, http://www2.ece.ohiostate.edu/aleix/ARdatabase.html, 1998.

[29] G. R. Maximiliano Florez Mendez and Ivan Hernandez, "Estructuración y estandarización de la antropometría facial en función de proporciones," International Journal of Cosmetic Medicine and Surgery, vol. 6, no. 3.

[30] R. Jimenez Moreno and F. Prieto, "Segmentación de labio mediante técnicas de visión de máquina y análisis de histograma," Inge@Uan, ISSN 2145-0935, vol. 2, no. 4, pp. 7-12, 2012.

