RoboTalk
Prototyping a Humanoid Robot as a Speech-to-Sign Language Translator
Daniel Christian Homburg and Mirja Sophie Thieme,
Edith-Stein School, Darmstadt, Germany, E-Mail: [email protected]
Acknowledgements:
1) The authors thank the Förderverein für Marktorientierte Unternehmensführung, Invensity, Deutscher
Sparkassenverlag, VorsprungAtWork, and Taunussparkasse for the financial support of the research. Furthermore,
the authors thank Franz Bönsel from the Edith-Stein School for his great support of the research. Finally, the authors thank Felix Wendt from FiberThree for his great support with resources and advice during the printing process of the robot.
2) The idea and the major part of this research are owed to the first two authors (Daniel Homburg and Mirja Thieme). The first two authors conducted this research in the context of the program “Jugend forscht - Schüler experimentieren” at the Edith-Stein School, Darmstadt, Germany. The paper was written by the first author and refined and edited by the fourth author. The third author (Johannes Völker) supported the project with valuable advice about programming issues.
URI: https://hdl.handle.net/10125/59611
ISBN: 978-0-9981331-2-6
(CC BY-NC-ND 4.0)
overall, this communication barrier can have detrimental effects on many aspects of their lives” [7, p. 1]. In many cases, they rely on sign language interpreters in their daily lives.1

Figure 1: Samples for German sign language [57]

Such reliance is problematic for several reasons, including the lack of independence it implies and the limitations on people’s integration into society. For example, even as deaf students increasingly have enrolled in universities, more than 80% of the hearing-impaired population worldwide is considered undereducated, due to a lack of support [44], as well as educational difficulties stemming from an inability to follow lectures, low self-esteem, experienced isolation, and social barriers [10, 55]. Thus, “effective technological support is essential to enhance the learning environment of deaf and hearing-impaired learners” [7, p. 107]. Various technical applications have been developed [51, 54], but we propose going a step further to address the multiple needs of this population.

Specifically, we consider whether a humanoid robot can function as an avatar for sign language expression. Robots are “automatically controlled, re-programmable, [and] multipurpose” [41, p. 402]; modern humanoid robots possess human-like physical traits (e.g., head, arms) but still look mechanical. These robots already help humans in various settings, whether by providing assistance to elderly people [16, 60], offering entertainment [24], supporting educational efforts [11], or providing health care services [8, 43]. Medical support robots in particular already provide rehabilitation tools [27], assist cognitively impaired people [33, 53], and motivate people to exercise or lose weight [22]. With a similar logic, we posit that humanoid robots may be able to translate and express sign languages. For example, a robotic sign language translator (RSLT) in school classes that include both hearing-impaired and non–hearing-impaired students could translate teachers’ speech immediately to sign language, in support of the inclusion of all students.

To develop a humanoid robot that facilitates communication by and with deaf and hearing-impaired people, we undertook the project “RoboTalk.” It seeks to develop a humanoid robotic avatar that can function as a sign language translator. In developing this tool, we focus specifically on the needs of users and gather their insights to define the direction of our research, as well as which features the robot should possess. In particular, we established our first research question as:

1. How do potential users (i.e., deaf or hard-of-hearing people) perceive the use of a robotic sign language translator (RSLT)?

Accordingly, we started with a survey of deaf and hearing-impaired potential users, to learn more about their likely acceptance and needs for such a new technology. An initial insight revealed that these potential users considered speech-to-sign language translation significantly more important than vice versa. Thus, we sought to build a prototype that could translate spoken language into sign language. Research on robots that can express themselves in sign language is scarce, particularly because most existing robots lack the manual dexterity required to perform the complicated finger gestures of sign language. Therefore, we also ask:

2. How can the arms and hands of a robot be designed to allow the expression of complex operations (i.e., letters, words) in sign language?

For this project, we used 3D printing to create the arms of a humanoid robot. As a foundation, we used a robot model called “InMoov” [28], for which individual components are widely available. However, the robot’s existing thumb, index finger, and middle finger are not very flexible, so we sought to redevelop and print these three parts. To control the hands and arms at the same time, we attached them to a human-sized doll, which we called Robert, and connected them via cables. Next, to teach the robot sign language, we considered which signs can be expressed with two arms and hands. That is, the third question asked:

3. Which signs in sign language can be expressed by two arms and hands?

In answering these questions, this article begins with a literature review of robotic and sign language research, which leads us to propose a three-stage model. We present the findings from a survey of 50 German deaf or hearing-impaired people in Section 4. Then in Section 5, we describe different elements of the system architecture and the general platform for the sign language robot Robert, followed by the user interaction process and some experimental tests of users’ sign language recognition and robot acceptance (Section 6). We also outline some research implications and limitations.
1 The German statistics include only deaf people; the European statistics include both hard-of-hearing and deaf people.
2. Literature Review

Extant studies on sign language essentially focus on three areas: A first group of studies examines aspects of sign language learning, teaching, and development in the childhood stage [18, 34, 39]. These studies provide valuable insights into how sign language is created as a first language in different life stages; they also help us program sign language for the robot. A second group of studies focuses on sign language recognition, reading, and interpretation [40, 45, 46, 48, 56]. A third research group investigates sign language expression through various technologies, such as screens and virtual avatars [5, 6, 9, 13, 14, 30, 52]. For our research, the third group is particularly important. Table 1 contains a summary of relevant literature. Most extant research describes techniques for either recognizing or expressing sign language. Several studies deal with the detection of American, Indian, or Chinese body language, using stereo cameras, gloves, or animated screens. Other studies also address ways to recognize different national sign languages (e.g., English, Chinese, Indian, Greek) [7, 17, 35, 38].
Table 1: Summary of relevant literature

Authors | Sign Language Device | Sign Language Mode a) | Country | Method b) | Key Findings
Lee & Xu [29] | Cyberglove | R | USA | D | Online learning of new gestures; reliable recognition of 14 different gestures; application of hidden Markov models
Malima, Özgür, & Çetin [31] | Real hands | R | Turkey | D | Algorithm for automatic recognition of a limited set of gestures from hand images
Mouri, Kawasaki, & Umebayashi [36] | Anthropomorphic robot (KH hand type S) | E | Japan | D | Anthropomorphic robot hand; dexterous manipulation and displaying hand shape; five fingers of the robot directed by a bilateral controller
Nandy et al. [38] | Robot | R | India | E | Real-time Indian sign language recognition by humanoid robot; categorization of gestures with Euclidean distance method
Starner & Pentland [49] | Human hands with and without gloves | R | USA | E | System for American sign language; hands with colored gloves (99% accuracy), hands without gloves (92% accuracy)
Starner, Weaver, & Pentland [50] | Desk and wearable computer-based videos | R | USA | E | Computer vision-based method of recognizing sentence-level American sign language from a 40-word lexicon; use of hidden Markov models

a) The sign language modes are either R = recognition or E = expression. b) The methods include E = experiment, C = conceptual article, Ev = event, and D = hardware/software design.
One study used sensor gloves to recognize sign language and translate it into normal language [35]. Few works focus on speech-to-sign language translation, though. Mouri and colleagues [36] have developed a robot hand that expresses Japanese sign language, and researchers have developed a robot body to express Greek sign language [21]. Some isolated studies also try to program a humanoid robot, such as NAO, using sign language [4], but this robot only has three fingers, which limits expressivity. Many studies rely on displaying pictures of hands making the signs on screens [20].

Such contributions indicate the possibility of programming at least some sign language capabilities for robots. We know of no studies that explicitly aim to establish complicated abbreviations of sign language by using human-like hands (and arms) with five fingers. Thus, the current research is the first to develop a robotic avatar that can translate speech to sign language.

3. Three-Stage Research Process

We depict the three research stages in Figure 2. First, we sought to identify important features for an RSLT, which directed our development and the design of the robotic arm. We also tested users’ acceptance of Robert the RSLT, relative to a human translator. Second, for the construction and prototyping of arms, we developed a testable robotic arm. The major challenge in this stage was to create a robotic arm with fingers that were sufficiently flexible to express complex letters and words in sign language. We also started programming the sign language. Third, in an ongoing stage, we are conducting experiments to test participants’ recognition of robotic signs and their acceptance of the RSLT.
Figure 2: Three stages of the research project
4.1 Sample
4.2 Results
As Figure 3 indicates, deaf and hearing-impaired people feel excluded from various life areas, nearly all the time. This exclusion appears particularly prominent in private social settings and relationships, rather than in job-related areas. The survey respondents also indicate that their greatest need for a sign language translator arises during meetings with hearing people and for education (Panel b), because “most hearing people do not know sign language and know very little about Deafness in general. For example, most hearing people do not know how to communicate in spoken language with a Deaf or hard-of-hearing person who can speak and read lips (e.g. that they should turn their head or not to cover their mouth)” [58, p. 1].

In addition, we asked respondents to rate the importance of three possible capacities of a RSLT: (a) speech-to-sign language, (b) sign language-to-speech, or (c) both. Notably, 79% preferred both functions, but these respondents also considered speech-to-sign language translation tools significantly more important than the other way around (M = 4.09, SD = .53 vs. M = 2.98, SD = .45; 7-point scale). This finding is surprising; according to our literature review, extant research mostly has focused on sign language-to-speech capacities. Thus, we have determined that the RSLT we develop should be able to translate in both directions. However, considering its importance to potential users, we start by seeking to develop a speech-to-sign language feature.

We also uncover some divergent preferences regarding application areas for different sign language support modes. As we detail in Figure 4, about one-third of the respondents could not imagine being supported by a RSLT; in response to an open question, most cited their lack of experience as the reason for their reluctance to interact with a humanoid robot. Still, they acknowledge the potential of RSLTs at information desks (27.3%), and some respondents think that everybody should have one (17.4%). In particular, using RSLTs would align with the widely growing trend in which robots provide various services, including staffing information desks at airports, fairs, and hotels. In these areas, RSLTs could provide valuable translation services for deaf guests.

Figure 4: Areas of usage for a RSLT as compared to existing sign language translators

The respondents considered the robot particularly important for information desks and for larger groups. From these findings, we conclude that a sign language robot may also be particularly accepted by groups of students at school or university.

The importance of robots at information desks is consistent with the trend that firms increasingly place humanoid robots at service encounters with customers because they provide a “richer” interaction than screens or self-service terminals [47]. During these interactions, humanoid robots are argued to be superior to a sign language translation screen, because these robots have been shown to express and transfer emotions to humans [59]. We argue that humanoid robots can enrich an interpersonal interaction and sign language translation through their human-like expressions.

5. Construction and System Architecture for RSLT Robert (Stage 2)

We printed a model of a robotic hand for “InMoov” that is freely available from the Internet [28]. Figure 5, Panel a depicts the exact measurements of the hand and shows the additional degrees of freedom of the thumb and index finger.
Figure 5: Hand of the RSLT Robert (Panel a) and sample expressions of the German finger alphabet
The hand is driven by six motors. Each finger is run by a single motor, and the sixth motor directs the wrist. During the test of this prototype, we noticed that the thumb and index finger, which originally had just one degree of freedom, needed more leeway for many gestures. The original joint was too simple (Figure 5, Panel c), such that the pointer finger and thumb could only move one-dimensionally, making cross-movements impossible. For the first two fingers of the second hand, we had to develop new joints to allow the fingers to move in two directions.

Using two prototype robotic arms with more flexible fingers, we programmed the hand movements with Python. We started with the alphabet and numbers in sign language, then moved to words and sentences. Figure 6 depicts our RSLT architecture.
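To illustrate how pre-programmed finger positions can be organized, the following minimal Python sketch stores each letter of the finger alphabet as target angles for the six hand servos and holds one pose at a time. The pose values, the table name, and the send_angles callback are illustrative assumptions, not the project's actual code.

import time

# Hypothetical pose table: letter -> servo targets in degrees for
# [thumb, index, middle, ring, pinky, wrist]; the values are placeholders.
FINGER_ALPHABET = {
    "a": [40, 170, 170, 170, 170, 90],   # closed fist, thumb alongside
    "b": [150, 10, 10, 10, 10, 90],      # flat hand with extended fingers
    "r": [150, 60, 80, 170, 170, 90],    # index and middle finger crossed
}

def sign_letter(letter, send_angles, hold_s=0.8):
    """Drive the hand to the stored pose for one letter; return False if unknown."""
    pose = FINGER_ALPHABET.get(letter.lower())
    if pose is None:
        return False
    send_angles(pose)    # e.g., forwarded to the motor microcontroller over serial
    time.sleep(hold_s)   # hold the pose long enough to be readable
    return True

# Example (printing the angles instead of moving motors): sign_letter("r", print)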
At the beginning of the process, a normal hearing user submits a voice message using a microphone (default input mode), which Robert translates into sign language. That is, upon a voice submission, we use Google speech recognition. The user’s input gets transferred to a cloud database via WiFi. If WiFi is not available, the input can be provided by a user typing on a keyboard. Then the system compares the input with the data in the database. If a comparable word or term appears in the database, the robot expresses the pre-programmed sign language gesture. If no adequate sign output suggestion exists in the system, the robot spells each letter of the input. We connected a regular computer with a microcontroller. The microcontroller directs the motors.
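A rough sketch of this input step appears below. It uses the open-source Python SpeechRecognition package's Google recognizer with a keyboard fallback; the paper does not name a specific library, so this choice and the function get_user_input are our assumptions.

import speech_recognition as sr

def get_user_input(language="de-DE"):
    """Return recognized speech as text, or keyboard input as a fallback."""
    recognizer = sr.Recognizer()
    try:
        with sr.Microphone() as source:              # default input mode: microphone
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source)
        # Cloud-based recognition; requires a working network connection
        return recognizer.recognize_google(audio, language=language)
    except (OSError, sr.UnknownValueError, sr.RequestError):
        # No microphone, unintelligible audio, or no WiFi: fall back to typing
        return input("Please type the text to be signed: ")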
The robot is able to show every letter of the alphabet, numbers from one to 20, and words that can be expressed with two hands (and without further gestures by the head)2. If the robot is not able to translate a word, it will spell the word with single letters of the sign language alphabet. For example, it can translate the word “hello” and shows it with the sign that can be made with two hands. A word is not as easy to show as a letter or a number. Furthermore, the expression of single letters can vary in terms of their difficulty levels. For example, an “r” is not as easy to express as an “a”, because the “r” requires the index finger and the middle finger to be crossed.
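The word lookup with fingerspelling fallback described above could be structured roughly as in the following sketch; the dictionary entries and the callback names are placeholders for the pre-programmed gestures, not the project's actual database.

# Illustrative gesture store: words with pre-programmed two-hand signs.
WORD_GESTURES = {
    "hallo": "pose_sequence_hallo",   # placeholder identifiers for stored motions
    "danke": "pose_sequence_danke",
}

def express_text(text, play_gesture, spell_letter):
    """Sign each word if a stored gesture exists; otherwise fingerspell it."""
    for word in text.lower().split():
        gesture = WORD_GESTURES.get(word)
        if gesture is not None:
            play_gesture(gesture)          # pre-programmed whole-word sign
        else:
            for letter in word:            # fall back to the finger alphabet
                spell_letter(letter)

# Example call (printing instead of moving motors):
# express_text("hallo welt", play_gesture=print, spell_letter=print)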
6. Discussion

6.1 Research Implications

People who experience hearing limitations face considerable challenges in their daily lives. The current research therefore attempts to enhance the inclusion of hearing-impaired and deaf people by developing a prototype for a robotic sign language translator (RSLT). We extend prior robotic research in several important directions. First, as our literature review shows, extant research largely focuses on sign language recognition, mostly in relation to visual recognition or deep learning. Our developed robotic prototype Robert is, to the best of our knowledge, the first system that can translate speech into sign language.

Second, this investigation contributes to research on assistive education robots. These robots mainly have been applied to teach psychologically disabled people [42] or supervise users and ensure their acquisition of technical skills [2, 37]. We propose an extension, such that language robots might teach sign language, as well as assist teachers in classrooms by translating their speech immediately, to increase the inclusion of hearing-impaired students in conventional school classes.

6.2 Limitations and Areas for Further Research

This research project is ongoing, seeking continuous improvements to the Robert prototype. Currently, the focus is on speech-to-sign translation; we hope that further research identifies means to integrate existing sign language recognition technologies to achieve comprehensive capabilities for both translation directions. This research has several limitations that may offer interesting areas for future research. First, this research focuses on speech-to-sign translation without offering the reverse option; future research could develop an integrated humanoid robot able to provide both speech-to-sign and sign-to-speech translation. Second, with today’s speech recognition, the robot cannot understand everything it is told; in classes with school or college students, it can sometimes be very loud, and the robot could have hearing difficulties. Third, so far, the robot is only able to show rather simple signs. Fourth, this study surveyed 50 sign language speakers in Germany; future research could study potential cultural differences with a larger sample.

We recently tested the prototype in a laboratory setting, demonstrating that the current iteration of Robert can express the entire German sign language alphabet and a set of about 50 words. We will soon conduct tests of the extent to which hearing-impaired or deaf people can recognize the sign language that Robert expresses, in an experimental study. This experiment will indicate whether our efforts to develop specific joints and fingers that offer sufficiently flexible hand and finger movements to express sign language have been sufficient. We also plan to compare human–robot interactions with human–human interactions, by determining people’s recognition of sign language expressed by the robot compared with sign language expressed by a human sign language translator.
2 Several words in sign language require head gestures and facial expressions.
7. References

[1] Abuzinadah, N. E., Malibari, A. A., and Krause, P. (2017). Towards empowering hearing impaired students’ skills in computing and technology. International Journal of Advanced Computer Science and Application, 8(1), 107-118.
[2] Akrour, R., Schoenauer, M., & Sebag, M. (2013, September). Interactive robot education. ECML/PKDD Workshop on Reinforcement Learning with Generalized Feedback: Beyond Numeric Rewards.
[3] Armstrong, J. S., & Overton, T. S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14(3), 396-402.
[4] Barros, P., Magg, S., Weber, C., & Wermter, S. (2014). A multichannel convolutional neural network for hand posture recognition. International Conference on Artificial Neural Networks, September (pp. 403-410). Springer, Cham.
[5] Bellugi, U., & Fischer, S. (1972). A comparison of sign language and spoken language. Cognition, 1(2-3), 173-200.
[6] Bowden, R., Windridge, D., Kadir, T., Zisserman, A., & Brady, M. (2004, May). A linguistic feature vector for the visual interpretation of sign language. European Conference on Computer Vision (pp. 390-401). Springer, Berlin, Heidelberg.
[7] Brashear, H., Starner, T., Lukowicz, P., & Junker, H. (2003). Using multiple sensors for mobile sign language recognition. Georgia Institute of Technology.
[8] Broadbent, E., MacDonald, B., Jago, L., Juergens, M., & Mazharullah, O. (2007). Human reactions to good and bad robots. Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, Oct 29 - Nov 2.
[9] Chamberlain, C., & Mayberry, R. I. (2008). American Sign Language syntactic and narrative comprehension in skilled and less skilled readers: Bilingual and bimodal evidence for the linguistic basis of reading. Applied Psycholinguistics, 29(3), 367-388.
[10] Course, S. T. (2006). ICTs in education for people with special needs. In United Nations Educational, Scientific and Cultural Organisation, IITE Training Materials, Moscow.
[11] De Graaf, M. & Ben Allouch, S. (2013). Exploring influencing variables for the acceptance of social robots. Robotics and Autonomous Systems, 61(12), 1476–1486.
[12] Efthimiou, E. & Fotinea, S. E. (2007). GSLC: creation and annotation of a Greek sign language corpus for HCI. International Conference on Universal Access in Human-Computer Interaction, July (pp. 657-666). Springer, Berlin, Heidelberg.
[13] Elliott, R., Glauert, J. R., Kennaway, J. R., Marshall, I., & Safar, E. (2008). Linguistic modelling and language-processing technologies for Avatar-based sign language presentation. Universal Access in the Information Society, 6(4), 375-391.
[14] Emmorey, K., Klima, E., & Hickok, G. (1998). Mental rotation within linguistic and non-linguistic domains in users of American sign language. Cognition, 68(3), 221-246.
[15] Ethnologue (2018). German sign language. Ethnologue – Languages of the World, https://ethnologue.com/language/gsg (research date: 03/30/2018).
[16] Forlizzi, J. & DiSalvo, C. (2006, March). Service robots in the domestic environment: a study of the Roomba vacuum in the home. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (pp. 258-265). ACM.
[17] Gao, W., Fang, G., Zhao, D., & Chen, Y. (2004). A Chinese sign language recognition system based on SOFM/SRN/HMM. Pattern Recognition, 37(12), 2389-2402.
[18] Gardner, R. A., & Gardner, B. T. (1969). Teaching sign language to a chimpanzee. Science, 165(3894), 664-672.
[19] German Deaf Association (2014).
[20] Habili, N., Lim, C. C., & Moini, A. (2004). Segmentation of the face and hands in sign language video sequences using color and motion cues. IEEE Transactions on Circuits and Systems for Video Technology, 14(8), 1086-1097.
[21] Karpouzis, K., Caridakis, G., Fotinea, S. E., & Efthimiou, E. (2007). Educational resources and implementation of a Greek sign language synthesis architecture. Computers & Education, 49(1), 54-74.
[22] Kidd, C. D. & Breazeal, C. (2006). Sociable robot systems for weight maintenance. In Proceedings of the 3rd IEEE Consumer Communications and Networking Conference, Las Vegas, Nevada, 253–257.
[23] Kipp, M., Heloir, A., & Nguyen, Q. (2011, September). Sign language avatars: Animation and comprehensibility. International Workshop on Intelligent Virtual Agents (pp. 113-126). Springer, Berlin, Heidelberg.
[24] Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robot. Robotics and Autonomous Systems, 58, 322–332.
[25] Kose, H. & Yorganci, R. (2011, October). Tale of a robot: humanoid robot assisted sign language tutoring. 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids) (pp. 105-111).
[26] Kose, H., Yorganci, R., Algan, E. H., & Syrdal, D. S. (2012). Evaluation of the robot assisted sign language tutoring using video-based studies. International Journal of Social Robotics, 4(3), 273-283.
[27] Krebs, H. I., Palazzolo, J. J., Dipietro, L., Ferraro, M., Krol, J., Rannekleiv, K., Volpe, B. T., & Hogan, N. (2003). Rehabilitation robotics: Performance-based progressive robot-assisted therapy. Autonomous Robots, 15, 7–20.
[28] Langevin, G. (2017). InMoov. http://inmoov.fr [research date: 09/11/2017].
[29] Lee, C., & Xu, Y. (1996, April). Online, interactive learning of gestures for human/robot interfaces. In International Conference on Robotics and Automation, IEEE (Vol. 4, pp. 2982-2987).
[30] Liddell, S. K., & Johnson, R. E. (1989). American sign language: The phonological base. Sign Language Studies, 64(1), 195-277.
[31] Malima, A. K., Özgür, E., & Çetin, M. (2006). A fast algorithm for vision-based hand gesture recognition for robot control.
[32] Markellou, P., Rigou, M., Sirmakessis, S., & Tsakalidis, A. (2000). A web adaptive educational system for people with hearing difficulties. Education and Information Technologies, 5(3), 189-200.
[33] Matarić, M. J., Eriksson, J., Feil-Seifer, D. J., & Winstein, C. J. (2007). Socially assistive robotics for post-stroke rehabilitation. Journal of Neuroengineering Rehabilitation, 4, 5.
[34] Mayberry, R. I. (1993). First-language acquisition after childhood differs from second-language acquisition: The case of American Sign Language. Journal of Speech, Language, and Hearing Research, 36(6), 1258-1270.
[35] Mehdi, S. A. & Khan, Y. N. (2002). Sign language recognition using sensor gloves. Proceedings of the 9th International Conference on Neural Information Processing, 5 (November) (pp. 2204-2206).
[36] Mouri, T., Kawasaki, H., & Umebayashi, K. (2005). Developments of new anthropomorphic robot hand and its master slave system. International Conference on Intelligent Robots and Systems, IEEE/RSJ (pp. 3225-3230).
[37] Mubin, O., Stevens, C. J., Shahid, S., Al Mahmud, A., & Dong, J. J. (2013). A review of the applicability of robots in education. Journal of Technology in Education and Learning, 1(209-15), 13.
[38] Nandy, A., Mondal, S., Prasad, J. S., Chakraborty, P., & Nandi, G. C. (2010). Recognizing & interpreting Indian sign language gesture for human robot interaction. International Conference on Computer and Communication Technology, September (pp. 712-717).
[39] Newport, E. L. (1988). Constraints on learning and their role in language acquisition: Studies of the acquisition of American Sign Language. Language Sciences, 10(1), 147-172.
[40] Padden, C., & Ramsey, C. (2000). American Sign Language and reading ability in deaf children. Language Acquisition by Eye, 1, 65-89.
[41] Piltan, F., Haghighi, S. T., Sulaiman, N., Nazari, I., & Siamak, S. (2011). Artificial control of PUMA robot manipulator: A review of fuzzy inference engine and application to classical controller. International Journal of Robotics and Automation, 2(5), 401-425.
[42] Robins, B., Dautenhahn, K., Te Boekhorst, R., & Billard, A. (2005). Robotic assistants in therapy and education of children with autism: can a small humanoid robot help encourage social interaction skills? Universal Access in the Information Society, 4(2), 105-120.
[43] Roy, N., Baltus, G., Fox, D., Gemperle, F., Goetz, J., Hirsch, T., & Thrun, S. (2000). Towards personal service robots for the elderly. Workshop on Interactive Robots and Entertainment, 25, 184.
[44] Sandhu, J. S. & Wood, T. (1990). Demography and market sector analysis of people with special needs in thirteen European countries: A report on telecommunication usability issues. RACE R1088 TUDOR, Special Needs Research Unit, Newcastle Polytechnic, Newcastle upon Tyne, UK.
[45] Senghas, A., & Coppola, M. (2001). Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science, 12(4), 323-328.
[46] Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779-1782.
[47] Silva, G., & DeSocio, T. J. (2016). Meet Wally. The room service robot of the Residence Inn Marriott at LAX. Posted: February 17, 2016. http://www.fox-la.com/news/local-news/meet-wally-the-room-service-robot-of-the-residence-inn-marriott-at-lax [research date: 4/11/2016].
[48] Singleton, J. L., & Newport, E. L. (2004). When learners surpass their models: The acquisition of American Sign Language from inconsistent input. Cognitive Psychology, 49(4), 370-407.
[49] Starner, T. & Pentland, A. (1997). Real-time American sign language recognition from video using hidden Markov models. Motion-Based Recognition (pp. 227-243). Springer, Dordrecht.
[50] Starner, T., Weaver, J., & Pentland, A. (1998). Real-time American sign language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12), 1371-1375.
[51] Stephanidis, C., Savidis, A., & Akoymianakis, D. (1995). Tools for user interfaces for all. In Proceedings of the 2nd TIDE Congress, Paris, France, 26-28 April, IOS Press, Amsterdam, 167-170.
[52] Stokoe Jr, W. C. (2005). Sign language structure: An outline of the visual communication systems of the American deaf. Journal of Deaf Studies and Deaf Education, 10(1), 3-37.
[53] Tejima, N. (2001). Rehabilitation robotics: A review. Advanced Robotics, 14(7), 551–564.
[54] Vanderheiden, G. G. (1994). Application software design guidelines: Increasing the accessibility of application software to people with disabilities and older users. Trace R&D Center, Dept. of Industrial Engineering, University of Wisconsin-Madison, at http://trace.wisc.edu/docs/software guidelines/software html.
[55] Van Gent, T., Goedhart, A. W., Knoors, H., & Treffers, P. D. A. (2012). Self-concept and ego development in deaf adolescents: a comparative study. Journal of Deaf Studies and Deaf Education, 17(3), 333-351.
[56] Vogler, C., & Metaxas, D. (2001). A framework for recognizing the simultaneous aspects of American sign language. Computer Vision and Image Understanding, 81(3), 358-384.
[57] Wareham, T., Clark, G., and Laugesen, C. (2001). Providing learning support for d/deaf and hearing-impaired students undertaking fieldwork and related activities. Cheltenham, UK: Geography Discipline Network.
[58] WHO (2015). Deafness and hearing loss. Available from http://www.who.int/mediacentre/fact-sheets/fs300/en/ [research date: 05/06/2018].
[59] Xu, J., Broekens, J., Hindriks, K., & Neerincx, M. A. (2014). Robot mood is contagious: Effects of robot body language in the imitation game. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems, 973–980.
[60] You, B. J., Hwangbo, M., Lee, S. O., Oh, S. R., Do Kwon, Y., & Lim, S. (2003). Development of a home service robot ‘ISSAC’. International Conference on Intelligent Robots and Systems IROS, 3, 2630–2635.