Icta13 MJ
Fig. 2 Movement prediction

Each dialogue step is controlled by the system within a known context, which is used to remove ambiguities caused by similarity between signs and to predict certain sign characteristics (hand location, head orientation, ...). Prediction-driven processing is easier than purely bottom-up processing (segmentation, tracking, characterization, ...), because an error at any one stage corrupts all the stages that follow. Measurements are also simpler and easier to verify when guided by prediction.

For example, Fig. 2 shows how the hand position can be predicted from the speed and direction of the hand's movement. Likewise, we can check whether the global shape of the hand has changed, instead of segmenting individual fingers, which is costly in terms of time.

Dialogue is a particular case of human-machine interaction in which the machine controls the exchange. The system therefore has expectations about the response, and these can rightly be exploited to make the system feasible. At each dialogue interaction, we predict the set of candidate signs to be recognized. In the following phase (recognition), we are concerned with
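The hand-position prediction described above can be sketched as a simple constant-velocity predictor with a search window check. This is a minimal illustration, not the paper's implementation; the function names, the constant-velocity assumption, and the window radius are all ours.

```python
# Hypothetical sketch of top-down hand tracking with prediction:
# predict the next position from the current speed and direction of
# movement (constant-velocity assumption), then check that the observed
# hand falls inside a search window around the prediction.

def predict_position(pos, velocity, dt=1.0):
    """Predict the next (x, y) hand position from the current position
    and per-frame velocity, assuming roughly constant velocity."""
    x, y = pos
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

def within_search_window(predicted, observed, radius=20.0):
    """Return True if the observed position lies inside a circular
    search window centred on the prediction; a miss suggests a
    tracking or segmentation error upstream."""
    dx = observed[0] - predicted[0]
    dy = observed[1] - predicted[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

# Hand at (100, 50) moving 5 px/frame right and 2 px/frame down:
pred = predict_position((100.0, 50.0), (5.0, 2.0))  # → (105.0, 52.0)
print(within_search_window(pred, (107.0, 53.0)))    # prints True
```

Checking the observation against the predicted window is what makes the measurements "easy to verify": a detection far from the prediction can be rejected cheaply instead of propagating through later stages.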
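The candidate-sign prediction per dialogue step can be sketched as restricting the recogniser to the signs the current context allows. The context names, sign labels, and score values below are invented for illustration; only the idea of a system-controlled dialogue step supplying a small candidate set comes from the text.

```python
# Hypothetical sketch: the machine-controlled dialogue step predicts a
# small set of candidate signs, and recognition is restricted to them,
# removing ambiguities between similar signs in the full lexicon.

CANDIDATES = {
    "greeting": ["HELLO", "GOODBYE"],
    "confirmation": ["YES", "NO", "MAYBE"],
}

def recognise(context, scores):
    """Pick the best-scoring sign, but only among the candidates that
    the current dialogue step predicts; `scores` maps sign -> score."""
    allowed = CANDIDATES.get(context, scores.keys())
    return max(allowed, key=lambda sign: scores.get(sign, float("-inf")))

# Over the full lexicon, HELLO narrowly outscores YES (an ambiguity)...
scores = {"YES": 0.4, "HELLO": 0.45, "NO": 0.1}
# ...but in a confirmation step only YES/NO/MAYBE are possible:
print(recognise("confirmation", scores))  # prints "YES"
```

The same classifier scores yield a different, context-appropriate answer once the dialogue step narrows the candidate set, which is exactly the expectation the text proposes to exploit.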