Next, using a headset microphone, the student reads the selection aloud. Sphinx speech recognition technology is used to interpret the reading. When the application detects that the student is unable to read a word, or has made a significant error, the software will intervene with visual and/or audio prompting. The application will first highlight the word so the student can try again, and then, if needed, it will read the word to the student. Due to the importance of this reading practice, Reading Assistant Expanded Edition requires users to read a selection two or three times. After reading a selection, a user may listen to his or her reading of the selection.

In the final step, users take a quiz to measure their comprehension of the material. The quiz questions may be multiple choice, true/false, or ask the student to select the sentence or paragraph that communicates a particular idea. The student is given feedback on their performance in each of the task areas (preview, read aloud, and quiz). The user is also given a review list of words, including words that required prompting (intervention) during the oral reading, as well as words that the software determined had more subtle errors or hesitations associated with them.

The software also provides a wide range of reports to teachers and administrators. These include measuring performance and tracking progress over time in different areas (fluency, comprehension strategies). Teachers can listen to recorded samples of oral reading and see the words the students had difficulty with. Reports can be generated at the student, class, grade, school, or even district level.

3. THE READING VERIFICATION TASK

As mentioned previously, the application of speech recognition technology to the guided oral reading process in Reading Assistant is different from a typical speech recognition application. This difference has motivated many of the modifications made to the recognition technology, and necessitated the development of application components outside of the recognition technology that manage the recognizer configuration, data processing, position tracking, and user interaction functionality. Figure 2 illustrates the architecture of the guided oral reading functionality in Reading Assistant.
Figure 2 - Guided Oral Reading Components
Fundamentally, this is a verification task: we wish to determine whether or not the student read each word of the text aloud correctly. This influences both how the recognizer is configured for the task and how we measure performance. In terms of configuration, we know the exact text the user is supposed to be reading, and we track progress through this text during the oral reading. Therefore we can reasonably limit the words from the text that we include in the language model to those in a relatively small window around where the user is currently reading. At the same time, the user may misread words and interject unrelated speech or non-speech sounds. Therefore the language model configuration at any given point in the text consists of a combination of words local to that point in the text, plus ‘competition’ elements. Competition elements include word foils [7] (models of mispronunciations or partial pronunciations of a word), noise models, and context-independent phoneme models.
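As a concrete illustration, the sketch below assembles the active items for one such window. It is a minimal, hypothetical rendering of the scheme just described, not Reading Assistant's actual API; the names (make_window_vocabulary, foils_for, CI_PHONE_FILLERS) and the window sizes are invented.

    # Hypothetical sketch of windowed language model configuration with
    # 'competition' elements, as described above. Names and sizes are invented.

    CI_PHONE_FILLERS = ["+AA+", "+IY+", "+S+"]  # stand-ins for a full CI phone set

    def foils_for(word):
        """Toy word foil: a truncated prefix standing in for a false start
        or partial pronunciation (real foils are built from pronunciations)."""
        return [word[: max(2, len(word) // 2)] + "-"] if len(word) > 3 else []

    def make_window_vocabulary(text_words, position, lookback=2, lookahead=10):
        """Items the recognizer may hypothesize at this reading position:
        nearby text words, their foils, and text-independent fillers."""
        lo = max(0, position - lookback)           # allow brief re-reads
        hi = min(len(text_words), position + lookahead)
        vocabulary = []
        for word in text_words[lo:hi]:
            vocabulary.append(word)                # the expected word itself
            vocabulary.extend(foils_for(word))     # its mispronunciation foils
        vocabulary += ["++NOISE++", "++BREATH++"]  # noise models
        vocabulary += CI_PHONE_FILLERS             # absorb off-text speech
        return vocabulary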
The guided oral reading task, with its goal of promoting fluency in reading, also influences the design of the application. Aside from whether the recognizer heard a user read a word, we also measure whether it was read fluently, i.e. without inappropriate hesitations or false starts [8]. Therefore timing information from the recognition process, and not just the word sequence, is important and must be accurate.

In addition, we have implemented a hierarchy of word importance which is used both in the configuration of the recognizer and in the processing of recognition results. In terms of comprehension, some words (such as articles and prepositions) are less likely to be critical to meaning. Readers are expected to know these short, common words well. Because they are short and less important to meaning, these words are often de-emphasized or mumbled in read text, and often misrecognized. Therefore we may not prompt or stop the user even if we do not get a correct recognition on a word in this category. Whether or not a word should be placed in this category depends both on the text's reading level and on context, since at lower levels readers may still be learning some of these words.
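The rule below is a deliberately simple, hypothetical rendering of this idea; the function-word list, length cutoff, and level threshold are invented, not the application's actual criteria.

    # Hypothetical word-importance rule: short function words may be exempt
    # from real-time prompting, except at low reading levels where students
    # may still be learning them. All thresholds here are invented.

    FUNCTION_WORDS = {"a", "an", "the", "of", "to", "in", "on", "at", "and", "or"}

    def is_prompt_exempt(word, text_reading_level):
        """True if a missed recognition of `word` should not trigger an
        intervention, per the hierarchy-of-importance idea described above."""
        if text_reading_level <= 1:            # beginning readers: verify everything
            return False
        return word.lower() in FUNCTION_WORDS and len(word) <= 3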
In addition, some application and recognizer changes were motivated by the difficult acoustic environments in schools and the challenges of obtaining a high-quality audio signal, especially when the users are children. A significant amount of effort has gone into a microphone check capability which detects signal quality issues [9] and instructs the user regarding correct microphone headset placement.
Finally, it is worth briefly discussing performance measurement for this application and the guided oral reading activity. Since this is a verification task, the most useful metrics, computed as in the sketch that follows, are:
• false negative rate: the percentage of the time we prompt or intervene on a word that was read correctly
• false positive rate: the percentage of the time we do not prompt or intervene on a word that was read incorrectly
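The following sketch computes both rates from a labeled test corpus under the definitions above (note that "false negative" here means a wrong intervention, the reverse of the more common usage); the data layout is an assumption made for illustration.

    # Sketch of the verification metrics defined above. Each result pairs the
    # ground truth ("was the word read correctly?") with the system's action
    # ("did we prompt or intervene?"). The tuple layout is an assumption.

    def verification_error_rates(results):
        """results: iterable of (read_correctly: bool, intervened: bool)."""
        correct = [iv for ok, iv in results if ok]
        incorrect = [iv for ok, iv in results if not ok]
        # False negative (per this paper): intervening on a correctly read word.
        fn_rate = sum(correct) / max(1, len(correct))
        # False positive: letting an incorrectly read word pass unchallenged.
        fp_rate = sum(1 for iv in incorrect if not iv) / max(1, len(incorrect))
        return fn_rate, fp_rate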
In our experience, the fundamental usability requirement is that the false negative rate be kept very low, typically 1% or less on average across a corpus of test data. Higher false negative rates lead to frustration and detract from the goal of building and promoting fluency. False negative and false positive rates trade off against one another when tuning a system, depending on the language model weights that are used and the penalties applied to ‘competition’ elements such as phoneme filler models.

This trade-off led to the development of a graded categorization of errors, where less severe errors such as mispronunciations and hesitations are marked by the software and placed on the word review list, but there is no intervention or real-time correction of the user. Mispronunciations are detected using a word confidence metric, and hesitations are categorized using timing analysis.
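A minimal sketch of such a graded categorization follows; the thresholds and the exact form of the inputs are invented, though the signals themselves (a word confidence score and word timing) are the ones described above.

    # Hypothetical graded error categorization driven by word confidence and
    # timing, as described above. Thresholds are invented for illustration.

    def categorize_word(read_correctly, confidence, onset_delay_ms,
                        conf_threshold=0.5, hesitation_ms=700):
        if not read_correctly:
            return "intervene"                 # severe: prompt in real time
        if confidence < conf_threshold:
            return "review: mispronunciation"  # subtle: review list only
        if onset_delay_ms > hesitation_ms:
            return "review: hesitation"        # fluency issue, no interruption
        return "ok"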
4. CMU SPHINX TECHNOLOGY IN READING ASSISTANT

Work on the development of Reading Assistant using CMU Sphinx technology began in 2002. Among the recognition technologies available at the time, Sphinx was chosen because of its accessibility and its application orientation. Sphinx-2 in particular was designed to be real-time and had a basic application interface. The version initially incorporated into the application was Sphinx-2 version 0.4.

Since elementary-age students were the primary initial focus of the software, the application also required the development of acoustic models based on children's speech. To develop these we used the SphinxTrain suite of programs for acoustic modeling. In more recent releases of the software, we have created improved models for adults as well as children, using data collected with the application software.

In 2006 we updated the Sphinx recognizer code base by merging the PocketSphinx code base into our code base. The goals of this merge were to obtain code fixes, support for running the recognizer on an embedded device, and the ability to evaluate fully continuous models.

5. MODIFICATIONS TO SPHINX

Developing the Reading Assistant application for the education market required many modifications to the Sphinx software. These changes were largely driven by the requirements of the education market as well as by the unique needs of the Reading Assistant application.

First of all, the source code had to be ported to the Apple Macintosh platform, because Macs represent a significant share of the education market and install base. The work required was mainly in developing the audio input and output component for this platform.

Another area of modification was making memory use, memory management, and load time more efficient for recognizer configuration data. In particular, the way Reading Assistant creates and uses language models required significant re-work of the corresponding code. Even though Reading Assistant does use the Sphinx-2 ‘n-gram’ language model implementation designed for statistical language models, true statistical language models (generated from a large corpus of text) are not appropriate for the reading verification task.

Therefore we have implemented a language model generation process which creates language models by rule from the given text. Trigram models represent the expectation that the user will mostly adhere to the presented text, with back-offs to bigram and unigram models allowing for departures from the correct word order. Word-specific competition (word foils or mispronunciations) can occur in the same n-gram positions as the correct word. Items that are not text-specific, such as noise models, context-independent phoneme filler models, and silence models, can be inserted at any point.
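To make the by-rule construction concrete, the sketch below emits trigram histories for a text segment, with each word's foils competing in the same slot. The output format is invented; the real models are written in the binary format discussed below.

    # Illustrative by-rule trigram generation from a text segment. Foils share
    # n-gram positions with the words they shadow. Format is invented; real
    # models are emitted in a compact binary form, not as Python tuples.

    def rule_based_trigrams(words, foils):
        """words: the text segment; foils: dict mapping word -> foil tokens."""
        entries = []
        for i in range(len(words) - 2):
            w1, w2, w3 = words[i:i + 3]
            # The correct word and its foils compete for the third position.
            for candidate in [w3] + foils.get(w3, []):
                entries.append(((w1, w2), candidate))
        return entries

    segment = "the cat sat on the mat".split()
    print(rule_based_trigrams(segment, {"sat": ["s-", "sad"]}))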
In fact, finite-state grammar language models might be more appropriate for the reading verification task, in order to model user behaviors such as word repeats and sentence re-starts. The Sphinx version originally integrated did not include finite-state grammar support, so we adapted the n-gram implementation to suit our application as described above. However, finite-state grammars are an approach we wish to investigate in future work.

Since we know the text the user is reading and are following along during reading, we can use this information to ‘focus’ the language model on the area of text the user is reading at the moment. This implies that we need to generate on the fly, and switch between, many ‘small’ language models, each representing a short segment of text. These needs led to a number of changes to the language modeling portion of the code, including the development of a binary format for language models, the ability to load language models from a stream, and other changes to optimize memory usage and memory management.
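A small sketch of this focusing behavior, under an assumed segment size and cache policy (both invented), might look like the following.

    # Sketch of switching among many small, text-focused language models as
    # the tracked reading position advances. Segment size and cache policy
    # are invented; the real code streams compact binary LMs instead.

    from functools import lru_cache

    SEGMENT_WORDS = 20                      # illustrative segment length

    @lru_cache(maxsize=8)                   # keep a few recent segments warm
    def lm_for_segment(text_id, segment_index):
        # Stand-in for generating (or stream-loading) the segment's by-rule LM.
        return f"LM<{text_id}:{segment_index}>"

    def active_lm(text_id, word_position):
        """Select the language model covering the current reading position."""
        return lm_for_segment(text_id, word_position // SEGMENT_WORDS)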
In the Reading Assistant application, acoustic models may also need to be switched or re-loaded. The changes made to enable this included adding the capability to re-initialize the recognizer within the application, and binary formats for some acoustic model files to reduce load time. These improvements were needed in particular to support an automatic model selection capability that was introduced in Reading Assistant 4.1. Reading Assistant has models which cover a spectrum of users from 5 or 6 years old to adults, but it is not necessarily easy or obvious for a user (or even a teacher) to select the model set that is going to work best, particularly for older children and teenagers. Model selection is therefore done automatically, based on a one-time enrollment process in which the user is asked to repeat a small number of phrases.

At the same time as automatic model selection, a vocal tract length normalization (VTLN) factor [12] is also calculated. This capability was added to the front-end feature extraction in Sphinx and is used to further improve the match of the user's speech to the selected model.
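The sketch below shows one plausible shape for this enrollment step: score the enrollment phrases against each candidate model set and warp factor, then keep the best pair. The model names, warp grid, and the score callback are assumptions, not the application's actual interfaces.

    # Hypothetical enrollment-based selection of an acoustic model set and a
    # VTLN warp factor. `score` stands in for a recognizer likelihood (e.g.
    # from forced alignment); all names and values here are invented.

    MODEL_SETS = ["child_5_8", "child_9_12", "teen", "adult"]
    WARP_FACTORS = [0.88 + 0.02 * i for i in range(13)]   # ~0.88 .. 1.12

    def select_model_and_warp(enrollment_utterances, score):
        """Return the (model_set, warp) pair with the highest total
        likelihood over the user's repeated enrollment phrases."""
        return max(
            ((m, w) for m in MODEL_SETS for w in WARP_FACTORS),
            key=lambda mw: sum(score(utt, mw[0], mw[1])
                               for utt in enrollment_utterances),
        )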
Another modification made to the Sphinx front end was the detection of specific signal quality problems. The following signal quality issues are detected: breath noise (breath pops), low signal-to-noise ratio, and hum noise in the signal due to 60 Hz power-line harmonics. These signal quality issues occur frequently in school situations; school-age children have a lot of difficulty using a headset microphone correctly, and poorly designed audio hardware and environments contribute to a high incidence of hum noise interference in the audio signal. Detection of these issues can be used to give instruction regarding corrective action (e.g. better microphone placement). This capability can also be used to alert users and teachers that a significant problem exists, and to prevent use of the recording functionality when signal quality is too poor.
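As an illustration of the hum check, the sketch below compares energy at power-line harmonics against total spectral energy, alongside a crude SNR estimate; the thresholds, tolerances, and framing of the audio are all invented.

    # Illustrative signal quality checks in the spirit of those described
    # above: 60 Hz hum detection and a rough SNR estimate. All thresholds
    # are invented; the real checks live in the modified Sphinx front end.

    import numpy as np

    def has_hum(samples, rate=16000, harmonics=(60, 120, 180), rel_db=-20.0):
        """Flag hum when energy near power-line harmonics stands out
        against the whole spectrum by more than `rel_db` decibels."""
        spectrum = np.abs(np.fft.rfft(samples)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        hum = sum(spectrum[np.abs(freqs - h) < 2.0].sum() for h in harmonics)
        return 10 * np.log10(hum / spectrum.sum() + 1e-12) > rel_db

    def snr_db(speech_frames, noise_frames):
        """Rough SNR: mean speech-frame power over mean noise-frame power."""
        p_speech = np.mean(np.square(speech_frames))
        p_noise = np.mean(np.square(noise_frames)) + 1e-12
        return 10 * np.log10(p_speech / p_noise)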
Another significant group of modifications to CMU Sphinx is aimed at providing ‘competition’ elements during speech recognition processing, in order to determine whether the user has made an error in reading. As mentioned previously, an additional ‘filler’ dictionary consisting of context-independent phoneme models has been added to the recognizer. This is implemented as a separate dictionary from the noise fillers so that separate penalties can be applied. A word confidence score measure [10,11] has also been implemented, which required the addition of a context-independent phoneme network running in parallel with the main recognition search. The score from this ‘competitor’ network is used in the word confidence calculation. Finally, the dictionary implementation has been modified to accommodate the addition of partial pronunciations and mispronunciations of words (word foils) to the dictionary at load time.
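A minimal sketch of the confidence idea: compare the word's acoustic score from the main search against the unconstrained phone network's score over the same frames. The normalization and threshold are assumptions; scores are taken to be log-likelihoods.

    # Sketch of word confidence from a parallel context-independent phoneme
    # network, as described above. Normalization and threshold are invented.

    def word_confidence(word_score, phone_net_score, n_frames):
        """Per-frame log-likelihood ratio of the hypothesized word against
        the free phone-loop competitor. Values near or below zero suggest
        the audio fits generic phones as well as it fits the expected word."""
        return (word_score - phone_net_score) / max(1, n_frames)

    # Usage: mark a likely mispronunciation below a tuned threshold.
    flagged = word_confidence(-4200.0, -4150.0, 55) < 0.0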
6. ACOUSTIC MODELS

The development of acoustic models using SphinxTrain for the Reading Assistant application began with the development of models based on children's speech, in order to obtain adequate performance on the Reading Assistant's largest target user group. The initial acoustic models were developed from commercially available children's speech corpora. More recent work in acoustic model development has included developing improved models for adult female speakers (another important user group, since the majority of primary education teachers are women). All of the models developed for the application have been semi-continuous acoustic models.

Finally, in our most recent release of the software, we enhanced our set of acoustic models further by adding adult and child acoustic models focused on the dialects of the Southern United States. These models were developed using more than 110 hours of audio data from 685 speakers, collected at schools using a customized version of the application. With the addition of the new models to the application, the false negative error rate on test speakers from the Southern region was reduced from 1.5% to 1%, while the false positive rate stayed constant.

7. SUMMARY

Reading Assistant is an interactive reading tutor which uses speech recognition technology to provide a helpful listener for guided oral reading practice. CMU Sphinx recognizer technology, including Sphinx-2, PocketSphinx, and SphinxTrain, has been used to develop and deploy this commercially successful application. The unique requirements of the recognition task for this application, and of the education market, have led to many modifications of the original CMU Sphinx technology.

8. REFERENCES

[1] J. Mostow and J. Beck, "When the Rubber Meets the Road: Lessons from the In-School Adventures of an Automated Reading Tutor that Listens", in B. Schneider and S.-K. McDonald (Eds.), Scale-Up in Education (Vol. 2, pp. 183-200), Rowman & Littlefield Publishers, Lanham, MD, 2007.

[2] J. Mostow, S. Roth, A. G. Hauptmann, and M. Kane, "A Prototype Reading Coach that Listens", Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), American Association for Artificial Intelligence, Seattle, WA, pp. 785-792, August 1994.

[3] S. Williams, D. Nix, and P. Fairweather, "Using Speech Recognition Technology to Enhance Literacy Instruction for Emerging Readers", in B. Fishman and S. O'Connor-Divelbiss (Eds.), Fourth International Conference of the Learning Sciences, Erlbaum, Mahwah, NJ, pp. 115-120, 2000.

… of Computer Science, Carnegie Mellon University, Pittsburgh, PA, May 1997.