CLAN
Brian MacWhinney
Carnegie Mellon University
https://doi.org/10.21415/T5G10R
When citing the use of TalkBank facilities, please use this reference to the last printed
version of the CHILDES manual:
MacWhinney, B. (2000). The CHILDES Project: Tools for Analyzing Talk. 3rd Edition.
Mahwah, NJ: Lawrence Erlbaum Associates
This allows us to systematically track usage of the programs and data through
scholar.google.com.
Part 2: CLAN 2
1 Getting Started........................................................................................................... 8
1.1 Why you want to learn CLAN ........................................................................................8
1.2 Learning CLAN....................................................................................................................8
1.3 Installing CLAN – MacOS .................................................................................................9
1.4 Installation Problems – MacOS ....................................................................................9
1.5 Installing CLAN – Windows ...........................................................................................9
2 Using the Web ......................................................................................................... 10
2.1 Community Resources ................................................................................................. 10
2.2 Downloading Materials ............................................................................................... 10
2.3 Using the Browsable Database ................................................................................. 10
2.4 Downloading Transcripts and Media ..................................................................... 11
3 Tutorial ..................................................................................................................... 12
3.1 The Commands Window ............................................................................................. 12
3.1.1 Setting the Working Directory ........................................................................................... 12
3.1.2 Output and Mor Lib Directories ........................................................................................ 13
3.1.3 The Recall Button .................................................................................................................... 13
3.1.4 The Progs Menu........................................................................................................................ 13
3.1.5 The FILE IN Button ................................................................................................................. 13
3.1.6 The TIERS Button .................................................................................................................... 14
3.2 Typing Command Lines ............................................................................................... 14
3.2.1 The Asterisk Wildcard ........................................................................................................... 15
3.2.2 Output Files ................................................................................................................................ 15
3.2.3 Redirection ................................................................................................................................. 16
3.3 Sample Runs..................................................................................................................... 16
3.3.1 Sample KWAL Run .................................................................................................................. 16
3.3.2 Sample FREQ Run .................................................................................................................... 17
3.3.3 Sample MLU Run ...................................................................................................................... 17
3.3.4 Sample COMBO Run................................................................................................................ 18
3.3.5 Sample GEM and GEMFREQ Runs .................................................................................... 18
3.4 Advanced Commands ................................................................................................... 19
3.5 Exercises............................................................................................................................ 22
3.5.1 MLU50 Analysis ........................................................................................................................ 23
3.5.2 MLU5 Analysis .......................................................................................................................... 25
3.5.3 MLT Analysis ............................................................................................................................. 25
3.5.4 TTR Analysis .............................................................................................................................. 26
3.5.5 Generating Language Profiles ............................................................................................ 27
3.6 Further Exercises ........................................................................................................... 28
4 The Editor ................................................................................................................. 30
4.1 Screencasts ....................................................................................................................... 30
4.2 Text Mode vs. CHAT Mode .......................................................................................... 30
4.3 File, Edit, Format, and Font Menus .......................................................................... 31
4.4 Mode Menu ....................................................................................................................... 31
4.5 Default Window Positioning, Size, and Font Control ........................................ 31
4.6 CA Styles ............................................................................................................................ 32
4.7 Setting Special Colors ................................................................................................... 33
4.8 Searching........................................................................................................................... 33
4.9 Send to Sound Analyzer ............................................................................................... 33
4.10 Tiers Menu Items ...................................................................................................... 33
1 Getting Started
This manual describes the use of the CLAN program, designed and written by Leonid
Spektor at Carnegie Mellon University. The acronym CLAN stands for Computerized
Language ANalysis. CLAN is designed specifically to analyze data transcribed in the
CHAT format. This is the format used in the various segments of the TalkBank system.
There are three parts to the overall TalkBank manual. Part 1 describes the CHAT
transcription system. Part 2 (this current manual) describes the CLAN analysis programs.
Part 3 describes the segments of the CLAN program that perform automatic
morphosyntactic analysis.
such as aphasia, adult conversation, or second language may wish to practice the
exercises with CHAT files and media appropriate to those areas.
3 Tutorial
Once you have installed CLAN, you start it by double-clicking on its icon or its
shortcut.
The output continues down the page. The exact shape of this window will depend on
how you have sized it.
“sample.cha”.
At this point, the command being constructed in the Commands window should look
like this:
freq @ +t*CHI
If you hit the RUN button at the bottom right of the Commands window, or if you just
hit a carriage return, the FREQ program will run and will display the frequencies of the
six words the child is using in this sample transcript.
In this command line, there are three parts. The first part gives the name of the
command; the second part tells the program to look at only the *CHI lines; and the third
part tells the program which file to analyze as input.
If you press the return key after entering this command, you should see a CLAN Out-
put window that gives you the result of this MLU analysis. This analysis is conducted, by
default, on the %mor line which was generated by the MOR program. If a file does not
have this %mor line, then you will need to use other forms of the MLU command that
count utterance length in words. Also, you will need to learn how to use the various options,
such as +t or +f. One way to learn the options is to use the various buttons in the graphic
user interface as a way of learning what CLAN can do. Once you have learned these
options, it is often easier to just type in this command directly. However, in other cases, it
may be easier to use buttons to locate rare options that are hard to remember. The decision
of whether to type directly or to rely on buttons is one that is left to each user.
What if you want to send the output to a permanent file and not just to the temporary
CLAN Output window? To do this you add the +f switch:
mlu +t*CHI +f sample.cha
Try entering this command, ending with a carriage return. You should see a message
in the CLAN Output window telling you that a new file called sample.mlu.cex has been
created. If you want to look at that file, type Control-O (Windows) or ⌘-o (Mac) for Open
File and you can use the standard navigation window to locate the sample.mlu.cex file. It
should be in the same directory as your sample.cha file.
You do not need to worry about the order in which the options appear. In fact, the only
order rule that is used for CLAN commands is that the command name must come first.
After that, you can put the switches and the file name in any order you wish.
The asterisk wildcard can be used to refer to a group of files (*.cha), a group of speakers
(CH*), or a group of words with a common form (*ing). To see how these could work
together, try out this command:
freq *.cha +s"*ing"
This command runs the FREQ program on all the .cha files in the LIB directory and
looks for all words ending in “-ing.” The output is sent to the CLAN Output window and
you can set your cursor there and scroll back and forth to see the output. You can print this
window or you can save it to a file.
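The asterisk behaves like a shell-style glob pattern. A minimal Python sketch (with hypothetical file and word lists, for illustration only) shows the same matching logic using the standard-library `fnmatch` function:

```python
# Shell-style wildcard matching, as used in CLAN's asterisk patterns.
# The file and word lists below are hypothetical, for illustration only.
from fnmatch import fnmatch

files = ["sample.cha", "0042.cha", "notes.txt"]
cha_files = [f for f in files if fnmatch(f, "*.cha")]
print(cha_files)  # -> ['sample.cha', '0042.cha']

words = ["jumping", "bunny", "reading"]
ing_words = [w for w in words if fnmatch(w, "*ing")]
print(ing_words)  # -> ['jumping', 'reading']
```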
If you do this, the output file will have the name “sample.mot.cex.” As an example of
a case where this would be helpful, consider how you might want to have a group of output
files for the speech of the mother and another group for the speech of the father. The
mother’s files would be named *.mot.cex and the father’s files would be named *.fat.cex.
3.2.3 Redirection
Instead of using the +f switch for output, you may sometimes want to use the redirect
symbol (>). This symbol sends all of the output to a single file. The individual analysis of
each file is preserved and grouped into one output file that is named in the command string.
There are three forms of redirection, as illustrated in the following examples:
freq sample.cha > myanalyses
freq sample.cha >> myanalyses
freq sample.cha >& myanalyses
The first form writes the output to the named file, replacing any previous contents; the
second appends the output to the end of the file; and the third also captures error and
warning messages in the file.
The -w and +w options indicate how many lines of text should be included before and
after the search words. A segment of the output looks as follows:
----------------------------------------
*** File "0042.cha": line 2724. Keyword: bunny
*CHI: 0 .
*MOT: see ?
*MOT: is the bunny rabbit jumping ?
*MOT: okay .
*MOT: wanna [: want to] open the book ?
----------------------------------------
If you triple-click on the line with the three asterisks, the whole original transcript will
open with that line highlighted. Repetitions and retracing will be excluded by default
unless you add the +r6 switch to the command.
In this file, the child uses the filler “uh” a lot, but that is ignored in the analysis. The
output for this command is:
> freq +t*CHI 0042.cha
freq +t*CHI 0042.cha
Sat Jun 14 14:38:12 2014
freq (13-Jun-2014) is conducting analyses on:
ONLY speaker main tiers matching: *CHI;
****************************************
From file <0042.cha>
Speaker: *CHI:
1 ah
2 bow+wow
1 vroom@o
------------------------------
3 Total number of different item types used
4 Total number of items (tokens)
0.750 Type/Token ratio
A statistical summary is provided at the end. In the above example, there were a total
of 4 words or tokens used with only 3 different word types. The type–token ratio is found
by dividing the total of unique words by the total of words spoken. For our example, the
type–token ratio would be 3 divided by 4 or 0.750.
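The same computation can be sketched in a few lines of Python, reconstructing the token list from the FREQ output above (bow+wow occurs twice):

```python
# Type/token ratio: number of distinct word types divided by total tokens.
# Token list reconstructed from the FREQ output above.
tokens = ["ah", "bow+wow", "bow+wow", "vroom@o"]
types = set(tokens)
ttr = len(types) / len(tokens)
print(f"{len(types)} types / {len(tokens)} tokens = {ttr:.3f}")  # -> 3 types / 4 tokens = 0.750
```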
The +f option can be used to save the results to a file. CLAN will automatically add the
.frq.cex extension to the new file it creates. By default, FREQ excludes the strings xxx,
yyy, www, as well as any string immediately preceded by one of the following symbols:
0, &, +, -, #. However, FREQ excludes all retraced material unless otherwise commanded.
For example, given this utterance:
*CHI: the dog [/] dog barked.
FREQ would give a count of one for the word dog. If you wish to include retraced material,
use the +r6 option. To learn more about the many variations in FREQ, read the section
devoted specifically to this useful command.
Thus, we have the mother’s MLU or ratio of morphemes over utterances (3.108) and
her total number of utterances (511).
Here, the string +tMOT selects the mother’s speaker tier only for analysis. When
searching for a combination of words with COMBO, it is necessary to precede the com-
bination with +s (e.g., +s"kitty^kitty") in the command line. The symbol ^ specifies that
the word kitty is immediately followed by the word kitty. The output of the command used
above is as follows:
> combo +tMOT +s"kitty^kitty" 0042.cha
kitty^kitty
combo +tMOT +skitty^kitty 0042.cha
Sat Jun 14 14:44:21 2014
combo (13-Jun-2014) is conducting analyses on:
ONLY speaker main tiers matching: *MOT;
****************************************
From file <0042.cha>
----------------------------------------
*** File "0042.cha": line 3034.
*MOT: (1)kitty (1)kitty kitty .
----------------------------------------
*** File "0042.cha": line 3111.
*MOT: and (1)kitty (1)kitty .
Strings matched 2 times
transcripts for further analyses. For example, we might want to divide the transcript by
different social situations or activities. In the 0012.cha file, there are gem markers
delineating the segment of the transcript that involves book reading, using the code word
“book”. By dividing the transcripts in this manner, separate analyses can be conducted on
each situation type. Once this is done, you can use this command to compute a frequency
analysis for material in these segments:
gemfreq +t*CHI +sbook 0012.cha
GEM and GEMFREQ are particularly useful in corpora such as the AphasiaBank
transcripts. In these, each participant does a retell of the Cinderella story that is marked
with @G: Cinderella. Using the three Kempler files, the following command will create
three new files with only the Cinderella segment:
gem +sCinderella +n +d1 +t*PAR +t%mor +f *.cha
You can then run further programs such as MLU or FREQ on these shorter files.
Run KWAL on the Participant, searching for the list of words in the whwords.cut file in
the /examples/pos folder. You will need to copy that file into the /Adler folder:
kwal +t*PAR +s@whwords.cut adler23a.cha
Run KWAL on the Participant to exclude utterances coded with the post-code [+ exc]
and create new files in legal CHAT format for all the files:
kwal -s"[+ exc]" +d +t*PAR +t%mor +t@ +f *.cha
Run COMBO on the Participant to find all sequences of "fairy” followed immediately
by “godmother" and combine the results from all the files into a single file:
combo +t*PAR +sfairy^godmother +u *.cha
Run COMBO on the Participant's %mor tier to find all combinations of infinitive and
verb in adler01a.cha:
combo +s"inf|*^v|*" +t*PAR +t%mor adler01a.cha
Run MAXWD on the Participant to get the longest utterance in words in all files:
maxwd +g2 +t*PAR *.cha
Run EVAL on the Participant to get a spreadsheet with summary data (duration, MLU,
TTR, % word errors, # utterance errors, % various parts of speech, # repetitions, and #
revisions) in all the files. Add +o4 to get output in raw numbers instead of percentages.
eval +t*PAR +u *.cha
Run MLU on the Participant, creating one spreadsheet for all files. Add -b to get MLU
in words:
mlu +t*PAR +d +u *.cha
Run MLT on the Participant, creating one spreadsheet for all files. MLT counts
utterances and words on a line that may include xxx (unlike MLU):
mlt +t*PAR +d *.cha
Run TIMEDUR on the Participant, creating a spreadsheet with ratio of words and
utterances over time duration for all files:
timedur +t*PAR +d10 *.cha
Run GEM on the Participant, including the %mor line, using the “Sandwich” gem with
lazy gem marking, outputting legal CHAT format for adler07.cha:
gem +t*PAR +t%mor +sSandwich +n +d1 adler07a.cha
Run GEM on the Participant main tier and %mor tier for the Sandwich “gem”, using
lazy gem marking, create a new file in legal CHAT format called "Sand" for all Adler files:
gem +t*PAR +t%mor +sSandwich +n +d1 +fSand *.cha
Run VOCD on the Participant, output to spreadsheet only, and include repetitions and
revisions in all the files:
Run CHIP to compare the Mother and the Child in terms of utterance overlaps with
both the previous speaker (%chi and %adu, echoes) and their own previous utterances
(%csr and %asr, self-repetitions) in chip.cha (the chip.cha file is in examples/progs):
chip +bMOT +cCHI chip.cha
Same thing, but excluding printing of the results for the self-repetitions:
chip +bMOT +cCHI -ns chip.cha
The next commands all use the FREQ program to illustrate various options.
Run FREQ on the Participant tier and get output in order of descending frequency for
adler01a.cha:
freq +t*PAR +o adler01a.cha
Run FREQ on the Participant tier and send output to a spreadsheet for adler01a.cha. To
open the spreadsheet, triple-click on stat.frq.xls:
freq +t*PAR +d2 adler01a.cha
Run FREQ on the Participant tier and get type token ratio only in a spreadsheet for
adler01a.cha:
freq +t*PAR +d3 adler01a.cha
Run FREQ on the Participant %mor tier and not the Participant speaker tier and get
output in order of descending frequency for adler01a.cha:
freq +t%mor +t*PAR -t* +o adler01a.cha
Run FREQ on the Participant %mor tier for stems only (happily and happier = happy)
and get output in order of descending frequency for adler01a.cha:
freq +t*PAR +t%mor -t* +s"mr-*,o-%" +o adler01a.cha
Learn how to use the +s switch for analysis of the %mor line:
freq +sm
Learn how to use the +s switch for analysis of the %gra line:
freq +sg
Run FREQ on the Participant tier, include fillers "uh" and "um", and get output in order
of descending frequency for adler01a.cha:
freq +t*PAR +s+&uh +s+&um +o adler01a.cha
Run FREQ on the Participant tier and count instances of unintelligible jargon for
adler01a.cha:
freq +t*PAR +s"xxx" adler01a.cha
Same, but adding +d to see the actual place of occurrence, then triple-click on any line
that has a file name to open the original:
Run FREQ on the Participant tier, counting instances of gestures for adler01a.cha:
freq +t*PAR +s&=ges* adler01a.cha
Run FREQ on the Participant tier, including repetitions and revisions, excluding
neologisms (nonword:unknown target), and getting output in order of descending
frequency for adler01a.cha. Add +d6 to include error production info. Add +d4 for type
token info only.
freq +t*PAR +r6 -s"<*\* n:uk*>" +o adler01a.cha
Run FREQ on the Participant, searching for a list of words in a 0list.cut file you have
created with multiple words searched per line, where multiple words do not have to be
found in consecutive alignment, but must be in the same utterance, and merging output
across all files:
freq +t*PAR +s@0list.cut +c3 +u *.cha
3.5 Exercises
This section presents exercises designed to help you think about the application of
CLAN for specific aspects of language analysis. The illustrations in the section below are
based on materials developed by Barbara Pan originally published in Chapter 2 of Sokolov
and Snow (1994). They are included in the /examples/transcripts/ne20 folder. The original
text has been edited to reflect subsequent changes in the programs and the database.
Barbara Pan devised the initial form of this extremely useful set of exercises and kindly
consented to their inclusion here.
One approach to transcript analysis focuses on the computation of certain measures or
scores that characterize the stage of language development in the children or adults in the
sample.
1. One popular measure (Brown, 1973) is the MLU or mean length of utterance, which
can be computed by the MLU program.
2. A second measure is the MLU of the five longest utterances in a sample, or MLU5.
Wells (1981) found that increases in MLU of the five longest utterances tend to parallel
those in MLU, with both levelling off after about 42 months of age. Brown suggested
that MLU of the longest utterance tends, in children developing normally, to be
approximately three times greater than MLU.
3. A third measure is MLT or Mean Length of Turn, which can be computed by the MLT
program.
4. A fourth popular measure of lexical diversity is the type–token ratio of Templin (1957).
In these exercises, we will use CLAN to generate these four measures of spontaneous
language production for a group of normally developing children at 20 months. The goals
are to use data from a sizeable sample of normally developing children to inform us as to
the average (mean) performance and degree of variation (standard deviation) among chil-
dren at this age on each measure; and to explore whether individual children's performance
relative to their peers was constant across domains. That is, were children whose MLU was
low relative to their peers also low in terms of lexical diversity and conversational partici-
pation? Conversely, were children with relatively advanced syntactic skills as measured by
MLU also relatively advanced in terms of lexical diversity and the share of the conversa-
tional load they assumed?
The speech samples analyzed here are taken from the New England corpus of the
CHILDES database, which includes longitudinal data on 52 normally developing children.
Spontaneous speech of the children interacting with their mothers was collected in a play
setting when the children were 14, 20, and 32 months of age. Transcripts were prepared
according to the CHAT conventions of the Child Language Data Exchange System,
including conventions for morphemicizing speech, such that MLU could be computed in
terms of morphemes rather than words. Data were available for 48 of the 52 children at 20
months. The means and standard deviations for MLU5, TTR, and MLT reported below are
based on these 48 children. Because only 33 of the 48 children produced 50 or more
utterances during the observation session at 20 months, the mean and standard deviation
for MLU50 is based on 33 subjects.
For illustrative purposes, we will discuss five children: the child whose MLU was the
highest for the group (68.cha), the child whose MLU was the lowest (98.cha), and one child
each at the first (66.cha), second (55.cha), and third (14.cha) quartiles. Transcripts for these
five children at 20 months can be found in the /examples/transcripts/ne20 directory.
Our goal is to compile the following basic measures for each of the five target children:
MLU on 50 utterances, MLU of the five longest utterances, TTR, and MLT. We then
compare these five children to their peers by generating z-scores based on the means and
standard deviations for the available sample for each measure at 20 months. In this way,
we will generate language profiles for each of our five target children.
analysis, you need to be in the directory where your data is when you issue the appropriate
CLAN command. In this case, we want to be in the /examples/transcripts/ne20 folder.
The command string we used to compute MLU for all five children is:
mlu +t*CHI +z50u +f *.cha
+t*CHI Analyze the child speaker tier only
+z50u Analyze the first 50 utterances only
+f Save the results in a file
*.cha Analyze all files ending with the extension .cha
The only constraint on the order of elements in a CLAN command is that the name of
the program (here, MLU) must come first. Many users find it good practice to put the name
of the file on which the analysis is to be performed last, so that they can tell at a glance
both what program was used and what file(s) were analyzed. Other elements may come in
any order.
The option +t*CHI tells CLAN that we want only CHI speaker tiers considered in the
analysis. Were we to omit this string, a composite MLU would be computed for all speakers
in the file.
The option +z50u tells CLAN to compute MLU on only the first 50 utterances. We
could, of course, have specified the child’s first 100 utterances (+z100u) or utterances from
the 51st through the 100th (+z51u-100u). With no +z option specified, MLU is computed
on the entire file.
The option +f tells CLAN that we want the output recorded in output files, rather than
simply displayed onscreen. CLAN will create a separate output file for each file on which
it computes MLU. If we wish, we may specify a three-letter file extension for the output
files immediately following the +f option in the command line. If a specific file extension
is not specified, CLAN will assign one automatically. In the case of MLU, the default
extension is .mlu.cex. The .cex at the end is mostly important for Windows, since it allows
the Windows operating system to know that this is a CLAN output file.
Finally, the string *.cha tells CLAN to perform the analysis specified on each file end-
ing in the extension .cha found in the current directory. To perform the analysis on a single
file, we would specify the entire file name (e.g., 68.cha). It was possible to use the wildcard
* in this and following analyses, rather than specifying each file separately, because all the
files to be analyzed ended with the same file extensions and were in the same directory;
and in each file, the target child was identified by the same speaker code (i.e., CHI), thus
allowing us to specify the child’s tier by means of +t*CHI.
Utilization of wildcards whenever possible is more efficient than repeatedly typing in
similar commands. It also cuts down on typing errors. For illustrative purposes, let us
suppose that we ran the above analysis on only a single child (68.cha), rather than for all
five children at once (by specifying *.cha). We would use the following command:
mlu +t*CHI +z50u 68.cha
****************************************
From file <68.cha>
MLU for Speaker: *CHI:
MLU (xxx, yyy and www are EXCLUDED from the utterance and morpheme
counts):
Number of: utterances = 50, morphemes = 133
Ratio of morphemes over utterances = 2.660
Standard deviation = 1.595
MLU reports the number of utterances (in this case, the 50 utterances we specified),
the number of morphemes that occurred in those 50 utterances, the ratio of morphemes
over utterances (MLU in morphemes), and the standard deviation of utterance length in
morphemes. The standard deviation statistic gives some indication of how variable the
child’s utterance length is. This child’s average utterance is 2.660 morphemes long, with a
standard deviation of 1.595 morphemes.
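The ratio itself is easy to verify; a minimal sketch using the counts reported above (the per-utterance morpheme counts behind the standard deviation are not reproduced here):

```python
# MLU in morphemes: total morphemes divided by total utterances,
# using the counts from the MLU output above.
utterances = 50
morphemes = 133
mlu = morphemes / utterances
print(f"MLU = {mlu:.3f}")  # -> MLU = 2.660
```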
Check line 1 of the output for typing errors in entering the command string. Check lines
3 and possibly 4 of the output to be sure the proper speaker tier and input file(s) were
specified. Also, check to be sure that the number of utterances or words reported is what
was specified in the command line. If CLAN finds that the transcript contains fewer
utterances or words than the number specified with the +z option, it will still run the
analysis but will report the actual number of utterances or words analyzed.
In this output file, the results for the mother in 68.cha are:
There is similar output data for the child. This output allows us to consider Mean
Length of Turn either in terms of words per turn or utterances per turn. We chose to use
words per turn in calculating the ratio of child MLT to mother MLT, reasoning that words-
per-turn is likely to be sensitive for a somewhat longer developmental period. MLT ratio,
then, was calculated as the ratio of child words/turn over mother words/turn. As the child
begins to assume a more equal share of the conversational load, the MLT ratio should
approach 1.00. For file 68.cha, this ratio is: 2.184 ÷ 5.991 = 0.365.
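The arithmetic for that ratio, as a quick check:

```python
# MLT ratio: child words-per-turn divided by mother words-per-turn,
# using the values reported for file 68.cha above.
child_words_per_turn = 2.184
mother_words_per_turn = 5.991
ratio = child_words_per_turn / mother_words_per_turn
print(f"MLT ratio = {ratio:.3f}")  # -> MLT ratio = 0.365
```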
We can look at each of the five output files to get this summary TTR information for
each child.
The distribution of MLU50 scores was quite skewed, with most children who produced
at least 50 utterances falling in the MLU range of 1.00-1.30. As noted earlier, 15 of the 48
children failed to produce even 50 utterances. At this age most children in the sample are
essentially still at the one-word stage, producing few utterances of more than one word or
morpheme. Like MLU50, the shape of the distributions for MLU5 and for the MLT ratio
were somewhat skewed toward the lower end, though not as severely as was MLU50.
Z-scores, or standard scores, are computed by subtracting the sample mean score from
the child’s score on a particular measure and then dividing the result by the overall standard
deviation: (child's score - group mean) / standard deviation. The results of this computation
are given in the following table.
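That formula can be written as a tiny helper function; the numbers in the call below are illustrative assumptions, not values taken from the table:

```python
# z-score: a child's distance from the group mean, in standard-deviation units.
def z_score(child_score, group_mean, group_sd):
    return (child_score - group_mean) / group_sd

# Illustrative (assumed) values: a child MLU of 2.66 against a
# hypothetical group mean of 1.40 with a standard deviation of 0.63.
print(round(z_score(2.66, 1.40, 0.63), 2))  # -> 2.0
```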
We would not expect to see radical departures from the group means on any of the
measures. For the most part, this expectation is borne out: we do not see departures greater
than 2 standard deviations from the mean on any measure for any of the five children,
except for the particularly high MLU50 and MLU5 observed for Subject 068.
It is not the case, however, that all five of our target children have flat profiles. Some
children show marked strengths or weaknesses relative to their peers in certain domains.
For example, Subject 14, although very close to the mean in terms of utterance length
(MLU50 and MLU5), shows marked strength in lexical diversity (TTR), even though she
shoulders relatively little of the conversational burden (as measured by MLT ratio).
Overall, Subject 68 seems advanced on all measures except TTR. The subjects at the
second and third quartile in terms of MLU (Subject 055 and Subject 066) have profiles that
are relatively flat: Their z-scores on each measure fall between -1 and 0. However, the child
with the lowest MLU50 (Subject 098) again shows an uneven profile. Despite her limited
production, she manages to bear her portion of the conversational load. You will recall that
unintelligible vocalizations transcribed as xxx or yyy, as well as nonverbal turns indicated
by the postcode [+ trn], are all counted in computing MLT. Therefore, it is possible that
many of this child’s turns consisted of unintelligible vocalizations or nonverbal gestures.
What we have seen in examining the profiles for these five children is that, even among
normally developing children, different children may have strengths in different domains,
relative to their age mates. For illustrative purposes, we have considered only three do-
mains, as measured by four indices. To get a more detailed picture of a child’s language
production, we might choose to include other indices, or to further refine the measures we
use. For example, we might compute TTR based on the number of words, or we might
time-sample by examining the number of word types and word tokens the child produced
in a certain number of minutes of mother–child interaction. We might also consider other
measures of conversational competence, such as number of child initiations and responses;
fluency measures, such as number of retraces or hesitations; or pragmatic measures, such
as variety of speech acts produced. Computation of some of these measures would require
that codes be entered in the transcript prior to analysis; however, the CLAN analyses
themselves would, for the most part, simply be variations on the techniques discussed in
this chapter. In the exercises that follow, you will have an opportunity to use these
techniques to perform analyses on these five children at both 20 months and 32 months.
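The computations behind these profiles are simple to reproduce. The following Python sketch shows the z-score standardization and the type–token ratio; the numbers are hypothetical illustrations, not values from the CHILDES data.

```python
def z_score(value, mean, sd):
    """Standardize a child's raw score against the group mean and SD."""
    return (value - mean) / sd

def ttr(tokens):
    """Type-token ratio: distinct word types divided by total word tokens."""
    return len(set(tokens)) / len(tokens)

# Hypothetical example: a child with an MLU50 of 1.8 against a group
# mean of 1.5 (SD 0.3) sits one standard deviation above the mean,
# well inside the +/-2 SD band discussed above.
print(round(z_score(1.8, 1.5, 0.3), 3))   # 1.0
print(ttr("a hat a big hat".split()))     # 0.6 (3 types, 5 tokens)
```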
4. Perform the same analyses for the four target children for whom data are available at
age 32 months. Use the data given earlier to compute z-scores for each target child on
each measure (MLU 50 utterances, MLU of five longest utterances, TTR, MLT ratio).
Then plot profiles for each of the target children at 32 months. What consistencies and
inconsistencies do you see from 20 to 32 months? Which children, if any, have similar
profiles at both ages? Which children's profiles change markedly from 20 to 32 months?
5. Conduct a case study of a child you know to explore whether type of activity and/or
interlocutor affect mean length of turn (MLT). Videotape the child and mother engaged
in two different activities (e.g., bookreading, having a snack together, playing with a
favorite toy). On another occasion, videotape the child engaged in the same activities
with an unfamiliar adult. Compare the MLT ratio for each activity and adult–child pair.
Describe any differences you observe.
4 The Editor
CLAN includes an editor that is specifically designed to work cooperatively with
CHAT files. To open up an editor window, either type ⌘-n (Control-n on Windows) for a
new file or ⌘-o to open an old file (Control-o on Windows). This is what a new text
window looks like on the Macintosh:
You can type into this editor window just as you would in any full-screen text editor,
such as MS-Word. In fact, the basic functions of the CLAN editor and MS-Word are all
the same. Some users say that they find the CLAN editor difficult to learn. However, on
the basic level it is no harder than MS-Word. What makes the CLAN editor difficult is the
fact that it is used to transcribe the difficult material of child language data with all its
special forms, overlaps, and precise timings. These functions are outside the scope of
editors such as MS-Word or Pages.
4.1 Screencasts
Use of the tutorial can be supplemented through the online screencasts for specific
CLAN features found at https://talkbank.org/screencasts/ and on YouTube. These movies,
created by Davida Fromm and Brian MacWhinney, show the use of specific CLAN
functions in real time with real transcripts.
By default, a new window will be opened using CHAT mode. You can use this editor window to start learning the
editor or you can open an existing CHAT file using the option in the File menu. It is prob-
ably easiest to start work with an existing file. To open a file, type Command-o (Macintosh)
or Control-o (Windows). You will be asked to locate a file. Try to open the sample.cha file
that you will find in the Lib directory inside the CLAN directory or folder. This is just a
sample file, so you do not need to worry about accidentally saving changes.
You should stay in CHAT mode until you have learned the basic editing commands.
You can insert characters by typing in the usual way. Movement of the cursor with the
mouse and arrow keys works the same way as in Word or Pages. Functions like scrolling,
highlighting, cutting, and pasting also work in the standard way. You should try out these
functions right away. Use keys and the scroll bar to move around in the sample.cha file.
Cut and paste sections and type a few sentences, just to convince yourself that you are
already familiar with the basic editor functions.
4.6 CA Styles
CHAT supports many of the CA (Conversation Analysis) codes as developed by Sacks,
Schegloff, and Jefferson (1974) and their students. The implementation of CA inside CLAN
was guided by suggestions from Johannes Wagner, Chris Ramsden, Michael Forrester, Tim
Koschmann, Charles Goodwin, and Curt LeBaron. Files that use CA styles should declare
this fact by including CA in the @Options line, as in this example:
@Options: CA
By default, CA files will use the CAfont, because the characters in this font have a
fixed width, allowing the INDENT program to make sure that CA overlap markers are
clearly aligned. When doing CA transcription, you can also select underlining and italics,
although bold is not allowed, because it is too difficult to recognize. Special CA characters
can be inserted by typing the F1 function key followed by some letter or number, as
indicated in a list that you can find by selecting Special Characters under CLAN’s
Windows menu. The full list is at https://ca.talkbank.org/codes.html .
The F1 and F2 keys are also used to facilitate the entry of special characters for Hebrew,
Arabic, and other systems. These uses are also listed in the Special Characters window.
The raised h diacritic is bound to F1-shift-h and the subscript dot is bound to F1-comma.
4.8 Searching
In the middle of the Edit pulldown menu, you will find a series of commands for
searching. The Find command brings up a dialog that allows you to enter a search string
and to perform a reverse search. The Find Same command allows you to repeat that same
search multiple times. The Go To Line command allows you to move to a particular line
number. The Replace command brings up a dialog like the Find dialog. However, this
dialog allows you to find a certain string and replace it with another one. You can replace
some strings and not others by skipping over the ones you do not want to replace with the
Find-Next function. When you need to perform a large series of different replacements,
you can set up a file of replacement forms and use it by pressing the from file button. You
then are led through the words in this replacement file one by one. The form of that file is
like this:
"String_A" "Replacement_A"
"String_B" "Replacement_B"
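Each pair in such a file simply maps the first string to the second throughout the transcript. A minimal Python sketch of that behavior (the quoting format follows the example above; the strings themselves are made up):

```python
import re

def load_pairs(text):
    """Parse lines of the form "old" "new" into (old, new) tuples."""
    return re.findall(r'"([^"]*)"\s+"([^"]*)"', text)

def apply_pairs(line, pairs):
    """Apply each replacement in file order to one line of text."""
    for old, new in pairs:
        line = line.replace(old, new)
    return line

pairs = load_pairs('"dont" "don\'t"\n"cant" "can\'t"')
print(apply_pairs("I dont know and I cant say.", pairs))
# I don't know and I can't say.
```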
name quickly, using the commands listed in that menu. If you make changes to the
@Participants line, you can press the Update button at the bottom of the menu to reload
new speaker names. As an alternative to manual typing of information on the @ID lines,
you can enter information for each participant separately using the dialog system that you
start up using the ID Headers option in the Tiers menu.
hangs. If you check this, CLAN will not create a backup file.
6. Start in CHAT Coder mode. Checking this will also start you in Text Mode when
you open a new text window.
7. Auto-wrap in Text Mode. This will wrap long lines when you type.
8. Auto-wrap CLAN output. This will wrap long lines in the output.
9. Show mixed stereo sound wave. CLAN can only display a single sound wave when
editing. If you are using a stereo sound, you may want to choose this option.
10. Output Unix CRs. This is for people who use CLAN on Unix.
The [E] entry indicates that you are in editor mode and the [chat] entry indicates that
you are in CHAT Mode. To begin coding, you first want to set your cursor on the first
utterance you want to code. If the file already has %spa lines coded, you will be adding
additional codes. If none are present yet, the editor will add a new %spa line.
You can use the barry.cha file in the /examples/transcripts folder. Once you have placed
the cursor anywhere on the first line you want to code, you are ready to leave CHAT Mode
and start using Coder Mode. To go into Coder Mode, type esc-e (always release the escape
key before entering the next key). You will be asked to load a codes file. Just navigate to
the /examples/coder/ directory and select one of the demo codes files beginning with the
word “code.” We will use codes1.cut for our example.
type esc-c to signal that you have completed the current code. You may then enter any
subsequent codes for the current tier.
Once you have entered all the codes for a tier, type esc-c to signal that you are finished
coding the current tier. You may then either highlight a different coding tier relevant to the
same main line, or move on to code another main line. To move on to another main line,
you may use the arrow keys to move the cursor, or you may automatically proceed to the next
main speaker tier by typing Control-t. Typing Control-t will move the cursor to the next
main line, insert the highlighted dependent coding tier, and position you to select a code
from the list of codes given. If you want to move to yet another line, skipping over a line,
type Control-t again. Try out these various commands to see how they work.
If you want to code data for only one speaker, you can restrict the way in which the
Control-t feature works by using esc-t to reset the set-next-tier-name function. For
example, you can confine the operation of the coder to only the *CHI lines by typing esc-t and
then entering CHI. You can only do this when you are ready to move on to the next line.
If you receive the message “Finish coding current tier” in response to a command (as,
for example, when trying to change to editor mode), use esc-c to extricate yourself from
the coding process. At that point, you can reissue your original command. Here is a sum-
mary of the commands for controlling the coding window. On Macintosh, use the
command key instead of the control key. Remember to release the esc key before the next
character.
Command Function
esc-c finish current code
esc-c (again) finish current tier
control-z undo
control-t or F1 finish current tier and go to next
esc-t restrict coding to a particular speaker
esc-esc go to the next speaker
esc-s show subcodes under cursor
In this example, the +b option sets the checkpoint buffer (that is, the interval at which
the program will automatically back up the work you have done so far in that session). If
you find the interval is too long or too short, you can adjust it by changing the value of b.
The +d option tells the editor to keep a “.bak” backup of your original CHAT file. To turn
off the backup option, use –d. The +l option reorders the presentation of the codes based
on their frequency of occurrence. There are three values of the +l option:
0 leave codes without frequency ordering
1 move most frequent code to the top
2 move codes up one level by frequency
If you use the +s option, the program assumes that all the codes at a particular level
have the same codes symmetrically nested within them. For example, consider the codes-
basic.cut file:
\ +b50 +l1 +s1
%spa:
 " $MOT
  :POS
   :Que
   :Res
  :NEG
 " $CHI
The spaces in this file must be spaces and not tabs. The line with $MOT begins with
a space. Then there is the quote sign, followed by one more space. There are two spaces
before :POS, because that code appears in the second field. There are three spaces before
:Que, because that code appears in the third field. There must be a tab following the colon
on the %spa: tier, because that code needs to be inserted in the actual output in the CHAT
file. The above file is a shorthand for the following complete listing of code types:
$MOT:POS:Que
$MOT:POS:Res
$MOT:NEG:Que
$MOT:NEG:Res
$CHI:POS:Que
$CHI:POS:Res
$CHI:NEG:Que
$CHI:NEG:Res
It is not necessary to explicitly type out each of the eight combinations of codes. With
the +s1 switch turned on, each code at a level is copied across the branches so that all of
the siblings on a given level have the same set of offspring. A more extensive example of
a file that uses this type of inheritance is the system for error coding given in the
/coder/codeserr.cut file.
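Under the +s1 switch, the expansion amounts to a cross-product of the code fields at each level. A sketch of the idea (not CLAN's internal code):

```python
from itertools import product

# With +s1, the codes at each level are shared across all siblings,
# so the complete code set is the cross-product of the three fields.
speakers = ["$MOT", "$CHI"]
valences = ["POS", "NEG"]
acts = ["Que", "Res"]

codes = [":".join(parts) for parts in product(speakers, valences, acts)]
print(len(codes))   # 8
print(codes[0])     # $MOT:POS:Que
print(codes[-1])    # $CHI:NEG:Res
```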
If not all codes at a given level occur within each of the codes at the next highest
level, each individual combination must be spelled out explicitly and the +s option should
not be used. The second line in the file should declare the name for your dependent tier. It
should end with a tab, so that the tab is inserted automatically in the line you are constructing. A single codes.cut file can include coding systems for many different dependent
tiers, with each system in order in the file and beginning with a tier identifier such as %spa:.
Setting up the codes.cut file properly is the trickiest part of Coder Mode. Once properly
specified, however, it rarely requires modification. If you have problems getting the editor
to work, chances are the problem is with your codes.cut file.
5 Media Linkage
In the old days, transcribers would use a foot pedal to control the rewinding and
replaying of tapes. With the advent of digitized audio and video, it is now possible to use
the computer to control the replay of sound during transcription. Moreover, it is possible
to link specific segments of the digitized audio or video to segments of the computerized
transcript. This linkage is achieved by inserting a header tier of this shape
@Media: clip, audio
The first field in the @Media line is the name of the media file. You do not need to
include the extension of the media file name. The second field in the @Media header tells
whether the media is audio, video, or missing. Each transcript should be associated with
one and only one media file. For media linkage to work, it is crucial to include the @Media
line before you begin linking, and the media file name must exactly match the transcript
file name.
Once this header tier is entered, you can use various methods to insert sound markers
that appear initially to the user as bullets. When these bullets are opened, they look like
this:
*ROS: alert [!] alert ! 1927_4086.
The size and shape of the bullet character varies across different fonts, but it will usually
be a bit darker than what you see above. The information in the bullet allows for accurate
transcription and immediate playback directly from the transcript. The first number in the
bullet indicates the beginning of the segment in milliseconds and the second number
indicates the end in milliseconds.
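Because both numbers are in milliseconds, the length of the linked segment is just their difference. For example (a sketch, not a CLAN command):

```python
def segment_duration(begin_ms, end_ms):
    """Duration in seconds of the segment marked by a bullet."""
    return (end_ms - begin_ms) / 1000.0

# The bullet 1927_4086 marks a segment running from 1.927 s to 4.086 s:
print(segment_duration(1927, 4086))  # 2.159
```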
Once a CHAT file has been linked to audio or video, it is easy to play back the
interaction from the transcript using “Continuous Playback” mode (esc-8, remember to
always release the escape key before typing the next key). In this mode, the waveform
display is turned off and the computer plays back the entire transcript, one utterance after
another, while moving the cursor and adjusting the screen to continually display the current
utterances. This has the effect of “following the bouncing ball” as in the old sing-along
cartoons or karaoke video. In Continuous Movie Playback Mode, the video is played as
the cursor highlights utterances in the text.
To create a text that can be played back and studied in this way, however, the user can
make use of any combination of six separate methods: sonic mode, transcriber mode, video
mode, sound walker, time mark editing, and exporting to partitur editors. This chapter
describes each of these six methods and leaves it up to the individual researcher which of
these methods is best for his or her project.
in Amadeus Pro, you should use CBR (Constant Bit Rate), Best quality, 320 kbps bitrate,
Id3v2.4 tags, automatic sample rate, and no joint stereo.
For use in TalkBank browser web display and editing with CLAN, video files should
be in MP4 (.mp4) format. Resolution can be 720x480 or 960x640. Very high resolution is
not good for web delivery and lower resolution can be too grainy.
same effect using command-I (insert time code). If you want to change the value of
a bullet already in the transcript, you do the same thing while your cursor is inside or
right next to the bullet in the transcript window.
7. Changing the waveform window. The +H and -H buttons on the left allow you to
increase or decrease the amount of time displayed in the window. For highly accurate
border placement, use a very wide horizontal display. The +V and -V buttons allow
you to control the amplitude of the waveform.
8. Scrolling. At the bottom of the sound window is a scrollbar that allows you to move
forward or backward in the sound file (please note that scrolling in the sound file can
take some time as the sound files for long recordings are very large and take up
processing capacity).
9. Utterance -> Waveform Display. To highlight the section of the waveform associ-
ated with an utterance, you need to triple-click on the bullet following the utterance
you want to replay. You must triple-click at a point just before the bullet to get reliable
movement of the waveform. If you do this correctly, the waveform will redisplay.
Then you can replay it by using command-click.
10. Waveform -> Utterance Display. Correspondingly, you can double-click an area of
the waveform and, if there is a corresponding bullet in the transcript, then the line
with that bullet will be highlighted.
11. Undo. If you make a mistake in linking or selecting an area, you can use the Undo
function with command-Z to undo that mistake.
12. Time duration information. Just above the waveform, you will see the editor mode
line. This is the black line that begins with the date. If you click on this line, you will
see three additional numbers. The first is the beginning and end time of the current
window in seconds. The second is the duration of the selected part of the waveform in
hours:minutes:seconds.milliseconds. The third is the
position of the cursor in seconds.milliseconds. If you click once again on the mode
line, you will see sampling rate information for the audio file.
you the functions you will need for this. Many of these functions apply to both video and
audio. Their use is summarized here:
1. <- will set back the current time. This function makes small changes at first and then
larger ones if you keep it pressed down.
2. -> will advance the current time. This function makes small changes at first and then
larger ones if you keep it pressed down.
3. control <- will decrease the beginning value for the segment in the text window as well
as the beginning value for the media in the video window. This function makes small
changes at first and then larger ones if you keep it pressed down.
4. control -> will increase the beginning value for the segment in the text window as well
as the beginning value for the media in the video window. This function makes small
changes at first and then larger ones if you keep it pressed down.
5. command <- will decrease the end value for the segment in the text window as
well as the end value for the media in the video window. This function makes
small changes at first and then larger ones if you keep it pressed down.
6. command -> will increase the end value for the segment in the text window as
well as the end value for the media in the video window. This function makes
small changes at first and then larger ones if you keep it pressed down.
7. / pressing the forward slash button with the start time active moves the start time
to the current time. If the current time is active, it moves the current time to the start time.
8. \ pressing the backslash button with the end time active moves the end time to
the current time. If the current time is active, it moves the current time to the end time.
9. Triple-clicking on the relevant cell has the same effect as the above two functions.
10. You can play the current segment either by pressing the repeat button or the space
button when the video window is active. The behavior of the repeat play function can
be altered by inserting various values in the box to the right of “repeat”. These are
illustrated in this way:
-400 add 400 milliseconds to the beginning of the segment to be repeated
+400 add 400 milliseconds to the end of the segment to be repeated
b400 play the first 400 milliseconds of the segment
e400 play the last 400 milliseconds of the segment
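These values adjust the interval that the repeat function replays. The following Python reconstruction is one interpretation of the description above, not CLAN's actual code:

```python
def repeat_interval(begin, end, value):
    """Return the (begin, end) interval in ms played for a repeat-box value."""
    amount = int(value[1:])
    if value.startswith("-"):        # start N ms earlier, e.g. "-400"
        return begin - amount, end
    if value.startswith("+"):        # end N ms later, e.g. "+400"
        return begin, end + amount
    if value.startswith("b"):        # play only the first N ms of the segment
        return begin, begin + amount
    if value.startswith("e"):        # play only the last N ms of the segment
        return end - amount, end
    return begin, end

print(repeat_interval(1927, 4086, "-400"))  # (1527, 4086)
print(repeat_interval(1927, 4086, "b400"))  # (1927, 2327)
```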
If you would like to use a real foot pedal with SoundWalker, you can order one
(Windows only) from www.xkeys.com. This foot pedal installs along with the keyboard
and allows you to bind F6, F7, and F8 to the left, middle, and right pedals for the functions
of rewind, play, and forward.
multiple video files using a single CHAT transcript. To do this, you should name your
videos with a constant first part of the name and then add an additional variable part to
distinguish each video. For example, you could have “scale01” as your constant name and
then the three videos would have the names scale01-1.mov, scale01-2.mov, and scale01-
3.mov. If this is your naming convention for the three videos, then you would have these
two lines in your *.cha transcript file:
@Media: scale01, video
@Videos: 1, 2, 3
If you want somewhat more mnemonic names, you could name the files as scale05-
1center, scale05-2left, and scale05-3right and then the @Videos line would be:
@Videos: 1center, 2left, 3right
When you open the transcript and start playback using continuous playback with esc-8
or some other method, the default video will be the first one in the list. To switch to
playback from another video clip, stop the playback and type F3 followed by the number
of the video you want to play. For example, if you want to shift playback from scale05-1
to scale05-3, you should type F3-3. A sample file with three videos for testing out this
process can be downloaded from https://talkbank.org/resources/F3.
6 Other Features
6.1 Supplementary Commands
The chapter on Supplementary Commands lists several basic commands for working
with files. They are batch for creating batch files, cd for changing directory, dir for listing
the files in a directory, info for getting a list of possible commands, ren for renaming files,
and rm for removing or deleting files. See the descriptions in that chapter for details.
The patterns for the +sg switch are much less complicated, but you can see these by typing:
freq +sg
These patterns for +sm and +sg can also be used with KWAL and COMBO.
6.5 Macros
CLAN can store up to 10 Macros. Typing esc-n will open a dialog that lets you assign
strings to numbers. For example, you might want to assign the string %spa: as Macro #1.
You would type %spa: into the box next to Macro String: and then enter the number “1”
above. Then you could insert this string by typing control-w-1. You will also see that just
typing control-w pops up a list of all the macros you have assigned.
6.6 Aliases
If you make frequent use of a CLAN command with specific switches, you can save
time and memory and increase reliability by creating an alias. For example, this alias
trim kwal +t* +t@ +t% +d +f
will save you from having to remember all the details of how to configure KWAL to
remove a given dependent tier. So, if you then type:
trim –t%mor *.cha
Note that the alias saves you from typing the full KWAL command and its switches, but
you still have to supply the trailing –t%mor switch and the *.cha file name.
If you want to remove both %mor and %gra lines, you can then just type:
trim –t%mor –t%gra *.cha
This trim alias is already included in the 0aliases.cut file in CLAN/lib/fixes. If you type
the word “trim” by itself, CLAN will give you the usage message. The other standard alias
is
chat2text flo +cr +t*
We will add additional aliases from time to time. However, if you want to create your
own aliases, then you should create a file called aliases.cut that you put into the CLAN/lib
folder. The format of that file is to have one command per line with the alias at the
beginning of the line without spaces, then a space, and then the full CLAN command.
2. a copy of the file that the program was being run on,
3. the complete command line used when the malfunction occurred,
4. all the results obtained by use of that command, and
5. the date of compilation of your CLAN program, which you can find by clicking on
“About CLAN” at the top left of the menu bar on Macintosh or the “Help CLAN”
option at the top right of the menu bar for Windows.
Use WinZip or Stuffit to save the input and output files and include them as an e-mail
attachment. Please try to create the smallest possible file you can that will still illustrate the
bug.
7 Analysis Commands
The analytic work of CLAN is performed by a series of commands that search for
strings and compute a variety of indices. These commands are all run from the Commands
window. In this section, we will examine each of the commands and the various options
that they take. The commands are listed alphabetically. The following table provides an
overview of the various CLAN commands. The CHECK program is included here,
because it is so important for all aspects of use of CLAN. To go directly to any command
in this list, click on the page number.
CLAN also includes two other major groups of commands. The first group is used to
perform morphosyntactic analysis on files by tagging words for their part of speech and
detecting grammatical relations. These programs are discussed in the MOR manual. In
addition, CLAN includes a large group of Utility commands that are described in the
chapter on Utility Commands.
The best way to see a complete list of options for a command is to type the name of
the command followed by a carriage return in the Commands window. For example, if
you type just the word chip, you will see a list of all the available options for CHIP. You
can see a list of all available commands by typing "info".
7.1 CHAINS
CHAINS is used to track sequences of interactional codes. These codes must be entered
by hand on a single specified coding tier. To test out CHAINS, you may wish to try the file
chains.cha that contains the following sample data.
@Begin
@Participants: CHI Sarah Target_child, MOT Carol Mother
*MOT: sure go ahead [^c].
%cod: $A
%spa: $nia:gi
*CHI: can I [^c] can I really [^c].
%cod: $A $D. $B.
%spa: $nia:fp $npp:yq.
%sit: $ext $why. $mor
*MOT: you do [^c] or you don't [^c].
%cod: $B $C.
%spa: $npp:pa
*MOT: that's it [^c].
%cod: $C
%spa: $nia:pa
@End
The symbol [^c] in this file is used to delimit clauses. Currently, its only role is within
the context of CHAINS. The %cod coding tier is a project-specific tier used to code possi-
ble worlds, as defined by narrative theory. The %cod, %sit, and %spa tiers have periods
inserted to indicate the correspondence between [^c] clausal units on the main line and
sequences of codes on the dependent tier.
To change the order in which codes are displayed in the output, create a file called
codes.ord. This file could be in either your working directory or in the \childes\clan\lib
directory. CHAINS will automatically find this file. If the file is not found, the codes
are displayed in alphabetical order. In the codes.ord file, list all codes in any
order you like, one code per line. You can list more codes than could be found in any one
file. But if you do not list all the codes, the missing codes will be inserted in alphabetical
order. All codes must begin with the $ symbol.
you will get a complete analysis of all chains of individual speech acts for all speakers,
as in the following output:
It is also possible to use the +s switch to merge the analysis across the various speech
act codes. If you do this, alternative instances will still be reported, separated by commas.
Here is an example:
chains +d +t%spa chains.cha +s$nia:%
You can use CHAINS to track two coding tiers at a time. For example, one can look at
chains across both the %cod and the %sit tiers by using the following command. This com-
mand also illustrates the use of the +c switch, which allows the user to define units of
analysis lower than the utterance. In the example file, the [^c] symbol is used to delimit
clauses. The following command makes use of this marking:
chains +c"[^c]" +d +t%cod chains.cha +t%sit
+d Use this switch to change zeroes to spaces in the output. The following command
illustrates this option:
chains +d +t%spa chains.cha +s$nia:%
The +d1 value of this option works the same as +d, while also displaying every input
line in the output.
+sS This option is used to specify codes to track. For example, +s$b will track only the
$b code. A set of codes to be tracked can be placed in a file and tracked using the
form +s@filename. In the examples given earlier, the following command was used
to illustrate this feature:
chains +d +t%spa chains.cha +s$nia:%
7.2 Chatter
The Chatter program is not included in CLAN. However, for full validation of CHAT
transcripts and inclusion in TalkBank, all transcripts must be validated by Chatter. When
creating a new CHAT transcript, it is easiest to use CHECK on individual files for
validation, as described in the next section. Often users submit corpora that have only been
validated by CHECK, and then the TalkBank staff will run Chatter for final validation.
However, it is most helpful for users to run Chatter themselves. Chatter can be downloaded
from https://talkbank.org/software/chatter.html. That page also provides additional
instructions for running Chatter. Chatter only runs on complete folders, not on individual
files, although it analyzes each file in the specified folder or folder hierarchy.
After running, Chatter creates a file called 0errors.cut in each folder. It is possible to
double-click on errors reported in that file to open the site of the error in the original file
for correction. If necessary, you can use the column number given for the error to find its
precise location. After correcting errors, you can run Chatter again and there will hopefully
be fewer or no errors. Once all errors detected by Chatter are corrected, a corpus is ready
for inclusion in one of the TalkBank databases.
7.3 CHECK
Checking the syntactic accuracy of a file can be done in two ways. One method is to
work within the editor. In the editor, you can start up the CHECK program by just typing
esc-L. Alternatively, you can run CHECK as a separate program. The CHECK program
checks the syntax of the specified CHAT files. If errors are found, the offending line is
printed, followed by a description of the problem.
please contact [email protected] to discuss ways in which we can extend the CHAT system
and its reflection in the XML Schema.
7.4 CHIP
CHIP was designed and written by Jeffrey Sokolov. The program analyzes specified
pairs of utterances. CHIP has been used to explore parental input, the relation between
speech acts and imitation, and individual differences in imitativeness in both normal and
language-impaired children. Researchers who publish work based on the use of this pro-
gram should cite Sokolov and MacWhinney (1990). CHIP now works on the %mor tier
by default. To run CHIP on the main speaker tier, as illustrated in the following examples,
please add the -t%mor option.
There are four major aspects of CHIP: (1) the tier creation system, (2) the coding
system, (3) the technique for defining substitution classes, and (4) the nature of the
summary statistics.
utterance and the second is the “response” utterance. The response is compared to the
source. Speakers are designated by the +b and +c codes. An example of a minimal CHIP
command is as follows:
chip +bMOT +cCHI chip.cha
We can run this command on the following seven-utterance chip.cha file that is
distributed with CLAN, which is given here without the %mor and %gra lines:
@Begin
@Participants: MOT Mother, CHI Child
*MOT: what's that?
*CHI: hat.
*MOT: a hat!
*CHI: a hat.
*MOT: and what's this?
*CHI: that a hat!
*MOT: yes that's the hat.
@End
The output from running this simple CHIP command on this short file is as follows:
chip (02-Apr-2024) is conducting analyses on:
ALL speaker tiers
and those speakers' ONLY dependent tiers matching: %MOR;
****************************************
From file <chip.cha>
*MOT: what's that ?
%mor: pro:int|what~cop|be&3S pro:dem|that ?
*CHI: hat .
%mor: n|hat .
%chi: $NO_REP $REP = 0.00
*MOT: a hat !
%mor: det:art|a n|hat !
%asr: $NO_REP $REP = 0.00
%adu: $EXA:hat $ADD:a $EXPAN $DIST = 1 $REP = 0.50
*CHI: a hat .
%mor: det:art|a n|hat .
%csr: $EXA:hat $ADD:a $EXPAN $DIST = 2 $REP = 0.50
%chi: $EXA:a-hat $EXACT $DIST = 1 $REP = 1.00
*MOT: and what's this ?
%mor: coord|and pro:int|what~cop|be&3S pro:dem|this ?
%asr: $NO_REP $REP = 0.00
%adu: $NO_REP $REP = 0.00
*CHI: that a hat !
%mor: pro:rel|that det:art|a n|hat !
%csr: $EXA:a-hat $ADD:that $EXPAN $DIST = 2 $REP = 0.67
%chi: $NO_REP $REP = 0.00
*MOT: yes that's the hat .
%mor: co|yes pro:dem|that~cop|be&3S det:art|the n|hat .
%asr: $NO_REP $REP = 0.00
%adu: $EXA:hat $ADD:yes-that's-the $DEL:that-a $DIST = 1 $REP =
0.25
The output also includes a long set of summary statistics which are discussed later. In
the first part of this output, CHIP has introduced four different dependent tiers:
%chi: This tier is an analysis of the child’s response to an adult’s utterance, so the adult’s
utterance is the source and the child’s utterance is the response.
%adu: This tier is an analysis of the adult’s response to a child’s utterance, so the child is
the source and the adult is the response.
%csr: This tier is an analysis of the child’s self repetitions. Here the child is both the source
and the response.
%asr: This tier is an analysis of the adult’s self repetitions. Here the adult is both the source
and the response.
By default, CHIP produces all four of these tiers. However, by using the -n option, the
user can limit the tiers that are produced. Three combinations are possible:
1. You can use both -ns and -nb. The -ns switch excludes both the %csr tier and the %asr
tier. The -nb switch excludes the %adu tier. Use of both switches results in an analysis
that computes only the %chi tier.
2. You can use both -ns and -nc. The -ns switch excludes both the %csr tier and the %asr
tier. The -nc switch excludes the %chi tier. Use of both these switches results in an
analysis that computes only the %adu tier.
3. You can use both -nb and -nc. This results in an analysis that produces only the %csr
and the %asr tiers.
It is not possible to use all three of these switches at once.
The %adu dependent tier indicates that the adult’s response contained an EXAct match of
the string “hat,” the ADDition of the string “yes-that’s-the” and the DELetion of “a.” The
DIST=1 indicates that the adult’s response was “one” utterance from the child’s, and the
repetition index for this comparison was 0.25 (1 matching stem divided by 4 total stems in
the adult’s response). The maximum value for DIST is 7.
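The arithmetic behind the $REP measure can be sketched in Python. This is an illustrative sketch only, not CHIP's implementation: the function name is hypothetical, and real CHIP compares stems taken from the %mor tier rather than raw word forms.

```python
def repetition_index(source_stems, response_stems):
    """Proportion of stems in the response that also occur in the
    source -- a simplified sketch of CHIP's $REP measure."""
    source = set(source_stems)
    matches = sum(1 for stem in response_stems if stem in source)
    return matches / len(response_stems)

# Child source "that a hat" vs. adult response "yes that's the hat":
# only "hat" matches, so 1 / 4 = 0.25, as on the final %adu line above.
print(repetition_index(["that", "a", "hat"], ["yes", "that's", "the", "hat"]))
```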
CHIP also takes advantage of CHAT-style morphological coding. Upon encountering
a word, the program determines the word’s stem and then stores any associated prefixes or
suffixes along with the stem. During the coding process, if lexical stems match exactly, the
program then also looks for additions, deletions, repetitions, or substitutions of attached
morphemes.
With the substitution option, the %adu line indicates that there was an EXAct repetition
of hat, an ADDition of the string yes that’s, and a within-class substitution of the for a. If the substitution option
is used, EXPANsions and REDUCtions are tracked for the substituted words only. In
addition to modifying the dependent tier, using the substitution option also affects the
summary statistics that are produced. With the substitution option, the summary statistics
will be calculated relative only to the words that are substituted. In many cases, you will
want to run CHIP analyses both with and without the substitution option and compare the
contrasting analyses.
The summary statistics report values for each of the coding categories for each speaker type, as outlined below. The
definition of each of these measures is as follows. In these codes, the asterisk stands for
any one of the four basic operations of ADD, DEL, EXA, and SUB.
Total # of Utterances The number of utterances for all speakers regardless of the number
of intervening utterances and speaker identification.
Total Responses The total number of responses for each speaker type regardless of
amount of overlap.
Overlap The number of responses in which there is an overlap of at least one word stem
in the source and response utterances.
No Overlap The number of responses in which there is NO overlap between the source
and response utterances.
Avg_Dist The sum of the DIST values divided by the total number of overlapping
utterances.
%_Overlap The percentage of overlapping responses over the total number of
responses.
Rep_Index Average proportion of repetition between the source and response utterance
across all the overlapping responses in the data.
*_OPS The total (absolute) number of add, delete, exact, or substitution operations for
all overlapping utterance pairs in the data.
%_*_OPS The numerator in these percentages is the operator being tracked and the
denominator is the sum of all four operator types.
*_WORD The total (absolute) number of add, delete, exact, or substitution words for
all overlapping utterance pairs in the data.
%_*_WORDS The numerator in these percentages is the word operator being
tracked and the denominator is the sum of all four word operator types.
MORPH_* The total number of morphological changes on exactly matching stems.
%_MORPH_* The total number of morphological changes divided by the number of
exactly matching stems.
AV_WORD_* The average number of words per operation across all the overlapping
utterance pairs in the data.
FRONTED The number of lexical items from the word list that have been fronted.
EXACT The number of exactly matching responses.
EXPAN The number of responses containing only exact matches and additions.
REDUC The number of responses containing only exact-matches and deletions.
SUBST The number of responses containing only exact matches and substitutions.
IMITAT Index of total imitativeness, using the four indices above. It is this sum:
EXACT+EXPAN+REDUC+SUBST
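The four imitativeness categories can be sketched as a simple decision over which operation types a response contains. The function name and inputs below are hypothetical illustrations, not CHIP's internals:

```python
def classify_response(exact, added, deleted, substituted):
    """Assign one of the four imitativeness categories defined above,
    given the lists of exact matches, additions, deletions, and
    substitutions found in a response (a sketch, not CHIP's code)."""
    if exact and not (added or deleted or substituted):
        return "EXACT"
    if exact and added and not (deleted or substituted):
        return "EXPAN"
    if exact and deleted and not (added or substituted):
        return "REDUC"
    if exact and substituted and not (added or deleted):
        return "SUBST"
    return "OTHER"

# The %adu line "$EXA:hat $ADD:a $EXPAN" from the example is an expansion:
print(classify_response(exact=["hat"], added=["a"], deleted=[], substituted=[]))
```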
+b Specify that speaker ID S is an “adult.” The speaker does not actually have to be an
adult. The “b” simply indicates which speaker is taken to be the source. If you want
to study the “child” as respondent, you will focus on the %chi line. If you want to
focus on the “adult” as respondent, you will focus on the %adu line.
+c Specify that speaker ID S is a “child.” The speaker does not actually have to be a
child. The “c” simply indicates which speaker is taken to be the “response”. If you
want to study the “child” as respondent, you should focus on the %chi line. If you
want to focus on the “adult” as respondent, you should focus on the %adu line.
+d Using +d with no further number outputs only coding tiers, which are useful for
iterative analyses. Using +d1 outputs only summary statistics, which can then be
sent to a statistical program.
+g Enable the substitution option.
-h Use a word list file. The target file is specified after the letter “h.” Words to be
excluded are searched for in the target file. The use of an exclude file enables CHIP
to filter out variations in words that are less important for the message, including
words such as okay or yeah. Standard CLAN wildcards or metacharacters may be
used anywhere in the word list.
+n This switch has three values: +nb, +nc, and +ns. See the examples given earlier for
a discussion of the use of these switches in combination.
+qN Set the utterance window to N utterances. The default window is seven utterances.
CHIP identifies the source-response utterances pairs to code. When a response is
encountered, the program works backwards (through a window determined by the
+q option) until it identifies the most recent potential source utterance. Only one
source utterance is coded for each response utterance. Once the source-response pair
has been identified, a simple matching procedure is performed.
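The backward-scanning pairing procedure can be sketched as follows. This is a simplified illustration under assumed data structures (a list of speaker/utterance pairs); the names are hypothetical, not CHIP's own:

```python
def find_source(utterances, response_index, source_speaker, window=7):
    """Work backwards from a response through at most `window`
    utterances and return the index of the most recent utterance by
    the source speaker, or None -- a sketch of CHIP's +q pairing."""
    lowest = max(0, response_index - window)
    for i in range(response_index - 1, lowest - 1, -1):
        if utterances[i][0] == source_speaker:
            return i
    return None

turns = [("MOT", "what's that ?"), ("CHI", "hat ."), ("MOT", "a hat !")]
print(find_source(turns, 1, "MOT"))  # prints 0: the response pairs with turn 0
```

The DIST value reported by CHIP corresponds to the gap between the response index and the source index found this way.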
+x Set the minimum repetition index for coding.
CHIP also uses several options that are shared with other commands. For a complete
list of options for a command, type the name of the command followed by a carriage return
in the Commands window. Information regarding the additional options shared across
commands can be found in the chapter on Options.
7.5 CHIPUTIL
This program is designed to identify the utterance which is the source of a repetition.
It does this by outputting a pair of utterances, i.e. both the source and the target. By using
the +sS switch you can limit tracking to sources at a particular distance from the target. If
you run this command:
chip +bMOT +cCHI chip.cha +f
on the chip.cha example used in the previous section, and then run this command on the
chip.chip.cex output:
chiputil chip.chip.cex +f +s
you will get this output, where the +s switch limits output to imitations:
*MOT: a hat !
%chU: $SOURCE=1
%mor: det:art|a n|hat !
%asr: $NO_REP $REP = 0.00
7.6 COMBO
COMBO provides the user with ways of composing Boolean search strings to match
patterns of letters, words, or groups of words in the data files. This program is particularly
important for researchers who are interested in syntactic analysis. The search strings are
specified with either the +s/-s option or in a separate file. Use of the +s switch is obligatory
in COMBO. When learning to use COMBO, what is most tricky is learning how to specify
the correct search strings.
Inserting the ^ operator between two strings causes the program to search for the first
string followed by the second string. The + operator inserted between two strings causes
the program to search for either of the two strings. In this case, it is not necessary for both
to match the text to have a successful match of the whole expression. Any one match is
sufficient. The ! operator inserted before a string causes the program to match a string of
text that does not contain that string.
The items of the regular expression will be matched to the items in the text only if they
directly follow one another. For example, the expression big^cat will match only the word
big directly followed by the word cat as in big cat. To find the word big followed by the
word cat immediately or otherwise, use the metacharacter * between the items big and cat,
as in big^*^cat. This expression will match, for example, big black cat. Notice that, in this
example, * ends up matching not just any string of characters, but any string of words or
characters up to the point where cat is matched. Inside a word, such as go*, the asterisk
stands for any number of characters. In the form ^*^, it stands for any number of words.
The * alone cannot be used in conjunction with the +g or +x option.
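The word-oriented matching rule described above can be sketched in Python. This is an illustration of the rule only, not COMBO's own code; the ^-joined pattern is represented as a list, so big^*^cat becomes ["big", "*", "cat"]:

```python
def seq_match(utterance, pattern):
    """Word-oriented match for a ^-joined pattern in which "*" stands
    for any number of words (a sketch of the rule, not COMBO's code)."""
    words = utterance.split()

    def match(wi, pi):
        if pi == len(pattern):
            return True
        if pattern[pi] == "*":
            # "*" may absorb zero or more words before the next item.
            return any(match(j, pi + 1) for j in range(wi, len(words) + 1))
        if wi < len(words) and words[wi] == pattern[pi]:
            return match(wi + 1, pi + 1)
        return False

    return any(match(i, 0) for i in range(len(words)))

print(seq_match("the big black cat ran", ["big", "*", "cat"]))  # True
```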
The underscore is used to “stand in” for any single character. If you want to match
any single word, you can use the underscore with the asterisk as in +s"_*." which will
match any single word followed by a period. For example, in the string cat., the underscore
would match c, the asterisk would match at and the period would match the period.
The backslash (\) is used to quote either the asterisk or the underline. When you want
to search for the actual characters * and _, rather than using them as metacharacters, you
insert the \ character before them.
Using metacharacters can be quite helpful in defining search strings. Suppose you want
to search for the words weight, weighs, weighing, weighed, and weigh. You could use the
string weigh* to find all the previously mentioned forms. Metacharacters may be used
anywhere in the search string.
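Within a single word, these metacharacters behave like a restricted regular-expression syntax. A rough translation into a Python regex, shown here as an illustrative sketch rather than COMBO's implementation, might look like this:

```python
import re

def word_pattern_to_regex(pattern):
    """Translate the single-word metacharacters described above into a
    Python regex: * = any characters, _ = one character, and a
    backslash quotes the next character (an illustrative sketch)."""
    out, i = [], 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "\\" and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))
            i += 2
        else:
            out.append({"*": ".*", "_": "."}.get(ch, re.escape(ch)))
            i += 1
    return "^" + "".join(out) + "$"

# weigh* matches weight, weighs, weighing, weighed, and weigh:
for word in ["weight", "weighs", "weighing", "weighed", "weigh"]:
    assert re.match(word_pattern_to_regex("weigh*"), word)
```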
When COMBO finds a match to a search string, it prints out the entire utterance in
which the search string matched, along with any previous context or following context that
had been included with the +w or -w switches. This whole area printed out is what we will
call the “window.”
If you are interested not just in cases where “to” immediately follows “want,” but also cases
where it eventually follows, you can use the following command syntax:
combo +s"want^*^to" sample.cha
The next command searches the file and prints out any window that contains both “want”
and “to” in any order:
combo +s"want^to" +x sample.cha
The next command searches sample.cha and sample2.cha for the words “wonderful” or
“chalk” and prints the window that contains either word:
combo +s"wonderful+chalk" sample*.cha
The next command searches sample.cha for the word “neat” when it is not directly followed
by the words “toy” or “toy-s.” Note that you need the ^ in addition to the ! to clearly specify
the exact nature of the search you wish to be performed.
combo +s"neat^!toy*" sample.cha
In this next example, the COMBO program will search the text for either the word “see”
directly followed by the word “what” or all the words matching “toy*.”
combo +s"see^(what+toy*)" sample.cha
You can use parentheses to group the search strings unambiguously as in the next example:
combo +s"what*^(other+that*)" sample.cha
This command causes the program to search for words matching “what” followed by either
the word “that” or the word “other.” An example of the types of strings that would be found
are: “what that,” “what’s that,” and “what other.” It will not match “what is that” or “what
do you want.” Parentheses are necessary in the command line because the program reads
the string from left to right. Parentheses are also important in the next example.
combo +s"the^*^!grey^*^(dog+cat)" sample2.cha
This command causes the program to search the file sample2.cha for the word the followed,
immediately or eventually, by any word or words except grey. This combination is then to be
followed by either dog or cat. The intention of this search is to find strings like the big dog
or the boy with a cat, and not to match strings like the big grey cat. Note the use of the
parentheses in the example. Without parentheses around dog+cat, the program would
match simply cat. In this example, the sequence ^*^ is used to indicate “immediately or
later.” If we had used only the symbol ^ instead of the ^*^, we would have matched only
strings in which the word immediately following the was not grey.
To use this form, you first need to create a file of prepositions called “preps.cut” with
one preposition on each line and a file of articles called “arts.cut” with one article on each
line. By maintaining files of words for different parts of speech or different semantic fields,
you can use COMBO to achieve a wide variety of syntactic and semantic analyses. Some
suggestions for words to be grouped into files are given in the chapter of the CHAT manual
on word lists. Some particularly easy lists to create would be those including all the modal
verbs, all the articles, or all the prepositions. When building these lists, remember the pos-
sible existence of dialect and spelling variations such as dat for that.
Here is another example of how to refer to files in search strings. In this case, we are
looking in Spanish files for words that follow the definite articles la and el and begin with
either vowels or the silent “h” followed by a vowel. So, we can have one file, called
arts.cut, with the words el and la each on their own line. Then, we can have another file,
called vowels.cut, that looks like this:
hu*
u*
ha*
a* etc.
In this example, the +g5 option specifies that the words want, to, as well as the $INI on the
%spa line may occur in any order. The +t%spa option must be added to allow the program
to look at the %spa tier when searching for a match. The main tier is always searched, but
dependent tiers are only searched if they are specifically included with the +t switch. The
+d option is used to specify that the information produced by the program, such as file
name, line number and exact position of words on the tier, should be excluded from the
output. This way the output is in a legal CHAT format and can be used as an input to
another CLAN program, FREQ in this case.
To match a whole word before the utterance delimiters, you can use the asterisk wildcard preceded by an underline. Note that this use
of the asterisk treats it as referring to any number of letters, rather than any number of
words. By itself, the asterisk in COMBO search strings usually means any number of
words, but when preceded by the underline, it means any number of characters. Here is the
full command:
combo +s"_*^(\!+?+.)" +f sample.cha
The +t*MOT switch tells the program to select only the main lines associated with the
speaker *MOT. The +t%spa tells the program to add the %spa tier to the *MOT main
speaker tiers. By default, the dependent tiers are excluded from the analysis. After this,
comes the file name, which can appear anywhere after the program name. The +s"want^to"
then tells the program to select only the *MOT clusters that contain the phrase want to.
You can then run programs like FREQ or MLU on the output.
Sometimes researchers want to maintain a copy of their data that is stripped of the
various coding tiers. This can be done by this command:
combo +s* +o@ -t% +f *.cha
The +o switch controls the addition of the header material that would otherwise be ex-
cluded from the output and the -t switch controls the deletion of the dependent tiers. It is
also possible to include or exclude individual speakers or dependent tiers by providing
additional +t or -t switches. The best way to understand the use of limiting for controlling
data display is to try the various options on a small sample file.
with all the prepositions (one on each line) and call it something like prep.cut. Then you
would create a second support file called something like combo.cut with this line:
"@prep.cut^the" "$Pthe" "%cod:"
The first string in this line gives the term used by the standard +s search switch. The second
string says that the code produced will be $Pthe. The third string says that this code
should be placed on a %cod line under the utterance that is matched. If there is no %cod
line there yet, one will be created. The COMBO command that uses this information would
then be:
combo +s"@combo.cut" +d4 filename.cha
You can include as many lines as you wish in the combo.cut file to control the addition of
additional codes and additional coding lines. Once you are done with this, you can use
these new codes to better control the inclusion and exclusion of utterances and other types of
searches.
In word-oriented mode, a search for the word air tells the computer to search for the expressions: _air_, _air., air?, air!, and so forth, where the
underline indicates a space.
The same expression air*^plane in string-oriented search mode will match airline
plane, airy plane, air in the plane or airplane. They will all be found because the
search string, in this case, specifies the string consisting of the letters “a,” “i,” and
“r”, followed by any number of characters, followed by the string “p,” “l,” “a,” “n,”
and “e.” In string-oriented search, the expression (air^plane) will match airplane but
not air plane because no space character was specified in the search string. In
general, the string-oriented mode is not as useful as the word-oriented mode. One of
the few cases when this mode is useful is when you want to find all but some given
forms. For example, if you are looking for all the forms of the verb kick except the
ing form, you can use the expression “kick*^! ^!ing” and the +g switch.
+g2: Do a string-oriented search on just one word. This option is for searching for strings
within each word.
+g3: Do not continue searching on a tier after the first failure. This option is for cases in
which users do not want to look for word patterns further down the tier if the first
match fails. It is used for searches with the "not", "!", operator.
+g4: Exclude utterance delimiters from search. This will remove all utterance delimiters
from the search string. It is useful, if you want to find the last word on the tier.
+g5: Make search <s1>^<s2> identical to search <s2>^<s1>. This option is used as a
short cut. Normally words specified this way "word1^word2" are searched for in a
specific order. This option will match for word1 and word2 regardless whether
word1 precedes word2 or follows it on the tier. Otherwise, the user would have to
specify (word1^word2)+(word2^word1). By default, the ^ operator means “followed
by,” but the +g6 option turns ^ into a true AND operator, so the COMBO search will
succeed only if all words separated by "^" are found anywhere on the cluster tier.
This also takes care of the situation when dependent tiers are not always in the same
order.
+o The +t switch is used to control the addition or deletion of particular tiers or lines
from the input and the output to COMBO. In some cases, you may want to include
a tier in the output that is not being included in the input. This typically happens
when you want to match a string in only one dependent tier, such as the %mor tier,
but you want all tiers to be included in the output. To do this you would use a
command of the following shape:
combo +t%mor +s"*ALL" +o% sample2.cha
+s This option is obligatory for COMBO. It is used to specify a regular expression to
search for in a data line(s). This option should be immediately followed by the
regular expression itself. The rules for forming a regular expression are discussed in
detail earlier in this section.
-s This switch allows you to exclude certain lines from a COMBO search. It can be
used in combination with the +s switch.
+t Dependent tiers can be included or excluded by using the +t option immediately
followed by the tier code. By default, COMBO excludes the header and dependent
code tiers from the search and output. However, when the dependent code tiers are
included by using the +t option, they are combined with their speaker tiers into
clusters. For example, if the search expression is the^*^kitten, the match would be
found even if the is on the speaker tier and kitten is on one of the speaker’s associated
dependent tiers. This feature is useful if one wants to select for analyses only speaker
tiers that contain specific word(s) on the main tier and some specific codes on the
dependent code tier. For example, if one wants to produce a frequency count of the
words want and to when either one of them is coded as an imitation on the %spa
line, or neat when it is a continuation on the %spa line, the following two commands
could be used:
combo +s(want^to^*^%spa:^*^$INI*)+(neat^*^%spa:^*^$CON*)
+t%spa +f +d sample.cha
In this example, the +s option specifies that the words want, to, and $INI may occur
in any order on the selected tiers. The +t%spa option must be added to allow the
program to look at the %spa tier when searching for a match. The +d option is used
to specify that the information produced by the program, such as file name, line
number and exact position of words on the tier, should be excluded from the output.
This way the output is in a legal CHAT format and can be used as an input to another
CLAN program, FREQ in this case. The same effect could also be obtained by using
the piping feature.
COMBO also uses several options that are shared with other commands. For a complete
list of options for a command, type the name of the command followed by a carriage return
in the Commands window. Information regarding the additional options shared across
commands can be found in the chapter on Options.
7.7 COOCCUR
The COOCCUR program tabulates co-occurrences of words. This is helpful for analyzing
syntactic clusters. By default, the cluster length is two words, but you can reset this
value just by inserting any integer up to 20 immediately after the +n option. The second
word of the initial cluster will become the first word of the following cluster, and so on.
cooccur +t*MOT +n3 sample.cha +f
The +t*MOT switch tells the program to select only the *MOT main speaker tiers. The
header and dependent code tiers are excluded by default. The +n3 option tells the program
to combine three words into a word cluster. The program will then go through all of *MOT
main speaker tiers in the sample.cha file, three words at a time. When COOCCUR reaches
the end of an utterance, it marks the end of a cluster, so that no clusters are broken across
speakers or across utterances. Co-occurrences of codes on the %mor line can be searched
using commands such as this example:
cooccur +t%mor -t* +s*def sample2.cha
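The overlapping windowing that COOCCUR applies, in which the trailing words of one cluster begin the next and no cluster crosses an utterance boundary, can be sketched as follows (a simplified illustration with hypothetical names, not COOCCUR's code):

```python
def word_clusters(utterances, n=2):
    """Overlapping n-word clusters in which the last n-1 words of one
    cluster begin the next, never crossing an utterance boundary --
    a sketch of COOCCUR's windowing."""
    clusters = []
    for utterance in utterances:
        words = utterance.split()
        for i in range(len(words) - n + 1):
            clusters.append(tuple(words[i:i + n]))
    return clusters

print(word_clusters(["you want the ball", "yes"], n=3))
# [('you', 'want', 'the'), ('want', 'the', 'ball')]
```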
7.8 DIST
This program produces a listing of the average distances between words or codes in a
file. DIST computes how many utterances exist between occurrences of a specified key
word or code. The following example demonstrates a use of the DIST program.
dist +t%spa -t* +b: sample.cha
This command line tells the program to look at the %spa tiers in the file sample.cha for
codes containing the : symbol. It then does a frequency count of each of these codes, as a
group, and counts the number of turns between occurrences. The -t* option causes the pro-
gram to ignore data from the main speaker tiers.
You can also use DIST to search for distances between words in the main line. For
example, when looking for the word frog in CHILDES/Frogs/WolfHemp/08/01w8.cha,
you would use this command:
dist +sfrog +g 01w8.cha
Note that the information about where the word first occurs, last occurs, and the distance
between occurrences is given in terms of turns, not line numbers.
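The turn-based bookkeeping behind DIST can be sketched in Python. This is an illustration only, with hypothetical names; DIST's actual output format differs:

```python
def turn_distances(tiers, keyword):
    """Turn indices where a keyword occurs and the gaps between
    successive occurrences, measured in turns rather than line
    numbers -- a sketch of what DIST tallies."""
    hits = [i for i, tier in enumerate(tiers) if keyword in tier.split()]
    gaps = [later - earlier for earlier, later in zip(hits, hits[1:])]
    return hits, gaps

turns = ["the frog sat", "a hat", "where is the frog ?", "the frog !"]
print(turn_distances(turns, "frog"))  # ([0, 2, 3], [2, 1])
```

The average distance DIST reports corresponds to the sum of the gaps divided by their number.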
by the b option, rather than all codes in addition to those containing your special
character.
DIST also uses several options that are shared with other commands. For a complete
list of options for a command, type the name of the command followed by a carriage return
in the Commands window. Information regarding the additional options shared across
commands can be found in the chapter on Options.
7.9 FREQ
One of the most powerful programs in CLAN is the FREQ program for frequency anal-
ysis. In its basic form, without special switches, it is also one of the easiest programs to use
and a good program to start with when learning to use CLAN. FREQ constructs a frequency
word count for user-specified files. A frequency word count is the calculation of the
number of times a word occurs in a file or set of files. FREQ produces a list of all the words
used in the file, along with their frequency counts, and calculates a type–token ratio. The
type–token ratio is found by calculating the total number of unique words used by a
selected speaker (or speakers) and dividing that number by the total number of words used
by the same speaker(s). It is generally used as a rough measure of lexical diversity. Of
course, the type–token ratio can only be used to compare samples of equivalent size,
because the ratio of types to tokens tends to vary with sample size.
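The computation itself is simple. Here is a Python sketch of the measure (not FREQ's code):

```python
def type_token_ratio(words):
    """Unique word types divided by total word tokens -- the measure
    of lexical diversity that FREQ reports."""
    return len(set(words)) / len(words)

sample = "a hat that a hat and a hat".split()
print(type_token_ratio(sample))  # 4 types / 8 tokens = 0.5
```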
This command would conduct a frequency analysis on all the articles that you have put in
the file called articles.cut. You can create the articles.cut file using either the CLAN editor
in Text Mode or some other editor saving in “text only.” The file looks like this:
a
the
an
If you wish to use this feature to search for items on the %mor tier, then use this format:
m;run
m;play
m;jump
This feature also works when used with KWAL and COMBO.
To count the frequencies of words on tiers marked with the [- spa] pre-code:
freq +s"[- spa]" *.cha
The language pre-code has a space in it, so it is important that when you specify this
language pre-code that you include the space and use quotes:
+s"[- spa]" +s"[- eng]" +s"<- spa>" +s"<- eng>"
Note that material after the +s switch is enclosed in quotation marks to guarantee that
wildcards will be correctly interpreted. For Macintosh and Windows, the quotes are the
best way of guaranteeing that a string is correctly interpreted. On Unix, only single quotes
can be used. In Unix, single quotes are necessary when the search string contains a $, |, or
> sign.
The next examples give additional search strings with asterisks and the output they will
yield when run on the sample file. Note that what may appear to be a single underline in
the second example is actually two underline characters.
String Output
*-acc 1 n:a|ball-acc
1 n:a|duck-acc
1 n:i|plane-acc
*-a__ 1 n:a|baby-all
1 n:a|ball-acc
1 n:a|duck-acc
1 n:i|grape-all
1 n:i|plane-acc
N:*|*-all 1 N:A|baby-all
1 N:I|grape-all
These examples show the use of the asterisk as a wildcard. When the asterisk is used,
FREQ gives a full output of each of the specific code types that match. If you do not want
to see the specific instances of the matches, you can use the percentage wildcard, as in the
following examples:
String Output
N:A|% 3 N:A|
%-ACC 3 -ACC
%-A__ 3 -ACC
2 -ALL
N:%|%-ACC 3 N:|-ACC
N:%|% 5 N:|
It is also possible to combine the use of the two types of wildcards, as in these examples:
String Output
N:%|*-ACC 1 N:|ball-acc
1 N:|duck-acc
1 N:|plane-acc
N:*|% 3 N:A|
2 N:I|
Researchers have also made extensive use of FREQ to tabulate speech act and interac-
tional codes. Often such codes are constructed using a taxonomic hierarchy. For example,
a code like $NIA:RP:NV has a three-level hierarchy. In the INCA-A system discussed in
the chapter on speech act coding in the CHAT manual, the first level codes the interchange
type; the second level codes the speech act or illocutionary force type; and the third level
codes the nature of the communicative channel. As in the case of the morphological exam-
ple cited earlier, one could use wildcards in the +s string to analyze at different levels. The
following examples show what the different wildcards will produce when analyzing the
%spa tier. The basic command here is:
freq +s"$*" +t%spa sample.cha
String Output
$* frequencies of all the three-level
codes in the %spa tier
$*:% frequencies of the interchange types
$%:*:% frequencies of the speech act codes
If some of the codes have only two levels rather than the complete set of three levels,
you need to use an additional % sign in the +s switch. Thus, the switch
+s"$%:*:%%"
will find all speech act codes, including both those with the third level coded and those
with only two levels coded.
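Collapsing hierarchical codes to a single level, which is what these wildcard patterns do, can be sketched in Python. This is illustrative only; the function is hypothetical, not part of FREQ:

```python
from collections import Counter

def level_counts(codes, level):
    """Tally hierarchical codes like $NIA:RP:NV at a single level:
    level 0 is the interchange type, level 1 the speech act, level 2
    the channel (a sketch of the collapsing the wildcards perform)."""
    counts = Counter()
    for code in codes:
        parts = code.lstrip("$").split(":")
        if len(parts) > level:
            counts[parts[level]] += 1
    return counts

spa_codes = ["$NIA:RP:NV", "$NIA:QN", "$DJF:ST"]
print(level_counts(spa_codes, 0))  # Counter({'NIA': 2, 'DJF': 1})
```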
As another example of how to use wild cards in FREQ, consider the task of counting
all the utterances from the different speakers in a file. In this case, you count the three-
letter header codes at the beginnings of utterances. To do this, you need the +y switch to
make sure FREQ sees these headers. The command is:
freq +y +s"\**:" *.cha
+sm;*,o%,o~
find all stems of all "adv" and erase all other markers:
+sm;*,|adv,o%
find all forms of "be" verb:
+sm;be
find all stems and parts-of-speech and erase other markers:
+sm;*,|*,o%
find only stems and parts-of-speech that have suffixes and erase other
markers:
+sm;*,|*,-*,o%
find all stems, parts-of-speech and distinguish those with suffix
and erase other markers:
+sm;*,|-*,-+*,o-%
find only stems and parts-of-speech that do not have suffixes and
erase other markers:
-sm-* +sm;*,|*,o%
find only noun words with "poss" parts-of-speech postclitic:
+sm|n,|n:*,~|poss
find all noun words and show postclitics, if they have any:
+sm|n,|n:* +r4
find all noun words and erase postclitics, if they have any:
+sm|n,|n:*,o~
If you are using an include file with the +s@filename option, the format of this file is:
m;firstword
m;anotherword
etc.
In other words, the format of the include file is the same as the format of the +sm
options, minus the +s. When using the +sm switch with FREQ, you must also include the
+t%mor switch to instruct FREQ to include searches of the %mor tier.
This output shows first the frequency, then the code from the %mor line and then the error
as coded on the main line. To find all PASTs with only "neg" errors, as in "word [* neg]",
you can use this command:
freq +u +t*CHI +sm-PAST,*neg +sm&PAST,*neg *.cha
The +f option sends the output to a file:
freq +f sample.cha
This results in the output being sent to sample.frq.cex. If you wish, you may specify a file
extension other than .frq.cex for the output file. For example, to have the output sent to a
file with the extension .mot.cex, you would specify:
freq +fmot sample.cha
Suppose, however, that you are using FREQ to produce output on a group of files rather
than on a single file. The following command will produce a separate output file for each
.cha file in the current directory:
freq +f *.cha
To specify that the frequency analysis for each of these files be computed separately but
stored in a single file, you must use the redirect symbol (>) and specify the name of the
output file. For example:
freq *.cha > freq.all
This command will keep the frequency analysis for each file separate and store them all in
a single file called freq.all. If there is already material in the freq.all file,
you may want to append the new material to the end of the old material. In this case, you
should use the form:
freq *.cha >> freq.all
Sometimes, however, researchers want to treat a whole group of files as a single database.
To derive a single frequency count for all the .cha files, you need to use the +u option:
freq +u *.cha
Again, you may use the redirect feature to specify the name of the output file, as in the
following:
freq +u *.cha > freq.all
The +t*CHI switch tells the program to select the main and dependent tiers associated
only with the speaker *CHI. The +t%spa tells the program to further narrow the selection.
It limits the analysis to the %spa dependent tiers and the *CHI main speaker tiers. The -t*
option signals the program to eliminate data found on the main speaker tier for NIC from
the analysis. The +s option tells the program to eliminate all the words that do not match
the $INI* string from the analysis. Quotes are needed for this +s switch to guarantee correct
interpretation of the asterisk. In general, it is safest to always use pairs of double quotes
with the +s switch. The +z20u option tells the program to look at only the first 20
utterances. Now the FREQ program can perform the desired analysis. This command line
will send the output to the screen only. You must use the +f option if you want it sent to a
file. By default, the header tiers are excluded from the analysis.
The +/-s switch can also be used in combination with special codes to pick out sections
of material in code-switching. For example, stretches of German language can be marked
inside a transcript of mostly English productions with this form:
*CHI: <ich meine> [@g] cow drinking.
You can use FREQ to create Excel output with crosstabulations between variables on
dependent tiers. For example, if you have two coding tiers called %xarg and %xsem, you
can crosstabulate using these commands:
freq +d8 +t%xarg +t%xsem +d2 sample.cha
freq +d8 +t%xarg +t%xsem +c5 +d2 sample.cha
freq +d8 +t%xarg +t%xsem sample.cha
freq +d8 +t%xarg +t%xsem +c5 sample.cha
that is grounded on whole word forms, rather than lemmas. For example, “run,” “runs,”
and “running” will all be treated as separate types. If you want to treat all forms of the
lemma “run” as a single type, you should run the file through MOR and POST to get a
disambiguated %mor line. Then you can run FREQ in a form such as this to get a lemma-
based TTR.
freq +sm;*,o% sample.mor.pst
Depending on the shape of your morphological forms, you may need to add some
additional +s switches to this sample command.
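The effect of a lemma-based count on the type–token ratio can be illustrated with a short Python sketch; the tokens and the stem map here are invented for illustration:

```python
# A minimal sketch of the lemma-based TTR idea: with a disambiguated
# %mor line, inflected forms share a stem, so "run", "runs", and
# "running" collapse into one type. The stem map here is hypothetical.
tokens = ["run", "runs", "running", "dog", "dogs", "run"]
stem_of = {"runs": "run", "running": "run", "dogs": "dog"}

lemmas = [stem_of.get(t, t) for t in tokens]
ttr_word = len(set(tokens)) / len(tokens)    # word-form TTR: 5 types / 6 tokens
ttr_lemma = len(set(lemmas)) / len(lemmas)   # lemma-based TTR: 2 types / 6 tokens
print(ttr_word, ttr_lemma)
```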
The first command will produce a sample.frq.cex file with the mother’s words and the
second will produce a sample.fr0.cex file with the child’s words. Next you should run
FREQ on the output files:
freq +y +o +u sample.f*
The output of these commands is a list of words with frequencies that are either 1 or 2. All
words with frequencies of 2 are shared between the two files and all words with frequencies
of 1 are unique to either the mother or the child.
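The logic behind this shared-vocabulary check can be sketched in Python; the word lists below are invented for illustration:

```python
# Hypothetical illustration of the logic behind "freq +y +o +u sample.f*":
# after merging the two one-speaker type lists, a word with a count of 2
# occurs in both lists, and a word with a count of 1 is unique to one speaker.
from collections import Counter

mother_words = {"ball", "dog", "go", "want"}   # types from the mother's FREQ output (invented)
child_words = {"ball", "dog", "kitty"}         # types from the child's FREQ output (invented)

merged = Counter()
for vocab in (mother_words, child_words):
    merged.update(vocab)

shared = {w for w, n in merged.items() if n == 2}
unique = {w for w, n in merged.items() if n == 1}
print(sorted(shared))   # words used by both speakers
print(sorted(unique))   # words used by only one speaker
```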
The top of that page has the core GRs, but just below are these further subtypes which are
important for most languages. Although the web documentation uses lowercase for these
tags, the actual %gra output is in uppercase.
If the %gra was produced by the earlier MOR system, the relevant GRs are:
CSUBJ: the finite clausal subject of another clause
COMP: the clausal complement of a verb
CPRED: a full clause that serves as the predicate nominal of verbs
CPOBJ: a full clause that serves as the object of a preposition
COBJ: a full clause that serves as the direct object
CJCT: a finite clause that attaches to a verb, adjective, or adverb
XJCT: a non-finite clause that attaches to a verb, adjective, or adverb
NJCT: the head of a complex NP with a PP attached as an adjunct of a noun. The inclusion
of this GR is optional.
CMOD: a finite clause that is a nominal modifier or complement
XMOD: a non-finite clause that is a nominal modifier or complement.
To locate and sum all of the complex constructions, one can use this command for UD:
freq +sg|CSUBJ* +sg|CCOMP +sg|XCOMP +sg|ACL* +sg|ADVCL* +sg|NSUBJ*
+d2 +t*PAR *.cha
The result is a stat.frq.xls file that you can open in Excel. It has everything you need to
compute the index for each of the input files, except for the number of tokens in the files.
To get that, you can run this command:
freq +t%gra +t*PAR +s% *.cha
You can then cut and paste those numbers into a column called TokenAllGR in the first
spreadsheet. Then you create another Excel column, which we can call TokenComplexGRs,
that sums the frequencies of the various complexity GRs. Finally, you create a third column
that divides TokenComplexGRs by TokenAllGR, and that is your complexity index. Thanks
to Kimberly Mueller for formulating this procedure. Using her test files, this procedure
spotted 74 embeddings. Of these, two were false alarms and there was one miss. So, the
overall accuracy of this procedure is about 95%, which compares favorably with results
from human coders.
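The final arithmetic of the index can be illustrated with a short Python sketch; the GR counts and the token total below are hypothetical, not taken from any real file:

```python
# Hypothetical counts for one file: complexity-GR tokens summed from the
# stat.frq.xls output, and the total number of GR tokens from the second
# FREQ command.
complex_grs = {"CSUBJ": 3, "CCOMP": 5, "XCOMP": 4, "ADVCL": 2}

token_complex_grs = sum(complex_grs.values())   # the TokenComplexGRs column
token_all_grs = 140                             # the TokenAllGR column (assumed)

complexity_index = token_complex_grs / token_all_grs
print(round(complexity_index, 3))
```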
types, tokens, and the type–token ratio. Word frequencies are not placed into the
output. You do not need to use the +f option with +d2 or +d3, since this is assumed.
+d4 This switch allows you to output just the type–token information.
+d5 This switch will output all words you are searching for, including those that occur
with zero frequency. This could happen, for example, if you use the +s switch to
search for a specific word and that word does not occur in the transcript. This switch
can be combined with other +d switches.
+d6 When used for searches on the main line, this switch will output matched forms with
a separate tabulation of replaced forms, errors, partial omissions, and full forms, as
in this example for “going” as the target word:
17 going
3 going
11 gonna [: going to]
2 goin(g)
1 go [: going] [* 0ing]
This switch can also be used on the %mor line in a form such as this:
freq +d6 adler01a.cha +sm|n*,o%
will produce separate counts for all instantiations of a given part of speech, organized by
part of speech. For example, the output for n:gerund would be:
2 n:gerund
1 n:gerund|go-PRESP
1 n:gerund|look-PRESP
+d7 This command links forms on a “source” tier with their corresponding words on a
“target” tier, yielding output such as this:
12 pro|you
8 you
4 you're
In this example, and by default, the first line gives the form on the %mor line as the
source tier, and the following lines give the corresponding main line or “target”
words. If you add the name of a tier, such as %gra, then that becomes the source.
This switch expects that the items on the two tiers be in one-to-one correspondence.
If you want to switch the display so that the target becomes the source, you can add
the +c5 switch. You can also specify a match between two dependent tiers, as in
this example:
freq +d7 +sm|cop +sg|ROOT +t%gra +t%mor t.cha
+d8 outputs words and frequencies of cross tabulation of one dependent tier with another
+o Normally, the output from FREQ is sorted alphabetically. This option can be used
to sort the output in descending frequency. The +o1 level will sort to create a reverse
concordance.
+o1 sort output by reverse concordance
+o2 sort by reverse concordance of first word; non-CHAT, preserve the whole line
+o3 By default, FREQ tabulates separate frequencies for each speaker. To see the
combined results across all speakers, use this switch.
+pS add S to word delimiters. (+p_ will break New_York into two words)
FREQ also uses several options that are shared with other commands, such as +f, +k,
+l, +y, +r, +s, +u, +x, +z, and others. For a complete list of options for a command, type
the name of the command followed by a carriage return in the Commands window.
Information regarding the additional options shared across commands can be found in the
chapter on Options.
If you want to merge results across files, add +u to the above command. If you want to
exclude unintelligible words and neologisms, add -sm|neo,|unk to the above command.
2. If you want to do this on the speaker line and exclude unintelligible words and
neologisms, use:
freq +t*PAR +o -s"xx" -s"<*\* n:uk*>" *.cha
If you want to get information about error productions, add +d6 to the above command.
If you want to send the results to a file instead of having them appear on the computer
screen, add +f to the above command.
3. If you want the frequency of all words from the %mor line in descending order,
by stems, with information on parts of speech, bound morphemes, and error codes, use:
freq +t*PAR +d6 +o +s"@r-*,o-%" *.cha
4. If you want a list and frequency count of all prefixes used by Participants in
descending order of frequency, merged across files in the folder, use:
freq +t*PAR +o +s"@r-*,#-*,o-%" +u *.cha
If you want the %mor line printed out with that information, use +d.
5. To get a frequency count of Participant word-level errors (see the Error Coding
sheet at the website for a description of these error codes), file by file, a basic command is:
freq +s"\[\* *\]" +t*PAR *.cha
If you want to include errors that were repeated and revised, add +r6 to the above command.
6. If you want to specify which errors you want listed and counted, you can list them
as in the following command. Remember to add +r6 to the command if you want word-
level errors within repetitions and retracings (e.g., *s:r, *s:r-rep, *s:r-ret).
freq +s"\[\* s*\]" +s"\[\* p*\]" +s"\[\* n*\]" +s"\[\* d*\]" +s"\[\* m*\]"
+s"\[\* f*\]" +t*PAR *.cha
7. Alternatively, you can create a CLAN cut file of all error types and use that
instead. Put the cut file in the same folder as the files you are analyzing and use:
freq +s@filename.cut +t*PAR *.cha
Remember to add +r6 to the command if you want to include errors within repetitions and
retracings.
If you want to create a list with frequencies of each error and the CHAT transcript line that
Triple click on the line with the filename at the end of the CLAN output to open the Excel
file. The Excel file itself will be in the folder you put as your working directory.
9. To get a frequency count of Participant errors at the sentence level (see the Error
Coding sheet at the website for a description of these error codes), a basic command is:
freq +s"<+ >" +t*PAR *.cha
If you want a certain type of sentence error, for example jargon, use <+ jar> inside the
quotation marks.
10. If you want to see all error productions associated with a target word, for example,
Cinderella, use:
freq +s"<: Cinderella>" *.cha
11. To list all parts of speech that occur in the files, merged across all the files, with
their corresponding frequencies, in descending order of frequency, use:
freq +t*PAR +d5 +o +sm|*,o% +u *.cha
12. If you want to list and count the frequency of all verb forms, stems only, merged
across files in a folder, use:
freq +t*PAR +sm;*,|v*,|aux*,|part*,|mod*,|cop*,o% +u *.cha
13. If you want the total number of nouns (of all types), stems only, merged across
files in a folder, use:
freq +t*PAR +d5 +o +sm|n,|n:*,o-% +u *.cha
14. If you want to count and list the nouns, stems only, merged across files, use:
freq +t*PAR +d5 +o +sm;*,|n,|n:*,o% +u *.cha
15. If you want word-level errors and other part of speech and bound morpheme info
about the noun, merged across files, use:
freq +t*PAR +d6 +o +sm;*,|n,|n:*,o% +u *.cha
16. If you want to list and count the frequency (in descending order) of all
prepositions used by the Participant merged across files in a folder, use:
freq +t*PAR +d5 +o +sm;*,|prep*,o% +u *.cha
If you want to see the lines in which they're used, use +d instead of +d5.
17. If you want to list and count the frequency (in descending order) of all adverbs
used by the Participant, by stems, merged across files in a folder, use:
freq +t*PAR +d5 +o +sm;*,|adv*,o% +u *gem.cex
18. If you want to list and count the frequency (in descending order) of all adjectives
used by the Participant, by stems, merged across files in a folder, use:
freq +t*PAR +d5 +o +sm;*,|adj*,o% +u *gem.cex
7.10 FREQMERG
If you have collected many FREQ output files and you want to merge these counts
together, you can use freqmerg to combine the outputs of several runs of the FREQ
program. For example, you could run this command:
freq sample*.cha +f
This would create sample.frq.cex and sample2.frq.cex. Then you could merge these two
counts using this command:
freqmerg *.frq.cex
The only option that is unique to freqmerg is +o, which allows you to search for a specific
word on the main speaker tier. To search for a file that contains a set of words use the form
+o@filename.
7.11 FREQPOS
The FREQPOS program is a minor variant of freq. What is different about FREQPOS
is the fact that it allows the user to track the frequencies of words in initial, final, and second
position in the utterance. This can be useful in studies of early child syntax. For example,
using FREQPOS on the main line, one can track the use of initial pronouns or auxiliaries.
For open class items like verbs, one can use FREQPOS to analyze codes on the %mor line.
This would allow one to study, for example, the appearance of verbs in second position,
initial position, final position, and other positions.
To illustrate the running of freqpos, let us look at the results of this simple command:
freqpos sample.cha
Here are the first six lines of the output from this command:
1 a initial = 0, final = 0, other = 1, one word = 0
1 any initial = 0, final = 0, other = 1, one word = 0
1 are initial = 0, final = 1, other = 0, one word = 0
3 chalk initial = 0, final = 3, other = 0, one word = 0
1 chalk+chalk initial = 0, final = 1, other = 0, one word = 0
1 delicious initial = 0, final = 0, other = 1, one word = 0
We see here that the word “chalk” appears three times in final position, whereas the word
“delicious” appears only once and that is not in either initial or final position. To study
occurrences in second position, we must use the +d switch as in:
freqpos +d sample.cha
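The positional tally that FREQPOS performs can be sketched in Python; the utterances below are invented and the tally is a simplification of the program's actual output:

```python
# A sketch of the FREQPOS tally: for each word in an utterance, record
# whether it is utterance-initial, utterance-final, in another position,
# or the only word in the utterance.
from collections import defaultdict

utterances = [["that", "is", "chalk"], ["chalk"], ["delicious"]]
tally = defaultdict(lambda: {"initial": 0, "final": 0, "other": 0, "one word": 0})

for utt in utterances:
    for i, word in enumerate(utt):
        if len(utt) == 1:
            tally[word]["one word"] += 1
        elif i == 0:
            tally[word]["initial"] += 1
        elif i == len(utt) - 1:
            tally[word]["final"] += 1
        else:
            tally[word]["other"] += 1

print(dict(tally["chalk"]))  # one final occurrence, one one-word occurrence
```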
+g Display only selected words in the output. The string following the +g can be either
a word or a file name in the @filename notation.
-s The effect of this option for FREQPOS is different from its effects in the other
CLAN commands. Only the negative -s value of this switch applies. The effect of
using -s is to exclude certain words as a part of the syntactic context. If you want to
match a word with FREQPOS, you should use the +g switch rather than the +s
switch.
7.12 GEM
Researchers use gem markers for different purposes. In some CHILDES corpora, they
are used to mark the dates or numbers of diary entries. In studies of narratives and book
reading, they are used to mark page numbers. In tasks with object and picture description,
they may indicate the number or name of the picture. In some corpora, they are used just
to enter descriptive remarks.
One important and interesting use of gems is to facilitate later retrieval and analysis.
For example, some studies with children make use of a fixed set of activities such as
MotherPlay, book reading, and story telling. For these gems, it can be useful to compare
similar activities across transcripts. To support this, for each corpus that uses gems in
this way, we have entered its gems into a pulldown menu in the TalkBankDB facility.
Descriptions of the gems used in a corpus can be found in the homepage for that corpus.
The GEM program is designed to allow you to pull out parts of a transcript for further
analysis. Separate header lines are used to mark the beginning and end of each interesting
passage you want included in your gem output. These header tiers may contain “tags” that
will affect whether a given section is selected or excluded in the output. If no tag
information is being coded, you should use the header form @Bg with no colon. If you are
using tags, you must use the colon, followed by a tab. If you do not follow these rules,
check will complain.
If you want to be more selective in your retrieval of gems, you need to add code words or
tags to both the @Bg: and @Eg: lines. For example, you might wish to mark all cases of
verbal interchange during the activity of reading. To do this, you must place the word
“reading” on the @Bg: line just before each reading episode, as well as on the @Eg: line
just after each reading episode. Then you can use the +sreading switch to retrieve only this
type of gem, as in this example:
gem +sreading sample2.cha
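The retrieval logic can be sketched in Python; the transcript fragment below is invented, and the matching is a simplification of what GEM actually does:

```python
# A sketch of tag-based gem retrieval: collect the lines between an
# "@Bg: tag" header and the matching "@Eg: tag" header.
lines = [
    "@Bg:\treading",
    "*CHI:\tnice kitty.",
    "@Eg:\treading",
    "*CHI:\twho that?",
]

def gems(lines, tag):
    out, active = [], False
    for line in lines:
        if line.startswith("@Bg:") and tag in line:
            active = True
        elif line.startswith("@Eg:") and tag in line:
            active = False
        elif active:
            out.append(line)
    return out

print(gems(lines, "reading"))
```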
Ambiguities can arise when one gem without a tag is nested within another or when
two gems without tags overlap. In these cases, the program assumes that the gem being
terminated by the @Eg line is the one started at the last @Bg line. If you have any sort of
overlap or embedding of gems, make sure that you use unique tags.
GEM can also be used to retrieve responses to specific questions or particular stimuli
used in an elicited production task. The @Bg entry for this header can show the number
and description of the stimulus. Here is an example of a completed header line:
@Bg: Picture 53, truck
One can then search for all the responses to picture 53 by using the +s"53" switch in GEM.
The / symbol can be used on the @Bg line to indicate that a stimulus was described out
of its order in a test composed of ordered stimuli. Also, the & symbol can be used to
indicate a second attempt to describe a stimulus, as in 1a& for the second description of
stimulus 1a, as in this example:
@Bg: 1b /
*CHI: a &b ball.
@Bg: 1a /
*CHI: a dog.
@Bg: 1a &
*CHI: and a big ball.
Similar codes can be constructed as needed to describe the construction and ordering of
stimuli for specific research projects.
When the user is sure that there is no overlapping or nesting of gems and that the end
of one gem is marked by the beginning of the next, there is a simpler way of using GEM,
which we call lazy GEM. In this form of GEM, the beginning of each gem is marked by
@G: with one or more tags and the +n switch is used. Here is an example:
@G: reading
*CHI: nice kitty.
@G: offstage
*CHI: who that?
@G: reading
*CHI: a big ball.
@G: dinner
In this case, one can retrieve all the episodes of “reading” with this command:
gem +n +sreading
Note also that you can use any type of code on the @Bg line. For example, you might
wish to mark well-formed multi-utterance turns, teaching episodes, failures in
communications, or contingent query sequences.
+d The +d0 level of this switch produces simple output that is in legal CHAT format.
The +d1 level of this switch adds information to the legal CHAT output regarding
file names, line numbers, and @ID codes.
+g If this switch is used, all the tag words specified with +s switches must appear on
the @Bg: header line to make a match. Without the +g switch, having just one of
the +s words present is enough for a match.
gem +sreading +sbook +g sample2.cha
This will retrieve all the activities involving reading of books.
+n Use @G: lines as the basis for the search. If these are used, no overlapping or nesting
of @G: gems is possible and each @G: must have tags. In this case, no @Eg is
needed, but CHECK and GEM will simply assume that the gem starts at the @G:
and ends with the next @G:.
+s This option is used to select file segments identified by words found on the @Bg:
tier. Do not use the -s switch. See the example given above for +g. To search for a
group of words found in a file, use the form +s@filename.
7.13 GEMFREQ
This program combines the basic features of FREQ and GEM. Like GEM, it analyzes
portions of the transcript that are marked off with @Bg: and @Eg: markers. For example,
gems can mark off a section of bookreading activity with @Bg: bookreading and @Eg:
bookreading. Once these markers are entered, you can then run GEMFREQ to retrieve a
basic FREQ-type output for each of the various gem types you have marked. For example,
you can run this command:
gemfreq +sarriving sample2.cha
7.14 GEMLIST
The GEMLIST program provides a convenient way of viewing the distribution of gems
across a collection of files. For example, if you run GEMLIST on both sample.cha and
sample2.cha, you will get this output:
From file <sample.cha>
12 @Bg
3 main speaker tiers.
21 @Eg
1 main speaker tiers.
24 @Bg
3 main speaker tiers.
32 @Eg
From file <sample2.cha>
18 @Bg: just arriving
2 main speaker tiers.
21 @Eg: just arriving
22 @Bg: reading magazines
2 main speaker tiers.
25 @Eg: reading magazines
26 @Bg: reading a comic book
2 main speaker tiers.
29 @Eg: reading a comic book
GEMLIST can also be used with files that use only the @G lazy gem markers. In that
case, the file should use nothing but @G markers, and GEMLIST will treat each @G as
implicitly providing an @Eg for the previous @G. Otherwise, the output is the same as with
@Bg and @Eg markers.
The only option unique to GEMLIST is +d which tells the program to display only the
data in the gems. GEMLIST also uses several options that are shared with other commands.
For a complete list of options for a command, type the name of the command followed by
a carriage return in the Commands window. Information regarding the additional options
shared across commands can be found in the chapter on Options.
7.15 KEYMAP
The KEYMAP program is useful for performing simple types of interactional and
contingency analyses. KEYMAP requires users to pick specific initiating or beginning
codes or “keys” to be tracked on a specific coding tier. If a match of the beginning code or
key is found, KEYMAP looks at all the codes on the specified coding tier in the next
utterance. This is the "map." The output reports the numbers of times a given code maps onto
a given key for different speakers.
If you run the KEYMAP program on this data with $INI as the +b key symbol, the
program will report that $INI is followed once by $INI and once by $RES. The key ($INI
in the previous example) and the dependent tier code must be defined for the program. On
the coding tier, KEYMAP will look only for symbols beginning with the $ sign. All other
strings will be ignored. Keys are defined by using the +b option immediately followed by
the symbol you wish to search for. To see how KEYMAP works, try this example:
keymap +b$INI* +t%spa sample.cha
For Unix, this command would have to be changed to treat the metacharacters as literal, as
follows:
keymap +b\$INI\* +t%spa sample.cha
KEYMAP produces a table of all the speakers who used one or more of the key
symbols, and how many times each symbol was used by each speaker. Each of those speakers
is followed by the list of all the speakers who responded to the given initiating speaker,
including continuations by the initial speaker, and the list of all the response codes and
their frequency count.
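The contingency count that KEYMAP performs can be sketched in Python; the sequence of %spa codes below is invented, and the sketch ignores speakers for simplicity:

```python
# A sketch of the KEYMAP contingency count: when an utterance's code
# matches the key, tally the code found on the next utterance.
from collections import Counter

spa_codes = ["$INI", "$INI", "$RES", "$INI", "$RES"]  # invented %spa sequence
key = "$INI"

follow = Counter()
for code, nxt in zip(spa_codes, spa_codes[1:]):
    if code == key:
        follow[nxt] += 1  # the "map": what follows each occurrence of the key

print(dict(follow))
```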
7.16 KWAL
The KWAL program outputs utterances that match certain user-specified search words.
The program also allows the user to view the context in which any given keyword is used.
To specify the search words, use the +s option, which allows you to search for either a
single word or a whole group of words stored in a file. It is possible to specify as many +s
options on the command line as you like.
Like COMBO, the KWAL program works not on lines, but on “clusters.” A cluster is
a combination of the main tier and the selected dependent tiers relating to that line. Each
cluster is searched independently for the given keyword. The program lists all keywords
that are found in a cluster tier. A simple example of the use of KWAL is:
kwal +schalk sample.cha
The output of this command tells you the file name and the absolute line number of the
cluster containing the key word. It then prints out the matching cluster.
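The cluster-matching idea can be sketched in Python; the clusters below are invented, and the plain substring test is a simplification of KWAL's pattern matching:

```python
# A sketch of KWAL-style cluster matching: each cluster pairs a main tier
# with its dependent tiers, and a cluster is selected when any of its
# tiers contains the keyword.
clusters = [
    {"main": "*CHI: that chalk.", "%spa": "$INI"},
    {"main": "*MOT: yes it is.", "%spa": "$RES"},
]

def kwal(clusters, keyword):
    return [c for c in clusters if any(keyword in tier for tier in c.values())]

matches = kwal(clusters, "chalk")
print(len(matches))
```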
The two +t switches work as a matched pair to preserve the %mor tier for CHI. The first
+o@ switch will preserve the header tiers. The second and third +o switches work as a pair
to exclude the %mor lines in the other speakers. However, the -o%mor switch keeps all of
the dependent tiers except for %mor. The +t switch is used for selecting parts of the
transcript that may also be searched using the +s option. The +o switch, on the other hand,
only has an impact on the shape of the output. The +d switch specifies that the output
should be in CHAT format and the +f switch sends the output to a file. In this case, there
is no need to use the +s switch. Try out variations on this command with the sample files
to make sure you understand how it works.
Main lines can be excluded from the analysis using the -t* switch. However, this
exclusion affects only the search process, not the form of the output. It will guarantee that no
matches are found on the main line, but the main line will be included in the output. If you
want to exclude certain main lines from your output, you can use the -o switch, as in:
kwal +t*CHI +t%spa -o* sample.cha
You can also do limiting and selection by combining FLO and KWAL:
kwal +t*CHI +t%spa +s"$*SEL*" -t* sample.cha +d +f
flo *.kwal.cex
To search for a keyword on the *MOT main speaker tiers and the %spa dependent tiers of
that speaker only, include +t*MOT +t%spa on the command line, as in this command.
kwal +s"$INI:*" +t%spa +t*MOT sample.cha
If you wish to study only material in repetitions, you can use KWAL in this form:
kwal +s"+[//]" *.cha +d3 +d
contrast sentence with and without that code, you can pull out all the utterances with $A
using this command:
kwal +d +o@ +t% +s"$A" +f$A filenames
This will produce the $A.cex output file. Next, you can create a file with the utterances that
do not have $A using this command:
kwal +d +o@ +t% -s"$A" +fno$A filenames
This will produce the no$A.cex output file. Then you can run commands on the output files
separately.
You can also search for cases in which the SUBJ on the %gra line is pro:per|you on the
%mor line. In this command, the letter “g” after the +s refers to the %gra tier and the letter
“m” refers to the %mor tier.
kwal +d7 +sg|SUBJ +sm|pro:per,;you filename.cha
2. Speech only (included at least one word on the main tier, but no g: or s: on the %sin
tier):
kwal *.cha +t%sin +c1 -ss:* -sg:* +d +fspeech_only
3. Sign only (included at least one s: on the %sin tier, but no words on the main line and
no g: on the %sin tier):
kwal *.cha +t%sin +c0 -sg:* +d +fno-g
kwal *.no-g.cex +t%sin +ss:* +d +fsign_only
4. Gesture + speech only (included at least one g: on the %sin tier and at least one word
on the main tier, but no s: on the %sin tier):
5. Gesture + sign only (included at least one g: and one s: on the %sin tier but no words
on the main tier):
kwal *.cha +t%sin +c0 +ss:* +d +fsign_only
kwal *.sign_only.cex +t%sin +sg:* +d +fgesture+sign_only
6. Gesture + speech + sign (included at least one g: and one s: on the %sin tier and at least
one word on the main tier):
kwal *.cha +t%sin +c1 +ss:* +d +fsign_only
kwal *.sign_only.cex +t%sin +sg:* +d +fgesture+speech+sign
was found. The +w and -w options let you specify how many clusters after and
before the target cluster are to be included in the output. These options must be
immediately followed by a number. Consider this example:
kwal +schalk +w3 -w3 sample.cha
When the keyword chalk is found, the cluster containing the keyword and the three
clusters above (-w3) and below (+w3) will be shown in the output.
+xCNinclude only utterances which are C (>, <, =) than N items (w, c, m), "+x=0w" for
zero words
+xS specify items to include in above count (Example: +xxxx +xyyy)
-xS specify items to exclude from above count
7.17 MAXWD
This program locates, measures, and prints either the longest word or the longest
utterance in a file. It can also be used to locate all the utterances that have a certain
number of
words or greater.
When searching for the longest word, the MAXWD output consists of: the word, its
length in characters, the line number on which it was found, and the name of the file where
it was found. When searching for the longest utterance with the +g option, the output
consists of: the utterance itself, the total length of the utterance, the line number on which the
utterance begins, and the file name where it was found. By default, MAXWD only analyzes
data found on the main speaker tiers. The +t option allows for the data found on the header
and dependent tiers to be analyzed as well. The following command will locate the longest
word in sample.cha.
maxwd sample.cha
You can also use MAXWD to track all the words or utterances of a certain length. For
example, the following command will locate all the utterances with only one word in them:
maxwd -x1 +g2 sample.cha
Alternatively, you may want to use MAXWD to filter out all utterances below or above a
certain length. For example, you can use this command to output only sentences with four
or more words in them:
maxwd +x4 +g2 +d1 +o%
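Both uses, finding the longest word and filtering utterances by word count, can be sketched in Python; the utterances below are invented:

```python
# A sketch of the MAXWD logic: find the longest word in the file, and
# filter utterances by word count, as with the +x4 +g2 switches.
utterances = [["hippopotamus"], ["a", "big", "ball"], ["we", "saw", "a", "red", "truck"]]

words = [w for utt in utterances for w in utt]
longest_word = max(words, key=len)          # longest word by character count

four_or_more = [utt for utt in utterances if len(utt) >= 4]  # like +x4 +g2
print(longest_word, len(four_or_more))
```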
+b You can use this switch to either include or exclude specified morpheme delimiters.
By default, the morpheme delimiters #, ~, and - are understood to delimit separate
morphemes. You can force MAXWD to ignore all three of these by using the -b#-~
form of this switch. You can use the +b switch to add additional delimiters to the
list.
+c This option is used to produce a given number of longest items.
If you want to print out all the utterances above a certain length, you can use this KWAL
command:
kwal +x4w sample.cha
+d The +d level of this switch produces output with one line for the length level and the
next line for the word. The +d1 level produces output with only the longest words,
one per line, in order, and in legal CHAT format.
+g This switch forces MAXWD to compute not word lengths but utterance lengths. It
singles out the sentence that has the largest number of words or morphemes and
prints that in the output. The way of computing the length of the utterance is
determined by the number following the +g option. If the number is 1 then the length
is in number of morphemes per utterance. If the number is 2 then the length is in
number of words per utterance. And if the number is 3 then the length is in the
number of characters per utterance. For example, if you want to compute the MLU
and MLT of five longest utterances in words of the *MOT, you can use the following
command:
maxwd +g2 +c5 +d1 +t*MOT +o%mor sample.cha
Then you would run the output through MLU. The +g2 option specifies that the
utterance length will be counted in terms of numbers of words. The +c5 option
specifies that only the five longest utterances should be sent to the output. The +d1
option specifies that individual words, one per line, should be sent to the output. The
+o%mor includes data from the %mor line in the output sent to MLU.
+o The +o switch is used to force the inclusion of a tier in the output. To do this you
would use a command of the following shape:
maxwd +c2 +j +o%mor sample2.cha
7.18 MLT
The MLT program computes the mean number of utterances in a turn, the mean number
of words per utterance, and the mean number of words per turn. A turn is defined as a
sequence of utterances spoken by a single speaker. Overlaps are ignored in this
computation. Instead, the program simply looks for sequences of repeated speaker ID codes
at the beginning of the main line. As long as the same speaker is talking, each utterance is
part of the current turn. These computations are provided for each speaker separately.
Note that none of these ratios involve morphemes on the %mor line. If you want to analyze
morphemes per utterance, you should use the MLU program.
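As a rough illustration of the three ratios, here is a minimal sketch in Python (not CLAN's own code); the speaker codes and word counts are invented:

```python
from collections import defaultdict

def mlt_ratios(utterances):
    """Compute utterances/turn, words/utterance, and words/turn per speaker.

    A turn is a maximal run of consecutive utterances by one speaker,
    mirroring MLT's definition based on repeated speaker ID codes.
    """
    stats = defaultdict(lambda: {"turns": 0, "utts": 0, "words": 0})
    prev = None
    for speaker, n_words in utterances:
        s = stats[speaker]
        if speaker != prev:        # a new run by this speaker opens a turn
            s["turns"] += 1
        s["utts"] += 1
        s["words"] += n_words
        prev = speaker
    return {spk: {"utts_per_turn": s["utts"] / s["turns"],
                  "words_per_utt": s["words"] / s["utts"],
                  "words_per_turn": s["words"] / s["turns"]}
            for spk, s in stats.items()}

# Invented example: (speaker ID, words in utterance), in transcript order.
sample = [("MOT", 5), ("MOT", 3), ("CHI", 1), ("MOT", 4), ("CHI", 2)]
print(mlt_ratios(sample))
```

Here MOT produces three utterances in two turns (her first two utterances form one turn), so her utterances/turn is 1.5 and her words/turn is 6.0.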
+d You can use this switch, together with the @ID specification to output data in a
format that can be opened in Excel, as in this command:
mlt +d +t@ID=”*|Target_Child*” sample.cha
This output gives 11 fields in this order: language, corpus, file, age, participant id, number
+g You can use the +g option to exclude utterances composed entirely of certain words.
For example, you might wish to exclude utterances composed only of hi, bye, or
both these words together. To do this, you should place the words to be excluded in
a file, each word on a separate line. The option should be immediately followed by
the file name, i.e. there should not be a space between the +g option and the name
of this file. If the file name is omitted, the program displays an error message: “No
file name for the +g option specified!”
+s This option is used to specify a word string that specifies which utterances should
be included. This switch selects whole utterances for inclusion, not individual
words, because MLT is an utterance-oriented program.
7.19 MLU
The MLU program computes the mean length of utterance, which is the ratio of
morphemes to utterances. By default, this program runs from a %mor line and uses that
line to compute the mean length of utterance (MLU) in morphemes. However, if you do
not have a %mor line in your transcript, you need to add the –t%mor switch to use it from
the main line. In that case, you will be computing MLU in words, not morphemes.
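The basic ratio can be sketched as follows, assuming each utterance is given as a list of %mor-style words in which the delimiters -, #, and ~ each mark one additional morpheme; the forms shown are hypothetical and this is an illustration, not CLAN's implementation:

```python
import re

def count_morphemes(word):
    # "-", "#", and "~" are the three morpheme delimiters MLU recognizes;
    # each occurrence marks one morpheme beyond the stem.
    return 1 + len(re.findall(r"[-#~]", word))

def mlu(utterances):
    morphemes = sum(count_morphemes(w) for u in utterances for w in u)
    return morphemes / len(utterances)

# Hypothetical forms: 5 + 1 = 6 morphemes over 2 utterances gives MLU 3.0.
utts = [["I", "want-PROG", "cookie~s"], ["no"]]
print(mlu(utts))   # → 3.0
```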
The predecessor of the current MLU measure was the “mean length of response” or
MLR devised by Nice (1925). The MLR corresponds to what we now call MLUw or mean
length of utterance in Words. Brown (1973) emphasized the value of thinking of MLU in
terms of morphemes, rather than words. Brown was particularly interested in the ways in
which the acquisition of grammatical morphemes reflected syntactic growth and he
believed that MLUm or mean length of utterance in morphemes would reflect this growth
more accurately than MLUw. Brown described language growth through six stages of
development for which MLU values ranged from 1.75 to 4.5. Subsequent research (Klee,
Schaffer, May, Membrino, & Mougey, 1989) showed that MLU is correlated with age until
about 48 months. Rondal, Ghiotto, Bredart, and Bachelet (1987) found that MLU is highly
correlated with increases in grammatical complexity between MLU of 1 and 3. However,
after MLU of 3.0, the measure was not well correlated with syntactic growth, as measured
by LARSP. A parallel study by Blake, Quartaro, and Onorati (1993) with a larger subject
group found that MLU was correlated with LARSP until MLU 4.5. Even better correlations
between MLU and grammatical complexity have been reported when the IPSyn is used to
measure grammatical complexity (Scarborough, Rescorla, Tager-Flusberg, Fowler, &
Sudhalter, 1991).
Brown (1973, p. 54) presented the following set of rules for the computation (by hand)
of MLU:
1. Start with the second page of the transcription unless that page involves a recitation of
some kind. In this latter case, start with the first recitation-free stretch. Count the first
100 utterances satisfying the following rules.
2. Only fully transcribed utterances are used; none with blanks. Portions of utterances,
entered in parentheses to indicate doubtful transcription, are used.
3. Include all exact utterance repetitions (marked with a plus sign in records). Stuttering
is marked as repeated efforts at a single word; count the word once in the most complete
form produced. In the few cases where a word is produced for emphasis or the like
(no, no, no) count each occurrence.
4. Do not count such fillers as mm or oh, but do count no, yeah, and hi.
5. All compound words (two or more free morphemes), proper names, and ritualized
reduplications count as single words. Examples: birthday, rackety-boom, choo-choo,
quack-quack, night-night, pocketbook, seesaw. Justification is that there is no evidence
that the constituent morphemes function as such for these children.
6. Count as one morpheme all irregular pasts of the verb (got, did, went, saw). Justification
is that there is no evidence that the child relates these to present forms.
7. Count as one morpheme all diminutives (doggie, mommie) because these children at
least do not seem to use the suffix productively. Diminutives are the standard forms
used by the child.
8. Count as separate morphemes all auxiliaries (is, have, will, can, must, would). Also, all
catenatives: gonna, wanna, hafta. These latter counted as single morphemes rather
than as going to or want to because evidence is that they function so for the children.
Count as separate morphemes all inflections, for example, possessive [s], plural [s],
third person singular [s], regular past [d], and progressive [ing].
9. The range count follows the above rules but is always calculated for the total
transcription rather than for 100 utterances.
Because researchers often want to continue to follow these rules, it is important to
understand how to implement this system in CLAN. Here is a detailed description,
corresponding to Brown’s nine points.
1. Brown recommended using 100 utterances. He also suggested that these should be
taken from the second page of the transcript. In effect, this means that roughly the first
25 utterances should be skipped. The switch that would achieve this effect in the MLU
program is: +z25u-125u. This is the form of the command used for MLU-100 in the
KIDEVAL program.
2. The symbols xxx, yyy, and www are also excluded by default, as are the utterances in
which they appear. If you wish to include the xxx forms and the utterances that contain
them, then use the +sxxx option. The forms yyy and www are always excluded and
cannot be included. Utterances with no words are excluded from the utterance count.
3. If you mark repetitions and retraces using the CHAT codes of [/], [//], [///], [/?], and [/-
], the repeated material will be excluded from the computation automatically. This can
be changed by using the +r6 switch or by adding any of these switches: +s+"</>"
+s+"<//>".
4. If you want forms to be treated as nonwords, you can precede them with the marker &,
as in &mm. Alternatively, you can add the switch –smm to exclude this form or you
can have a list of forms to exclude. The following strings are also excluded by default:
uh um 0* &* +* -* $* where the asterisk indicates any material following the exclusion
symbol. If the utterance consists of only excludable material, the whole utterance will
be ignored. In addition, suffixes, prefixes, or parts of compounds beginning with a zero
are automatically excluded and there is no way to modify this exclusion. Brown
recommends excluding mm and uh by default. This is done by marking them as &-mm
and &-uh.
5. You can use +s to include lines that would otherwise be excluded. For example, you
may want to use +s”[+ trn]” to force inclusion of lines marked with [+ trn]. You can
also use the +sxxx switch to change the exclusionary behavior of MLU. In this case,
the program stops excluding sentences that have xxx from the count, but still excludes
the specific string “xxx”. You can also use the special form marker @a to force
treatment of an incomprehensible string as a word. This can happen when the sentential
and situational context is so clear that you know that the form was a word. For example,
the form xxx@a will appear on the %mor line as w|xxx and the form xxx@a$n will
appear on the %mor line as n|xxx. In the place of the “n” you could place any part of
speech code such as “v” or “adj”. This is because the @a is translated through the code
“w” as a generic “word” and the part of speech code after the $ sign is translated as the
part of speech of the incomprehensible word. These codes also apply to yyy,
yyy@a, and yyy@a$n.
6. When MLU is computed from the %mor line, the compound marker is excluded as a
morpheme delimiter, so this restriction is automatic. If you compute MLU from the
main line, then you need to add –b+ to your command to exclude the plus as a
morpheme delimiter.
7. The ampersand (&) marker for irregular morphology is not treated as a morpheme
delimiter, so this restriction is automatic.
8. By default, diminutives are treated as real morphemes. In view of the evidence for the
productivity of the diminutive, it is difficult to understand why Brown thought they
were not productive.
9. The treatment of hafta as one morpheme is automatic unless the form is replaced by [:
have to]. The choice between these codes is left to the transcriber.
It is also possible to exclude utterances by using postcodes. By default, MLU excludes
utterances marked with the specific postcode [+ mlue]. This works both for MLU as a
separate program and for MLU as a part of KIDEVAL. It is also possible to make further
exclusions for MLU as a separate program by using some other postcode such as [+ exc]
in the form of -s"[+ exc]". However, this non-default marking will not get picked up by
KIDEVAL.
The use of postcodes for exclusion needs to be considered carefully. Brown suggested
that all sentences with unclear material be excluded. Brown wants exact repetitions to be
included and does not exclude imitations. However, other researchers recommend also
excluding imitation, self-repetitions, and single-word answers to questions.
The program considers the following three symbols to be morpheme delimiters: - # ~
MOR analyses distinguish between these delimiters and the ampersand (&) symbol that
indicates fusion. As a result, morphemes that are fused with the stem will not be included
in the MLU count. If you want to change this list, you should use the +b option described
below. For Brown, compounds and irregular forms were monomorphemic. This means that
+ and & should not be treated as morpheme delimiters for an analysis that follows his
guidelines. The program considers the following three symbols to be utterance delimiters:
. ! ? as well as the various complex symbols such as +... which end with one of these three
marks.
The computation of MLU depends on the correct morphemicization of words. The best
way to do this is to use the MOR and POST programs to construct a morphemic analysis
on the %mor line. This is relatively easy to do for English and other languages for which
good MOR grammars and POST disambiguation databases exist. However, if you are
working in a language that does not yet have a good MOR grammar, this process would
take more time. Even in English, to save time, you may wish to consider using MLU to
compute MLUw (mean length of utterance in words), rather than MLU. Malakoff, Mayes,
Schottenfeld, and Howell (1999) found that MLU correlates with MLUw at .97 for English.
Aguado (1988) found a correlation of .99 for Spanish, and Hickey (1991) found a
correlation of .99 for Irish. If you wish to compute MLUw instead of MLU, you can simply
refrain from dividing words into morphemes on the main line. If you wish to divide them,
you can use the +b switch to tell MLU to ignore your separators.
This command looks at only those utterances spoken by the child to the mother as ad-
dressee. You can then run MLU on the output of the KWAL command.
The inclusion of certain utterance types leads to an underestimate of MLU. However,
there is no clear consensus concerning which sentence forms should be included or
excluded in an MLU calculation. The MLU program uses postcodes to accommodate differing
approaches to MLU calculations. To exclude sentences with postcodes, the -s exclude
switch can be used in conjunction with a file of postcodes to be excluded. The exclude file
should be a list of the postcodes that you are interested in excluding from the analysis. For
example, the sample.cha file is postcoded for the presence of responses to imitations [+ I],
yes/ no questions [+ Q], and vocatives [+ V].
For the first MLU pass through the transcript, you can calculate the child’s MLU on
the entire transcript by typing:
mlu +t*CHI +t%mor sample.cha
For the second pass through the transcript you can calculate the child’s MLU following
the criteria of Scarborough (1990). These criteria require excluding the following: routines
[+ R], book reading [+ "], fillers [+ F], imitations [+ I], self-repetitions [+ SR], isolated
onomatopoeic sounds [+ O], vocalizations [+ V], and partially unintelligible utterances [+
PI]. To accomplish this, an exclude file must be made which contains all these postcodes.
Of course, for the little sample file, there are only a few examples of these coding types.
Nonetheless, you can test this analysis using the Scarborough criteria by creating a file
called “scmlu” containing the relevant postcodes, one per line. The scmlu file would look
something like this:
[+ R]
[+ "]
[+ V]
[+ I]
Once you have created this file, you then use the following command:
mlu +t*CHI -s@scmlu sample.cha
For the third pass through the transcript, you can calculate the child’s MLU using a still
more restrictive set of criteria, also specified as postcodes in a separate file. This set also
excludes one-word answers to yes/no questions [$Q] in the file of words to be excluded.
You can calculate the child’s MLU using these criteria by typing:
mlu +t*CHI -s@resmlu sample.cha
In general, exclusion of these various limited types of utterances tends to increase the
child’s MLU.
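The exclusion logic described above amounts to filtering out any utterance that carries one of the listed postcodes. A minimal sketch, assuming each utterance is paired with its postcodes (the codes follow the scmlu example; the utterance texts are invented):

```python
# The scmlu postcodes from the example above; "[+ Q]" is deliberately absent.
EXCLUDE = {"[+ R]", '[+ "]', "[+ V]", "[+ I]"}

def keep(postcodes):
    """True if an utterance survives -s@scmlu style exclusion."""
    return not any(code in EXCLUDE for code in postcodes)

utts = [([], "want more cookie"),
        (["[+ I]"], "more cookie"),   # imitation: excluded
        (["[+ Q]"], "yes")]           # not in the scmlu set: kept
kept = [text for codes, text in utts if keep(codes)]
print(kept)   # → ['want more cookie', 'yes']
```

MLU would then be computed over only the surviving utterances, which is why the exclusions tend to raise the child's score.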
+d You can use this switch, together with the ID specification to output data for Excel,
as in this command:
mlu +d +tCHI sample.cha
This output gives the @ID field, the number of utterances, number of morphemes,
morphemes/utterances, and the standard deviation of the MLU. To run this type of
analysis, you must have an @ID header for each participant you wish to track. You
can use the +t switch in the form +tCHI to examine a whole collection of files. In
this case, all the *CHI lines will be examined in the corpus.
+d1 This level of the +d switch outputs data in another systematic format, with data for
each speaker on a single line. However, this form is less adapted to input to a
statistical program than the output for the basic +d switch. Also, this switch works
with the +u switch, whereas the basic +d switch does not. Here is an example of
this output:
*CHI: 5 7 1.400 0.490
*MOT: 8 47 5.875 2.891
+g You can use the +g option to exclude utterances composed entirely of certain words
from the MLU analysis. For example, you might wish to exclude utterances
composed only of hi or bye. To do this, you should place the words to be excluded
in a file, each word on a separate line. The option should be immediately followed
by the file name. There should not be a space between the +g option and the name
of this file. If the file name is omitted, the program displays an error message: “No
file name for the +g option specified!”
+s This option is used to specify a word to be included in the analysis. This option should
be immediately followed by the word itself. To search for a group of words stored
in a file, use the form +s@filename. The -s switch excludes certain words from the
analysis. This is a reasonable thing to do. The +s switch bases the analysis only on
certain words. It is more difficult to see why anyone would want to conduct such an
analysis. However, the +s switch also has another use. One can use the +s switch to
remove certain strings from automatic exclusion by MLU. The program
automatically excludes xxx, 0, uh, and words beginning with & from the MLU
count. This can be changed by using this command:
mlu +s+uh +s+xxx +s0* +s&* file.cha
MLU also uses several options that are shared with other commands. For a complete
list of options for a command, type the name of the command followed by a carriage return
in the Commands window. Information regarding the additional options shared across
commands can be found in the chapter on Options.
7.20 MODREP
The MODREP program matches words on one tier with corresponding words on another
tier. It works only on tiers where every word on tier A matches one word on tier B. When
such a one-to-one correspondence exists, MODREP will output the frequency of all
matches. Consider the following sample file distributed with CLAN as modrep.cha:
@Begin
@Participants: CHI Child
*CHI: I want more.
%pho: aI wan mo
%mod: aI want mor
*CHI: want more bananas.
%pho: wa mo nAnA
%mod: want mor bAn&nAz
*CHI: want more bananas.
%pho: wa mo nAnA
%mod: want mor bAn&nAz
*MOT: you excluded [//] excluded [/] xxx yyy www
&d do?
%pho: yu du
%mod: yu du
@End
You can run the following command on this file to create a model-and-replica analysis for
the child’s speech:
modrep +b*chi +c%pho +k modrep.cha
This output tells us that want was replicated in two different ways, and that more was rep-
licated in only one way twice. Only the child’s speech is included in this analysis and the
%mod line is ignored. Note that you must include the +k switch in this command to
guarantee that the analysis of the %pho line is case-sensitive. By default, all CLAN
commands except for FREQ, FREQMERGE, MORTABLE, PHONFREQ, RELY,
TIMEDUR, and VOCD are case-insensitive.
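The core of the model-and-replica tally can be sketched as follows, assuming main-line words and %pho forms have already been paired one-to-one; the data are the child's utterances from modrep.cha above, and this is an illustration rather than MODREP's actual code:

```python
from collections import defaultdict

def modrep(utterance_pairs):
    """Count, for each model word, how often each replica form occurs."""
    table = defaultdict(lambda: defaultdict(int))
    for words, phones in utterance_pairs:
        if len(words) != len(phones):     # MODREP requires a 1:1 match
            raise ValueError("model and replica tiers do not align")
        for w, p in zip(words, phones):
            table[w][p] += 1
    return {w: dict(reps) for w, reps in table.items()}

# The child's three utterances from modrep.cha, paired with their %pho forms.
data = [(["I", "want", "more"], ["aI", "wan", "mo"]),
        (["want", "more", "bananas"], ["wa", "mo", "nAnA"]),
        (["want", "more", "bananas"], ["wa", "mo", "nAnA"])]
result = modrep(data)
print(result["want"])   # → {'wan': 1, 'wa': 2}
```

The ValueError branch corresponds to the case described below where adding material destroys the one-to-one match and CLAN complains.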
If you want to include some of the excluded strings, you can add the +q option. For
example, you could type:
modrep +b* +c%pho +k modrep.cha +qwww
However, adding the www would destroy the one-to-one match between the model line and
the replica line. When this happens, CLAN will complain and then die. Give this a try to
see how it works. It is also possible to exclude additional strings using the -q switch. For
example, you could exclude all words beginning with “z” using this command:
modrep +b* +c%pho +k modrep.cha -qz*
Because there are no words beginning with “z” in the file, this will not change the match
between the model and the replica.
If the main line has no speech and only a 0, MODREP will effectively copy this zero
as many times as is needed to match up with the number of units on the %mod tier that is
being used to match up with the main line.
The command
modrep +b%mod +c%pho +k modrep.cha
will compare the %mod and %pho lines for both the mother and the child in the sample
file. Note that it is also possible to trace pronunciations of individual target words by
using the +o switch, as in this command for tracing words beginning with /m/:
modrep +b%mod +c%pho +k +om* modrep.cha
If you want to conduct an even more careful selection of codes on the %mor line, you
can make combined use of MODREP and COMBO. For example, if you want to find all
the words matching accusatives that follow verbs, you first select these utterances by
running COMBO with the +d switch and the correct +s switch and then analyze the output
using the MODREP command we used earlier.
combo +s"v:*^*^n:*-acc" +t%mor sample2.cha +d +f
modrep +b%mor +c*MOT +o"*acc" sample2.cmb.cex
The output of this program is the same as in the previous example. However, in a large
input file, the addition of the COMBO filter can make the search much more restrictive
and powerful.
7.21 PhonTalk
Next, you should open PhonTalk, and select the Open option in the File pulldown
menu. The program will run on each file and will list each as "Finished" once it is
processed. By default, the output in the case of the Bliss corpus would be Bliss-xml-phon.
This is then a Phon project which you can analyze using Phon. You may want to rename
this output to remove the "xml" in the folder name, as in Bliss-phon.
To convert back to CHAT, you start PhonTalk and select Bliss-phon by choosing Open
from the File menu. This will create a CHAT XML version of the Phon files. Then you
run Chatter to convert from CHAT XML to standard CHAT, this time selecting the radio
buttons for "XML to CHAT".
7.22 RELY
This program has five functions: (1) to examine agreement between two coders, (2) to
compute Cohen's kappa, (3) to evaluate correctness of a student coder against a master
document, (4) to check overall match between two transcripts, and (5) to combine codes
into a single file.
First, we will consider the function of checking reliability between two coders. When
you are entering a series of codes into files using the Coder Mode, you will often want to
compute the reliability of your coding system by having two or more people code a single
file or group of files. To do this, you can give each coder the original file, get them to enter
a %cod or %spa line and then use the RELY program to spot matches and mismatches. To
create an example, you could copy the sample.cha file in CLAN’s /examples folder to a
file called samplea.cha and change one code in that file. In samplea.cha,
change the code for the first utterance from “$INI:sel:in” to “$INI:sel:gone”. Then enter
the command:
rely sample.cha samplea.cha
The output in sample.rely.cex file will report the coding disagreements and you can
triple-click on the lines that give the line numbers, and they will open to the point of the
mismatch in each file. If you add +t%spa to this command, you will get a fuller report.
This will work in the same way, if your coding tier name is, for example, %cod or
something else.
By default, RELY examines all main and dependent tiers. In the example discussed so
far, the two transcripts are identical except for any differences that would appear on the
coding tier. However, in some other cases, there could also be differences on the main line
or elsewhere. If you want the program to ignore any differences in the main line, header
line, or other dependent tiers that may have been introduced by the second coder, you can
add the +c switch. Then, you will need to use the +t switch to pick out a line or speaker to
include, while ignoring all the others. If the command is:
rely +c sample.cha samplea.cha +t%spa -t*
then the program will only report mismatches on the %spa tier. Note that, in addition to
adding the +c switch, you must also add the -t* switch to make sure that RELY also ignores
the main line in addition to ignoring other dependent tiers. If you further add a +t*CHI
switch, it will only report mismatches on the selected tier for the Target_Child. However, if
you use the +c switch with no additional +t inclusions, RELY will report unmatched tiers,
but not the actual mismatches. In the sample.rely.cex output, you can triple-click on lines
with the line numbers given and CLAN will open to that place in the original file.
A second function of RELY is to compute Cohen's kappa by adding the +dN switch.
For this analysis, you need to specify the number of possible categories being checked.
Here is an example of computing kappa for %spa lines with three possible categories:
rely +c +d3 sample.cha samplea.cha +t%spa -t*
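Cohen's kappa itself is the observed agreement corrected for chance agreement between the two coders. A minimal sketch, computed from two hypothetical coders' paired labels (this is the standard formula, not RELY's implementation, and it works from the categories that actually occur rather than the N given to +dN):

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    n = len(coder_a)
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    # chance agreement from each coder's marginal category frequencies
    p_exp = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes from two coders over the same five utterances.
a = ["in", "in", "gone", "in", "out"]
b = ["in", "gone", "gone", "in", "out"]
print(round(cohen_kappa(a, b), 4))   # → 0.6875
```

The coders agree on 4 of 5 utterances (0.8 observed agreement), but 0.36 agreement would be expected by chance from their marginals, giving kappa of (0.8 − 0.36)/(1 − 0.36).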
The third function of the RELY command is to evaluate the correctness of a student
transcript against a master transcript. The student is given a version of the complete master
transcript without the coding tier and the goal is to add codes. In this case, it is assumed
that the coding in the master transcript is fully correct. The question is whether the student's
codes are correct. To specify that the first file given is the master, you use the +dm1 switch.
rely +dm1 sample.cha samplea.cha +t%spa
The output from this command provides two counts: precision and accuracy. These are
the two dimensions used in fields such as computational linguistics to evaluate correctness.
Precision is the percentage of fields in the master file that are matched by the student.
Accuracy is the percentage of fields in the student file that are matched to the master. A
failure in precision occurs when the student misses a code in the master file. A failure in
accuracy occurs when the student inserts a code that is not found in the master. If a
researcher is interested in going beyond these two scores, they can also compute the F score
which is the harmonic mean of precision and accuracy.
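Using the document's own definitions, these scores can be sketched as follows; the codes are hypothetical, and a simple positional alignment of master and student fields is assumed for illustration:

```python
def score(master_codes, student_codes):
    """Document's definitions: 'precision' is the share of master codes the
    student matched, 'accuracy' the share of the student's codes found in
    the master; F is their harmonic mean."""
    matches = sum(m == s for m, s in zip(master_codes, student_codes))
    precision = matches / len(master_codes)
    accuracy = matches / len(student_codes)
    f = 2 * precision * accuracy / (precision + accuracy)
    return precision, accuracy, f

master = ["$INI", "$RES", "$INI", "$RES", "$CON"]   # hypothetical codes
student = ["$INI", "$RES", "$CON", "$RES"]          # one wrong, one missing
p, acc, f = score(master, student)
print(p, acc)   # → 0.6 0.75
```

The student matches 3 of the master's 5 codes (precision 0.6) and 3 of their own 4 codes are correct (accuracy 0.75), so F is 0.9/1.35, or two thirds.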
The fourth function of the RELY command is to estimate the overall match between
two transcripts on the main line. It is very difficult to define this type of comparison
precisely. Instead, RELY uses a rough-and-ready "bag of words" comparison method that
simply looks at the overall match of the main line items in the two versions. The command
for this type of analysis adds the +d switch, and the output is the percentage of overall
overlap.
rely +d sample.cha samplea.cha
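One plausible way to sketch such a "bag of words" comparison is as a multiset overlap between the two transcripts' main-line words; this illustrates the idea only, and the percentage shown is one symmetric measure, not RELY's exact formula:

```python
from collections import Counter

def overlap_percent(words_a, words_b):
    # multiset intersection: shared occurrences regardless of word order
    shared = sum((Counter(words_a) & Counter(words_b)).values())
    return 100.0 * shared / max(len(words_a), len(words_b))

t1 = "want more cookie please".split()           # invented main-line words
t2 = "want more juice please please".split()
print(overlap_percent(t1, t2))   # → 60.0
```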
The fifth function of the RELY command is to allow multiple coders to add a series of
dependent tiers to a master file. The master file is the first one given in the command line.
The lines of the master file should remain untouched and the coder of the second file should
only be adding information on a single additional dependent tier. This function is accessed
through the +a switch, which tells the program the name of the coding line (given by the
+t switch) from the secondary file that is to be added to the master file, as in
rely +a +t%spa +t@ sample.cha samplea.cha
If, by mistake, some changes were made to the other coding lines, the output will ignore
the mismatches, keeping what is in the master file only. To get a full file merger, you need
to add the +t@ switch to include the header tiers.
It is important to understand the detailed workings of comparison with the +a switch.
When used with +a and +t%cod, RELY looks at the first speaker tier in the master and
coder files and, if they match, then it looks for the dependent tier in the coder file that was
specified with the +t%cod option. If it finds a %cod tier in the coder file, then it looks in
the master file under the corresponding speaker tier to see if the master file already has a
%cod tier. If it does (and it really shouldn’t), then the error message "** Duplicate tier
found around lines:" is given and the user must choose whether the master file tier or the
coder file tier should be added to the master file. If the master file does not already have a
%cod tier, then the %cod tier from the coder file is added to the other dependent tiers in
the master file under the corresponding speaker tier.
RELY +a will also report an error if it finds some other dependent tier, such as %com,
in the coder file that is not in the master. In that case, it will report the message "**
Unmatched tiers found around lines:" to inform the user that there are tiers in the coder
file that are not in the master file and that have not been specified by the user to be added
with the +t%com option. This message is just an FYI.
RELY +a will only add tiers from the coder file that are missing from the master file
and are specified with the +t option. At the same time, it will report if there are some other
tiers in the coder file that were not specified with the +t option and that are also missing
from the master file. In other words, the coder file must be a subset of the master file, with
only the extra tiers that users would want to add to the master file. However, possibly not
all those tiers are supposed to be added, and that is what the +t option is for.
If you want to conduct multiple runs with RELY, looking at different speakers and
different coding lines using the +c and +t switches, then you may also want to use the +2
switch to create differently named files from each run of RELY.
7.23 SCRIPT
The SCRIPT command is useful if fixed scripts are used in clinical research. It will
compare a participant’s performance to that of the model. To run SCRIPT, you must first
prepare a Model Script and a Participant’s Script.
2. Run CHECK (using esc-L) to verify that CHAT format is accurate and run mor +xl
*.cha to make sure the words in the file are all recognizable. Run MOR, POST, and
CHECK. Put this file in your CLAN lib folder in the folder with the Participant’s files
you will be comparing to it.
The output will include an .xls file and a .cex file. The .xls file provides the following
information for both the model script and the Participant’s script production: TIMDUR
and # words produced. It provides the following information on the Participant’s script
only: # words correct, % words correct, # words omitted, % words omitted, # words added,
# recognizable errors, # unrecognizable errors, # utterances with xxx, # utterances with 0.
Unrecognizable errors are those transcribed as xxx or coded as unknown neologistic or
semantic errors ([* n:uk] and [* s:uk]). (See the chapter on Error Coding in the CHAT
manual.) All other errors include target replacements and are considered recognizable.
The .cex file provides the following information for the Participant’s script production:
list of omitted words (with part of speech and bound morpheme), list of added words, and
list of errors (error production, intended word if known, error code and frequency info).
7.23.4 Variations
If you want to produce output for all the CHAT files in a folder you would use this
command:
script +t*PAR +smodel.cha *.cha +u
The +u switch will list the results for each CHAT file instead of individual .cex and .xls
files.
The default mode for this command is to INCLUDE target replacements for errors
judged to be close approximations (lake [: like] [* p:w]) and EXCLUDE revisions and
retracings (anything coded with [/] or [//]). Both of those defaults can be changed by adding
switches to the command line:
+r5 excludes target replacements
+r6 includes repetitions and revisions
7.24 TIMEDUR
The TIMEDUR program computes the duration of the pauses between speakers and
the duration of overlaps. This program requires sound bullets at the ends of utterances or
lines created through sonic CHAT. The data is output in a form that is intended for export
to a spreadsheet program. Columns labeled with the speaker’s ID indicate the length of the
utterance. Columns labeled with two speaker ID’s, such as FAT-ROS, indicate the length
of the pause between the end of the utterance of the first speaker and the beginning of the
utterance of the next speaker. Negative values in these columns indicate overlaps.
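The pause and overlap computation can be sketched as follows, assuming each utterance supplies its speaker plus the start and end times (in milliseconds) from its sound bullet; the speakers and times are invented, and this is an illustration of the logic rather than TIMEDUR's code:

```python
def gaps(utterances):
    """Return (SPK1-SPK2, gap in ms) at each change of speaker; a negative
    gap means the second speaker started before the first finished."""
    out = []
    for (s1, _, end1), (s2, start2, _) in zip(utterances, utterances[1:]):
        if s1 != s2:
            out.append((f"{s1}-{s2}", start2 - end1))
    return out

# Invented bullets: (speaker, start ms, end ms).
utts = [("FAT", 0, 1200), ("ROS", 1500, 2400), ("FAT", 2300, 3000)]
print(gaps(utts))   # → [('FAT-ROS', 300), ('ROS-FAT', -100)]
```

The first value is a 300 ms pause before ROS replies; the second is negative because FAT starts 100 ms before ROS finishes, i.e. an overlap.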
The basic output format of TIMEDUR gives a profile of durations for all speakers
through the whole file. For a more succinct summary of durations for a given speaker, use
a command with the +t switch, such as:
timedur +d1 +d +t*PAR *.cha
This command creates a summary of time durations across files for just PAR. In effect,
it treats the +u switch as the default.
+d outputs default results in SPREADSHEET format
+d1 outputs ratio of words and utterances over time duration
+d10 outputs the +d1 results in SPREADSHEET format
7.25 VOCD
The VOCD command was written by Gerard McKee of the Department of Computer
Science, The University of Reading. The research project supporting this work was funded
by grants from the Research Endowment Trust Fund of The University of Reading and the
Economic and Social Research Council (Grant no R000221995) to D. D. Malvern and B.
J. Richards, School of Education, The University of Reading, Bulmershe Court, Reading,
England RG6 1HY. The complete description of VOCD can be found in: Malvern, D.,
Richards, B., Chipere, N., & Durán, P. (2004). Lexical diversity and language development.
New York: Palgrave Macmillan.
Measurements of vocabulary diversity are frequently needed in child language research
and other clinical and linguistic fields. In the past, measures were based on the ratio of
different words (Types) to the total number of words (Tokens), known as the type–token
ratio (TTR). Unfortunately, such measures, including mathematical transformations of the
TTR such as Root TTR, are functions of the number of tokens in the transcript or language
sample — samples containing larger numbers of tokens give lower values for TTR and
vice versa. This problem has distorted research findings. Previous attempts to overcome
the problem, for example by standardizing the number of tokens to be analyzed from each
child, have failed to ensure that measures are comparable across researchers who use
different baselines of tokens, and inevitably waste data in reducing analyses to the size of
the smallest sample.
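The dependence of TTR on sample size is easy to demonstrate. The short simulation below (not part of CLAN; the Zipf-style lexicon is an illustrative assumption) draws progressively larger samples from the same vocabulary and shows the TTR falling as tokens accumulate:

```python
import random

def ttr(tokens):
    """Type-token ratio: number of distinct words over total words."""
    return len(set(tokens)) / len(tokens)

random.seed(0)
# Toy Zipf-like lexicon: word i is drawn with weight proportional to 1/i.
lexicon = [f"w{i}" for i in range(1, 2001)]
weights = [1.0 / i for i in range(1, 2001)]

for n in (50, 200, 800, 3200):
    sample = random.choices(lexicon, weights=weights, k=n)
    print(n, round(ttr(sample), 3))  # TTR shrinks as the sample grows
```

Because longer samples keep re-using the same high-frequency words, raw TTR penalizes longer samples; this is the flaw that D is designed to avoid.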
The approach taken in the VOCD program is based on an analysis of the probability of
new vocabulary being introduced into longer and longer samples of speech or writing. This
probability yields a mathematical model of how TTR varies with token size. By comparing
the mathematical model with empirical data in a transcript, VOCD provides a new measure
of vocabulary diversity called D. The measure has three advantages: it is not a function of
the number of words in the sample; it uses all the data available; and it is more informative,
because it represents how the TTR varies over a range of token size. The measure is based
on the TTR versus token curve calculated from data for the transcript as a whole, rather
than a particular TTR value on it.
D has been shown to be superior to previous measures in both avoiding the inherent
flaw in raw TTR with varying sample sizes and in discriminating across a wide range of
language learners and users (Malvern, Richards, Chipere, & Durán, 2004).
7.25.1 Origin of the Measure
The relationship between TTR and token size was modeled mathematically so that the
characteristics of the curve for a transcript yield a valid measure of vocabulary diversity.
Various probabilistic models were developed and investigated to arrive at a model
containing only one parameter which increases with increasing diversity and falls into a
range suitable for discriminating among the range of transcripts found in various language
studies. The model chosen is derived from a simplification of Sichel’s (1986) type–token
characteristic curve and is in the form of an equation containing the parameter D. This
equation yields a family of curves with the same general and appropriate shape, with
different values for the parameter D distinguishing different members of this family (Malvern
& Richards, 1997a). In the model, D itself is used directly as an index of lexical diversity.
To calculate D from a transcript, the VOCD program first plots the empirical TTR
versus tokens curve for the speaker. It derives each point on the curve from an average of
100 trials on subsamples of words of the token size for that point. The subsamples are made
up of words randomly chosen (without replacement) from throughout the transcript. The
program then finds the best fit between the theoretical model and the empirical data by a
curve-fitting procedure which adjusts the value of the parameter (D) in the equation until
a match is obtained between the actual curve for the transcript and the closest member of
the family of curves represented by the mathematical model. This value of the parameter
for best fit is the index of lexical diversity. High values of D reflect a high level of lexical
diversity and lower diversity produces lower values of D.
The validity of D has been the subject of extensive investigation (Malvern & Richards,
1997a, 1997b; Malvern et al., 2004; Richards & Malvern, 1996) on samples of child
language, children with SLI, children learning French as a foreign language, adult learners
of English as a second language, and academic writing. In these validation trials, the
empirical TTR versus token curves for a total of 162 transcripts from five corpora covering
ages from 24 months to adult, two languages and a variety of settings, all fitted the model.
The model produced consistent values for D which, unlike TTR and even Mean Segmental
TTR (MSTTR) (Richards & Malvern, 1996, pp. 35-38), correlated well with other well
validated measures of language. These five corpora also provide useful indications of the
scale for D.
7.25.2 Calculation of D
In calculating D, VOCD uses random sampling of tokens in plotting the curve of TTR
against increasing token size for the transcript under investigation. Random sampling has
two advantages over sequential sampling. Firstly, it matches the assumptions underlying
the probabilistic model. Secondly, it avoids the problem of the curve being distorted by the
clustering of the same vocabulary items at points in the transcript.
In practice, each empirical point on the curve is calculated from averaging the TTRs of
100 trials on subsamples consisting of the number of tokens for that point, drawn at random
from throughout the transcripts. This default number was found by experimentation and
balanced the wish to have as many trials as possible with the desire for the program to run
reasonably quickly. The run time has not been reduced at the expense of reliability,
however, as it was found that taking 100 trials for each point on the curve produced consistency
in the values output for D without unacceptable delays.
Which part of the curve is used to calculate D is crucial. First, to have subsamples to
average for the final point on the curve, the final value of N (the number of tokens in a
subsample) cannot be as large as the transcript itself. Moreover, transcripts vary hugely in
total token count. Second, the equation approximates Sichel’s (1986) model and applies
with greater accuracy at lower numbers of tokens. In an extensive set of trials, D has been
calculated over different parts of the curve to find a portion for which the approximation
held good and averaging worked well. Based on these trials, the default is for the curve to
be drawn and fitted for N=35 to N=50 tokens in steps of 1 token. Each of these points is
calculated from averaging 100 subsamples, each drawn from the whole of the transcript.
Although only a relatively small part of the curve is fitted, it uses all the information
available in the transcript. This also has the advantage of calculating D from a standard
part of the curve for all transcripts regardless of their total size, further providing for
reliable comparisons between subjects and between the work of different researchers.
The procedure depends on finding the best fit between the empirical and theoretically
derived curves by the least square difference method. Extensive testing confirmed that the
best fit procedure was valid and was reliably finding a unique minimum at the least square
difference.
As the points on the curve are averages of random samples, a slightly different value
of D is to be expected each time the program is run. Tests showed that with the defaults
chosen these differences are relatively small, but consistency was improved by VOCD
calculating D three times by default and giving the average value as output.
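The procedure just described can be sketched in a few lines of Python. This is an illustration of the algorithm, not CLAN's actual implementation: the model equation is the simplified Sichel-type curve TTR = (D/N)(√(1 + 2N/D) − 1), and a coarse grid search stands in for the real least-squares curve-fitting routine:

```python
import random
from statistics import mean

def model_ttr(n, d):
    """Simplified Sichel-type curve relating TTR to subsample size N and diversity D."""
    return (d / n) * ((1 + 2 * n / d) ** 0.5 - 1)

def empirical_curve(tokens, sizes=range(35, 51), trials=100):
    """Average TTR of `trials` random subsamples (without replacement) per size,
    for the default N = 35..50 in steps of 1 token."""
    return {n: mean(len(set(random.sample(tokens, n))) / n for _ in range(trials))
            for n in sizes}

def fit_d(curve):
    """Least-squares fit of D over a coarse grid (0.1 steps, 1 to 200)."""
    grid = [d / 10 for d in range(10, 2001)]
    return min(grid, key=lambda d: sum((model_ttr(n, d) - t) ** 2
                                       for n, t in curve.items()))

def vocd_d(tokens, runs=3):
    """As in VOCD, average three independent estimates of D."""
    return mean(fit_d(empirical_curve(tokens)) for _ in range(runs))
```

A highly repetitive sample yields a small fitted D, while a fully diverse one pushes the fit toward the top of the grid, mirroring the behavior described above.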
You can best understand the syntax of the +sm switch by typing:
vocd +sm
In the CLAN output window, you will then see a description of the various components of
the +sm switch, along with example usages. Using the +sm version of the search
string makes sure that VOCD is running from the %mor line.
To illustrate the functioning of VOCD, we can use a command that examines the child’s
output in the file 68.cha in /examples/transcripts/ne32. The command for doing this is:
vocd +t*CHI +sm;*,o% 68.cha
To also exclude affixes and neologisms (unintelligible words are already excluded from
this analysis), use:
vocd +t*CHI +sm;*,o% -sm|neo +f *.cha
The first word class entered will be the numerator and the second will
be the denominator.
+gnS: compute "limiting type-type ratio" S=NUMERATOR
-gnS: compute "limiting type-type ratio" S=NUMERATOR
+gdS: compute "limiting type-type ratio" S=DENOMINATOR
-gdS: compute "limiting type-type ratio" S=DENOMINATOR
7.26 WDLEN
The WDLEN program tabulates the lengths of words, utterances, and turns. The basic
command is:
wdlen sample.cha
The output from running this on the sample.cha file will be as displayed here:
*CHI: 3 1 1 0 0 0 0 0 0 0 1.60
*MOT: 0 1 1 1 2 2 0 0 0 1 3.77
-------
Number of single turns of each of these lengths in utterances
lengths: 1 2 Mean
*CHI: 0 0 0.00
*MOT: 0 1 2.00
-------
Number of single turns of each of these lengths in words
lengths: 1 2 3 4 5 6 7 8 9 10 11 Mean
*CHI: 0 0 0 0 0 0 0 0 0 0 0 0.00
*MOT: 0 0 0 0 0 0 0 0 0 0 1 11.00
-------
Number of words of each of these morpheme lengths
lengths: 1 2 Mean
*CHI: 7 1 1.12
*MOT: 35 7 1.16
-------
Number of utterances of each of these lengths in morphemes
lengths: 1 2 3 4 5 6 7 8 9 10 11 12 Mean
*CHI: 3 0 2 0 0 0 0 0 0 0 0 0 1.80
*MOT: 0 0 2 0 1 2 2 0 0 0 0 1 4.46
The first four analyses are computed from the main line. For these, the default value
of the +r5 switch is shifted to “no replacement” so that word length is judged from the
actual surface word produced. Also, the default treatment of forms with omitted material
uses +r3, rather than the usual +r1. Only alphanumeric characters are counted and the
forms xxx, yyy, and www are excluded, as are forms beginning with & or 0 and any
material in comments. The last two analyses are computed from the %mor line. There,
the forms with xxx, yyy, or www are also excluded. For the segments of WDLEN that run
off the %mor line, you should use the form of the +/-s switch that conforms with the syntax
of the %mor line. This form uses +sm, rather than something like +s"n|*". You can best
understand the syntax of +sm by typing:
vocd +sm
The WDLEN command allows for a maximum of 100 letters per word and 100 words
or morphemes per utterance. If your input exceeds these limits, you will receive an error
message. The only option unique to WDLEN is +d, which allows you to output the results in
a format that can be opened directly from Excel. Information regarding the additional
options shared across commands can be found in the chapter on Options.
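As a rough illustration of the kind of tabulation WDLEN performs on the main line (a sketch only; the real program applies the full CHAT exclusion rules described above), the following counts letter lengths of words per speaker and computes the mean:

```python
from collections import Counter

def word_length_profile(utterances):
    """Tabulate word lengths in letters per speaker, WDLEN-style.
    utterances: iterable of (speaker, utterance_text) pairs."""
    profiles = {}
    for speaker, text in utterances:
        # Skip xxx/yyy/www and forms beginning with & or 0, as WDLEN does.
        words = [w for w in text.split()
                 if w not in ("xxx", "yyy", "www")
                 and not w.startswith(("&", "0"))]
        # Only alphanumeric characters count toward word length.
        lengths = [sum(ch.isalnum() for ch in w) for w in words]
        profiles.setdefault(speaker, Counter()).update(lengths)
    return {spk: (dict(sorted(c.items())),
                  sum(k * v for k, v in c.items()) / sum(c.values()))
            for spk, c in profiles.items()}
```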
8 Profiling Commands
This section describes CLAN's profiling commands. They include:
1. C-NNLA: Northwestern Narrative Language Analysis
2. C-QPA: Quantitative Production Analysis
3. CORELEX: Core lexicon analysis for 5 AphasiaBank tasks
4. DSS: Developmental Sentence Score
5. EVAL: computation of a wide range of indices for aphasia
6. FluCalc: computation of a wide range of indices for stuttering
7. IPSyn: Index of Productive Syntax
8. KIDEVAL: computation of a wide range of indices for child language
9. MORTABLE: computation of occurrences of grammatical morphemes
10. SUGAR: Sampling Utterances and Grammatical Analysis Revised
8.1 C-NNLA
This command provides an automatic computation of the Northwestern Narrative
Language Analysis profile (Thompson et al., 1995). C-NNLA has been designed to
compute measures in accordance with the rules provided in the NNLA manual. Although
a few individual lexical codes are different from those given in the NNLA manual, all
automatically computed measures have been shown to have an accuracy comparable to
that achieved manually by highly trained and experienced NNLA coders. Currently the
program is implemented for English only. The command depends on the presence and
accuracy of %mor and %gra lines, as described in the MOR manual.
2. Utterance level coding. Two codes are needed to mark grammatically flawed [+
gram] and semantically flawed [+ sem] utterances, as per the NNLA manual.
*PAR: she was a really nice dress. [+ sem]
*PAR: he has a wonderful time. [+ sem] (talking about Cinderella)
*PAR: looking at the clock. [+ gram]
*PAR: it is just stepmother and three stepsisters. [+ gram]
This command will generate one Excel spreadsheet with all outcome measures. (Note: the
.xls file is actually a CSV text file, so it is advisable to save it as an .xlsx file in Excel.)
Following is a list of the values computed in C-NNLA. The first 12 fields come from
the CHAT @ID header for the selected speaker. To match the Excel columns to the numbers
in the following fields, you can select R1C1 column labeling in Excel. For Windows, this is
in Options/Calculation. For Mac, it is in Preferences/Calculation, where it says “Use R1C1
reference style”.
1. filename
2. language
3. corpus name
4. code for the speaker
5. age
6. sex
7. group
8. race
9. SES
10. role of the speaker
11. education
12. custom_field: score on the Western Aphasia Battery (WAB).
13. duration: total time of the sample in seconds. Note: If the transcript is not linked,
this will not be calculated. An alternative is to add a TIME DURATION line to the
ID lines in the transcript (see the TIMEDUR section in this manual).
14. words per minute: total words divided by total time for speaker
15. total utterances: # of utterances
16. total words: # of words (tokens)
17. MLU words: MLU in words (excludes utterances with any unintelligible content
transcribed as xxx)
18. open-class: # of open-class words -- all nouns, all verbs excluding auxiliaries and
modals, all adjectives, all adverbs
19. % open-class/all words
20. closed-class words: # of closed-class words -- all other words besides open-class
words, with the exception of words coded as communicators (e.g., “yeah”) and
onomatopoeia (e.g., “woofwoof”)
21. % closed-class words/all words
22. open/closed ratio
23. nouns: # of nouns
24. % nouns/all words
25. verbs: # of verbs, including copulas and participles
26. % verbs/all words
27. noun/verb: ratio of nouns to verbs
28. adj: # of adjectives
29. adv: # of adverbs
30. det: # of determiners (det:art and det:dem)
31. pro: # of pronouns (excluding "wh" interrogative pronouns and "wh" relative
pronouns) and possessive determiners
32. aux: # of auxiliaries
33. conj: # of conjunctions (excluding "wh" conjunctions) and coordinators
34. complementizers: # of complementizers
35. modals: # of modals and modal auxiliaries
36. prep: # of prepositions
37. negation markers: # of negatives and "no" communicators
38. infinitival markers: # of infinitives
39. quantifiers: # of quantifiers, numbers (det:num), and post (e.g., "all", "both")
40. wh-words: # of "wh" relative pronouns, "wh" interrogative pronouns, "wh"
conjunctions, and interrogative determiners
41. comparative suffixes: # of words with -CP
42. superlative suffixes: # of words with -SP
43. possessive markers: # of words with -POSS
44. regular plural markers: # of words with -PL
45. irregular plural forms: # of words with &PL
46. 3rd person present tense markers: # of words with -3S
47. regular past tense markers: # of words with -PAST
48. irregular past tense markers: # of words with &PAST
49. regular perfect aspect markers: # of words with -PASTP
50. irregular perfect participles: # of words with &PASTP
51. progressive aspect markers: # of words with -PRESP excluding gerunds (nouns)
52. % correct regular inflection: numerator = all regular inflected verbs, copulas,
participles, auxiliaries, and "does" modal that do not have any morphological error
codes next to them; denominator = all regular inflected verbs, copulas, participles,
auxiliaries, and "does" modal
53. % correct irregular inflection: numerator = all irregular inflected verbs,
copulas (except for "is"), participles, auxiliaries (except for "is" and "are"), and "did"
modal that do not have any morphological error codes next to them; denominator = all
irregular inflected verbs, copulas (except for "is"), participles, auxiliaries (except for
"is" and "are"), and "did" modal
54. % sentences produced: numerator = utterances that have at least one verb, copula,
modal, or participle; denominator = all utterances counted on speaker tier except
non-word utterances
55. % sentences with correct syntax, semantics: numerator = utterances that have at least
one verb, copula, modal, or participle and do not have a [+ gram] or [+ sem]
post-code; denominator = all sentences (numerator from % sentences produced)
56. % sentences with flawed syntax: numerator = utterances that have a [+ gram]
post-code and have at least one verb, copula, modal, or participle; denominator = all
sentences (numerator from % sentences produced)
57. % sentences with flawed semantics: numerator = utterances that have a [+ sem]
post-code and have at least one verb, copula, modal, or participle; denominator = all
sentences (numerator from % sentences produced)
58. sentence complexity ratio: numerator = # of sentences that have at least 1 verb, copula,
modal, or participle and have at least one CSUBJ, COMP, CPRED, CPOBJ, COBJ,
XJCT, CJCT, CMOD, XMOD on the %gra tier; denominator = sentences that have at
least one verb, copula, modal, or participle and do not have any of those codes on the
%gra tier (Note: there are some additional rules about this computation that can be
found at the C-NNLA link at the AphasiaBank website in the Discourse Analysis
section)
59. # embedded clauses/sentence: numerator = # sentences with one embedded clause +
# of sentences with two embedded clauses x 2 + # of sentences with three embedded
clauses x 3, etc.; denominator = # of sentences with zero embedded clauses + # of
sentences with one embedded clause + # of sentences with two embedded clauses + # of
sentences with three embedded clauses, etc. For details on how embedded clauses are counted on the
%gra tier, see the C-NNLA link at the AphasiaBank website in the Discourse Analysis
section.
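The arithmetic behind measures 58 and 59 can be made explicit. In this sketch (the function names are mine; identifying the clauses themselves requires the %gra tier rules described above), each sentence is represented only by its embedded-clause count:

```python
def embedded_clauses_per_sentence(clause_counts):
    """C-NNLA measure 59: weighted embedded-clause total over all sentences.
    clause_counts: embedded-clause count for each sentence, e.g. [0, 1, 2, 0]."""
    return sum(clause_counts) / len(clause_counts)

def sentence_complexity_ratio(clause_counts):
    """C-NNLA measure 58: sentences with at least one embedding code on %gra
    over sentences with none."""
    complex_n = sum(1 for c in clause_counts if c > 0)
    simple_n = sum(1 for c in clause_counts if c == 0)
    return complex_n / simple_n
```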
8.2 C-QPA
This command provides an automatic computation of the Quantitative Production
Analysis profile (Berndt, Wayland, Rochon, Saffran, & Schwartz, 2000). C-QPA has been
designed to compute measures in accordance with the rules set out by the QPA authors.
Currently the program is implemented for English only. Use of the command requires the
presence of accurate %mor and %gra lines.
3. Required subjects. Mark any missing subjects in sentences with a [+ 0subj] post-
code. Do not use this code for imperatives like "clean the floor". Example:
*PAR: make her his wife. [+ 0subj]
The command will generate two .xls spreadsheets: the sentence-by-sentence analysis
spreadsheet, and the summary spreadsheet. (Note: the .xls files are text files, so it is
advisable to save them as .xlsx files in Excel.) To match the Excel to the numbers in the
following fields you can select R1C1 column labeling in Excel. For Windows, this in in
Options/Calculation. For Mac, it is in Preferences/Calculation and it says “Use R1C1
reference style”.
Analysis Spreadsheet
1. utterance – actual speaker utterance
2. sentence utterance (1,0) – 1 for sentences that have a noun (SUBJ) and main verb
(ROOT); 1 for imperative sentences (ROOT without [+ 0subj] post-code); 0 for
Summary Spreadsheet
Following is a list of the values provided in the summary spreadsheet for C-QPA. To
match the Excel columns to the numbers in the following fields, you can select R1C1 column
labeling in Excel. For Windows, this is in Options/Calculation. For Mac, it is in
Preferences/Calculation, where it says “Use R1C1 reference style”. The first 12 fields in this
output come from the CHAT @ID header for the selected speaker. Several of these
measures are the same as the ones in the Analysis Spreadsheet above and will not be
redefined here.
1. filename
2. language
3. corpus name
4. code for the speaker
5. age
6. sex
7. group
8. race
9. SES
10. role of the speaker
11. education
12. custom_field: score on the Western Aphasia Battery (WAB).
13. duration -- total time of the sample in seconds. Note: If the transcript is not linked,
this will not be calculated. An alternative is to add a TIME DURATION line to the
ID lines in the transcript (see the TIMEDUR section in the CHAT manual).
14. # narrative words
15. # words per minute -- total narrative words divided by total time for speaker
16. # open class words
17. # closed class words -- all other parts-of-speech that are not open class, excluding
onomatopoeia and communicators
18. proportion closed class words -- # closed class words divided by # narrative words
19. nouns
20. # NRDs (nouns requiring determiners)
21. # NRDs w/determiners
22. DET index -- # NRDs with determiners divided by # NRDs
23. # pronouns
24. proportion pronouns -- # pronouns divided by # pronouns plus # nouns
25. # verbs
26. proportion verbs -- # of verbs divided by # of pronouns plus # of nouns
27. # matrix verbs
28. total aux score
29. aux complexity -- [total aux score divided by # of matrix verbs] minus 1
30. # Ss -- # of the following: utterances that include a SUBJ linked to ROOT on the
%gra tier and are not incomplete utterances (trailed off, interrupted); utterances
with ROOT as the first or only code on the %gra tier or the first code following
BEGP (with no 0subj post-code)
31. # words in Ss
32. proportion words in Ss -- # of words in sentences divided by # of narrative words
33. # well-formed Ss
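Several of the summary measures above are simple ratios of counts. A sketch of three of them (the function names are mine, not C-QPA output labels):

```python
def det_index(nrds_with_det, nrds):
    """Measure 22: NRDs with determiners divided by all NRDs."""
    return nrds_with_det / nrds

def proportion_pronouns(pronouns, nouns):
    """Measure 24: pronouns divided by pronouns plus nouns."""
    return pronouns / (pronouns + nouns)

def aux_complexity(total_aux_score, matrix_verbs):
    """Measure 29: (total aux score divided by # of matrix verbs) minus 1."""
    return total_aux_score / matrix_verbs - 1
```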
8.3 CORELEX
This command provides automatic computation of core lexicon lists for the five
AphasiaBank Discourse Protocol tasks, as published in Dalton et al. (2020). The
command will automatically extract the appropriate gem (task) and create a spreadsheet
showing which words from the core lexicon were used. The "Types" column in the
spreadsheet will show how many words from the list were used. The other columns will
show which specific words from the list were used and how frequently. Be sure to save
the spreadsheet as an .xlsx Workbook in Excel.
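The counting that CORELEX reports can be sketched as follows (an illustration only; the real program extracts the appropriate gem and reads lemmas from the %mor tier):

```python
from collections import Counter

def corelex_counts(core_lexicon, sample_lemmas):
    """Count which core-lexicon lemmas occur in a sample, and how often.
    core_lexicon: the published word list for a task;
    sample_lemmas: lemmas taken from the participant's %mor line."""
    core = set(core_lexicon)
    used = Counter(lemma for lemma in sample_lemmas if lemma in core)
    # "Types" is the number of distinct list words used.
    return {"Types": len(used), "counts": dict(used)}
```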
If you want to compare your results to the norms reported in Dalton et al. (2020), you
need to do two steps before running the CORELEX command because the norms included
revised words and excluded target replacements for semantic paraphasias. The CORELEX
program counts lemmas on the %mor tier (to capture different forms of a word — e.g.,
"was" for "be"), and the %mor tier excludes revisions and includes target replacements. To
fix that (include revised words, exclude target replacements):
1. Run this command on your CHAT file(s) -- chstring +q1 filename.cha -- to remove
revision codes in the transcript ([//]) and replace target replacements for semantic
paraphasias.
8.4 DSS
This program provides an automatic computation of the Developmental Sentence Score
(DSS) (Lee, 1974). This score is based on the assignment of scores for a variety of
syntactic, morphological, and lexical structures across eight grammatical domains. The
computation of DSS relies on the part of speech (POS) analysis of the %mor tier.
on morphosyntactic analysis. Once the disambiguated %mor is created, the user can run
DSS to compute the Developmental Sentence Analysis. A basic DSS command has this
shape:
dss +b*CHI +le *.cha
The items scored by DSSJ are listed below. The numbers indicate the scores assigned
for each type of usage. The morphological codes refer to the codes used in JMOR06 and
WAKACHI2002 v.5.
Verb Final Inflection (Vlast)
1 PAST (tabeta), PRES (taberu), IMP:te (tabete!)
2 HORT (tabeyoo), CONN (tabete…)
3 COND:tara (tabetara)
4 CONN&wa (tabecha), GER (tabe), NEG&IMP:de (tabenaide!)
5 IMP (tabero), NEG&OBL (tabenakucha)
Verb Middle Inflection (Vmid)
1 COMPL (tabechau), NEG (tabenai), ASP/sub|i (tabeteru/tabete iru)
2 DESID (tabetai), POT (taberareru/tabereru), POL (tabemasu),
sub|ku (tabete kuru), sub|ik (tabete iku)
3 sub|mi (tabete miru), sub|ar (tabete aru), sub|ok (tabete oku),
sub|age (tabete ageru)
4 PASS (taberareru)
5 sub|moraw (tabete morau), sub|kure (tabete kureru)
Adjective Inflection (ADJ)
1 A-PRES (oishii)
3 A-NEG- (oishikunai), A-ADV (oishiku)
4 A-PAST (oishikatta)
Copula (COP)
1 da&PRES (da)
3 de&wa-NEG-PRES (janai), de&CONN (gakusee de)
4 da-PAST (datta), da&PRES:na (gakusee na no), ni&ADV (gakusee ni naru)
5 de&CONN&wa (kami ja dame)
Adjectival Nouns + Copula (AN+COP)
4 AN+da&PRES (kiree da), AN+ni&ADV (kiree ni), AN+da&PRES:na (kiree na)
Conjunctive particles (CONJ ptl)
2 kara=causal (kiree da kara)
3 to (taberu to ii), kara=temporal (kaette kara taberu), kedo (tabeta kedo)
4 shi (taberu shi), noni (tabeta noni)
Conjunctions (CONJ)
4 datte (datte tabeta mon), ja (ja taberu), de/sorede (de tabeta), dakara (dakara
tabeta)
5 demo (demo tabechatta)
Elaborated Noun Phrases (NP)
2 N+no+(N) (ringo no e), A+N (oishii ringo)
3 N+to+N (ringo to nashi), Adn+N (ironna ringo), V+N (tabeta ringo)
5 AN+na+N (kiree na ringo), V+SNR (tabeta no ga chigau)
Formal Nouns (FML)
POINTS: A1
This condition checks for the presence of the pronouns it, that, or this and assigns one
A1 point if one is located. The pattern matching for the Focus uses the syntax of a
COMBO search pattern. This means that the asterisk is a wild card for “anything”; the plus
means “or”, and the up arrow means “followed by”. DSS goes through the sentence one
word at a time. For each word, it checks for a match across all the rules. Within a rule,
DSS checks across conditions in order from top to bottom. Once a match is found, it adds
the points for that match and then moves on to the next word. This means that, if a
condition assigning fewer points could block the application of a condition assigning more
points, you need to order the condition assigning more points before the condition
assigning fewer points. Specifically, the C1 condition for main verbs is ordered after C2
and C7 for this reason. If there is no blocking relation between conditions, then you do not
have to worry about condition ordering.
The Japanese implementation of DSS differs from the English implementation in one
important way. In Japanese, after a match occurs, no more rules are searched and the
processor moves directly on to the next word. In English, on the other hand, after a match
occurs, the processor continues checking the remaining rules before moving on to the next word.
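The matching behavior described in the last two paragraphs can be sketched as a small scoring loop (an illustration, not the DSS implementation; simple predicates stand in for the COMBO-style patterns):

```python
def score_sentence(words, rules, stop_after_first_rule=False):
    """Score words against ordered rules, each a list of (predicate, points)
    conditions. Conditions assigning more points must precede any lower-scoring
    conditions they could block. stop_after_first_rule=True models the Japanese
    DSS behavior; False models the English behavior."""
    total = 0
    for word in words:
        for conditions in rules:
            for predicate, points in conditions:
                if predicate(word):
                    total += points  # first matching condition in this rule wins
                    break
            else:
                continue             # no condition matched; try the next rule
            if stop_after_first_rule:
                break                # Japanese: done with this word
    return total
```

With two rules that both match a word, the English mode accumulates points from both, while the Japanese mode stops after the first matching rule.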
Miyata, S., Hirakawa, M., Ito, K., MacWhinney, B., Oshima-Takane, Y., Otomo, K.,
Shirai, Y., Sirai, H., & Sugiura, M. (2009). Constructing a New Language Measure for
Japanese: Developmental Sentence Scoring for Japanese. In: Miyata, S. (Ed.) Development
of a Developmental Index of Japanese and its application to Speech Developmental
Disorders. Report of the Grant-in Aid for Scientific Research (B)(2006-2008) No.
18330141, Head Investigator: Susanne Miyata, Aichi Shukutoku University. 15-66.
8.5 EVAL
EVAL is designed for data collected with the AphasiaBank protocol, and EVAL-D is a variant of EVAL used for data collected with the DementiaBank
protocol. The program can be used in three ways:
1. Both EVAL and EVAL-D can analyze a participant's performance on a discourse task.
2. EVAL can analyze a participant's performance of a discourse task from the
AphasiaBank protocol and compare the results to those of a reference group from that
database. EVAL-D can do the same for DementiaBank protocol data. The resulting
spreadsheet displays the participant's analysis side-by-side with the mean scores of the
comparison group and indicates where the participant and the comparison group differ
by one or more standard deviations.
3. The programs can analyze the baseline performance, then you can re-administer and
analyze the discourse task after a period of therapy. The spreadsheet displays the pre-
and post- therapy results side-by-side, allowing a comparison of performance at
different time points.
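The comparison in the second use amounts to flagging any measure that falls one standard deviation or more from the comparison group's mean. A minimal sketch (the dictionaries of scores and norms here are hypothetical):

```python
def flag_deviations(participant, norms):
    """Return, for each measure, whether the participant differs from the
    comparison group by one or more standard deviations.
    participant: {measure: value}; norms: {measure: (group_mean, group_sd)}."""
    return {measure: abs(value - norms[measure][0]) >= norms[measure][1]
            for measure, value in participant.items()}
```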
The use of EVAL is described in tutorial screencasts available from
https://talkbank.org/screencasts. The basic EVAL command
3. Press the OK button, and the Commands window will echo the full command you
have constructed:
eval @ +t*PAR: +d"Anomic45-65" +g"Sandwich" +u
4. Press Run and CLAN will produce an output file called eval_demo.xls, as listed in the
CLAN Output window. Click three times on that last line and the results will open in
Excel. If Excel warns you about opening the files, just say “yes”. The columns list
various outputs in terms of indices and part of speech frequencies.
specifies the number of files in the AphasiaBank database that met the criteria for the
comparison database; in this case it was 25. The final line is the CLAN command that was
used to generate this spreadsheet. The second column is the ID information for the person
whose language is in the transcript.
-bS: remove S characters from the morphemes list (-b: empty morphemes list)
+cN: create database file (N == 1: eval +c1 +t"*par")
+dS: specify database keyword(s) “S”. The choices are: Anomic, Global, Broca,
Wernicke, TransSensory, TransMotor, Conduction, NotAphasicByWAB, Control, Fluent,
Nonfluent, AllAphasia
+e1: create list of database files used for comparisons
+e2: create proposition word list for each CHAT file
+g: gem tier should contain all words specified by +gS
-g: look for gems in database only
+gS: select gems which are labeled by label S
+n: gem is terminated by the next @G (default: automatic detection)
-n: gem is defined by @Bg and @Eg (default: automatic detection)
+o4: output raw values instead of percentage values
8.6 FLUCALC
The FLUCALC program works very much like KIDEVAL. It tracks the frequencies of
the various fluency indicators summarized in the section of the CHAT manual on
“Disfluency Transcription” such as retraces, blockings, and initial segment repetitions.
Like KIDEVAL, it requires the presence of a %mor tier and the selection of a certain
speaker as the target. For example, the +t*PAR switch would select utterances from PAR
for analysis. Using the tom.cha file in CLAN's examples/fluency folder, the command for
a basic analysis would be:
flucalc tom.cha +t*TOM
The output will go into a file called tom.flucalc.xls which you can open in Excel, after
accepting the warning message.
FLUCALC will perform a fluency analysis of a language sample, in both raw counts
and percentages of intended words. It will also provide a “beta” weighted disfluency value
over words, based on the formula proposed by the Illinois Stuttering Project (Yairi &
Ambrose, 1999) for computations made on syllable counts.
You must use fluency codes specified in the CHAT manual and the Clinician’s Guide
to CLAN. You must run MOR for the appropriate language to then run FluCalc. The same
speech/language sample can be used for both language sample analysis (KidEval, EVAL)
and fluency appraisal via FluCalc. The output is in CSV (comma-separated values)
spreadsheet format. Spreadsheets in this format can be imported directly to Excel. When
opening them, Excel will warn you that the spreadsheet is not in Excel XLS format, but
you can just ignore that warning and proceed to open and analyze using Excel.
The user can select to have FLUCALC use either words or syllables as the denominator
for the computation of percentages (indicated below by %). For indices that look at
intended words, the forms on the %mor line are used, because that line excludes repetitions
and nonwords. When counting pauses, FLUCALC ignores pauses that are utterance
external. This means that pauses at the beginnings and ends of utterances are not counted,
because they often represent features of conversational exchange, rather than purer
measures of disfluency.
By default, phonological fragments are considered typical disfluencies (TD). This is
appropriate for children. However, for adults, it may be better to consider phonological
fragments as stutter-like disfluencies (SLD). This is because fragments may represent
word avoidance behaviors in adults who stutter. To treat phonological fragments as SLD,
you can add the +c1 switch to the FLUCALC command. This option moves sound
fragments from the TD computations to the SLD computations.
If the sample has been processed for ASR (automatic speech recognition) using the
batchalign system at https://github.com/talkbank, there will be a new %wor line that can be
used when the +a switch is selected to derive more accurate times for word and pause
duration. These additional times are then summarized in 9 columns at the end of the
output.
25. %PWR
26. #PWR-RU: part-word repetition units, sometimes called iterations; that is, the
actual number of repetitions in a part-word repetition unit. This column totals all RUs
seen in the sample, for use in the weighted disfluency score
27. %PWR-RU
28. #WWR: whole word repetition
29. %WWR
30. #WWR-mono: monosyllabic repetition
31. %WWR-mono
32. #WWR-RU: repetition units; please see PWR above
33. %WWR-RU
34. #WWR-RU-mono: repetition units; please see PWR above
35. %WWR-RU-mono
36. mean RU = (PWR-RU + WWR-RU) / (PWR+WWR)
37. #Phonological fragment: these are best viewed as abandoned word attempts, e.g.
&+fr- tadpole, where the speaker appears to change word choices; this code originated
in the CLAN programs.
38. %Phonological fragment
39. #Phrase repetitions
40. %Phrase repetitions
41. #Word revisions
42. %Word revisions
43. #Phrase revisions
44. %Phrase revisions
45. #Pauses: only utterance-internal unfilled pauses are counted
46. %Pauses
47. #Filled pauses
48. %Filled pauses
49. #TD: typical disfluencies; this is the sum of phrase repetitions, word revisions, phrase
revisions, pause counts, phonological fragments, and filled pauses.
50. %TD: total typical disfluencies over total words or total syllables.
51. #SLD: stutter-like disfluencies; this is the sum of prolongations, broken words, blocks,
PWRs, and monosyllabic WWRs.
52. %SLD: proportion of stutter-like disfluencies over total intended words
53. #Total (SLD+TD): this sums all forms of disfluency, both stutter-like and typical, seen
in the sample
54. %Total (SLD+TD)
55. SLD Ratio: SLD/(SLD+TD)
56. Content_words_ratio: open_class_words with disfluency/total disfluencies.
This includes n, v, part, cop, adj, adv (except adv:int, which is closed)
57. Function_words_ratio: closed_class_words with disfluency/total disfluencies
This includes all items not in the content/open class, except co and on
58. Content_allwords_ratio: all content words with and without disfluency
59. Function_allwords_ratio: all function words with and without disfluency
60. Weighted SLD: This is an adapted version of the SLD formula for distinguishing
between typical disfluency and stuttering profiles in young children. It was originated
by Yairi & Ambrose (1999) and referenced against a standard sample of 100 syllables.
This formula penalizes the severity of the segment repetition profile as well as the
presence of prolonged sounds and blocks, which are virtually absent in any sample of
typically fluent speech. The formula is:
((PWR + mono-WWR) * ((PWR-RU + mono-WWR-RU)/(PWR + mono-WWR))) +
(2 * (prolongations + blocks))
61. IW_Dur: total inter-word pause duration for target speaker. FLUCALC requires
specifying just one target speaker
62. Utt_dur: total utterances duration for target speaker
63. IW_Dur / Utt_dur
64. Switch_Dur: total inter-utterance pause duration time when target utterances follow
different speakers
65. #_Switch: # of times consecutive utterances are from different speakers
66. Switch_Dur/#_Switch
67. No_Switch_dur: total inter-utterance pause when target utterances follow the same
target speaker
68. #_No_Switch: # of times consecutive utterances are from the same speaker
69. No_Switch_Dur/#_No_Switch
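The arithmetic behind several of the derived measures above (mean RU, SLD Ratio, and the weighted SLD) can be sketched in Python with hypothetical counts. This illustrates the formulas only; it is not FLUCALC's own code.

```python
# Arithmetic behind three derived FLUCALC measures, using hypothetical
# counts (this illustrates the formulas, not FLUCALC's implementation).

def mean_ru(pwr, wwr, pwr_ru, wwr_ru):
    """mean RU = (PWR-RU + WWR-RU) / (PWR + WWR)"""
    return (pwr_ru + wwr_ru) / (pwr + wwr)

def weighted_sld(pwr, mono_wwr, pwr_ru, mono_wwr_ru, prolongations, blocks):
    """Yairi & Ambrose (1999) weighted SLD:
    ((PWR + mono-WWR) * ((PWR-RU + mono-WWR-RU) / (PWR + mono-WWR)))
      + 2 * (prolongations + blocks)
    """
    reps = pwr + mono_wwr
    return reps * ((pwr_ru + mono_wwr_ru) / reps) + 2 * (prolongations + blocks)

def sld_ratio(sld, td):
    """SLD Ratio = SLD / (SLD + TD)"""
    return sld / (sld + td)

# Hypothetical sample: 4 PWRs (6 RUs), 3 monosyllabic WWRs (4 RUs),
# 1 prolongation, 1 block, and 10 typical disfluencies.
print(weighted_sld(4, 3, 6, 4, 1, 1))  # (7 * 10/7) + 2*2 = 14.0
print(sld_ratio(9, 10))                # 9/19, about 0.47
```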
8.7 IPSYN
The IPSyn Command computes the Index of Productive Syntax (Altenberg, Roberts,
& Scarborough, 2018; Scarborough, 1990). Computation of this index requires the
presence of an accurate %mor line. Currently the program is implemented only for
English. The full form of the IPSyn requires 100 acceptable utterances. However, Yang
et al. (2021) have shown that IPSyn works equally well with samples of 50 utterances. To
allow for this, IPSyn now requires 50 utterances by default and uses the new rule set
recommended by Yang et al. The basic form of the command is:
ipsyn +t*CHI +leng filename.cha
If you wish to run the classic version of IPSyn, then you should use the +o switch, which
will use the old rule set and 100 utterances by default. The classic rule set is designed to
conform to the IPSyn-R revised version of the scale from 2018 (Altenberg et al., 2018).
Inaccuracies in CLAN's 2019 version that were noted by Roberts et al. (2020) are largely
corrected in the current version.
If you wish to change the treatment of a given utterance, you can use the [+ ipe]
postcode to exclude it or the [+ ip] to include it. This exclusion code will also apply to
IPSyn inside KIDEVAL. IPSyn excludes the whole utterance if it has an [+ ipe] postcode
or if the whole utterance is just xxx, yyy, or zzz. IPSYN also excludes repeated utterances.
If an utterance has been spoken verbatim by the speaker before, then it is excluded from
analyses, unless the [+ ip] postcode is specified on that utterance. To better see what IPSYN
does, run this command on the 98.cha file in the /examples/transcripts/ne32 folder.
ipsyn +t*CHI +leng 98.cha
The output from this command will be 98.ipsyn.cex. Triple-click on that name in CLAN’s
output window and you will see how IPSYN assigned points for each relevant utterance in
that sample. This same run of the IPSYN command will also produce the file 98.ipcore.cex.
You can open that file to see which utterances were included in the IPSyn analysis.
RULENAME: N8
if
INCLUDE: $MOD ^ $N ^ $V
DIFFERENT_STEMS: >2
The notation of >2 should really be >=2, because the program interprets it as greater than
or equal to 2. Given an utterance such as my dog barks and my cat meows, this rule would
match my dog barks for the first point and my cat meows for the second point. If the
utterance were my dog barks and my dog runs, then there would not be a second point,
because there would only be one new stem. The matching strings can be anywhere in the
100-utterance sample; it is not necessary that they be in the same utterance. In this case,
the requirement that there be two or more different stems would be fulfilled, because two
of the stems are different, although one is the same. DIFFERENT_STEMS_POS sets
This rule will assign a second point if the two strings are for the sheep and besides the
sheep, but not if the two strings are for the sheep and for a sheep.
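The DIFFERENT_STEMS criterion described above can be illustrated with a small sketch. This is hypothetical code, not the actual IPSyn implementation; it treats >2 as "at least two differing stems" across two matched strings, as the manual explains.

```python
# Illustration of the DIFFERENT_STEMS criterion (not IPSyn's own code):
# a second point requires at least two positions where the %mor stems of
# the two matched strings differ.

def different_stems(match_a, match_b):
    """Count positions where the stems of two matched strings differ."""
    return sum(1 for a, b in zip(match_a, match_b) if a != b)

def earns_second_point(match_a, match_b, threshold=2):
    return different_stems(match_a, match_b) >= threshold

# "my dog barks" vs "my cat meows": dog/cat and bark/meow differ -> point
print(earns_second_point(("my", "dog", "bark"), ("my", "cat", "meow")))  # True
# "my dog barks" vs "my dog runs": only bark/run differs -> no point
print(earns_second_point(("my", "dog", "bark"), ("my", "dog", "run")))   # False
```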
On the INCLUDE line, the ^ or caret indicates that the element after the caret must
directly follow the one before. The categories here – $MOD for modifier, $N for noun,
and $V for verb – are defined in the eng.cut file.
It is also possible to have additional restrictions placed on the assignment of the points,
as in the above example from V3, where the EXCLUDE term disallows a first point for
three specific phrases: lookit this, in there, or on there. For the second point, the two
exclusions from the first point are now allowed, if neither word matches stems from the
first point.
You can see how IPSyn is assigning points by looking at the results for individual
words in the *.ipsyn.cex output file. The *.ipcore.cex file shows you which utterances
were selected for the analysis.
The six columns in these four tables encode the features of each rule. The first column
gives the rule number. The second column gives the string match for the rule. Here the plus
is used for combination, although in the actual rules the up-arrow symbol is used for this.
The third column shows that, if you credit this rule, you should also credit some other
rule(s). This is called "cascading credit." The fourth column gives an example structure
that matches the rule. The fifth column lists structures to exclude, even if they match the
string in the second column. The sixth column gives the criteria that allow for the
assignment of a second point for the rule. In this column, the word "stem" refers to the fact
that the program uses the lemma or stem from the %mor line, rather than the surface form
of the word.
Notes:
$CAT = have to, supposed to, going to, want to, got to, better
cop can also be ~cop
Notes:
Q4: count wh + v:cop ONLY as second point, unless V4 has two points
Notes:
S8 excludes: wanna feed her (V5 only); I like swim (S6 only); to ride on (no main verb)
S11, 12, 13, 14, 16, 17, 19, 20 require use of the %gra line.
S20 is ill-defined
Cascading credits for S1 are not computed, because children always get full credit on S1.
8.8 KIDEVAL
KIDEVAL is a program that provides automatic analysis of a language sample that has
been transcribed in the CHAT format. Using various components of the CLAN program,
KIDEVAL automatically computes these measures, which are entered into a spreadsheet.
The KIDEVAL spreadsheet includes columns for each measure. If the analysis operates
on multiple transcripts from the same or different children, each transcript will have values
in its own row.
If you select “do not compare to database” the dialog will change to this form:
If you select the options displayed above, CLAN will compile and use this command:
kideval @ +leng +t*CHI:
The result will be sent to an Excel file, as noted in CLAN’s output window with this line:
Output file </Applications/CLAN/examples/ne32/14.kideval.xls>
You can triple-click on that line, and it will fire up Microsoft Excel. Excel will ask you if
you really want to open the file and just say “yes”. The output (with most columns removed
to fit on this page) will look like this:
Using a similar process for Mandarin Chinese, we can select all the files in the Tong corpus
in Chinese and the result (with many columns removed) will be:
For various reasons, a file can be specifically excluded from KIDEVAL analysis by
inclusion of this line in the transcript. Once a file has this line, KIDEVAL will ignore it.
@Comment: KIDEVAL DATABASE EXCLUDE
Following is a list of the measures computed by KIDEVAL. To match the Excel columns to
the numbers in the following fields, you can select R1C1 column labeling in Excel. For
Windows, this is in Options/Calculation. For the Mac, it is in Preferences/Calculation, where
it says “Use R1C1 reference style”. The first 12 fields in this output come from the CHAT
@ID header for the selected speaker. Fields 13-15 come from the @Types header. Items
43-56 are the 14 English morphemes studied by Brown (1973).
1. Filename
2. Language
3. Corpus Name
4. Participant Code
5. Age in months
6. Sex
7. Group
8. Race
9. SES
10. Participant Role
11. Education
12. Custom field
13. Design
14. Activity
15. Group
16. Total Utts: total utterances,
17. MLU Utts: number of utterances, as used for computing MLU,
18. MLU Words: MLU in words,
19. MLU Morphemes: MLU in morphemes,
20. MLU 100 Utts: MLU of the first 100 child utterances in morphemes,
21. MLU 100 Words: MLU of the first 100 child utterances in words,
22. MLU 100 Morphemes: MLU of the first 100 child utterances in morphemes,
23. FREQ types: total word types, as used for computing FREQ
24. FREQ tokens: total word tokens,
25. FREQ TTR: type/token ratio,
26. NDW 100: number of different words in the first 100 words in the sample,
27. VOCD score: KIDEVAL will warn if the sample is too small to compute VOCD,
28. Verbs/Utt: verbs per utterance. This can be less than 1.0 for young children,
29. TD Words: total number of words for each speaker, as used for TIMEDUR
30. TD Utts: total number of utterances for each speaker (no exclusionary criteria),
31. TD Time: total duration in seconds of utterances for each speaker,
32. TD Words/Time: words per second,
33. TD Utts/Time: utterances per second,
34. Word Errors: number of words involved in errors,
35. Utt Errors: number of utterances involved in errors,
36. Retracing [//]: number of retracings,
37. Repetition [/]: number of repetitions,
38. DSS Utterances: number of DSS-eligible utterances (default 50 required),
39. DSS: Developmental Sentence Score,
40. IPSyn Utterances: number of IPSyn-eligible utterances (default 50 required),
41. IPSyn Total
42. MOR words: the number of words according to the %mor tier
43. -PRESP the present participle -ing, as in swimming.
44. in the preposition in, as in the cheese is in the bag.
45. on the preposition on, as in put it on.
46. -PL the regular plural, as in dogs.
47. &PAST the irregular past, as in fell.
48. ~poss the possessive clitic, as in John’s
49. cop the uncontractible copula as in Is Meg nice? Meg is.
50. det:art the determiner, as in the ball.
51. -PAST the regular past, as in jumped.
52. -3S the regular third person singular present, as in runs.
53. &3S the irregular third person singular present, as in does or has.
54. aux the uncontractible auxiliary, as in Is John running? Yes, he is.
55. ~cop the contracted copula, as in Meg’s tall.
56. ~aux the contracted auxiliary, as in John’s going.
57. Total non-zero MOR: This is the count of the number of Brown's morphemes that had
at least one instance. For example, if 10 of the 14 morphemes appeared, this number
would be 10.
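For readers who want to hand-check a few of the lexical measures above, the arithmetic behind MLU Words, FREQ TTR, and NDW 100 can be sketched as follows. The token lists here are invented for illustration; KIDEVAL itself derives its counts from the transcript and the %mor line.

```python
# Hand-check of three KIDEVAL-style lexical measures with an invented
# token list; KIDEVAL derives its counts from the transcript itself.

def mlu_words(utterances):
    """MLU Words: mean utterance length in words."""
    return sum(len(u) for u in utterances) / len(utterances)

def ttr(tokens):
    """FREQ TTR: word types divided by word tokens."""
    return len(set(tokens)) / len(tokens)

def ndw(tokens, window=100):
    """NDW 100: number of different words among the first `window` tokens."""
    return len(set(tokens[:window]))

utts = [["the", "dog", "runs"], ["dog", "run"], ["a", "big", "dog", "barks"]]
tokens = [w for u in utts for w in u]
print(mlu_words(utts))  # 9 words / 3 utterances = 3.0
print(ttr(tokens))      # 7 types / 9 tokens
print(ndw(tokens))      # 7
```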
The first 22 of the measures in KIDEVAL (fields 16-37) are the same for all languages.
The DSS measures (fields 38 and 39) are only computed for English and Japanese, because
only those two languages have DSS. Also, the IPSyn measures (fields 40 and 41) are only
meaningful for English, since IPSyn only exists so
far for English. For fields 43-56, the actual morphemic forms that are being tracked vary
totally from language to language. The best way to follow these is to run KIDEVAL on a
single file and then to look at the column headers for each field after mor_Words. We can
modify this set by adding or removing morphemes, based on user requests.
To run KIDEVAL, a transcript must have at least 50 utterances. If the transcript does
not contain enough utterances for a given index, N/A (not applicable) is inserted.
KIDEVAL now uses the new smaller rule set recommended by Yang et al., as described in
the section on IPSyn. For the MLU computation, sentences marked by [+ mlue] are
excluded from the analysis. For the DSS computation, sentences marked by [+ dsse] are
excluded. For the IPSyn computation, sentences marked by [+ ipe] are excluded. If a
transcript does not have time values entered, then the TIMEDUR columns (fields 30-33)
will not be meaningful.
Items separated by a comma are treated as AND; items separated by a space are treated as
OR. To include combinations of morphemes in a KIDEVAL spreadsheet, you must run a
separate FREQ program, such as this one that looks for adj+noun or noun+adj
combinations in French:
freq +sm"|adj |n" +sm"|n |adj" +d2 *.cha
This command will create an Excel output structured like that for KIDEVAL and you may
wish to cut and paste the relevant columns from that output into your overall KIDEVAL
spreadsheet.
preference folder on your computer. If you use the KIDEVAL dialog to select a
comparison database, the dialog will change to this form:
Comparisons involve comparing a single transcript with the overall comparison database.
For example, if we compare the barry.cha file in the CLAN examples folder with the
English database, we will get this output (with many columns removed):
In this output, the asterisks in row 4 indicate that this sample differs significantly from the
comparison group on several measures. The corpora that are used for the English-NA
KIDEVAL comparison databases are
Bates/Free20 Bates/Free28 Bernstein/Children
Bliss Bloom70 Bloom73
Braunwald Brown Clark
Demetras1 Demetras2 Feldman
Gathercole Gleason/Father Gleason/Mother
Hall Higginson HSLLD/HV1/TP
HSLLD/HV2/TP HSLLD/HV3/TP MacWhinney
McCune NewEngland Post
Providence Sachs Snow
Suppes Tardif Valian
VanHouten VanKleeck Warren
Weist
The corpora used for Mandarin are Tong and Zhou1. All French corpora are used for
French.
8.9 MORTABLE
MORTABLE uses the %mor line to create a frequency table of parts of speech and
affixes in a format that can be opened directly in Excel. The command line needs to include
the script file which is in the CLAN/lib/mortable folder, for example:
mortable +t*PAR +leng *.cha
Columns M-AF provide the percentage of each part of speech, e.g., adjectives, adverbs,
auxiliaries, and conjunctions. The script for these percentage calculations uses an “OR” format,
so that the data in each column is mutually exclusive. Columns AE-AS provide the
percentages of each affix. These are calculated in a non-exclusive fashion.
If you want the actual count of the items found instead of percentages, add +o4 to the
command line. If you want MORTABLE to automatically calculate cumulative totals of
parts-of-speech, you can make modifications to the eng.cut file found in
CLAN/lib/mortable. Here is an example of how to do this.
AND
# +|v,|v:* "% v,v:*"
If you wanted a cumulative total of all pronouns (including pro:indef, pro:per, pro:wh,
pro:refl, pro:poss, and pro), you could enter the following into the script file under the AND
section and you would see a column in your spreadsheet called “pro-total”:
+|pro,|pro:* "pro-total"
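The effect of that script line can be illustrated with a small sketch that counts pro and pro:* items on a %mor line. The example %mor line below is invented for illustration; MORTABLE itself is driven by the eng.cut script, not by this code.

```python
# Count %mor items whose part of speech is pro or any pro: subtype,
# mirroring the +|pro,|pro:* "pro-total" script line (illustration only;
# the example %mor line below is invented).

def pro_total(mor_words):
    def pos(word):
        return word.split("|", 1)[0]
    return sum(1 for w in mor_words
               if pos(w) == "pro" or pos(w).startswith("pro:"))

mor_line = "pro|he aux|be&3S part|run-PRESP prep|to pro:poss|my n|house".split()
print(pro_total(mor_line))  # 2: pro|he and pro:poss|my
```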
For cases where target replacements are in the transcript next to error productions with
missing morphemes (e.g., he is kick [: kicking] [* m:0ing] the ball), the EVAL and
MORTABLE programs will reflect the speaker's morphological production (e.g., v|kick)
and not count anything that was not produced (e.g., part|kick-PRESP). For cases where
target replacements are in the transcript next to error productions for superfluous
morphemes (e.g., there is one birds [: bird] [* m:+s] in the tree), the EVAL and
MORTABLE programs will not count the superfluous morphemes (e.g., n|bird-PL)
because they were not used correctly.
8.10 SUGAR
This program computes the SUGAR (Sampling Utterances and Grammatical Analysis
Revised) profile (Pavelko & Owens, 2017). For a discussion of problems with SUGAR
see (Guo, Eisenberg, Bernstein Ratner, & MacWhinney, 2018). For SUGAR, the corpus
of utterances includes the first 50 utterances. The basic format of the command is:
sugar +t*CHI filename.cha
There are just four metrics to be computed:
1. MLU-S: This measure is the same as the MLU currently in CLAN.
2. TNW: This counts the total number of words in the 50 utterances.
3. WPS: This computes (number of words) / (number of sentences). For this, each
utterance with a verb counts as a sentence.
4. CPS: This computes (number of clauses) / (number of sentences). For this, each
sentence counts as one clause and then each of the GRs marking embedded structures
in the %gra line (section 7.8.14 of the CLAN manual) counts as an additional clause.
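The WPS and CPS arithmetic can be sketched as follows, with each utterance reduced to a (word count, has-verb, number of embedding GRs) triple. This follows one reading of the definitions above, in which the word total is taken over all utterances; it is an illustration, not the SUGAR program itself.

```python
# Sketch of the WPS and CPS arithmetic (not the SUGAR program itself).
# Each utterance is a (word_count, has_verb, n_embedding_grs) triple;
# the real program derives these from the transcript and the %gra line.

def wps_cps(utterances):
    sentences = [u for u in utterances if u[1]]   # a sentence = utterance with a verb
    n_sent = len(sentences)
    total_words = sum(u[0] for u in utterances)   # words over all utterances (one reading)
    clauses = sum(1 + u[2] for u in sentences)    # 1 clause + 1 per embedding GR
    return total_words / n_sent, clauses / n_sent

# Hypothetical 4-utterance sample: three utterances contain a verb.
utts = [(5, True, 1), (3, True, 0), (2, False, 0), (8, True, 2)]
wps, cps = wps_cps(utts)
print(wps)  # 18 words / 3 sentences = 6.0
print(cps)  # 6 clauses / 3 sentences = 2.0
```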
In this example, the first argument starts Batchalign (assuming it has been installed); the
second argument starts morphosyntactic analysis; the third argument specifies the input as
French; the fourth argument locates the input folder; and the last argument locates the
output folder. The input directory can contain any number of CHAT files or folders of
CHAT files, and that same folder structure will be preserved recursively in the output
folder. As Batchalign completes UD tagging of a file, that file will appear in the
output folder. If processing fails on a file for some reason, such as incorrect CHAT format,
processing will move on to the next file.
Batchalign relies on the Stanza system (https://stanfordnlp.github.io/stanza/) to apply
the various UD taggers. Users of Stanza can cite this paper: Qi, P., Zhang, Y., Zhang, Y.,
Bolton, J., & Manning, C. D. (2020). Stanza: A Python natural language processing toolkit
for many human languages. arXiv preprint arXiv:2003.07082. Stanza is currently
maintained by John Bauer.
Although UD taggers are better than MOR in terms of constructing dependency
structures on the %gra line, they do not analyze the %mor line in as much detail as
the MOR taggers that work within the CLAN programs. Moreover, UD does none of the
checking for lexical accuracy and typos that is provided by MOR. However, UD makes up
for these deficiencies by providing wide language coverage and consistent nomenclature
for parts of speech, lexical features, and grammatical relations across languages. For those
reasons, we have shifted most languages in CHILDES to UD, while continuing to maintain
the well-developed MOR taggers for English, Spanish, and Chinese that are available at
https://talkbank.org/morgrams. That page also provides grammars for Hebrew and
Japanese because we have not succeeded in applying UD to those languages. We will
eventually create both MOR and UD taggings for English, Spanish, and Chinese. For all
languages except English, CLAN programs such as KIDEVAL are now based on use of
the tags produced by UD.
Given the increasing availability of UD taggers and the possibility of creating web
services for running these analyses, we do not expect that users will be motivated to create
new MOR grammars. However, the MOR manual is still available on the web for users of
MOR for English.
9.1 Alignment
The default file format for UD is the CONLL format. However, to maintain
compatibility with the CLAN programs and provide better readability, Batchalign converts
CONLL to CHAT. In this format, there must be a one-to-one correspondence between
words on the main line and words on the %mor tier. In order to achieve this one-to-one
correspondence, the following rules are observed:
1. Each word group (see below) on the %mor line is surrounded by spaces or an initial
tab to match the corresponding space-delimited word group on the main line.
The correspondence matches each %mor word (morphological word) to a main-line
word in left-to-right linear order in the utterance.
2. Forms on the main line that begin with & are not considered to be words.
3. Utterance delimiters are preserved on the %mor line to facilitate readability and
analysis. These delimiters should be the same as the ones used on the main line.
4. Along with utterance delimiters, the satellite markers of ‡ for the vocative and „ for tag
questions or dislocations are also included on the %mor line in a one-to-one alignment
format.
5. Retracings and repetitions are excluded from this one-to-one mapping, as are nonwords
such as xxx or strings beginning with &.
6. When a replacing form is indicated on the main line with the form [: text], the material
on the %mor line corresponds to the replacing material in the square brackets, not the
material that is being replaced. For example, if the main line has gonna [: going to], the
%mor line will code going to.
7. The [*] symbol that is used on the main line to indicate errors is not duplicated on the
%mor line.
Cliticized words with two or more parts have their parts joined by the tilde (~) character.
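The exclusion and replacement rules above can be sketched roughly as follows. A real CHAT parser handles many more cases, so this is only an illustration of rules 2, 5, 6, and 7.

```python
# Rough sketch of alignment rules 2, 5, 6, and 7 above: drop <...> [/]
# retracings and repetitions, substitute [: replacement] material, strip
# [*] error marks, and exclude &-forms and xxx/yyy/zzz.
import re

def alignable_words(main_line):
    line = re.sub(r"<[^>]*>\s*\[/+\]", "", main_line)        # retracings/repetitions
    line = re.sub(r"(\S+)\s+\[:\s*([^\]]+)\]", r"\2", line)  # use replacing material
    line = re.sub(r"\[\*[^\]]*\]", "", line)                 # drop error marks
    words = [w for w in line.split() if w not in (".", "?", "!")]
    return [w for w in words
            if not w.startswith("&") and w not in ("xxx", "yyy", "zzz")]

print(alignable_words("<I want> [/] I want gonna [: going to] go &um ."))
# ['I', 'want', 'going', 'to', 'go']
```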
In this example, the first number in the terms on the %gra line indicates the order of the
word in the utterance. The second number indicates the item to which it is connected
through a grammatical relation (GR); and the last term labels the grammatical relation.
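Given that description, a %gra term can be unpacked with a few lines of code; the example tier here is constructed for illustration.

```python
# Unpack %gra terms of the form index|head|RELATION (example tier invented).

def parse_gra(gra_tier):
    """Return (word position, head position, grammatical relation) triples."""
    triples = []
    for term in gra_tier.split():
        idx, head, rel = term.split("|")
        triples.append((int(idx), int(head), rel))
    return triples

print(parse_gra("1|2|SUBJ 2|0|ROOT 3|2|OBJ"))
# [(1, 2, 'SUBJ'), (2, 0, 'ROOT'), (3, 2, 'OBJ')]
```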
Running this same utterance through UD for English, we get this result and a slightly
different graph, because of the way that UD marks ROOT:
In the files in which English is predominant, on the other hand, the tier has this form:
@Language: eng, yue
The programs then assume that, by default, each word in the transcript is in the first listed
language. This default can be reversed in two ways. First, within the English files, the
precode [- yue] can be placed at the beginning of utterances that are primarily in Cantonese.
If single Cantonese words are used inside English utterances, they are marked with the
special form marker @s. If an English word appears within a Cantonese sentence marked
with the [- yue] precode, then the @s code means that the default for that sentence
(Cantonese) is now reversed to the other language (English). For the files that are primarily
in Cantonese, the opposite pattern is used. In those files, English sentences are marked as
[- eng] and English words inside Cantonese are marked by @s. This form of marking
preserves readability, while still making it clear to the programs which words are in which
language. If it is important to have each word explicitly tagged for language, the –l switch
can be used with CLAN programs such as KWAL, COMBO, or FIXIT to insert this more
verbose method of language marking.
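A rough sketch of this default-and-reversal logic, assuming an @Language header of "eng, yue", looks like this. It is illustrative only, not CLAN's implementation.

```python
# Sketch of the default-language logic for a file headed "@Language: eng, yue":
# words default to the first language, a [- yue] precode flips the utterance
# default, and @s on a word flips that one word to the other language.
# (Illustration only, not CLAN's implementation.)

def word_languages(utterance, file_langs=("eng", "yue")):
    default, other = file_langs
    words = utterance.split()
    if words and words[0] == "[-":          # utterance precode, e.g. [- yue]
        default, other = words[1].rstrip("]"), default
        words = words[2:]
    tagged = []
    for w in words:
        if w.endswith("@s"):                # @s reverses the sentence default
            tagged.append((w[:-2], other))
        else:
            tagged.append((w, default))
    return tagged

print(word_languages("this is ngo5@s ."))
# [('this', 'eng'), ('is', 'eng'), ('ngo5', 'yue'), ('.', 'eng')]
```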
To minimize cross-language listing, it was also helpful to create easy ways of
representing words that were shared between languages. This was particularly important
for the names of family members or relation names. For example, the Cantonese form 姐
姐 for “big sister” can be written in English as Zeze, so that this form can be processed
correctly as a proper noun address term. Similarly, Cantonese has borrowed a set of
English salutations such as “byebye” and “sorry” which are simply added directly to the
Cantonese grammar in the co-eng.cut file.
Once these various adaptations and markings are completed, it is then possible to run
MOR in two passes on the corpus. For the YipMatthews English files, the steps are:
1. To make sure all words in English are recognized, set the MOR library to ENG and
run: mor -s"[- yue]" +xb *.cha
2. Fix any errors and add any missing words to the ENG lexicon.
3. Run: mor -s"[- yue]" *.cha
4. Run CHECK to check for problems.
5. To make sure all words in Cantonese are recognized, set the MOR library to YUE and
run: mor +s"[- yue]" +xb *.cha
6. Fix any errors and add any missing words to the YUE lexicon.
7. Run: mor +s"[- yue]" *.cha
8. Run CHECK to check for problems.
To illustrate the result of this process, here is a representative snippet from the
te951130.cha file in the /TimEng folder. Note that the default language here is English and
that sentences in Cantonese are explicitly marked as [- yue].
*LIN: where is grandma first, tell me ?
%mor: adv:wh|where v|be n|grandma adv|first v|tell pro|me ?
*LIN: well, what's this ?
%mor: co|well pro:wh|what~v|be pro:dem|this ?
*CHI: [- yue] xxx 呢 個 唔 夠 架 .
%mor: unk|xxx det|ni1=this cl|go3=cl neg|m4=not adv|gau3=enough
sfp|gaa3=sfp .
*LIN: [- yue] 呢 個 唔 夠 .
%mor: det|ni1=this cl|go3=cl neg|m4=not adv|gau3=enough .
*LIN: <what does it mean> [>] ?
%mor: pro:wh|what v:aux|do pro|it v|mean ?
This type of analysis is possible whenever MOR grammars exist for both languages, as
would be the case, for example, for Japanese-English, Spanish-English, English-German,
German-French, Spanish-French, Mandarin-Cantonese, or Italian-Mandarin bilinguals. It
is also possible to use this same method to tag with the newer UD taggers.
10.1 CHAT2ANVIL
This program converts a CHAT file to ANVIL format.
10.2 CHAT2CA
The CHAT2CA program will convert a CHAT file to a format that is closer to standard
CA (Conversation Analysis) format. This is a one-way conversion, since we cannot
convert back to CHAT from CA. Therefore, this conversion should only be done when you
have finished creating your file in CHAT or when you want to show your work in more
standard CA format. The conversion changes some of the non-standard symbols to their
standard equivalent. For example, the speedup and slowdown are marked by inward and
outward pointing arrows.
10.3 CHAT2ELAN
This program converts a CHAT file to the ELAN format for gestural analysis. For
conversion in the opposite direction, use ELAN2CHAT. You can download the ELAN
program from its website.
The +e switch is used to specify the media file type, which could be mp4, mov, wav, or
mp3. CHAT main tiers appear in ELAN with the speaker's name, such as *BET which
becomes “BET”. Dependent tiers, which are called “child tiers” in ELAN, are then coded
as owned by a speaker, as in gpx@BET for the %gpx tier linked to the BET speaker. These
screenshots show a simple CHAT file (which is included in the CLAN distribution in the
/examples/chat2elan folder) and then how it looks after conversion to ELAN format:
10.4 CHAT2PRAAT
This program converts a CHAT file to the Praat format. When running this, you need
to add the file type of the audio using the +e switch as in +emp3.
10.5 CHAT2SRT
This program converts a CHAT file to SRT format for captioning video. The use of
this program is described in a screencast available from https://talkbank.org/screencasts.
On the Mac, you will need to purchase Subtitle Writer for $4.99 from the App Store.
1. Run this command: chat2srt shoes.cha to produce shoes.srt
2. Open Subtitle Writer.
3. Click Add and add shoes.srt
4. Select the language as English.
5. Click Import Movie and select shoes.mp4
6. Click Save Subtitled Video
7. Open and play the resultant captioned movie.
If you want to use the English gloss in the %glo line in a file instead of the main line for
the captions, then use this command: chat2srt +t%glo shoes.cha to produce the .srt file.
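The SRT format itself is simple: each cue is a sequence number, a "start --> end" line in HH:MM:SS,mmm form, and the caption text. A sketch of one cue, given CHAT-style begin and end times in milliseconds (the utterance text here is invented):

```python
# One SRT cue: a sequence number, a "start --> end" line in HH:MM:SS,mmm
# form, and the caption text. Times here are milliseconds; the utterance
# is invented for illustration.

def srt_time(ms):
    h, rest = divmod(ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def srt_cue(index, begin_ms, end_ms, text):
    return f"{index}\n{srt_time(begin_ms)} --> {srt_time(end_ms)}\n{text}\n"

print(srt_cue(1, 0, 2350, "where are your shoes ?"))
# 1
# 00:00:00,000 --> 00:00:02,350
# where are your shoes ?
```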
10.6 CHAT2TEXT
This program converts a CHAT file to a series of text lines for analysis by concordance
programs such as AntConc. This command is implemented as a simple alias based on the
FLO command: flo +cr +t*. The FLO program with the +cr switch removes all the various
markup of CHAT.
10.7 CHAT2XMAR
This program converts a CHAT file to the EXMARaLDA format for Partitur analysis.
For conversion in the opposite direction, use XMAR2CHAT. You can download the
EXMARaLDA program from https://www1.uni-hamburg.de/exmaralda/.
10.8 ANVIL2CHAT
This program converts an ANVIL file to a CHAT file. For conversion in the opposite
direction, you can use CHAT2ANVIL.
10.9 ELAN2CHAT
This program converts ELAN files to CHAT files. Use CHAT2ELAN for conversion
in the opposite direction. The command is just: ELAN2CHAT filename.cha. If your file
began in CHAT, then the tiers probably already have names that will pass on without
problems during the conversion. You can see how this might look in the section above on
CHAT2ELAN. Looking at that section and running the example given there is the best way
to understand the format required in ELAN.
However, if your file was produced originally in ELAN, it will be necessary to rename
the tiers to CHAT format before running elan2chat. There are basically two different
pathways for conversion of ELAN to CHAT.
CHAT-compatible ELAN files: The first pathway is the simplest. In this method,
you create ELAN files from scratch that maximize CHAT compatibility. To do this, you
give the main tier a 3-letter name, such as BET in the screencast example. Then, the %gpx
dependent tier for that speaker is named gpx@BET, and it should be a child under the BET
top-level tier. You should avoid use of the Time Subdivision and Symbolic Subdivision tier
types. Instead, please make use of the Included In tier type.
ELAN files that must be reorganized: In the second pathway, you would be working
with ELAN files that had not been structured from the beginning to maximize CHAT
compatibility. In that case, there are ways to reorganize the ELAN files to increase
compatibility. The steps are:
1. Renaming tiers. For this, you need to select one tier as the top-level tier. Usually
this would be the main speaker tier(s) with one such tier for each speaker.
2. Then you would align the child or dependent tiers with the top level or main tiers.
If you are coding three types of dependent information and you have three speakers,
you would then end up with 9 tiers.
3. It is also possible to have tiers that are children of child tiers, if they are fully
aligned. However, it is not possible to go further into embedding with a third level
of dependency.
4. If a corpus has a mix of aligned and unaligned annotations on a Symbolic
Subdivision tier, you should either convert the tier to Included In or make sure
that all annotations are aligned.
5. Finally, you may wish to remove some tiers that don’t adapt well to the CHAT
structure or which are not important for your general analysis.
When reorganizing ELAN tiers, you need to consider these operations:
1. Changing the hierarchy of tiers in ELAN is only possible via a copy operation (there
is an option Reparent Tier but that also creates a copy of the tier). If the result is
acceptable, the original tier can be removed. It is implemented this way because
changing the hierarchy can involve changing the type of the tier and, therefore, the
constraints applied to annotations (annotations can be removed or concatenated etc.
by this operation).
2. Renaming of tiers can be applied to a set of files / a corpus (File->Multiple File
Processing->Edit Multiple Files...). This (mainly) makes sense if there is some
consistency of tier names in that corpus/set of files.
10.10 LAB2CHAT
This program converts files in WaveSurfer format, such as the files from Stockholm
University, to CHAT format. Each utterance is on a separate line, and the line begins with
the begin and end times in seconds. The WaveSurfer or LAB format has separate files
for each speaker, so there must be an attribs.cut file, like the one in /lib/fixes/wavesurf-
attribs.cut, to declare who is who, as in:
;"Chat Speaker" "Original tag litman"
*MOT mamman Mother
*CHI barnet Target_Child
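The conversion idea can be sketched in a few lines. This is a hypothetical Python illustration, not the actual LAB2CHAT program: it assumes each WaveSurfer label line holds "begin end utterance" with times in seconds, and that the CHAT speaker code has already been looked up from the attribs.cut mapping. CHAT time bullets are stored in milliseconds between \x15 delimiter characters.

```python
def wavesurfer_to_chat(lines, speaker):
    """Convert WaveSurfer label lines to CHAT main tiers with time bullets.

    Illustrative sketch only; the real LAB2CHAT handles much more.
    """
    out = []
    for line in lines:
        # each label line is: begin-time end-time utterance-text
        begin, end, text = line.split(None, 2)
        b = int(float(begin) * 1000)   # seconds -> milliseconds
        e = int(float(end) * 1000)
        out.append(f"{speaker}:\t{text} \x15{b}_{e}\x15")
    return out
```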
10.11 LENA2CHAT
This program is designed to convert LENA *.its files into CHAT format. It assumes
that all the *.its files for a given child, along with the *.wav audio files, are collected into
a single folder that has a 3- or 4-character name, such as AB04 or 0134. The command
"lena2chat *.its" is issued from within the folder. After running the command, please create
a new folder inside this folder called 0its and put the *.its files into that folder. The *.cha
files created by the program stay at the top level, and they will be given names based on the
folder name and the age of the child. The .wav files will also be renamed in this way. So,
if the child's age is 2;04.05 and the folder name is AB04, then the file will be named
AB04_020405.cha and the corresponding audio file will be named AB04_020405.wav.
The @Media line will use the new name of the media file, but the original name of the
media file will be preserved in a line such as:
@Comment: old media file name is e20100728_143446_003489
The program is designed to convert whole folders at a time, rather than single files,
although a folder with only one file will also be converted. To convert a series of folders
at once, you can add the +re switch. So, conversion of a whole corpus would use this form
of the command:
lena2chat +re *.its
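The naming pattern described above can be sketched as follows. This is an illustrative Python fragment, not part of LENA2CHAT; it simply reproduces the folder-plus-age scheme from the example, assuming the month and day fields of the CHAT age are already two digits, as in 2;04.05.

```python
def lena_chat_name(folder, age):
    """Build a file stem like AB04_020405 from a folder name and a CHAT age.

    Illustrative only: mirrors the naming pattern in the manual's example.
    """
    years, rest = age.split(";")       # "2;04.05" -> "2", "04.05"
    months, days = rest.split(".")     # "04.05"   -> "04", "05"
    return f"{folder}_{int(years):02d}{months}{days}"
```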
10.12 LIPP2CHAT
To convert LIPP files to CHAT, you need to go through two steps. The first step is to
run this command:
cp2utf –c6 *.lip
This will change the LIPP characters to CLAN’s UTF8 format. Next you run this
command:
lipp2chat +leng *.cp2utf.cex
This will produce a set of *.utf.cha files which you can then rename to *.cha. The
obligatory +l switch requires you to specify the language of the transcripts, as in +leng for
English in the command above.
10.13 PRAAT2CHAT
This program converts files in the PRAAT format to files in CHAT format. The
/examples/praat2chat folder contains a 0readme.txt file that explains how to use the
program. The following material is the same as what is given in that file:
The files in this folder illustrate how to convert Praat files to CHAT and vice versa.
The simplest case arises when you first convert a CHAT file like clip.cha to Praat. To do
this, you use this command:
chat2praat clip.cha +emp3
The +e switch specifies the format of the audio as either mp3 or wav. The output of this
command is clip.c2praat.textGrid. You can convert this back to CHAT format using this
command:
praat2chat clip.c2praat.textGrid
In this case, no attribs.cut file is needed, because the original file was already in CHAT
format. The output of this command is clip.xh2praat.praat.cha. Note that this file is
identical to clip.cha.
However, if you begin with a Praat file that was never converted to CHAT, you will
need to create an attribs.cut file and you use this command:
praat2chat -opcl +dattribs.cut praat.textGrid
This command uses the declarations in the attribs.cut file illustrated in this folder and
produces praat.praat.cha as output. The +d switch with an attribs.cut file is only needed
when converting from a Praat file created originally inside Praat, not when the Praat file
was created from CHAT, and not when converting from CHAT to Praat. The attribs.cut
file tells the program which Praat tag gives the Speaker role and which ones give dependent
tiers. After running praat2chat, you will need to fix the headers, run DELIM to add periods
for utterances and perform various similar operations required by CHAT format.
10.14 RTF2CHAT
This program is used to take data that was formatted in Word and convert it to CHAT.
10.15 SALT2CHAT
This program takes SALT formatted files and converts them to the CHAT format.
SALT is a transcript format developed by Jon Miller and Robin Chapman at the University
of Wisconsin. By default, SALT2CHAT sends its output to a file. Here is the most common
use of this program:
salt2chat file.cut
It may be useful to note a few details of the ways in which SALT2CHAT operates on
SALT files:
1. When SALT2CHAT encounters material in parentheses, it translates this material
as an unspecified retracing type, using the [/?] code.
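The translation in point 1 can be sketched as a simple substitution. This is an illustrative fragment only, not the real SALT2CHAT logic, which handles many more details of the SALT maze notation.

```python
import re

def parens_to_retrace(line):
    """Translate parenthesized SALT material into CHAT's unspecified
    retracing notation, e.g. (went went) -> <went went> [/?].

    Illustrative sketch of the mapping described in the manual.
    """
    return re.sub(r"\(([^()]+)\)", r"<\1> [/?]", line)
```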
10.16 SRT2CHAT
This program converts files in SRT format to CHAT format. For the file headers, it
uses TXT as the speaker and English as the default language. Times are encoded in the
bullets. After conversion, you may want to put the individual lines into full CHAT format.
Alternatively, you can insert the @Options: heritage line to preserve the initial transcripts
and timing.
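As a rough sketch of how SRT timing maps onto bullet times, the following hypothetical fragment converts an SRT timestamp of the form HH:MM:SS,mmm into the millisecond value a CHAT bullet would hold. It is illustrative only, not the SRT2CHAT code.

```python
def srt_time_to_ms(stamp):
    """Convert an SRT timestamp like 00:01:02,250 to milliseconds."""
    hours, minutes, rest = stamp.split(":")
    seconds, millis = rest.split(",")
    return ((int(hours) * 60 + int(minutes)) * 60 + int(seconds)) * 1000 + int(millis)
```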
10.17 TEXT2CHAT
The TEXT2CHAT program is quite simple. It takes a set of sentences in paragraph
form and converts them to a CHAT file. Blank lines are treated as possible paragraph
breaks and are noted with @Blank headers. To illustrate the operation of TEXT2CHAT,
here are the results of running TEXT2CHAT on the previous three sentences:
@Begin
@Languages: eng
@Participants: TXT Text
@ID: eng|text|TXT|||||Text|||
*TXT: the text2chat program is quite simple.
*TXT: it takes a set of sentences in paragraph form and
converts them to a chat file.
*TXT: blank lines are considered to be possible paragraph
breaks and are noted with @blank headers.
@End
Problems can arise when there is extraneous punctuation in the original, as in forms such
as St. Louis, which would generate a new line at Louis. To avoid this, one can either remove
these problems beforehand or fix them in the resultant CHAT file.
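The conversion just illustrated can be sketched in a few lines. This hypothetical fragment mimics the output shown above; it is not the real TEXT2CHAT program, and it assumes each input line holds whole sentences.

```python
import re

def text2chat(text):
    """Convert plain paragraphs to a minimal CHAT file; blank lines become @Blank.

    Illustrative sketch of the TEXT2CHAT behavior described in the manual.
    """
    out = ["@Begin", "@Languages:\teng", "@Participants:\tTXT Text",
           "@ID:\teng|text|TXT|||||Text|||"]
    for line in text.splitlines():
        if not line.strip():
            out.append("@Blank")
        else:
            # every sentence-final delimiter starts a new *TXT utterance
            for sentence in re.findall(r"[^.!?]+[.!?]", line):
                out.append("*TXT:\t" + sentence.strip().lower())
    out.append("@End")
    return out
```

Note that an abbreviation such as St. Louis splits at the period, which is exactly the problem described above.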
11 Reformatting Commands
These commands are useful when a researcher wants to add features to files that have
already passed CHECK and are in good CHAT format. Some of these add additional
material to files; others move around information already in the files. Clicking on the page
number will take you to the relevant command.
Command Page Function
CHSTRING 169 Changes one string to another, often using a changes file.
DATES 171 Given a birthday and current date, computes the age.
FLO 171 Adds a simplified %flo line to each line in a transcript.
INDENT 172 Realigns overlaps in CA transcripts.
LONGTIER 172 Makes each utterance one line long.
REPEAT 172 Marks repeated sequences of words as repetitions.
RETRACE 172 Marks retraced segments.
TIERORDER 173 Places all dependent tiers into a user-specified order.
TRIM 173 Removes specified dependent tiers.
11.1 CHSTRING
This program changes one string to another string in an ASCII text file. CHSTRING is
useful when you want to correct spelling, change subjects’ names to preserve anonymity,
update codes, or make other uniform changes to a transcript. This changing of strings can
also be done on a single file using a text editor. However, CHSTRING is much faster and
allows you to make a whole series of uniform changes in a single pass over many files.
By default, CHSTRING is word-oriented, as opposed to string-oriented. This means
that the program treats “the” as the single unique word “the”, rather than as the string of the
letters “t”, “h”, and “e”. If you want to search by strings, you need to add the +w option.
If you do, then searching for “the” with CHSTRING will result in retrieving words such as
other, bathe, and there. In string-oriented mode, adding spaces can help you to limit your
search. Knowing this will help you to specify the changes that need to be made on words.
Also, by default, CHSTRING works only on the text in the main line and not on the
dependent tiers or the headers.
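The difference between the two modes can be sketched as follows. This is an illustrative comparison in Python, not CHSTRING itself.

```python
import re

def replace_word(text, old, new):
    """Word-oriented (the default): "the" matches only the whole word "the"."""
    return re.sub(r"\b" + re.escape(old) + r"\b", new, text)

def replace_string(text, old, new):
    """String-oriented (+w): "the" also matches inside other, bathe, there."""
    return text.replace(old, new)
```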
When working with CHSTRING, it is useful to remember the functions of the various
metacharacters, as described in the metacharacters section. For example, the following
search string allows you to add a plus mark for compounding between “teddy” and “bear”
even when these are separated by a newline, since the underscore character matches any
one character including space and newline. You need two versions here, since the first
with only one space character works within the line and the second works when “teddy” is
at the end of the line followed by first a carriage return and then a tab:
+s"teddy_bear" "teddy+bear” +s"teddy__bear" "teddy+bear"
Unique Options
+b Work only on material that is to the right of the colon which follows the tier ID.
+c Often, many changes need to be made in data. You can do this by using a text editor
to create an ASCII text file containing a list of words to be changed and what they
should be changed to. This file should conform to this format:
"oldstring" "newstring"
You must use the quotation marks to surround the two strings. The default name for
the file listing the changes is changes.cut. If you don’t specify a file name at the +c
option, the program searches for changes.cut. If you want to use another file, the name
of that file should follow the +c. For example, if your file is called
mywords.cut, then the option takes the form +cmywords.cut.
To test out the operation of CHSTRING with +c, try creating the following file
called changes.cut:
"the" "wonderful"
"eat" "quark"
Then try running this file on the sample.cha file with the command:
chstring +c sample.cha
Check over the results to see if they are correct. If you need to include the double
quotation symbol in your search string, use a pair of single quote marks around the
search and replacement strings in your include file. Also, note that you can include
Unicode symbols in your search string.
+d This option turns off several CHSTRING clean-up actions. It turns off deletion of
blank lines, removal of blank spaces, removal of empty dependent tiers, replacement
of spaces after headers with a tab, and wrapping of long lines. All it allows is the
replacement of individual strings.
+l Work only on material that is to the left of the colon which follows the tier ID. For
example, if you want to add an “x” to the %mor tier name to make it %xmor, you would use
this command:
chstring +s"%mor:" "%xmor:" +t% +l *.cha
+q CHAT requires that a three letter speaker code, such as *MOT:, be followed by a
tab. Often, this space is filled by three spaces instead. Although this is undetectable
visually, the computer recognizes tabs and spaces as separate entities. The +q option
brings the file into conformance with CHAT by replacing the spaces with a tab. It
also reorganizes lines to wrap systematically at 80 characters.
+s Sometimes you need to change just one word, or string, in a file(s). These strings
can be put directly on the command line following the +s option. For example, if
you wanted to mark all usages of the word gumma in a file as child-based forms, the
option would look like this:
+s"gumma" "gumma@c"
11.2 DATES
The DATES program takes two time values and computes the third. It can take the
child’s age and the current date and compute the child’s date of birth. It can take the date
of birth and the current date to compute the child’s age. Or it can take the child’s age and
the date of birth to compute the current date. For example, if you type:
dates +a 2;03.01 +b 12-jan-1962
You can also use the date format of MM/DD/YY, as in this version of the command:
dates +b 08/31/63 +d 07/30/64
If your files have the child's age in the @ID header, and if you know the child's date of
birth, but do not have the @Date field, you can create a set of new files with the @Date
information, using this version of the command:
dates +bCHI 08/31/63 *.cha
Unique Options
+a Following this switch, after an intervening space, you can provide the child’s age in
CHAT format.
+b Following this switch, after an intervening space, you can provide the child’s birth
date in day-month-year format.
+d Following this switch, after an intervening space, you can provide the current date
or the date of the file you are analyzing in day-month-year format.
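The underlying arithmetic can be sketched like this. This is an illustrative fragment, not the DATES program; it borrows with a simplified 30-day month, whereas the real program presumably uses actual month lengths.

```python
def chat_age(birth, current):
    """Compute a CHAT-style age y;mm.dd from (year, month, day) tuples.

    Illustrative sketch with a simplified 30-day borrow.
    """
    years = current[0] - birth[0]
    months = current[1] - birth[1]
    days = current[2] - birth[2]
    if days < 0:          # borrow a month (simplified to 30 days)
        months -= 1
        days += 30
    if months < 0:        # borrow a year
        years -= 1
        months += 12
    return f"{years};{months:02d}.{days:02d}"
```

For the birthdate 12-jan-1962 in the example above, a current date of 13-apr-1964 yields the age 2;03.01.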
11.3 FLO
The FLO program creates a simplified version of a main CHAT line. This simplified
version strips out markers of retracing, overlaps, errors, and all forms of main line coding.
The only unique option in FLO is +d, which replaces the main line, instead of just adding
a %flo tier.
11.4 INDENT
This program is used to realign the overlap marks in CA files. The files must be in a
fixed width font such as CAFont.
11.5 LINES
This program inserts line numbers that can be saved when closing the file, based on the
numbering system used by the "Show Line Numbers" option in the Mode menu. Apart
from the options available to other programs, LINES uses the +n option to remove all the
line/tier numbers.
11.6 LONGTIER
This program removes line wraps on continuation lines so that each main tier and each
dependent tier is on one long line. It is useful for cleaning up files, because it eliminates
having to think about string replacements across line breaks.
11.7 MEDIALINE
This program inserts the @Media field, based on the name of the transcript, along with
options for declaring a=audio, v=video, m=missing, and u=unlinked. So, this command
medialine +a +u test.cha
inserts an @Media header for test.cha that declares the media as audio and unlinked.
11.8 REPEAT
If two consecutive main tiers are identical, REPEAT inserts the postcode [+ rep]
at the end of the second tier.
11.9 RETRACE
RETRACE inserts [/] after repeated words as in this example:
*FAT: +^ the was the was xxx ice+cream .
%ret: +^ <the was> [/] the was xxx ice+cream .
If +c is used, then the main tier is replaced with the reformatted material and no additional
%ret tier is created.
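The marking that RETRACE performs can be sketched as a search for the longest immediately repeated word sequence. This hypothetical fragment reproduces the example above, but it ignores the special handling of codes and terminators in the real program.

```python
def mark_retraces(utterance):
    """Insert [/] after repeated word sequences, e.g.
    'the was the was xxx' -> '<the was> [/] the was xxx'.

    Illustrative sketch; longest immediate repeats are marked first.
    """
    tokens = utterance.split()
    out, i, n = [], 0, len(tokens)
    while i < n:
        for k in range((n - i) // 2, 0, -1):
            if tokens[i:i + k] == tokens[i + k:i + 2 * k]:
                # single words get a bare [/]; sequences get angle brackets
                group = tokens[i] if k == 1 else "<" + " ".join(tokens[i:i + k]) + ">"
                out.append(group + " [/]")
                out.extend(tokens[i + k:i + 2 * k])
                i += 2 * k
                break
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)
```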
11.10 ROLES
The ROLES program is used to correct the role assignments that are produced during
running of the Batchalign program for automatic speech recognition
(https://github.com/talkbank/batchalign2). That program assigns roles such as PAR0,
PAR1, etc. To convert these to roles in the format described in the CHAT manual, you
create a roles.cut file in this shape:
This will redo the @Participants and @ID lines, as well as the *PAR codes throughout the
file. Depending on the shape of your corpus and the Batchalign output, you may want to
have a different roles.cut file for each transcript.
11.11 SEGMENT
This command is used to modify the utterance segmentation produced in the Batchalign
pipeline for automatic speech recognition (https://github.com/talkbank). Once Batchalign
has finished, you can use continuous playback (Esc-8) to play through the transcript to
make sure that words are correctly recognized and that utterances are properly segmented.
If one utterance should be joined to the following utterance, you can add &&& at the end
of the line after the time bullet and remove the terminator on the first utterance. If an
utterance should be broken up, then you can enter &&& at the break location preceded by
the terminator needed for the first utterance. When doing this, SEGMENT assumes that
there are no additional dependent tiers other than %wor. So, if there are %mor and %gra
tiers, you can use TRIM to remove them and then rerun MOR after completing SEGMENT.
Before running SEGMENT, make sure to save and close your file. Also, you may want to
avoid use of the +1 switch for file replacement until you are comfortable with use of
SEGMENT.
An alternative to using SEGMENT is to use this command to remove the %wor lines
in the output from Batchalign (the +re is optional):
trim -t%wor +re *.cha +1
Then you can insert periods to mark sentence ends or join utterances as needed and then
use FIXIT to separate out utterances at the period you have marked.
11.12 TIERORDER
TIERORDER puts the dependent tiers into a consistent alphabetical order. The
/lib/fixes/tierorder.cut file can be used to control this order by rearranging the order of tiers
in that file.
11.13 TRIM
This command is designed to allow you to remove coding tiers from CHAT files. For
example, to remove all the %mor lines from files without changing anything else, you can
use this command:
trim –t%mor *.cha +1
12.1 COMBTIER
COMBTIER corrects a problem that typically arises when transcribers create several
%com lines. It combines two %com lines into one by removing the second header and
moving the material after it into the tier for the first %com.
12.2 CP2UTF
CP2UTF converts code page ASCII files and UTF-16 into UTF-8 Unicode files. If
there is an @Font tier in the file, the program uses this to guess the original encoding. If
not, it may be necessary to add the +o switch to specify the original language, as in +opcct
for Chinese traditional characters on the PC. If the file already has a @UTF8 header, the
program will not run, unless you add the +d switch and you are sure that the line in question
is not already in UTF. The +c switch uses the unicode.cut file in CLAN/lib/fixes directory
to effect translation of ASCII to Unicode for IPA symbols, depending on the nature of the
ASCII IPA being used. For example, the +c3 switch produces a translation from IPAPhon.
The +t@u switch forces the IPA translation to affect main line forms in the text@u format.
+b : add BOM symbol to the output files
+cN: specify column number (3-7) (default: 4, IPATimes)
+d: convert ONLY tiers specified with +t option
+d1: remove bullets from data file
+d2: add BOM encoding information at the beginning of a CHAT file to help
applications, such as NVivo or MS-Word, to read it better
+oS: specify code page. Please type "+o?" for full listing of codes
utf16 - Unicode UTF-16 data file
macl - Mac Latin (German, Spanish ...)
pcl - PC Latin (German, Spanish ...)
12.3 DELIM
DELIM inserts a period at the end of every main line if it does not currently have one.
To do this for all tiers use the +t* switch.
12.4 FIXBULLETS
This program is used to fix the format of the bullets that are used in CLAN to link a
CHAT file to audio or video. Without any additional switches, it fixes old format bullets
that contain the file name to new format bullets and inserts an @Media tier at the beginning
of the file. The various switches can be used to fix other problems. The +l switch can be
used to make the implicit declaration of second language source explicit. The +o switch
12.5 FIXIT
FIXIT is used to break up tiers with multiple utterances into the standard format with
one utterance per main line.
12.6 LOWCASE
This program is used to fix files that were not transcribed using CHAT capitalization
conventions. Most commonly, it is used with the +c switch to only convert the initial word
in the sentence to lowercase. To protect certain proper nouns in first position from the
conversion, you can create a file of proper noun exclusions with the name caps.cut and add
the +d switch to avoid lowercasing these. You may also want to use the /lib/fixes/caps.cut
file. It contains over 8000 proper nouns, many based on children's first names.
12.7 QUOTES
This program moves quoted material to its own separate tier.
13 Supplementary Commands
CLAN also includes these basic operating system type commands which allow you to
manage file and folder structure.
13.1 batch
You can place a group of commands into a text file which you then execute as a batch.
The word batch should be followed by the name of a file in your working directory, such
as commands.bat. Each line of that file is then executed as a CLAN command.
13.2 cd
This command will change your working directory. For example, if you have a
subfolder called ne20, then the command cd ne20 will set your current working directory
to ne20. To move up, you can type cd ..
13.3 dir
This command will provide a listing of the files in your current working directory.
13.4 info
Just typing "info" will list all the available CLAN commands.
13.5 ren(ame)
This command allows you to change file names in a variety of ways. You can change
case by using -u for upper and -l for lower. The -c and -t switches allow you to change the
creator signature and file types recognized by Macintosh. The –f switch forces file
replacement. Here are some examples:
Rename a series of files with names like “child.CHA (Word 5)”:
ren '*.CHA (Word 5)' *.cha
Rename the output from FREQ and replace the original files:
ren –f *.frq.cex *.cha
Taking out spaces (because CLAN has trouble with spaces in names):
ren '* -e.cha' '*-e.cha'
If you have two variables in your input, you can control which is replaced by adding
either \1 or \2 to the output. So, if you use \1, then the first variable is preserved, and if
you use \2, then the second variable is preserved. So, if the input is aki12_boofoo.cha, then
this command creates 12.cha as output:
ren aki*_*.cha \1*.cha
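One reading of the \1/\2 behavior is sketched below. This is a hypothetical Python illustration of the matching, not the ren command itself: it assumes each * in the input pattern captures a variable, and that a leading \N in the output pattern selects which variable fills the output's *.

```python
import re

def ren_target(name, in_pattern, out_pattern):
    """Map a filename through ren-style wildcard patterns (illustrative only)."""
    # turn each * in the input pattern into a capturing group
    regex = re.escape(in_pattern).replace(r"\*", "(.*)")
    match = re.fullmatch(regex, name)
    if match is None:
        return None
    groups = list(match.groups())
    # a leading \1 or \2 picks which captured variable to keep
    if out_pattern.startswith("\\") and out_pattern[1].isdigit():
        groups = [groups[int(out_pattern[1]) - 1]]
        out_pattern = out_pattern[2:]
    result = out_pattern
    for g in groups:
        result = result.replace("*", g, 1)
    return result
```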
13.6 rm
This command will delete files. It can be used with wild cards and recursively, as in
rm -re *.mor.cex
It is important to be careful with the use of this command. A good practice is to make a
copy of the folder(s) to which it applies before running the command.
Part 2: CLAN 179
14 Options
This chapter describes the various options or switches that are shared across the CLAN
commands. To see a list of options for a given program such as KWAL, type kwal followed
by a carriage return in the Commands window. You will see a list of available options in
the CLAN Output window.
Each option begins with a + or a -. There is always a space before the + or -. Multiple
options can be used and they can occur in any order. For example, the command:
kwal +f +t*MOT sample.cha
runs a KWAL analysis on sample.cha. The selection of the +f option sends the output from
this analysis into a new file called sample.kwa.cex. The +t*MOT option confines the
analysis to only the lines spoken by the mother. The +f and +t switches can be placed in either
order.
14.1 +F Option
This option allows you to send output to a file rather than to the screen. By default,
nearly all the programs send the results of the analyses directly to the screen. You can,
however, request that your results be inserted into a file. This is accomplished by inserting
the +f option into the command line. The advantage of sending the program’s results to a
file is that you can go over the analysis more carefully, because you have a file to which
you can later refer.
The -f switch is used for sending output to the screen. For most programs, -f is the
default and you do not need to enter it. You only need to use the -f switch when you want
the output to go to the screen for CHSTRING, FLO, and SALT2CHAT. The advantage of
sending the analysis to the screen (also called standard output) is that the results are
immediate and your directory is less cluttered with nonessential files. This is ideal for quick
temporary analysis.
The string specified with the +f option is used to replace the default file name extension
assigned to the output file name by each program. For example, the command
freq +f sample.cha
would create an output file sample.frq.cex. If you want to control the shape of the extension
name on the file, you can place up to three letters after the +f switch, as in the command
freq +fmot sample.cha
which would create an output file sample.mot.cex. If the string argument is longer than
three characters, it will be truncated. For example, the command
freq +fmother sample.cha
would also truncate “mother” to “mot” and create sample.mot.cex.
+f"c:.res" This sends the output files to c: and assigns the extension .res.
When you are running a command on several files and use the +f switch, the output
will go into several files – one for each of the input files. If what you want is a combined
analysis that treats all the input files as one large file, then you should use the +u switch. If
you want all the output to go into a single file for which you provide the name, then use
the > character at the end of the command along with an additional file name. The > option
cannot be combined with +f.
14.2 +K Option
This option controls case-sensitivity. A case-sensitive program is one that makes a
distinction between uppercase and lowercase letters. Many of the CLAN commands are
case-sensitive by default. If you type the name of each command, you will see a usage page
indicating the default setting for the +k switch. Use of the +k option overrides the default
state whatever that might be. For instance, suppose you are searching for the auxiliary verb
“may” in a text. If you searched for the word “may” in a case-sensitive program, you would
obtain all the occurrences of the word “may” in lower case only. You would not obtain any
occurrences of “MAY” or “May.” Searches performed for the word “may” using the +k
option produce the words “may,” “MAY,” and “May” as output.
14.3 +L Option
The +l option is used to provide language tags for every word in a bilingual corpus.
Use of this switch does not actually change the file; rather these tags are represented in
computer memory for the files and are used to provide full identification of the output of
programs such as FREQ or KWAL. For examples of the operation of the +l switch in the
context of the FREQ program, see the section of the FREQ program description that
examines searches in bilingual corpora.
An additional variation on the +l switch is +l1 which serves to insert a language precode
such as [- spa] for every utterance, including those that are unmarked without the use of
this switch. If this switch is used, then it is possible to trace between language code-
switching on the whole utterance level using commands such as the two following, where
the first one tracks changes from French to Spanish and the second tracks changes from
Spanish to French:
14.4 +P Option
This switch is used to change the way in which CLAN processes certain word-internal
symbols. Specifically, the programs typically consider compounds such as black+bird to
be single words. However, if you add the switch +p+, then the plus symbol will be treated
as a word delimiter. This means that a program like FREQ or MLU would treat black+bird
as two separate words. Another character that you may wish to treat as a word separator
is the underscore. If you use the switch +p_, then New_York would be treated as two
words.
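The effect of treating extra characters as delimiters can be sketched with a toy word counter. This is illustrative Python, not CLAN's internal word parsing.

```python
def count_words(line, extra_delimiters=""):
    """Count words, optionally treating +p-style characters as word delimiters."""
    for ch in extra_delimiters:
        line = line.replace(ch, " ")
    return len(line.split())
```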
14.5 +R Option
This option deals with the treatment of material in parentheses.
+r1 Removing Parentheses. Omitted parts of words can be marked by parentheses,
as in “(be)cause” with the first syllable omitted. The +r1 option removes the parentheses
and leaves the rest of the word as is.
+r2 Leaving Parentheses. This option leaves the word with parentheses.
+r3 Removing Material in Parentheses. This option removes all the omitted part.
Here is an example of the use of the first three +r options and their resulting outputs, if
the input word is “get(s)”:
Option Output
"no option" gets
"+r1" gets
"+r2" get(s)
"+r3" get
+r4 Removing Prosodic Symbols in Words. By default, symbols such as #, /, and :
are ignored when they occur inside words. Use this switch if you want to include them in
your searches. If you do not use this switch, the strings cat and ca:t are regarded as the
same. If you use this switch, they are seen as different. The use of these prosodic marker
symbols is discussed in the CHAT manual.
+r5 Text Replacement. By default, material in the form [: text] replaces the material
preceding it in the string search programs. The exception to this rule is for the WDLEN
program. If you do not want this replacement, use this switch.
+r6 Retraced Material. By default, material in retracings is included in searches and
counts. The exceptions are the EVAL, FREQ, MLT, MLU, and MODREP programs, for
which retracings are excluded by default. The +r6 switch is used to change these default
behaviors for those programs.
+r7 Keeping Prosodic Symbols. This switch prevents the removal of prosodic
symbols (/~^:) in words.
+r8 Combining %mor Items. This switch combines %mor tier items with the
replacement word [: …] and error code [* …], if any, from the speaker tier.
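The first three options can be sketched as follows. This is an illustrative fragment, not CLAN's implementation.

```python
import re

def apply_parens_option(word, option=None):
    """Treat parenthesized omissions per +r1/+r2/+r3, e.g. on get(s).

    Illustrative sketch of the table above.
    """
    if option == "+r2":                      # leave the parentheses as is
        return word
    if option == "+r3":                      # drop the omitted material
        return re.sub(r"\([^()]*\)", "", word)
    # default and +r1: remove the parentheses, keep the material
    return word.replace("(", "").replace(")", "")
```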
14.6 +S Option
This option allows you to search for a certain string. The +s option allows you to specify
the keyword you desire to find. You do this by putting the word in quotes directly after the
+s switch, as in +s"dog" to search for the word “dog.” You can also use the +s switch to
specify a file containing words to be searched. You do this by putting the file name after
+s@, as in +s@adverbs, which will search for the words in a file called adverbs.cut. If you
want to use +s to look for the literal character @, you need to precede it with a backslash
as in +s"\@".
By default, the programs will only search for matches to the +s string on the main line.
If you want to include a search on other tiers, you need to add them with the +t switch.
Also by default, unless you explicitly include the square brackets in your search string, the
search will ignore any material that is enclosed in square brackets.
It is possible to specify as many +s options on the command line as you like. If you
have several +s options specified, the longest ones will be applied first. Use of the +s option
will override the default list. For example, the command
freq +s"word" data.cut
You can use either single or double quotation marks. However, for Unix and CLAN
commands on the web interface, you need to use single quotation marks. When your search
string does not include any metacharacters or delimiters, you can omit the quotation marks
altogether.
Multiple +s strings are matched as exclusive or’s. If a string matches one +s string, it
cannot match the other. The most specific matches are processed first. For example, if your
command is
freq +s$gf% +s$gf:a +t%cod
Because $gf:a matches the more specific +s$gf:a, it is excluded from matching +s$gf%.
One can also use the +s switch to remove certain strings from automatic exclusion. For
example, the MLU program automatically excludes xxx, 0, uh, and words beginning with
& from the MLU count. This can be changed by using this command:
mlu +s+uh +s+xxx +s+0* +s+&* file.cha
14.7 +T Option
This option allows you to include or exclude tiers. In CHAT formatted files, there exist
three tier code types: main speaker tiers (denoted by *), speaker-dependent tiers (denoted
by %), and header tiers (denoted by @). The speaker-dependent tiers are attached to
speaker tiers. If, for example, you request to analyze the speaker *MOT and all the %cod
dependent tiers, the programs will analyze all the *MOT main tiers and only the %cod
dependent tiers associated with that speaker.
The +t option allows you to specify which main speaker tiers, their dependent tiers, and
header tiers should be included in the analysis. All other tiers, found in the given file, will
be ignored by the program. For example, the command:
freq +t*CHI +t%spa +t%mor +t"@Group of Mot" sample.cha
tells FREQ to look at only the *CHI main speaker tiers, their %spa and %mor dependent
tiers, and the “Group of Mot” header tiers. When tiers are included, the analysis will be done on
only those specified tiers.
The -t option allows you to specify which main speaker tiers, their dependent tiers, and
header tiers should be excluded from the analysis. All other tiers found in the given file
should be included in the analysis, unless specified otherwise by default. The command:
freq -t*CHI -t%spa -t%mor -t@"Group of Mot" sample.cha
tells FREQ to exclude all the *CHI main speaker tiers together with all their dependent
tiers, the %spa and %mor dependent tiers on all other speakers, and all “Group of Mot”
header tiers from the analysis. All remaining tiers will be included in the analysis.
When the transcriber has decided to use complex combinations of codes for speaker
IDs such as *CHI-MOT for “child addressing mother,” it is possible to use the +t switch
with the # symbol as a wildcard, as in these commands:
When tiers are included, the analysis will be done on only those specified tiers. When
tiers are excluded, however, the analysis is done on tiers other than those specified. Failure
to exclude all unnecessary tiers will cause the programs to produce distorted results.
Therefore, it is safer to include tiers in analyses than to exclude them, because it is often difficult
to be aware of all the tiers present in any given data file.
If only a tier-type symbol (*, %, @) is specified following the +t/-t options, the
programs will include all tiers of that symbol type in the analysis. Using the option +t@ is
important when using KWAL for limiting (see the description of the KWAL program),
because it makes sure that the header information is not lost.
The programs search sequentially, starting from the left of the tier code descriptor, for
exactly what the user has specified. This means that a match can occur wherever what has
been specified has been found. If you specify *M on the command line after the option, the
program will successfully match all speaker tiers that start with *M, such as *MAR, *MIK,
*MOT, and so forth. For full clarity, it is best to specify the full tier name after the +t/-t
options, including the : character. For example, to ensure that only the *MOT speaker tiers
are included in the analysis, use the +t*MOT: notation.
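This left-to-right prefix matching can be modeled in a short Python sketch (the tier labels and the `tier_matches` helper are illustrative, not part of CLAN itself):

```python
# Model of CLAN's prefix matching for +t/-t tier codes: a tier is
# selected when its label begins with the user-supplied code.
def tier_matches(tier_label, code):
    return tier_label.startswith(code)

tiers = ["*MAR:", "*MIK:", "*MOT:", "*CHI:"]

# "*M" matches every speaker tier that starts with *M
print([t for t in tiers if tier_matches(t, "*M")])     # ['*MAR:', '*MIK:', '*MOT:']

# "*MOT:" (with the colon) matches only the *MOT tier
print([t for t in tiers if tier_matches(t, "*MOT:")])  # ['*MOT:']
```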
As an alternative to specifying speaker names through letter codes, you can use the
form:
+t@id=idcode
In this form, the “idcode” is any character string that matches the type of string that has
been declared at the top of each file using the @ID header tier.
All of the programs include the main speaker tiers by default and exclude all of the
dependent tiers, unless a +t% switch is used.
14.8 +U Option
This option merges the output of searches on specified files together. By default, when
the user has specified a series of files on the command line, the analysis is performed on
each individual file. The program then provides separate output for each data file. If the
command line uses the +u option, the program combines the data found in all the specified
files into one set and outputs that set as a whole. For most commands, the switch merges
all data for a given speaker across files. The commands that do this are: CHAINS, CHIP,
COOCCUR, DIST, DSS, FREQ, FREQPOS, GEM, GEMFREQ, IPSYN, KEYMAP,
MAXWD, MLT, MLU, MODREP, PHONFREQ, and WDLEN. There are several other
commands for which there is a merged output, but that output separates data from different
input files. These commands are COMBO, EVAL, KIDEVAL, KWAL, MORTABLE,
TIMEDUR, and VOCD. If too many files are selected, CLAN may eventually be unable
to complete this merger.
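The contrast between per-file output and merged (+u) output can be sketched in Python with word frequency counts (the file names and word lists below are invented):

```python
from collections import Counter

# Invented per-file word lists standing in for two .cha transcripts
files = {
    "file1.cha": ["ball", "doggie", "ball"],
    "file2.cha": ["doggie", "kitty"],
}

# Default behavior: a separate frequency table for each file
per_file = {name: Counter(words) for name, words in files.items()}

# With +u: one table combining the data from all the specified files
merged = Counter()
for words in files.values():
    merged.update(words)

print(per_file["file1.cha"]["ball"])  # 2
print(merged["doggie"])               # 2
```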
14.9 +V Option
This switch gives you the date when the current version of CLAN was compiled.
14.10 +W Option
This option controls the printing of additional sentences before and after a matched
sentence. This option can be used with either KWAL or COMBO. These programs are used
to display tiers that contain keywords or regular expressions as chosen by the user. By
default, KWAL and COMBO combine the user-chosen main and dependent tiers into
“clusters.” Each cluster includes the main tier and its dependent tiers. (See the +u option
for further information on clusters.)
The -w option followed by a positive integer causes the program to display that number
of clusters before each cluster of interest. The +w option followed by a positive integer
causes the program to display that number of clusters after each cluster of interest. For
example, if you wanted the KWAL program to produce a context larger than a single
cluster, you could include the -w3 and +w2 options in the command line. The program
would then output three clusters above and two clusters below each cluster of interest.
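A minimal Python sketch of this windowing behavior (clusters are simplified to plain strings, and `with_context` is a hypothetical helper, not a CLAN function):

```python
# Model of KWAL-style context display: -w3 +w2 prints three clusters
# before and two clusters after each cluster containing a match.
def with_context(clusters, keyword, before=3, after=2):
    out = []
    for i, cluster in enumerate(clusters):
        if keyword in cluster:
            lo = max(0, i - before)
            hi = min(len(clusters), i + after + 1)
            out.append(clusters[lo:hi])
    return out

clusters = ["c0", "c1", "c2", "c3 cat", "c4", "c5", "c6"]
print(with_context(clusters, "cat"))  # [['c0', 'c1', 'c2', 'c3 cat', 'c4', 'c5']]
```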
14.11 +X Option
This option is available in most of the analysis programs. It allows you to control the
type and number of items in utterances being selected for analysis.
+xCNT, where:
C (condition) is greater than (>), less than (<), or equal to (=)
N (number) is the number of items to be included
T (type) is the type of item, which can be words (w), characters (c), or morphemes
(m); if “m” is used, there must be a %mor line
+x<10c means to include all utterances with fewer than 10 characters
+x=0w means to include all utterances with zero words
+xS: include certain items in the above count (example: +xxxx +xyyy)
-xS: exclude certain items from the above count
In the MOR and CHIP programs, +x has a different meaning.
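The +xCNT filter can be modeled in Python for the word and character units (the helper names here are invented; morpheme counts would additionally require a %mor line):

```python
import operator

# Sketch of +xCNT-style filtering: condition (<, >, =), number, and unit.
# Only word (w) and character (c) units are modeled.
OPS = {"<": operator.lt, ">": operator.gt, "=": operator.eq}

def measure(utterance, unit):
    # w counts space-separated words; c counts characters
    return len(utterance.split()) if unit == "w" else len(utterance)

def keep(utterance, cond, n, unit):
    return OPS[cond](measure(utterance, unit), n)

utts = ["hi", "the big dog ran", "0"]
# +x<10c : keep utterances with fewer than 10 characters
print([u for u in utts if keep(u, "<", 10, "c")])  # ['hi', '0']
```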
14.12 +Y Option
This option allows you to work on non-CHAT files. Most of the programs are designed
to work best on CHAT formatted data files. However, the +y option allows the user to use
these programs on non-CHAT files. It also permits certain special operations on CHAT
files. The program considers each line of a non-CHAT file to be one tier. There are two
values of the +y switch. The +y value works on lines and the +y1 value works on utterances
as delimited by periods, question marks, and exclamation marks. Some programs do not
allow the use of the +y option at all. Workers interested in using CLAN with
nonconversational data may wish to first convert their files to CHAT format using the
TEXTIN program, to avoid having to use the +y option.
If you want to search for information in specific headers, you may need to use the +y
option. For example, if you want to count the number of utterances by CHI in a file, you
can use this command:
freq +s"\*CHI" *.cha +u +y
14.13 +Z Option
This option allows the user to select any range of words, utterances, or speaker turns to
be analyzed. The range specifications should immediately follow the option. For example:
+z10w analyze the first ten words only.
+z10u analyze the first ten utterances only.
+z10t analyze the first ten speaker turns only.
+z10w-20w analyze 11 words starting with the 10th word.
+z10u-20u analyze 11 utterances starting with the 10th utterance.
+z10t-20t analyze 11 speaker turns starting with the 10th turn.
+z10w- analyze from the tenth word to the end of file.
+z10u- analyze from the tenth utterance to the end of file.
+z10t- analyze from the tenth speaker turn to the end of file.
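Because these ranges are 1-indexed and inclusive, the selections can be sketched with Python list slices (the word list here is invented):

```python
# Model of +z selection over 1-indexed items (words, utterances, or turns)
words = [f"w{i}" for i in range(1, 31)]   # a 30-item stand-in for a transcript

# +z10w : the first ten words
print(words[:10][-1])                     # 'w10'

# +z10w-20w : the 10th through the 20th word, 11 items in all
span = words[10 - 1:20]
print(len(span), span[0], span[-1])       # 11 w10 w20

# +z10w- : from the tenth word to the end of the file
print(len(words[10 - 1:]))                # 21
```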
If the +z option is used together with the +t option to select utterances from a certain
speaker, then the counting will be based only on the utterances of that speaker. For
example, this command:
mlu +z50u +t*CHI 0611.cha
will compute the MLU for the first 50 utterances produced by the child. If the +z option
is used together with the +s option, the counting will be dependent on the working of the
+s option and the results will seldom be as expected. To avoid this problem, you should
first use KWAL with +z to extract the utterances you want and then run MLU on that
output.
kwal +d +z50u +t*CHI sample.cha
kwal +sMommy sample.kwa.cex
If the +z switch specifies more items than exist in the file, the program will analyze
only the existing items. If the turn or utterance happens to be empty, because it consists of
special symbols or words that have been selected to be excluded, then this utterance or turn
is not counted.
The usual reason for selecting a fixed number of utterances is to derive samples that
are comparable across sessions or across children. Often researchers have found that
samples of 50 utterances provide almost as much information as samples of 100 utterances.
Reducing the number of utterances being transcribed is important for clinicians who have
been assigned a heavy case load.
You can also use postcodes to further control the process of inclusion or exclusion.
14.14 Metacharacters for Searching
Suppose you would like to be able to find all occurrences of the word “cat” in a file.
This includes the plural form “cats,” the possessives “cat's” and “cats',” and the contraction
“cat's.” Using a metacharacter (in this case, the asterisk) would help you to find all of these
without having to go through and individually specify each one. By inserting the string
cat* into the include file or specifying it with +s option, all these forms would be found.
Metacharacters can be placed anywhere in the word.
The * character is a wildcard character; it will find any character or group of continuous
characters that correspond to its placement in the word. For example, if b*s were specified,
the program would match words like “beads,” “bats,” “bat's,” “balls,” “beds,” “breaks,”
and so forth.
The % character allows the program to match characters in the same way as the *
symbol. Unlike the * symbol, however, all the characters matched by the % will be ignored
when the output is generated. In other words, if the search string is b%t, the output will
treat “beat” and “bat” as two occurrences of the same string. Unless the % symbol is used
with programs that produce a list of words matched by given keywords, the effect of the
% symbol will be the same as the effect of the * symbol.
When the percentage symbol is immediately followed by a second percentage symbol,
the effect of the metacharacter changes slightly. The result of such a search would be that
the % symbol will be removed along with any one character preceding the matched string.
Without adding the additional % character, a punctuation symbol preceding the wildcard
string will not be matched and will be ignored.
The underline character _ is like the * character except that it is used to specify any
single character in a word. For example, the string b_d will match words like “bad,” “bed,”
“bud,” “bid,” and so forth. For detailed examples of the use of the percentage, underline,
and asterisk symbols, see the section on special characters.
The quote character (\) is used to indicate the quotation of one of the characters being
used as metacharacters. Suppose that you wanted to search for the actual symbol (*) in a
text. Because the (*) symbol is used to represent any character, it must be quoted by
inserting the (\) symbol before the (*) symbol in the search string to represent the actual (*)
character, as in “string\*string.” To search for the actual character (\), it must be quoted
also. For example, “string\\string” will match “string” followed by “\” and then followed
by a second “string.”
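A rough Python model of these four metacharacters, translating a CLAN-style search string into a regular expression (the `clan_to_regex` helper is an illustration of the matching rules, not CLAN's actual implementation, and the output-folding behavior of % is only partially modeled):

```python
import re

# Translate CLAN search metacharacters into a Python regex:
#   *  -> any run of characters (kept in the matched output)
#   %  -> any run of characters (ignored when output is generated)
#   _  -> any single character
#   \* -> a literal asterisk; \\ -> a literal backslash
def clan_to_regex(pattern):
    out, i = [], 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "\\" and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # quoted metacharacter
            i += 2
            continue
        if ch == "*":
            out.append("(.*)")
        elif ch == "%":
            out.append("(?:.*)")  # matched, but not captured for output
        elif ch == "_":
            out.append(".")
        else:
            out.append(re.escape(ch))
        i += 1
    return "^" + "".join(out) + "$"

words = ["beads", "bats", "balls", "bad", "bed", "bud", "string*string"]
print([w for w in words if re.match(clan_to_regex("b*s"), w)])   # ['beads', 'bats', 'balls']
print([w for w in words if re.match(clan_to_regex("b_d"), w)])   # ['bad', 'bed', 'bud']
print(bool(re.match(clan_to_regex(r"string\*string"), "string*string")))  # True
```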