Report
Neural evidence for referential understanding of object words in dogs
Marianna Boros,1,7,8,* Lilla Magyari,1,2,3,7,* Boglárka Morvai,1 Raúl Hernández-Pérez,1,4 Shany Dror,5,6 and Attila Andics1,4
1Neuroethology of Communication Lab, Department of Ethology, Eötvös Loránd University, Pázmány Péter sétány 1/C, 1117 Budapest, Hungary
2Norwegian Centre for Reading Education and Research, Faculty of Arts and Education, University of Stavanger, Professor Olav Hanssens vei
SUMMARY

Using words to refer to objects in the environment is a core feature of the human language faculty. Referential understanding assumes the formation of mental representations of these words.1,2 Such understanding of object words has not yet been demonstrated as a general capacity in any non-human species,3 despite multiple behavior-based case reports.4–10 In human event-related potential (ERP) studies, object word knowledge is typically tested using the semantic violation paradigm, where words are presented either with their referent (match) or another object (mismatch).11,12 Such mismatch elicits an N400 effect, a well-established neural correlate of semantic processing.12,13 Reports of preverbal infant N400 evoked by semantic violations14 assert the use of this paradigm to probe mental representations of object words in nonverbal populations. Here, measuring dogs' (Canis familiaris) ERPs to objects primed with matching or mismatching object words, we found a mismatch effect at a frontal electrode, with a latency (206–606 ms) comparable to the human N400. A greater difference for words that dogs knew better, according to owner reports, further supported a semantic interpretation of this effect. Semantic expectations emerged irrespective of vocabulary size, demonstrating the prevalence of referential understanding in dogs. These results provide the first neural evidence for object word knowledge in a non-human animal.
Current Biology 34, 1–5, April 22, 2024 © 2024 Elsevier Inc.
Please cite this article in press as: Boros et al., Neural evidence for referential understanding of object words in dogs, Current Biology (2024), https://doi.org/10.1016/j.cub.2024.02.029
[Figure 1. Panels A and B: trial sequence. The object image is displayed to the owner; the prime word is played from the loudspeaker ("Kun-kun, look, the ball!"); the owner holds up the object (match or mismatch) for 2000 ms; the next trial starts on button press.]
the same objects were presented in the matching and the mismatching conditions, to control for object word-related (e.g., acoustic, familiarity-based, and cloze probability-based) and object-related (e.g., perceptual, preferential, and attentional) confounds. To ensure precise timing, objects were shown through an electric window with controlled opacity. To maintain the dog's attention, the owner's face was visible every time the window turned transparent. ERP analysis was time-locked to the onset of object presentation. Only dogs with at least five trials per condition (match, mean ± SD = 37.33 ± 28.79; mismatch, mean ± SD = 39.50 ± 32.33; Table S2) after artefact rejection were included in the statistical analyses (n = 18; Table S3; see STAR Methods for more details).

We first compared dogs' ERPs for match versus mismatch trials in an a priori time window (350–500 ms) defined by an N400 effect reported in adult humans in a similar task.14 We averaged data within this time window and applied a permutation test for repeated-measures (RM) ANOVA, revealing an interaction effect between electrode and condition (F(2) = 4.307, p < 0.001). Follow-up testing of the main effect of condition on the electrodes (Fz, FCz, and Cz) separately revealed a significant difference only on the Fz electrode (F(1) = 8.284, p = 0.002; Table S4), indicating a more positive deflection for the mismatch than for the match condition (mean ± SD = 1.34 ± 5.98 µV versus mean ± SD = 4.67 ± 7.65 µV, respectively). Then, to determine the exact spatiotemporal distribution of this condition difference, we applied a cluster-based random permutation approach.28 The semantic mismatch effect was confirmed on the Fz electrode between 206 and 606 ms after visual stimulus onset (F(1) = 8.38, p = 0.001; match, mean ± SD = 4.53 ± 6.64 µV; mismatch, mean ± SD = 1.72 ± 5.46 µV; Figure 2A; Table S4). A subsequent permutational RM ANOVA revealed a condition by reported word knowledge interaction (F(1) = 1.972, p = 0.005), with a positive-going ERP deflection for the mismatch compared to the match condition for the well-known words (scored at least 4 by the owner on a 5-point Likert scale; Table S1) only (F(1) = 3.753, p = 0.017; match, mean ± SD = 4.97 ± 12.50 µV; mismatch, mean ± SD = 1.44 ± 8.61 µV; Figure 2B; Table S4). Condition-averaged data of most of the dogs (14 out of 18, a proportion comparable to reports from humans; cf. Parise and Csibra14) showed a positive mismatch-match difference (Wilcoxon signed-rank test, V = 22, p = 0.004; Figure 2C), and Spearman correlation analysis indicated no relationship between the magnitude of the mismatch effect and dogs' owner-reported vocabulary size for noun-like words (range: 5–230; Table S1; rs = 0.273, p = 0.289; BF = 0.507, n = 17), together suggesting that the effect was not driven by a few individuals with a large vocabulary. Finally, the lack of correlation of both age and head shape (cephalic index, CI) with the magnitude of the mismatch effect indicated that the result is independent of age and breed (rs = 0.421, p = 0.081, BF = 0.33, n = 18 and rs = 0.225, p = 0.368, BF = 0.33, n = 18, for age and CI, respectively).

DISCUSSION

Dogs understand the referential nature of object words
The ERP mismatch effect reported here constitutes the first demonstration of a neural correlate of semantic expectation violation in a non-human species. Several arguments support that this mismatch effect is indeed referential and semantic in nature. First, on a more general note, the same objects were presented an equal number of times in the matching and the mismatching conditions, so ERP differences measured at object presentation are unlikely to reflect object-related perceptual, preference, or attentional effects. Second, because of the temporal delay between the prime word and the probe object during stimulus presentation, the mental representation of the object needed to emerge in the absence of the object (cf. Waxman2). This favors the interpretation that the expectation violation effect here reflects referential understanding of object words rather than pure associations. Third, the fact that better known object words (as assessed by the owner) evoked a greater mismatch effect is also compatible with the account
Figure 2. Results
(A) Grand-averaged ERPs for the match and mismatch conditions at the Fz electrode. Onset of object presentation was at 0 ms (vertical red dashed line). Cluster-based random permutation identified a significant condition difference in the time window 206–606 ms (red box).
(B and C) Boxplots of individual ERPs per condition averaged in this time window. (B) Split for (owner-reported) little-known and well-known object words. (C) Connected dots across conditions represent individual data. Boxplots indicate the median, 25th and 75th percentiles (boxes), and minimum and maximum (whiskers).
**p < 0.01, *p < 0.05.
See also Tables S1, S2, and S4.
and the polarity of the difference between conditions can vary depending on factors such as the location of electrodes, the choice of the reference, and the geometry of underlying cortices. Indeed, more positive-going deflections for matching than for mismatching words have been observed in semantic priming paradigms in human infants in an early time window (N200–500)34–36, and the positive end of the N400 dipole has been identified in a semantic violation paradigm in humans with intracranial EEG.37 Furthermore, dipole orientation may vary across species due to gyrification differences. Note, however, that previous canine ERP studies with similarly positioned electrodes probing word processing reproduced the negative polarity in paradigms that in humans typically evoked N400 (word segmentation index22; word detection index21). It is also possible that the present findings reflect a component different from N400. An expectation-modulated, positive-going component, P3b, has also been reported across multiple species38 in a similar window as the effect […] for object word understanding in a non-human species. The discovery of this capacity in dogs informs theoretical work on language evolution and semantics by revealing that the appreciation of referentiality during lexical processing is not a distinctive feature of human language use.

STAR+METHODS

Detailed methods are provided in the online version of this paper and include the following:

- KEY RESOURCES TABLE
- RESOURCE AVAILABILITY
  - Lead Contact
  - Materials availability
  - Data and code availability
- EXPERIMENTAL MODEL AND SUBJECT DETAILS
- METHOD DETAILS
  - Stimuli and procedure
  - Data acquisition
- QUANTIFICATION AND STATISTICAL ANALYSIS

SUPPLEMENTAL INFORMATION

Supplemental information can be found online at https://doi.org/10.1016/j.cub.2024.02.029. A video abstract is available at https://doi.org/10.1016/j.cub.2024.02.029#mmc3.

ACKNOWLEDGMENTS

This project was funded by the Hungarian Academy of Sciences, a grant to the MTA-ELTE "Lendület" Neuroethology of Communication Research Group (LP2017-13/2017), the Eötvös Loránd University, and the European Research Council under the European Union's Horizon 2020 research and innovation program (grant number 950159). M.B. was supported by the ÚNKP-22-4 New National Excellence Program of the Ministry for Culture and Innovation from the National Research, Development and Innovation Fund (ÚNKP-22-4-II-ELTE-963). A.A. was supported by the National Brain Programme 3.0 of the Hungarian Academy of Sciences (NAP2022-I-3/2022). We are grateful to Claudia Fugazza and Ádám Miklósi for the discussion during the conceptualization of the study and to Elodie Ferrando, Maria Woitow, Lisa Touazi, Dávid Török, and Andrea Sommese for their assistance in the EEG experiments. We thank all dog owners and dogs for their participation in the experiments.

AUTHOR CONTRIBUTIONS

Conceptualization, L.M., M.B., and A.A.; methodology, M.B., L.M., B.M., R.H.-P., and A.A.; investigation, M.B., B.M., and S.D.; visualization, M.B. and B.M.; writing – original draft, L.M., M.B., B.M., and A.A.; writing – review & editing, L.M., M.B., B.M., R.H.-P., S.D., and A.A.; funding acquisition, A.A.; project administration, M.B. and B.M.; supervision, L.M. and A.A.

REFERENCES

1. Bloom, P. (2001). Precis of how children learn the meanings of words. Behav. Brain Sci. 24, 1095–1103.
2. Waxman, S.R., and Gelman, S.A. (2009). Early word-learning entails reference, not merely associations. Trends Cogn. Sci. 13, 258–263.
3. Fitch, W.T. (2020). Animal cognition and the evolution of human language: why we cannot focus solely on communication. Philos. Trans. R. Soc. Lond. B Biol. Sci. 375, 20190046.
4. Pepperberg, I.M. (2006). Cognitive and communicative abilities of grey parrots. Appl. Anim. Behav. Sci. 100, 77–86.
5. Herman, L.M., Richards, D.G., and Wolz, J.P. (1984). Comprehension of sentences by bottlenosed dolphins. Cognition 16, 129–219.
6. Fugazza, C., and Miklósi, Á. (2020). Depths and limits of spontaneous categorization in a family dog. Sci. Rep. 10, 3082.
7. Kaminski, J., Call, J., and Fischer, J. (2004). Word learning in a domestic dog: evidence for "fast mapping". Science 304, 1682–1683.
8. Pilley, J.W., and Reid, A.K. (2011). Border collie comprehends object names as verbal referents. Behav. Processes 86, 184–195.
9. Savage-Rumbaugh, S. (1986). Ape Language: From Conditioned Response to Symbol (Columbia University Press).
10. Savage-Rumbaugh, E.S., Murphy, J., Sevcik, R.A., Brakke, K.E., Williams, S.L., Rumbaugh, D.M., and Bates, E. (1993). Language comprehension in ape and child. Monogr. Soc. Res. Child Dev. 58.
11. Friedrich, M., and Friederici, A.D. (2005). Phonotactic knowledge and lexical-semantic processing in one-year-olds: brain responses to words and nonsense words in picture contexts. J. Cogn. Neurosci. 17, 1785–1802.
12. Kutas, M., and Federmeier, K.D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu. Rev. Psychol. 62, 621–647.
13. Lau, E.F., Phillips, C., and Poeppel, D. (2008). A cortical network for semantics: (de)constructing the N400. Nat. Rev. Neurosci. 9, 920–933.
14. Parise, E., and Csibra, G. (2012). Electrophysiological evidence for the understanding of maternal speech by 9-month-old infants. Psychol. Sci. 23, 728–733.
15. Miklósi, A., and Topál, J. (2013). What does it take to become 'best friends'? Evolutionary changes in canine social competence. Trends Cogn. Sci. 17, 287–294.
16. Dror, S., Miklósi, Á., Sommese, A., Temesi, A., and Fugazza, C. (2021). Acquisition and long-term memory of object names in a sample of gifted word learner dogs. R. Soc. Open Sci. 8, 210976.
17. Fugazza, C., Andics, A., Magyari, L., Dror, S., Zempléni, A., and Miklósi, Á. (2021). Rapid learning of object names in dogs. Sci. Rep. 11, 2222.
18. Ramos, D., and Mills, D.S. (2019). Limitations in the learning of verbal content by dogs during the training of OBJECT and ACTION commands. J. Vet. Behav. 31, 92–99.
19. Andics, A., Gábor, A., Gácsi, M., Faragó, T., Szabó, D., and Miklósi, Á. (2016). Neural mechanisms for lexical processing in dogs. Science 353, 1030–1032.
20. Gábor, A., Gácsi, M., Szabó, D., Miklósi, Á., Kubinyi, E., and Andics, A. (2020). Multilevel fMRI adaptation for spoken word processing in the awake dog brain. Sci. Rep. 10, 11968.
21. Magyari, L., Huszár, Z., Turzó, A., and Andics, A. (2020). Event-related potentials reveal limited readiness to access phonetic details during word processing in dogs. R. Soc. Open Sci. 7, 200851.
22. Boros, M., Magyari, L., Török, D., Bozsik, A., Deme, A., and Andics, A. (2021). Neural processes underlying statistical learning for speech segmentation in dogs. Curr. Biol. 31, 5512–5521.e5.
23. Kis, A., Szakadát, S., Kovács, E., Gácsi, M., Simor, P., Gombos, F., Topál, J., Miklósi, A., and Bódizs, R. (2014). Development of a non-invasive polysomnography technique for dogs (Canis familiaris). Physiol. Behav. 130, 149–156.
24. Téglás, E., Gergely, A., Kupán, K., Miklósi, Á., and Topál, J. (2012). Dogs' gaze following is tuned to human communicative signals. Curr. Biol. 22, 209–212.
25. Kaminski, J., Schulz, L., and Tomasello, M. (2012). How dogs know when communication is intended for them. Dev. Sci. 15, 222–232.
26. Senju, A., Csibra, G., and Johnson, M.H. (2008). Understanding the referential nature of looking: infants' preference for object-directed gaze. Cognition 108, 303–319.
27. Tauzin, T., Csík, A., Kis, A., and Topál, J. (2015). What or where? The meaning of referential human pointing for dogs (Canis familiaris). J. Comp. Psychol. 129, 334–338.
28. Maris, E., and Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177–190.
29. Kutas, M., Van Petten, C., and Kluender, R. (2006). Psycholinguistics electrified II (1994–2005). In The Handbook of Psycholinguistics, M.J. Traxler, and M.A. Gernsbacher, eds. (Elsevier), pp. 659–724.
30. Townsend, S.W., and Manser, M.B. (2013). Functionally referential communication in mammals: the past, present and the future. Ethology 119, 1–11.
31. Prichard, A., Cook, P.F., Spivak, M., Chhibber, R., and Berns, G.S. (2018). Awake fMRI reveals brain regions for novel word detection in dogs. Front. Neurosci. 12, 737.
32. Karuza, E.A., Emberson, L.L., and Aslin, R.N. (2014). Combining fMRI and behavioral measures to examine the process of human learning. Neurobiol. Learn. Mem. 109, 193–206.
33. Fugazza, C., Dror, S., Sommese, A., Temesi, A., and Miklósi, Á. (2021). Word learning dogs (Canis familiaris) provide an animal model for studying exceptional performance. Sci. Rep. 11, 14070.
34. Friedrich, M., and Friederici, A.D. (2004). N400-like semantic incongruity effect in 19-month-olds: processing known words in picture contexts. J. Cogn. Neurosci. 16, 1465–1477.
35. Friedrich, M., and Friederici, A.D. (2008). Neurophysiological correlates of online word learning in 14-month-old infants. Neuroreport 19, 1757–1761.
36. Torkildsen, J.v.K., Syversen, G., Simonsen, H.G., Moen, I., and Lindgren, M. (2007). Electrophysiological correlates of auditory semantic priming in 24-month-olds. J. Neurolinguistics 20, 332–351.
37. Nobre, A.C., and McCarthy, G. (1994). Language-related ERPs: scalp distributions and modulation by word type and semantic priming. J. Cogn. Neurosci. 6, 233–255.
38. Paller, K.A. (1994). The neural substrates of cognitive event-related potentials: a review of animal models of P3. In Cognitive Electrophysiology, H.-J. Heinze, T.F. Münte, and G.R. Mangun, eds. (Birkhäuser Boston), pp. 300–333.
39. Leckey, M., and Federmeier, K.D. (2020). The P3b and P600(s): positive contributions to language comprehension. Psychophysiology 57, e13351.
40. Boersma, P., and Weenink, D. (2023). Praat: doing phonetics by computer.
41. Kleiner, M., Brainard, D.H., Pelli, D., Ingling, A., Murray, R., and Broussard, C. (2007). What's new in Psychtoolbox-3.
42. Oostenveld, R., Fries, P., Maris, E., and Schoffelen, J.-M. (2011). FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 156869.
43. R Core Team (2022). R: A language and environment for statistical computing (R Foundation for Statistical Computing).
44. Voeten, C.C. (2023). buildmer: stepwise elimination and term reordering for mixed-effects regression.
45. Voeten, C.C. (2022). permutes: permutation tests for time series data.
46. Morey, R.D., and Rouder, J.N. (2022). BayesFactor: computation of Bayes factors for common designs.
47. Reeve, C., and Jacques, S. (2022). Responses to spoken words by domestic dogs: a new instrument for use with dog owners. Appl. Anim. Behav. Sci. 246, 105513.
48. Pongrácz, P., Miklósi, Á., and Csányi, V. (2000). Owner's beliefs on the ability of their pet dogs to understand human verbal communication: a case of social understanding. Cah. Psychol. Cogn. 20, 87–107.
49. Patrucco-Nanchen, T., Friend, M., Poulin-Dubois, D., and Zesiger, P. (2019). Do early lexical skills predict language outcome at 3 years? A longitudinal study of French-speaking children. Infant Behav. Dev. 57, 101379.
50. Styles, S., and Plunkett, K. (2009). What is 'word understanding' for the parent of a one-year-old? Matching the difficulty of a lexical comprehension task to parental CDI report. J. Child Lang. 36, 895–908.
51. Kubinyi, E., and Wallis, L.J. (2019). Dominance in dogs as rated by owners corresponds to ethologically valid markers of dominance. PeerJ 7, e6838.
52. Czeibert, K., Baksa, G., Grimm, A., Nagy, S.A., Kubinyi, E., and Petneházy, Ö. (2019). MRI, CT and high resolution macro-anatomical images with cryosectioning of a Beagle brain: creating the base of a multimodal imaging atlas. PLoS One 14, e0213458.
53. Milne, A.E., Mueller, J.L., Männel, C., Attaheri, A., Friederici, A.D., and Petkov, C.I. (2016). Evolutionary origins of non-adjacent sequence processing in primate brain potentials. Sci. Rep. 6, 36259.
54. Bognár, Z., Iotchev, I.B., and Kubinyi, E. (2018). Sex, skull length, breed, and age predict how dogs look at faces of humans and conspecifics. Anim. Cogn. 21, 447–456.
55. Bognár, Z., Szabó, D., Dees, A., and Kubinyi, E. (2021). Shorter headed dogs, visually cooperative breeds, younger and playful dogs form eye contact faster with an unfamiliar human. Sci. Rep. 11, 9293.
STAR+METHODS
RESOURCE AVAILABILITY
Lead Contact
Further information and requests for resources should be directed to and will be fulfilled by the Lead Contact, Marianna Boros
([email protected]). There is no restriction for distribution of materials.
Materials availability
This study did not generate new unique reagents.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Participants were healthy companion dogs (n = 27), recruited via departmental social platforms. To ensure that all subjects had some experience with object words, we included only dogs judged by their owners to know at least three object words, the average owner-reported knowledge of noun-like words.47 Of the 27 dogs that participated in the EEG experiment, we excluded 2 dogs due to technical issues and 7 dogs because of a high number of movement artefacts or noisy electrodes during the automatic data cleaning procedure (see section EEG artefact-rejection and analysis). The EEG data of 18 dogs were analyzed (Table S3; 10 males; mean age ± SD: 6.36 ± 2.87 years, range: 1.5–10 years; 5 Border Collies, 1 Akita Inu, 1 White Swiss Shepherd, 1 Miniature Pinscher, 1 Toy Poodle, 1 Pumi, 1 Hungarian Vizsla, 1 Standard Schnauzer, 1 Labrador Retriever, 5 mixed breeds).
The study used non-invasive scalp EEG that did not cause any pain to the dogs. The University Institutional Animal Care and Use
Committee (UIACUC, Eötvös Loránd University, Hungary) issued its ethical approval of the project under the certificate no. ELTE-
AWC-010/2021. The owners of the dogs volunteered to take part in the project in return for a modest monetary compensation
and gave written informed consent.
METHOD DETAILS
the electric window only. During the training session, O1 was trained on the experimental procedure with a stimulus set from
another dog.
At the testing sessions, after initial free exploration of the room, dogs were instructed to lie down on the mattress facing the occluder with the electric window. Next, E1 (and E2 if present) began the electrode application, during which the owner(s) comforted the dog and kept it in position. Once all electrodes were placed on the dog's head, O2/E2 sat down next to the dog, and E1, together with O1, left the inner zone. Testing started when all participants (dog, owners, and experimenters) had taken their positions and the dog was lying with its head down in a stable and relaxed position.
At the beginning of each trial, the electric window was opaque. First, the image of the object was displayed on the monitor of O1 for
1500 ms to allow O1 to localize and grab the object. Then the electric window became transparent, and O1 gazed through the window
at the dog to grab its attention, while the auditory stimulus was played back from the loudspeakers. Next, the electric window became
opaque for 1000 ms, during which O1 raised the object in front of his/her face. Then the electric window turned transparent again, O1
held the object in front of his/her face and looked down at the monitor where a countdown of 2 s was displayed. After 2000 ms, the
electric window turned opaque again, and O1 put down the object. E1, who observed the dog through the webcam, initiated the next trial once the dog was attentive, in the right position, and facing the electric window with open eyes.
During match trials, objects presented to the dogs by their owners were primed with the corresponding object word. During
mismatch trials, objects were primed with auditory stimuli containing a non-matching object word. The same words and the same
objects were presented an equal number of times in the matching and the mismatching conditions. The presentation order of the trials was pseudo-randomized with the following constraints: no more than two identical words in a row, and no more than two consecutive trials of the same condition (matching or mismatching).
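The ordering constraints above can be sketched as a rejection-sampling shuffle; the word list, trial counts, and function name below are illustrative assumptions, not the study's actual stimulus set:

```python
import random

def pseudo_randomize(trials, max_run=2, seed=0, max_tries=10_000):
    """Shuffle (word, condition) trials until no more than `max_run`
    identical words and no more than `max_run` same-condition trials
    occur in a row. Hypothetical helper, not the study's code."""
    rng = random.Random(seed)

    def runs_ok(seq, key):
        run = 1
        for a, b in zip(seq, seq[1:]):
            run = run + 1 if key(a) == key(b) else 1
            if run > max_run:
                return False
        return True

    order = list(trials)
    for _ in range(max_tries):
        rng.shuffle(order)
        if runs_ok(order, key=lambda t: t[0]) and runs_ok(order, key=lambda t: t[1]):
            return order
    raise RuntimeError("no valid order found")

# invented mini stimulus set: 3 words x 2 conditions x 2 repetitions
words = ["ball", "rope", "phone"]
trials = [(w, c) for w in words for c in ("match", "mismatch") for _ in range(2)]
order = pseudo_randomize(trials)
```

Rejection sampling keeps the trial counts exactly balanced while satisfying both run-length constraints, which is harder to guarantee with an incremental construction.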
The dogs completed as many trials during a testing session as their owners felt appropriate, i.e., as long as the dog was not too tired and still attentive (see Table S2 for the total number of trials per dog). When a dog stood or sat up during the experiment, the person sitting next
to the dog was asked to encourage the dog to get back to a lying position. If the dog wanted to leave the mattress, the session was
aborted. Owners were invited to participate in many testing sessions. The final number of sessions depended on the availability of the
owners and the dog’s cooperativity (min. 1, max. 6 sessions).
Data acquisition
Electrophysiological recording
Electrode placement followed a canine EEG setup developed and validated in our lab, which has been used successfully to study word
processing in dogs.21,22 Surface-attached scalp electrodes (gold-coated Ag|AgCl) were fixed with EC2 Grass Electrode Cream (Grass
Technologies, USA) on the dogs’ heads and placed over the anteroposterior midline of the skull (Fz, Cz, FCz, Pz), and on the zygomatic
arch (os zygomaticum), next to the left (F7) and right eyes (F8, electrooculography, EOG). The ground electrode was placed on the left
musculus temporalis. Fz, Cz, FCz and EOG derivations were referred to Pz.21,23 The reference electrode (Pz), Fz, Cz and FCz were placed
on the dog’s head at the anteroposterior midline above the bone to decrease the chance for artefacts from muscle movements. Pz was
placed posteriorly to the active electrodes on a head-surface above the back part of the external sagittal crest (crista sagittalis externa) at
the occipital bulge of dogs where the skull is usually the thickest and under which either no brain or only the cerebellum is located depend-
ing on the shape of the skull52 (see more explanation of the electrode placement in Magyari et al.21). Impedances for the EEG electrodes
were kept below 20 kΩ, and mostly below 10 kΩ. The EEG was amplified on 8 channels by a 64-channel Neuvo amplifier (Compumedics Neuroscan, Australia) using DC recording and a 200 Hz anti-aliasing filter, and digitized at a 500 Hz sampling rate.
EEG artefact rejection
EEG artefact rejection and analysis were conducted using the FieldTrip software package42 in MATLAB R2018a (The Mathworks,
Massachusetts, USA).
EEG data were digitally filtered offline with a 0.1 Hz high-pass and a 35 Hz low-pass filter and segmented between 200 ms pre-stimulus and 1000 ms after stimulus onset (visual object presentation). Each segment was detrended by removing the first-order polynomial and baselined between -200 ms and 0 ms. A two-step automatic artefact rejection procedure was applied to remove eye-, ear-, head- or any other body movement-related and other artefacts (see Magyari et al.21). In the first step, trials with deviant voltage values were rejected by removing each segment where amplitudes exceeded ±110 µV at any time point in the segment. In the second step, abrupt changes in amplitude related to saccadic movements were identified as minimum and maximum values exceeding 120 µV at any of the electrodes (Fz, Cz, FCz, F7, and F8) within 100 ms long sliding windows, and segments containing such abrupt changes were rejected from further analysis. Additionally, videos recorded during experimental sessions were screened to identify trials where the dogs were inattentive during object presentation. This screening did not identify any trials for exclusion that had not already been excluded during automatic artefact rejection. Segments were averaged separately for each condition, session, participant, and electrode (Fz, Cz, FCz). After the automatic artefact rejection, on average 30.15% of trials remained in the matching condition (min: 0%, max: 76.66%) and 31.79% of the trials in the mismatching condition (min: 0%, max: 80%) (Table S2).
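The two-step rejection can be sketched as follows; the actual analysis used FieldTrip in MATLAB, so the function name, array layout, and the min-max reading of the 120 µV sliding-window criterion are assumptions, with thresholds taken from the text:

```python
import numpy as np

def reject_artifacts(segments, abs_thresh=110.0, jump_thresh=120.0,
                     win=50):
    """Two-step artefact rejection on `segments` shaped
    (n_trials, n_channels, n_samples), in microvolts.
    Step 1: drop trials whose amplitude exceeds ±abs_thresh anywhere.
    Step 2: drop trials where the min-max range within any sliding
    window of `win` samples (100 ms at 500 Hz) exceeds jump_thresh.
    Illustrative sketch, not the study's FieldTrip pipeline."""
    keep = []
    for trial in segments:
        # step 1: absolute amplitude threshold
        if np.abs(trial).max() > abs_thresh:
            continue
        ok = True
        # step 2: abrupt amplitude changes within 100 ms windows
        for ch in trial:
            for start in range(0, ch.size - win + 1):
                w = ch[start:start + win]
                if w.max() - w.min() > jump_thresh:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            keep.append(trial)
    return np.array(keep)
```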
We included in the analysis all sessions where at least 5 trials per condition remained after the automatic artefact rejection (average no. of match trials per dog: 13.71, min: 5, max: 41; average no. of mismatch trials: 13.75, min: 5, max: 43). This resulted in 18 dogs being included in the analysis. All statistical analyses were carried out using R version 4.2.2.43
QUANTIFICATION AND STATISTICAL ANALYSIS
We first defined an a priori time window between 350 and 500 ms, defined by an N400 effect reported in humans using a similar
task.14 We averaged data across trials and conditions for each dog over the time window of interest separately for the Fz, Cz, and FCz
electrodes. Because the raw EEG data and the residuals of linear models did not conform to a normal distribution regardless of the transformation applied, we applied a permutation test for repeated-measures (RM) ANOVA (with 1000 permutations) containing the fixed factors of condition (match and mismatch) and electrode (Fz, Cz, FCz) and their interactions, and subject with nested session as random terms, using the 'buildmer' package.44 The permutation test yielded a significant interaction between the fixed effects; therefore, we ran separate permutation tests for data from each electrode, creating ANOVAs containing the fixed factor of condition (match and mismatch), and subject with nested session as random terms. We report p values and F statistics from all tests in
Table S4.
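The analysis itself was run in R with the 'permutes' and 'buildmer' packages; as a much simplified stand-in, a within-subject sign-flip permutation test on the condition difference can be sketched in Python (synthetic data, hypothetical function name):

```python
import numpy as np

def paired_permutation_test(match, mismatch, n_perm=1000, seed=0):
    """Permutation test for a within-subject condition effect.
    `match` / `mismatch`: per-dog mean amplitudes (µV) in the window.
    Condition labels are flipped within each subject on every permutation;
    p is the fraction of permuted mean differences at least as extreme
    as the observed one. A sketch only, not the paper's mixed-model
    permutation ANOVA."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(mismatch, float) - np.asarray(match, float)
    observed = diffs.mean()
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diffs.size)
        if abs((signs * diffs).mean()) >= abs(observed):
            count += 1
    return observed, count / n_perm
```

Sign-flipping within subjects respects the paired design: under the null hypothesis of no condition effect, each dog's match/mismatch labels are exchangeable.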
To determine the exact spatiotemporal distribution of the condition differences, we applied a cluster-based random permutation
procedure,28 utilizing the ‘permutes’45 and ‘buildmer’44 packages. As a first step, for each time point (1000 time points resulting from
each ms after stimulus onset) a separate Linear Mixed-Effects Model (LMM) was built using across-trial dog-averages from the Fz,
FCz and Cz electrodes, including condition (match and mismatch) and electrode (Fz, Cz, FCz) as fixed factors and their interactions,
and the subjects’ ID with nested session as random term. For each model, a permutation test was run to determine which factor had a
significant effect at each time point using permuted likelihood ratio tests (LRTs). Next, we used the LRTs to compute a cluster-mass statistic over temporally adjacent significant time points (min. 50 ms, to exclude spurious effects; cf. Milne et al.53) and selected the cluster with the largest cluster-level value. The permutation test yielded a significant interaction between the fixed effects; therefore,
we repeated the cluster-based random permutation procedure separately for data from each electrode. We report p values from all
tests in Table S4.
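The clustering step of this procedure (grouping temporally adjacent significant time points and scoring each cluster by its mass) can be sketched as follows; the per-time-point statistics would come from the permuted LRTs, and the function name and 1 ms resolution are assumptions:

```python
import numpy as np

def cluster_mass(stats, threshold, min_len=50):
    """Cluster step of a cluster-based permutation test (cf. Maris &
    Oostenveld, 2007): group temporally adjacent time points whose
    statistic exceeds `threshold`, keep clusters of at least `min_len`
    points (50 ms at 1 ms resolution), and return
    (start, end_exclusive, mass) tuples sorted by mass, largest first.
    Sketch of the clustering logic only."""
    s = np.asarray(stats, dtype=float)
    above = s > threshold
    clusters = []
    start = None
    for i, flag in enumerate(np.append(above, False)):  # sentinel closes last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                clusters.append((start, i, float(s[start:i].sum())))
            start = None
    return sorted(clusters, key=lambda c: -c[2])
```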
Additionally, we performed a Wilcoxon signed-rank test to evaluate whether the ERP differences between the match and mismatch conditions in the 206–606 ms time window showed the same direction in most of the dogs.
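For illustration, SciPy's implementation of the Wilcoxon signed-rank test can be applied to synthetic per-dog mismatch-minus-match differences (the values below are invented, not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# 16 positive and 2 small negative differences (µV), mimicking a sample
# in which most subjects show the effect in the same direction
diff = np.array([-0.5, -0.2] + list(range(1, 17)), dtype=float)

# two-sided test of whether the median difference is zero; the statistic
# is the smaller of the two signed-rank sums
stat, p = wilcoxon(diff)
```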
In the final sample, dogs’ average owner-reported word knowledge ranged from 2 to 5 points (mean ± SD = 3.9 ± 0.95, see
Table S1). To test whether this reported word knowledge of dogs affected the observed mismatch effect, we performed a follow-
up analysis. For each trial, we included the reported knowledge score of the word used. Only words that occurred in at least one match and one mismatch trial within a session (after artefact rejection) were included in the analysis. To obtain a balanced number
of observations, we collapsed the data into two new categories: little-known words (scores 1 to 3) and well-known words (scores 4 to
5). We then carried out a permutational RM ANOVA (with 1000 permutations) on the data averaged over the time window defined by
the cluster-based random permutation analysis (206-606 ms) including two fixed factors: condition (match and mismatch) and re-
ported word knowledge (little-known words, well-known words), and their interactions; and subject with nested session as random
terms ('buildmer' package44). As the permutation test yielded a significant interaction between the fixed effects, we ran separate permutation tests for data from each reported word knowledge category, creating an RM ANOVA containing the fixed factor of condition
(match and mismatch), and subject with nested session as random terms. We report p values and F statistics from all tests in
Table S4.
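The collapsing of owner ratings into the two categories can be sketched as a simple mapping; the words and scores below are invented examples, and the helper name is an assumption:

```python
def knowledge_category(score):
    """Map a 1-5 owner rating to the paper's two categories:
    scores 1-3 -> little-known, scores 4-5 -> well-known."""
    return "well-known" if score >= 4 else "little-known"

# hypothetical owner ratings for four object words
ratings = {"ball": 5, "rope": 3, "frisbee": 4, "bone": 1}
cats = {word: knowledge_category(s) for word, s in ratings.items()}
```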
Dogs' owner-reported vocabulary sizes ranged from 5 to 230 noun-like words (mean ± SD = 32.23 ± 55.85, see Table S1). Of the 18 dogs whose data were analyzed, the owners of 17 provided this information. To assess whether there is a relationship between dogs'
owner-reported vocabulary size and the magnitude of the mismatch effect, we carried out a Spearman correlation analysis between
each dog's owner-reported vocabulary size and the difference between its average ERP responses in the mismatch and match
conditions in the time window defined by the cluster-based random permutation analysis (206-606 ms). The strength of the evidence
was assessed using Bayesian inference by applying the ‘BayesFactor’ package.46
To test whether the obtained result depends on age or head shape (cephalic index), we correlated these factors with individual
response differences between conditions using Spearman correlation. Cephalic index was calculated from head length and width
measured externally (cf. Bognár et al.54,55) and conditional differences were calculated between the average ERP responses in
the mismatch and match conditions in the time window defined by the cluster-based random permutation analysis (206-606 ms).
The strength of the evidence was assessed using Bayesian inference by applying the ‘BayesFactor’ package.46
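A rank-based correlation like the one used in these analyses can be sketched in Python without the R packages; the helper below computes Spearman's rho as the Pearson correlation of (tie-averaged) ranks, and the vocabulary and effect values are invented illustrations:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to ties. A compact stand-in for the
    R analysis (which also computed Bayes factors)."""
    def rank(a):
        a = np.asarray(a, dtype=float)
        order = a.argsort()
        r = np.empty(a.size)
        r[order] = np.arange(1, a.size + 1)
        for v in np.unique(a):          # average ranks for tied values
            m = a == v
            r[m] = r[m].mean()
        return r
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

vocab = [5, 12, 20, 48, 230]          # hypothetical noun-like vocabulary sizes
effect = [2.1, -0.4, 3.0, 1.2, 0.8]   # hypothetical mismatch - match (µV)
rho = spearman_rho(vocab, effect)
```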