
Report

Neural evidence for referential understanding of object words in dogs

Highlights
- Dogs' object word understanding is probed with EEG in a semantic violation paradigm
- Mismatch between prime word and target object evoked a human N400-like ERP effect
- This is neural evidence for object word-elicited mental representations in non-humans
- Dogs' object word understanding is thus, similarly to humans', referential in nature

Authors
Marianna Boros, Lilla Magyari, Boglárka Morvai, Raúl Hernández-Pérez, Shany Dror, Attila Andics

Correspondence
[email protected] (M.B.), [email protected] (L.M.)

In brief
Boros, Magyari et al. find a human N400-like semantic mismatch effect in dogs' ERPs to objects primed with matching or mismatching object words, revealing that object words can evoke mental representations of the referred objects in dogs. The referential understanding of object words is thus not a distinctive feature of the human language faculty.

Boros et al., 2024, Current Biology 34, 1–5, April 22, 2024 © 2024 Elsevier Inc.
https://doi.org/10.1016/j.cub.2024.02.029

Report
Neural evidence for referential understanding of object words in dogs

Marianna Boros,1,7,8,* Lilla Magyari,1,2,3,7,* Boglárka Morvai,1 Raúl Hernández-Pérez,1,4 Shany Dror,5,6 and Attila Andics1,4
1Neuroethology of Communication Lab, Department of Ethology, Eötvös Loránd University, Pázmány Péter sétány 1/C, 1117 Budapest, Hungary
2Norwegian Centre for Reading Education and Research, Faculty of Arts and Education, University of Stavanger, Professor Olav Hanssens vei 10, 4021 Stavanger, Norway
3Department of Social Studies, Faculty of Social Sciences, University of Stavanger, Kjell Arholms gate 41, 4021 Stavanger, Norway
4ELTE NAP Canine Brain Research Group, Eötvös Loránd University, Pázmány Péter sétány 1/C, 1117 Budapest, Hungary
5Department of Ethology, Eötvös Loránd University, Pázmány Péter sétány 1/C, 1117 Budapest, Hungary
6Doctoral School of Biology, Institute of Biology, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/C, 1117 Budapest, Hungary
7These authors contributed equally
8Lead contact
*Correspondence: [email protected] (M.B.), [email protected] (L.M.)
https://doi.org/10.1016/j.cub.2024.02.029

SUMMARY

Using words to refer to objects in the environment is a core feature of the human language faculty. Referential
understanding assumes the formation of mental representations of these words.1,2 Such understanding of
object words has not yet been demonstrated as a general capacity in any non-human species,3 despite mul-
tiple behavior-based case reports.4–10 In human event-related potential (ERP) studies, object word knowl-
edge is typically tested using the semantic violation paradigm, where words are presented either with their
referent (match) or another object (mismatch).11,12 Such mismatch elicits an N400 effect, a well-established
neural correlate of semantic processing.12,13 Reports of preverbal infant N400 evoked by semantic viola-
tions14 assert the use of this paradigm to probe mental representations of object words in nonverbal popu-
lations. Here, measuring dogs’ (Canis familiaris) ERPs to objects primed with matching or mismatching ob-
ject words, we found a mismatch effect at a frontal electrode, with a latency (206–606 ms) comparable to the
human N400. A greater difference for words that dogs knew better, according to owner reports, further sup-
ported a semantic interpretation of this effect. Semantic expectations emerged irrespective of vocabulary
size, demonstrating the prevalence of referential understanding in dogs. These results provide the first neural
evidence for object word knowledge in a non-human animal.

RESULTS

Dogs are thought to be exceptional among animals in their social-communicative capacities toward humans,15 and as companion animals, they live in a language- and object-rich environment. Behavioral reports on whether dogs understand that words can refer to objects are indecisive: they suggest that a few dogs can learn a high number of object words after a few exposures6,16,17 but also that most dogs fail to do so even after extensive training.17,18 Nevertheless, performance measures that impose additional task demands (i.e., attentional or training requirements) may be insensitive to reveal certain cognitive abilities. Indeed, recent neuroscientific studies using passive paradigms in dogs19–22 revealed capacities, including aspects of lexical processing, that had not previously been evidenced in behavioral tests. The ability of dogs to map object words to referents, however, has not been investigated in any neuroscientific work to date.

To seek neural evidence that dogs have object word-elicited referential expectations, here we applied non-invasive awake electroencephalography (EEG), using a protocol developed in our lab.21,23 We adapted a modified version of the semantic violation paradigm. To evoke the use of mental representations, we presented words as primes and objects as probes, the reverse of what is used in most N400 studies, and similar to the settings previously used in preverbal infants.14 We presented the words in a natural, ostensive-communicative context that dogs pay attention to24,25 and that has been reported to facilitate referential expectations in both infants26 and dogs.27 We predicted that if dogs understand object words, then hearing them in a referential context will create semantic expectations, which in turn will lead to an event-related potential (ERP) difference for the same object presented in a match versus a mismatch condition.

We recruited dogs (n = 27) who were judged by their owners to have some experience with object words. Stimuli were individualized for each dog. Five owner-selected objects, all familiar to the dog, were used (Table S1). Corresponding object words were embedded in ostensive spoken sentences, pre-recorded from the owner (e.g., "Kun-kun, look, the ball"). In every trial, we first played the audio recording with the object word, and then the owner visually presented either the matching object or a mismatching one (Figures 1A and 1B).

Figure 1. Experimental design and procedure
(A) An owner (O1) presents an object to his dog during EEG measurement.
(B) Timeline of experimental trials in the match and mismatch conditions: the object image is displayed to the owner (1500 ms); the sound is played from the loudspeaker ("Kun-kun, look, the ball!"); the owner holds up the object behind the opaque window (1000 ms) and then shows it through the window (match or mismatch, 2000 ms); the next trial starts on button press.

The same words and the same objects were presented in the matching and the mismatching conditions, to control for object word-related (e.g., acoustic, familiarity-based, and cloze probability-based) and object-related (e.g., perceptual, preferential, and attentional) confounds. To ensure precise timing, objects were shown through an electric window with opacity controlled. To maintain the dog's attention, the owner's face was visible every time the window turned transparent. ERP analysis was time-locked to the onset of object presentation. Only dogs with at least five trials per condition (match, mean ± SD = 37.33 ± 28.79; mismatch, mean ± SD = 39.50 ± 32.33; Table S2) after artefact rejection were included in the statistical analyses (n = 18; Table S3; see STAR Methods for more details).

We first compared dogs' ERPs for match versus mismatch trials in an a priori time window (350–500 ms) defined by an N400 effect reported in adult humans in a similar task.14 We averaged data within this time window and applied a permutation test for repeated-measures (RM) ANOVA, revealing an interaction effect between electrode and condition (F(2) = 4.307, p < 0.001). Follow-up testing of the main effect of condition on the electrodes (Fz, FCz, and Cz) separately revealed a significant difference only on the Fz electrode (F(1) = 8.284, p = 0.002; Table S4), indicating more positive deflection for the mismatch than for the match condition (mean ± SD = 1.34 ± 5.98 μV versus mean ± SD = 4.67 ± 7.65 μV, respectively). Then, to determine the exact spatiotemporal distribution of this condition difference, we applied a cluster-based random permutation approach.28 The semantic mismatch effect was confirmed on the Fz electrode between 206 and 606 ms after visual stimulus onset (F(1) = 8.38, p = 0.001; match, mean ± SD = 4.53 ± 6.64 μV; mismatch, mean ± SD = 1.72 ± 5.46 μV; Figure 2A; Table S4). A subsequent permutational RM ANOVA revealed a condition by reported word knowledge interaction (F(1) = 1.972, p = 0.005), with a positive-going ERP deflection for the mismatch compared to the match condition for the well-known words (scored at least 4 by the owner on a 5-point Likert scale; Table S1) only (F(1) = 3.753, p = 0.017; match, mean ± SD = 4.97 ± 12.50 μV; mismatch, mean ± SD = 1.44 ± 8.61 μV; Figure 2B; Table S4). Condition-averaged data of most of the dogs (14 out of 18, a proportion comparable to reports from humans; cf. Parise and Csibra14) showed a positive mismatch-match difference (Wilcoxon signed-rank test, V = 22, p = 0.004; Figure 2C), and Spearman correlation analysis indicated no relationship between the magnitude of the mismatch effect and dogs' owner-reported vocabulary size for noun-like words (range: 5–230; Table S1; rs = 0.273, p = 0.289; BF = 0.507, n = 17), together suggesting that the effect was not driven by a few individuals with a large vocabulary. Finally, the lack of correlation between both age and head shape (cephalic index, CI) with the magnitude of the mismatch effect indicated that the result is independent of age and breed (rs = 0.421, p = 0.081, BF = 0.33, n = 18 and rs = 0.225, p = 0.368, BF = 0.33, n = 18, for age and CI, respectively).

DISCUSSION

Dogs understand the referential nature of object words
The ERP mismatch effect reported here constitutes the first demonstration of a neural correlate of semantic expectation violation in a non-human species. Several arguments support that this mismatch effect is indeed referential and semantic in nature. First, on a more general note, the same objects were presented an equal number of times in the matching and the mismatching conditions, so ERP differences measured at object presentation are unlikely to reflect object-related perceptual, preference, or attentional effects. Second, because of the temporal delay between the prime word and the probe object during stimulus presentation, the mental representation of the object needed to emerge in the absence of the object (cf. Waxman2). This favors the interpretation that the expectation violation effect here reflects referential understanding of object words rather than pure associations.

Figure 2. Results
(A) Grand-averaged ERPs for the match and mismatch conditions at the Fz electrode. Onset of object presentation was at 0 ms (vertical red dashed line). Cluster-based random permutation identified significant condition difference in the time window 206–606 ms (red box).
(B and C) Boxplots of individual ERPs per condition averaged in this time window. (B) Split for (owner-reported) little-known and well-known object words. (C) Connected dots across conditions represent individual data. Boxplots indicate the median, 25th and 75th percentiles (boxes), and minimum and maximum (whiskers).
**p < 0.01, *p < 0.05.
See also Tables S1, S2, and S4.

Third, the fact that better known object words (as assessed by the owner) evoked a greater mismatch effect is also compatible with the account that object word-elicited expectations were driven by semantic knowledge.13,29 This result is in line with evidence from human preverbal infant studies and thus further confirms that an ability to produce words is not a prerequisite to form mental representations of object words.14

No correlation between vocabulary size and mismatch effect size suggests that forming semantic expectations is a prevalent capacity among dogs rather than a special skill of individuals with a large vocabulary. While extensive vocabulary has been demonstrated reliably only in a handful of dogs behaviorally,6–8,16,17 the present results provide evidence that dogs that know only a few object words also understand the referential nature of object word usage. From an evolutionary perspective, dogs' here-identified capacity to form semantic expectations may have emerged during their unique selection for cooperation with humans15 and can thus be general within but specific for the species. It is also possible that the capacity is more general across mammals and may be enhanced by object-related experience and/or exposure to a semantics-dominated communication system. In either case, the demonstration of the use of mental representations in a communicative context on the population level in a species evolutionarily distant from humans supports mentalist accounts over associationist theories of animal communication.3 Furthermore, these findings raise the possibility that complex mental processes also underlie so-called functionally referential vocalizations, reported in behavioral studies in a variety of non-human species.30

Providing neural evidence that dogs understand the referential nature of object words, our results go essentially beyond previous findings on lexical processing in non-human animals. On the one hand, all previous non-human evidence on object word understanding was based on behavioral measures and was not aiming to reveal the underlying brain mechanisms of this capacity. On the other hand, existing neuroscientific work on dogs' lexical processing used words (such as praise and instruction words) whose meaning is not an external entity, and thus these studies could not directly test referential understanding.19–21 The only study using object words as stimuli probed sensitivity for word familiarity and not object word understanding.31 Additionally, our results confirm that neural measures can be applied successfully to reveal implicit knowledge22,32 in domains where performance measures with non-human animals often fail.18,33

Our results suggest that dogs have a referential understanding of certain object words but do not imply that this understanding is comparable to that of human adults, or even human infants. When learning the meaning of a word, infants grasp that words refer to categories, not individual objects.1,2 The present study tested one-to-one mapping of object names to individual objects, but not mapping to categories (for behavioral evidence on the latter, see Fugazza and Miklósi6 and Pilley and Reid8). Understanding the names of individual entities nonetheless assumes that dogs have to evoke the mental representation of the object upon hearing its name and thus link the two in a referential manner.

A dog ERP component for semantic mismatch
The timing of the mismatch effect reported here between 206 and 606 ms after visual stimulus presentation is reminiscent of the latency of the human N400 effect.12 Yet while semantic mismatch typically evokes negative-going deflections (the N400 effect),12,13 also in a paradigm similar to ours,14 here we observed a positive-going mismatch effect.


Note, however, that because the intracranial sources of EEG can be modeled as dipolar fields, polarities have no inherent meaningfulness, and the polarity of the difference between conditions can vary depending on factors such as location of electrodes, choice of the reference, and the geometry of underlying cortices. Indeed, more positive-going deflections for matching than for mismatching words have been observed in semantic priming paradigms in human infants in an early time window (N200–500 34–36), and the positive end of the N400 dipole has been identified in a semantic violation paradigm in humans with intracranial EEG.37 Furthermore, dipole orientation may vary across species due to gyrification differences. Note, however, that previous canine ERP studies with similarly positioned electrodes probing word processing reproduced the negative polarity in paradigms that in humans typically evoked N400 (word segmentation index22; word detection index21). It is also possible that the present findings reflect a component different from N400. An expectation-modulated, positive-going component, P3b, has also been reported across multiple species38 in a similar window as the effect observed here.39 In humans, however, semantic expectation modulations do not typically elicit P3b but N400.12,39 It remains to be determined whether the positive-going semantic mismatch effect discovered here is a functional analog of either of the N400, N200–500, or P3b effects, and whether it is specific for semantic processing in dogs. Future studies should also investigate potential cross-species correspondences in the underlying neurobiological generators of the effect described here.

This study identifies a dog ERP component that reflects semantic expectations, thus providing the first neural evidence for object word understanding in a non-human species. The discovery of this capacity in dogs informs theoretical work on language evolution and semantics by revealing that the appreciation of referentiality during lexical processing is not a distinctive feature of human language use.

STAR+METHODS

Detailed methods are provided in the online version of this paper and include the following:

- KEY RESOURCES TABLE
- RESOURCE AVAILABILITY
  - Lead Contact
  - Materials availability
  - Data and code availability
- EXPERIMENTAL MODEL AND SUBJECT DETAILS
- METHOD DETAILS
  - Stimuli and procedure
  - Data acquisition
- QUANTIFICATION AND STATISTICAL ANALYSIS

SUPPLEMENTAL INFORMATION

Supplemental information can be found online at https://doi.org/10.1016/j.cub.2024.02.029.
A video abstract is available at https://doi.org/10.1016/j.cub.2024.02.029#mmc3.

ACKNOWLEDGMENTS

This project was funded by the Hungarian Academy of Sciences, a grant to the MTA-ELTE "Lendület" Neuroethology of Communication Research Group (LP2017-13/2017), the Eötvös Loránd University, and the European Research Council under the European Union's Horizon 2020 research and innovation program (grant number 950159). M.B. was supported by the ÚNKP-22-4 New National Excellence Program of the Ministry for Culture and Innovation from the National Research, Development and Innovation Fund (ÚNKP-22-4-II-ELTE-963). A.A. was supported by the National Brain Programme 3.0 of the Hungarian Academy of Sciences (NAP2022-I-3/2022). We are grateful to Claudia Fugazza and Ádám Miklósi for the discussion during the conceptualization of the study and to Elodie Ferrando, Maria Woitow, Lisa Touazi, Dávid Török, and Andrea Sommese for their assistance in the EEG experiments. We thank all dog owners and dogs for their participation in the experiments.

AUTHOR CONTRIBUTIONS

Conceptualization, L.M., M.B., and A.A.; methodology, M.B., L.M., B.M., R.H.-P., and A.A.; investigation, M.B., B.M., and S.D.; visualization, M.B. and B.M.; writing – original draft, L.M., M.B., B.M., and A.A.; writing – review & editing, L.M., M.B., B.M., R.H.-P., S.D., and A.A.; funding acquisition, A.A.; project administration, M.B. and B.M.; supervision, L.M. and A.A.

DECLARATION OF INTERESTS

The authors declare no competing interests.

Received: November 24, 2023
Revised: January 17, 2024
Accepted: February 14, 2024
Published: March 22, 2024

REFERENCES

1. Bloom, P. (2001). Précis of how children learn the meanings of words. Behav. Brain Sci. 24, 1095–1103.
2. Waxman, S.R., and Gelman, S.A. (2009). Early word-learning entails reference, not merely associations. Trends Cogn. Sci. 13, 258–263.
3. Fitch, W.T. (2020). Animal cognition and the evolution of human language: why we cannot focus solely on communication. Philos. Trans. R. Soc. Lond. B Biol. Sci. 375, 20190046.
4. Pepperberg, I.M. (2006). Cognitive and communicative abilities of grey parrots. Appl. Anim. Behav. Sci. 100, 77–86.
5. Herman, L.M., Richards, D.G., and Wolz, J.P. (1984). Comprehension of sentences by bottlenosed dolphins. Cognition 16, 129–219.
6. Fugazza, C., and Miklósi, Á. (2020). Depths and limits of spontaneous categorization in a family dog. Sci. Rep. 10, 3082.
7. Kaminski, J., Call, J., and Fischer, J. (2004). Word learning in a domestic dog: evidence for "fast mapping". Science 304, 1682–1683.
8. Pilley, J.W., and Reid, A.K. (2011). Border collie comprehends object names as verbal referents. Behav. Processes 86, 184–195.
9. Savage-Rumbaugh, S. (1986). Ape Language: From Conditioned Response to Symbol (Columbia University Press).
10. Savage-Rumbaugh, E.S., Murphy, J., Sevcik, R.A., Brakke, K.E., Williams, S.L., Rumbaugh, D.M., and Bates, E. (1993). Language comprehension in ape and child. Monogr. Soc. Res. Child Dev. 58.
11. Friedrich, M., and Friederici, A.D. (2005). Phonotactic knowledge and lexical-semantic processing in one-year-olds: brain responses to words and nonsense words in picture contexts. J. Cogn. Neurosci. 17, 1785–1802.
12. Kutas, M., and Federmeier, K.D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu. Rev. Psychol. 62, 621–647.
13. Lau, E.F., Phillips, C., and Poeppel, D. (2008). A cortical network for semantics: (de)constructing the N400. Nat. Rev. Neurosci. 9, 920–933.
14. Parise, E., and Csibra, G. (2012). Electrophysiological evidence for the understanding of maternal speech by 9-month-old infants. Psychol. Sci. 23, 728–733.

15. Miklósi, A., and Topál, J. (2013). What does it take to become 'best friends'? Evolutionary changes in canine social competence. Trends Cogn. Sci. 17, 287–294.
16. Dror, S., Miklósi, Á., Sommese, A., Temesi, A., and Fugazza, C. (2021). Acquisition and long-term memory of object names in a sample of gifted word learner dogs. R. Soc. Open Sci. 8, 210976.
17. Fugazza, C., Andics, A., Magyari, L., Dror, S., Zempléni, A., and Miklósi, Á. (2021). Rapid learning of object names in dogs. Sci. Rep. 11, 2222.
18. Ramos, D., and Mills, D.S. (2019). Limitations in the learning of verbal content by dogs during the training of OBJECT and ACTION commands. J. Vet. Behav. 31, 92–99.
19. Andics, A., Gábor, A., Gácsi, M., Faragó, T., Szabó, D., and Miklósi, Á. (2016). Neural mechanisms for lexical processing in dogs. Science 353, 1030–1032.
20. Gábor, A., Gácsi, M., Szabó, D., Miklósi, Á., Kubinyi, E., and Andics, A. (2020). Multilevel fMRI adaptation for spoken word processing in the awake dog brain. Sci. Rep. 10, 11968.
21. Magyari, L., Huszár, Z., Turzó, A., and Andics, A. (2020). Event-related potentials reveal limited readiness to access phonetic details during word processing in dogs. R. Soc. Open Sci. 7, 200851.
22. Boros, M., Magyari, L., Török, D., Bozsik, A., Deme, A., and Andics, A. (2021). Neural processes underlying statistical learning for speech segmentation in dogs. Curr. Biol. 31, 5512–5521.e5.
23. Kis, A., Szakadát, S., Kovács, E., Gácsi, M., Simor, P., Gombos, F., Topál, J., Miklósi, A., and Bódizs, R. (2014). Development of a non-invasive polysomnography technique for dogs (Canis familiaris). Physiol. Behav. 130, 149–156.
24. Téglás, E., Gergely, A., Kupán, K., Miklósi, Á., and Topál, J. (2012). Dogs' gaze following is tuned to human communicative signals. Curr. Biol. 22, 209–212.
25. Kaminski, J., Schulz, L., and Tomasello, M. (2012). How dogs know when communication is intended for them. Dev. Sci. 15, 222–232.
26. Senju, A., Csibra, G., and Johnson, M.H. (2008). Understanding the referential nature of looking: infants' preference for object-directed gaze. Cognition 108, 303–319.
27. Tauzin, T., Csík, A., Kis, A., and Topál, J. (2015). What or where? The meaning of referential human pointing for dogs (Canis familiaris). J. Comp. Psychol. 129, 334–338.
28. Maris, E., and Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177–190.
29. Kutas, M., Van Petten, C., and Kluender, R. (2006). Psycholinguistics electrified II (1994–2005). In The Handbook of Psycholinguistics, M.J. Traxler, and M.A. Gernsbacher, eds. (Elsevier), pp. 659–724.
30. Townsend, S.W., and Manser, M.B. (2013). Functionally referential communication in mammals: the past, present and the future. Ethology 119, 1–11.
31. Prichard, A., Cook, P.F., Spivak, M., Chhibber, R., and Berns, G.S. (2018). Awake fMRI reveals brain regions for novel word detection in dogs. Front. Neurosci. 12, 737.
32. Karuza, E.A., Emberson, L.L., and Aslin, R.N. (2014). Combining fMRI and behavioral measures to examine the process of human learning. Neurobiol. Learn. Mem. 109, 193–206.
33. Fugazza, C., Dror, S., Sommese, A., Temesi, A., and Miklósi, Á. (2021). Word learning dogs (Canis familiaris) provide an animal model for studying exceptional performance. Sci. Rep. 11, 14070.
34. Friedrich, M., and Friederici, A.D. (2004). N400-like semantic incongruity effect in 19-month-olds: processing known words in picture contexts. J. Cogn. Neurosci. 16, 1465–1477.
35. Friedrich, M., and Friederici, A.D. (2008). Neurophysiological correlates of online word learning in 14-month-old infants. Neuroreport 19, 1757–1761.
36. Torkildsen, J.v.K., Syversen, G., Simonsen, H.G., Moen, I., and Lindgren, M. (2007). Electrophysiological correlates of auditory semantic priming in 24-month-olds. J. Neurolinguistics 20, 332–351.
37. Nobre, A.C., and McCarthy, G. (1994). Language-related ERPs: scalp distributions and modulation by word type and semantic priming. J. Cogn. Neurosci. 6, 233–255.
38. Paller, K.A. (1994). The neural substrates of cognitive event-related potentials: a review of animal models of P3. In Cognitive Electrophysiology, H.-J. Heinze, T.F. Münte, and G.R. Mangun, eds. (Birkhäuser Boston), pp. 300–333.
39. Leckey, M., and Federmeier, K.D. (2020). The P3b and P600(s): positive contributions to language comprehension. Psychophysiology 57, e13351.
40. Boersma, P., and Weenink, D. (2023). Praat: doing phonetics by computer.
41. Kleiner, M., Brainard, D.H., Pelli, D., Ingling, A., Murray, R., and Broussard, C. (2007). What's new in Psychtoolbox-3.
42. Oostenveld, R., Fries, P., Maris, E., and Schoffelen, J.-M. (2011). FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 156869.
43. R Core Team (2022). R: A language and environment for statistical computing (R Foundation for Statistical Computing).
44. Voeten, C.C. (2023). buildmer: stepwise elimination and term reordering for mixed-effects regression.
45. Voeten, C.C. (2022). permutes: permutation tests for time series data.
46. Morey, R.D., and Rouder, J.N. (2022). BayesFactor: computation of Bayes factors for common designs.
47. Reeve, C., and Jacques, S. (2022). Responses to spoken words by domestic dogs: a new instrument for use with dog owners. Appl. Anim. Behav. Sci. 246, 105513.
48. Pongrácz, P., Miklósi, Á., and Csányi, V. (2000). Owner's beliefs on the ability of their pet dogs to understand human verbal communication: a case of social understanding. Cah. Psychol. Cogn. 20, 87–107.
49. Patrucco-Nanchen, T., Friend, M., Poulin-Dubois, D., and Zesiger, P. (2019). Do early lexical skills predict language outcome at 3 years? A longitudinal study of French-speaking children. Infant Behav. Dev. 57, 101379.
50. Styles, S., and Plunkett, K. (2009). What is 'word understanding' for the parent of a one-year-old? Matching the difficulty of a lexical comprehension task to parental CDI report. J. Child Lang. 36, 895–908.
51. Kubinyi, E., and Wallis, L.J. (2019). Dominance in dogs as rated by owners corresponds to ethologically valid markers of dominance. PeerJ 7, e6838.
52. Czeibert, K., Baksa, G., Grimm, A., Nagy, S.A., Kubinyi, E., and Petneházy, Ö. (2019). MRI, CT and high resolution macro-anatomical images with cryosectioning of a Beagle brain: creating the base of a multimodal imaging atlas. PLoS One 14, e0213458.
53. Milne, A.E., Mueller, J.L., Männel, C., Attaheri, A., Friederici, A.D., and Petkov, C.I. (2016). Evolutionary origins of non-adjacent sequence processing in primate brain potentials. Sci. Rep. 6, 36259.
54. Bognár, Z., Iotchev, I.B., and Kubinyi, E. (2018). Sex, skull length, breed, and age predict how dogs look at faces of humans and conspecifics. Anim. Cogn. 21, 447–456.
55. Bognár, Z., Szabó, D., Dees, A., and Kubinyi, E. (2021). Shorter headed dogs, visually cooperative breeds, younger and playful dogs form eye contact faster with an unfamiliar human. Sci. Rep. 11, 9293.


STAR+METHODS

KEY RESOURCES TABLE

REAGENT or RESOURCE | SOURCE | IDENTIFIER

Deposited data
EEG data and scripts | This paper | https://doi.org/10.6084/m9.figshare.24499297

Experimental models: Organisms/strains
Domestic dog | Family dogs | https://ethology.elte.hu/Family_Dog_Project

Software and algorithms
Praat | Boersma and Weenink40 | https://www.fon.hum.uva.nl/praat/
MATLAB 2014b | The MathWorks, USA | https://uk.mathworks.com/
MATLAB 2018a | The MathWorks, USA | https://uk.mathworks.com/
Psychtoolbox 3 | Kleiner et al.41 | http://psychtoolbox.org/
Curry 8 | Compumedics Neuroscan, Australia | https://compumedicsneuroscan.com/curry-8-released/
FieldTrip | Oostenveld et al.42 | https://www.fieldtriptoolbox.org/
R version 4.2.2 | R Development Core Team43 | https://www.r-project.org/
buildmer (R package) | Voeten44 | https://cran.r-project.org/web/packages/buildmer
permutes (R package) | Voeten45 | https://cran.r-project.org/web/packages/permutes
BayesFactor (R package) | Morey and Rouder46 | https://cran.r-project.org/web/packages/BayesFactor

Other
Smart PDLC Film Electric Starter | Banggood.com, China | https://hu.banggood.com/155x100mm-Smart-PDLC-Film-Starter-Electric-Switchable-Tint-Window-Glass-Film–p-1097538.html?imageAb=2&cur_warehouse=CN&akmClientCountry=HU&a=1704758368.3314&akmClientCountry=HU
H4n Pro Zoom recorder | Sound Service GmbH, Germany | https://zoomcorp.com/en/de/handheld-recorders/handheld-recorders/h4n-pro/
Brio Ultra HD Pro Business Webcam | Logitech International S.A., Switzerland | https://www.logitech.com/en-gb/products/webcams/brio-4k-hdr-webcam.960-001106.html
Arduino | Arduino s.r.l., Italy | https://www.arduino.cc/pro/hardware
Logitech X-530 Loudspeaker | Logitech International S.A., Switzerland | discontinued
Neuvo 64-channel amplifier | Compumedics Neuroscan, Australia | https://compumedicsneuroscan.com/product/neuvo-64-channel-eeg-erp-ep-amplifier/
EC2 Grass Electrode Cream | Grass Technologies, USA | https://neuro.natus.com/products-services/neurodiagnostic-supplies/emg-supplies

RESOURCE AVAILABILITY

Lead Contact
Further information and requests for resources should be directed to and will be fulfilled by the Lead Contact, Marianna Boros
([email protected]). There is no restriction for distribution of materials.

Materials availability
This study did not generate new unique reagents.

Data and code availability


All original data and code have been deposited at figshare and are publicly available as of the date of publication. DOIs are listed in the
key resources table. Any additional information required to reanalyze the data reported in this paper is available from the lead contact
upon request.


EXPERIMENTAL MODEL AND SUBJECT DETAILS

Participants were healthy companion dogs (n = 27), recruited via departmental social platforms. To ensure that all subjects had some
experience with object words, we included only dogs that were judged by the owner to know at least three object words, which is an
average owner-reported knowledge of noun-like words.47 From the 27 dogs who participated in the EEG experiment, we excluded 2
dogs due to technical issues and 7 dogs because of the high number of movement-artefacts or noisy electrodes during the automatic
data cleaning procedure (see section EEG artefact-rejection and analysis). The EEG data of 18 dogs were analyzed (Table S3; 10
males; mean age ± SD in years: 6.36 ± 2.87, range: 1.5–10 years; 5 Border Collies, 1 Akita Inu, 1 White Swiss Shepherd, 1 Miniature
Pinscher, 1 Toy Poodle, 1 Pumi, 1 Hungarian Vizsla, 1 Standard Schnauzer, 1 Labrador Retriever, 5 mixed breeds).
The study used non-invasive scalp EEG that did not cause any pain to the dogs. The University Institutional Animal Care and Use
Committee (UIACUC, Eötvös Loránd University, Hungary) issued its ethical approval of the project under the certificate no. ELTE-
AWC-010/2021. The owners of the dogs volunteered to take part in the project in return for a modest monetary compensation
and gave written informed consent.

METHOD DETAILS

Stimuli and procedure


Stimuli
Stimuli were individualized for each dog. To ensure sufficient variance in the stimuli, we asked owners to select five familiar, visually
distinct objects from which the names of at least three were reported to be used by the owners when talking to their dogs (Table S1).
The objects additionally needed to fit in a 155 mm × 100 mm window (Figures 1A and 1B). We used the actual objects (and not pic-
tures of them) during the experiment.
Object words were presented to dogs embedded in sentences recorded in full (using H4n Pro Zoom recorder, Sound Service
GmbH, Germany) from their owners who were asked to use dog-directed speech with the following sentence structure: name of
the dog + verbal ostensive cue + object word. Owners were asked to select three verbal ostensive cues they use in their everyday
communication with their dog (e.g., ‘‘look!’’, ‘‘here’s the’’, ‘‘where’s the’’). Each object word was recorded with all three verbal osten-
sive cues, resulting in 15 unique recordings per dog (3 verbal ostensive cues x 5 object words). Recordings were digitized at 44 kHz
sampling rate and equalized for 68 dB RMS using Praat version 6.1.53.40
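As an illustration of the intensity equalization step, a minimal sketch follows; it assumes the dB scale is expressed relative to the standard auditory reference of 2e-5 (as in Praat's intensity scaling), and the function and signal are illustrative, not the authors' script.

```r
# Minimal sketch of scaling a waveform to a target RMS intensity (here 68 dB),
# assuming dB is computed relative to the auditory reference pressure 2e-5,
# as Praat does. Names and the synthetic signal are illustrative only.
scale_to_db_rms <- function(x, target_db = 68, ref = 2e-5) {
  current_db <- 20 * log10(sqrt(mean(x^2)) / ref)   # current level in dB re `ref`
  x * 10^((target_db - current_db) / 20)            # gain needed to reach the target
}

x <- sin(2 * pi * 440 * seq(0, 1, by = 1 / 44000))  # 1 s synthetic tone at 44 kHz
y <- scale_to_db_rms(x)
20 * log10(sqrt(mean(y^2)) / 2e-5)                  # ~68 dB
```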
We estimated dogs’ word knowledge by asking owners to report on a five-point Likert scale how often their dog showed a behavior
consistent with understanding what each word used in the experiment refers to (reported word knowledge; 1-never, 5-always). Addi-
tionally, we also estimated vocabulary size by asking owners to list all noun-like words they believed their dog understands (owner-
reported vocabulary size). Owner-reported estimates are considered reliable measures of dogs' vocabulary size,47,48 stemming from
human developmental studies suggesting that parents are competent at evaluating their child’s vocabulary,49,50 and evidence from
combined ethological-questionnaire works showing that owners provide reliable information about their dog’s behavior.51
Experimental setup and apparatus
EEG recording took place at the laboratory of the Department of Ethology of Eötvös Loránd University in Budapest. The experimental
room was divided into two zones (inner and outer zone) separated by three occluders forming a U shape (Figures 1A and 1B). The
middle occluder was equipped with an electric window (Smart PDLC Film Electric Starter), allowing visual communication between
the two zones. The opacity of the electric window was controlled throughout the experiment.
The outer zone contained all electrical equipment and was prepared to host the experimenter (E1) and the owner, whose voice was
used to record the auditory stimuli (O1) and who was showing the object to the dog through the electric window. The inner zone
contained a mattress for the dog and a seat for an accompanying person, a second owner (O2) or a second experimenter (E2).
We specifically asked owners to come in pairs for the tests. When this was possible, the O2, whose voice was not used during audio
recordings, sat next to the dog and encouraged the dog to remain on the mattress throughout the entire experiment. If the dog came
with one owner, E2 sat next to the dog.
O1 and the dog faced each other through the electric window. The objects used for testing were placed within reach of O1, out of
the dog’s sight. During EEG recording, owners (and E2 if present) were wearing sound-proof headphones.
E1 sat close to O1, out of the sight of the dog, and controlled the trial display through a PC. Throughout the whole experiment, E1
was watching the dog in real time through a Logitech HD Pro webcam (Logitech International S.A., Lausanne, Switzerland), which
was positioned in a way that ensured that E1 saw the dog’s posture and its eyes when facing the electric window.
Psychtoolbox 341 in MATLAB 2014b (The MathWorks, Massachusetts, USA) was used to control the opacity of the electric win-
dow (communicating through Arduino (Arduino s.r.l., Italy)), the display of the stimuli and the transfer of the triggers to the EEG ampli-
fier. Auditory stimuli were played from two loudspeakers (Logitech International S.A., Lausanne, Switzerland) placed on the ground
next to O1 on the right and left sides, in the outer zone. Images of objects were displayed to O1 through a monitor placed below the
electric window in the outer zone.
Procedure
For a general habituation protocol of companion dogs to the laboratory setting and the electrode application, see Magyari et al.21
Prior to testing, each dog-owner pair participated in one training session on a separate day. During the training session, dogs
were familiarized with the room, the mattress, the electrodes and the procedure, and were trained to get used to watching O1 through

the electric window only. During the training session, O1 was trained on the experimental procedure with a stimulus set from
another dog.
At the testing sessions, after initial free exploration of the room, dogs were instructed to lie down on the mattress facing the oc-
cluder with the electric window. Next, E1 (and E2 if present) began the electrode application, during which the owner(s) was (were)
comforting the dog and keeping it in position. Once all electrodes were placed on the dog’s head, O2/E2 sat down next to the dog,
and E1 together with O1, left the inner zone. Testing started when all participants (dog, owners and experimenters) took their position,
and the dog was lying with head down in a stable and relaxed position.
At the beginning of each trial, the electric window was opaque. First, the image of the object was displayed on the monitor of O1 for
1500 ms to allow O1 to localize and grab the object. Then the electric window became transparent, and O1 gazed through the window
at the dog to grab its attention, while the auditory stimulus was played back from the loudspeakers. Next, the electric window became
opaque for 1000 ms, during which O1 raised the object in front of his/her face. Then the electric window turned transparent again, O1
held the object in front of his/her face and looked down at the monitor where a countdown of 2 s was displayed. After 2000 ms, the
electric window turned opaque again, and O1 put down the object. E1, who observed the dog through the webcam, decided on the
initiation of the next trial, if the dog was attentive, in the right position, and facing the electric window with open eyes.
During match trials, objects presented to the dogs by their owners were primed with the corresponding object word. During
mismatch trials, objects were primed with auditory stimuli containing a non-matching object word. The same words and the same
objects were presented an equal number of times in the matching and the mismatching conditions. The presentation order of the
trials was pseudo-randomized with the following constraints: no more than two identical words followed each other, and no more
than two trials of the same kind (matching or mismatching) followed each other.
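A minimal sketch of one way to generate a trial order satisfying these constraints is given below; rejection sampling over random permutations is assumed here for illustration and is not necessarily the authors' procedure.

```r
# Minimal sketch of a trial order satisfying the constraints above:
# no more than two identical words in a row and no more than two trials of the
# same condition (match/mismatch) in a row. Names and counts are illustrative.
max_run <- function(v) max(rle(as.character(v))$lengths)

make_trial_order <- function(words = rep(paste0("word", 1:5), each = 4),
                             conds = rep(c("match", "mismatch"), times = 10)) {
  repeat {
    trials <- data.frame(word = sample(words), cond = sample(conds))
    if (max_run(trials$word) <= 2 && max_run(trials$cond) <= 2) return(trials)
  }
}

set.seed(1)
head(make_trial_order())
```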
The dogs completed as many trials during a testing session as their owners felt appropriate, i.e., felt the dog was not too tired and
still attentive (see Table S2 for the total number of trials/dogs). When a dog stood/sat up during the experiment, the person sitting next
to the dog was asked to encourage the dog to get back to a lying position. If the dog wanted to leave the mattress, the session was
aborted. Owners were invited to participate in many testing sessions. The final number of sessions depended on the availability of the
owners and the dog’s cooperativity (min. 1, max. 6 sessions).

Data acquisition
Electrophysiological recording
Electrode-placement followed a canine EEG setup developed and validated in our lab, which has been used successfully to study word
processing in dogs.21,22 Surface-attached scalp electrodes (gold-coated Ag|AgCl) were fixed with EC2 Grass Electrode Cream (Grass
Technologies, USA) on the dogs’ heads and placed over the anteroposterior midline of the skull (Fz, Cz, FCz, Pz), and on the zygomatic
arch (os zygomaticum), next to the left (F7) and right eyes (F8, electrooculography, EOG). The ground electrode was placed on the left
musculus temporalis. Fz, Cz, FCz and EOG derivations were referred to Pz.21,23 The reference electrode (Pz), Fz, Cz and FCz were placed
on the dog’s head at the anteroposterior midline above the bone to decrease the chance for artefacts from muscle movements. Pz was
placed posteriorly to the active electrodes on a head-surface above the back part of the external sagittal crest (crista sagittalis externa) at
the occipital bulge of dogs where the skull is usually the thickest and under which either no brain or only the cerebellum is located depend-
ing on the shape of the skull52 (see more explanation of the electrode placement in Magyari et al.21). Impedances for the EEG electrodes
were kept at least below 20 kΩ, but mostly below 10 kΩ. The EEG was amplified on 8 channels by a 64-channel Neuvo amplifier (Compu-
medics Neuroscan, Australia) using DC recording and a 200 Hz anti-aliasing filter, and was digitized at a 500 Hz sampling rate.
EEG artefact rejection
EEG artefact rejection and analysis were conducted using the FieldTrip software package42 in MATLAB R2018a (The Mathworks,
Massachusetts, USA).
EEG data was digitally filtered offline with a 0.1 Hz high-pass and 35-Hz low-pass filter and segmented between 200 ms pre-stimulus
and 1000 ms after stimulus onset (visual object presentation). Each segment was detrended by removing the first-order polynomial and
baselined between -200 ms and 0 ms. A two-step automatic artefact rejection procedure was applied to remove eye-, ear-, head- or any
other body movement-related and other artefacts (see Magyari et al.21). In the first step, trials with deviant voltage values were rejected
by removing each segment where amplitudes exceeded ±110 μV at any time point in the segment. In the second step, abrupt changes
in amplitudes related to saccadic movements were identified as minimum and maximum values exceeding 120 μV at any of the elec-
trodes (Fz, Cz, FCz, F7 and F8) in 100 ms long sliding windows, and segments containing such abrupt changes were subsequently re-
jected from further analysis. Additionally, videos recorded during experimental sessions were screened to identify trials where the dogs
were inattentive during object presentation. This screening did not identify additional trials for exclusion that had not already been
excluded during automatic artefact rejection. Segments were averaged separately for each condition, session, participant and elec-
trode (Fz, Cz, FCz). After the automatic artefact rejection, on average 30.15% of trials remained in the matching condition (min: 0%,
max: 76.66%) and 31.79% of the trials in the mismatching condition (min: 0%, max: 80%) (Table S2).
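A minimal sketch of the two-step thresholding logic on a single epoch follows, assuming each epoch is a channels-by-samples matrix recorded at 500 Hz; the data layout and helper name are assumptions, and the second criterion is interpreted as the min-to-max range within each 100 ms window. The actual rejection was run in FieldTrip/MATLAB.

```r
# Minimal sketch of the two-step automatic artefact rejection described above.
# A 100 ms window at 500 Hz corresponds to 50 samples; thresholds follow the text.
keep_epoch <- function(epoch, fs = 500, abs_thr = 110, range_thr = 120, win_ms = 100) {
  if (any(abs(epoch) > abs_thr)) return(FALSE)      # step 1: absolute amplitude threshold
  win <- round(win_ms / 1000 * fs)
  for (start in seq_len(ncol(epoch) - win + 1)) {   # step 2: sliding-window range threshold
    seg <- epoch[, start:(start + win - 1), drop = FALSE]
    if (any(apply(seg, 1, function(ch) diff(range(ch))) > range_thr)) return(FALSE)
  }
  TRUE
}

# Example: keep only clean epochs from a list of simulated 1200 ms trials (5 channels)
set.seed(2)
epochs <- replicate(5, matrix(rnorm(5 * 600, sd = 20), nrow = 5), simplify = FALSE)
length(Filter(keep_epoch, epochs))
```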

QUANTIFICATION AND STATISTICAL ANALYSIS

We included in the analysis all sessions where at least 5 trials per condition remained after the automatic artefact rejection (average
no of match trials per dog: 13.71, min: 5, max: 41; average no of mismatch trials: 13.75, min: 5, max: 43). This resulted in 18 dogs
being included in the analysis. All statistical analyses were carried out using R version 4.2.2.43


We first defined an a priori time window between 350 and 500 ms, based on an N400 effect reported in humans using a similar
task.14 We averaged data across trials and conditions for each dog over the time window of interest separately for the Fz, Cz, and FCz
electrodes. Because regardless of the transformation applied, the raw EEG data and the residuals of linear models did not conform to
normal distribution, we proceeded by applying a permutation test for repeated measures (RM) ANOVA (with 1000 permutations) con-
taining the fixed factors of condition (match and mismatch) and electrode (Fz, Cz, FCz) and their interactions, and subject with nested
session as random terms, using the ‘buildmer’ package.44 The permutation test yielded significant interaction between the fixed ef-
fects, therefore we ran separate permutation tests for data from each electrode creating ANOVAs containing the fixed factor of con-
dition (match and mismatch), and subject with nested session as random terms. We report p values and F statistics from all tests in
Table S4.
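The within-subject permutation logic behind the follow-up test of condition at a single electrode can be sketched as sign-flipping each dog's condition difference, as shown below; this is an illustration only, whereas the reported condition-by-electrode models with subject and nested session as random terms were fit with the 'buildmer' and 'permutes' packages.

```r
# Minimal sketch of a within-subject permutation test of the condition effect at one
# electrode: each dog contributes one window-averaged amplitude per condition, and the
# null distribution is built by randomly flipping the sign of each dog's
# mismatch-minus-match difference. Simulated data and names are illustrative.
perm_paired_test <- function(match, mismatch, n_perm = 1000) {
  diffs  <- mismatch - match
  t_obs  <- mean(diffs)
  t_null <- replicate(n_perm, mean(sample(c(-1, 1), length(diffs), replace = TRUE) * diffs))
  mean(abs(t_null) >= abs(t_obs))                   # two-sided permutation p value
}

set.seed(3)
match_amp    <- rnorm(18, mean = 0, sd = 6)         # window-averaged amplitudes (µV), 18 dogs
mismatch_amp <- rnorm(18, mean = 3, sd = 6)
perm_paired_test(match_amp, mismatch_amp)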
To determine the exact spatio-temporal distribution of the conditional differences, we applied a cluster-based random permutation
procedure,28 utilizing the ‘permutes’45 and ‘buildmer’44 packages. As a first step, for each time point (1000 time points resulting from
each ms after stimulus onset) a separate Linear Mixed-Effects Model (LMM) was built using across-trial dog-averages from the Fz,
FCz and Cz electrodes, including condition (match and mismatch) and electrode (Fz, Cz, FCz) as fixed factors and their interactions,
and the subjects’ ID with nested session as random term. For each model, a permutation test was run to determine which factor had a
significant effect at each time point using permuted likelihood ratio tests (LRTs). Next, we used the LRTs to compute a cluster-mass
statistic and selected connected datasets on the basis of temporal adjacency (min. 50 ms to exclude spurious effects, cf. Milne
et al.53) with the largest cluster-level value. The permutation test yielded significant interaction between the fixed effects, therefore
we repeated the cluster-based random permutation procedure separately for data from each electrode. We report p values from all
tests in Table S4.
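The cluster-mass step of this procedure can be sketched as follows, given one test statistic per millisecond and a per-timepoint significance mask; the per-timepoint mixed models and permuted likelihood ratio tests themselves ('permutes'/'buildmer') are not reproduced, and the simulated data are illustrative.

```r
# Minimal sketch of the cluster-mass computation: temporally adjacent significant
# time points (at least 50 ms long) are grouped into clusters, each cluster is
# scored by the sum of its statistics, and the largest cluster is returned.
cluster_mass <- function(stat, signif, min_len = 50) {
  runs   <- rle(signif)
  ends   <- cumsum(runs$lengths)
  starts <- ends - runs$lengths + 1
  keep   <- which(runs$values & runs$lengths >= min_len)
  if (length(keep) == 0) return(NULL)
  clusters <- data.frame(start = starts[keep], end = ends[keep],
                         mass  = sapply(keep, function(i) sum(stat[starts[i]:ends[i]])))
  clusters[which.max(clusters$mass), ]              # start/end in ms after stimulus onset
}

# Example: simulated per-millisecond F values with an effect between 201 and 600 ms
set.seed(4)
f_vals <- c(rf(200, 1, 17), rf(400, 1, 17) + 6, rf(400, 1, 17))
cluster_mass(f_vals, signif = f_vals > qf(0.95, 1, 17))
```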
Additionally, we performed a Wilcoxon signed-rank test to evaluate whether the ERP differences between the match and mismatch
condition for the 206-606 ms time-window showed the same direction in most of the dogs.
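In base R, this test amounts to a single call on the per-dog window-averaged amplitudes (simulated values shown; variable names are illustrative):

```r
# Paired Wilcoxon signed-rank test on per-dog window averages (simulated data)
set.seed(5)
match_amp    <- rnorm(18, 0, 6)
mismatch_amp <- rnorm(18, 3, 6)
wilcox.test(mismatch_amp, match_amp, paired = TRUE)
```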
In the final sample, dogs’ average owner-reported word knowledge ranged from 2 to 5 points (mean ± SD = 3.9 ± 0.95, see
Table S1). To test whether this reported word knowledge of dogs affected the observed mismatch effect, we performed a follow-
up analysis. For each trial, we included the reported knowledge score of the word used. Only words with an occurrence in minimum
one match and one mismatch trial within session (after artefact rejection) were included in the analysis. To obtain a balanced number
of observations, we collapsed the data into two new categories: little-known words (scores 1 to 3) and well-known words (scores 4 to
5). We then carried out a permutational RM ANOVA (with 1000 permutations) on the data averaged over the time window defined by
the cluster-based random permutation analysis (206-606 ms) including two fixed factors: condition (match and mismatch) and re-
ported word knowledge (little-known words, well-known words), and their interactions; and subject with nested session as random
terms (‘buildmer’ package44). As the permutation test yielded significant interaction between the fixed effects, we ran separate per-
mutation tests for data from each reported word knowledge category, creating an RM ANOVA containing the fixed factor of condition
(match and mismatch), and subject with nested session as random terms. We report p values and F statistics from all tests in
Table S4.
Dog’s owner-reported vocabulary size ranged from 5 to 230 noun-like words (mean ± SD = 32.23 ± 55.85, see Table S1). Out of the
18 dogs whose data was analyzed, the owner of 17 provided this information. To assess if there is a relationship between dogs’
owner-reported vocabulary size and the magnitude of the mismatch effect, we carried out a Spearman correlation analysis between
each dog’s owner reported vocabulary size and the difference between their average ERP responses in the mismatch and match
conditions in the time window defined by the cluster-based random permutation analysis (206-606 ms). The strength of the evidence
was assessed using Bayesian inference by applying the ‘BayesFactor’ package.46
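A minimal sketch of this correlation analysis follows; the Spearman test uses base R, and computing the Bayes factor with BayesFactor::correlationBF on rank-transformed values is an assumption about how the 'BayesFactor' package was applied, not the authors' exact call.

```r
# Minimal sketch: Spearman correlation between owner-reported vocabulary size and the
# per-dog mismatch-minus-match ERP difference (206-606 ms window), with a Bayes factor
# computed on rank-transformed values (an assumed, illustrative approach).
library(BayesFactor)

set.seed(6)
vocab_size <- sample(5:230, 17)        # owner-reported number of noun-like words
erp_diff   <- rnorm(17, 1, 3)          # mismatch - match amplitude difference (µV)

cor.test(vocab_size, erp_diff, method = "spearman")
correlationBF(rank(vocab_size), rank(erp_diff))
```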
To test whether the obtained result depends on age or head shape (cephalic index), we correlated these factors with individual
response differences between conditions using Spearman correlation. Cephalic index was calculated from head length and width
measured externally (cf. Bognár et al.54,55) and conditional differences were calculated between the average ERP responses in
the mismatch and match conditions in the time window defined by the cluster-based random permutation analysis (206-606 ms).
The strength of the evidence was assessed using Bayesian inference by applying the ‘BayesFactor’ package.46
