Biol Lett. 2016 Jan; 12(1): 20150883.
Dogs recognize dog and human emotions
Natalia Albuquerque
1School of Life Sciences, University of Lincoln, Lincoln LN6 7DL, UK
3Department of Experimental Psychology, Institute of Psychology, University of São Paulo, São Paulo 05508-030, Brazil
Kun Guo
2School of Psychology, University of Lincoln, Lincoln LN6 7DL, UK
Anna Wilkinson
1School of Life Sciences, University of Lincoln, Lincoln LN6 7DL, UK
Carine Savalli
4Department of Public Policies and Public Health, Federal University of São Paulo, Santos 11015-020, Brazil
Emma Otta
3Department of Experimental Psychology, Institute of Psychology, University of São Paulo, São Paulo 05508-030, Brazil
Daniel Mills
1School of Life Sciences, University of Lincoln, Lincoln LN6 7DL, UK
Received 2015 Oct 20; Accepted 2015 Dec 22.
Abstract
The perception of emotional expressions allows animals to evaluate the social intentions and motivations of each other. This usually takes place within species; however, in the case of domestic dogs, it might be advantageous to recognize the emotions of humans as well as other dogs. In this sense, the combination of visual and auditory cues to categorize others' emotions facilitates the information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive) paired with a single vocalization from the same individual with either a positive or negative valence, or Brownian noise. Dogs looked significantly longer at the face whose expression was congruent to the valence of the vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information, and discriminate between positive and negative emotions from both humans and dogs.
Keywords: Dog, cross-modal sensory integration, emotion recognition, social cognition
1. Introduction
The recognition of emotional expressions allows animals to evaluate the social intentions and motivations of others [1]. This provides crucial information about how to behave in different situations involving the establishment and maintenance of long-term relationships [2]. Therefore, reading the emotions of others has enormous adaptive value. The ability to recognize and respond appropriately to these cues has biological fitness benefits for both the signaller and the receiver [1].
During social interactions, individuals use a range of sensory modalities, such as visual and auditory cues, to express emotion, with characteristic changes in both face and voice, which together produce a more robust percept [3]. Although facial expressions are recognized as a primary channel for the transmission of affective information in a range of species [2], the perception of emotion through cross-modal sensory integration enables faster, more accurate and more reliable recognition [4]. Cross-modal integration of emotional cues has been observed in some primate species with conspecific stimuli, such as matching a specific facial expression with the corresponding vocalization or call [5–7]. However, there is currently no evidence of emotional recognition of heterospecifics in non-human animals. Understanding heterospecific emotions is of particular importance for animals such as domestic dogs, who live most of their lives in mixed species groups and have developed mechanisms to interact with humans (e.g. [8]). Some work has shown cross-modal capacity in dogs relating to the perception of specific activities (e.g. food-guarding) [9] or individual features (e.g. body size) [10], yet it remains unclear whether this ability extends to the processing of emotional cues, which inform individuals about the internal state of others.
Dogs can discriminate human facial expressions and emotional sounds (e.g. [11–18]); however, there is still no evidence of multimodal emotional integration, and these results relating to discrimination could be explained through simple associative processes. They do not demonstrate emotional recognition, which requires the demonstration of categorization rather than differentiation. The integration of congruent signals across sensory inputs requires internal categorical representation [19–22] and so provides a means to demonstrate the representation of emotion.
In this study, we used a cross-modal preferential looking paradigm without a familiarization phase to test the hypothesis that dogs can extract and integrate emotional information from visual (facial) and auditory (vocal) inputs. If dogs can cross-modally recognize emotions, they should look longer at facial expressions matching the emotional valence of simultaneously presented vocalizations, as demonstrated by other mammals (e.g. [5–7,21,22]). Owing to previous findings of valence [5], side [22], sex [11,22] and species [12,23] biases in perception studies, we also investigated whether these four main factors would influence the dogs' response.
2. Material and methods
Seventeen healthy socialized family adult dogs of various breeds were presented simultaneously with two sources of emotional information. Pairs of grey-scale gamma-corrected human or dog face images from the same individual but depicting different expressions (happy/playful versus angry/aggressive) were projected onto two screens at the same time as a sound was played (figure 1a). The sound was a single vocalization (dog barks or a human voice in an unfamiliar language) of either positive or negative valence from the same individual, or a neutral sound (Brownian noise). Stimuli (figure 1b) featured one female and one male of both species. Unfamiliar individuals and an unfamiliar language (Brazilian Portuguese) were used to rule out the potential influence of previous experience with model identity and human language.
Experiments took place in a quiet, dimly lit test room and each dog received two 10-trial sessions, separated by two weeks. Dogs stood in front of two screens and a video camera recorded their spontaneous looking behaviour. A trial consisted of the presentation of a combination of the auditory and visual stimuli and lasted 5 s (see electronic supplementary material for details). A trial was considered valid for analyses when the dog looked at the images for at least 2.5 s. The 20 trials presented different stimulus combinations: 4 face-pairs (2 human and 2 dog models) × 2 vocalizations (positive and negative valence) × 2 face positions (left and right), in addition to 4 control trials (4 face-pairs with the neutral auditory stimulus). Therefore, each subject saw each possible combination once.
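As a sanity check, the per-subject trial structure described above can be enumerated programmatically (an illustrative sketch; the labels are ours, not from the paper):

```python
from itertools import product

# 4 face-pairs: one female and one male model of each species
face_pairs = ["human_female", "human_male", "dog_female", "dog_male"]
valences = ["positive", "negative"]   # valence of the vocalization
positions = ["left", "right"]         # side on which the congruent face appears

# 4 x 2 x 2 = 16 test trials
test_trials = list(product(face_pairs, valences, positions))

# plus 4 control trials: each face-pair paired once with Brownian noise
control_trials = [(pair, "neutral") for pair in face_pairs]

print(len(test_trials) + len(control_trials))  # 20 trials per subject
```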
We calculated a congruence index = (C − I)/T, where C and I represent the amount of time the dog looked at the congruent face (facial expression matching the emotional vocalization) and the incongruent face, respectively, and T represents total looking time (looking left + looking right + looking at the middle) for the given trial, to measure the dog's sensitivity to audio-visual emotional cues delivered simultaneously. We analysed the congruence index across all trials using a general linear mixed model (GLMM) with individual dog included in the model as a random effect. Only emotion valence, stimulus sex, stimulus species and presentation position (left versus right) were included as the fixed effects in the final analysis because first- and second-order interactions were not significant. The means were compared to zero and confidence intervals were presented for all the main factors in this model. A backward selection procedure was applied to identify the significant factors. The normality assumption was verified by visually inspecting plots of residuals, with no important deviation from normality identified. To verify a possible interaction between the sex of subjects and stimuli, we used a separate GLMM taking into account these factors. We also tested whether dogs preferentially looked at a particular valence throughout trials and at a particular face in the control trials (see the electronic supplementary material for details of index calculation).
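The congruence index defined above can be written as a small helper (a minimal sketch; the function name and example times are ours):

```python
def congruence_index(congruent, incongruent, middle):
    """(C - I) / T, where T is total looking time for the trial
    (congruent + incongruent + looking at the middle), all in seconds.
    Positive values indicate longer looking at the congruent face."""
    total = congruent + incongruent + middle
    if total == 0:
        raise ValueError("no looking time recorded in this trial")
    return (congruent - incongruent) / total

# e.g. 3 s on the congruent face, 1 s on the incongruent face, 1 s in the middle
print(congruence_index(3.0, 1.0, 1.0))  # 0.4
```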
3. Results
Dogs showed a clear preference for the congruent face in 67% of the trials (n = 188). The mean congruence index was 0.19 ± 0.03 across all test trials and was significantly greater than zero (t16 = 5.53; p < 0.0001), indicating that dogs looked significantly longer at the face whose expression matched the valence of the vocalization. Moreover, we found a consistent congruent looking preference regardless of the stimulus species (dog: t167 = 5.39, p < 0.0001; human: t167 = 2.48, p = 0.01; figure 2a), emotional valence (negative: t167 = 5.01, p < 0.0001; positive: t167 = 2.88, p = 0.005; figure 2b), stimulus gender (female: t167 = 4.42, p < 0.0001; male: t167 = 3.45, p < 0.001; figure 2c) and stimulus position (left side: t167 = 2.74, p < 0.01; right side: t167 = 5.14, p < 0.0001; figure 2d). When a backwards selection procedure was applied to the model with the four main factors, the final model included only stimulus species. The congruence index for this model was significantly higher for viewing dog rather than human faces (dog: 0.26 ± 0.05, human: 0.12 ± 0.05, F1,170 = 4.42; p = 0.04, figure 2a), indicating that dogs demonstrated greater sensitivity towards conspecific cues. In a separate model, we observed no significant interaction between subject sex and stimulus sex (F1,169 = 1.33, p = 0.25) or main effects (subject sex: F1,169 = 0.17, p = 0.68; stimulus sex: F1,169 = 0.56, p = 0.45).
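The one-sample comparisons of the mean index against zero reported above amount to t = x̄/(s/√n) with n − 1 degrees of freedom; a minimal sketch using made-up index values (illustration only, not the study's data):

```python
import math
import statistics

def one_sample_t(values, mu=0.0):
    """t statistic for H0: mean(values) == mu (df = len(values) - 1)."""
    n = len(values)
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample SD, n - 1 denominator
    return (mean - mu) / (sd / math.sqrt(n))

# Hypothetical per-trial congruence indices (not the study's data)
indices = [0.25, 0.10, 0.30, 0.15, 0.22, 0.05, 0.28]
print(round(one_sample_t(indices), 2))
```

A large positive t indicates that the mean index sits well above zero, i.e. systematic looking towards the congruent face.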
Dogs did not preferentially look at either of the facial expressions in control conditions, when the vocalization was the neutral sound (mean: 0.04 ± 0.07; t16 = 0.56; p = 0.58). The mean preferential looking index was −0.05 ± 0.03, which was not significantly different from zero (t16 = −1.6, p = 0.13), indicating that there was no difference in the proportion of viewing time between positive and negative facial expressions across trials.
4. Discussion
The findings are, we believe, the first evidence of the integration of heterospecific emotional expressions in a species other than humans, and extend beyond primates the demonstration of cross-modal integration of conspecific emotional expressions. These results show that domestic dogs can obtain dog and human emotional information from both auditory and visual inputs, and integrate them into a coherent perception of emotion [21]. Therefore, it is likely that dogs possess at least the mental prototypes for emotional categorization (positive versus negative affect) and can recognize the emotional content of these expressions. Moreover, dogs performed in this way without any training or familiarization with the models, suggesting that these emotional signals are intrinsically important. This is consistent with this ability conferring important adaptive advantages [24].
Our study shows that dogs possess a similar ability to some non-human primates in being able to match auditory and visual emotional information [5], but also demonstrates an important advance. In our study, there was not a strict temporal correlation between the recording of visual and auditory cues (e.g. a relaxed dog face with open mouth paired with a playful bark), unlike the earlier research on primates (e.g. [5]). Thus the relationship between the modalities was not temporally contiguous, reducing the likelihood of learned associations accounting for the results. This suggests the existence of a robust categorical emotion representation.
Although dogs showed the ability to recognize both conspecific and heterospecific emotional cues, we found that they responded significantly more strongly towards dog stimuli. This could be explained by a more refined mechanism for the categorization of emotional information from conspecifics, which is corroborated by the recent findings of dogs showing a greater sensitivity to conspecifics' facial expressions [12] and a preference for dog over human images [23]. The ability to recognize emotions through visual and auditory cues may be a particularly advantageous social tool in a highly social species such as dogs, and might have been exapted for the establishment and maintenance of long-term relationships with humans. It is possible that during domestication, such features could have been retained and potentially selected for, albeit unconsciously. Nonetheless, the communicative value of emotion is one of the core components of the process, and even less-social domestic species, such as cats, express affective states such as pain in their faces [25].
There has been a long-standing debate as to whether dogs can recognize human emotions. Studies using either visual or auditory stimuli have observed that dogs can show differential behavioural responses to single-modality sensory inputs with different emotional valences (e.g. [12,16]). For instance, Müller et al. [13] found that dogs could selectively respond to happy or angry human facial expressions; when trained with only the top (or bottom) half of unfamiliar faces, they generalized the learned discrimination to the other half of the face. However, these human-expression-modulated behavioural responses could be attributed solely to learning of contiguous visual features. In this sense, dogs could be discriminating human facial expressions without recognizing the information being transmitted.
Our subjects needed to be able to extract the emotional information from one modality and activate the corresponding emotion category template for the other modality. This indicates that domestic dogs interpret faces and vocalizations using more than simple discriminative processes; they obtain emotionally significant semantic content from relevant audio and visual stimuli that may aid communication and social interaction. Moreover, the use of unfamiliar Portuguese words controlled for potential artefacts induced by a dog's previous experience with specific words. The ability to form emotional representations that include more than one sensory modality suggests cognitive capacities not previously demonstrated outside of primates. Further, the ability of dogs to extract and integrate such information from an unfamiliar human stimulus demonstrates cognitive abilities not known to exist beyond humans. These abilities may be fundamental to a functional relationship within the mixed species social groups in which dogs often live. Moreover, our results may indicate a more widespread distribution of the ability to spontaneously integrate multimodal cues among non-human mammals, which may be key to understanding the evolution of social cognition.
Supplementary Material
Acknowledgements
We thank Fiona Williams and Lucas Albuquerque for assistance with data collection/double coding and figure preparation.
Ethics
Ethical approval was granted by the ethics committee in the School of Life Sciences, University of Lincoln. Prior to the study, written informed consent was obtained from the dogs' owners and the human models whose face images and vocalizations were sampled as the stimuli. We can confirm that both human models have agreed that their face images and vocalizations can be used for research and related publications, and we have received their written consent.
Authors' contribution
N.A., K.G., A.W. and D.M. conceived/designed the study and wrote the paper. E.O. conceived the study. N.A. performed the experiments. N.A. and C.S. analysed and interpreted the data. N.A. prepared the figures. All authors gave final approval for publication and agree to be held accountable for the work performed.
Competing interests
We declare we have no competing interests.
Funding
Financial support for N.A. from the Brazilian Coordination for the Improvement of Higher Education Personnel is acknowledged.
References
1. Schmidt KL, Cohn JF. 2001. Human facial expressions as adaptations: evolutionary questions in facial expression research. Am. J. Phys. Anthropol. 33, 3–24. (10.1002/ajpa.20001) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
2. Parr LA, Winslow JT, Hopkins WD, de Waal FBM. 2000. Recognizing facial cues: individual discrimination by chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta). J. Comp. Psychol. 114, 47–60. (10.1037/0735-7036.114.1.47) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
3. Campanella S, Belin P. 2007. Integrating face and voice in person perception. Trends Cogn. Sci. 11, 535–543. (10.1016/j.tics.2007.10.001) [PubMed] [CrossRef] [Google Scholar]
4. Yuval-Greenberg S, Deouell LY. 2009. The dog's meow: asymmetrical interaction in cross-modal object recognition. Exp. Brain Res. 193, 603–614. (10.1007/s00221-008-1664-6) [PubMed] [CrossRef] [Google Scholar]
5. Ghazanfar AA, Logothetis NK. 2003. Facial expressions linked to monkey calls. Nature 423, 937–938. (10.1038/423937a) [PubMed] [CrossRef] [Google Scholar]
6. Izumi A, Kojima S. 2004. Matching vocalizations to vocalizing faces in a chimpanzee (Pan troglodytes). Anim. Cogn. 7, 179–184. (10.1007/s10071-004-0212-4) [PubMed] [CrossRef] [Google Scholar]
7. Payne C, Bachevalier J. 2013. Crossmodal integration of conspecific vocalizations in rhesus macaques. PLoS ONE 8, e81825. (10.1371/journal.pone.0081825) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
8. Nagasawa M, Mitsui S, En S, Ohtani N, Ohta M, Sakuma Y, Onaka T, Mogi K, Kikusui T. 2015. Oxytocin-gaze positive loop and the coevolution of human-dog bonds. Science 348, 333–336. (10.1126/science.1261022) [PubMed] [CrossRef] [Google Scholar]
9. Faragó T, Pongrácz P, Range F, Virányi Z, Miklósi A. 2010. 'The bone is mine': affective and referential aspects of dog growls. Anim. Behav. 79, 917–925. (10.1016/j.anbehav.2010.01.005) [CrossRef] [Google Scholar]
10. Taylor AM, Reby D, McComb K. 2011. Cross modal perception of body size in domestic dogs (Canis familiaris). PLoS ONE 6, e0017069. (10.1371/journal.pone.0017069) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
11. Nagasawa M, Murai K, Mogi K, Kikusui T. 2011. Dogs can discriminate human smiling faces from blank expressions. Anim. Cogn. 14, 525–533. (10.1007/s10071-011-0386-5) [PubMed] [CrossRef] [Google Scholar]
12. Racca A, Guo K, Meints K, Mills DS. 2012. Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children. PLoS ONE 7, e36076. (10.1371/journal.pone.0036076) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
13. Müller CA, Schmitt K, Barber ALA, Huber L. 2015. Dogs can discriminate emotional expressions of human faces. Curr. Biol. 25, 601–605. (10.1016/j.cub.2014.12.055) [PubMed] [CrossRef] [Google Scholar]
14. Buttelmann D, Tomasello M. 2013. Can domestic dogs (Canis familiaris) use referential emotional expressions to locate hidden food? Anim. Cogn. 16, 137–145. (10.1007/s10071-012-0560-4) [PubMed] [CrossRef] [Google Scholar]
15. Flom R, Gartman P. In press. Does affective information influence domestic dogs' (Canis lupus familiaris) point-following behavior? Anim. Cogn. (10.1007/s10071-015-0934-5) [PubMed] [CrossRef] [Google Scholar]
16. Fukuzawa M, Mills DS, Cooper JJ. 2005. The effect of human command phonetic characteristics on auditory cognition in dogs (Canis familiaris). J. Comp. Psychol. 119, 117–120. (10.1037/0735-7036.119.1.117) [PubMed] [CrossRef] [Google Scholar]
17. Custance D, Mayer J. 2012. Empathic-like responding by domestic dogs (Canis familiaris) to distress in humans: an exploratory study. Anim. Cogn. 15, 851–859. (10.1007/s10071-012-0510-1) [PubMed] [CrossRef] [Google Scholar]
18. Andics A, Gácsi M, Faragó T, Kis A, Miklósi A. 2014. Voice-sensitive regions in the dog and human brain are revealed by comparative fMRI. Curr. Biol. 24, 574–578. (10.1016/j.cub.2014.01.058) [PubMed] [CrossRef] [Google Scholar]
19. Kondo N, Izawa E-I, Watanabe S. 2012. Crows cross-modally recognize group members but not non-group members. Proc. R. Soc. B 279, 1937–1942. (10.1098/rspb.2011.2419) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
20. Sliwa J, Duhamel J, Pascalis O, Wirth S. 2011. Spontaneous voice–face identity matching by rhesus monkeys for familiar conspecifics and humans. Proc. Natl Acad. Sci. USA 108, 1735–1740. (10.1073/pnas.1008169108) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
21. Proops L, McComb K, Reby D. 2009. Cross-modal individual recognition in domestic horses (Equus caballus). Proc. Natl Acad. Sci. USA 106, 947–951. (10.1073/pnas.0809127105) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
22. Proops L, McComb K. 2012. Cross-modal individual recognition in domestic horses (Equus caballus) extends to familiar humans. Proc. R. Soc. B 279, 3131–3138. (10.1098/rspb.2012.0626) [PMC free article] [PubMed] [CrossRef] [Google Scholar]
23. Somppi S, Törnqvist H, Hänninen L, Krause C, Vainio O. 2014. How dogs scan familiar and inverted faces: an eye movement study. Anim. Cogn. 17, 793–803. (10.1007/s10071-013-0713-0) [PubMed] [CrossRef] [Google Scholar]
24. Guo K, Meints K, Hall C, Hall S, Mills D. 2009. Left gaze bias in humans, rhesus monkeys and domestic dogs. Anim. Cogn. 12, 409–418. (10.1007/s10071-008-0199-3) [PubMed] [CrossRef] [Google Scholar]
25. Holden E, Calvo G, Collins M, Bell A, Reid J, Scott EM, Nolan AM. 2014. Evaluation of facial expression in acute pain in cats. J. Small Anim. Pract. 55, 615–621. (10.1111/jsap.12283) [PubMed] [CrossRef] [Google Scholar]
Articles from Biology Letters are provided here courtesy of The Royal Society
Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4785927/