Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective

Jennie E. Pyers, Pamela Perniss, Karen Emmorey

Research output: Contribution to journal › Article › peer-review

Abstract

Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer’s or the perceiver’s. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions nonegocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a nonegocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that nonlinguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
Original language: English
Pages (from-to): 143-169
Number of pages: 27
Journal: Spatial Cognition and Computation
Volume: 15
Issue number: 3
DOI: 10.1080/13875868.2014.1003933
Publication status: Published - 3 Jul 2015

Bibliographical note

This is an Accepted Manuscript of an article published by Taylor & Francis in Spatial Cognition & Computation: An Interdisciplinary Journal on 07/07/2015, available online: http://www.tandfonline.com/10.1080/13875868.2014.1003933

Keywords

  • spatial language
  • sign language
  • viewpoint
  • gesture
