Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective

Jennie E. Pyers, Pamela Perniss, Karen Emmorey

Research output: Contribution to journal › Article › peer-review


Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize whose viewpoint the spatial relation is described from: the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions nonegocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a nonegocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that nonlinguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
Original language: English
Pages (from-to): 143-169
Number of pages: 27
Journal: Spatial Cognition and Computation
Issue number: 3
Publication status: Published - 3 Jul 2015

Bibliographical note

This is an Accepted Manuscript of an article published by Taylor & Francis in Spatial Cognition & Computation: An Interdisciplinary Journal on 07/07/2015, available online:


Keywords

  • spatial language
  • sign language
  • viewpoint
  • gesture

