Abstract
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described: the signer’s or the perceiver’s. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions nonegocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a nonegocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that nonlinguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
| Original language | English |
| --- | --- |
| Pages (from-to) | 143-169 |
| Number of pages | 27 |
| Journal | Spatial Cognition and Computation |
| Volume | 15 |
| Issue number | 3 |
| DOIs | 10.1080/13875868.2014.1003933 |
| Publication status | Published - 3 Jul 2015 |
Bibliographical note
This is an Accepted Manuscript of an article published by Taylor & Francis in Spatial Cognition & Computation: An Interdisciplinary Journal on 07/07/2015, available online: http://www.tandfonline.com/10.1080/13875868.2014.1003933

Keywords
- spatial language
- sign language
- viewpoint
- gesture