For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and, together with speech, constitute an integral part of language. The articles in this issue explore and document how gestures and sign languages are similar to or different from each other, and how communicative expression in the visual modality can change from gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.
Bibliographical note: This is the peer-reviewed version of the following article: Perniss, P., Özyürek, A., and Morgan, G. (2015), The Influence of the Visual Modality on Language Structure and Conventionalization: Insights From Sign Language and Gesture. Topics in Cognitive Science, 7: 2–11, which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1111/tops.12127/abstract. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
Perniss, P., Özyürek, A., & Morgan, G. (2015). The influence of the visual modality on language structure and language conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2–11. https://doi.org/10.1111/tops.12127