Non-verbal communication and context: Multi-modality in interaction

Tim Wharton, Pauline Madella

Research output: Chapter in Book/Conference proceeding with ISSN or ISBN › Chapter › peer-review

Abstract

Traditionally, the study of linguistics has focussed on verbal communication. In the sense that linguistics is the scientific study of language, this focus is perfectly justified. Those working in the sub-discipline of linguistic pragmatics, however, face something of a dilemma. The aim of a pragmatic theory is to explain how utterances are understood, and utterances, of course, have both linguistic and non-linguistic properties. Moreover, current work in pragmatics emphasizes that the affective dimension of a speaker’s meaning is at least as important as the cognitive one, and it is often the non-linguistic properties of utterances that convey information relating to this dimension.

This paper highlights the major role of non-verbal ‘modes’ of communication (‘multi-modality’) in accounting for how meaning is achieved and explores in particular how the quasi-musical contours we impose on the words we say, as well as the movements of our face and hands that accompany speech, constrain the context and guide the hearer to our intended meaning. We build on previous exploration of the relevance of prosody (Wilson and Wharton 2006) and, crucially, look at prosody in relation to other non-verbal communicative behaviours from the perspective of relevance theory. In so doing, we also hope to shed light on the role of multimodality in both context construction and utterance interpretation, and suggest that prosody needs to be analysed as one tool within a broader set of gestural ones (Bolinger 1983).

Relevance theory is an inferential model, in which human communication revolves around the expression and recognition of the speaker’s intentions in the performance of an ostensive stimulus: an act accompanied by the appropriate combination of intentions. This inferential model is proposed as a replacement for the traditional code model of communication, according to which a speaker simply encodes into a signal the thought they wish to communicate and the hearer retrieves their meaning by decoding the signal provided. We will argue that much existing work on multimodality remains rooted in a code model, and show how adopting an inferential model enables us to integrate multimodal behaviours more completely within a theory of utterance interpretation. As ostensive stimuli, utterances are composites of a range of different behaviours, each working together to form a range of contextual cues.
Original language: English
Title of host publication: The Cambridge Handbook of Language in Context
Editors: Jesus Romero-Trillo
Place of Publication: Cambridge
Publisher: Cambridge University Press
Chapter: Part V
Pages: 419-436
Number of pages: 17
ISBN (Print): 9781108839136
Publication status: Published - 1 Jan 2024
