
Multi-sensory learning and learning to read

Journal: International Journal of Psychophysiology
ISSN: 0167-8760
Publisher: Elsevier
Volume: 77
Issue: 3
DOI: 10.1016/j.ijpsycho.2010.06.025
Keywords
  • Letter-Speech Sound Associations
  • Grapheme-Phoneme Correspondences
  • Multi-Sensory Processing
  • Audiovisual Integration
  • Reading Development
Disciplines
  • Biology
  • Linguistics

Abstract

The basis of literacy acquisition in alphabetic orthographies is learning the associations between letters and their corresponding speech sounds. Despite the primacy of these associations in learning to read, little is known about how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter–speech sound processing have revealed that normally developing readers take years to automate these associations, and that dyslexic readers show hardly any automation at all. It is argued that this effortful learning may stem from the nature of the audiovisual process recruited to integrate elements that are, in principle, arbitrarily linked. Letter–speech sound integration is shown not to resemble the integration of natural audiovisual objects such as audiovisual speech: the automatic, symmetrical recruitment of the putatively uni-sensory visual and auditory cortices that occurs in audiovisual speech integration does not occur for letters and speech sounds. It is further argued that letter–speech sound integration only partly resembles the integration of arbitrarily linked, unfamiliar audiovisual objects. Letter–sound pairs and such artificial audiovisual objects share the necessity of a narrow temporal window for integration to occur. They differ, however, in that letter–sound pairs combine partly familiar elements that acquire meaning through the learning of an orthography.
Although letter–speech sound pairs thus share similarities both with audiovisual speech and with unfamiliar, arbitrary audiovisual objects, they appear to develop into unique audiovisual objects that must, moreover, be processed in a unique way to enable fluent reading. They therefore very likely recruit neurobiological learning mechanisms other than those involved in learning natural or arbitrary unfamiliar audiovisual associations.