Timsal n Tamazight
Volume 6, Number 1, Pages 94-96
2014-10-01

Can Alphabetical Engineering And Synchronisation Help To Change L1 Neural Commitment In Language Learners? Synchronised Web Authoring Notation System (SWANS): An Authoring System For Improving Foreign Language Oral Perception And Production.

Authors: Stenton Anthony, Tazi Said, Kebbaj Nabil.

Abstract

Mother-tongue listening in adults is a process of rapid, automatic, generally error-free deconstruction. The listener segments the continuous flow of sound into discrete units which can be interpreted semantically. Foreign language listening is usually slower, far more error-prone and above all accompanied by perception problems related to mother-tongue interference or "L1 neural commitment" (Kuhl, 2005). These problems, which are never completely overcome, are often exacerbated by the student’s knowledge of reading, a process which is accompanied, consciously or unconsciously, by subvocalisation. It is this activity of subvocalisation which reveals the problems of the language learner. When reading a second or foreign language, subvocalisation is marked by relatively well-known problems of pronunciation and by far less well-studied problems of lexical stress. By transferring mother-tongue stress patterns to the target foreign language when subvocalising, and later when speaking, the student creates communication problems. The francophone speaker of English will neglect the vowel reduction of the second syllable in “dollar”, which becomes “dollAR”, and immediately betray their francophone origins. The word “famine” may be pronounced “faMINE” (to rhyme with “machine”) and be greeted with total incomprehension. Such a failure to respect English stress rules will be judged acceptable or unacceptable according to the experience of the listener. An accumulation of such errors often leads to communication breakdown. Finding a remedy for these problems of speech perception and production is thus a key challenge for Higher Education language learning. In the following article we present the hypothesis that students must be enabled to see what they have so much difficulty in hearing and that the computer screen is the indispensable platform for this process. 
The plasticity of computer based tools not only enables alphabetical engineering, which entails changing the size and colour of letters and using animation to improve memorisation, but since the arrival in 1999 of SMIL language (W3C) it is now also possible to synchronise large corpora of text with sound and to present such multimodal documents from within web pages. For the inexperienced, such an approach might be rapidly dismissed as “karaoke for language learning” but the careful, fine-tuned, smooth synchronisation of lines of text, rather than the simple highlighting of the distracting movement of individual words, offers an ocular comfort which represents a potentially important breakthrough. In 2005 a team of 12 researchers from the fields of linguistics, cognitive science, information science and acoustics, funded by the CNRS in France, set about developing a multilingual authoring system which would generate synchronised sound and text and integrate manually-added syllabic annotations to show lexical stress. The resulting programme called SWANS (Synchronised Web Authoring Notation System) has now been tested with various languages in European university language centres associated with CercleS. The expertise for annotating foreign language texts according to possible L1 interference requires a cooperative effort from teachers on a worldwide basis. For example, defining the interference of Arabic as a mother tongue in L2 auditory perception requires a listening acuity which is not always easy to find in Europe. In a globalised world where the nationality mix of our classrooms is constantly on the increase, SWANS has clear implications for improving teacher education and provides a platform for teachers to share their knowledge of L1 interference on an international adaptive basis by the common annotation of chosen video or audio transcripts. 
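The line-level synchronisation described above can be illustrated with a minimal sketch. This is not SWANS code, and it substitutes plain JavaScript for SMIL purely for illustration: the hypothetical cue list and function names are assumptions, showing only the underlying idea of mapping a playback time to the transcript line that should be highlighted as a whole.

```javascript
// Illustrative sketch only (not SWANS itself, which uses W3C SMIL):
// map an audio playback time to the active transcript line.

// Hypothetical cue list: start time (in seconds) of each line of text.
const lineCues = [0.0, 3.2, 7.5, 12.1];

// Return the index of the line to highlight at time t,
// i.e. the last cue whose start time is <= t (-1 before the first cue).
function activeLine(cues, t) {
  let active = -1;
  for (let i = 0; i < cues.length; i++) {
    if (cues[i] <= t) active = i;
    else break;
  }
  return active;
}

// In a browser this would drive whole-line highlighting, e.g.:
// audio.addEventListener("timeupdate", () => {
//   const i = activeLine(lineCues, audio.currentTime);
//   /* add a highlight class to line i, remove it from the others */
// });
```

Highlighting the whole line returned by `activeLine`, rather than individual words, corresponds to the "ocular comfort" argument above: the reader's eye follows a stable block of text instead of a jumping cursor.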
In this paper we propose to report on feedback, discuss the proposed introduction of automatically generated stress-pattern annotations, and review recent field results concerning L2 oral perception and production, as well as the introduction of new reading strategies in a distance and networked learning context.

Keywords

L1 neural commitment, alphabetical engineering, synchronisation, listening perception, oral production, lexical stress, SMIL