Attention and learning in L2 multimodality: a webcam-based eye-tracking study
Zhang, P.
Abstract

Multimodal input can significantly support second language (L2) vocabulary learning and comprehension. However, very little research has examined how L2 learners, especially young learners, allocate attention when exposed to such input, or whether learning from multimodal input can be explained by attention allocation. This study therefore investigated individual differences in attention allocation during L2 vocabulary learning with multimodal input and how these differences influenced vocabulary learning and comprehension. Forty young learners of French watched two types of multimodal input (Written+Audio+Picture vs. Written+Speaker+Video) while their eye movements were recorded with online webcam-based eye-tracking technology. They also completed tests of comprehension, vocabulary, and phonological short-term memory (PSTM). We show that greater attention was allocated to the non-verbal input in video than in picture format, and that these differences in attention allocation were negatively predicted by learners' PSTM capacity. Additionally, increased attention to the non-verbal element, whether video or picture, resulted in better overall comprehension and larger vocabulary gains in meaning recognition and recall. Our findings offer new insights into the role of attention and how it can be maximized, with both theoretical and pedagogical implications for multimodal L2 learning.