Designing learning interventions for individual learners is a fantastic goal. But is the science behind learning styles sound enough for L&D professionals to rely upon? Is learning styles analysis, in fact, ‘as predictive as using a horoscope’?
MOL tutor, Christine Bell, thinks so. Christine runs her own facilitation and coaching business, Bell Thompson, and has been aware of learning styles theory for many years. In 2004, she was startled by a report by Frank Coffield and colleagues at the Learning and Skills Research Centre, which questioned the validity of tools designed to measure reactions to learning interventions. Christine began to look further into the scientific rigour of popular learning style assessments.
‘It’s all a myth!’ she insists. ‘For starters, the Coffield report showed there was little evidence to demonstrate that adults work their way around Kolb’s Experiential Learning Cycle.’ (Kolb’s cycle, devised in 1984, is still the basis of most learning styles questionnaires.) ‘This automatically invalidates the questionnaires as reliable indicators.
‘A further problem suggested by Coffield et al. concerns the objectivity of learners in determining their own preferences. Learning styles analysis relies largely on self-assessment questionnaires, and if those questionnaires are filled out with subjective but inaccurate responses, the learning activities devised in response will hinder, rather than help, the learner.
‘I’m not against adapting techniques to help learners. What I’m opposed to is the diagnostic work, which is about as predictive as using a horoscope. Learning is complex, and trying to find a simple model so we can categorise learners seems like a significant error. It scares me when I read about schools putting V, A or K labels onto pupils based on this kind of assessment.’
A unified scientific test for the learning styles hypothesis has been suggested by many critics, most notably the Association for Psychological Science (APS) in 2009. It suggests grouping students of similar ability into their respective learning styles and randomly assigning them to classes. Some learners would find themselves in classes appropriate to their learning styles, others in inappropriate ones. If there was a disparity in end-of-semester test results between those ‘correctly’ and ‘wrongly’ assigned, this would support the learning styles hypothesis. If there was little or no difference, it would invalidate the hypothesis.
No such major experiment has been conducted to date. The APS report concluded that ‘there is no adequate evidence base to justify incorporating learning styles assessments into general educational practice.’ Yet Robert Sternberg from Tufts University was quick to point out that the paper had failed to cite many leading researchers on the subject.
Is it really fair to say that no one benefits from learning styles analysis? There is much evidence to suggest that some L&D professionals have benefited from trying out different learning techniques based on the results of analysis, even imperfect analysis.
‘Again,’ responds Christine, ‘I don’t have any issue with people finding successful resolutions to learning problems. I just think it’s time we had a more watertight method. Until then, I’ll continue to refer to learning styles analysis in development toolkits and will give learners the opportunity to explore all the arguments and not just accept the information in an uncritical way. And I look forward to future developments.’