The Delicate Passage From Lab to School

Research in the cognitive sciences is providing insights into the best ways of learning to read, write and count. However, the major challenge for the educational sciences remains bringing the results obtained in the laboratory into the classroom. This requires better coordination with teachers.



The latest international surveys assessing student learning (Pisa 2015) (*) and reading ability (Pirls 2016) (**) highlighted some crucial issues in the French educational system. They continue to show a significant drop in students’ reading and mathematics performance. According to the Pirls 2016 study, the average reading score of French 9-10-year-old pupils (511) places France 34th among the fifty participating countries and almost last in Europe. The continuous fall in the French score since 2001 (525 in 2001, 520 in 2011) is mainly due to the increase in the number of students with a very low level. What can be done about this educational challenge? Can scientific research help bring the scores back up?

The main objective of the cognitive sciences is to describe and explain the cognitive and affective processes at play in students and teachers during the numerous activities conducted in the classroom. How can the effectiveness of these activities be ensured? This is precisely the goal of so-called “interventional” research, which consists of assessing the benefits of specific educational interventions, centred on specific skills, in the classroom. Given the very nature of its methods and results, this research seeks to improve not just student learning, but also teacher training.

What exactly are these methods? In science, it is traditional to consider two types of approach: the first, called “hypothetico-deductive”, is based on general theories and laws; it makes predictions which are then compared with the data from experimentation. The second, known as “explicative”, starts from observations and questions their determinants, attempting to move back up the chain of causes. The two approaches are linked and inform each other. In both cases, scientists use complex systems of causality, involving several explanatory factors that may be highlighted by statistical analyses. In the field of education, both approaches are legitimate and complementary, as they can help us to better understand the effects of certain factors (pedagogical method used, class size, teachers’ level of training, socio-economic level of students’ families, etc.) on school performance.


In the interventional cognitive sciences, however, the “hypothetico-deductive” method is preferred, as the experimental method makes it possible to establish proof - that is, to show that a specific factor (a teaching method or a learning technique) is indeed the main cause of an observed behaviour, such as better learning performance.
To ensure that the causal relationship between the intervention and the variable measured is unquestionable, “interventions” (also called “training”) need to be organised in the classrooms. This requires all the other factors likely to affect the performance measured - school level, socioprofessional category, etc. - to be controlled. This is obviously quite a delicate task, given the complexity of the educational system. To measure the effectiveness of an intervention, we therefore use a rigorous methodology, similar to that used in clinical trials in medicine, with three levels of reliability (see inset).
Thus, numerous questions concerning learning and education are being explored: research being carried out around the world is looking at academic content (methods to teach reading, writing and maths, etc.) and the cross-disciplinary skills of students (working memory, attention, executive functions, emotional skills, etc.).

At the start of the 2000s, with my colleagues Florence Bara from the University of Toulouse, Pascale Colé from the University of Aix-Marseille and Liliane Sprenger-Charolles from the CNRS, we were among the first in France to develop interventional research in schools and publish our results in international journals. Today, several French teams are conducting this type of study. Level 1 or 2 experiments (the most reliable), however, remain relatively rare in the French school system (1). Most of them deal with learning to read. The pooled results of international studies, like those of French studies (2), show that, in order to be effective, the interventions need to be explicit: the skills worked on during each session must be very clearly established - learning a single grapheme-phoneme correspondence (the link between a letter, or group of letters, and a speech sound), for example. They must also be highly structured, with a precise sequence of planned exercises, and be done in small groups. Short sequences (20 to 30 minutes) need to be repeated several times in the same week, over one or two months. French studies (3) also show that the ability to identify and manipulate phonemes can be trained very early on, in the last year of nursery school.

Our studies agree with the international scientific literature, and also show that the most effective interventions are those in which the oral work on phonemes is done with the support of their corresponding written letters. This phoneme practice has a greater effect on the ability to read and decode words when it is associated, in the same session, with visual and manual exercises to explore letters in relief. Concretely, the students trace with their fingertips the shape of the letters corresponding to the phoneme to be learned. This type of multisensory practice, combining simultaneous oral, visual and tactile tasks, is particularly beneficial for reading (including decoding invented words) among children from poor socio-economic backgrounds or priority education networks (REP), and for writing among children from middle or upper socio-economic backgrounds (4).

However, these studies were conducted in controlled conditions, in small groups, with small samples of 40 to 100 students. Can the results be generalised to a large scale? In 2013, with colleagues from a variety of disciplines, we evaluated the effects of an intervention as rigorously as possible, on a large scale, with several hundred primary-school pupils in a priority area around Lyon, and in “realistic” conditions dictated by the constraints of the various actors in the French national education system (5).

Over a school year, the intervention combined sessions focusing on decoding words, with exercises based on grapheme-phoneme correspondences, and oral and written comprehension of sentences and short texts. The weekly time spent on these sessions and additional language exercises varied from ten to twelve hours. The effects of this specific intervention on the reading progress of the students were compared to those of students in control classes, where teachers were requested not to change their teaching practices.

Conducted independently by CNRS scientists and the Directorate of Evaluation, Forecasting and Performance (DEPP), part of the Ministry of National Education, this experiment was unique in its scale and its determination to limit any experimental bias. As such, we randomly assigned schools to the “test” or “control” group, were careful to take all children into account, and assessed their level in maths to ensure that the extensive teaching of French would not be at the expense of other subjects.

To assess reading progress, the skills of all first-year primary students in the “test” classes (1,252 pupils) were measured using standard tests, before and after the practice sessions, and were compared to those of students in the “control” classes (2,398 pupils).

At the start of the year, we gave each child tests in phonology, vocabulary, oral comprehension and reading familiar and invented words aloud. To measure their progress, we conducted the same tests at the end of the school year, along with the reading of a 265-word text (the Alouette test) and written comprehension tests. In parallel, the DEPP conducted its own assessment (reading and mathematics) in the same classes, and in other similar classes throughout France (2,375 students).


Our hypothesis was that the pupils in the test classes and control classes would all progress, but that progress would be greater in the test classes. The results proved us wrong. The educational programme implemented did not help the children in the test group to progress more in reading (Lyon academy) than those in the control groups (Lyon and nationally). This is the conclusion of the quantitative analysis of the scores obtained in the assessments made by the CNRS scientists and those conducted by the DEPP.

So, should we forget the idea of such a programme? Not so fast! The subjective evaluations conducted by national education inspectors among the teachers involved in the programme were very positive. During quarterly meetings of the steering committee, everyone involved in the project (rector, inspectors, etc.) agreed on the benefits of the intervention for the individual and collective behaviour of the children, for the classroom atmosphere and for team work.

This led us to question the reasons for these results. Firstly, given how the Ministry of National Education operates, we had only a few weeks to prepare and present the project, and then to recruit, involve and train the volunteer teachers before the experiment year started, instead of the several months that would have been necessary. In addition, as the large number of teachers was spread over a wide geographic area (three departments), we were unable to offer regular, in-depth training and personalised monitoring. Yet training is essential to the success of such an intervention. The difference in effectiveness noted between the small-scale interventions and larger-scale ones could be explained by this lack of training.

Real-scale experiments raise real theoretical and practical questions concerning the transfer and generalisation of results obtained in small groups of tens of children to larger groups of hundreds of children. Furthermore, this research is a perfect illustration of the old debate on the use of research results: as the psychologist William James pointed out more than a century ago, psychological descriptions cannot be transposed directly into specific educational prescriptions. Detailed procedures applicable to all situations and all audiences cannot be deduced from research results specific to one situation, even though a certain permeability does exist between pedagogy and science, often greater than we may think. That is what this study shows: although the results from the cognitive sciences on learning to read are solid and the resulting educational principles now widely known, applying them to the classroom still requires a considerable effort in interventional research.


This calls for a joint effort from those involved in research and from the people in the world of education. If we want to promote effective, large-scale interventions, we need teachers and scientists to design the interventional programmes together.

As well as offering a good professional training tool for teachers, this would create a virtuous circle between, on the one hand, the teachers, who are constantly devising and testing pedagogical techniques, mainly through practice, and, on the other hand, the scientists, who are trying to understand, explain and evaluate learning techniques and strategies. Of course, the techniques developed by educators are not of lesser importance, as much scientific research is based on them. Moreover, we know that the level of teachers’ skill determines the quality of the educational system and hence its performance. This, in turn, depends on the training they receive. In this regard, interventional research protocols could be of relevance in the initial and continuing training of new or expert teachers. Training through research would enable them to better understand how children learn, individually and in groups, and the effects of their teaching.

There are several conditions required for this interventional research by teachers and scientists to happen: the authorities’ trust in their teachers; the willingness of teachers and scientists; the involvement of all human resources available in the region, including teacher trainers and in areas far from university centres; strong incentives given to teacher-training schools (Espe), in association with university research laboratories, to encourage them to participate in interventional research projects; scientific, educational and financial autonomy for each interventional research project; and finally, systematic distribution of the results (positive or negative) to all actors using digital tools. The project is huge, but so are the stakes.

(1) Édouard Gentaz and Philippe Dessus (dir.), Comprendre les apprentissages, Dunod, 2004.
(2) É. Gentaz (dir.), Apprendre ? Oui, mais comment... Des laboratoires aux salles de classe, Anae, 123, 2013.
(3) L. Sprenger-Charolles et al., Langue française, 199, 51, 2018.
(4) Édouard Gentaz, La Main, le cerveau et le toucher, Dunod, 2018.
(5) É. Gentaz et al., Anae, 123, 172, 2013.
(*) Pisa is the international OECD programme for monitoring student learning.
(**) Pirls is the international assessment of reading proficiency in fourth-grade pupils.





LEVEL 1, the most reliable, consists of comparing students of the same age, divided into two groups: an “experimental” group and an “active control” group. In the first, the intervention to be tested is tried out on the students. In the second, students are given a different intervention. The two groups are compared twice, before and after the intervention, to assess its specific effectiveness.
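As an illustration only, such a pre/post comparison can be sketched in a few lines of Python. All scores below are invented for the example, and Welch’s t statistic stands in for the fuller statistical analyses such studies actually use:

```python
from statistics import mean, variance

# Hypothetical pre- and post-intervention reading scores (illustrative only)
test_pre  = [48, 52, 45, 50, 47, 53, 49, 51]
test_post = [60, 63, 55, 62, 58, 66, 61, 64]
ctrl_pre  = [49, 51, 46, 50, 48, 52, 47, 53]
ctrl_post = [57, 59, 52, 58, 55, 61, 54, 62]

def gains(pre, post):
    """Per-student progress between the two test sessions."""
    return [b - a for a, b in zip(pre, post)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)  # sample variances (n - 1 denominator)
    return (mean(x) - mean(y)) / ((vx / nx + vy / ny) ** 0.5)

g_test, g_ctrl = gains(test_pre, test_post), gains(ctrl_pre, ctrl_post)
print(f"mean gain, experimental group: {mean(g_test):.2f}")
print(f"mean gain, control group:      {mean(g_ctrl):.2f}")
print(f"Welch t statistic:             {welch_t(g_test, g_ctrl):.2f}")
```

A large positive t value would suggest that the experimental group progressed more than the control group; real level-1 studies also report effect sizes and control for school-level factors, as the article describes.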


LEVEL 2 repeats the same design as level 1, except that the “active control” group is replaced by a “passive control” group. Students in this group receive no intervention, or are placed on a waiting list to receive it after the study. Although this type of study includes a reference group, it cannot avoid well-known psycho-social effects: when students and teachers know they are participating in a specific intervention, they have positive expectations about it, which may influence the study. This is the famous “placebo effect”. There is thus a risk of obtaining biased or non-reproducible results.

LEVEL 3 corresponds to “sole treatment” protocols. This time, the intervention is offered to a single group of students, tested before and after the intervention. No comparison with a reference group is therefore possible. This type of protocol is used to test the feasibility of an intervention but allows no interpretation of the results: even if the students tested make progress, there is nothing to prove that other students who did not receive the intervention would not have progressed just as much on the variables measured.





Research in cognitive science applied to education continues the work of the French founders of the educational sciences, which focused on identifying and validating tools that enable all children to reach their full potential. Henri Wallon created a laboratory of child psychobiology in 1922, then the journal Enfance in 1947. A chair in child psychology and education was even opened for him at the Collège de France in 1937. Gaston Mialaret, who succeeded him at the head of the New French Education group, was a primary school teacher and then a professor of mathematics. In 1967, he obtained a chair in psychology at the University of Caen, which he titled a chair in educational sciences, giving birth to a new university department. His work illustrates the constant effort to compare teaching practice with the results of educational research. He placed considerable importance on teacher training, whose main goal, for him, was to enable teachers to develop a scientific attitude to facts. Finally, he placed the child at the centre of the educational system, insisting on the need to take account of the variety of psychological processes at work in and through the act of teaching.

Photo: Gaston Mialaret, in 2011. Pierre Lalongé/Université du Québec/Chicoutimi






Édouard Gentaz
Professor of developmental psychology

His research deals with the sensory-motor, cognitive, affective and social development of children, learning and visual deficiency. Author of numerous books and articles, he is chief editor of the journal Anae, on the neuropsychological approach to learning in children.


November 11, 2018
