Generative AIs can be used to design interesting activities inside and outside the classroom. They will probably play a role in education, and in certain cases are already playing one. But what role, exactly? How should a teacher engage their students, and in what contexts? We will argue that, at this point, teachers can and should restrict their use of generative AIs to out-of-class activities.
“At this point”
Things have been moving incredibly fast. In October 2022, when the first version of this textbook appeared, ChatGPT didn’t even exist. One year later, one can find platforms on the web offering generative-AI-powered tools for education. The speed of progress is such that what is valid at this point (November 2023) may no longer be true in a few months’ time. Perhaps some of the flaws we are seeing today will be corrected. Perhaps teachers will have been offered enough training to work around these flaws. Perhaps school or national authorities will have provided instructions as to what can or should be done. It is essential to stay informed.
“Policy questions”
AI is presenting ministries with hard challenges. On one hand, it is desirable to teach pupils in such a way as to prepare them for tomorrow’s world. After all, the numbers showing how the jobs market will be impacted make it reasonable, if not necessary, to start teaching pupils about AI early1. On the other hand, it may seem unsafe to use technologies which haven’t yet proven their reliability. These safety concerns are especially visible around privacy2, and it is still quite unclear what effect these technologies will have on learning3.
Industry is pushing for the adoption of its products, while parents are sending contradictory messages: some prioritise teaching the fundamentals (reading, writing, counting), while others stress the necessity of learning job-related skills. This division complicates the task of policy makers.
Decisions may take time, but when they come, teachers will want to understand them.
About ‘safe environments’
Much data is going to be exchanged during sessions with generative AIs. Teachers and pupils may easily give away data which can rapidly become personal, and without the right safeguards, this data can be directly associated with each individual. The GDPR does protect individuals, but it is too early to know whether these laws will be sufficient. Some countries have introduced safe-school environments in which anonymisation is the rule: the online activities are not logged outside the school servers in association with individual users.
Data safety questions are numerous, and it isn’t easy for a teacher to be sure that their rights and those of their pupils are respected. How long will the data be stored? For what purpose, and by whom, will they be used? Can teachers make decisions on behalf of their pupils? The complexity of these questions explains why it is never a good idea simply to register pupils on external platforms, unless the authorities have made the necessary checks.
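To make the idea behind such anonymised environments concrete, here is a minimal Python sketch of pseudonymisation: the external platform only ever sees a keyed pseudonym, so its logs cannot be directly associated with an individual pupil without the school’s secret key. All names here are illustrative; a real safe-school environment would rely on vetted infrastructure, not an ad-hoc script.

```python
import hashlib
import hmac

# Hypothetical school-held secret: in practice it would live only on the
# school's servers and would never be shared with the external platform.
SCHOOL_SECRET = b"replace-with-a-long-random-key"

def pseudonymise(pupil_id: str) -> str:
    """Return a stable pseudonym for a pupil.

    The same pupil always maps to the same pseudonym (so activity can
    still be tracked per account), but reversing the mapping requires
    the school's secret key.
    """
    digest = hmac.new(SCHOOL_SECRET, pupil_id.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()[:16]

# Stable for the same pupil, distinct between pupils.
print(pseudonymise("alice.martin"))
print(pseudonymise("bob.durand"))
```

The design choice of a keyed hash (HMAC) rather than a plain hash matters: with a plain hash, anyone could test candidate names against the logged pseudonyms.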
Out-of-class activities
One can already find many examples of activities in which a teacher can engage with generative AIs, either at home or in an office, and without pupils. Among these, let us mention preparing classroom activities, writing tests, searching for information and exploring the topic of the next lecture. There is a general impression that in these situations AI allows teachers to explore more widely, to find new ideas and to present material in a better way. Even if there are also a number of problems (lack of references, hallucinations, bias), the balance is generally seen as positive.
Most importantly, teachers report saving time. For once, technology is not just promising to do better, but to do better with less effort.
Arguments in favour of inside-the-class activities
If generative AIs are set to play an important role in the future, and making sensible use of AIs will constitute a skill in the jobs market, surely pupils should learn, with a teacher, how to use them correctly. Indeed, this would address both technical and ethical aspects of AI.
Speaking with students about these technologies today is rewarding but worrying. On one hand, they are already users; on the other, they hold strong misconceptions, in particular when it comes to trust.
Arguments against inside-the-class activities
On the other hand, anyone who has tested these tools will understand how difficult it is to teach with a tool whose output is so unpredictable. Run a generative AI three times with the same prompt and you will probably get three different results. This variability is, in fact, an asset for the technology, but it can put an untrained teacher (and also a skilled one!) in a rather uncomfortable position. Imagine a chemistry teacher asking pupils to all run the same experiment, only to then observe a bang here, red smoke there and a strange smell at the back of the room.
It would prove to be interesting but quite challenging to give convincing general explanations… or even individual ones.
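This unpredictability is not an accident: generative models sample each next token from a probability distribution, typically controlled by a temperature parameter. The following toy Python sketch (with made-up token scores, not a real model) shows why three runs of the same prompt can give three different continuations, and why a low temperature makes the output nearly deterministic.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a toy next-token distribution.

    Higher temperature flattens the distribution, increasing variety;
    temperature near 0 makes the choice almost deterministic.
    """
    # Scale scores by temperature, then apply a numerically stable softmax.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}

    # Draw a token proportionally to its weight.
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Made-up scores for the word following some fixed prompt:
logits = {"smoke": 2.0, "light": 1.5, "liquid": 0.5}

# Three runs of the "same prompt" can give three different answers.
print([sample_next_token(logits, temperature=1.0) for _ in range(3)])
```

With `temperature=0.01` the highest-scoring token (`"smoke"`) is chosen almost every time; with `temperature=1.0` the alternatives appear regularly, which is exactly the variability the classroom scenario above describes.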
So…
At this point, teachers can safely test generative AIs outside the classroom. This will help them understand how the technology works, and also discover the possibilities their pupils will most likely find. Not remaining naive about generative AIs is essential. Furthermore, as more and more teachers are indicating through testimonials, this is a chance to use a technology which, for once, allows the teacher to save time.
On the other hand, in many situations, it is still a good idea not to use these technologies directly with the pupils.
So how do we help pupils understand?
Again, this will have to be in line with recommendations and rules set by national or school authorities.
Wherever teachers can do this, a first suggestion is to engage with the pupils, perhaps by asking: what is and is not cheating? Discussing this topic will help pupils to understand the complexity of the question.
A second suggestion is that a teacher could try generative AI in the classroom, but avoid using it on a complex, unfamiliar topic. This may seem counter-intuitive, but showing pupils that one doesn’t always know the answer can be helpful. It can even pay off to use generative AI on topics on which the students themselves have expertise – they might spot mistakes and understand that AI is not always right!
Error-spotting can be interesting for pupils. It is much more difficult for a teacher to be challenged by a claim produced by a generative AI and to spot the error on the fly. This isn’t about being right or wrong; teachers are allowed to make mistakes. But having to explain mistakes in a pedagogical way is not that simple.
1 Generative AI likely to augment rather than destroy jobs. ILO report, August 2023. https://www.ilo.org/global/about-the-ilo/newsroom/news/WCMS_890740/lang--en/index.htm
2 After Italy blocked access to OpenAI’s ChatGPT chatbot, will the rest of Europe follow? Euronews, 7/4/2023. https://www.euronews.com/next/2023/04/07/after-italy-blocked-access-to-openais-chatgpt-chatbot-will-the-rest-of-europe-follow
3 Holmes, W., Miao, F., Guidance for generative AI in education and research, UNESCO, Paris, 2023.