In March 2023 [1] and then in May 2023 [2], two petitions/open letters were published, warning about some of the dangers of AI. In both cases they were signed by hundreds of respected scientists and AI specialists from industry. What do we need to know about this?
Is this an interesting question for teachers?
With more mature students, the question of the risks of AI to civilisation will arise. Even if no teacher is obliged to give definitive answers to every question, it is fair to expect an understanding of the contours of the controversy.
As a teacher, should one not just talk about the technical aspects of a topic and leave the human, economic and philosophical issues to specialists?
This is an interesting question, and one over which opinions are divided.
Should a physics teacher know about Hiroshima or Chernobyl? Should conversations about these issues take place in that class? In the case of AI, should a teacher only be able to use the software safely and understand broadly how it works? Or should they also be able to follow the ongoing debates about what AI means for society?
UNESCO's position, shared by other experts, is that artificial intelligence is not just about technology, and that a teacher should understand the ethical issues involved. These include concerns about the impact of AI on society, civilisation and humanity.
Are these new questions?
Some of the questions about the dangers of AI have been around for a while. What happens when AI becomes 'superior' to human intelligence has been discussed for a long time. Irving Good [4], a former colleague of Alan Turing, introduced the notion of the technological singularity as early as 1965. He suggested that once an AI became more intelligent than humans, or super-intelligent, it would be unstoppable. Good went on to advise Stanley Kubrick on 2001: A Space Odyssey, a film featuring an AI going rogue.
The positions
The text of the March open letter [1] warned that AI could do both good and harm, and that its impact on society and on jobs could be considerable. It also introduced the notion that AI would replace humans not only in tedious and undesirable jobs, but also in 'good' jobs that people want to do. Furthermore, it argued that AI was driving changes in society without the usual democratic mechanisms of change being applied.
In the second text [2], the added risk discussed was that of AI going rogue (or a variation of this scenario) and the potential end of human civilisation.
A third position emerged from this debate [3]: AI is indeed a cause for concern, but not for existential reasons, which were masking the more urgent problems.
Is the debate over?
No, the debate is not over. Some scientists maintain that there are many risks, that these technologies are developing too fast and that regulation is required. Others believe that, at present, AI brings only benefits, and that we should be careful but not scared.
It is difficult to say who is winning or losing, who is right or wrong. The debate is reminiscent of the one about physics that took place after 1945.
A common position is to call for regulation, even if there is as yet no regulation on which everyone agrees.
Can there be a sound position?
Actually, both positions are probably sound. The current facts seem to favour the enthusiasts (AI is enabling progress in medicine, agriculture, climate analysis, languages and communication), but the argument that we, as humans, have always found answers has serious limitations.
Where do I find out more about this debate?
For an open-minded person (or teacher) there are numerous potential sources of information: blogs, reliable websites, and position papers and videos from leading scientists, historians and philosophers.
[1] https://futureoflife.org/open-letter/pause-giant-ai-experiments/
[2] https://www.safe.ai/statement-on-ai-risk#open-letter
[4] https://www.historyofinformation.com/detail.php?id=2142
[5] https://en.wikipedia.org/wiki/I._J._Good