The key question of tomorrow’s AI could well be: for whom is AI working?
When you use a tool that is supposed to help you learn, you expect it to do exactly that. But could the tool in fact be optimising a more complex function than simply fulfilling your needs? And does this matter, provided you still get the expected result? Let’s see.
Of course, when AI is built by a private company, it makes sense to understand what its business model is. This will help you understand who the company is working for: if it is a one-off piece of software sold to parents, the parents will need a reason to buy it. If the customers are schools, teachers or governments, the arguments will change, and so will the software.
We should remember that with machine-learning-based AI software, the learning takes place with respect to an objective function. The neural network could be trained to minimise the pupil’s learning time, to maximise quiz results, or to optimise some combination of the two.
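To make the point concrete, here is a minimal sketch in Python of what such a combined objective might look like. The function name, the weights and the numbers are invented for illustration; they do not describe any real product.

```python
# Hypothetical sketch: combining two training objectives into a single loss.
# The weights and names are invented; they do not describe any real product.

def combined_loss(learning_time_min, quiz_score, w_time=0.5, w_score=0.5):
    """Lower is better: penalise long learning times, reward high quiz scores."""
    return w_time * learning_time_min - w_score * quiz_score

# Whoever sets w_time and w_score decides what the system really optimises.
print(combined_loss(learning_time_min=45.0, quiz_score=80.0))  # -> -17.5
```

Whoever chooses those weights decides what “success” means for the system – and hence for whom it is really working.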
But in many cases, the learning will take place in a social environment, and the AI’s recommendation may affect the individual as well as the whole group.
To explore this idea, let’s look at how Waze works. It is a popular traffic navigation system. It’s not used much in schools, but teachers like to avail of it in order to be on time!
Waze
Waze is a navigation app used by drivers to find their route. Waze is used by 150 million people each month. It has many social network features but much of the data it uses to analyse traffic conditions does not come from official open data repositories or cameras, but from users themselves1.
For those who don’t use Waze, here’s a simple summary of how it works – you are on your way to work, same as every other day. You know your way but you still use Waze.
So do many of the drivers around you. On your map, you will find the route computed to bring you to your destination. You will be told the estimated time of arrival, which is updated every few minutes as local traffic conditions change. You may also be told that there is an object on the road in 260 m, a car accident in 1 km, a traffic jam in 3 km. Depending on these updates, the system may propose an alternative route which will save you seven minutes…
For this to work, you, as a Wazer, will be entering information and warning fellow Wazers, via the system, that there is an animal wandering where you are or – and this is important – that the animal or object is no longer there.
Where is the AI?
There is AI in the computation of expected times, the routes, etc. This means taking into account static information (distances) but also dynamic information (the speeds of the cars). Waze will also use your own history to take into account your driving patterns2. Waze will even know whether or not the traffic lights are synchronised to your advantage.
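As an illustration only – the road segments and the blending weight below are invented, not Waze’s actual method – an ETA computation might combine static information (segment lengths) with dynamic information (live speeds) roughly like this:

```python
# Illustrative only: blending static information (segment lengths) with
# dynamic information (live speeds) to estimate a time of arrival.
# The segments and the blending weight are invented for this sketch.

segments = [
    # (length_km, historical_speed_kmh, live_speed_kmh)
    (2.0, 50, 20),   # a currently congested stretch
    (5.0, 90, 85),
    (1.5, 30, 30),
]

def eta_minutes(segments, live_weight=0.7):
    total_hours = 0.0
    for length_km, historical, live in segments:
        speed = live_weight * live + (1 - live_weight) * historical  # dynamic + static
        total_hours += length_km / speed
    return total_hours * 60

print(f"Estimated time of arrival in {eta_minutes(segments):.1f} minutes")
```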
But there is more. When a Wazer enters new information, how does the system take it into account? Suppose I warn that the road is blocked: what happens then? A human expert could check the facts (are other users saying the same?), use a model that tells them how much credence should be given to this particular user, check whether the user has really stopped… The AI will do the same.
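Here is a toy sketch of that idea, with made-up reliability scores and an arbitrary threshold: each report is weighted by the reporter’s past reliability before the system decides whether to act on it.

```python
# A toy model of deciding whether to trust a "road blocked" report:
# weight each report by the reporter's past reliability and check agreement.
# Reliability scores and the threshold are made up for illustration.

reports = [
    # (says_blocked, reporter_reliability between 0 and 1)
    (True, 0.9),
    (True, 0.6),
    (False, 0.4),
]

def road_probably_blocked(reports, threshold=0.5):
    weight_yes = sum(rel for says, rel in reports if says)
    weight_all = sum(rel for _, rel in reports)
    return weight_yes / weight_all > threshold

print(road_probably_blocked(reports))  # True: the more reliable reporters agree
```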
And more. When the system detects a traffic jam on the usual road, it will send users on a different route. But how can the system know that the traffic jam has eased if it doesn’t send any users into it to check? The users already stuck cannot provide that information. So the system has to send some traffic into the problem area to find out whether the problem has been solved.
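This is a classic exploration/exploitation dilemma. A minimal sketch, using a simple epsilon-greedy rule assumed here purely for illustration (Waze’s actual policy is not public), looks like this:

```python
import random

# Epsilon-greedy sketch of the dilemma above: mostly route drivers around the
# reported jam (exploit), but occasionally send one through it (explore) so the
# system learns whether the jam has cleared. The 5% rate is purely illustrative.

def choose_route(epsilon=0.05):
    if random.random() < epsilon:
        return "through the reported jam"   # explore: refresh the information
    return "around the reported jam"        # exploit: best known option

counts = {"around the reported jam": 0, "through the reported jam": 0}
for _ in range(1000):
    counts[choose_route()] += 1
print(counts)  # roughly 950 around, 50 through
```

The drivers sent through the jam pay a cost so that everyone else gets better information – another instance of the question “for whom is the system working?”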
Some ethical considerations?
There are a number of ethical considerations:
- Waze knows a lot about you – where you live and work, your usual stops, your habits. It will propose adverts to which you may or may not respond.
- In order to satisfy as many customers as possible, Waze has to solve many exploration/exploitation dilemmas such as the one above. How does it make that decision? Is there a right way of making that decision?
- Using these tools regularly does have consequences for our capacity to solve our own problems. It is now known that our (human) cognitive capacities are being affected. As an example, which is surely not an isolated one, an author of this textbook was using Waze one morning. The system told him to leave the highway to avoid congestion. After two kilometres on a pleasant secondary road, Waze changed its mind and suggested that the best route was to drive back to the highway. What matters in this example is not that the system changed its optimised route, which makes sense, but that our dependency on such AI-driven systems makes us incapable of making our own judgements3.
Consequences for education
To our knowledge, this issue of group handling does not occur in education – yet. When resources are unlimited (access to a web platform, for example), the situation is of little consequence. But suppose the resources are limited: only three pupils can use the robot at the same time. In this case, an AI system will propose which pupils should have access to the robot. Many factors could govern the decision. If the system wants to be fair, the decision may be random. But many will not be happy with that. If the system wants to obtain the best results for the whole classroom, it may allocate more resources to disadvantaged children. But if the system is given the task of ensuring that at least 90% of the pupils achieve grade XYZ at the end of term, this does not mean that each pupil now has a 90% chance of success; rather, it may well mean that 10% of the pupils are certain to fail. The toy sketch below illustrates this last point.
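The sketch uses entirely invented scores and a hypothetical greedy allocation rule; it is not how any real system is known to work, only an illustration of how a “90% pass” objective can quietly write off the pupils who are furthest behind.

```python
# Toy illustration of the point above: if the objective is "at least 90% of
# pupils reach the pass mark" and tutoring slots are limited, a greedy
# optimiser will favour the pupils who are cheapest to push over the line and
# may simply write off the pupil furthest behind. All numbers are invented.

pupils = {  # pupil -> predicted end-of-term score without extra help
    "A": 90, "B": 88, "C": 85, "D": 82, "E": 80,
    "F": 78, "G": 72, "H": 70, "I": 68, "J": 40,
}
PASS, BOOST, SLOTS = 75, 10, 3  # pass mark, gain from a tutoring slot, slots available

# Greedy strategy for the 90% target: help those closest to (but below) the mark.
below = sorted(((n, s) for n, s in pupils.items() if s < PASS), key=lambda x: -x[1])
helped = {name for name, _ in below[:SLOTS]}

passed = [n for n, s in pupils.items()
          if s >= PASS or (n in helped and s + BOOST >= PASS)]
print(f"helped: {sorted(helped)}, pass rate: {len(passed) / len(pupils):.0%}")
# helped: ['G', 'H', 'I'], pass rate: 90% -- pupil J receives no help at all.
```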
The role of the teacher
An AI-era teacher must understand how such systems work, what the caveats of the algorithms are, and that she/he must make the decisions. Easier said than done. A teacher can use an AI system because, as with the navigation tool described above, the tool can benefit everyone. But a teacher can, and should, contrast the decision proposed by the AI with their own experience. Wasting 15 minutes on the road isn’t a big deal. But making the wrong call for your pupils is.
1 https://www.cozyberries.com/waze-statistics-users-facts/ and https://www.autoevolution.com/news/waze-reveals-how-many-users-run-the-app-on-android-and-iphone-197107.html for some facts and figures concerning Waze.
2 Petranu, Y. Under the Hood: Real-time ETA and How Waze Knows You’re on the Fastest Route. https://medium.com/waze/under-the-hood-real-time-eta-and-how-waze-knows-youre-on-the-fastest-route-78d63c158b90
3 Clemenson, G.D., Maselli, A., Fiannaca, A.J. et al. Rethinking GPS navigation: creating cognitive maps through auditory clues. Sci Rep 11, 7764 (2021). https://doi.org/10.1038/s41598-021-87148-4