
Homogenisation

A lot of money, computing resources, time and effort go into the creation of datasets, benchmarks and algorithms for machine learning. This is particularly true for deep learning and large-scale models. It therefore makes sense for the resources created to be shared within this ecosystem, and this is the case with many of the ML systems we use every day. Even when the end products are different and are made by different companies, the methodologies, datasets, machine-learning libraries and evaluations behind them are often shared1. Thus, there is reason to expect their outputs to be similar under similar conditions.

If the output is an educational decision, this raises concerns: a student might, for example, be unfairly rejected from every educational opportunity1. Whether or not algorithmic homogenisation constitutes an injustice, however, can only be decided on a case-by-case basis1.

On the other hand, if the system’s task is to help the student write, it brings into focus the standardisation of writing styles and vocabulary, and hence of thought patterns. The language models used in these cases are designed to predict the most probable text, based on their training datasets. Even when these datasets are not shared between systems, they are constructed in similar ways, often from public internet data. And even when this data is screened for bias, prejudice and extreme content, it represents only a small ecosystem and not the world in all its diversity of ideas, cultures and practices. Predictive text systems based on deep learning, used for text messages and emails, have been shown to change the way people write. The writing tends to become “more succinct, more predictable and less colourful”2.
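To make the mechanism concrete, here is a minimal sketch of next-word prediction in Python, assuming a tiny invented corpus and a simple bigram frequency model. Real systems use deep neural networks rather than raw counts, but the homogenising pull towards the most frequent continuation in the training data is the same.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration); real training data would be
# billions of words scraped from the public internet.
corpus = (
    "the meeting went well . the meeting went fine . "
    "the meeting went well . the meeting ran long ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word: str) -> str:
    """Return the single most probable next word seen in training."""
    return bigrams[prev_word].most_common(1)[0][0]

print(predict("went"))  # -> "well": it beats "fine" 2 to 1 in the corpus
```

Every user who types “went” is nudged towards the same suggestion, simply because it was the most frequent continuation in the training data; this is, in miniature, how writing becomes more predictable and less colourful.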

Sequences of words that are repeated in the training data trickle down into the output of large language models. The values of the dataset creators thus gain the power to curb alternative opinions and plural expressions of ideas. Without proper pedagogical interventions, this might limit students’ creativity and originality, leading not only to formulaic writing but ultimately to citizens with weaker critical-thinking skills, and thus to an overall less colourful world3.

Invisibility

Closely linked with many of the negative fallouts of machine learning, including the homogeneity discussed above, is the fact that these technologies have become so advanced that the human-machine interface is seamless and practically invisible. Whether it is the search engine incorporated into the browser’s address bar, or text prediction that works intuitively, with no time lag between writing, predicting and choosing suggestions, we often act under the influence of technology without being consciously aware of it and without the chance to pause, rethink the situation and make our own decisions. Moreover, when we use technology habitually to make decisions, we tend to forget its existence altogether4. “Once we are habituated to technologies, we stop looking at them, and instead look through them to the information and activities we use them to facilitate”4. This raises such serious concerns about human agency, transparency and trust, especially where young minds are concerned, that experts have recommended that interfaces be made more visible and even unwieldy4.

What’s beyond: an ethical AI

In each part of this open textbook, we have discussed pedagogical, ethical and societal impacts of AI, especially data-based AI. Data and Privacy, reliability of content and user autonomy, impact on personal identity, Bias and Fairness, and Human agency were all discussed on their respective pages. Issues specific to search engines were discussed in Behind the Search Lens: Effects of search on the individual and the society; problems relating to adaptive systems were dealt with in The Flip Side of ALS; and those particular to Generative AI in The Degenerative. In several places throughout the book, we looked at remedial measures that can be taken in the classroom to deal with specific problems. Our hope is that these measures will become less onerous once we have ethical and reliable AI systems for education. Such an ethical AI would be developed, deployed and used in compliance with ethical norms and principles5, and would be accountable and resilient.

Since we cede so much power to AI models and to their programmers, sellers and evaluators, it is only reasonable to ask them to be transparent, to assume responsibility and to remedy errors when things go wrong6. We need service-level agreements that clearly outline “the support and maintenance services and steps to be taken to address reported problems”5.

A resilient AI would accept its imperfections, expect them and work in spite of them. Resilient AI systems would fail in predictable ways and have protocols for dealing with these failures6.
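As a loose illustration of what “failing in a predictable way” can mean in software terms, here is a minimal sketch in Python. Everything in it is hypothetical (the predict function, the fallback message): the point is the protocol, in which failures are anticipated, surfaced explicitly and routed to a defined fallback rather than allowed to produce silent wrong answers.

```python
import logging

logger = logging.getLogger("resilient_ai")

# A clearly-marked fallback, so a failure is never mistaken for an answer.
FALLBACK = "unable to answer - please review manually"

def predict(question: str) -> str:
    """Hypothetical model call; may raise on bad input or service errors."""
    if not question.strip():
        raise ValueError("empty question")
    return "a model-generated answer"

def resilient_predict(question: str) -> str:
    """Fail predictably: log the error and return the defined fallback."""
    try:
        return predict(question)
    except Exception as exc:  # the failure mode is expected and named
        logger.warning("prediction failed (%s); using fallback", exc)
        return FALLBACK

print(resilient_predict("What is 2 + 2?"))  # normal path
print(resilient_predict("   "))             # predictable, visible failure
```

The design choice worth noting is that the fallback is explicit and logged: a human can see that the system declined to answer, instead of receiving a confident-looking but unreliable output.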

In education, ethical AI should be guided by user-centred design principles and take into account all aspects of education7. Teachers would be able to inspect how it functions, understand its explanations, override its decisions or pause its use without difficulty8. These systems would reduce teacher workload, give them detailed insights into their students and support them in enhancing the reach and quality of education8. They would not cause harm to their users or the environment and would enhance the social and emotional well-being of learners and teachers5.

Until that day comes, teachers will have to build and participate in communities of colleagues and educators, to raise awareness of problems, share experiences and best practices, and identify reliable providers of AI. They can also involve students and parents in discussions and decisions, the better to address different concerns and to develop an environment of trust and camaraderie. And they will be best served by staying up to date with the latest trends in AIED and acquiring competencies when and where possible5.



1 Bommasani, R., et al., Picking on the Same Person: Does Algorithmic Monoculture Lead to Outcome Homogenization?, Advances in Neural Information Processing Systems, 2022.

2 Varshney, L., Respect for Human Autonomy in Recommender Systems, 3rd FAccTRec Workshop on Responsible Recommendation, 2020.

3 Holmes, W., Miao, F., Guidance for generative AI in education and research, UNESCO, Paris, 2023.

4 Susser, D., Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures, Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, New York, 403–408, 2019.

5 Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators, European Commission, October 2022.

6 Schneier, B., Data and Goliath: The Hidden Battles to Capture Your Data and Control Your World, W. W. Norton & Company, 2015.

7 Tlili, A., Shehata, B., Adarkwah, M. A., et al., What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education, Smart Learning Environments, 10, 15, 2023.

8 U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, Washington, DC, 2023.

Licence


AI for Teachers: an Open Textbook Copyright © 2024 by Colin de la Higuera and Jotsna Iyer is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
