On the Limits of Artificial Intelligence in Education
ABSTRACT
The recent hyperbole around artificial intelligence (AI) has impacted on our ability to properly
consider the lasting educational implications of this technology. This paper outlines a number of
critical issues and concerns that need to feature more prominently in future educational discussions
around AI. These include: (i) the limited ways in which educational processes and practices can be
statistically modelled and calculated; (ii) the ways in which AI technologies risk perpetuating social
harms for minoritized students; (iii) the losses incurred through reorganising education to be more
‘machine readable’; and (iv) the ecological and environmental costs of data-intensive and device-intensive
forms of AI. The paper concludes with a call for slowing down and recalibrating current
discussions around AI and education – paying more attention to issues of power, resistance and the
possibility of re-imagining educational AI along more equitable and educationally beneficial lines.
Introduction
The past twelve months have seen artificial intelligence (AI) attract heightened levels
of popular and political interest rarely seen before in the sixty-year history of the field.
Much of this has been fuelled by financiers chasing quick profits, policymakers keen
to appear supportive of national innovation, and Big Tech corporations scrambling to
catch up with more agile specialist start-ups. One consequence of this furore is the
difficulty of now engaging in balanced and reasoned discussions about the societal
implications and challenges of AI. For example, we have reached a point where the
majority of US adults are now prepared to accept that “the swift growth of artificial
intelligence technology could put the future of humanity at risk” (Reuters, 2023).
This special issue of the Nordic Journal of Pedagogy & Critique therefore comes at a
moment when a lot is being said about AI, though little of it is likely to hold up to
scrutiny a few years hence.
While not suffering the extreme peaks and troughs of general public discussions
around AI, the education sector has also been experiencing its own version of AI-fever.
This has perhaps been most obvious in educational reactions to ChatGPT and
other ‘generative AI’ writing tools capable of producing pages of plausible-sounding
text in response to short written prompts. At the beginning of 2023, initial publicity
around this particular form of AI raised widespread concerns over the likelihood of
students using such tools to fraudulently produce written assignments. This triggered
a succession of university and school-wide ‘bans,’ hasty reformulations of assessment
tasks, and the rapid marketing of new AI counter-measures claiming to be capable
of detecting algorithmically-generated writing. Observing this from the outside, it
seemed alarming how quickly the educational debate around ChatGPT spiralled out
of control, with many otherwise sober commentators reaching extreme conclusions
over the transformative implications of this technology.
This paper calls for more reasoned responses to the educational possibilities of AI.
While educators should not completely ignore recent developments around machine
learning, large language models and the like, there is certainly a need to resist the
more extreme hopes and fears that the idea of AI technology continues to provoke. At
the same time, there is also a need to better engage with complex issues and concerns
that have so far tended to remain sidelined in educational discussions around AI.
This requires sustained, ongoing and open dialogue that brings in perspectives not
usually given space in conversations around digital innovation and education futures.
In particular, this requires paying closer attention to the experiences and standpoints
of those groups likely to gain least (and likely to lose most) from the unfettered
implementation of AI technology in education. To this end, this brief paper sets out
some pertinent starting points from which such discussions can progress in earnest.
use of personalized learning systems to curate online learning content and activities
for different students on the basis of their prior performance.
Crucially, while these applications might seem incredibly sophisticated in
comparison to the educational technologies of the 2000s and 2010s, such examples
all constitute what is termed ‘narrow artificial intelligence.’ In other words, these
AI systems are designed to address one specific task (such as grading essays or
predicting student behaviours). These AI tools are refined using training data relating
to this specific area of education, and then operate within pre-defined boundaries to
recognise patterns in a limited range of input data. Thus, the forms of AI currently
entering our schools and classrooms are far removed (if not totally distinct) from the
speculative forms of AI that often feature in popular discussions of how ‘sentient’
forms of AI might soon replace teachers, render schools obsolete, and even do away
with the need for humans to learn things for themselves. Hence, in contrast to the
fears and hopes that have fast grown up around ideas of ‘general AI,’ ‘digital minds,’
‘superintelligence’ and so-called ‘singularity,’ the first step in establishing a healthy
response to the arrival of AI technologies in schools is to foreground what Divya
Siddarth and colleagues (2021) term ‘Actually Existing AI’ – i.e. the computational
limitations of this technology alongside the IT firms and flows of funding that are
promoting it.
In particular, the idea of actually existing AI pushes us to frame educational AI in
terms of maths, statistics and computation. As Hilary Mason (2018, n.p.) puts it, “AI
is not inscrutable magic – it is math and data and computer programming, made by
regular humans.” Indeed, some elements of the computer science community have
recently begun to deliberately distance themselves from the term ‘AI’ and revert to using
labels that better describe the types of machine learning and algorithmic developments
that underpin their work (see Jordan in Pretz, 2021). Elsewhere, policymakers and
industry actors are also beginning to turn to alternative terms, such as ‘automated
decision making’ and ‘algorithmic forecasting.’ Such linguistic turns reinforce Emily
Tucker’s (2022, n.p.) assertion that “whatever the merit of the scientific aspirations
originally encompassed by the term ‘artificial intelligence,’ it [has become] a phrase
that now functions in the vernacular primarily to obfuscate, alienate and glamorize.”
Recognising AI as a sophisticated form of statistical processing quickly raises
questions over what can (and what cannot) actually be accomplished with these
technologies in education. For example, from this perspective, a seemingly sentient
AI tool such as ChatGPT is more accurately understood as assembling and
re-arranging pre-existing scraps of text taken from the internet in ways that are
statistically likely to resemble larger pieces of pre-existing text. Generative AI – as
with any AI tool – does not ‘know’ or ‘understand’ what it is doing any more than
any other non-human object. Even if it is producing apparently plausible reams of
text, a generative AI language tool has no ‘understanding’ or ‘knowledge’ of what its
output might mean. Instead, just as a parrot can mimic human speech without any
reference to meaning, so too will a large language model – albeit using sophisticated
probabilistic information about how text has previously been put together by human
authors (Bender et al., 2021). At best, then, these are statistical simulations, or more
accurately, replications of human-produced text with none of the human ingenuity,
imagination or insight that was used to produce the original source materials.
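To illustrate this point in concrete terms, consider the following toy sketch – a deliberately simplified illustration, and in no way representative of the scale or architecture of actual large language models – which generates superficially plausible word sequences purely from counts of which words have previously followed which:

import random
from collections import defaultdict

# Toy 'training data' standing in for the web-scale text that generative AI systems ingest
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which word has followed which word in the corpus: a crude stand-in for the
# probabilistic information that a large language model extracts from human writing
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start_word, length=8):
    # Emit words purely by sampling what has previously followed each word.
    # Nothing here 'knows' or 'understands' what the words mean.
    word = start_word
    output = [word]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"

Scaled up to billions of parameters and web-sized training corpora the output becomes far more fluent, but the underlying principle – statistical recombination of prior human writing rather than comprehension – remains the same.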
in a social setting – what Shelby et al. (2022, p. 2) define as “adverse lived experiences
resulting from a system’s deployment and operation in the world.” In terms of the
ongoing educational application of AI, then, one set of concerns relates to what
Shelby et al. refer to as ‘allocative harms’ – i.e. how AI systems are proving prone to
reaching decisions that result in the uneven – and sometimes unfair – distribution
of information, resources and/or opportunities. This is reflected in various recent
reports of ‘algorithmic discrimination’ in education – such as automated grading
systems awarding higher grades to privileged students who fit the profile of those who
historically have been awarded high grades, or voice recognition systems repeatedly
and falsely accusing students with non-native accents of cheating on language tests
(NAO, 2019).
Also of concern are ‘quality-of-service harms’ – i.e. instances where AI systems
systematically fail to perform consistently and to the same standard regardless
of a person’s background or circumstances. This has already come to the fore in
instances where US schools have deployed facial recognition systems that regularly
fail to recognise students of colour (Feathers, 2020), or systems developed to detect
AI-generated writing that discriminate against non-native English speakers, whose
work is more likely to be written formulaically and use common words in predictable
ways (Sample, 2023). Of particular concern is the emergence of educational AI
systems that rely on processes unsuited to disabled and neuro-diverse students – for
example, eye-tracking technologies that take a steady gaze as a proxy for student
engagement (Shew, 2020).
Alongside these concerns are what Shelby et al. term ‘representational harms’ – i.e. the
ways in which AI systems rely on statistical categorisations of social characteristics and
social phenomena that often do not split into neatly bounded categories. This can
lead to misrepresentations of who students are and of their backgrounds and behaviours,
in ways that can perpetuate unjust hierarchies and socially-constructed beliefs about
social groups. Finally, there are concerns over AI technologies adversely impacting on
social relations within education settings – what Shelby et al. term ‘interpersonal harms.’
These include AI-driven ‘student activity monitoring systems’ now being marketed
to allow teachers to surveil students’ laptop use at home, or school authorities using
students’ online activities as the basis for algorithmically profiling those who might
be deemed ‘at risk’ of course non-completion.
Running throughout all these examples is the underpinning concern that even
the most ‘benign’ uses of AI in a school or classroom setting are likely to exacerbate
and entrench pre-existing institutional forms of control. Schools and AI technologies
are similarly built around processes of monitoring, categorising, standardising,
synchronising and sorting. All told, while such exclusionary glitches might not be
a deliberate design feature, AI technologies are proving prone to replicating and
reinforcing oppressions that minoritized students are likely to regularly encounter
during their educational careers. In this sense, one of the most important conversations
we should now be having around the coming-together of education and AI relates to
how AI is imbued with “a tendency to punch down: that is, the collateral damage that
comes from its statistical fragility ends up hurting the less privileged” (McQuillan,
2022, p. 35).
AI as environmental burden
Finally, there is the underpinning concern that the data-intensive and device-intensive
forms of AI currently being taken up in education incur unsustainable ecological and
environmental costs. For example, MIT Technology Review reported in 2019 that
the carbon emissions associated with training one AI model had been estimated to
exceed 626,000 pounds of carbon dioxide (equivalent to the emissions from driving
62 petrol-powered passenger vehicles for twelve months). Similarly, conducting a
‘conversation’ with ChatGPT of between 20 and 50 prompts is estimated to consume
500 ml of water (Li et al., 2023). Thus, in terms of natural resource consumption and energy
drain alone, as Thompson et al. (2021, n.p.) understatedly put it, “the cost of [AI]
improvement is becoming unsustainable.”
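For readers wishing to sense-check the car-equivalence figure, the arithmetic is straightforward – assuming (as the original estimate appears to) the commonly cited US EPA figure of roughly 4.6 metric tonnes, or around 10,100 pounds, of carbon dioxide emitted by a typical petrol-powered passenger vehicle per year:

\[
\frac{626{,}000\ \text{lbs CO}_2}{\approx 10{,}100\ \text{lbs CO}_2\ \text{per vehicle per year}} \approx 62\ \text{vehicle-years of driving}
\]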
It is therefore beginning to be argued that educators need to temper any
enthusiasm for the increased take-up of AI with an awareness of the growing environmental
and ecological harms associated with the production, consumption and disposal of digital
technologies. In this sense, AI should not be seen as an immaterial, other-worldly
technology – somehow weightless, ephemeral and wholly ‘in the cloud.’ In reality,
AI is reliant on a chain of extractive processes that are resource-intensive and carry
deleterious planetary consequences. In short, the growing use of AI technologies in
education comes at considerable environmental cost – implicated in the depletion
of scarce minerals and metals required to manufacture digital technologies, massive
amounts of energy and water required to support data processing and storage, and
fast-accumulating levels of toxic waste and pollution arising from the disposal of
digital technology (see Brevini, 2021).
Given all the above, any enthusiasm for the increased use of AI in education must
address the growing concerns among ecologically-minded commentators that it
might not be desirable (or perhaps even possible) to justify the development and
use of AI technologies in the medium to long term. On the one hand, this requires
proponents of educational AI to explore how the continued use of AI in schools and
universities might be aligned with ‘green-tech’ principles and perhaps make a positive
contribution to forms of eco-growth. In this sense, there is certainly a pressing need
to explore the extent to which educational AI might be oriented toward emerging
developments in areas such as ‘carbon-responsive computing’ and ‘green’ forms of
machine learning. This implies, for example, developing different forms of AI built
around small datasets and refined processing techniques, and moving beyond ‘brute
force’ computational approaches (Nafus et al., 2021).
On the other hand, however, we also need to give serious consideration to the idea
that AI is ultimately an irredeemable addition to education, and needs to be rejected
outright. Strong arguments are being made that the environmental and ecological
harms arising from AI use cannot be offset by efforts to instigate ‘greener’ forms of
carbon-neutral digital technology and ‘cleaner’ forms of renewable energy. As such,
educationalists would do well to be open to the possibility that most – if not all –
forms of AI technology “are intrinsically incompatible with a habitable earth” (Crary,
2022, n.p.). If this is the case, then it makes little sense to continue pushing, in an
era of climate crisis and environmental breakdown, for education to be reframed
around these technologies. From this perspective, then, AI is nothing more than a
dangerous distraction from much more pressing and threatening planetary issues.
We are not objecting to the use of AI tools to solve specific problems within clear
parameters that are set and monitored by actual social communities. We are
objecting to the rhetoric and expansionist practice of offering AI as the solution for
everything. (Couldry, 2023, n.p.)
In this spirit, then, it falls to the education community to now begin to work out
how to shape a new wave of discussions around AI in education that are framed in
more emancipatory, fair, or perhaps simply kinder ways than the brut(ish) forms of
corporate algorithmic control currently on offer. Indeed, there are some burgeoning
examples of how this might be done. On the one hand, we are beginning to see
some radical calls for feminist, queer, decolonial and indigenous reimaginings of
what AI might be (e.g. Adams, 2021; Klipphahn-Karge et al., 2023; Munn, 2023;
Toupin, 2023). On the other hand, a few mainstream public education agencies and
organisations are also beginning to make a decent start in calling for new forms of
AI that emphasize human elements of learning and teaching, that are sympathetic
to education contexts, that involve educators in their conception, development and
implementation, and that are based around values of trust and care that align with shared
education visions. For example, as the US Office of Educational Technology (2023,
p. 10) recently contended:
Use of AI systems and tools must be safe and effective for students. They must
include algorithmic discrimination protections, protect data privacy, provide notice
and explanation, and provide a recourse to humans when problems arise. The
people most affected by the use of AI in education must be part of the development
of the AI model, system, or tool, even if this slows the pace of adoption.
Conclusions
All told, this paper has begun to outline the case for slowing down, scaling back and
recalibrating current discussions around AI and education. While this might not feel like
an easy task, the urgency of current conversations around AI and education is clearly
unproductive in the long run. It makes good sense for educators to try to disconnect
themselves from the apparent imperatives of AI-driven educational ‘transformation,’
and instead work to slow down discussions around AI and education, and introduce
an element of reflection and nuance. Given the technical and social complexity of
AI, it behoves us to try to develop forms of public debate that engage with these
complexities rather than descend into overly simplistic caricatures and fears. Given the
clear inequalities and injustices already arising from AI technologies it also behoves
us to pay closer attention to “the oppressive use of AI technology against vulnerable
groups in society” (Birhane & van Dijk, 2020, n.p.). Moreover, all of the concerns
raised in this paper point to key questions of power – i.e. whoever gets to decide which
AI tools are implemented in education will inevitably wield considerable influence
over what goes on in that education setting. As Dan McQuillan (2023, n.p.) argues:
From this perspective, AI is not a way of representing the world but an intervention
that helps to produce the world that it claims to represent. Setting it up in one way
or another changes what becomes naturalised and what becomes problematised.
Who gets to set up the AI becomes a crucial question of power.
Seen in this light, then, it seems crucial that educators and the wider education
community become more involved in debates and decision-making around who
gets to ‘set up’ AI and education. The future of AI and education is not a foregone
conclusion that we simply need to adapt to. Instead, the incursion of AI into education
is definitely something that can be resisted and reimagined.
Acknowledgements
This paper arises from research supported by funding from the Australian Research
Council (DP240100111).
Author biography
Neil Selwyn has been researching and writing about digital education since the mid-
1990s. He is currently a professor at Monash University, Melbourne. Recent books
include: Should Robots Replace Teachers? AI and the Future of Education (Polity 2019),
Critical Data Literacies (MIT Press 2023, with Luci Pangrazio), and the third edition
of Education and Technology: Key Issues and Debates (Bloomsbury 2021).
References
Adams, R. (2021). Can artificial intelligence be decolonized? Interdisciplinary Science Reviews, 46(1–2), 176–197.
Bender, E., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021). On the dangers of stochastic parrots.
In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610–623).
Birhane, A., & van Dijk, J. (2020, February). Robot rights? Let’s talk about human welfare instead. In Proceedings
of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 207–213). Association for Computing Machinery.
Brevini, B. (2021). Is AI good for the planet? Polity.
Broussard, M. (2021, 22 April). [Tweet].Twitter. https://twitter.com/merbroussard/status/1384934004030418945
Caird, S., Lane, A., Swithenby, E., Roy, R., & Potter, S. (2015). Design of higher education teaching models and
carbon impacts. International Journal of Sustainability in Higher Education.
Christl, W. (2023). Surveillance and algorithmic control in the call centre. CrackedLabs. https://crackedlabs.org/en/
data-work/publications/callcenter/
Couldry, N. (2023, 11 April). AI as colonial knowledge production. University World News. https://www.
universityworldnews.com/post.php?story=2023041014520289
Crary, J. (2022). Scorched earth. Verso.
Feathers, T. (2020, 2 December). Facial recognition company lied to school district about its racist tech. Vice
Motherboard. https://www.vice.com/en/article/qjpkmx/fac-recognition-company-lied-to-school-district-
about-its-racist-tech
Giannini, S. (2023). Generative AI and the future of education. UNESCO. https://unesdoc.unesco.org/ark:/48223/
pf0000385877
Goulden, M. (2018). [Tweet]. Twitter. https://twitter.com/murraygoulden/status/1038338924270297094
Hao, K. (2020, 2 April). AI can’t predict how a child’s life will turn out even with a ton of data. MIT Technology
Review.
Høvsgaard, L. (2019). Adapting to the test. Discourse: Studies in the Cultural Politics of Education, 40(1), 78–92.
Klipphahn-Karge, M., Koster, A., & Bruss, S. (Eds.). (2023). Queer reflections on AI. Routledge.
Li, P., Yang, J., Islam, M., & Ren, S. (2023). Making AI less ‘thirsty’: Uncovering and addressing the secret water
footprint of AI models. arXiv. https://doi.org/10.48550/arXiv.2304.03271
Mason, H. (2018, 3 July). [Tweet]. Twitter. https://twitter.com/hmason/status/1014180606496968704.
McQuillan, D. (2022). Resisting AI. Policy Press.
McQuillan, D. (2023, 6 June). Predicted benefits, proven harms. The Sociological Review: Magazine. https://
thesociologicalreview.org/magazine/june-2023/artificial-intelligence/predicted-benefits-proven-harms
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.
Munn, L. (2023). The five tests: Designing and evaluating AI according to Indigenous Māori principles. AI &
Society. https://doi.org/10.1007/s00146-023-01636-x
Nafus, D., Schooler, E., & Burch, K. (2021). Carbon-responsive computing. Energies, 14(21), 6917.
NAO. (2019). Investigation into the response to cheating in English language tests. National Audit Office. https://
www.nao.org.uk/wp-content/uploads/2019/05/Investigation-into-the-response-to-cheating-in-English-
language-tests.pdf
Pretz, K. (2021, 31 March). Stop calling everything AI, machine-learning pioneer says. IEEE Spectrum. https://
spectrum.ieee.org/stop-calling-everything-ai-machinelearning-pioneer-says
Reuters. (2023, 18 May). AI threatens humanity’s future, 61% of Americans say: Reuters/Ipsos poll. www.reuters.
com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17/
Salganik, M., Lundberg, I., Kindel, A., Ahearn, C., Al-Ghoneim, K., Almaatouq, A., & Altschul, D. (2020).
Measuring the predictability of life outcomes with a scientific mass collaboration. Proceedings of the
National Academy of Sciences. www.pnas.org/content/117/15/8398
Sample, I. (2023, 10 July). Programs to detect AI discriminate against non-native English speakers, shows
study. Guardian. www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-
against-non-native-english-speakers-shows-study
Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla, N., Gallegos, J., Smart, A.,
Garcia, E., & Virk, G. (2022). Sociotechnical harms: Scoping a taxonomy for harm reduction. arXiv. https://
doi.org/10.48550/arXiv.2210.05791
Shew, A. (2020). Ableism, technoableism, and future AI. IEEE Technology and Society Magazine, 39(1), 40–85.
Siddarth, D., Acemoglu, D., Allen, D., Crawford, K., Evans, J., Jordan, M., & Weyl, G. (2021, 1 December). How
AI fails us. https://ethics.harvard.edu/files/center-for-ethics/files/howai_fails_us_2.pdf?m=1638369605
Tennant, C., & Stilgoe, J. (2021). The attachments of ‘autonomous’ vehicles. Social Studies of Science, 51(6),
846–870.
Thompson, N., Greenewald, K., Lee, K., & Manso, G. (2021, 24 September). Deep learning’s diminishing
returns. IEEE Spectrum. https://spectrum.ieee.org/deep-learning-computational-cost
Toupin, S. (2023). Shaping feminist artificial intelligence. New Media & Society, 14614448221150776.
Tucker, E. (2022, 17 March). Artifice and intelligence. Tech Policy Press. https://techpolicy.press/artifice-and-
intelligence/
U.S. Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning. Washington,
DC: U.S. Department of Education.
Versteijlen, M., Salgado, F., Groesbeek, M., & Counotte, A. (2017). Pros and cons of online education
as a measure to reduce carbon emissions in higher education in the Netherlands. Current Opinion in
Environmental Sustainability, 28, 80–89.
Wajcman, J. (2019). How Silicon Valley sets time. New Media & Society, 21(6), 1272–1289.
Winner, L. (1978). Autonomous technology. MIT Press.