Are we witnessing the death of creativity? The Use of AI in Education
In late 2022, OpenAI’s ChatGPT made its debut on the technological stage, and the world has not looked back since. The idea of Artificial Intelligence has circulated for some years now: technology aimed at simulating human learning, comprehension and creativity to the point where human intervention becomes unnecessary. ChatGPT represents a specific facet of this technology known as generative AI, which uses deep learning models to produce automatic, original content. Promising human-like responses to any query, dilemma or task, ChatGPT was an instant success, garnering one million users within just five days.[1] Yet whilst ChatGPT was hailed by many as the future brought to the present, the dangers of the creation have been glaringly apparent since its inception.
Photo Credit: Unsplash
Paralleling a Promethean fate, the prospect of AI self-improving beyond the capabilities of humanity poses the very real risk of scientific ambition overshooting humanity’s boundaries. Indeed, even Geoffrey Hinton, the ‘Godfather of AI’, quit his job as one of Google’s top artificial intelligence scientists to warn of the dangers of this technological breakthrough.[2]
Much like Pandora’s Box, now that ChatGPT has been unleashed and we delve further into an age of technology, the use of AI in everyday life is here to stay. Therefore, when investigating the positive and negative implications of this new technology, one of the most interesting avenues to explore is its impact on education. Responsible for the cultivation of future generations, the education sector’s immense responsibility to do right by its pupils makes it a critical area where AI must be used ethically. Considering that the education system is notoriously under heavy pressure, there is clear logic in using AI as a tool to improve the sector. Indeed, in 2024, 41% of teachers reported their workload as ‘unmanageable’, with a further 37% reporting it as ‘only just manageable’.[3] In fact, only 1% of teachers described their workload as manageable ‘all the time’.[4] ChatGPT has therefore been woven into the education system in a bid to free up teachers’ schedules, in the hope of delivering more efficient and productive education for pupils. Working as a ‘virtual assistant’, ChatGPT can draft lesson plans, produce learning resources and handle administrative tasks in the blink of an eye, trimming teachers’ extended workdays to a feasible length.
Acknowledging the potential of AI in this sector, the UK Government has encouraged working with ChatGPT rather than against it. Consequently, £2 million has been invested in AI tools for the widely used online learning platform Oak National Academy.[5] Alongside this, the Government has arranged hackathons to merge theory with reality by encouraging collaboration between data scientists and teachers.[6]
Whilst these implementations may appear promising, the use of AI in education still treads a precarious line. The introduction of ChatGPT brings with it many risks: issues of misinformation, concerns about plagiarism and intellectual property, the counterproductive use of technology in an increasingly anxious, screen-dependent generation and, finally, the threat that ChatGPT poses to critical thinking.
Although incredibly impressive, it is essential to remember that ChatGPT is still in its early stages of development. As a result, generative AI often produces inaccurate and biased content. ChatGPT processes text piece by piece, often failing to comprehend whole statements or questions in their full context.[7] Thus, the system is often guilty of ‘hallucination’, a term describing the presentation of false information in a factual and formal tone.[8] ChatGPT does this by citing fictional news articles or academic texts, or by using false citations to make its output appear reliable and trustworthy. Moreover, despite having access to the entire internet, ChatGPT suffers from a surprising lack of source traceability, making it difficult to trace the content it produces back to reputable authors.[9] This is particularly problematic within education, as it not only jeopardises the content taught to younger generations but also undermines young people’s attitudes towards academic integrity.
Furthermore, the outputs produced by generative AI also carry normative bias, replicating the most dominant and popular opinions available.[10] Marginalised voices are often misrepresented or eliminated as a result. Information received from ChatGPT is consequently likely to mirror homogenous patterns with little originality or diversity, narrowing the range of expressions and perspectives that children ought to be exposed to.[11] The increasing technological connectivity of the world has already led to ‘information pollution’, whereby a crisis of both misinformation and disinformation consistently amplifies hateful and discriminatory sentiments.[12] Whilst we have less control over what information children consume online, it is vital that the information they receive within educational institutions is not only correct, but fair and unbiased.
Photo Credit: Unsplash
Recognising that ChatGPT was not designed with educational contexts in mind, these issues stem from a crucial lack of accountability for producing sound educational information. The National Education Union emphasises this in its reminder that ‘No AI products have been independently evaluated in relation to education; there is no robust research on their use in schools’.[13] Whilst the government hackathons are an attempt to address this, their results will not be fully apparent for at least a year. It is therefore worth noting that we are throwing children’s education into the deep end of a relatively unknown technological innovation, about whose long-term educational consequences we know very little.
Whilst the threat of ChatGPT’s unknown risks is severe, it is also vital to discuss the known risks of mixing technology with childhood development. Today’s children have been nicknamed the ‘anxious generation’, and there is a wealth of research condemning their use of technology.[14] Once again, the classroom is one of the few places in an increasingly techno-centric world where children’s access to online media can be limited and regulated. Wayne Holmes outlines the importance of the human element of learning, stating that the use of AI can actually ‘downplay the crucial role of education in community-building and social skills development, ignore the holistic development of students, and potentially perpetuate socio-economic and cultural disparities’.[15] The introduction of yet another strand of technology therefore seems at odds with the popular move to take children further away from the digital world, rather than closer to it.
Alongside this is the fear that children may follow their teachers’ example. As with much of the internet, it is likely that introducing partial use of AI within schools will eventually snowball into students adopting these tools themselves, especially as they get older. Indeed, students are already turning to AI for assistance with homework and exam revision. What may then occur is a cyclical disaster whereby schools become places where generated content is merely passed back and forth, reproducing ever more limited narratives.
When surrounded by technology, the developing brain of a child becomes receptive to ‘impoverished’ stimulation, which ultimately interferes with creativity and critical thinking skills.[16] The use of AI only exacerbates this. Indeed, a study examining the impacts of cognitive offloading found that over-reliance on AI leads to significantly decreased critical thinking.[17] Designed to optimise efficiency, AI use curtails independent analysis and problem-solving, since the mind no longer actively engages in tasks and challenges.[18] Essential for higher-level jobs, social judgement and the fostering of individuality and creativity, critical thinking is a vital skill that should be cultivated in younger minds.[19] Linking back to the issue of tackling misinformation, critical thinking is also vital for processing and discerning the evidence presented to individuals. ChatGPT directly hinders this, further obfuscating the real from the fake. Whilst ChatGPT is so far only being implemented as a tool for teachers, the worry is that children will follow by example, becoming overly reliant on ChatGPT and ultimately missing out on fostering the necessary critical and creative thinking patterns.
‘Creativity means seeing what others see, and thinking what no one else has thought’ – Albert Einstein
In a world that is increasingly overwhelmed by false news, hateful rhetoric and rising extremist views, it is essential that individuals are taught to rely on their own instincts and to process information discerningly. The use of ChatGPT, even as a tool, poses a huge obstacle to this. Not everything in life is meant to be quick and efficient; deeper chains of thought and creative breakthroughs occur in moments where active engagement is required. ChatGPT fails to imbue creativity into young and adult minds, because it fails to ‘think what no one else has thought’. If anything, ChatGPT represents a step backwards, whereby the same normative, unoriginal information is produced and circulated. Whilst a solution to alleviate the heavy workload of teachers is certainly necessary, ChatGPT is not the answer. The implementation of ChatGPT into the education system must therefore proceed with caution, and the significance of human creativity, critical thinking and expansive discussion must be preserved at all costs.
[1] Bernard Marr, ‘A Short History of ChatGPT: How We Got to Where We Are Today’ (Forbes, 19 May 2023) <https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/>.
[2] Tamlyn Hunt, ‘Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not’ (Scientific American, 25 May 2023) <https://www.scientificamerican.com/article/heres-why-ai-may-be-extremely-dangerous-whether-its-conscious-or-not/>.
[3] National Education Union, ‘State of Education: Workload and Wellbeing’ (National Education Union, 3 April 2024) <https://neu.org.uk/latest/press-releases/state-education-workload-and-wellbeing>.
[4] National Education Union, ‘State of Education: Workload and Wellbeing’ (National Education Union, 3 April 2024) <https://neu.org.uk/latest/press-releases/state-education-workload-and-wellbeing>.
[5] Education Hub, ‘Artificial Intelligence in Schools – Everything You Need to Know’ (Blog.gov.uk, 6 December 2023) <https://educationhub.blog.gov.uk/2023/12/artificial-intelligence-in-schools-everything-you-need-to-know/>.
[6] Education Hub, ‘Artificial Intelligence in Schools – Everything You Need to Know’ (Blog.gov.uk, 6 December 2023) <https://educationhub.blog.gov.uk/2023/12/artificial-intelligence-in-schools-everything-you-need-to-know/>.
[7] Georgia Koromila, ‘LibGuides: Generative Artificial Intelligence and University Study: Limitations and Risks’ (libguides.reading.ac.uk) <https://libguides.reading.ac.uk/generative-AI-and-university-study/limitations>.
[8] IBM, ‘AI Hallucinations’ (Ibm.com, September 2023) <https://www.ibm.com/think/topics/ai-hallucinations>.
[9] Georgia Koromila, ‘LibGuides: Generative Artificial Intelligence and University Study: Limitations and Risks’ (libguides.reading.ac.uk) <https://libguides.reading.ac.uk/generative-AI-and-university-study/limitations>.
[10] Georgia Koromila, ‘LibGuides: Generative Artificial Intelligence and University Study: Limitations and Risks’ (libguides.reading.ac.uk) <https://libguides.reading.ac.uk/generative-AI-and-university-study/limitations>.
[11] National Education Union, ‘Use and Abuse of Artificial Intelligence’ (National Education Union, 14 October 2024) <https://neu.org.uk/advice/classroom/artificial-intelligence-education/use-and-abuse-artificial-intelligence>.
[12] UNDP, ‘RISE ABOVE: Countering Misinformation and Disinformation in the Crisis Setting’ (UNDP) <https://www.undp.org/eurasia/dis/misinformation>.
[13] National Education Union, ‘Use and Abuse of Artificial Intelligence’ (National Education Union, 14 October 2024) <https://neu.org.uk/advice/classroom/artificial-intelligence-education/use-and-abuse-artificial-intelligence>.
[14] Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness (Penguin 2024).
[15] Wayne Holmes, ‘The Unintended Consequences of Artificial Intelligence and Education’ (Education International, 18 October 2023) <https://www.ei-ie.org/en/item/28115:the-unintended-consequences-of-artificial-intelligence-and-education>.
[16] Debra Ruder, ‘Screen Time and the Brain’ (Harvard Medical School, 19 June 2019) <https://hms.harvard.edu/news/screen-time-brain>.
[17] Nick Skillicorn, ‘Relying on AI Tools Can Reduce Our Ability for Critical Thinking’ (Idea to Value, 15 January 2025) <https://www.ideatovalue.com/insp/nickskillicorn/2025/01/relying-on-ai-tools-can-reduce-our-ability-for-critical-thinking/>.
[18] Nick Skillicorn, ‘Relying on AI Tools Can Reduce Our Ability for Critical Thinking’ (Idea to Value, 15 January 2025) <https://www.ideatovalue.com/insp/nickskillicorn/2025/01/relying-on-ai-tools-can-reduce-our-ability-for-critical-thinking/>.
[19] Katherine Woollett, ‘5 Reasons Why Critical Thinking Is the Most Important Skill for Students’ (digitaltheatreplus.com, 9 August 2023) <https://www.digitaltheatreplus.com/blog/5-reasons-why-critical-thinking-is-the-most-important-skill-for-students>.