Friday 25 October 2024

AI Hallucination

I have a burning question about the validity of AI. I have run my own tests (here) in ChatGPT, and I came away feeling that the hype around AI was just that: hype. My brief and rather unscientific experiments found that the AI I used (ChatGPT) effectively made up the answers it gave me, a phenomenon known as "AI hallucination" (Lingard, 2023). I know the answers the AI gave me were largely nonsense because I carefully validated what it had delivered.

The trouble is, when we write academically, we cannot afford to rest our arguments on dodgy evidence. So if we lack a reasonable expectation that ChatGPT will supply us with sound evidence, its 'use' becomes useless. If it is "crucial for students to factcheck all ChatGPT output during interaction with the system to identify potential biases or inaccuracies to construct an accurate understanding of the topic" (Rasul et al., 2023, p. 8), how many students are actually going to do that? And if students DON'T fact-check, what does that do to the quality of their work? Or to the overall quality of academic writing?

Not only will we mark students down for insufficient understanding; we will also ping them for using AI in their written work. The institutions I teach at require students to declare where and how they have used AI in their work. Lingard notes that academic publications are stating "that ChatGPT cannot be a co-author because it cannot take responsibility for the work, and they require that researchers document any use of ChatGPT in their Methods or Acknowledgements sections" (2023, p. 261).

Just as I and others have noticed, Rasul et al. (2023, p. 3) point out that, yes, "ChatGPT can act as a research assistant, answering users’ questions based on the related literature [...], analysing data [, ...] serve as a writing assistant [,...] and provide writing support". However, "users should exercise caution as ChatGPT may be prone to hallucinations (Alkaissi & McFarlane, 2023) and fabricate references and quotes (Sallam, 2023; Shen et al., 2023)".

I continue to be concerned about AI. It needs to get much, much better before it can become a useful, reliable, and valid tool.


Sam

References:

Lingard, L. (2023). Writing with ChatGPT: An illustration of its capacity, limitations & implications for academic writers. Perspectives on Medical Education, 12(1), 261-270.  https://doi.org/10.5334/pme.1072

Rasul, T., Nair, S., Kalendra, D., Robin, M., de Oliveira Santini, F., Ladeira, W. J., ... & Heathcote, L. (2023). The role of ChatGPT in higher education: Benefits, challenges, and future research directions. Journal of Applied Learning and Teaching, 6(1), 1-16. https://doi.org/10.37074/jalt.2023.6.1.29
