After all the hype about ChatGPT, I decided to have a go at using it. To say I am deeply disappointed is an understatement.
OK. So I signed up, then asked the AI a question about post-linguistic methodology. The bot replied with logical-sounding, but apparently made-up, answers: they looked relatively OK, but were iffy enough for me to ask for sources. I got four sources (author names and titles), but could not find the actual papers, so I asked for APA references. The AI supplied a set of four different APA references to the sources, and - though they too looked relatively OK - the problem arose when I went to locate, download and read the articles themselves. After quite a bit of questioning, then asking for DOIs, I received yet another different set of four sources. For example:
Preda, A. (2022). Post-Linguistic Social Science. Journal of Cultural Economy, 15(1), 1-10. https://doi.org/10.1080/17530350.2021.1998102
Looks fine, right? But I found that the articles the AI supplied simply did not exist. The bot had made the references up out of whole cloth. No article by that name; certainly not in that journal; not by that author (in any journal); and nothing similar even in that entire year. I checked the Google Scholar author profiles and no similar articles were listed. The DOIs were unallocated. It appears that the answers and the sources are bullshit, top to bottom.
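If you want to run the same check yourself, DOI registration is easy to test programmatically: a registered DOI resolves at https://doi.org/ with a redirect to the publisher's page, while an unallocated one returns HTTP 404. Here is a minimal Python sketch (it assumes the third-party requests library; the DOI is the one ChatGPT supplied above):

```python
import requests

def doi_is_registered(doi: str) -> bool:
    # Ask the doi.org resolver without following the redirect:
    # a registered DOI answers with a 3xx redirect to the publisher's
    # page; an unallocated DOI answers with 404.
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=10)
    return 300 <= resp.status_code < 400

# The DOI from the citation ChatGPT produced:
print(doi_is_registered("10.1080/17530350.2021.1998102"))
```

Crossref's public REST API (https://api.crossref.org/works/&lt;doi&gt;) gives the same yes/no answer along with full metadata when the DOI does exist, which is handy for checking that the title and author actually match the citation too.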
I quizzed the AI, and it then apologised for making a mistake - and for all the other mistakes it had made - but only once it was cornered into admitting it had provided false information. Provided I was polite - and I was - the bot seemed to remain polite too (though I have heard anecdotally that the AI will return 'snippiness' if the biological user shows frustration).
So the whole thing appears to be a rort: and that is disturbing. If the AI were a person, I would say it 'lied' (yes, anthropomorphising). It seems to me that when the ChatGPT AI doesn't 'know' the next logical step, it makes up answers following its predictive text modelling: it simply continues with whatever words look statistically probable, whether or not they correspond to anything real.
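To make that concrete, here is a toy sketch of next-word prediction - a tiny, entirely hypothetical bigram model, nothing like ChatGPT's scale, but the same basic mechanism. Note that at no point does it consult a record of what exists; it only consults a record of what tends to follow what:

```python
import random

# Toy bigram "language model": for each word, the words observed to
# follow it in some training text, with counts. Hypothetical data.
bigrams = {
    "the":       {"journal": 5, "author": 3, "doi": 2},
    "journal":   {"of": 8, "published": 2},
    "of":        {"cultural": 4, "the": 6},
    "cultural":  {"economy": 7},
    "economy":   {"the": 1},
    "author":    {"of": 5},
    "doi":       {"of": 1},
    "published": {"the": 2},
}

def next_word(word: str) -> str:
    # Sample the next word in proportion to how often it followed
    # `word` in training. There is no notion of truth here, only
    # frequency: the model never "runs out" of answers, it just
    # keeps emitting plausible-looking continuations.
    candidates = bigrams.get(word, {"the": 1})
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the journal of cultural economy the ..."
```

Scale that mechanism up by billions of parameters and you get fluent, confident-sounding citations assembled from the statistics of real ones.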
But that then means that nothing it returns can be relied upon: we won't know whether it has reached the bounds of its knowledge without testing everything ourselves. So if we cannot be certain that it is even 'roughly right', then we cannot rely on anything it says; what, then, is the point of it...?
Author and technology forecaster Jaron Lanier commented, “This idea of [AI] surpassing human ability is silly because it’s made of human abilities”, going on to explain that comparing ourselves with AI is the equivalent of comparing ourselves with a car: “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner” (Hattenstone, 2023). I found this latter comparison a very leavening one that considerably reframed my thinking. AI is a tool. And, at the moment, its output is blunt.
As a result, I really don’t see how this thing is going to revolutionise the world. Yet, anyway. And I am not the only one who thinks that ChatGPT "can just make stuff up" (Grove, 2023, citing Toby Walsh, Professor of AI, UNSW Sydney). So yes, it appears that ChatGPT indeed 'lies'.
So the AI hype is seeming more and more like smoke and mirrors to me.
Sam
References:
Grove, J. (2023, March 16). The ChatGPT revolution of academic research has begun. Times Higher Education. https://www.timeshighereducation.com/depth/chatgpt-revolution-academic-research-has-begun
Hattenstone, S. (2023, March 23). Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’. The Guardian. https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane