Monday, June 03, 2024

Article Note: on ChatGPT and academic librarians

This short article out of C&RL News is mostly a very flattering paean to LLMs (large language models) and ChatGPT (a chatbot built on one of those LLMs). The authors basically envision ways the tool will displace some workers to make way for an AI (artificial intelligence) assisted future. To be honest, a good amount of this has echoes of the Library 2.0 days and the hype that went along with them. As I read the piece, I found myself writing small comments and questions in the margins. At times I wondered if the authors had read or seen any of the news coverage of ChatGPT and AI that is not exactly worshipful of the models. 

Here are then some highlights that caught my eye: 

ChatGPT is a...

"...tool that uses deep learning techniques to generate text in response to questions posed to it. It can generate essays, email, song lyrics, recipes, computer code, webpages, even games and medical diagnoses" (99). 

It can also generate, to put it bluntly, a lot of bullshit, and it can outright make stuff up (CNN; AP). 

Also, 

"...ChatGPT has been trained on a large corpus of text, including news articles, books, websites, academic articles, and other sources." (99).

Yes, and a lot of it is outright stolen from creators and/or scraped from the web without any form of compensation or attribution. In fact, some news organizations are suing over what they see as content theft by the AI companies (via CBS News). See also this piece out of Futurism on a guy basically stealing content and using AI to "repackage" it for sale. 

One detail from the article that also caught my eye is an optimism that seems to have little regard for ethical concerns, along with a somewhat naive assumption that everyone involved will be honest. Keep in mind that plagiarism is very often a concern for faculty in academia. In earlier days they might have asked a librarian for assistance; then they turned to Google; now they rely on tools like Turnitin (another tool that comes with its own ethical issues, but let's not digress further); and next it will be AI. Somehow, using a tool that basically steals content to check whether content is stolen does not feel right, to this librarian at least. 

The authors do mention plagiarism, but then ask, gee, is a student turning in work done with ChatGPT really plagiarizing? (101). I'd say yes, they are, and I say it both as a librarian and as a former writing teacher, but the authors use a loophole: since plagiarism is defined as "presenting someone else's work or ideas as your own" and ChatGPT is not a someone, well, you get the idea. That is a nice loophole if you can get away with it.

As for librarian roles, much of the work, according to the article, will basically be serving as "prompt engineers" who "assist researchers by providing tips in asking the right questions to get the best results" (100). Uh huh. At least we may have some job security teaching faculty, and especially students (we all know faculty are not exactly known for wanting to be taught anything, but I digress), how to tell whether some text or piece of art is real or AI generated. That will not be an easy task. 

The authors also mention that researchers may have concerns about AI seeping into academic writing. Well, it is already happening, as researchers are getting caught passing off ChatGPT-generated essays as their own for publication (via Futurism, but a small search will yield a few other stories).  

I could go on, but I am stopping here because I just do not share the rose-colored vision the authors of the article appear to have. At any rate, as I was reading this mercifully short article, other thoughts came to mind. Below, then, are some links I have read recently, as of this post, that are not as rosy in their view of ChatGPT and AI. 

Citation for the article noted: 

Christopher Cox and Elias Tzoc, "ChatGPT: Implications for Academic Libraries." C&RL News, March 2023: 99-102.


Here, then, are some additional links to consider against the whoopee of the original article, beyond the ones I included in my comments above: