Elon Musk calls it “scary good.” Others just call it “scary.” Either way, ChatGPT has continued to generate debate and speculation since its release last November.
The artificial intelligence (AI) chatbot developed by OpenAI uses natural language processing (NLP) to generate detailed text responses to prompts and instructions from human users. The application can answer questions and assist with tasks such as writing code, emails or essays.
Although ChatGPT is a relatively new AI tool, Victoria Rubin, an NLP scholar in the Faculty of Information and Media Studies (FIMS), says the idea behind the app is not.
Rubin has studied earlier forms of AI conversational agents throughout her career, including Eliza, one of the first chatbots, created in 1966; its spin-off Alice; and SGT Star, an artificially intelligent online virtual guide created for the U.S. Army 20 years ago, which has been available to the public for 17 years.
As a computational linguist and director of Western’s Language and Information Technology Research Lab (LiTRL), she’s well positioned to study the many questions around ChatGPT, from its use to its misuse.
Rubin also brings expertise as the author of Misinformation and Disinformation: Detecting Fakes with the Eye and AI. She believes automation, along with education and regulation, can play a key role in disrupting misinformation, intended or otherwise.
“As a researcher, my role bridges two fields: natural language processing, and library and information science and technology,” Rubin said. “I merge my understanding of engineering, software programming and coding with statistical methods, the use of language and the study of human behaviour.”
Rubin and her team of graduate students are currently conducting evidence-based mini studies to explore the uses and misuses of ChatGPT, and the mechanics and media ‘hype’ behind it.
“I’m so fortunate to have a great team who bring their young minds’ energy to this work,” Rubin said.
With the goal of publishing their findings, the team hopes to first share the outcomes of their research at the Association for Information Science and Technology (ASIS&T) international conference in London, U.K., this fall.
Assessing for assignment plagiarism
The first mini study began in January and focuses on concerns about student plagiarism and the potential for students to use ChatGPT to complete their assignments.
“If students are tempted to manipulate information, we want to know what the current output would be,” Rubin said. “Can we tell the difference between a human’s and a robot’s output?”
Beyond the ethical implications, educators are also concerned students could submit material full of falsehoods and misinformation.
Drawing on a course she currently teaches in the Master of Library and Information Science (MLIS) program, Rubin is using an assignment on the Dewey Decimal Classification as a prompt to “see what comes back,” warning her teaching assistant of potential patterns in the OpenAI chatbot’s output.
Transparency over deception
Early observations show the barrier to entry is low; all it takes is “no fear of IT and a good motive to use it.” However, “all citations have to be verified, as (the app) tends to attribute claims to non-existent works (and) makes up quotes, works, and misattributes sources,” said Rubin.
She recognizes how ChatGPT may tempt some “tech-curious” students to test it out on an assignment, and how some writers may be drawn to it to polish their writing, phrase ideas more naturally and build more coherent arguments.
“It can be disruptive in education,” she said. “It could be treated as adversarial technology, if its use is undisclosed to the reader of the generated texts, or it could be embraced as assistive technology, if its use is fully disclosed and explained.”
The onus will be on educators to stay one step ahead and form policies around the use of these technologies, Rubin added.
“Their natural instinct may be to ban it all,” she said. “But when has that approach worked for us as a society? I think transparency, in terms of when using it is allowed or disallowed, is a better approach.”
Rubin points to WIRED magazine as an example. The publication recently shared its policies on the use of generative AI tools with its readers, noting these tools won’t be used to create original editorials requiring human depth and understanding, but might be used to suggest headlines or generate story ideas.
“I think education will adopt a similar line of thinking,” she said. “We want to trust our students and there are already policies in place. If there’s a culture of cheating, cheating will inevitably happen. But if there is a culture of transparency and the consequences are known – such as you might get a ‘zero,’ or you might get expelled – there will be less of it.”
Paywalls and perceptions
Rubin also observed that students using the free version of ChatGPT may struggle to access the often-overloaded site and would have to plan ahead.
“The recent introduction of paywalls means students who are paid monthly subscribers will get access to ChatGPT, even during peak times. They’ll get to the front of the line and use it like a streaming service,” she said.
Future mini studies in the lab will explore the algorithms that allow the system to respond to a prompt. The team will also study how people perceive this technology and how they respond to the rhetoric generated by the media or through the public relations efforts of AI companies such as OpenAI.
“If you think about the recent hype about ChatGPT, it’s the perfect PR campaign,” Rubin said. “Someone has very cleverly introduced a tool and it’s reached such a virality level that almost every media outlet has covered it. Everybody is concerned about it, and some have played with it. Even my mother called me up when she saw it on CNN.”
“If your grandma has told you about ChatGPT, it shows how effective this PR campaign is.”
Rubin’s team will also assess human-computer interaction and how introducing natural language into search engines could affect the ad-revenue and search-optimization model that drives search today. Placing automatically generated summaries at the top of the page, instead of links, may change what users see first. Will we still click on the first link? If not, companies won’t be buying their way to the top of the results list.
“We’re going to look at the future with a moral compass,” she said. “We don’t want just profit to drive us to the ‘true north’ on this. We want to talk about accuracy and honesty, kindness and compassion, and the things that make humans happy, as opposed to those that can lead us to feeling sad and depressed.”
With her lab located in FIMS, Rubin’s team is “afforded the ability to be interdisciplinary” in their investigations.
“We don’t have to limit ourselves to the efficiency of computer systems, and a ‘make it faster, make it more accurate’ approach. That’s not us,” she said. “We’re studying one potential topic from multiple perspectives, be it psychology, user needs, the likelihood of somebody opening it up or staying with it, or somebody using it for nefarious purposes.”