AI in the classroom: opportunity instead of fear 

Part of the Learning with the Machines feature on how UIC educators and researchers are exploring the impact of large language models 

When Karen Leick, the new director of the professional writing program in UIC’s English department, started seeing headlines about how large language models would cause an epidemic of student cheating, she was immediately skeptical. For good reason: in 2019, Leick published “Parents, Media and Panic through the Years,” a book on the anxieties that accompanied new technologies such as television, video games and the internet.

Where others saw a threat, Leick saw an opportunity. In her sections of the First-Year Writing Program, she assigns her students research papers on recent technologies such as gene editing or social media. What if they spent time in the classroom thinking critically about the ways AI might approach some of their research questions, exploring both the helpful and unhelpful results?

Karen Leick, senior lecturer and director of the professional writing program in the English department.

“‘Will students cheat?’ is not that interesting of a question to me,” Leick said. “The more interesting question is: ‘Since students have this new technology available to them, what are some ways we could use it that might actually be productive?’” 

Leick had her students ask the model the same questions they were asking in their research proposals and reflect on the results. Some found that it provided clear, factual information but lacked depth and insight. Others reported that it made false claims. But some found that it suggested useful terms or topics for follow-up research. One student was enchanted by the model’s conversational tone.

“I used ChatGPT and it is BLOWING MY MIND,” they wrote. “It has been polite, concise, and extremely expressive.” 

Leick also had her class write a code of conduct for the use of these models: the students agreed that any use should be transparently cited, and that generating the final text of a paper with AI was unethical. But after experimenting with the tool in her classroom, Leick remained sanguine about the risk of academic misuse.

“If someone is going to show me that it’s going to make students not do the reading or not learn the material, then I would be concerned. But I haven’t yet seen evidence of that,” Leick said. “Maybe there are assignments instructors are going to need to modify, but I always think that’s good. If the AI can do it, maybe it’s not a great assignment.” 

For the fall semester, the First-Year Writing Program will update its academic integrity policy to make it clear that any submitted writing created by an AI program is considered plagiarism, said Mark Bennett, the program’s director. But he still sees potential for the tool in helping students initially outline their ideas or build a bibliography on a topic to explore more deeply.  

“We need to be honest with ourselves as instructors, that there is this technology that’s widely available for free, and a lot of our students are using it,” Bennett said. “If it seems like it does afford certain efficiencies in the writing process, I’m keen to explore that.” 

Jeffrey Kessler, senior lecturer in the English department.

The sentiment was echoed by Jeffrey Kessler, a senior lecturer in English who also discussed ChatGPT with his students in recent courses. In his view, the arrival of the models is reminiscent of early internet sites such as Wikipedia and SparkNotes, which prompted similar handwringing over student shortcuts.  

“I think that this is an opportunity for us to make our students think a lot more acutely about what we’re trying to make them do in a reading classroom,” Kessler said. “There are useful things that large language models can do, but they can’t do the things that we teach.” 

Kessler also pointed to the bland, just-the-facts output of modern language models as an opportunity to think about what’s uniquely human about writing. Abstract concepts such as metaphor, and art forms like real poetry, remain beyond the reach of algorithms, he said.

“One of the things that I find really interesting is that the language is pretty boring,” Kessler said. “The language that these large language models produce is predictive of what it thinks that language is supposed to be, rather than a human kind of creating language that is spontaneous, that is making us see the world in different ways.” 
