ChatGPT is really cool, but yeah, it’s also cheating

It’s quite fashionable to be the progressive educator who embraces the latest new technology for teaching in college classrooms. I’m that sort, of course. For example: I no longer grade assignments on paper.

The introduction a few months ago of the artificial intelligence program ChatGPT has divided college educators: some welcome it as a teaching tool and student resource, while others see it as the doom of writing assignments and an invitation to academic misconduct. Some universities have already taken steps to limit its use by students.

Students can go online to the ChatGPT website, enter a prompt or question (such as one that matches the subject of a writing assignment), and receive a fully written, AI-generated answer. It’s amazing and alarming at the same time.

I ran a few tests and, yes, it produces an acceptable essay free of writing errors, though not one that would put a student on the Dean’s List.

I teach a media ethics course, and when I asked ChatGPT to analyze whether journalists should use confidential sources, it offered persuasive points on both sides (without endorsing either). When I asked for a longer analysis, it obliged with some additional valid points but also with repetition and hot air. When I asked for five famous examples of reporters using confidential sources, it delivered accurately.

However, when I asked for the same analysis using the reasoning model contained in our ethics course textbook, it bombed. It claimed it was using that model, but it produced nothing from the book.

Educators have freaked out about technological advancements before. Everyone was sure the advent of the calculator would destroy math skills. It didn’t. Everyone was sure Grammarly and similar spelling and grammar checkers would make writers lazy and dependent. They haven’t. But we’ve never had a tool that can instantly generate a whole piece of cogent writing for free.

That kind of use is educationally self-defeating. The process of writing – developing, organizing and expressing thoughts – is fundamental to learning. Beyond that, using ChatGPT to write an assignment is academic misconduct because the work isn’t original.

I can, however, see acceptable uses of this new tech, such as using it to review course material prior to an exam. I asked ChatGPT to explain some ethics terminology – what a stereotype is, for instance – and it nailed it. Nothing wrong with that.

I also would sign off on engaging ChatGPT to locate examples, citations and other aspects of research, because I don’t think there’s a huge difference between that and, say, a Google search.

[Image: ChatGPT’s response to a prompt I entered on the website]

Teachers would have nothing to worry about if students weren’t considering using ChatGPT. But according to a study by the test prep company Study.com (the linked story was written by my former student Micah Ward for University Business), 90% of the 1,000 U.S. students surveyed knew of ChatGPT and 53% reported using it to write an essay. Two-thirds of respondents supported student access to it. Almost three-fourths of the 100 college professors surveyed said they worried about the potential for cheating.

Those student numbers seemed awfully high. So I surveyed one of my classes (foolproof method). Only 14 of 32 students said they were familiar with ChatGPT. I asked how many knew of a student who had used ChatGPT to write a college assignment. One.

There are reasons I believe this likely won’t turn into an epidemic. First, I don’t believe students cheat as much as Study.com and the plagiarism detection companies claim. (Cue the scandal that will make me look like a fool for saying that.)

Also, it’s possible to determine when students use it. OpenAI, the company that created ChatGPT, announced its own detection software in January, though that tool lacks reliability. Turnitin, the plagiarism detection software used at UA and elsewhere, said in December that it will enhance its ability to spot AI-generated writing, including ChatGPT’s, this year. And, of course, professors can enter prompts that match an assignment and compare the output with what students turn in.

Finally, there’s the problem of ChatGPT’s inaccuracies and knowledge gaps, such as its failure with the reasoning model in the ethics course textbook that’s a mandatory part of that course’s writing assignments. And, particular to my department, ChatGPT can’t produce the original reporting that journalism assignments require.

Speaking of knowledge gaps, I asked ChatGPT if it knew what The Arenblog is. It didn’t. What a fraud.