The emergence of artificial intelligence tools like ChatGPT has created a significant moment in higher education, as academics debate whether the new technology is a danger or opportunity for learning — or maybe a little bit of both.
Mercer University is responding swiftly and was among the first universities to begin training faculty on AI tools, said Dr. Susan Codone, director of the Center for Teaching and Learning, which provides support for teaching and professional development for Mercer faculty.
Sessions introduced faculty to the main AI tools that have emerged, traced their development and showed how they're being used. Training also covered how to identify appropriate uses of the technology and how to make courses and activities AI-resilient.
“We are paying attention to generative AI, particularly ChatGPT and the version included in Bing, because they are very capable of generating text and computer code to just about any kind of prompt, and they do it well,” said Dr. Codone, who is also a professor of technical communication in the School of Engineering. “There is the opportunity for students to use those in cases where it is not appropriate but also cases where it is appropriate, so we’re paying attention for both reasons.”
OpenAI's ChatGPT, Microsoft's Bing chatbot and Google's Bard are driven by large language models that allow the AI assistants to perform complex tasks and write humanlike text on a variety of topics. ChatGPT, launched on Nov. 30, 2022, was the first on the scene, and by January, Mercer was training faculty on AI tools and their use in higher education.
So far, more than 300 faculty members have participated in the sessions, Dr. Codone said.
One concern, and perhaps the most obvious one, is that students may use the technology to cheat or take shortcuts on assignments. But another question is more philosophical: If students use AI tools, are they losing the ability to learn how to write and think critically?
“When you teach writing, the purpose of your teaching is not just the development of content. The purpose of the teaching is for the student to internalize and begin to practice the cognitive processes by which that writing is produced,” said Dr. Deneen Senasi, Griffith Professor of English, co-chair of the English department and director of the writing program in the College of Liberal Arts and Sciences.
She was reminded of a George Orwell essay in which he famously writes, “If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.”
“Part of what we’re teaching is a process, not a product,” Dr. Senasi said. “ChatGPT is designed to generate product, and you don’t have to think to do it.”
However, the new AI tools also bring the potential for new opportunities.
“One of the big ones that I have been excited about is teaching ‘invention,’ which is the stage of writing where you’re coming up with new ideas. That’s been a challenge to model,” said Dr. Bremen Vance, assistant professor of technical communication. “Using transcripts of a conversation with an AI tool helps model the idea of an invention process for exploring a topic and coming up with more and more interesting perspectives on it.
“So, I’ve used that a little bit in one of my writing classes to show how we can go from just a general topic idea to kind of understanding the different angles on it and really figuring out what we want to talk about.”
As part of the demonstration, he explained to his class how AI tools work and where the information comes from. He discovered many students already were familiar with AI tools and their potential pitfalls, including their ability to produce inaccurate or made-up content.
“There’s so much excitement around AI tools and so many possibilities. The big open-ended question for me is: How best should we be using it?” said Dr. Vance, who studies trends in technical communication and writing instruction. “One of the problems we have is learning the best uses for it at this time.”
As awareness of AI tools in higher education grows, the debate may expand to how the new technology can be used in various fields.
Dr. Jeffrey Ebert, clinical associate professor of physical therapy in the College of Health Professions, said he can see a future where discussions transition from awareness to trying to find ways to take advantage of and incorporate AI tools into health care classrooms and curricula.
“There are absolutely appropriate uses of this technology in health care, including using AI to help analyze a patient’s health history, medication use, imaging, and clinical signs and symptoms to help make diagnostic conclusions faster or to discover potential harms more quickly based on drug interactions and/or a patient’s health history,” he said. “I think we will also begin to see it utilized to automate many health care functions such as maintaining health records.”
AI tools are going to change the way people do a lot of things — and not just in education.
“We will begin to see AI incorporated into many facets of our everyday lives, for better or worse, and we need to stay on top of things,” Dr. Ebert said.
As the discussion continues, students are expected to continue to submit original work. Unless a professor has specifically authorized the use of an AI assistant, submitting AI-generated work would be treated as academic dishonesty under the honor code, Dr. Codone said.
All faculty members have access to Mercer's plagiarism detection service, Turnitin, which can now analyze student work for AI-generated content, she said.
The introduction of AI tools in higher education requires a thoughtful analysis and response.
"I've heard it appropriately described as the 'calculator moment,' and we have to grapple with it, but it's not the first time that we've had to grapple with a new technology in communication that has kind of rattled the way we think about what we're doing," Dr. Vance said, referring to when the calculator, now commonplace, was itself a disruptive new technology. "There was a very long and systemwide conversation about how to handle the internet, how to handle Wikipedia, how to handle Google, and over time we've learned how to teach the best practices.
“We’re at the beginning stages of trying to figure out how we collectively are going to react and define best practices, and as we do so, we want to keep an eye on what the potential is and also what the dangers are and balance those out.”