PUBLISHED SEPTEMBER 5, 2023

What should universities be doing to educate students about how to responsibly use generative artificial intelligence (AI) technologies in college and after they graduate?

Paul Dourish, Chancellor’s Professor & Steckler Endowed Chair of Information & Computer Science, UC Irvine and Director of the Steckler Center for Responsible, Ethical, and Accessible Technology

What I try to teach my students about AI-generated text is that they cannot rely on it being correct. It may be useful if they are not confident about their grammar, but that’s about as far as it goes. We already train students, when using material from the internet, to assess the sources; AI technologies are the same except that we can no longer see the sources!

I set an exercise in my most recent class that required students to use ChatGPT to generate an essay, but then asked them to fact-check it. Many students professed a prior familiarity with the technology (although most were coy about how they had acquired it), while others enthused about its usefulness. Almost uniformly, though, the essays, on a wide range of topics, included major errors of fact. My students caught some but by no means all. The errors often have a superficial plausibility and, of course, being generated by artificial “intelligence,” they seem reliable. I also take students through a dialogue I had with the same system in which, even after correction, it consistently returns to the same inaccuracies. It seems inevitable that, once they graduate, students will continue to rely on these technologies. The danger is that they will lose their inclination to examine the products of AI critically and begin to assume that whatever it produces must be true.

Rachel Karchmer-Klein, Associate Professor in the School of Education, University of Delaware

The conversation around generative AI in education intensified dramatically in the last year, with much of the focus on these tools as disruptors and little on their opportunities for teaching and learning. Although acknowledging the challenges is understandable, banning the tools, as many educators have done, is nonsensical and detrimental to students’ education. In May 2023, the Office of Educational Technology released the report Artificial Intelligence and the Future of Teaching and Learning, recognizing how these technologies can support evidence-based pedagogies. There is certainly no shortage of approaches to their integration into classroom instruction; it is therefore incumbent upon higher ed faculty to systematically educate students in their responsible use. But how do we do this?

First, faculty must be knowledgeable about the tools. This is no small feat given the rapid pace at which technology changes. As a professor of education specializing in literacy and educational technology, I prioritize AI tools on my list of things to learn because I know my students must be trained in them before they enter the workforce. Second, provide students with time to explore AI tools from the perspective of learners. Students can leverage the technology to guide their learning, and professors can observe how they triangulate what they know about being students, what is taught in class, and the tools’ affordances. This will illuminate how AI can support problem-solving, critical analysis, and reflective processes. Third, assignments should be designed around experiential learning simulations driven by AI tools that challenge students to apply their discipline-specific skills. This will educate them about how to use AI tools as professionals.

Although there are many uncertainties about generative AI, it is clear these technologies are here to stay. Higher education faculty must adapt to both the challenges and the opportunities they present, so that students graduate as productive citizens who are well prepared for the workforce.


Emma Tolliver, ’23 UC Davis Graduate and 2022-2023 Center Fellow

Whether universities properly address it or not, generative AI has been and will continue to be used by students. Generative AI, such as ChatGPT, produces text through a replication or mimicry of human language. It does not discern whether information is factual or based in reality. Campuses must ensure that students, staff, and faculty alike understand what generative AI does in order to create policy or training that meaningfully addresses its impact on campus.

UC Davis has updated its Code of Academic Conduct to state that taking credit for work created through AI will be considered plagiarism. However, there need to be reliable ways to identify when AI has been used. Common word processors, such as Google Docs and Microsoft Word, allow users to “track changes” and can reveal what changes have been made to a document. Faculty members should be trained to use this feature to determine whether a student wrote the content themselves or copied and pasted portions of the assignment from another source, and students should be aware that faculty members can see a document’s revision history. Knowing this discourages students from using AI to plagiarize.

However, students and scholars who have limited experience writing cover letters and applying for grants could benefit from using generative AI to produce a model text to assist them with applications, especially if they have limited access to other resources and do not come from a background in academia.

Ultimately, AI will be on college campuses. Ensuring the campus community understands what AI does and when it is appropriate to use it will be fundamental to addressing, regulating, and perhaps even benefiting from it.


Matt Perault, Director, UNC Chapel Hill’s Center on Technology Policy

There needs to be significant innovation in how we approach education to keep pace with the evolution in our world. Trying to get people to stop using the technology is going to be a long, brutal, losing battle. And that’s part of the reason that AI bans and pauses are unlikely to be the optimal approach. 

Holding tightly to a prior educational model is unlikely to work in the long run, and will do a disservice to students who are seeking to develop the skills that will enable them to succeed in their careers. How do we provide a rich educational experience that incorporates the technologies that students will encounter when they graduate? To me, that means engaging with the technology and using it, not trying to get students to put the technology in a box and pretend it doesn’t exist. 

In future workplaces, it will be critical for workers to be able to use AI to make work products better, which will require them to marry their skills and knowledge with the technology. Educators should try to build this skill set, which means that they should develop learning plans and assessments oriented around it. For instance, you could ask students to generate an original draft of something using a generative AI tool, and then ask them to use track changes to improve it. You would grade the quality of their edits and the quality of the finished product. This type of assignment evaluates students’ ability to prompt AI effectively, as well as their ability to improve AI-generated text. Those are the skill sets that will be critical to the workforce in the future.

Listen to our conversation with Matt Perault on the SpeechMatters Podcast: “The Challenge of AI: ‘Abstinence is Not the Best Approach’”

Related Resources

Undergraduate Student Advocacy in the University of California System: A Handbook

The Challenge of AI: “Abstinence is Not the Best Approach”
