Artificial Intelligence in Teaching & Learning

Getting to know AI

The field of Artificial Intelligence (AI) is progressing rapidly and in sometimes-bewildering ways. Applications of AI have become common in almost every sort of human interaction, including teaching and learning. You’ll find guidance below for thinking about the role of AI in your courses, writing your own AI policy, and responding to unauthorized AI use.

What is AI?

AI is an umbrella term for any computer program that simulates cognitive processes. Many different AI platforms are now accessible for public use. ChatGPT, created by OpenAI, is perhaps the most widely used among university students and of most concern to faculty. Generative AI tools like ChatGPT use predictive algorithms to generate written text: they work by analyzing a large dataset of text to learn the patterns and relationships among words and phrases. When given a prompt or a starting point, ChatGPT and similar platforms can generate text that resembles text they have seen before, and their output is often hard to distinguish from human writing. ChatGPT is available in a free version (3.5) and a paid version (4). The paid version is more sophisticated but is sufficiently costly that its paywall may produce digital equity gaps for students who cannot afford to purchase access.
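The underlying idea of predicting the next word from learned patterns can be illustrated with a toy bigram model. This is a drastic simplification for intuition only; real platforms like ChatGPT use neural networks trained on vastly larger datasets:

```python
import random
from collections import defaultdict

# Toy "next word" predictor: record which word follows which in a training
# text, then generate by repeatedly sampling a plausible next word.
# Real large language models are far more sophisticated, but the core idea
# of predicting the next token from learned patterns is the same.

def train_bigrams(text):
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following, start, length=8, seed=0):
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

Because the model has only seen a dozen words, its output is a shuffled echo of the training sentence; scale the same principle up to billions of documents and the output starts to look like fluent writing.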

Other popular AI platforms:

  • Claude is similar to ChatGPT, with human-like chatting and writing
  • OpenAI’s DALL-E generates AI images based on text prompts
  • Microsoft Copilot (built on OpenAI’s foundational GPT-4 model) is aimed at creating efficiencies for users of Microsoft Office and related products
  • Gemini is a chatbot similar to Copilot but geared toward Google users

  • Studybuddy.gg is a subscription browser plug-in that answers questions in online quizzes and even shows its work

Prompt Engineering

The process of creating and entering a prompt into an AI platform is called prompt engineering. Prompts range from simple sentences to complex, multi-part instructions for tasks such as creating assignments, writing quiz questions, writing code, and drafting papers. In general, the clearer the prompt, the more likely it is that AI will produce something akin to what the user is looking for. You can improve your prompt engineering skills through trial and error, by exploring online resources, or by taking a course in prompt engineering.
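As a rough illustration of what a detailed prompt can include, here is a hypothetical helper that assembles a prompt from a role, a task, constraints, and an output format. These field names are one common convention for structuring prompts, not an official feature of any AI platform:

```python
# Hypothetical helper illustrating the anatomy of a detailed prompt.
# The components (role, task, constraints, format) are a common convention,
# not a specification required by any particular AI platform.

def build_prompt(task, role=None, constraints=(), output_format=None):
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(task)
    for c in constraints:
        parts.append(f"Constraint: {c}")
    if output_format:
        parts.append(f"Format the output as: {output_format}")
    return "\n".join(parts)

# A vague prompt...
print(build_prompt("Write quiz questions about photosynthesis."))

# ...versus a specific one, which tends to yield output closer to what you want.
print(build_prompt(
    "Write five multiple-choice quiz questions about photosynthesis.",
    role="a college biology instructor",
    constraints=[
        "Target second-year undergraduates",
        "Include one distractor based on a common misconception",
    ],
    output_format="a numbered list with an answer key at the end",
))
```

The second prompt gives the platform far more to work with: an audience, a question count, a pedagogical constraint, and a layout, which is exactly the kind of specificity that improves results.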

What can AI do?

Generative AI can competently do the following: 

  • Produce written text that can be indistinguishable from human writing
  • Generate plausible-sounding content, including “personal” reflection, in a variety of voices
  • Generate lists of ideas, outlines, and sources
  • Draft papers
  • Answer exam/quiz/homework questions
  • Respond to discussion forum prompts and posts
  • Provide feedback on writing
  • Give summaries of information similar to those you might find on Wikipedia
  • Engage in interactive conversation
  • Adjust its output based on prompts, clarifications, and information provided by the user
  • Generate images based on text prompts
  • Follow instructions to create charts based on data entered by the user
  • Change citation lists to alternative formats (e.g., MLA to APA)

What can AI not do? What are its other limitations?

Generative AI cannot fact-check; write about anything that happened after its most recent training update (update schedules vary between platforms); create infographics, interactive maps, videos, or memes; make predictions about the future; browse or summarize content from the internet (individual platforms were built with specific datasets and do not search the internet in real time); cite specific examples or quotations from another text; or draw connections between course content and visual materials.

That said, generative AI is improving so rapidly that its capabilities in any given moment should be understood as transient. For example, while ChatGPT formerly refused to answer prompts requiring personal reflection, it now readily confabulates realistic-sounding personal experiences. 

When and why might students use AI?

Some students use AI in situations similar to those in which they might formerly have engaged in plagiarism. They may resort to AI because of a heavy course load or work schedule, overwhelming family obligations, to get a better grade than they think they can achieve on their own, to save time, or because they don’t understand the assignment. They may also use AI when they are unsure about what is and is not allowed.

Students also use AI in their coursework for purposes other than cheating. Common reasons include:

  • To edit writing for grammar, spelling, and/or writing mechanics
  • To improve the sophistication, fluidity, and word variation in writing
  • To seek explanations for complex ideas and theories
  • To brainstorm ideas and outline papers
  • To generate practice quiz questions
  • To summarize ideas or provide personal tutoring

Whether you discourage or encourage the use of AI, you should talk to your students about your expectations early and often—on the first day of the quarter and in preparation for all major assessments. 

Approaching AI use in your teaching

Take a proactive approach to AI. Regardless of your own feelings about it, AI is here to stay—eventually, you will need to find ways to either work with it or to minimize its downsides. The application and utility of AI will inevitably be different in different disciplines. In those disciplines wherein practitioners use AI to save time on tedious tasks, for instance, teaching practical AI use may confer future employment benefit on students. In disciplines such as writing, however, it may detract from students’ opportunities to develop skills. Your approach to AI use should address the specificities of your discipline, your department, and your professional practice as a scholar and teacher. Keep in mind, too, that your students are looking to you as a model for standards of scholarship within and beyond your discipline. If they perceive apathy in your approach to the ethical use of AI, they may follow your lead.

How can I incorporate AI into my teaching?

Some instructors have chosen to incorporate AI use into their assignments with the proviso that students clearly cite where they have used it. Here are just a few of the possibilities for this technology: 

  • Have students use AI to generate text based on a prompt that you provide. Then ask them to critique the results at the level of content, writing, or both
  • Ask students to conduct a compare/contrast between AI-generated text and human-generated text
  • Invite students to use AI for components of assignments that a professional in the field would be likely to use AI for (e.g., simple coding)
  • Use AI to help you create interactive case studies or scenarios for students to engage with in or out of class
  • Use AI to generate quiz/exam questions
  • Engineer AI prompts with rubrics to provide written feedback (although be aware that a strong majority of students express opposition to such use on the part of instructors)

While you’re not obligated to use AI, we do recommend experimenting with it so you can get a sense of how your students interact with it. An easy entry point is to create a free ChatGPT 3.5 account. You might enter one of your own assignment prompts into the chat box, or ask it to create a writing prompt or quiz question for your course.

Drafting an AI use policy for your class

Should you include an AI-use policy in your syllabus? Definitely! 

In surveys of student attitudes toward AI, respondents report that fear of getting caught violating an academic integrity policy provides a strong disincentive for cheating. Their perception of AI use as “wrong” is more likely when an instructor includes explicit syllabus language prohibiting some or all kinds of use. Other things to know:

  • Students say they’re more likely to use AI for cheating in online courses than in face-to-face courses; online syllabi and course design should take this into consideration.
  • Students are more likely to use AI for cheating in courses that are not required for their major; if you are teaching a GE course with a wide variety of majors, factor this into your planning.
  • Students hold a surprising variety of perspectives on how much AI use constitutes cheating (from “any at all” to various percentages). If you feel strongly about AI being used or not used in your classes, it is imperative that your AI policy clearly outline your standards.

We recommend that you craft an AI policy that aligns with the specific needs and ethics of your discipline, your department, your modality, and your course. Rather than providing institutional language, we offer the customizable formula below for constructing your own policy. Sample syllabus language follows this guide to give you some ideas for creating a version that suits your needs. You can also refer to statements used in courses at UC Santa Cruz or to this crowd-sourced document that contains syllabus language samples from higher education institutions all over the U.S.

Note that unless your AI policy is uniformly prohibitive, your guidelines will be clearer to students as a standalone statement than as a brief part of your general academic integrity policy.

Recommended Components
  • A discipline-specific explanation of the role of AI in learning and practice
  • When and how AI resources may and may not be used in your course
  • When and how AI resources should (if you allow their use in certain circumstances) be cited or acknowledged
  • How you will handle cases of suspected or confirmed AI use that contravenes your policy
  • A list of resources students can use if they need help understanding how to interpret or use your policy—including an invitation to attend office hours

Putting It All Together
  • Follow equity best practices by using welcoming and inclusive language and writing in the first person; framing your policy as a commitment to helping students grow their capacities will be more persuasive than threats of discipline. 
  • Review your policy carefully to make sure you haven’t incorporated any requirements that are unenforceable or illegal (in particular, the non-consensual use of AI detection software, which is currently prohibited by the University of California due to its infringement of students’ privacy).
  • Your message to students about the use of AI (and academic integrity in general) should reflect your teaching philosophy, the priorities of your department, and the conventions of your field. 

Regardless of the approach you take, be aware that a small but not insignificant proportion of students will choose to push boundaries. Revisit your policy carefully on a regular basis. If it isn’t doing the work you intended, revise accordingly.


Sample Syllabus Language #1: A highly restrictive AI use policy
Why you should care about integrity

Integrity—other people’s perception of your word as true—is one of the most valuable assets you can cultivate in life. Being rigorous about your integrity in academic settings allows others to trust that you have completed the work for which you are taking credit. This is symbolic of the public trust from which you will benefit in your future occupation after you graduate from UCSC.

How I handle breaches of academic integrity

If you submit work that appears to have been written using unauthorized sources, I will ask you to meet with me to discuss your thinking and writing process. I will also ask you to talk through your submission orally so I can assess your knowledge in real time. If, after our conversation, I conclude it’s more likely than not that you did not personally complete an assignment you submitted under your name, you will get a 0 on the assignment, I may give you a failing grade for the entire course, and I will definitely report the incident to the university administration for further sanctions. 

Generative AI Policy

The easiest way to ensure that your writing does not come under suspicion for AI use is to not use AI. Here is the AI policy:

  • You may not use ChatGPT or any other generative AI platform or technology, including (but not limited to) Copilot, Gemini, Claude, DALL-E, Grammarly Premium, StudyBuddy, and predictive/suggestive text.
  • Unless explicitly instructed to do so for a specific assignment, you may not use AI for any reason, including (but not limited to) thinking, writing, brainstorming, researching, outlining, source searches, or editing.
  • Translation software (including, but not limited to, Google Translate) counts as an AI platform, so its use is strictly prohibited. Even if English is not your first language, you must write your papers directly in English rather than writing them in your native language and translating them. You may look up individual words in an English/Your-Native-Language online dictionary, but you may not use an online translator to translate phrases, sentences, paragraphs, or papers. 
  • I expect you to be able to easily define any word you use in your writing; please be sure to learn and memorize the definitions of any new words you have gotten from a dictionary. 
  • For spell-check and grammar-check functions, you are limited to Grammarly Basic (not Premium) or the basic spell-check and grammar-check features that come pre-loaded with word-processing software such as MS Word or Google Docs. You may not use any other editing software, nor should you use the suggestive/predictive text that such software proposes. 

Please ask if you have questions about this policy or about scholarly attribution practices in our discipline. I am here to help you learn those conventions.


Sample Syllabus Language #2: A policy allowing for limited AI use with attribution

Our discipline allows for generative AI use under certain circumstances, but always with clear and open attribution. Where readers have a reasonable expectation that work with your name on it was written by you, your ethical use of AI is imperative. Please read this policy carefully if you plan to make use of AI resources. 

Authorized uses of AI tools
  • Brainstorming paper topic choices
  • Finding sources
  • Personalized tutoring; asking about concepts you’d like to see explained in a different way
  • Editing for grammar, writing mechanics, and punctuation
  • Seeking writing feedback on a paper for which you wrote the original draft
Unauthorized uses of AI tools
  • Asking AI to draft an outline for you 
  • Asking AI to draft a paper for you
  • Using AI to make edits to your writing in ways that substantively change the voice of your work (in other words, using AI to write in ways that you cannot write on your own)
  • Using AI to answer quiz, exam, or homework questions
  • Using AI to write discussion posts or annotations
  • Using AI to hide plagiarism or to mislead readers about the provenance of your submitted work
Citing AI use

If you use AI resources in any authorized way as described above, you must note that in your submitted paper. You can do that by explaining exactly how you used it in a paragraph at the end of your assignment. 


Detecting and Responding to Unauthorized AI Use

How do I know whether a student has used AI on an assignment?

While conventional plagiarism is often easy to identify, text produced by AI may be indistinguishable from other student work. Some AI-generated text contains the telltale signs of a large language model: plausible but illogical arguments, invented “facts,” prose that feels flat, content presented in bullet points. But trying to detect AI-generated text is a losing game, and it will only become more difficult as the technology advances. The bottom line is: You may have a sense, but it’s virtually impossible to know for sure.


May I use an AI detector?

There are programs that detect AI-generated text with varying levels of accuracy, but they may only be used at UC Santa Cruz if the tool is hosted locally and data is protected from external access, the tool is contracted through campus Purchasing, or if you obtain prior approval from students. Without meeting one of these criteria, the use of AI detectors may violate the Family Educational Rights and Privacy Act (FERPA), which prevents the unauthorized sharing of student work. The guidance is explained in detail in this March 20, 2023 communication to faculty. 

A less direct—but still informative—approach is to enter your own assignment prompt into an AI platform a few times to see what it produces. Platforms like ChatGPT don’t generate the same response twice, but they do generate similar responses across multiple iterations: certain words and phrases reappear, and arguments and analyses are markedly similar. You may compare AI-generated text to student submissions without prior approval. If applicable, share the output with your teaching team and ask them to familiarize themselves with it.
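One way to sketch that comparison is with Python’s standard-library difflib, which computes a simple word-overlap ratio between two texts. A high score flags overlap worth a closer look; it is never proof of AI use on its own:

```python
import difflib

# Compare a student submission against AI responses you collected by
# entering your own assignment prompt several times. The texts here are
# placeholder examples; a high ratio is a reason to look closer, not proof.

def similarity(text_a, text_b):
    # Ratio in [0, 1]: 1.0 means identical word sequences, 0.0 means no overlap.
    return difflib.SequenceMatcher(
        None, text_a.lower().split(), text_b.lower().split()
    ).ratio()

ai_samples = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "In photosynthesis, plants convert light energy into chemical energy.",
]
submission = "Photosynthesis converts light energy into chemical energy in plants."

scores = [similarity(submission, sample) for sample in ai_samples]
print(max(scores))  # closest match among the collected AI samples
```

Because this only measures surface overlap, it works best as a quick triage step alongside the conversation-based approach described below, not as a standalone detector.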

A note of caution: If you plan to use a detection tool, proceed carefully, as detection tools occasionally produce false positives, which can harm student-instructor relations when false accusations are made. 


What should I do if I suspect unauthorized AI use?

If you suspect that a student’s submission was generated by AI, start with a conversation. Students who completed their own work are generally able to explain and defend their process. Assume the best of your students and consider your evidence carefully before coming to conclusions about a student’s behavior. Just as an AI detector can return a false positive, your intuition is fallible. An unfounded accusation against one student can damage your relationship with all of your students.

One strategy that can prove helpful is to request a meeting with the student. Without providing them advance notice of the topic of the meeting, ask the student to verbally explain their response to the assignment prompt you’re concerned about. A student who did their own work should be able to reproduce their thoughts, albeit in a less polished tone. If the student is unable to respond to your question, yet submitted an assignment suggestive of deep understanding, this mismatch may be cause for suspicion. 

If, after speaking with the student, you are convinced that they did not complete the assignment in accordance with your AI/academic integrity policy, follow up with the action items you previously identified in your policy, such as no credit for the assignment, an adjusted grade in the course, or referral to your college provost or academic integrity officer. Describe this process clearly in your syllabus to ensure that your students perceive it as being fairly applied across the class.

If you choose to pursue institutional disciplinary action based on suspected academic misconduct, be sure to clearly document the reasoning behind your suspicions as you are investigating the incident. 


Proactive approaches to minimizing unauthorized AI use

There is no way to prevent AI use in your classroom, but you have more influence than you might realize on students’ thinking and decision-making. The following is a non-exhaustive list of strategies you might consider incorporating in concert with one another:

  • Promote the values of academic integrity: doing your own work, acknowledging (and citing) sources of ideas and information, improving with practice, honesty and openness
  • Discuss AI’s limitations and biases with your students
  • Discuss the professional and personal implications of failing to acquire the skills and mastery that your assessments are meant to help students develop
  • Ask students to sign an academic integrity pledge at the beginning of the quarter and/or as they submit each assignment
  • Write a thoughtful academic integrity policy beyond AI use that clearly conveys to students the value that you and your discipline place on professional trust and ethics
  • Inoculate against cheating by appealing to students’ sense of personal honor and desire to attain mastery
  • Explain the pedagogy behind growth through mental effort: in short, you have to put real effort into learning how to write/think if you want to own that knowledge in the long term
  • Explain why students should want to know how to write—beyond passing the class (for example, in what other adult contexts could fluid writing prove useful?)
  • Explain the consequences of breaking interpersonal trust and the difficulties in repairing it; make clear that you want to be able to experience reciprocal trust in your students 
  • Design assignments that encourage students to leverage their creativity and personal experiences
  • Use writing prompts that are less compatible with AI text generators (here is one such source)
  • Build a peer-review process into major assignments, and require that students incorporate and address their peers’ feedback in subsequent drafts
  • Break larger assignments into smaller, scaffolded assignments to decrease the anxiety surrounding high-stakes assessments and to give you and your teaching team the opportunity to make early interventions if you suspect AI use
  • Ask students to reflect, as part of an assignment, on their writing process

What we’re reading about AI right now

Event: Let’s Talk About ChatGPT (March 21, 2023)

Faculty presentations from Leilani H. Gilpin, Assistant Professor of Computer Science and Engineering; Amy Vidali, Chair & Associate Teaching Professor, Writing Program; Zac Zimmer, Associate Professor of Literature; and Jennifer Parker, Professor of Art.

Watch a recording of the event

Last modified: Apr 24, 2024