Academic Integrity & Generative AI

Addressing Generative AI

As Generative AI tools like ChatGPT become more accessible, instructors are encouraged to set clear expectations around proper usage.

University policy states that work authored by another (including material created by ChatGPT and other Generative AI tools) but represented as the student’s work, whether paraphrased or copied verbatim or in near-verbatim form, is considered plagiarism.

The Office of Academic Integrity encourages instructors to outline expectations in their course syllabus and identify whether (and to what extent) using Generative AI is appropriate. Whether you decide to encourage the use of Generative AI in your course, permit its use for specific assignments, or prohibit it entirely, sample syllabus language can be viewed at https://cet.usc.edu/teaching-resources/syllabus-template/.

Preventing Unauthorized AI Use

Below are some techniques that may help discourage and identify the unauthorized use of Artificial Intelligence applications:

  • Craft assignment prompts that require applying course concepts to specific contexts. This makes generically fabricated responses less plausible and more easily detectable.
  • Compare the assignment of concern to a student’s previous work. Dramatic inconsistencies in voice, style, and grasp of concepts may indicate the use of Generative AI tools. You may even identify inconsistencies within a single submitted assignment.
  • Gently inquire about students’ writing process and sources. Students are often eager to discuss insights from their work, and learning about their writing process (when they started the assignment, what their drafting/editing process was, what tools they used, etc.) may help determine whether unauthorized resources were used.
  • Verify citations and quotes. AI tools often generate fictional sources and misattribute source material.

AI Detection Tools Guidance

Instructors should exercise caution when using AI detection tools and prioritize their own professional judgment when evaluating student work. 

While automated detection tools may seem like a convenient solution, it’s crucial to understand their limitations. For example, they cannot definitively identify AI-generated content and have been known to produce false positives, which can unfairly implicate students. Overreliance on these tools can lead to incorrect labeling of student work as AI-generated, potentially damaging trust and causing harm to innocent students.

If you plan to use AI detection software on student work, students should be explicitly informed of this. It is also important to remove all identifying information before the work is shared with an AI platform to avoid violating FERPA.

Instead of relying solely on percentages from automated detection tools, it is recommended that instructors limit their use to highlighting passages requiring additional consideration. If you suspect unauthorized use of Generative AI, begin by reviewing the work to identify areas of concern. For example, upon further review, does the work appear to

  • Use unexpected or restricted sources
  • Fabricate source materials
  • Have a change in voice or tone from the student’s other assignments
  • Lack evidence of the student’s learning process (e.g., missing drafts, outlines, or other required elements that demonstrate building upon ideas)
  • Be unexpected given the course material
  • Raise other concerns?

When concerns arise, consider having a conversation with the student about their work before drawing conclusions. This is not required, but it can provide valuable context and may reveal legitimate explanations, eliminating the need to file a report with OAI.

When submitting a report to OAI regarding unauthorized use of Generative AI in student work, the instructor is expected to identify why they believe the assignment, in full or in part, was created using Generative AI. Instructors should feel confident reporting any time they believe a violation may have occurred. However, evidence generated by AI detection tools is insufficient on its own to determine responsibility; additional analysis or other supporting elements are needed. The instructor’s analysis and contextual understanding of the student’s work are essential components of the academic integrity process and underscore the importance of human judgment in the detection process. While instructors are not required to find additional supporting elements, students are unlikely to be found responsible without them.