Key Takeaways:
- Schools need an AI usage policy to address students’ growing use of generative AI tools. These tools can be misused for plagiarism and can also generate inaccurate information.
- There is no one-size-fits-all approach to AI usage policies. Schools can ban generative AI tools, allow them with restrictions, or allow them for specific purposes such as brainstorming.
- Students need to be aware of AI content generators’ limitations, which include plagiarism, inaccuracy, and bias.
- Instructors need to be aware of how to handle suspected AI misconduct. This includes speaking with the student, reviewing previous assignments, and using data from plagiarism detection tools.
As generative AI continues to saturate our everyday lives, from search engine integrations to smartphone updates that ship with genAI assistants, concerns continue to mount around where genAI fits into education. After all, if the technology is available in the palm of our hand, how can schools stop students from using it for assignments? Should schools embrace AI? If so, in what way? And where do they start when laying the groundwork for AI usage policies?
The Copyleaks Customer Success Team is often asked for suggestions and guidance on establishing an AI usage policy for education. However, one policy does not fit all, as each school’s situation is unique.
That’s why we compiled a guide to help educators and administrators navigate the conversation around establishing and implementing AI usage policies within their institutions.
Why is adding an explicit AI usage policy to your Academic Integrity Policy important?
ChatGPT and other AI generation tools have revolutionized education. Students and instructors are rapidly adopting AI for lesson planning, outlining, researching, and more. To combat the misuse of these tools, your institution should have a clear policy on when it is and is not acceptable to use AI content in assignments. Without a comprehensive policy, instructors are left to decide on a case-by-case basis what is allowed and how a student should be disciplined. Proactively adding an AI usage policy to your academic integrity policy will relieve instructors of this pressure and make students aware of the risks and limitations of AI generation tools.
What are some options for an AI Usage Policy?
You can create a comprehensive policy that prohibits using any AI-generated content in school materials. Alternatively, you can write a policy that permits AI and decide at the course level to what degree it is allowed.
What are some examples of AI usage policies Copyleaks users have implemented?
- Generative AI is not allowed in any coursework:
- Generative AI tools, or AI-generated content, should never be used to complete coursework or assignments.
- Any AI content submitted in place of a student’s original work violates the school’s academic integrity policy and could result in disciplinary action.
- Generative AI is allowed in some form:
- AI usage is permissible when clearly stated within the rubric or assignment instructions.
- AI usage is allowed for preliminary research, outlining, or brainstorming.
- AI usage is allowed in class exercises only, not in submitted assignments or course materials.
- AI-generated content is allowed, but it must be properly cited.
- AI-generated content can be present in a student’s work, but it should be rewritten in the student’s own words and properly cited.
- AI usage is permissible, but the output should be checked for misinformation and plagiarism.
What are some limitations of AI Content Generators that students need to be aware of?
- Research has found that over 60% of AI-generated content contains some form of plagiarism. Because of this, students must properly cite their sources and avoid using copyrighted material.
- AI content generators have also been shown to provide inaccurate information, so additional research is needed before including AI-written text in assignments.
What should students know when using writing assistant tools such as Grammarly?
Platforms such as Grammarly utilize large language models (LLMs) for certain features, such as rewriting content to improve tone or paraphrasing, and that output can be detected as AI. However, when performing basic functions such as grammatical and mechanical corrections, tools such as the Copyleaks Writing Assistant are less likely to be flagged as AI. To learn more about potential AI detection when using tools like Grammarly compared to our Writing Assistant, click here.
What other AI best practices should Students and Faculty be aware of?
- ChatGPT, Gemini, and other generators are not the only AI tools. Writing tools like Google Translate and Grammarly are also powered by AI. When using Copyleaks, AI-based functions of these writing tools, such as Grammarly’s Improve It AI-writing feature, can be identified as AI content.
- While false positives are rare (0.2%), AI-generated content can still occasionally be registered as human-written text (a false negative). The rate of false negatives will decrease as we continue to train the models.
- Although AI is incredibly advanced, it can still inherit bias based on the data it is trained on. It’s important to keep this in mind when using AI-based writing in your submissions.
- AI content generators often plagiarize from other web sources, so thoroughly read and cite any content they provide.
What do you do if there is suspected AI misconduct?
If you believe a student may have misused AI content in their work, you should always speak with them before pursuing disciplinary action. We recommend reviewing their previous assignments and consulting our Analytics tool to see if AI usage is a trend. Lastly, you should sync with other instructors and administrators to confirm how to proceed with a suspected academic integrity violation.
How should instructors handle a high AI content detection score?
When working with Copyleaks, treat our reports as one data point for understanding how a student worked on an assignment. We encourage instructors to ask students questions before jumping to conclusions. Some schools also allow a second submission of flagged assignments.
We also recommend working with Copyleaks for two to three months before your school determines the appropriate similarity threshold for each course. We typically find that instructors teaching high school students can set an acceptable similarity level between 20–30%, and those teaching college students between 15–25%. We also recommend using the template exclusion tools so that shared assignment templates are not flagged as matches.
Our false positive rate is very low, so keep this in mind when reviewing student submission trends. The Copyleaks Analytics tool provides student-level data, making it easy to see whether a particular student frequently receives high AI content scores. Since multiple false positives are extremely rare, these patterns are worth noting.
We recommend looking at the past three assignments a student has submitted. If AI content is flagged in all of them, it is highly improbable that all three are false positives.
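The intuition above can be made concrete with a back-of-the-envelope calculation. This is a rough sketch only, assuming each flagged assignment is an independent event at the stated 0.2% false positive rate (an illustrative assumption, not a property of the detector):

```python
# Illustrative only: assumes each flagged assignment is an
# independent event at the stated 0.2% false positive rate.
false_positive_rate = 0.002

# Probability that one, two, or three consecutive flags are
# ALL false positives under this independence assumption.
for n in (1, 2, 3):
    p = false_positive_rate ** n
    print(f"{n} consecutive false positives: {p:.1e}")
```

Under this simplified model, three consecutive false flags would land around eight in a billion, which is why a consistent pattern across several assignments is far more informative than any single score.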
Of course, this guide is not the final word on the matter. Like generative AI itself, it will continue to evolve and expand, so check back for updates in the future.