Key Takeaways
- AI detectors aren’t perfect; false positives are always possible. Therefore, detectors should be used as a prompt for further review, not as a final verdict.
- When content is flagged, use the opportunity to teach students about the responsible use of AI and the risk of accidental plagiarism.
- Monitor the trend of AI usage over time to help reduce false positives and produce more accurate assessments.
- Use AI detection as a discussion starter on responsible AI use. Take the time to walk students through the process rather than leaping to repercussions.
As the new school year begins, there’s a lot of buzz about AI tools and their role in the classroom. But with this excitement comes a bit of fear around the AI models themselves and AI detection tools, such as the Copyleaks AI Detector. So, let me clear up some misconceptions and calm some concerns about why you shouldn’t be afraid to use AI or detectors. Trust me, understanding the responsible use of both is important.
Understanding False Positives
First off, let’s talk about false positives. What is a false positive in AI detection? Like in medicine, where no treatment is perfect, AI detectors aren’t flawless. If someone claimed to have a 100% accurate cure, the medical community would be skeptical, and rightly so. The same goes for AI detectors. A false positive is when content actually written by a human is incorrectly identified as AI-generated (the opposite, AI-generated content that goes undetected, is called a false negative). Knowing this can happen is essential, but it’s not a reason to avoid using AI detectors altogether.
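To make the terminology concrete, here is a minimal sketch in Python. The counts are hypothetical, purely for illustration, and are not Copyleaks figures; the point is simply how the two error types are defined.

```python
# Minimal sketch of the two error types for a binary AI detector.
# All counts below are hypothetical and for illustration only.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Share of genuinely human-written documents wrongly flagged as AI."""
    return false_positives / (false_positives + true_negatives)

def false_negative_rate(false_negatives: int, true_positives: int) -> float:
    """Share of genuinely AI-generated documents the detector misses."""
    return false_negatives / (false_negatives + true_positives)

# Example: out of 1,000 human-written essays, 5 are wrongly flagged.
print(false_positive_rate(5, 995))   # 0.005 -> a 0.5% false positive rate

# Example: out of 1,000 AI-generated essays, 20 slip through undetected.
print(false_negative_rate(20, 980))  # 0.02 -> a 2% false negative rate
```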
Our model naturally leans toward classifying content as human-written, and we do our best to prevent false positives. Still, just like when your credit card company calls about a suspicious charge, every now and then we have to double-check our work. I often have transactions blocked by my credit card company while traveling because it thinks someone else is using my card; after all, they’re tracking my regular patterns, and I’m typically not in the country the purchase is coming from. But I’m not mad or upset that they didn’t get it right. I’m glad they’re looking for these patterns and raising a flag so I’m safe when the worst-case scenario happens. Teachers should look at AI detection the same way: we won’t always get it right, but when it comes to AI, it’s better to have something than nothing.
Now, how do we handle false positives?
So, what should you do if an AI detector flags something as AI-generated and you think it’s actually authentic work? The key is not to jump to conclusions. Most students aren’t using this new technology to cheat maliciously; when they do use it, it’s because the tool is available and offers the path of least resistance. In the early days of Napster, the music industry had to spend millions of dollars educating people about piracy and why using Napster was ‘stealing’; the average user of these services didn’t understand the potential harms, not out of ill will but out of ignorance.
Today, we’re in a similar place with AI. The goal when using a detector isn’t to accuse students of not doing their work but to understand that AI can plagiarize accidentally without them knowing it, or stall their learning of a new subject. Informing your students about the real risks of AI use in the classroom and establishing guidelines for when AI is allowed is a much better first step than instantly giving a flagged assignment a zero.
How Copyleaks suggests handling AI in the new school year:
- Educate Students: Start by teaching students about the potential consequences of using AI irresponsibly. It’s about understanding the rules and the importance of original work. Let your students know when using AI is and isn’t appropriate.
- Identify Power Users: Initially, everyone was testing out tools like ChatGPT. Now, it’s mostly power users, those who use the tech regularly. Educators should watch for these trends rather than focusing on one-off cases; a trend line of AI flags across multiple pieces of work is highly unlikely to be inaccurate.
- Establish a Pattern: If an AI detector flags a student, look at the overall pattern, not just one scan. More data can drive the false positive rate toward zero, allowing for a more accurate assessment and a much more fruitful conversation with the student (see the sketch after this list).
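To see why a pattern matters, here is a back-of-the-envelope sketch. It assumes a hypothetical 1% false positive rate per scan and that scans are independent; real detectors and real students violate both assumptions, but the direction of the math holds: the chance that every flag in a pattern is a false positive shrinks rapidly as flags accumulate.

```python
# Why a pattern of flags is stronger evidence than a single flag.
# Assumes a hypothetical 1% per-scan false positive rate and independence
# between scans -- both are simplifying assumptions for illustration only.

per_scan_fp_rate = 0.01  # hypothetical, not an official Copyleaks figure

for flagged_assignments in range(1, 5):
    p_all_false = per_scan_fp_rate ** flagged_assignments
    print(f"{flagged_assignments} flagged assignment(s): "
          f"chance all are false positives = {p_all_false:.6%}")

# Output:
# 1 flagged assignment(s): chance all are false positives = 1.000000%
# 2 flagged assignment(s): chance all are false positives = 0.010000%
# 3 flagged assignment(s): chance all are false positives = 0.000100%
# 4 flagged assignment(s): chance all are false positives = 0.000001%
```

In other words, a single flag warrants a careful look, while a consistent trend across several assignments is far harder to explain away as detector error, which is why the conversation with the student becomes more productive.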
Really, it’s about starting a conversation.
The ultimate goal of AI detection is not to jump to immediate conclusions but to start a conversation. Educate students you think are using AI on why responsible use matters and on the dangers of educational missteps such as accidental plagiarism. Try not to think of it as playing cop; instead, act as a guide who helps your students understand the implications of using tools like ChatGPT. And don’t worry; things will continue to evolve and get a lot better over time. AI is still finding its footing, and as the industry matures, we may find ways for tools like ChatGPT to cite their sources and become more ethical, much like how streaming services have transformed the music industry beyond the old Napster days.
In the end, educators need to approach this AI world with a gentle hand. By educating and understanding, we can help students navigate the complexities of this new space and ensure they’re using these tools responsibly. So, don’t fear AI detectors like ours; embrace them as tools for growth and learning in an ever-changing educational landscape.
Shouvik Paul
Shouvik Paul, COO of Copyleaks and host of The Original Source podcast, leverages 25+ years in EdTech and Media SaaS to steer Copyleaks’ global leadership in AI governance, plagiarism detection, and AI content identification. Copyleaks, an award-winning text-analysis platform, supports responsible AI adoption, intellectual property protection, copyright compliance, and academic integrity through advanced content detection.