The Accuracy of AI Detection Models for Non-Native English Speakers

Key Takeaways

  • Copyleaks’ AI Detector achieved 99.84% accuracy with non-native English texts, outperforming competitors with a <1.0% false positive rate. 

  • The study used diverse datasets, revealing slight discrepancies that emphasize the need for continuous model refinement. 

  • Datasets used to test AI detectors have unique licensing restrictions that should be considered when interpreting results and applying them to real-world scenarios.

  • Findings support AI Detectors’ broader applications in education and content verification, with a careful focus on linguistic diversity.

About This Report

In the rapidly evolving world of artificial intelligence, the reliability and accuracy of AI detection models are crucial. While most of these tools claim high accuracy rates, those claims are based predominantly on English texts, typically written by native speakers. A recent Stanford study concluded that AI detectors may be biased against non-native English writers, raising concerns about their fairness and effectiveness in detecting AI-assisted cheating.

This study aims to provide insight into the real-world performance of select AI Detectors and their accuracy with non-native English speakers, centering on their overall effectiveness across varying datasets. It is intended to provide full transparency for millions of global users, focusing primarily on detector performance when tested against datasets of texts written in English by non-native speakers.

The study was conducted by the data science team at Copyleaks on August 20, 2024.

Key Findings

Across the three non-native English datasets analyzed, Copyleaks’ AI Detector had a combined accuracy rate of 99.84%, misclassifying 12 texts out of 7,482, a false positive rate of under 1.0%. For comparison, when the same model was tested on August 14 against datasets of texts written by native English speakers, accuracy was 99.56% in one analysis and 99.97% in a separate study.

Another AI detection company published a similar study of non-native English writing on August 26, 2024, using 1,607 data points and reporting a 5.04% false positive rate. Not only is this significantly higher than its false positive rate across predominantly English texts, which currently sits at around 2%, but a 5.04% false positive rate can have consequential outcomes. For example, at a university with 50,000 students where each student submits four papers per year, that rate would produce more than 10,000 false accusations annually, as the sketch below shows. Taken as a real-world example, and as detailed in the Stanford study, this underscores how critical AI detection model accuracy is.
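To make that projection concrete, here is a minimal Python sketch of the same arithmetic. The 50,000-student university and four papers per student per year are the illustrative assumptions from the example above, not figures from either study:

```python
# Project how many papers would be falsely flagged as AI-generated
# per year, given a detector's false positive rate.
students = 50_000          # illustrative university enrollment
papers_per_student = 4     # illustrative submissions per student per year

def false_accusations(false_positive_rate: float) -> int:
    """Expected number of papers falsely flagged per year."""
    total_papers = students * papers_per_student
    return round(total_papers * false_positive_rate)

print(false_accusations(0.0504))      # competitor's reported rate -> 10080
print(false_accusations(12 / 7_482))  # Copyleaks' combined rate   -> 321
```

At a 5.04% rate, 200,000 papers a year yields 10,080 false accusations; at the 0.16% combined rate reported above (12 of 7,482), the same volume yields roughly 321.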

Understanding the Datasets Studied

The study utilized three distinct datasets to test the AI Detector. Each dataset has unique characteristics and licensing restrictions that are essential context for interpreting the results.

FCE v2.1

This dataset contains written answers from the First Certificate in English (FCE) exam, totaling 2,116 texts. It is subject to noncommercial use only, which may limit broader applications of the findings.

ELLs

This dataset includes essays from English Language Learners (ELLs) in grades 8 through 12. It comprises 3,911 texts and has an unknown license status.

COREFL

This dataset features written texts from learners of English as a second or foreign language. Its Creative Commons license allows for broader use.

Dataset-Specific Performance

Dataset      Texts    False Positives    Accuracy
FCE v2.1     2,116    4                  99.81%
ELLs         3,911    0                  100.00%
COREFL       1,455    8                  99.45%
Total        7,482    12                 99.84%
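Because every text in these datasets is human-written, accuracy here is simply one minus the false positive rate. A minimal Python sketch reproducing the table’s figures from the raw counts:

```python
# Recompute the table's accuracy figures from raw counts.
# All texts in these datasets are human-written, so
# accuracy = 1 - (false positives / total texts).
datasets = {
    "FCE v2.1": (2_116, 4),   # (texts, false positives)
    "ELLs":     (3_911, 0),
    "COREFL":   (1_455, 8),
}

total_texts = total_fp = 0
for name, (texts, false_positives) in datasets.items():
    total_texts += texts
    total_fp += false_positives
    print(f"{name}: {1 - false_positives / texts:.2%}")  # 99.81%, 100.00%, 99.45%

print(f"Total: {1 - total_fp / total_texts:.2%}")        # 99.84%
```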

FCE v2.1

The accuracy of the AI Detector on this dataset was 99.81%, with only four texts incorrectly flagged as AI-generated. The slight inaccuracy (0.19%) indicates that the model is generally reliable but might struggle with certain nuances or error patterns specific to the FCE corpus. This minor shortfall highlights the need for continual refinement of AI detection algorithms.

ELLs

The AI Detector achieved a 100% accuracy rate, correctly identifying all texts as human-written. This reflects the model’s potential for high precision when applied to similar educational texts. However, the unknown licensing status of this dataset could limit its broader application and validation.

COREFL

The AI Detector was 99.45% accurate on this dataset, misclassifying eight texts as AI-generated. The slightly lower accuracy compared with the other datasets suggests that texts from this corpus may present unique challenges. The model’s performance indicates a need for additional adjustments or training to handle diverse linguistic features more effectively.

Implications and Future Directions

The findings have important implications for various fields, including academic assessments, content moderation, and AI-generated content verification.

  • Refinement and Adaptation: The slight discrepancies in dataset-specific results suggest areas for improvement. Future iterations of the model will benefit from targeted training on datasets with varying linguistic features to enhance performance across diverse text types.

  • Licensing and Usage Considerations: The licensing restrictions associated with some datasets highlight the need for careful consideration when utilizing and publishing research findings. Researchers and practitioners should ensure compliance with licensing agreements to avoid potential legal issues.

  • Broader Applications: The model’s success in accurately detecting human-written texts from non-native English speakers opens avenues for its application in educational and professional settings. It could be a valuable tool for educators, content creators, and researchers working with diverse language learners.

Conclusion

This analysis did find an example of an AI detector substantiating Stanford’s findings of an inherent bias against non-native English writers, but this isn’t necessarily a blanket concern. While some models demonstrate solid overall performance, attention to dataset-specific nuances and licensing considerations remains crucial. As AI detection technology advances, ongoing research and refinement will be vital to maintaining and enhancing its efficacy in various contexts across multiple world languages.
