Advancing Machine Discernment of Linguistic Cues: A Stylometric Approach to AI-Synthesized Text Detection

Project Details

Description

Artificial intelligence can now generate text that is often indistinguishable from human writing, a capability that raises serious concerns as AI-generated content spreads online and elsewhere. Companies such as Turnitin have built detectors for machine-generated text, but their models struggle to discriminate reliably and still produce many false positives, flagging genuine human writing as AI-created. Detection technology remains at an early stage: most methods examine surface-level patterns and cannot analyse writing style with the depth that a human reader can. This research aims to advance AI-detection capabilities. By testing how effectively current AI systems can deceive human readers, we will gain a clearer picture of their limitations; by developing a new stylometric technique that examines deeper linguistic features, we aim to build a substantially more accurate model for identifying AI-generated text. Reliably separating human from artificial writing could help slow the spread of deceptive AI content, assist educators in detecting plagiarism, and maintain trust in online information. The project therefore has real potential to close an emerging gap in AI detection and to deliver far-reaching benefits for ethics and society.
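As an illustration of the kind of stylometric signals such a detector might draw on, the sketch below computes a handful of common style markers in Python (type-token ratio, sentence-length statistics, and function-word frequency). The specific feature set and the helper `stylometric_features` are assumptions made for this example only; they are not the project's actual method.

```python
# Illustrative sketch only: these are common stylometric measures,
# not the feature set developed in this project.
import re
import statistics
from collections import Counter

# A small set of English function words; real stylometric work uses far larger lists.
FUNCTION_WORDS = {"the", "of", "and", "to", "in", "that", "is", "it", "for", "with"}

def stylometric_features(text: str) -> dict:
    """Compute a few simple style markers for a passage of text."""
    # Crude sentence and word segmentation; a real system would use a proper tokenizer.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not words or not sentences:
        return {}

    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    counts = Counter(words)

    return {
        # Lexical diversity: distinct words divided by total words.
        "type_token_ratio": len(counts) / len(words),
        # Sentence-length statistics capture rhythm and variability.
        "mean_sentence_length": statistics.mean(sentence_lengths),
        "sentence_length_stdev": statistics.pstdev(sentence_lengths),
        # Relative frequency of function words, a classic stylometric signal.
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / len(words),
    }

if __name__ == "__main__":
    sample = "AI can generate fluent text. Detecting it reliably is still hard."
    print(stylometric_features(sample))
```

Features like these would typically be fed to a downstream classifier trained on labelled human and AI-written samples; the project's contribution lies in identifying deeper linguistic features than such surface measures.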

Layman's description

Right now, artificial intelligence (AI) can create text that looks like it was written by a real person, which is causing issues as AI-generated content spreads online and elsewhere. Companies like Turnitin have developed tools to detect when writing is created by AI, but these tools aren’t very accurate yet. They often make mistakes, sometimes even labelling human writing as AI-made.

The technology to reliably spot AI-generated text is still in its early stages. Most current methods focus on simple patterns in the writing, but they can’t analyse the writing style as deeply as a human can. This research aims to improve AI detection tools. By testing how well current AI can trick people, we can better understand its limits. We’re also working on a new way to analyse deeper aspects of writing, which could lead to a much more effective model for detecting AI-generated content.

Being able to reliably tell the difference between human and AI writing could help slow the spread of misleading content, help teachers catch plagiarism, and ensure people can trust the information they read online. This project has the potential to close a critical gap in AI’s development and have a meaningful impact on ethics and society.

Short title: Advancing Machine Discernment of Linguistic Cues
Status: Finished
Effective start/end date: 27/11/23 → 31/07/24
