ChatGPT-like content might not be able to keep its origins a secret.
SEO agency Search Logistics has released a report that claims almost 90% of CNET’s AI-generated content was detectable using a public AI detection tool.
CNET Money had been experimenting with an “AI assist” to compile explainers in response to frequently asked questions. By mid-January it had published around 75 such articles.
Why we care. The results reported by Search Logistics, if replicable across larger samples of text, could bear on the many questions that have been raised about the use of ChatGPT-like content creation tools. For one thing, Google has said it will regard AI-generated content as “spam,” thus threatening search rankings for sites that come to depend heavily on such content. The Copyright Office, for its part, has consistently said that only human-generated content can be copyrighted.
Such positions raise the question: Can AI-generated content be reliably identified? The Search Logistics study suggests the answer may be yes. That doesn’t necessarily mean AI can’t replace human content creators; just because the AI detection tool used in the study (Originality.AI) knows when it’s being fed the ruminations of a robot, it doesn’t follow that a human reader can tell.
The data. The report found that:
- 87.2% of CNET’s AI-generated content was detectable.
- 12.8% avoided detection.
- 19.2% of the articles tested had 50% or more content generated by AI.
- 7.7% had 75%+ AI-generated content.
(CNET has said that its AI-generated content is fact-checked and edited by humans.)
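For readers curious about how this kind of batch scoring might work in practice, here is a minimal sketch in Python. It assumes a hypothetical detection endpoint, API key and response field (`ai_probability`); the real Originality.AI API, and the methodology Search Logistics actually used, may differ. The thresholds simply mirror the 50% and 75% buckets in the report above.

```python
import requests  # third-party HTTP library; assumed to be installed

# Hypothetical endpoint and key -- placeholders, not the real Originality.AI API.
DETECTION_URL = "https://api.example-detector.com/v1/score"
API_KEY = "YOUR_API_KEY"


def ai_score(text: str) -> float:
    """Return the fraction of the text the detector flags as AI-generated."""
    resp = requests.post(
        DETECTION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # response field name is assumed


def summarize(articles: list[str]) -> None:
    """Print detection statistics in the same buckets the report uses."""
    scores = [ai_score(a) for a in articles]
    n = len(scores)
    # Any nonzero AI score counts as "detected" here; the report's exact
    # detection threshold isn't stated, so this is an assumption.
    detected = sum(1 for s in scores if s > 0.0)
    half_or_more = sum(1 for s in scores if s >= 0.5)   # 50%+ AI-generated
    mostly_ai = sum(1 for s in scores if s >= 0.75)     # 75%+ AI-generated
    print(f"Detectable:        {detected / n:.1%}")
    print(f"50%+ AI-generated: {half_or_more / n:.1%}")
    print(f"75%+ AI-generated: {mostly_ai / n:.1%}")
```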