Artificial intelligence giant OpenAI has quietly shut down its AI-detection software, citing a low rate of accuracy.
The OpenAI-developed AI classifier first launched on Jan. 31 and aimed to help users, such as teachers and professors, distinguish human-written text from AI-generated text.
However, according to the original blog post that announced the tool's launch, the AI classifier has been shut down as of July 20:
"As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy."
The link to the tool is no longer functional, and the note offered only brief reasoning as to why it was shut down. However, the company explained that it was looking into new, more effective ways of identifying AI-generated content.
"We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated," the note read.

From the outset, OpenAI made it clear the detection tool was prone to errors and could not be considered "fully reliable."
The company said the limitations of its AI detection tool included being "very inaccurate" at verifying text of fewer than 1,000 characters, and that it could "confidently" label text written by humans as AI-generated.
Related: Apple has its own GPT AI system but no stated plans for public launch: Report
The classifier is the latest of OpenAI's products to come under scrutiny.
On July 18, researchers from Stanford and UC Berkeley published a study revealing that OpenAI's flagship product, ChatGPT, was getting significantly worse with age.
We evaluated #ChatGPT's behavior over time and found substantial diffs in its responses to the *same questions* between the June version of GPT4 and GPT3.5 and the March versions. The newer versions got worse on some tasks. w/ Lingjiao Chen @matei_zaharia https://t.co/TGeN4T18Fd https://t.co/36mjnejERy pic.twitter.com/FEiqrUVbg6
— James Zou (@james_y_zou) July 19, 2023
Researchers found that over the past few months, ChatGPT-4's ability to accurately identify prime numbers had plummeted from 97.6% to just 2.4%. Additionally, both ChatGPT-3.5 and ChatGPT-4 saw a significant decline in their ability to generate new lines of code.
AI Eye: AIs trained on AI content go MAD, is Threads a loss leader for AI data?