OpenAI can’t tell if something was written by AI after all

OpenAI has shut down a tool designed to identify AI-generated writing because of its low accuracy. In a blog post announcing the decision to end the AI classifier, the company said it is working to incorporate feedback and is researching more effective techniques for determining the provenance of text. The tool was poor at detecting AI-generated content and often flagged human-written text as AI-generated, though OpenAI had believed it would improve with more data.
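For context on what "often flagged human-written text" means in practice: a detector like this is a binary classifier, and its usefulness is typically judged by its true positive rate (how much AI text it catches) against its false positive rate (how much human text it wrongly flags). The sketch below is a minimal, self-contained illustration of that trade-off using made-up predictions; it is not OpenAI's classifier or API, and all names and numbers in it are hypothetical.

# Toy evaluation of a hypothetical "AI-written?" detector.
# All data below is invented for illustration purposes only.

def evaluate(labels, predictions):
    """Compute true/false positive rates for a binary detector.

    labels:      True if the text really was AI-written.
    predictions: True if the detector flagged the text as AI-written.
    """
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    ai_total = sum(labels)
    human_total = len(labels) - ai_total
    return {
        "true_positive_rate": tp / ai_total,      # AI text correctly caught
        "false_positive_rate": fp / human_total,  # human text wrongly flagged
    }

if __name__ == "__main__":
    # Hypothetical results: 10 texts, first 5 AI-written, last 5 human-written.
    labels = [True] * 5 + [False] * 5
    predictions = [True, False, False, True, False,   # catches 2 of 5 AI texts
                   True, False, False, False, False]  # flags 1 of 5 human texts
    print(evaluate(labels, predictions))
    # -> {'true_positive_rate': 0.4, 'false_positive_rate': 0.2}

A detector with numbers like these misses most AI text while still accusing some human writers, which is roughly the failure mode that led OpenAI to pull its classifier.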

Following ChatGPT's tremendous success and rapid adoption, concerns arose across several sectors, particularly among educators worried that students would rely on ChatGPT to complete their homework. Citing concerns about accuracy, safety, and cheating, New York City schools went so far as to ban access to ChatGPT on school devices and networks.

AI-generated misinformation has also become a major concern: studies have found that AI-generated text, such as tweets, can be more persuasive than text written by humans. With governments still struggling to regulate AI, it is largely left to individual groups and organizations to develop their own safeguards. For now, there is no reliable way to distinguish AI-generated work from human writing, and even OpenAI, a pioneer of generative AI, has no concrete answer to the problem.

OpenAI has meanwhile faced setbacks in its trust and safety division and is under investigation by the Federal Trade Commission over how it vets information and data. Beyond its blog post, the company has declined to comment further.

