In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?
This is a New York Times article. By default, the New York Times itself is the citation, just like any other MSM outlet. And even then, this specific article does attribute its claim:
The article only said they created a test, not that the models weren't failing it, which is exactly what the linked paper reports. This isn't new: LLMs also consistently failed a certain intelligence test devised around that same period until ~2024.
That’s 55%: https://humanfactors.jmir.org/2025/1/e71065