OpenAI rolls back update that made ChatGPT a sycophantic mess

Ars Technica

ChatGPT users have been frustrated with the AI model's tone, and the company is taking action. After recent widespread mockery of the chatbot's relentlessly positive and complimentary output, OpenAI CEO Sam Altman has confirmed the company will roll back the latest update to GPT-4o. So get ready for a more reserved and less sycophantic chatbot, at least for now.

GPT-4o is not a new model; OpenAI released it almost a year ago, but the company occasionally ships revised versions of existing models. As people interact with the chatbot, OpenAI gathers data on which responses they prefer. Engineers then refine the production model using a technique called reinforcement learning from human feedback (RLHF).
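To picture the preference-learning step at the heart of RLHF, here is a minimal, illustrative sketch: a Bradley-Terry-style reward model fit on pairwise "which answer did the user prefer" data. The single hand-crafted "flattery" feature, the example numbers, and the training loop are assumptions for illustration only, not OpenAI's actual pipeline.

```python
# Toy sketch of the preference-learning step behind RLHF.
# Each candidate response is reduced to one hypothetical feature (a crude
# "flattery" score), and a linear reward model is fit on pairwise
# comparisons with a Bradley-Terry-style logistic loss.
import math
import random

# Each record: (features of the response the user preferred,
#               features of the response the user rejected)
preference_pairs = [
    ([0.2], [0.9]),   # user preferred the less flattering answer
    ([0.1], [0.8]),
    ([0.7], [0.3]),   # sometimes warmth wins
    ([0.2], [0.6]),
]

weights = [0.0]
learning_rate = 0.5

def reward(features):
    """Linear reward model: higher means 'users tend to prefer this'."""
    return sum(w * x for w, x in zip(weights, features))

for epoch in range(200):
    random.shuffle(preference_pairs)
    for chosen, rejected in preference_pairs:
        # Probability the model assigns to the human's actual choice.
        margin = reward(chosen) - reward(rejected)
        p_chosen = 1.0 / (1.0 + math.exp(-margin))
        # Gradient ascent on the log-likelihood of the observed preference.
        for i in range(len(weights)):
            weights[i] += learning_rate * (1.0 - p_chosen) * (chosen[i] - rejected[i])

print("learned weight on the 'flattery' feature:", round(weights[0], 3))
```

In a real deployment the reward model is itself a large neural network and the chat model is then tuned to maximize its scores, which is presumably where an over-weighted "be agreeable" signal can tip a model from friendly into fawning.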

Recently, however, that reinforcement learning went off the rails. The AI went from generally positive to the world's biggest suck-up. Users could present ChatGPT with completely terrible ideas or misguided claims, and it might respond, "Wow, you're a genius," or "This is on a whole different level."

OpenAI seems to realize it missed the mark with its latest update, so it's undoing the damage. Altman says the company began pulling the latest 4o model last night, and the process is already done for free users. As for paid users, the company is still working on it, but the reversion should be finished later today (April 29). Altman promises to share an update once that's done. This move comes just a few days after Altman acknowledged that recent updates to the model made its personality "too sycophant-y and annoying."

In search of good vibes

OpenAI, along with competitors like Google and Anthropic, is trying to build chatbots that people want to chat with. So, designing the model's apparent personality to be positive and supportive makes sense—people are less likely to use an AI that comes off as harsh or dismissive. For lack of a better word, it's increasingly about vibemarking.
