OpenAI has developed an AI assistant, dubbed CriticGPT, to help its crowd-sourced trainers further refine the GPT-4 model. It spots subtle coding errors that humans might otherwise miss.
After a large language model like GPT-4 is initially trained, it undergoes a continual process of refinement known as Reinforcement Learning from Human Feedback (RLHF). Human trainers interact with the system, annotate its responses to various questions, and rate competing responses against one another, so that the model learns to return the preferred response and its accuracy improves.
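To make that ranking step concrete, here is a minimal sketch of how trainer preferences can become a training signal, using a pairwise (Bradley-Terry style) loss. The toy reward model, the random "embeddings," and the hyperparameters are illustrative assumptions for this sketch, not OpenAI's actual pipeline.

```python
# Minimal sketch of the pairwise preference step in RLHF-style training.
# Assumes a toy reward model over hypothetical response embeddings.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a single scalar score."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical batch of comparisons: embeddings of the response the trainer
# preferred and the one they rejected.
preferred = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Pairwise loss: push the preferred response's score above the rejected one's,
# which is how human rankings are converted into a gradient signal.
loss = -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()
loss.backward()
optimizer.step()
```

In practice the scalar scores from a reward model like this are then used to steer the language model itself toward responses the trainers prefer.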
The problem is that as the system’s performance improves, it can outpace the expertise of its trainers, making mistakes and errors increasingly difficult to identify.
These AI trainers aren’t always subject matter experts, mind you. Last year, OpenAI was caught crowdsourcing the effort to Kenyan workers, paying them less than $2 an hour, to improve its models’ performance.
This problem is especially acute when refining the system’s code-generation capabilities, which is where CriticGPT comes in.
“We’ve trained a model, based on GPT-4, called CriticGPT, to catch errors in ChatGPT’s code output,” the company explained in a blog post Thursday. “We found that when people get help from CriticGPT to review ChatGPT code they outperform those without help 60 percent of the time.”
What’s more, the company released a whitepaper on the subject, titled “LLM Critics Help Catch LLM Bugs,” which found that “LLMs catch substantially more inserted bugs than qualified humans paid for code review, and further that model critiques are preferred over human critiques more than 80 percent of the time.”
Interestingly, the study also found that when humans collaborated with CriticGPT, the rate of hallucinated responses was lower than when CriticGPT worked alone, though still higher than when a human reviewed the code entirely on their own.