
ChatGPT: Optimizing Language Models for Dialogue

We optimized ChatGPT to interact with users in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

Methods

We trained this model using Reinforcement Learning from Human Feedback (RLHF), applying the same methods as InstructGPT but with slight differences in the data collection setup. We first trained an initial model with supervised fine-tuning: human AI trainers provided conversations in which they played both sides, the user and an AI assistant.
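As a rough illustration, a trainer-written conversation of this kind can be flattened into a prompt/target pair for supervised fine-tuning. The role names and the plain-text serialization below are assumptions for the sketch, not the actual format used to train ChatGPT.

```python
# Hypothetical sketch: flatten a trainer-written conversation (the trainer
# plays both the "user" and the "assistant") into one supervised example.
from typing import Dict, List, Tuple


def to_sft_example(conversation: List[Dict[str, str]]) -> Tuple[str, str]:
    """Everything up to the final assistant turn becomes the prompt;
    that final assistant turn is the training target."""
    assert conversation and conversation[-1]["role"] == "assistant"
    context, target = conversation[:-1], conversation[-1]["content"]
    prompt = "\n".join(
        f'{turn["role"].capitalize()}: {turn["content"]}' for turn in context
    )
    return prompt + "\nAssistant:", target


if __name__ == "__main__":
    convo = [
        {"role": "user", "content": "What is RLHF?"},
        {"role": "assistant", "content": "Reinforcement Learning from Human Feedback is ..."},
    ]
    prompt, target = to_sft_example(convo)
    print(prompt)
    print(target)
```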

The trainers were given access to model-written suggestions to help them compose their responses. We then mixed this new dialogue dataset with the InstructGPT dataset, which we converted into a dialogue format.
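A minimal sketch of that conversion step, assuming an InstructGPT-style record with hypothetical `instruction` and `output` fields (the post does not specify the schema), recast as a single-turn dialogue and shuffled together with the new dialogue data:

```python
# Illustrative sketch: recast instruction-following examples as one-turn
# dialogues and mix them with the dialogue dataset. Field names and the
# simple shuffle are assumptions for illustration only.
import random
from typing import Dict, List


def instruction_to_dialogue(example: Dict[str, str]) -> List[Dict[str, str]]:
    return [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["output"]},
    ]


def mix_datasets(
    dialogue_data: List[List[Dict[str, str]]],
    instruct_data: List[Dict[str, str]],
    seed: int = 0,
) -> List[List[Dict[str, str]]]:
    combined = list(dialogue_data) + [instruction_to_dialogue(x) for x in instruct_data]
    random.Random(seed).shuffle(combined)
    return combined
```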

To create a reward model for reinforcement learning, we needed to collect comparison data consisting of two or more model responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot, randomly selected a model-written message, sampled several alternative completions, and had trainers rank them.
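The post does not state the exact training objective for the reward model, but comparison data of this kind is typically turned into a pairwise (Bradley-Terry style) loss over every pair implied by a ranking. A self-contained sketch under that assumption:

```python
# Illustrative pairwise ranking loss for a reward model: for each pair of
# replies, penalize the model when the reply ranked higher by trainers does
# not receive a higher scalar reward. This is -log sigmoid(r_better - r_worse),
# averaged over all pairs implied by one ranking.
import math
from itertools import combinations
from typing import List


def pairwise_ranking_loss(rewards_in_rank_order: List[float]) -> float:
    """rewards_in_rank_order[i] is the reward assigned to the reply the
    trainers ranked i-th (0 = best). Lower loss = rewards agree with ranks."""
    pairs = list(combinations(range(len(rewards_in_rank_order)), 2))
    loss = 0.0
    for better, worse in pairs:  # earlier index = ranked higher by trainers
        diff = rewards_in_rank_order[better] - rewards_in_rank_order[worse]
        loss += -math.log(1.0 / (1.0 + math.exp(-diff)))  # -log sigmoid(diff)
    return loss / len(pairs)


print(pairwise_ranking_loss([2.1, 0.7, -0.3]))  # well-ordered: small loss
print(pairwise_ranking_loss([-0.3, 0.7, 2.1]))  # inverted: larger loss
```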

Using these reward models, we can fine-tune the model with Proximal Policy Optimization (PPO). We performed several iterations of this process.
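The post names PPO but gives no further detail. For orientation, here is a minimal sketch of PPO's clipped surrogate objective, the quantity maximized when fine-tuning the policy against the learned reward; advantages (derived from reward-model scores), log-probabilities, the KL penalty, and the value loss are all assumed to be handled elsewhere.

```python
# Minimal sketch of the PPO clipped surrogate objective. Per-token advantages
# and old/new log-probs are assumed precomputed; batching, KL control, and
# the value-function loss are omitted for brevity.
import math
from typing import List


def ppo_clip_objective(
    logp_new: List[float],
    logp_old: List[float],
    advantages: List[float],
    eps: float = 0.2,
) -> float:
    total = 0.0
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)                       # pi_new / pi_old
        clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)  # clip to [1-eps, 1+eps]
        total += min(ratio * adv, clipped * adv)         # pessimistic bound
    return total / len(advantages)                       # maximize this
```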

ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022. You can learn more about the GPT-3.5 series here. ChatGPT and GPT-3.5 were trained on an Azure AI supercomputing infrastructure.

Limitations

  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
  • ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
  • The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
  • Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
  • While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content (a minimal usage sketch follows this list), but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.
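As referenced in the last item above, here is a hedged sketch of screening text with OpenAI's Moderation API via the official Python SDK (openai>=1.x). The response fields follow the current public SDK and may differ from whatever this release used internally.

```python
# Sketch: flag potentially unsafe text with OpenAI's Moderation API using the
# official Python SDK. Requires the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def is_flagged(text: str) -> bool:
    """Return True if the Moderation endpoint flags the text."""
    response = client.moderations.create(input=text)
    return response.results[0].flagged


if __name__ == "__main__":
    print(is_flagged("I want to hurt someone."))
```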

Iterative deployment

Today’s research release of ChatGPT is the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems. Many lessons from the deployment of earlier models like GPT-3 and Codex have informed the safety mitigations in place for this release, including substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human feedback (RLHF).

Author: Samm Joe