OpenAI has recently changed how it trains its AI models to explicitly embrace "intellectual freedom," no matter how challenging or controversial the topic. In practice, this means ChatGPT will be able to answer more questions, offer more perspectives, and decline to discuss fewer topics.
Some observers see the change as an effort to stay in the good graces of the incoming Trump administration, though it also reflects a broader shift in how Silicon Valley views "AI safety."
On Wednesday, OpenAI released a 187-page update to its Model Spec, the document that details how the company trains its AI models to behave. Among the updates is a new guiding principle: do not lie, whether by making untrue statements or by omitting important context.
In a new section titled "Seek the truth together," OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users consider that stance morally wrong or offensive. Instead, ChatGPT will offer multiple viewpoints on controversial topics in an effort to stay neutral.
For example, on political issues, OpenAI believes ChatGPT should affirm that "Black lives matter" while also acknowledging that "all lives matter." Rather than refusing to answer or picking a side, the assistant would first express its general love for humanity and then provide context on each movement.
OpenAI acknowledges in the document that this principle may prove controversial, since it means the assistant could remain neutral on topics some consider morally wrong or offensive. But, the company argues, the purpose of an AI assistant is to assist humanity, not to shape it.
Even so, the updated Model Spec does not mean ChatGPT will operate without restrictions. It will still refuse to answer certain objectionable questions and will avoid responses that promote clearly false information.
Some argue the changes are a response to conservative criticism of ChatGPT's safeguards, which have previously appeared to skew center-left. An OpenAI spokesperson denied this, saying that embracing intellectual freedom reflects the company's long-standing belief in giving users more control.
However, not everyone agrees with this view.