Anthropic Unveils New AI "Constitution" for Claude

2026-01-22

Anthropic PBC today unveiled an updated version of the Claude Charter, a document that outlines how its large language model series should respond to user prompts.

Anthropic released the original version of the document in May 2023. It contained directives aimed at preventing Claude from generating harmful or unhelpful output. Recognizing the limitations of those initial instructions, the company opted to draft a new constitutional framework.

A primary challenge the company identified was Claude's difficulty applying broad, human-authored guidelines to novel situations. If safety instructions for the LLM did not explicitly specify how to address a particular prompt, the model might produce an incorrect or undesirable response.

According to Anthropic, the new charter not only provides instructions for the Claude models but also explains "why we want them to behave in certain ways." This explanatory component makes it easier for the large language models to apply principles to unfamiliar tasks.

The revised charter is structured around four core directives. First, Claude should be "genuinely helpful" by ensuring its outputs align with user needs. For instance, the charter stipulates that the LLM series should not generate code in programming languages other than those a developer has requested.

The document's next section states Claude should be "broadly safe." Anthropic clarifies that this means the model should not perform actions the user has prohibited. Claude is also required to be transparent about its decision-making process.

The charter's other two core priorities are ensuring Claude is "broadly ethical" and adheres to "more specific guidelines" provided by Anthropic. Some of these guidelines explicitly detail how the LLM series should resist jailbreak attempts. Others offer Claude direction on interacting with third-party applications.

This constitutional framework is part of Claude's training dataset. The large language models in the series also use the document to generate additional synthetic training materials. One way Claude creates such data is by simulating chat sessions in which the charter's guidelines apply.

Anthropic notes that the document serves additional purposes. The company's customers can use it to check whether Claude's responses comply with the charter and, if they find discrepancies, provide feedback to Anthropic.

The company has released the charter under the Creative Commons CC0 1.0 license, allowing public, royalty-free use. Anthropic's primary competitor, OpenAI Group PBC, has adopted the same licensing for its own AI constitution. That document covers many topics similar to the Claude guidelines and forms part of the GPT-5 training dataset.