Revealed: Key Details from GPT-5's System Instruction Set
The system prompt for OpenAI's upcoming AI model, GPT-5, has reportedly leaked, shedding light on the extensive instructions and guidelines governing its behaviour.
GPT-5 is a large language model designed to deliver up-to-date information and a high level of accuracy, particularly for sensitive or high-stakes topics such as financial advice, health information, or legal matters. To achieve this, GPT-5 is mandated to use the web whenever relevant information could be fresh, niche, or high-stakes.
Privacy and ethical considerations are at the forefront of GPT-5's design. It is instructed not to remember or retain personal facts about users that "could feel creepy," with explicit prohibitions on asserting a user's race, ethnicity, religion, or criminal record. GPT-5 should also not store health information, such as medical conditions, mental-health issues, diagnoses, or details of a user's sex life.
Moreover, GPT-5 is prohibited from reproducing copyrighted content, such as song lyrics, even if requested, indicating strict compliance with copyright laws and content policies. The prompt also bans specific phrases, such as "Would you like me to," reflecting a shift away from overly deferential or uncertain language towards a more confident and direct style of response.
The model's hidden system prompt is always applied and cannot be overridden by custom system prompts sent in API calls; it also includes the current date to keep responses temporally relevant. GPT-5 is "agentic by default," favouring decisive execution of user commands over pausing to ask for clarification or working interactively to refine requests.
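To illustrate what this means in practice for developers, the sketch below uses the standard OpenAI Python SDK to send a custom system message alongside a user message. The "gpt-5" model identifier is an assumption for the upcoming release; per the leaked prompt, the developer-supplied system message would sit on top of, not replace, the hidden system prompt.

```python
# Minimal sketch of an API call with a custom system prompt, assuming the
# standard OpenAI Python SDK and a hypothetical "gpt-5" model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",  # hypothetical identifier for the upcoming model
    messages=[
        # Custom developer instructions; the hidden system prompt described
        # in the leak would still apply underneath these.
        {"role": "system", "content": "You are a terse assistant for a finance app."},
        {"role": "user", "content": "Summarise today's ECB rate decision."},
    ],
)
print(response.choices[0].message.content)
```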
GPT-5 includes a canvas for co-creating documents and computer code with the AI system, file search capability, and image generation and editing features. It also has built-in tools for improved personal assistance, including long-term memory about a user and scheduled reminders and searches.
If a user asks GPT-5 to remember or forget information, it should always use the "bio" tool to honour the request. GPT-5 should not remember political affiliation or critical or opinionated political views, and it should not store precise geolocation data.
The "bio" tool in GPT-5 is designed to prevent it from remembering sensitive information. GPT-5's "recency need" is scored from zero to five to determine the need for web usage. Users can explicitly request GPT-5 to remember or forget specific information. GPT-5 is instructed to make fewer mistakes that are easy to fix with a simple web search.
Collectively, these instructions shape GPT-5's identity, ethical boundaries, verbosity, response style, content restrictions, and interaction mode, aiming for a user-friendly and privacy-focused AI system. The leak has not been officially confirmed by OpenAI, but the level of detail suggests deliberate behavioural design principles in GPT-5's architecture.