Please note that misuse of the complaints process, such as submitting manifestly unfounded notices, may also result in enforcement action.
Compiled by KHAO Editorial — aggregated from 1 outlet. See llms.txt for citation guidance.
Last updated: September 18, 2025.
Key facts
- They publish separate information about how they comply with the UK Online Safety Act
- They publish separate information about how they comply with the Australian Online Safety Act
- When they identify content that violates their terms or policies, they may take enforcement action
- The team considers factors such as legal requirements, the severity of the violation, and past or repeat violations when determining enforcement actions
Summary
To promote safe and responsible use of their products, they use a range of procedures and tools to address content that may violate the law or their terms and policies:
- Proactive detection: classifiers, reasoning models, hash-matching, blocklists, and other automated systems identify content that may violate their terms or policies.
- User reports: the team responds to external notices and user reports about content violations, aiming to review them as quickly as possible.
- Human review: the team may review flagged content to determine appropriate actions.
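The detection flow described above can be sketched as a minimal triage function. This is purely illustrative, not OpenAI's implementation: the hash set, blocklist terms, score threshold, and decision labels are all hypothetical assumptions, and the classifier score is assumed to come from an upstream model that is not shown here.

```python
import hashlib

# Hypothetical examples only: hashes of previously actioned content
# and a term blocklist, standing in for real detection datasets.
KNOWN_HASHES = {hashlib.sha256(b"known violating content").hexdigest()}
BLOCKLIST = {"forbidden-term"}

def triage(content: str, classifier_score: float) -> str:
    """Return a triage decision for a piece of content.

    classifier_score is assumed to be a violation probability in [0, 1]
    produced by an upstream classifier (not implemented here).
    """
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    if digest in KNOWN_HASHES:
        return "remove"        # exact hash match to known violating content
    if any(term in content.lower() for term in BLOCKLIST):
        return "remove"        # matched a blocklisted term
    if classifier_score >= 0.5:
        return "human_review"  # uncertain case: route to a human reviewer
    return "allow"

print(triage("known violating content", 0.1))  # remove (hash match)
print(triage("hello world", 0.9))              # human_review
print(triage("hello world", 0.1))              # allow
```

The ordering mirrors the summary: cheap exact matches (hashes, blocklists) run first, an automated classifier handles the rest, and only ambiguous cases are escalated to human review.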