What did abc read today 🤔
https://openai.com/index/introducing-gpt-5-4/
GPT‑5.4 in Codex includes experimental support for the 1M context window. Developers can try this by configuring model_context_window and model_auto_compact_token_limit. Requests that exceed the standard 272K context window count against usage limits at 2x the normal rate.
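A minimal sketch of what that opt-in might look like, assuming Codex reads these keys from its `config.toml` (the key names come from the announcement above; the file path and values are illustrative assumptions):

```toml
# Hypothetical ~/.codex/config.toml fragment — key names per the announcement,
# values are illustrative, not recommendations.

# Opt in to the experimental 1M-token context window.
model_context_window = 1000000

# Auto-compact conversation history once it reaches this token count,
# leaving headroom below the full window.
model_auto_compact_token_limit = 900000
```

Note the pricing caveat: any request over the standard 272K window counts against usage limits at 2x the normal rate, so the larger window isn't free to use even once configured.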