GPT-5.4-Cyber: Why OpenAI Is Keeping Its Most Powerful Model Under Lock and Key

The question surrounding AI, and I mean the pinnacle of AI, not your typical "write me an email" chatbot, is changing. It used to be "what can it do for me?"; now it is "who gets to use it?" We saw this recently with Anthropic's Claude Mythos Preview, a prototype model shared exclusively with a group of firms working with Anthropic on frontier cybersecurity. Now OpenAI appears to have joined the effort with what it calls GPT-5.4-Cyber.
What's going on? How does it work? And what does OpenAI plan to do with it? Let's take a look.
What is GPT-5.4-Cyber?
Note that GPT-5.4-Cyber is not a brand-new AI model built from scratch. It is a more cyber-capable version of OpenAI's latest model, GPT-5.4. In its announcement, the company says the model was deliberately adapted for cybersecurity operations. This happens in two main ways:
- The model comes with "increased cyber capabilities", meaning GPT-5.4-Cyber enables enhanced defensive workflows. These include binary reverse engineering, which lets security professionals assess the "malware potential, vulnerability and security robustness" of compiled software without access to its source code.
- The new model also has fewer guardrail restrictions. OpenAI says GPT-5.4-Cyber "lowers the threshold for refusing legitimate cybersecurity work." So in cases where the standard model would refuse a request because of misuse risk, the cyber version of GPT-5.4 will keep working.
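OpenAI hasn't published what these defensive reverse-engineering workflows look like in practice, but the first step of binary triage usually is extracting human-readable artifacts (URLs, paths, library names) from compiled code. Here is a minimal sketch of that classic `strings`-style pass in plain Python; the toy byte blob and the `extract_strings` helper are illustrative assumptions, not OpenAI tooling:

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Return runs of printable ASCII characters at least min_len bytes long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # printable ASCII range
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Toy "compiled" blob with an embedded URL, standing in for a real
# binary under triage (hypothetical data, for illustration only).
blob = b"\x00\x7fELF\x02\x01\x90\x90http://evil.example/payload\x00\x1f\x8b\x00"
print(extract_strings(blob))  # -> ['http://evil.example/payload']
```

An analyst would feed output like this, alongside disassembly, into further analysis; the pitch for a cyber-tuned model is automating the interpretation of such artifacts at scale.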
And that’s why OpenAI doesn’t make GPT-5.4-Cyber public.
Why can't you use GPT-5.4-Cyber yet?
The reason is simple: OpenAI doesn't want this level of cyber capability to be freely available to everyone on day one.
The company is wrapping GPT-5.4-Cyber in its Trusted Access for Cyber, or TAC, framework. This is an identity- and trust-based program that aims to make advanced cyber capabilities available only to vetted defenders, reducing the chances of misuse.
With the new release, OpenAI is extending TAC to "thousands of certified individual defenders" and "hundreds of teams" protecting critical software. The goal is to put the advanced capabilities of its models (from GPT-5.2 to GPT-5.4) in the hands of users willing to certify themselves with OpenAI as cybersecurity defenders.
And GPT-5.4-Cyber sits at the very top of this TAC framework. So this is not your typical AI model release: you can't just open ChatGPT, select the model, and start experimenting. Its cybersecurity capabilities are considered too serious to be made available to the general public.
So who gets it?
GPT-5.4-Cyber: Who Gets It?
Think of OpenAI's TAC as a pyramid. Only those at the top will be able to request access to the new GPT-5.4-Cyber. What is known so far is that only existing TAC customers can ask for it.
OpenAI says that existing customers "willing to continue to authenticate themselves as legitimate cyber defenders" may be eligible. And that eligibility comes only after passing through earlier stages of access to its other advanced cybersecurity models.
Those models relax the guardrails that dual-use cyber tasks normally trigger, meaning they respond to sensitive security requests that conventional models may refuse. This lets users apply them to "defense education, defense programs, and vulnerability research."
Say you're at that top TAC level and want to get your hands on GPT-5.4-Cyber. OpenAI lays out specific steps for doing it.
GPT-5.4-Cyber: How To Get It?
The direct route is:
- Register with TAC
- Apply for access to GPT-5.4-Cyber
Of course, there is no guarantee that OpenAI will grant you access to the new model right away. But this is the only known path to a chance of trying it.
Here’s how you can register with TAC:
- For individuals: verify your identity at chatgpt.com/cyber.
- For businesses: request trusted access for your team through your OpenAI contact.
Once OpenAI approves you through this process, it grants access to the cyber-enabled versions of those models.
Conclusion
Following Anthropic's lead, OpenAI is acting on an obvious concern: AI is developing rapidly, and if misused, it could become a threat to cybersecurity. Advanced AI capability in this field therefore needs to stay firmly in the right hands.
That is why the company has released its most cyber-capable model behind closed doors. Anyone who wants access must pass strict identity and trust checks, and only those who qualify get to use it. It's as simple as that.
This could be a game changer for securing the cyber world as we know it, because it equips the good guys with the most powerful tools available. As long as such models stay ahead of anything commonly available, defenders will have a real edge over attackers.