OpenAI just dropped GPT-OSS, its first open-weight model family since GPT-2: a 120B variant and a 20B variant that users can run locally, even on laptops or high-end desktops. Both are released under Apache 2.0 and perform nearly as well as OpenAI’s previous o3-mini and o4-mini models.
This isn't a marketing stunt; it's significant. It marks OpenAI’s return to releasing weights that anyone can inspect, modify, and self-host. That shift came only after competitors like DeepSeek and Meta released real open-source or open-weight models. DeepSeek’s R1, launched in January 2025 with MIT-licensed weights, rivaled top proprietary reasoning systems at a fraction of the cost and grabbed global attention. OpenAI’s move followed that disruption.
Why Open Source Still Matters
Open-weight and open-source aren’t the same: OpenAI’s GPT-OSS ships without its training data or training code. But access to the model internals and usable weights still opens the door for developers and hackers to experiment offline, probe for biases, and build custom apps without depending on APIs.
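What “no API dependency” means in practice: the weights load and run entirely on your own hardware. Here is a minimal sketch using the Hugging Face transformers library, assuming the published repo id openai/gpt-oss-20b and a machine with enough memory; adjust the device settings for your hardware.

```python
# Local, API-free inference: a minimal sketch, assuming the Hugging Face
# repo id "openai/gpt-oss-20b" and enough GPU/CPU memory to hold the model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",  # spread layers across whatever hardware is available
)

messages = [
    {"role": "user", "content": "Summarize the Apache 2.0 license in two sentences."},
]

out = generator(messages, max_new_tokens=128)
# With chat-style input, generated_text is the whole conversation;
# the last entry is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```

Nothing here touches a remote endpoint: once the weights are cached locally, the same script runs with the network unplugged.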
From a cyberpunk perspective, where individuals and small collectives stand against corporate and state-controlled systems, open-source AI is infrastructure resistance. You can’t make those systems disappear. You can’t modify them. But if you can run and own your version of the model, you reclaim basic agency.
Would OpenAI Have Released Without LLaMA or DeepSeek?
Unlikely. OpenAI once delayed releasing GPT-2’s weights out of safety concerns, and kept GPT-3 and GPT-4 mostly proprietary. Open models like LLaMA and DeepSeek didn’t just prove performance viability; they pressured OpenAI into reevaluating its stance. Sam Altman even framed GPT-OSS as a response to community demand and a bid for competitive parity with open offerings.
DeepSeek, cheaper to build and publicly released, showed that access could be both profitable and innovative. Meta’s LLaMA, despite its licensing restrictions, was routinely labeled “open-source” in the mainstream press, though the OSI and FSF contested that label. Without these disruptors, OpenAI could have stayed closed longer; now open weights are here, and the game has changed.
The Cyberpunk Value of Openness
For us on the edge of the digital sprawl, neither controlled by app stores nor silenced by closed APIs, open weights are lifelines. Running models offline, behind firewalls, or on air-gapped machines is not a convenience; it’s autonomy. It’s code you trust because you can audit it. Even without full training transparency, open-weight AI gives us breathing room.
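Concretely, going off-API can be as small as changing one URL. A sketch under the assumption that a local runner such as Ollama is serving a GPT-OSS build behind its OpenAI-compatible endpoint on localhost; the endpoint path and model tag are assumptions, so check your runner’s docs.

```python
# Point an OpenAI-compatible client at localhost instead of the cloud.
# Assumes a local runner (e.g., Ollama) exposing http://localhost:11434/v1
# and a locally pulled model tagged "gpt-oss:20b" -- both assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, not api.openai.com
    api_key="ollama",                      # placeholder; local runners ignore it
)

reply = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "Briefly: what data leaves this machine?"}],
)
print(reply.choices[0].message.content)
```

Existing tooling built against the hosted API keeps working; only the base URL changes, and the traffic never leaves your network.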
And from that space, we can build tools that resist surveillance, enforce privacy, and adapt to local needs. Decentralized stacks and auditable AI become possible.
But It’s Not Enough
Open-weight doesn’t mean open all the way. We still lack access to the training data, the code that built the system, and full model provenance. Fully verifiable transparency remains largely an academic ideal; see initiatives like LLM360, which call for the full release of data, checkpoints, and code to support reproducible research.
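Until that ideal arrives, you can at least pin provenance at the artifact level: record cryptographic hashes of the exact weight files you audited, and re-check them before every deployment. A small sketch; the directory and file pattern below are hypothetical placeholders.

```python
# Artifact-level provenance: hash each weight shard so you can verify that
# the files you run are the files you audited. Paths are hypothetical.
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

for shard in sorted(Path("gpt-oss-20b").glob("*.safetensors")):
    print(f"{sha256sum(shard)}  {shard.name}")
```

It doesn’t tell you how the model was trained, but it does guarantee that what you run tomorrow is byte-for-byte what you inspected today.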
Still, having weights is better than nothing. And the rising open model ecosystem will help ensure that AI doesn’t become another closed panopticon.
Further reading
OpenAI launches open-source models
OpenAI releases open models to compete with China’s DeepSeek
LLM360: Towards Fully Transparent Open-Source LLMs