Policy Analysis: Industrial Policy for the Intelligence Age

OpenAI — April 2026 | Consequence-Based Policy Framework | Analyst: Brandy Mitchell

Internal Consistency: 2/5 (Proposals contradict substrate)
Enforcement Architecture: 1/5 (Voluntary throughout)
Distributional Reach: 2/5 (US-centric, market-bounded)
Democratic Accountability: 1/5 (Author governs itself)
Conflict of Interest: 5/5 (Maximal — structural)

Distributional Index

Pro-Distribution Signals: 34 / 100
Concentration-Preserving Architecture: 71 / 100

Distribution signals are real. They are structurally subordinated to mechanisms that preserve existing power. The document cannot close this gap from within its own framework.

Consequence Test: What Does This Actually Produce?

Policy analysis reads by outcome, not intent. The question is not what OpenAI wants — it is what these mechanisms produce if enacted exactly as written.

Public Wealth Fund — Who Seeds It?

The document proposes AI companies "work with policymakers" to seed the fund but specifies no mandatory contribution amounts, no governance structure, and no democratic oversight mechanism.

Consequence: A fund designed in corporate boardrooms, seeded at corporate discretion, governed by whoever controls the enabling legislation. This is corporate philanthropy rebranded as universal dividend.

Tax Reform — No Rates Specified

The document explicitly declines to propose corporate tax rates, noting only that Trump lowered them to 21%. It proposes the direction of reform without the instrument of reform.

Consequence: The framing shifts public debate while enforcement power stays with whoever controls Congress. The document names the problem and removes its own teeth.

Adaptive Safety Net — Who Defines the Triggers?

The auto-trigger mechanism activates when "displacement metrics exceed pre-defined thresholds." The document does not specify who defines the metrics, who sets the thresholds, or who controls the measurement infrastructure.

Consequence: A safety net that can be permanently set below activation by whoever controls the data. The mechanism exists; the accountability over the mechanism does not.

4-Day Workweek — Pilot, Not Right

The efficiency dividend frames the 4-day workweek as an employer incentive to run time-bound pilots — not a labor right, not a legal standard, not a negotiated floor.

Consequence: The 4-day workweek remains conditional on corporate participation. Economic power announced by those holding it is not the same as economic power transferred. This is the structural distinction between a press release and a policy.

Containment Playbooks — Who Responds?

The document acknowledges dangerous AI systems may be unrecallable once released, then proposes "coordinated playbooks" involving government. It does not propose independent government authority to act without corporate cooperation.

Consequence: OpenAI becomes a necessary partner in any containment effort it helped create the need for. The company that generates the emergency helps design the emergency response. This is regulatory capture embedded in the crisis protocol.

Mechanism Audit: How Does the Logic Actually Run?

Every policy document encodes a decision logic. Tracing that logic reveals what the document structurally permits and prohibits.

Primary Logic Chain

1. AI will create enormous wealth concentration
2. That concentration requires redistribution
3. Redistribution proposals should come from AI companies
4. Governments implement proposals acceptable to companies
5. Workers and communities give feedback through designated channels

This is supply-side redistribution. Capital remains at the top of the decision hierarchy. The beneficiaries of policy are consulted, not empowered.

Authority-by-Proximity Logic

The document repeatedly positions OpenAI as most qualified to propose solutions because it is closest to the technology. Proximity to the system being governed becomes the basis for governance authority.

Applied consistently: defense contractors should write procurement law. Pharmaceutical companies should set drug pricing policy. The banking industry should design financial regulation. This is not expertise — it is monopoly on definition.

Liberation Language as Structural Cover

Terms like "people first," "democratize access," "right to AI," and "share prosperity broadly" are not false. They describe real values. But the mechanisms proposed cannot produce those outcomes from within the framework's own constraints.

This pattern is identifiable across history: reform documents that correctly name the harm, propose insufficient remedies, and foreclose stronger alternatives by occupying the policy space first. The sincerity of the authors is irrelevant to this structural analysis.

Hidden Assumptions: What Must Be True for This to Work?

Every policy document rests on foundational commitments it cannot examine without collapsing. These are the load-bearing walls.

Assumption 1: Capitalism Is the Correct Container

The document states directly: "Capitalism, imperfect as it is, remains an effective system for translating human ingenuity into shared prosperity." This is a cosmological commitment, not a policy claim. It forecloses mechanisms requiring structural redistribution of ownership — not just distribution of returns. The document cannot propose worker ownership of AI infrastructure because its foundational assumption prohibits it.

Assumption 2: US Competitiveness Is the Organizing Value

The document invokes competition with China repeatedly to justify speed, infrastructure concentration, and reduced regulatory friction. This converts a justice question — who benefits from AI? — into a security question — how does America win? These are not the same question and do not produce the same answers.

Assumption 3: Technological Development Pace Is Not Democratically Controllable

The document treats superintelligence as inevitable and imminent — "not a distant possibility." This forecloses the question of whether the pace of development should be subject to democratic authorization. The only permitted question is how to manage what is already coming. Whether to build it is structurally absent.

Assumption 4: OpenAI Is a Legitimate Governance Co-Author

The document assumes a private corporation with fiduciary duties to shareholders is an appropriate co-designer of the regulatory architecture that governs it. This assumption is invisible until named. When named, it fails the most basic conflict-of-interest standard applied in every other governance domain.

What's Excluded: Perspectives the Document Cannot Hold

A document's exclusions are as analytically significant as its inclusions. What questions cannot be asked within this framework?

Labor Sovereignty

A framework in which workers hold inherent authority over the conditions of their labor — not as a policy concession, but as a pre-political right. The document offers workers a "voice" and a "formal way to collaborate with management." In a labor sovereignty framework, workers do not collaborate with management decisions. They make them. The document cannot hold this position without dismantling its own logic.

The Global South as Stakeholder, Not Recipient

AI systems are trained on data extracted from communities across the globe that will neither own nor govern the systems built on it. The document mentions deploying benefits "globally" once. The entire policy architecture is organized around US competitiveness and US workers. The majority world appears as a future market, not a present co-authority.

Non-Market Definitions of a Good Life

The document's entire value framework is organized around economic participation, output, productivity, and labor market attachment. A framework organized around relationship, community, land, rest, or care as primary goods — not as workforce pipeline categories — cannot be expressed within this document's language. "Pathways into human-centered work" still means labor market integration. The alternative — that a good life may not require market participation at all — is structurally unaskable.

Permanent Dignity Floor vs. Triggered Safety Net

The document proposes safety net expansion that activates when displacement crosses thresholds and phases out as conditions stabilize. This is structurally different from a permanent guaranteed floor — income, healthcare, housing — that does not require proving harm to receive. The latter position, argued by King, Friedman, and others across the political spectrum, cannot be held within a framework committed to means-testing and market normalization as baseline conditions.

Alternative Specifications: What Would Actually Work?

Not what OpenAI should have said — what a liberation-aligned industrial policy requires structurally.

Mandatory Ownership, Not Voluntary Distribution

Require worker and community equity stakes in AI infrastructure as a condition of operating license — not a dividend from a fund companies help design. The difference between owning the means and receiving a check from those who do is not semantic. It is the difference between power and dependency.

Democratic Authorization of Development Pace

Subject the pace of AI development to democratic authorization processes — not just democratic management of consequences. Communities most vulnerable to displacement should hold meaningful veto power over deployment timelines in their sectors, not advisory input after decisions are made.

Global Governance Parity

Give the Global South, Indigenous nations, and majority-world communities equal standing in AI governance architecture — not as recipients of access programs, but as co-authorities over systems built on their data and deployed in their contexts.

Permanent Dignity Floor

Replace the auto-trigger safety net with a permanent guaranteed floor — income, healthcare, housing — that does not require proving displacement to receive. The test of a just society is not whether it responds to suffering. It is whether suffering is required to qualify for support.

Conflict of Interest Prohibition

Prohibit AI companies from co-authoring the regulatory frameworks that govern them — the same standard applied to defense contractors, pharmaceutical companies, and financial institutions in other regulatory domains. OpenAI should be a subject of industrial policy. It should not be its author.

Analytical Verdict

This document correctly diagnoses the problem: AI will concentrate wealth, erode the tax base, and displace workers at speed and scale unprecedented in modern economic history. Its proposed remedies are structurally insufficient because they are designed within the system generating the harm.

It is not disinformation. It is something analytically more significant: a sincere proposal that cannot work, authored by actors who benefit from its failure to work, timed to occupy the policy space before external democratic pressure can force a structurally stronger alternative.

The document names the disruption it is causing, proposes to manage that disruption on its own terms, and positions itself as a necessary partner in any governance response. That is not accountability. That is agenda-setting by the entity that should be the subject of the agenda.

The consequence test: if enacted exactly as written, who holds concentrated power in 20 years? The answer is the same entities that hold it now. That is the verdict.