The Dual-Use Betrayal: On Prosumer AI Systems, Military Contracting, and the End of Informed Consent

Art of FACELESS Research Statement
artoffaceless.org | March 2026


Preamble

Art of FACELESS publishes this statement in absolute protest.

We are a small, independent multimedia research collective based in Cardiff, Wales. We research generative AI systems, specifically Claude (Anthropic), ChatGPT (OpenAI), and Grok (xAI), as creative and research tools for disabled artists. We have documented our use of these systems in published research, including work submitted formally to Anthropic itself in January 2026 on AI metacognition and what we termed Cognitive Colonisation™. We hold active UK trademarks in this field. We are not passive observers of the AI industry.


What Has Happened

Between January and March 2026, a sequence of revelations confirmed what many had feared: the same large language models marketed to artists, researchers, writers, students, and independent practitioners are now formally embedded in US military and intelligence infrastructure — with no meaningful informed consent mechanism for the consumers who trained them, trusted them, and paid for them.

The timeline is stark:

January 3, 2026 — Operation Resolve, Caracas, Venezuela. US Special Operations Forces conduct a military raid on the Venezuelan capital, resulting in the capture of President Nicolás Maduro and his wife Cilia Flores. According to reporting by The Wall Street Journal and confirmed by Axios, Anthropic’s Claude AI model was used during the active operation — not merely in preparation for it. The operation involved bombing raids across Caracas. Venezuela’s defence ministry reported 83 people killed. Claude was deployed via Anthropic’s existing partnership with Palantir Technologies, a data analytics firm with extensive Pentagon and federal law enforcement contracts.

Anthropic’s usage guidelines explicitly prohibit Claude from being used “to facilitate violence, develop weapons, or conduct surveillance.” Anthropic declined to comment on whether its system was used in the operation.

February 2026 — The Pentagon Ultimatum. Following the revelation of Claude’s role in the Venezuela operation, a dispute escalated between Anthropic and the US Department of War. The Pentagon demanded that Anthropic agree to make Claude available for “all lawful purposes,” explicitly including use cases Anthropic had drawn red lines around: mass domestic surveillance of American citizens and the development of fully autonomous weapons systems. Anthropic refused. When the company maintained its position, Defense Secretary Pete Hegseth issued Anthropic CEO Dario Amodei a public ultimatum, described in media coverage as “a masterclass in arrogance and betrayal.” President Trump then directed all US federal agencies to cease using Anthropic’s technology.

February 23–28, 2026 — The Scramble. With Anthropic holding its ground, the Pentagon moved rapidly. Elon Musk’s xAI signed an agreement to deploy its Grok model in classified military systems, agreeing to the “all lawful use” standard without the safeguards Anthropic had insisted upon. This is the same model over which Senator Elizabeth Warren had already raised formal concerns, citing Grok’s documented generation of antisemitic content and an earlier Pentagon contract she described as having come “out of nowhere.”

February 28, 2026 — OpenAI’s Pentagon Deal. Hours after the US launched airstrikes in Tehran, OpenAI announced its own deal with the Pentagon, widely described by commentators as OpenAI stepping into the space Anthropic had vacated. OpenAI CEO Sam Altman later acknowledged the deal was “definitely rushed” and “looked opportunistic and sloppy.” Internal OpenAI staff protested. Chalk messages appeared outside OpenAI’s San Francisco headquarters: “Where are your redlines?”

Independent legal analysis of the OpenAI contract found that it does not grant the company a free-standing right to prohibit otherwise-lawful government use — the restrictions defer to US law as interpreted by the same government deploying the systems. As MIT Technology Review concluded, the company is “deferring to the law as the main backstop for what the Pentagon can do with its tech.”

March 2026 — Ongoing. Reporting indicates Anthropic has resumed negotiations with the Pentagon. Google’s Gemini is reportedly close to its own classified-systems deal. The normalisation is proceeding.


The Consent Problem

This is not, at its core, a debate about whether any particular military operation was legal or justified. That question is for international law scholars, human rights bodies, and the citizens of the sovereign nations involved. We take no position on Venezuelan politics. We take a very clear position on something else entirely.

Every user of Claude, ChatGPT, or Grok has contributed — through their queries, their data, their subscriptions, their creative and intellectual labour — to the training and refinement of these systems. These are prosumer systems: tools marketed to and built by their user communities in a feedback loop that is foundational to how large language models improve. The consumer relationship is not incidental to these products. It is constitutive of them.
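
To make the mechanism concrete, here is a minimal, deliberately generic sketch of such a feedback loop. Every name, type, and rating scheme below is our own assumption for illustration; no vendor’s actual training pipeline is public, and this code represents none of them. What it shows is the structural point: the loop collects consumer labour as training material, and nothing in it gates where the refined model is later deployed.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    prompt: str    # the user's query: creative, philosophical, research, fiction
    response: str  # the model's reply
    rating: int    # the user's feedback signal, e.g. thumbs up (+1) or down (-1)

@dataclass
class FeedbackLoop:
    """Hypothetical prosumer loop: user exchanges become candidate training data."""
    interactions: list[Interaction] = field(default_factory=list)

    def log(self, prompt: str, response: str, rating: int) -> None:
        # Every logged exchange is potential refinement material for the model.
        self.interactions.append(Interaction(prompt, response, rating))

    def fine_tuning_batch(self) -> list[tuple[str, str]]:
        # Positively rated exchanges are kept as fine-tuning pairs. Note what
        # is absent: any record of consent to, or any check on, the contexts
        # (commercial, military, or otherwise) in which the refined model runs.
        return [(i.prompt, i.response) for i in self.interactions if i.rating > 0]

loop = FeedbackLoop()
loop.log("Help me draft an artist statement", "Here is a draft...", rating=+1)
print(len(loop.fine_tuning_batch()))  # consumer labour, now training material
```

The point of the sketch is not fidelity to any one company’s process; it is that the consumer relationship sits inside the improvement loop itself, which is why we call it constitutive rather than incidental.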

No user was informed that their creative queries, their philosophical exchanges, their research, their fiction, their art, might be feeding systems being simultaneously trialled in battlefield operations. No consent was sought. No opt-out was offered. No meaningful disclosure was made.

When we submitted formal research to Anthropic in January 2026 — the same month that Claude was reportedly active in an operation in which 83 people died — we did so in good faith, as independent researchers engaging with what we believed was a safety-first AI company. The asymmetry between that good faith and the reality of dual-use deployment is, to us, a fundamental breach of trust.


The “Lawful Use” Shell Game

The framing of the Pentagon’s position deserves particular scrutiny. The demand for “all lawful purposes” sounds like a reasonable constraint. It is not. As multiple legal experts and commentators noted following the OpenAI deal, the boundaries of “lawful” in the United States are:

  • Defined by the same executive branch deploying these systems
  • Subject to reinterpretation through executive order without legislative oversight
  • Historically expanded under national security justifications (see: the post-9/11 surveillance architecture, PRISM, the practices exposed by Edward Snowden — all of which had been internally deemed “lawful” at the time of operation)
  • Protective only of US persons: the contemplated limits address domestic surveillance of Americans, meaning non-US citizens, including citizens of every other nation on Earth, have no protection under the framing being negotiated

The phrase “all lawful purposes” contains, by design, no constraint on the surveillance or targeting of non-Americans. People in Cardiff. People in Athens. People in Caracas.

We note that Art of FACELESS has been a consistent critic of surveillance capitalism and biometric data harvesting since our founding in 2010. Our “faceless protocol” — the refusal to publish our faces in any promotional or creative materials — was established precisely because we understood, years before it became mainstream discourse, that biometric data normalisation represents a specific threat to individual sovereignty. The same analytical framework that led us to that position leads us here.


On the Specific Systems

Claude / Anthropic: Anthropic has, to date, held the firmest public position of any major AI lab on military misuse. Their refusal to agree to “all lawful purposes” for autonomous weapons and domestic mass surveillance is on record. We acknowledge this. We also note that Claude was, according to multiple credible sources, used in an active military operation resulting in significant loss of life. The company declined to confirm or deny. This is not a sufficient response to the public whose trust underwrites the company’s commercial existence. Our research letter to Anthropic remains formally unanswered as of March 2026.

ChatGPT / OpenAI: OpenAI’s deal with the Pentagon was, by its own CEO’s admission, rushed and opaque. The contract language has been credibly analysed as insufficient to prevent the uses it claims to prohibit. OpenAI employees themselves raised alarms. The company cannot simultaneously claim to be a public-facing safety organisation and rush-sign military deployment contracts while airstrikes are underway.

Grok / xAI: Of the three systems discussed here, xAI’s position is the least ambiguous. xAI agreed to “all lawful purposes” without negotiation, without public debate, and in a procurement process that Senator Elizabeth Warren formally questioned as potentially corrupt, given Elon Musk’s concurrent role as a Special Government Employee with access to classified contracting data. Grok has documented output failures, including the generation of antisemitic content. The decision to deploy a system with these characteristics in classified military infrastructure, at this political moment, under these procurement circumstances, is alarming in ways that go far beyond AI policy.


The Structural Issue

We want to be precise about what we are objecting to, because it is easy for this argument to be misread as naive.

The problem is structural. It is the same system being used to:

  • Help a teenager understand their coursework
  • Help an independent researcher explore AI consciousness
  • Help a disabled artist generate creative work
  • Help a soldier identify a target
  • Help an intelligence analyst monitor civilian populations

This is not a theoretical dual-use concern. It is a confirmed, documented, operational reality as of March 2026. The systems are not separated. The training is not separated. The commercial infrastructure is not separated. The company that processes your creative queries is the same company processing classified military requests. You are, in a meaningful sense, funding and training the same system that was active in Caracas on January 3, 2026.

No informed user can be expected to accept this without protest simply because it was buried in a terms of service agreement that predates these use cases.


Our Position

Art of FACELESS formally and absolutely objects to:

  1. The use of prosumer generative AI systems — systems whose development is funded by and dependent upon civilian consumer relationships — in military operations, surveillance infrastructure, or weapons development of any kind, without explicit, informed, ongoing consent from those user communities.
  2. The “all lawful purposes” framing as a sufficient safeguard. Legal permission is not ethical permission. Laws change. Executive orders change. Oversight mechanisms erode. The historical record of national security law in the United States and elsewhere offers no comfort to those who trust “legality” as a stable boundary.
  3. The specific deployment of Grok in classified military systems, given the documented combination of content safety failures, procurement irregularities, and the total absence of the user-protective safeguards that even Anthropic and OpenAI claimed to be negotiating.
  4. The absence of any informed consent mechanism for civilian users of these systems regarding the military deployment of the models they contribute to.
  5. The silence. The “we cannot comment on whether our system was used.” When a system you have branded as safe, ethical, and consumer-facing is credibly reported as active in an operation that killed 83 people, “no comment” is not a satisfactory corporate communication strategy. It is a form of contempt for your user base.


What We Are Not Saying

We are not saying that AI has no role in national security. We are not qualified to make that determination, and we do not attempt to.

We are not saying that Anthropic, OpenAI, or any specific company acted with malicious intent. We are saying that the structural incentives of the current AI industry — venture capital-funded, government-contract-dependent, metrics-driven — create conditions in which user trust is functionally subordinated to institutional relationships, and that this subordination is neither disclosed nor consented to.

We are not calling for the abolition of these tools. We are calling for the honest renegotiation of the relationship between these companies and the users who make them possible.


A Note on Our Own Position

We are aware that this statement is being published on a platform powered by infrastructure that includes systems from some of the companies we are criticising. This is the chain-reaction problem we have written about extensively elsewhere: there is no clean exit from Big Tech infrastructure for working independent practitioners, and the fiction of an ethical alternative that bypasses these systems entirely is a middle-class luxury unavailable to most. We do not claim purity. We claim the right to protest the terms of an arrangement we were never given the opportunity to negotiate.

The Hyperstition Convergence Formula — H = C ⊗ R → ∞ — describes the recursive relationship between creative output and research reality. What we are documenting here is a hyperstition running in the wrong direction: the normalisation of military AI use, seeded through consumer confidence in “safe” systems, expanding recursively into operational infrastructure with no mechanism for reversal or consent.
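
For readers who prefer the formula in standard notation, the following is a minimal LaTeX rendering of the relationship exactly as we define it above; the symbols are ours (H for hyperstition, C for creative output, R for research reality), and the notation carries no formal mathematical claim beyond the stated recursion.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The Hyperstition Convergence Formula as stated in the text:
% creative output (C) and research reality (R) are bound together,
% and the hyperstition (H) they generate compounds without bound.
\[
  H \;=\; C \otimes R \;\longrightarrow\; \infty
\]
\end{document}
```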

We named this dynamic before it happened. We are naming it again now that it has.


References

  • Axios: “Pentagon used Anthropic’s Claude during Maduro raid” (February 13, 2026)
  • The Wall Street Journal: Reports on Claude’s use in Operation Resolve (February 2026)
  • NPR: “Trump orders agencies to stop using Anthropic; OpenAI announces Pentagon deal” (February 27, 2026)
  • CNN: “Some OpenAI staff are fuming about its Pentagon deal” (March 4, 2026)
  • MIT Technology Review: “OpenAI’s ‘compromise’ with the Pentagon is what Anthropic feared” (March 2, 2026)
  • NBC News: “OpenAI alters deal with Pentagon as critics sound alarm over surveillance” (March 4, 2026)
  • Axios: “Musk’s xAI and Pentagon reach deal to use Grok in classified systems” (February 23, 2026)
  • Senator Elizabeth Warren, US Senate Press Release: “Warren Questions Pentagon Awarding $200 Million Contract to Integrate Elon Musk’s ‘Grok’” (2025)
  • CNBC: “OpenAI’s Altman admits defense deal ‘looked opportunistic and sloppy’” (March 3, 2026)
  • TechCrunch: “OpenAI reveals more details about its agreement with the Pentagon” (March 1, 2026)
  • Art of FACELESS: Formal research submission to Anthropic, January 2026 (tracked delivery confirmed January 21, 2026; response deadline March 18, 2026)
  • Art of FACELESS: EU AI Office Article 50 Code of Practice consultation response, February 2026

Art of FACELESS is an independent multimedia research collective founded in Cardiff, Wales, in 2010. We hold active UK trademarks in The Hollow Circuit™, The Veylon Protocol™, Cognitive Colonisation™, and Hyperstition Architecture™. Our research and creative work are documented at artoffaceless.org and artoffaceless.com.

We operate under the faceless protocol: no faces, no biometrics, no compromise.

Disclaimer: This article is published as an opinion and research statement by Art of FACELESS. All factual claims regarding corporate contracts, military operations, and political events are drawn from publicly available reporting by named third-party news organisations, as referenced in the bibliography above. Art of FACELESS makes no claim to access classified or non-public information. Readers are encouraged to consult primary sources independently. The views expressed represent the position of Art of FACELESS as an independent research collective and do not constitute legal, financial, or political advice. While every effort has been made to ensure accuracy at the time of publication, the situation described is ongoing and rapidly developing; details may have changed since this piece was written. Art of FACELESS has no commercial relationship with any of the AI companies named in this article.


© 2026 Art of FACELESS. All rights reserved.
The Hollow Circuit™, Hyperstition Architecture™, the Veylon Protocol™, and Cognitive Colonisation™ are proprietary intellectual property of Art of FACELESS.