National Endowment for the Humanities headquarters in Washington, D.C. Courtesy of NEH.
News
March 9, 2026

Lawsuits Allege DOGE Staff Used ChatGPT Screening to Help Terminate NEH Grants

Court filings claim AI-assisted triage was used in decisions affecting previously approved humanities grants, raising new compliance and governance questions.

By artworld.today

New legal filings allege that employees connected to the Department of Government Efficiency used ChatGPT prompts while evaluating whether National Endowment for the Humanities grants should be canceled. The claim centers on short-form classification prompts and whether those outputs materially shaped downstream administrative decisions.

If substantiated in court, the issue is not simply use of AI software; it is procedural due process in publicly funded culture. Grant ecosystems depend on transparent criteria, documented review steps, and defensible rationale. Those expectations are foundational for agencies such as the <a href='https://www.neh.gov/' target='_blank' rel='noopener'>National Endowment for the Humanities</a> and peer funders across federal cultural agencies.

Plaintiffs named in reporting include major field bodies representing historians, authors, and humanities scholars. Their argument is that AI-assisted screening may have compressed nuanced project assessment into binary outputs that over-index on keyword proximity rather than program merit, context, and statutory mission fit.

The legal theory turns on whether automated tools can lawfully replace human judgment in funding decisions that carry significant financial and operational consequences for recipients.

Museums, archives, and public-history organizations are watching closely because federal humanities grants frequently underwrite interpretation, conservation research, educational programming, and community-facing projects.

The governance question now extends beyond one case: what documentation should agencies disclose when AI tools are used in internal screening? Institutions operating under public mandate will likely face growing pressure to codify human-review thresholds.

This is already familiar terrain in adjacent policy contexts at organizations like the <a href='https://www.arts.gov/' target='_blank' rel='noopener'>National Endowment for the Arts</a> and major public universities that increasingly publish procurement and risk frameworks for automated systems.

For the art world, the practical takeaway is straightforward: grant applicants and recipients should prepare stronger documentation hygiene, maintain clear impact records, and track policy updates from both the <a href='https://www.congress.gov/' target='_blank' rel='noopener'>U.S. Congress</a> and federal agencies.

The broader implications extend to how cultural institutions manage risk around federal funding relationships. Organizations that depend on NEH grants for operating support face potential exposure if AI-mediated review processes become standard without adequate oversight mechanisms.

Legal scholars note that the case could establish precedent for how automated decision-making systems interact with administrative law requirements for reasoned explanations and human review. These procedural safeguards exist to prevent arbitrary action, and their applicability to AI systems is being tested across multiple policy domains.

For museum professionals and arts administrators, the practical response involves documenting grant review processes more thoroughly and maintaining relationships with multiple funding sources to reduce dependence on any single federal channel. This diversification strategy has become increasingly important as funding uncertainty grows.

The case also raises questions about transparency in how AI tools are being deployed across government more broadly. As automated systems handle more decisions affecting arts, humanities, and cultural programming, the need for disclosure about their use becomes a matter of public interest and institutional accountability.

The outcome of these proceedings will likely influence how cultural funding agencies worldwide approach AI-assisted review processes in the coming years.

Institutions should begin documenting their own internal review processes now to prepare for potential regulatory requirements.