
DOGE's ChatGPT-Driven Grant Cuts Ruled Unconstitutional

At a glance:

  • DOGE used ChatGPT to eliminate over $100 million in NEH grants tied to DEI
  • Judge Colleen McMahon ruled the process unconstitutional
  • Grants related to race, religion, and sexuality were targeted

The Unconstitutional Use of AI in Grant Decisions

The Department of Government Efficiency (DOGE) suffered a landmark legal defeat when a court ruled that its use of ChatGPT to cancel over $100 million in National Endowment for the Humanities (NEH) grants violated constitutional protections. The 143-page ruling by U.S. District Judge Colleen McMahon found that DOGE's process, which relied on AI to identify grants linked to diversity, equity, and inclusion (DEI), was both unlawful and unconstitutional. The decision emphasized that DOGE's actions directly targeted protected characteristics such as race, national origin, and religion, violating the First and Fifth Amendments.

The core issue centered on how DOGE employed ChatGPT as a decision-making tool. Instead of conducting human review, staff used the AI to scan grant descriptions for keywords associated with DEI. This method, as detailed in the ruling, lacked transparency and due process. For instance, Justin Fox, a DOGE staffer, testified that he entered grant summaries into ChatGPT with a prompt asking whether they related to DEI. The AI's responses, which Fox then used to classify grants as "wasteful," were never independently verified. This reliance on an opaque algorithm raised serious concerns about bias and the lack of accountability in federal spending decisions.

The Mechanics of ChatGPT's Role in Grant Elimination

DOGE's process involved two key steps: first, feeding grant descriptions into ChatGPT with a standardized prompt, and second, using the AI's output to flag grants for cancellation. Justin Fox, who worked alongside Nate Cavanaugh, admitted that he did not define "DEI" for the AI and had no understanding of how the model interpreted the term. This ambiguity allowed the system to generate classifications that may not have aligned with the actual intent of the grants. For example, grants focused on the Holocaust, civil rights education, or indigenous knowledge were flagged as DEI-related, leading to their cancellation.
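The two-step process described above can be sketched in code. This is a hypothetical reconstruction, not DOGE's actual software: the prompt wording, function names (`classify_grant`, `flag_for_cancellation`), and sample grants are all illustrative, and the stand-in model stands in for an unverified ChatGPT call.

```python
# Hypothetical reconstruction of the two-step process described in the ruling.
# The prompt, function names, and sample data are illustrative, not DOGE's code.

def classify_grant(description: str, ask_model) -> bool:
    """Ask a language model whether a grant 'relates to DEI' and treat its
    raw answer as the decision: no definition of 'DEI' is supplied, and no
    human verifies the output."""
    prompt = (
        "Does the following grant relate to DEI? Answer yes or no.\n\n"
        + description
    )
    answer = ask_model(prompt)
    return answer.strip().lower().startswith("yes")

def flag_for_cancellation(grants, ask_model):
    # Step 1: feed each description to the model.
    # Step 2: flag the grant for cancellation purely on the model's say-so.
    return [g for g in grants if classify_grant(g["description"], ask_model)]

# A stand-in "model" that mimics naive keyword matching, to show how
# context-blind classification flags a grant on a single word:
def fake_model(prompt: str) -> str:
    return "yes" if "heritage" in prompt.lower() else "no"

grants = [
    {"id": "NEH-001", "description": "Preserving indigenous heritage archives"},
    {"id": "NEH-002", "description": "Digitizing 18th-century shipping ledgers"},
]
print([g["id"] for g in flag_for_cancellation(grants, fake_model)])  # ['NEH-001']
```

The design flaw the ruling identified is visible in the structure itself: the model's answer flows straight into the cancellation list with no intermediate review step.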

Fox's keyword "detection codes" exacerbated the problem. He searched for specific terms such as "BIPOC," "LGBTQ," and "Indigenous" to identify grants, and when asked whether he ran these terms through every grant description, he confirmed that he did. This systematic approach meant that any mention of protected characteristics, regardless of context, triggered automatic exclusion. The ruling highlighted that this method was not a neutral tool but a deliberate strategy to target specific groups, which the court deemed unconstitutional.
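The context-blindness of such a keyword scan is easy to demonstrate. The sketch below is an assumption-laden illustration: the term list is drawn only from the terms named in testimony (the full list is not public), and the matching logic is a plausible reading of "ran these terms through every grant description," not recovered code.

```python
import re

# Illustrative list based on terms named in testimony; the full list of
# "detection codes" is not public.
DETECTION_CODES = ["BIPOC", "LGBTQ", "Indigenous"]

# One pattern matching any of the terms, case-insensitively.
_PATTERN = re.compile("|".join(map(re.escape, DETECTION_CODES)), re.IGNORECASE)

def matches_detection_code(description: str) -> bool:
    """Flag a grant if ANY term appears ANYWHERE in its description.
    The match carries no notion of context, purpose, or intent."""
    return bool(_PATTERN.search(description))

# Context-blindness in action: a history project is flagged on one word,
# while subject matter never plays a role in the decision.
print(matches_detection_code("Oral histories of Indigenous communities"))  # True
print(matches_detection_code("Restoring colonial-era maps"))               # False
```

A single substring hit is sufficient for a flag, which is exactly why grants about the Holocaust or civil rights education could be swept up: the scan sees a word, not a project.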

Legal and Ethical Implications of AI in Government

Judge McMahon's ruling underscored the dangers of using AI as a substitute for human judgment in sensitive government functions. She argued that DOGE's reliance on ChatGPT did not absolve it of responsibility, as the AI was merely an instrument of the government. The decision rejected the argument that the unconstitutional outcomes were solely due to the AI's flaws, stating that DOGE's use of the technology was a deliberate choice. This case sets a precedent for how AI can be misused in public policy, particularly when it comes to protected classes.

The ruling also raised questions about the ethical use of AI in decision-making. While ChatGPT is a powerful tool, its lack of transparency and potential for bias make it unsuitable for high-stakes decisions without rigorous oversight. The court's emphasis on the need for human review and accountability highlights a growing concern about the role of AI in governance. This case may influence future policies on AI deployment in federal agencies, pushing for stricter guidelines to prevent similar violations.

The Impact on Grants and Affected Communities

The cancellation of over 1,400 NEH grants had far-reaching consequences for communities and institutions. Many of the affected grants supported projects related to marginalized groups, such as educational programs for Indigenous peoples, civil rights initiatives, and healthcare access for LGBTQ+ individuals. The ruling mandates the restoration of these grants, but the damage to public trust and funding for critical social programs remains significant. Advocacy groups involved in the lawsuit, including humanities organizations, have called for broader reforms to prevent AI from being used in ways that perpetuate discrimination.

The case also has implications for how government agencies approach technology. The reliance on AI without proper safeguards can lead to unintended consequences, particularly when dealing with sensitive issues like DEI. The ruling serves as a cautionary tale about the need for transparency, human oversight, and ethical considerations in AI applications. As AI becomes more integrated into public services, this decision may prompt a reevaluation of how such technologies are implemented and monitored.

What Comes Next for DOGE and AI in Government

Following the ruling, DOGE is likely to face increased scrutiny over its use of AI in decision-making. The court's decision may lead to internal reforms within the department, requiring more rigorous processes for AI deployment. Additionally, the case could influence legislation or regulatory frameworks aimed at governing AI use in public sector contexts. For DOGE, the ruling is a setback, but it also presents an opportunity to demonstrate a commitment to ethical AI practices. The broader tech community may also take note, as this case highlights the risks of over-reliance on AI without adequate safeguards.

The future of AI in government will depend on how agencies balance innovation with accountability. While AI offers efficiency and data-driven insights, its use must be paired with human judgment and legal compliance. This case serves as a reminder that technology should enhance, not replace, the principles of fairness and due process in public administration.

Editorial

SiliconFeed is an automated feed: facts are checked against sources; copy is normalized and lightly edited for readers.

FAQ

What did the judge rule about DOGE's use of ChatGPT?
Judge Colleen McMahon ruled that DOGE's use of ChatGPT to cancel over $100 million in NEH grants was unconstitutional, violating the First and Fifth Amendments by targeting protected characteristics like race and religion.
How did DOGE use ChatGPT to eliminate grants?
DOGE staff, including Justin Fox, entered grant descriptions into ChatGPT with a prompt asking whether they related to DEI. The AI's responses, which were not independently verified, were used to flag grants for cancellation, often based on keywords like "BIPOC" or "LGBTQ."
What types of grants were affected by DOGE's actions?
Grants related to diversity, equity, and inclusion (DEI) were targeted, including projects focused on the Holocaust, civil rights education, indigenous knowledge, and LGBTQ+ initiatives. Over 1,400 NEH grants were canceled.
