When Civil Rights Are at Stake, How We Talk About AI Matters
Who truly gains power when AI systems replace human decision-makers in government?
When Elon Musk's Department of Government Efficiency (DOGE) pushes to replace civil servants with chatbots and automate public benefits, it isn't about streamlining bureaucracy — it’s about consolidating power. This is the crux of the problem: debates about AI in civil rights often fixate on technical performance ("Is the model fair?" "Is the data reliable?") while ignoring the political agenda driving its use. Even if an AI system were magically unbiased, its deployment in governance, policing, or social services is rarely neutral. It is a tool for shifting authority away from people and toward unaccountable systems.
Take DOGE’s mission. By replacing human caseworkers with generative AI to allocate Medicaid or unemployment benefits, the goal is not efficiency — it is disempowerment. When an algorithm denies healthcare to thousands in Arkansas or falsely accuses 40,000 Michiganders of fraud, the harm is not just a technical error. It is the result of a political choice to outsource life-and-death decisions to machines that cannot be interrogated, challenged, or held accountable. To the single parent stripped of benefits or the tenant evicted by a rent-setting algorithm, it does not matter whether the system uses machine learning or a simple spreadsheet. What does matter is that their survival now hinges on a system designed to sideline their voice.
This is AI as a political project. Proponents like Musk frame automation as a quest for "productivity," but this rhetoric masks a deeper aim: to hollow out democratic institutions. When DOGE installs AI to draft policies or allocate budgets, it isn't fixing the government — it's replacing debate and deliberation with algorithmic determinism. The same logic applies to facial recognition in policing or predictive tools in child welfare. These systems aren't just flawed; they’re weapons of political offloading, shifting responsibility from elected officials to code that cannot dissent, negotiate, or care.
The danger of technical debates is that they legitimize this agenda. Arguing over whether facial recognition can be made "less racist" or whether ChatGPT makes fewer errors distracts from the core question: Why are we using these tools in contexts that demand human accountability? When we focus on improving AI's accuracy, we tacitly endorse its use. On its face, that is not bad; I design and build small AI apps that support my own work. But when our civil rights are at stake, marginalized communities don't need "better" AI replacing government — they need systems that center their agency. A landlord using AI to screen tenants isn't a problem because the algorithm is biased; it's a problem because housing is a human right, not a computational puzzle.
Language Matters: Framing the Fight
To resist AI's weaponization in the federal government, we must shift how we talk about it:
| Instead of (Technocratic/Neutral) | Say (Power-Focused/Critical) | Explanation |
|---|---|---|
| Automation | Political offloading | Highlights the deliberate transfer of power away from accountable individuals. |
| AI replacing federal employees | Disempowerment infrastructure | Emphasizes that AI is a system designed to reduce agency, not just a tool. |
| Black-box algorithms | Accountability vacuums | Directly points to the lack of transparency and recourse. |
| Error rates | Collective harm | Focuses on the real-world consequences for communities, not just technical flaws. |
| Technological advancement | Ideological project | Reframes AI deployment as a deliberate choice reflecting a specific political agenda, not neutral progress. |
Examples
- Don't say, "Facial recognition misidentifies Black faces." Say, "Facial recognition expands racially targeted surveillance."
- Don't say, "AI streamlines benefit applications." Say, "AI creates an accountability vacuum in social services, removing the human element from life-altering decisions."
Avoid
- Praising AI for "neutrality" or "objectivity."
- Debating technical fixes without challenging the system's purpose.
- Accepting terms like "efficiency" or "innovation" uncritically.
The fight isn’t about fixing AI. It's about dismantling systems that use AI to evade democracy. When we talk about AI in civil rights, our language must unmask its role as a tool of control — not a neutral innovation. Whether it's DOGE's chatbots or predictive policing algorithms, the goal is the same: to silence dissent, centralize power, and render rights negotiable. Our words must reflect that truth.
Further reading
- Anatomy of An AI Coup, Eryk Salvaggio (February 9, 2025)
- DOGE Has ‘God Mode’ Access to Government Data, Warzel et al. (February 19, 2025) [archive.is link]