Criminal defense is unprepared for A.I.

As a technologist working in the legal profession, I've had a front-row seat to how artificial intelligence is reshaping our criminal justice system. I was working at the Suffolk County District Attorney's Office in Boston in 2019 when the prosecutors who brought the Golden State Killer case triumphantly presented their work at the Massachusetts prosecutors' conference. The DNA-matching techniques they described marked a watershed moment in criminal investigations — but they weren't telling the whole story.
What the public heard was a simplified narrative: investigators uploaded crime scene DNA to GEDmatch, an open genealogy platform where people voluntarily share genetic information. What remained hidden was far more troubling. These same investigators secretly searched private DNA databases like FamilyTreeDNA and MyHeritage without warrants. They created fake accounts to access people's genetic data, directly contradicting these companies' privacy policies that promised user information would only be released if “required by law” or with a “lawful request.” Even criminal defense attorneys were kept in the dark about these methods — a prosecutor later admitted this created a "false impression" of how the case was actually solved.
The implications reach far beyond a single case. For Americans of European descent, there's now a 60% chance of having a third cousin or closer relative in such databases; the figure is closer to 40% for someone of sub-Saharan African ancestry. This isn't just innovative detective work — it's the quiet construction of a massive surveillance network built on the recreational ancestry testing people use to learn about their family histories.
At the recent National Association of Criminal Defense Lawyers (NACDL) Forensic Science and Technology Seminar that I attended, what became clear wasn't just the technical capabilities of these A.I. systems, but their profound human implications and the economic incentives driving their adoption. We're witnessing powerful technology deployed without transparency, oversight, or informed consent — fundamental principles that shouldn't be sacrificed in the name of catching criminals, or of automating suspicion simply because the technology said so.
Claims of ‘efficiency’ are lies.
Companies pushing A.I. tools in the criminal justice context would likely argue that their products are “quick and easy tools” that, with proper training, cannot be used improperly. Clearview AI, a facial recognition technology (FRT) company, built its database by scraping users' photos from social media platforms — violating those platforms' Terms of Service. Clearview dismisses legitimate concerns about facial recognition as “myths,” despite overwhelming evidence of real harm.
This isn’t simply about algorithms and efficiency — it’s about people’s lives and ability to exist free of harm.
At Professor Maneka Sinha’s session on “Challenging Automated Suspicion,” one attorney described how her client was detained for hours because a facial recognition system wrongly matched him to a robbery suspect. The system was confident but wrong. Her client was a young Black man with no criminal record.
These stories aren’t outliers. Attorneys have shared numerous stories with me that rarely make headlines: families separated by algorithmic risk assessments, individuals denied bail based on an algorithm “predicting” their likelihood to re-offend, and communities over-policed based on software rather than actual behavior.
The generative A.I. model is the product
As of early this year, over 60% of law enforcement agencies claim to use A.I. in investigations. What’s often overlooked is that these aren’t just tools — they’re products built on valuable data. The global A.I. market ($279 billion in 2024, projected to reach $3.68 trillion by 2034) is driven by user interactions. In criminal justice, every police encounter, court case, and correctional decision generates data fueling these markets.
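To put those numbers in perspective, here is a quick back-of-envelope calculation of the growth rate that projection implies (the market figures are the ones cited above; the arithmetic is mine):

```python
# Implied compound annual growth rate (CAGR) of the global A.I. market,
# using the 2024 figure and the 2034 projection cited above.
start_value = 279e9      # $279 billion in 2024
end_value = 3.68e12      # projected $3.68 trillion by 2034
years = 10

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")   # roughly 29% per year
```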
When criminal justice systems use AI, three things happen simultaneously:
- The system performs its desired function (for example, identifying people)
- It collects valuable training data that improves the underlying models
- It creates economic value for the companies that developed it
This creates a troubling cycle: the more these systems are used — even when they make mistakes — the more valuable they become as products. Companies benefit from wide deployment, creating powerful incentives to expand use regardless of accuracy.
People processed through these systems are unwittingly contributing to datasets worth billions, yet they bear the costs of errors while receiving none of the economic benefits.
Machines are deciding who is suspicious.
A.I. fundamentally changes how people become suspects through what Professor Sinha calls “automated suspicion” in her talk and in her paper for the Emory Law Journal.
Technologies like predictive policing algorithms, facial recognition systems, and automated gunshot detection are now making decisions once made by trained officers. They claim to predict where crime might happen, who might be involved, or whether a crime occurred — through processes too complex for most officers to understand.
The economic model makes this especially troubling. Companies developing these tools benefit from “data concentration” — more system use means more data collection, which improves their models and creates “data moats” protecting their market position. This creates a powerful incentive to maximize system use, even at the expense of accuracy.
This shift raises serious Fourth Amendment concerns. Courts typically defer to police judgments about “reasonable suspicion” or “probable cause,” but they now extend that deference to technology without adequately assessing the reliability of the tools used to identify someone. As Professor Sinha notes, courts often treat algorithmic outputs as objective facts rather than probabilistic estimates requiring scrutiny.
Psychological factors compound these risks. Automation bias (our tendency to trust technology) and confirmation bias (our tendency to favor information that confirms what we already believe) lead officers to accept algorithm outputs as justification for action, bypassing traditional investigation and the requirement for “particularized suspicion.”
Real people have suffered the consequences:
- Randal Reid spent nearly a week in jail after facial recognition wrongly matched him to a theft suspect in another state.
- Bobby Jones, then 16 years old, was detained based on a predictive policing program’s flawed assessment flagging him as “high-risk,” without him or his parents knowing.
- Pasco County’s program used school records to flag children as “potential criminals,” subjecting families to intrusive surveillance.
- Ms. Gonzalez and her 12-year-old sister were detained at gunpoint after an automated license plate reader misread a single digit.
Each case shows how technology outputs became the sole justification for police action. Meanwhile, each interaction generated more data, improving the systems and increasing their economic value — even when producing harmful errors.
Facial recognition is broken, but profitable.
Facial recognition technology claims accuracy rates above 97% in controlled settings, but performs much worse in real-world conditions. Sidney Thaxter’s presentation revealed how these systems are widely used despite documented racial biases and inaccuracies.
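The gap between benchmark accuracy and field reliability is partly a matter of arithmetic. The sketch below uses hypothetical numbers, naively reading a 97% accuracy figure as a 3% false-match rate per comparison in a one-to-many search of a large gallery. Real systems rank and threshold candidates, but the base-rate problem it illustrates remains:

```python
# Why "97% accurate" does not mean a returned match is 97% likely correct
# in a one-to-many database search. All numbers are hypothetical and the
# model is deliberately naive; it only illustrates the base-rate problem.

gallery_size = 100_000          # faces in the database being searched
false_match_rate = 0.03         # chance a NON-matching face is flagged (1 - 0.97)
true_match_rate = 0.97          # chance the real person is flagged, if present
prob_culprit_in_gallery = 0.5   # prior: 50/50 that the culprit is even enrolled

expected_false_hits = gallery_size * false_match_rate
expected_true_hits = prob_culprit_in_gallery * true_match_rate

fraction_correct = expected_true_hits / (expected_true_hits + expected_false_hits)
print(f"Expected false candidates per search: {expected_false_hits:,.0f}")
print(f"Fraction of flagged candidates who are the culprit: {fraction_correct:.3%}")
```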
Money pressures drive rushed A.I. adoption. Research shows companies pay lower taxes on machines than on workers, making A.I. systems artificially cheaper to deploy. This creates a strong financial incentive to roll out systems quickly before proper testing.
Companies that combine different biometric data (like face and voice together) gain massive advantages over competitors. One technology alone is valuable, but research shows that combining face and voice recognition is 100 times more powerful than just using face recognition. This multiplication effect creates a powerful competitive edge.
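One way to see where a multiplier like that could come from: if the two modalities are treated as independent pieces of evidence, their strength multiplies rather than adds. The likelihood ratios below are invented purely to illustrate the arithmetic, not taken from any vendor's benchmarks:

```python
# If face and voice evidence are (naively) assumed independent, the combined
# strength of the evidence is the PRODUCT of the individual likelihood ratios.
# Values are invented for illustration only.

lr_face = 10.0    # observation is 10x more likely if it is the same person
lr_voice = 10.0   # likewise for the voice sample

lr_combined = lr_face * lr_voice
print(f"Face alone: {lr_face:.0f}x   Face + voice: {lr_combined:.0f}x")

# Each added modality compounds the discriminating power a vendor can claim,
# and the value of the combined dataset, rather than merely adding to it.
```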
This creates a “race to be first” mentality. Just like pharmaceutical companies might cut corners on safety testing to get a drug to market first, companies rush A.I. systems into use before they're properly validated. When being first means capturing most of the market, companies prioritize speed over safety.
A 2025 GAO report found only three of seven federal law enforcement agencies had policies protecting civil rights when using facial recognition. These systems show higher error rates when identifying people with darker skin tones.
Most concerning: as of 2023, some agencies conducted tens of thousands of searches using facial recognition with no training requirements in place.
AI’s reach in criminal justice is broad and varied
“AI” isn’t a single technology — it encompasses many different tools used throughout investigations. Each presents unique challenges for criminal defense. Elizabeth Daniel Vasquez, in her “AI Roadmap for Practicing Attorneys” session, emphasized that defenders need to understand these tools and their specific weaknesses.
Predictive policing feeds on its own data.
Predictive policing tools analyze historical data about where crime has occurred to attempt to forecast crime hotspots or identify individuals deemed “at-risk.” The problem is that most of that data comes from the operations and actions of the police agencies themselves — so it is biased from the outset.
The NACDL report “Garbage In, Gospel Out” highlights how these systems, trained on biased historical policing patterns, can create self-fulfilling prophecies and reinforce systematic discrimination. When biased data goes in, biased (but seemingly objective) results come out, justifying further over-policing of already policed communities.
What makes these systems particularly troubling is the economic feedback loop they create. As more predictive policing data is collected, the systems become more valuable products. This is what economists call a “positive feedback loop”: increased police data collection leads to increased system usage, which generates still more data — a multiplicative rather than additive gain in value, at the expense of people’s fundamental rights.
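A toy simulation makes the feedback dynamic concrete. The model below is deliberately oversimplified and is not based on any real product: two neighborhoods generate identical numbers of actual incidents, but the tool sends patrols wherever the recorded history is largest, and only patrolled incidents get recorded:

```python
# Toy model of a predictive-policing feedback loop, loosely inspired by the
# "runaway feedback" critiques of hotspot tools. Both neighborhoods generate
# the SAME number of actual incidents; the only difference is a small gap in
# the recorded history. Each period the tool directs patrols to the area with
# more recorded crime, and crime is only recorded where officers patrol.
# This illustrates the dynamic; it is not a model of any real product.

true_incidents_per_period = 100          # identical in both neighborhoods
recorded = {"A": 60, "B": 40}            # a small historical disparity

for period in range(20):
    # "Prediction": patrol the neighborhood with the larger recorded history.
    target = max(recorded, key=recorded.get)
    # Only patrolled incidents make it into the data.
    recorded[target] += true_incidents_per_period

total = sum(recorded.values())
for hood, count in recorded.items():
    print(f"Neighborhood {hood}: {count} recorded incidents ({count / total:.0%})")
# Nearly all recorded crime now sits in neighborhood A, even though both
# neighborhoods produced the same number of actual incidents every period.
```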
These concerns aren’t merely theoretical. Research published in 2021 confirmed that even when using victim report data (supposedly less biased than arrest data), predictive policing algorithms still disproportionately labeled Black neighborhoods as crime hot spots. The reason? Reporting itself is influenced by racial dynamics — wealthier white people are more likely to report crimes supposedly committed by Black individuals, and Black people are more likely to report other Black people, creating biased inputs that lead to biased outputs.
Some jurisdictions have responded by requiring impact assessments and audits, but adoption has been uneven. A concerning development is the push by companies to sell these tools to smaller departments with fewer resources for proper oversight and training. This expansion is driven by the economic imperative to capture more data and expand market share.
Surveillance & Sensor AI
In this section, we’re going to examine a few of the other types of AI-adjacent technologies that presenters at the seminar discussed. These technologies dramatically expand the government’s ability to monitor and collect data.
Cell Site Location Information (CSLI)
Andrew Garrett’s session exposed what he called the “great lie” often told in court regarding Historical Cell Site Analysis (HCSA).
What typically happens in court: Police officers with basic FBI CAST training often testify that cell phones can be precisely tracked using cell towers. They claim towers have neat, predictable coverage areas (three 120-degree sectors) that reach exactly 4-5 miles, and that phones connect to the closest tower 70% of the time. This makes it seem as though they can pinpoint someone's location with high confidence.
The reality: Radio frequency propagation is incredibly complex and depends on many factors:
- Frequency (lower frequencies travel further)
- Terrain
- Building materials and density
- Weather conditions
- Network load
- Network protocols like “cell selection hysteresis” (a process that prevents phones from rapidly switching between towers when in overlapping coverage areas)
Garrett detailed how simplified models presented by FBI CAST-trained officers grossly misrepresent the complex realities of radio frequency propagation. Accurate analysis requires data often withheld (like frequency bands used) and expertise far beyond typical two-day law enforcement training courses. This makes HCSA vulnerable to challenges under evidence standards like Daubert/Frye when presented by inadequately qualified witnesses relying solely on visualization tools like CASTViz.
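To see why a flat “4 to 5 mile” figure is indefensible, consider the log-distance path-loss model, one of the simplest tools RF engineers use. Even this simplified model shows usable range swinging by an order of magnitude with frequency and environment; the link budget, path-loss exponents, and clutter margins below are illustrative assumptions, not values from Garrett's materials:

```python
import math

# Log-distance path-loss model: PL(d) = PL(d0) + 10 * n * log10(d / d0),
# with PL(d0) taken as free-space loss at a 1 km reference distance.
# The link budget, exponents, and clutter margins are illustrative assumptions.

def max_range_km(freq_mhz: float, exponent: float, clutter_margin_db: float,
                 max_path_loss_db: float = 135.0) -> float:
    """Distance (km) at which the modeled path loss uses up the link budget."""
    # Free-space path loss at d0 = 1 km, with frequency in MHz.
    pl_at_1km = 20 * math.log10(freq_mhz) + 32.44
    headroom_db = max_path_loss_db - pl_at_1km - clutter_margin_db
    return 10 ** (headroom_db / (10 * exponent))

scenarios = [
    ("700 MHz, open rural (n=3.0, no clutter loss)",       700, 3.0, 0),
    ("1900 MHz, suburban (n=3.5, 10 dB clutter loss)",      1900, 3.5, 10),
    ("2600 MHz, dense urban, indoor phone (n=3.8, 20 dB)",  2600, 3.8, 20),
]

for label, freq, n, margin in scenarios:
    print(f"{label}: ~{max_range_km(freq, n, margin):.1f} km usable range")

# The same link budget yields anything from a couple of kilometers to tens of
# kilometers. There is no universal "4 to 5 mile" coverage radius.
```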
Video Analysis
Brian Cummings’ presentation highlighted significant challenges with video evidence “enhancement.” Techniques used to “clarify” video (adjusting brightness, contrast, sharpness, or frame rate, or using digital zoom/interpolation) end up altering the original pixel data.
Claiming these “enhancements” are simply like using a magnifying glass ignores the complex algorithmic processes involved. Digital zoom doesn’t just make existing pixels bigger — it creates new pixels based on algorithms. These include “bilinear” or “bicubic” interpolation methods that generate entirely new data, rather than simply enlarging existing pixels (like the more basic, blockier “nearest neighbor” method).
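A tiny example makes the difference concrete. The sketch below upscales a 2x2 block of pixel values by nearest-neighbor (which only repeats existing values) and then by bilinear interpolation (which generates in-between values the camera never captured). It is a minimal illustration of the principle, not a reproduction of any particular enhancement tool:

```python
import numpy as np

# A 2x2 block of original pixel brightness values (0-255).
original = np.array([[ 10, 200],
                     [ 90,  40]], dtype=float)

# Nearest-neighbor upscale to 4x4: every output pixel is a copy of an
# existing pixel. Blocky, but no new values are created.
nearest = np.repeat(np.repeat(original, 2, axis=0), 2, axis=1)

# Bilinear upscale to 3x3 (corners aligned with the original corners):
# in-between pixels are weighted averages -- values never captured by the camera.
bilinear = np.empty((3, 3))
for r in range(3):
    for c in range(3):
        y, x = r / 2.0, c / 2.0              # position in original coordinates
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        y1, x1 = min(y0 + 1, 1), min(x0 + 1, 1)
        dy, dx = y - y0, x - x0
        bilinear[r, c] = (original[y0, x0] * (1 - dy) * (1 - dx)
                          + original[y0, x1] * (1 - dy) * dx
                          + original[y1, x0] * dy * (1 - dx)
                          + original[y1, x1] * dy * dx)

print("Nearest-neighbor (only original values):\n", nearest)
print("Bilinear (new, algorithm-generated values):\n", bilinear)
```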
These alterations, often undocumented and lacking scientific validation for forensic purposes, risk distorting the evidence. Defenders must challenge the admissibility of such altered video under the “silent witness” doctrine, demanding proof that the enhancement process is reliable and hasn’t compromised the evidence’s integrity.
Probabilistic Genotyping Software (PGS)
Tools like STRmix and TrueAllele analyze complex DNA mixtures where multiple people’s DNA is present in a single sample. Their “black box” nature, combined with companies’ resistance to source code disclosure (citing trade secrets), makes challenging their reliability difficult.
In practice, this means defense attorneys often can’t access the actual code that determined their client was “a match” to DNA found at a crime scene. Imagine being told “our proprietary algorithm determined you were at the crime scene with 99.9% certainty” but not being allowed to examine how that determination was made.
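Part of what makes that framing misleading is that these tools report a likelihood ratio, not a probability that the defendant left the sample, and the two are easy to conflate. The sketch below works through the distinction with invented numbers; it is not output from STRmix, TrueAllele, or any other product:

```python
# Probabilistic genotyping tools report a likelihood ratio (LR):
#   LR = P(DNA evidence | suspect contributed) / P(DNA evidence | someone else did)
# An LR is NOT the probability the suspect was at the scene. To get that,
# you also need the prior odds, which depend on the rest of the case.
# All numbers below are invented for illustration.

likelihood_ratio = 1000.0     # "the evidence is 1000x more likely if it's him"

for prior_probability in (0.5, 0.01, 0.0001):
    prior_odds = prior_probability / (1 - prior_probability)
    posterior_odds = likelihood_ratio * prior_odds
    posterior_probability = posterior_odds / (1 + posterior_odds)
    print(f"Prior {prior_probability:.2%} -> posterior {posterior_probability:.2%}")

# With a weak prior (say, the suspect is one of thousands who could have left
# the sample), even a large LR can leave substantial doubt -- which is why
# treating "99.9%"-style claims as settled fact is a mistake.
```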
This scenario exemplifies what economists call “vertical integration” in the A.I. value chain — companies control both the data collection and the analysis, creating high barriers to transparency and accountability. These companies’ business models depend on maintaining proprietary control over their algorithms, creating direct conflicts with defendants’ rights to examine evidence against them.
Tamar Lerer’s litigation successes, like State v. Pickett, show that transparency is essential. In Pickett, the court held that when the State’s expert relies on “novel probabilistic genotyping software” to give DNA testimony, a defendant is entitled, upon a showing of particularized need, to access the software’s source code and supporting documentation to challenge its reliability. The decision set an important precedent.
Generative A.I. (LLMs like ChatGPT)
Patrick Barone’s presentation highlighted both potential benefits and serious risks of large language models (LLMs). While useful for drafting or framing an issue, an LLM’s tendency to “hallucinate” (fabricate information) creates significant ethical risks, as we saw in Mata v. Avianca, where a lawyer submitted fictional case citations generated by AI.
Barone emphasized lawyers’ non-delegable duty to verify A.I. output and protect client confidentiality, especially with cloud-based A.I. tools. He described prompting as an essential skill. By using specific prompting patterns, lawyers who want to use generative A.I. can better control its output (a brief code sketch follows this list):
- Persona Pattern: “Act as a seasoned cross-examiner reviewing this police report and identify inconsistencies.”
- Audience Persona Pattern: “Explain reasonable doubt to me as if I am a juror with no legal background.” (Useful for preparing arguments or client explanations).
- Question Refinement Pattern: “I want to ask about the error rate of this facial recognition system. Suggest a more precise version of my question for a Daubert hearing.”
- Cognitive Verifier Pattern: “Act as a DNA expert. Before explaining the likelihood ratio, ask me clarifying questions about the case context until you have enough information.”
- Fact Check Pattern: “Summarize the Riley v. California decision and provide a bullet-point list of key facts and holdings supporting your summary.”
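As a rough illustration of how patterns like these can be wrapped into reusable templates, here is an assumption-laden sketch (the model name, client usage, and prompt wording are mine), and anything a model returns still requires the independent verification Barone stressed:

```python
# Minimal sketch of turning two of the prompting patterns above into
# reusable templates. The OpenAI client usage and model name are assumptions
# for illustration; any output must still be independently verified.
from openai import OpenAI

PERSONA_PATTERN = (
    "Act as a seasoned cross-examiner reviewing the following police report. "
    "Identify inconsistencies and list them as numbered points.\n\n{report}"
)
FACT_CHECK_PATTERN = (
    "Summarize {case_name} and provide a bullet-point list of the key facts "
    "and holdings that support your summary, so each claim can be checked."
)

def ask(prompt: str, model: str = "gpt-4o") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage (never paste privileged client material into a cloud tool
# without confirming confidentiality protections first):
# print(ask(FACT_CHECK_PATTERN.format(case_name="Riley v. California")))
```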
The economics of generative A.I. are particularly important to understand. These systems generate value through “trust and safety data” — information about how users interact with the system and how the system responds. This creates an unusual dynamic where even problematic interactions (like hallucinating case citations) provide valuable training data to improve future versions of the model. Research seems to support the idea that companies implementing strong trust-related actions in their generative A.I. deployments report higher benefit levels and better risk management.
Understanding these diverse technologies allows defense counsel to develop targeted challenges, focusing on specific vulnerabilities — biased data, flawed assumptions in cell tower analysis, lack of validation for facial recognition or video enhancement, opacity of DNA analysis software, or the potential for A.I. hallucinations.
The State is both A.I. deployer and watchdog
Government agencies have evolved beyond regulatory functions into active participants in A.I. data economies. This dual position creates a fundamental tension in public accountability. In criminal justice specifically, agencies simultaneously purchase A.I. systems while generating the very data that makes these systems valuable. The real economic worth comes not just from the software's immediate functions, but from the continuous data harvesting that enhances model performance and market valuation.
These public-private partnerships create massive one-way data transfers. When law enforcement deploys facial recognition tech, they contribute thousands of facial images daily that refine algorithmic accuracy — typically without data sharing agreements or compensation structures in place.
This arrangement creates a troubling value extraction pattern: taxpayer-funded activities generate high-quality training data that improves proprietary A.I. systems, yet the economic benefits flow almost exclusively to private vendors who retain ownership of the enhanced models. As one expert bluntly stated, "We're witnessing a massive transfer of publicly generated value to private hands, with minimal transparency."
The current legal landscape offers almost no protection against this extraction.
A.I. Is Forcing Courts to Evolve
Courts are beginning to grapple with AI’s implications. Mata v. Avianca exposed the dangers of unverified A.I. output when a lawyer submitted fictional case citations generated by A.I. — resulting in sanctions and judicial warnings.
This concern has only intensified since then. In February 2025, a Wyoming federal judge sanctioned lawyers from Morgan & Morgan (America's largest personal injury firm) for submitting court filings that cited nonexistent cases hallucinated by their in-house A.I. platform.
Similarly, in late 2024, Judge Marcia Crone in the Eastern District of Texas sanctioned an attorney who submitted a response to a summary judgment motion containing nonexistent cases and fabricated quotations generated by an A.I. tool. The court imposed a $2,000 penalty and ordered the lawyer to attend continuing education on A.I. in legal practice. A May 2024 study from the Stanford Institute for Human-Centered A.I. found legal A.I. tools hallucinate at alarming rates, producing incorrect information in one out of every six queries.
Professional organizations are issuing guidance emphasizing human oversight and verification:
- ABA Resolution 604 provides guidelines for A.I. use by legal professionals
- Florida Bar Ethics Opinion 24-1 addresses confidentiality and competence issues
- NYC Bar Association Formal Opinion 2024-5 emphasizes that lawyers need “guardrails and not hard-and-fast restrictions.”
There are others; these are just a select few.
However, courts rarely understand the economic incentives shaping these technologies. When a company’s business model depends on maintaining proprietary control over algorithms used in criminal cases, this creates fundamental tensions with due process rights.
State v. Loomis highlighted concerns about proprietary risk assessment algorithms that defendants couldn’t examine. While upholding the sentence, the court noted serious concerns about such tools, setting the stage for stronger challenges. Professor Sonia Gipson Rankin warns against A.I. becoming “automated stategraft” — systems that, under the guise of neutrality, disproportionately extract resources from specific communities. Her research shows how automated traffic enforcement cameras generate revenue while disproportionately impacting low-income communities, often without demonstrable safety benefits.
This concept takes on new significance when we consider the economic value of collected data. Communities subjected to increased surveillance aren’t just bearing social costs — they’re unwittingly contributing valuable training data that improves systems and increases market value, while receiving none of the economic benefits.

Every defender needs a tech strategy
The integration of A.I. and complex digital evidence demands a proactive, tech-savvy defense. Professional competence now clearly includes understanding “the benefits and risks associated with relevant technology” (ABA Model Rule 1.1, Comment 8).
For Defense Attorneys
- Build Your Knowledge: You need ongoing education about AI, digital forensics, vehicle data, cell site location, and video analysis. This must include understanding the money trail — who profits from these systems and how that shapes their design.
- Demand Everything in Discovery: When facing A.I. evidence, request:
  - Software versions and databases
  - Raw data and algorithms (push past trade secret claims using cases like Pickett)
  - Validation studies and error rates
  - Operator logs and training
  - Video project files showing every enhancement step
  - EDR download reports and independent reconstruction
  - CSLI raw data including frequency bands
  - Business models and data sharing agreements
- Challenge Reliability: Use Professor Sinha’s framework for Fourth Amendment challenges based on unreliable technology. Mount vigorous challenges under evidence standards like Daubert/Frye, questioning error rates and validation. Challenge economic incentives that might compromise accuracy. When a company makes more money by collecting more data rather than ensuring accuracy, this creates conflicts that affect justice outcomes.
- Scrutinize Warrants and Consent: Challenge overly broad digital search warrants. Fight for Fifth Amendment protection when passcodes are compelled. Question whether consent was truly voluntary when officers request device access.
- Use Your Own A.I. Tools Wisely: Leverage A.I. for case preparation but verify everything and protect client confidentiality. Master prompt engineering techniques to get better results.
- Find Experts: You may need:
  - Digital forensic specialists
  - Software/A.I. experts
  - Statisticians
  - Collision reconstructionists
  - RF engineers
  - Economists who understand A.I. business models
- Ask the Right Questions: Develop cross-examination strategies that expose technology’s limitations:
  - “Do you understand the physics of radio frequency propagation?”
  - “What algorithms created these new pixels in the enhanced video?”
  - “What is the error rate for this vehicle speed measurement?”
  - “Who profits from this system’s wide deployment?”
  - “What happens to all the data collected during system use?”
For Defendants
The increasing complexity of evidence makes getting the right counsel crucial.
- Choose Knowledgeable Lawyers: Ask potential attorneys about their experience with digital evidence and AI, and their network of experts. Their ability to understand and challenge technology directly affects your case.
- Understand That Technology Is Fallible: A facial recognition “match,” vehicle speed reading, or cell tower location can seem definitive but may be based on errors or false assumptions. You need a lawyer who can demystify this evidence and protect your rights.
- Know Your Data Rights: Your interactions with the justice system generate valuable data. While legal frameworks for asserting rights over this data are still evolving, working with counsel who understand these dynamics is increasingly important.
Automation without accountability is injustice on repeat
The future of A.I. in our justice system is being written right now. As defense attorneys, technologists, and community advocates, we cannot be passive observers — we must actively shape this path to ensure it aligns with true justice.
Resistance to the “inevitable march of technology” requires concrete actions:
- Challenge the reliability of automated suspicion at every turn
- Demand transparency in both algorithms and the data that feeds them
- Master the technical details of digital evidence, from cell phones to DNA
- Defend constitutional rights against tech-powered intrusions
- Question who profits from these systems and how that shapes their design
We must address fundamental questions of power and economics: Who benefits from the data collected through our justice system? When A.I. systems improve by processing cases in courts and police departments, who captures the value created? How can communities bearing the burden of algorithmic suspicion share in any successful and forceful pushback?
What gives me hope is the growing awareness of these issues. More courts are requiring validation of A.I. systems, legislators are considering guardrails, and defense attorneys are developing the technical skills to challenge flawed systems.
If you’re working in this space — whether as a technologist, attorney, advocate, or community member — I’d love to hear your experience. The conversation about A.I. in criminal justice is just beginning, and all of us have a stake in ensuring it develops in ways that strengthen rather than undermine our fundamental rights and creates value that benefits everyone, not just those who control the technology.