In a first for U.S. courts, a judge in Maricopa County, Arizona, allowed an AI-generated video recreation of a deceased victim, Christopher Pelkey, to deliver his own victim impact statement at sentencing. The digital avatar, created by Pelkey's sister, expressed forgiveness and addressed his killer in court, prompting both admiration and deep ethical concern.
The Pelkey Case: Groundbreaking or Problematic?
Christopher Pelkey, a 37-year-old Army veteran, was shot to death in a 2021 road-rage incident in Chandler, Arizona. At Gabriel Horcasitas’s sentencing in May 2025, Pelkey’s sister Stacey Wales unveiled an AI avatar—complete with his likeness, voice, beard, and cap—reading a script she crafted, in which her brother forgave his killer.
The avatar opened with a transparent disclosure: "I am a version of Chris Pelkey recreated through AI…" and shared a message of forgiveness: "In another life, we probably could have been friends…" Judge Todd Lang responded emotionally ("I loved that AI…") and sentenced Horcasitas to 10½ years, one year more than prosecutors had sought.
Power and Risks of Emotional AI
AI’s ability to humanize victims has undeniable appeal—but that same power raises red flags. The avatar had a profound emotional impact on the courtroom, potentially swaying judicial decisions.
Yet critics argue this technology may manipulate emotion more than truth. Defense attorney Jason Lamm called it “inauthentic,” warning it amounted to “putting words into the victim’s mouth.”
Legal scholars echo these concerns, warning that persuasive AI-generated content could shift how courts weigh emotion against evidence.
Further reading: Associated Press coverage of AI and court ethics.
Ethics, Consent, and Legal Standards
Key ethical questions include:
- Consent and representation: Did the deceased truly consent? Stacey Wales insisted the script reflected her brother's values, but the bounds of posthumous digital representation remain fraught.
- Fairness and bias: AI tools may be available only to wealthier families, skewing whose victims are vividly represented in court.
- Authentication and legal precedent: Arizona's Supreme Court has formed a steering committee to address AI and issued guidance for judges on AI-generated courtroom content.
What the Law Is Doing, and What It Doesn't Do
- Present status: The AI-generated statement was allowed at sentencing, not at trial, and the judge was informed of its artificial origin.
- State-level action: Arizona's high court moved quickly to develop recommendations for courtroom use of AI.
- Potential impact: Defense attorneys are preparing appeals that question whether the avatar's emotional influence prejudiced the sentence.
FAQs
1. Is this technology legally allowed?
Yes. No current law prohibits AI-generated victim statements in Arizona or most other jurisdictions. However, courts must ensure transparent disclosure, relevance, and judicial oversight.
2. Could similar AI be used in trials?
Technically, yes, but it would be far more controversial. The avatar used here appeared at sentencing only, not to determine guilt or innocence.
3. What worries ethicists most?
Unsupervised use could enable emotional manipulation, misrepresentation of the deceased, and unequal access to the technology, distorting justice against those who cannot afford it.
Looking Ahead: Safeguards and Accountability
To responsibly integrate AI into legal systems, courts should adopt clear measures:
- Mandatory transparency: disclose to the court that content is AI-generated
- Verification of accuracy: confirm that statements reflect the victim's documented views and values
- Judicial training and oversight: equip judges to evaluate AI-generated material
- Equitable access: prevent the technology from becoming a privilege of wealthier families
- Restriction to sentencing: keep AI recreations out of guilt-phase proceedings
Arizona’s case is a pivotal example—and a call to action.
Conclusion
AI in the courtroom is no longer fiction: it is reshaping how we remember, grieve, and pursue justice. In Arizona, it gave a voice to the voiceless with devastating poignancy. But without proper guardrails, that voice risks echoing bias, distortion, and inequity.
The Pelkey case isn't the end; it's the beginning. As AI moves deeper into the courtroom, lawmakers, judges, and ethicists must answer a hard question: can digital revival coexist with authentic justice? The answer will define the next frontier of legal ethics.