Pre-Conference Workshop

AI in Forensic Mental Health: Revolutionary or Hallucinatory?


PRESENTER: Prof. dr. David J. Vinkers

DATE: June 15, 2026

TIME: 9:00 am - 12:30 pm (half day)

CONTINUING EDUCATION CREDITS: 3.5 credits

COST: $250 CAD // Student Member Rate: $125 CAD (includes 1 catered coffee break; lunch NOT included) 



DESCRIPTION

Artificial intelligence (AI) is rapidly entering forensic mental health, from automated risk assessment to AI-generated clinical reports. There are, however, several concerns about its use, such as the “black box” problem, algorithmic bias, privacy risks, and loss of professional identity. Is there any value in using AI in forensic mental health?

The answer probably depends on the balance of risks and benefits of using AI, and that balance is exactly what this workshop is about. You will gain a hands-on understanding of the core mechanisms of AI systems and learn about different AI applications in forensic mental health: risk estimation, summarization of judicial documents, analysis of recorded conversations, and AI-based voice analysis. You will learn how to annotate (dummy) data, how to record and summarize a conversation, and how to build a prompt. You will also learn about the legal background of AI use, including the EU AI Act, and we will discuss a recent Dutch example of AI in risk estimation (OXREC).


WORKSHOP OUTLINE:

Part 1: How AI Works: Foundations for Clinicians (30 min)

An accessible introduction to the core mechanisms of modern AI for clinicians without a technical background. Topics: tokenization and embeddings; transformer architecture and attention; the difference between pattern matching and reasoning; hallucination, confabulation, and confidence calibration; prompt engineering for clinical applications. Participants will experiment with structured prompts to generate forensic text.
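A structured prompt of the kind participants will experiment with can be sketched as a simple template: a fixed role, task, and constraint block placed ahead of the source text. The section names and wording below are illustrative assumptions, not the workshop's actual template:

```python
# Minimal sketch of a structured clinical prompt builder (illustrative only).
# The ROLE/TASK/CONSTRAINTS layout and the constraint wording are assumptions.

def build_forensic_prompt(role: str, task: str, source_text: str,
                          constraints: list[str]) -> str:
    """Assemble a structured prompt: role, task, constraints, then the input."""
    constraint_block = "\n".join(f"- {c}" for c in constraints)
    return (
        f"ROLE: {role}\n"
        f"TASK: {task}\n"
        f"CONSTRAINTS:\n{constraint_block}\n"
        f"SOURCE TEXT:\n{source_text}\n"
    )

prompt = build_forensic_prompt(
    role="Forensic mental health evaluator",
    task="Summarize the risk-relevant facts in neutral, non-conclusory language.",
    source_text="[dummy interview excerpt]",
    constraints=[
        "Quote the source verbatim for every factual claim.",
        "Flag any statement you cannot ground in the source as UNVERIFIED.",
    ],
)
print(prompt)
```

Keeping the constraints explicit and machine-checkable (e.g. the UNVERIFIED flag) is one common way to make hallucinations visible in the output.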

Part 2: From Data to Evidence (45 min)

Clinical AI depends on high-quality annotated data. We will demonstrate a complete annotation-to-model pipeline, built specifically for forensic mental health. We will look at DSPy (Declarative Self-improving Language Programs), a framework for building modular, optimizable AI pipelines that improve through structured expert feedback rather than manual prompt engineering. Using Prodigy, participants learn how forensic source documents (court rulings, police reports, clinical evaluations) are annotated for structured information extraction through active learning. Topics: why annotation quality determines AI quality; Prodigy’s active learning workflow; building gold-standard forensic datasets; inter-rater reliability for forensic constructs; DSPy module architecture and optimization. Special attention will be given to data security, the difference between local and online deployment, and the EU AI Act.
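Inter-rater reliability, one of the topics above, is commonly quantified with Cohen’s kappa, which measures agreement between two annotators beyond what chance would produce. A minimal sketch on invented dummy labels (the label set is an assumption, not the workshop’s annotation scheme):

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Dummy annotations of text spans by two raters (labels are illustrative).
rater_a = ["RISK", "RISK", "NONE", "PROTECTIVE", "NONE", "RISK"]
rater_b = ["RISK", "NONE", "NONE", "PROTECTIVE", "NONE", "RISK"]
kappa = cohens_kappa(rater_a, rater_b)  # ~0.74: substantial agreement
```

Low kappa on a forensic construct is a signal to tighten the annotation guidelines before any model training, since the model can only be as consistent as its gold standard.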

Part 3: Voice Analysis and Conversation Processing (45 min)

Forensic assessment relies on clinical interviews, yet these conversations are currently underused as data. This segment covers the full pipeline from audio to clinical report. Topics: automated speech-to-text for forensic interviews (Whisper and cloud alternatives); speaker diarization and attribution; voice biomarkers (what tone, pace, pauses, and prosody can and cannot reveal about mental state); generating structured clinical reports from transcripts using LLMs; GDPR requirements for recorded clinical interactions. Live demonstration: a “dummy” forensic interview excerpt is transcribed, diarized, and analyzed in real time.
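As a hedged illustration of the pace-and-pauses features mentioned above, the sketch below derives speaking rate and pause counts from word-level timestamps of the kind a speech-to-text system such as Whisper can emit. The 0.5-second pause threshold and the dummy data are assumptions for illustration:

```python
def pace_and_pauses(words, pause_threshold=0.5):
    """Compute speaking rate (words/min) and pause count from
    (word, start_s, end_s) tuples. The pause threshold is an assumption."""
    duration_min = (words[-1][2] - words[0][1]) / 60.0
    rate = len(words) / duration_min
    # A pause is a silent gap between consecutive words >= the threshold.
    pauses = sum(
        1 for (_, _, end), (_, start, _) in zip(words, words[1:])
        if start - end >= pause_threshold
    )
    return rate, pauses

# Dummy interview excerpt: (word, start_s, end_s), one long hesitation.
words = [("I", 0.0, 0.2), ("was", 0.3, 0.5), ("home", 0.6, 0.9),
         ("that", 2.0, 2.2), ("night", 2.3, 2.6)]
rate, pauses = pace_and_pauses(words)  # one pause of 1.1 s
```

Features like these are descriptive only; as the section notes, what they can and cannot reveal about mental state is exactly the point under discussion.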

Part 4: Path Forward (45 min)

Critical discussion of regulatory and ethical frameworks. Topics: EU AI Act classification (high-risk vs. preparatory); the OXREC case; automation bias and human-in-the-loop design; the hybrid architecture (AI as a preparatory tool with preserved professional judgment); a practical evaluation framework for forensic services considering AI adoption; and what to do and what not to do.


LEARNING OBJECTIVES 

  1. Explain the core mechanisms of AI and large language models (tokenization, attention, generation) and identify why these produce both useful outputs and characteristic failures (hallucination, confabulation) in forensic contexts.

  2. Critically evaluate AI-assisted forensic assessment tools by applying knowledge of annotation quality, training data, and the pattern-matching/reasoning distinction, using the OXREC case as reference.

  3. Describe the annotation-to-model pipeline for forensic AI, including Prodigy for active learning annotation and DSPy for building modular, optimizable assessment pipelines improved through expert feedback.

  4. Assess opportunities and limitations of voice analysis in forensic mental health—AI-powered transcription, speaker diarization, paralinguistic features—and the privacy requirements under GDPR.

  5. Apply the EU AI Act classification framework to forensic tools, distinguish high-risk from preparatory applications, and design human-in-the-loop architectures preserving professional judgment.

Register for this workshop here!


