November 19, 2025

How to stop bringing work home by using AI scribes

Kathleen Allison-Black, DVM, explains how veterinary clinics can use AI scribes and closed LLM tools to streamline documentation, improve client communication, protect patient data, and retain clinician oversight while reducing after-hours charting.

Kathleen Allison-Black, DVM, is a small-animal practitioner who practices in Maryland and lives in Gettysburg, Pennsylvania. A graduate of the University of Tennessee College of Veterinary Medicine (UTCVM), Allison-Black has worked in relief, emergency, and shelter medicine and has been in general practice at the same clinic for the past 5 years. Frustrated by administrative burden and incomplete records, she began using artificial intelligence (AI) tools—particularly ambient scribes—to improve documentation, client communication, and workflow. In this interview, she discusses which tools she trusts, how to evaluate an AI scribe, privacy and liability considerations, and practical steps for clinics that want to get started.

Editor’s note: This interview has been edited for clarity and length. Where language was smoothed for readability, the meaning was preserved.

dvm360: To start, tell our readers briefly who you are and your background.

Allison-Black: I’m Kathleen Allison-Black, DVM. I live in Gettysburg, Pennsylvania, and practice in Maryland. I’m originally from Tennessee and graduated from UTCVM in 2002. Over the years I’ve worked relief, emergency, and shelter medicine. For the last 5 years I’ve been working in one general practice, which has been a big change from moving around. I began using AI because I was frustrated with the pace of practice and with keeping records accurate and complete; my first use was as a scribe.

dvm360: What workflow improvements can clinicians realistically expect from AI scribing and other AI tools? Are there cost savings or hidden costs we should consider?

Allison-Black: The biggest practical benefit of an ambient AI scribe is that I don’t have to rely on my memory or on staff who were trained on the job and may lack clinical vocabulary. For example, a technician might not document a rectal exam as “normal”; they might just leave it blank. An AI scribe captures what I say, so if I say a rectal exam is normal, it’s in the chart.

Operationally, I can go into an exam room, start and stop a recording, do the exam, state my plan, check the transcription quickly, and paste it into the record. That immediacy lets my staff access the plan right away; they can read the transcript and pull the items they need without interrupting me. I can also instantly generate a client email or go-home instructions from the visit summary and send that to owners, which saves time and reduces phone follow-up.

Practically, this eliminated the 4 hours of charting I used to do after work. I used to bring work home; I don’t anymore. That was huge for my work–life balance. It also calms my anxiety about missing details. I have ADHD, and knowing the record is complete helps me stay focused.

There are costs: subscriptions and hardware (a phone, tablet, or computer with a good microphone). There’s also an initial investment of time to trial systems and train them to your workflow. But the time savings and improved communication often outweigh those costs.

dvm360: You mentioned trying several scribes. How big a difference is there between products?

Allison-Black: Huge. Not all AI scribes are equal. I trialed 7 or 8. Some transcribed poorly or hallucinated content; others were very accurate. The one I settled on had the best mix of accuracy, low hallucination rate, and responsiveness from its development team. For example, I tested a scribe embedded in another software platform, and it produced notes that lacked most of the physical-exam content. That frustrated me and reinforced that you must test tools in your real workflow. Most companies offer free trials; use them, run head-to-head comparisons, and choose the one that matches your clinical style.

dvm360: What should clinics look for when evaluating an AI tool? Any red flags?

Allison-Black: Key things to look for:

  • Veterinary training on the model: Prefer tools developed or vetted by DVMs. Veterinary terminology and concepts differ from those in human medicine.
  • Data security: End-to-end encryption, clear policies on data storage, retention, deletion, and who has access.
  • Plasticity and training: The tool should let you customize templates (SOAP notes, go-home instructions, lab summaries, radiology/ultrasound reports, vet-to-vet communications). It should also learn from corrections. For example, if it mishears a proper name, you should be able to correct it once and have it remember.
  • Responsiveness: Good support from the vendor and fast issue resolution.
  • Clinical fidelity: The language model should be trained on veterinary concepts so summaries and plan outputs reflect veterinary reasoning, not just surface transcription.
  • DVM involvement: If there are no DVMs on the development or advisory team, that’s a red flag.

Also, test for hallucinations. If the scribe records that you discussed something you did not, that’s unacceptable and a reason to switch.

dvm360: How do you balance time savings with clinical oversight? How much checking is enough to avoid errors, hallucinations, or incorrect emails to clients?

Allison-Black: The scribe is a tool, not a decision-maker. In my workflow everything is a cut-and-paste step: I always review the transcript or the generated note before it goes into the permanent record or before I send an email to a client. After a year with the same scribe, it has learned my accent and vocabulary, so small corrections are easier. Misspellings are flagged automatically by my system. If something looks off, I correct it.

For decision support tools that suggest differentials, I use those as a check, not as the origin of my plan. I might use a diagnostic plugin to surface differentials or “do-not-miss” diagnoses with citations, then verify against the literature and my clinical judgment. I also build my own algorithms after reading primary sources; I can then load them into a closed model and check its output against my sources.

In short: use AI to augment documentation and to organize knowledge; retain clinician judgment and verify anything that would change care.

dvm360: Do you feel AI has made you a better clinician?

Allison-Black: Broadly, yes. It’s made me happier and less anxious. I can communicate more clearly and quickly with owners: lab summaries, patient summaries, and next-step plans can be generated in 1–2 minutes and emailed immediately. That reduces long phone calls and the risk of miscommunication when someone else (a technician or a family member) relays information. Clients appreciate the documentation, and I think that improves compliance and trust.
