A veterinarian from Lincoln Memorial University has turned Amazon’s Alexa into a teaching tool for veterinary students.
For her innovative efforts in combining technology and veterinary education, Jamie Perkins, DVM, EdD, recently won first place in the Hyperdrive contest at the 2018 DevLearn Conference in Las Vegas. Held annually, the competition highlights cutting-edge projects that leverage technology to enhance learning.
Dr. Perkins—a clinical skills veterinarian in the Center for Innovation in Veterinary Education and Technology (CIVET) at Lincoln Memorial University College of Veterinary Medicine (LMU-CVM)—earned top honors for enabling the Amazon Alexa device to teach communication and clinical reasoning to veterinary students.
“To my knowledge, we are the first veterinary school in the country to use Alexa technology to teach students,” Dr. Perkins said.
Dr. Perkins spent much of the past year developing skills for Alexa devices that will be used to help students at LMU-CVM review anatomy and canine theriogenology, and prepare for their board exam.
For Dr. Perkins, the most exciting Alexa skill she has worked on, and the one she presented at the Hyperdrive competition, teaches clinical reasoning, a competency that is difficult to master. Like many veterinary schools, LMU-CVM uses standardized (simulated) patients to teach clinical reasoning skills. By using Alexa first, students will be able to develop those skills at their own pace before facing a standardized patient in an exam room.
With Dr. Perkins’ tool, scheduled for an early 2019 release, an Alexa device is preprogrammed with data about a fictitious patient, and students can ask the device questions to determine the pet’s health status. Dr. Perkins believes that Alexa can bridge the gap between classroom learning and preparation for live patients.
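The workflow described above (a device preloaded with a fictitious case that students interrogate by voice) might be sketched, in greatly simplified form, as a keyword-matching question handler. This is not Dr. Perkins’ actual implementation; the patient record, field names, and matching logic below are all invented for illustration:

```python
# Hypothetical sketch of a simulated-patient question handler, the kind of
# logic a voice-assistant skill might run behind the scenes.
# All case data and phrasing are invented for illustration.

# Preloaded record for a fictitious patient (illustrative values only).
PATIENT = {
    "signalment": "3-year-old spayed female Labrador retriever",
    "presenting complaint": "vomiting for two days",
    "temperature": "103.1 F",
    "heart rate": "120 beats per minute",
}

def answer(question: str) -> str:
    """Match a student's spoken question to a field of the patient record."""
    q = question.lower()
    for field, value in PATIENT.items():
        if field in q:
            return f"The patient's {field} is {value}."
    # Fallback when no field matches, mirroring how a voice skill must
    # anticipate questions it cannot answer.
    return "I don't have that information. Try asking about the vital signs."

print(answer("What is the patient's temperature?"))
```

A production voice skill would replace the keyword match with the assistant platform’s intent and slot handling, but the core idea, mapping anticipated spoken questions to pieces of a prewritten case, is the same challenge Dr. Perkins describes in the quote below.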
“This is a very different way to develop an educational resource,” she said. “Everything about working with voice is vastly different. You really have to think about how to write the questions but also develop the skill to anticipate responses in a different format when working with voice recognition.”