An Automatic, Objective Method to Measure and Visualize Volumetric Changes in Patients with Facial Palsy during 3D Video Recordings [Abstract]
In: 95th Annual Meeting of the German Society of Oto-Rhino-Laryngology, Head and Neck Surgery e. V., Bonn, 2024
Introduction: Using grading systems, the severity of facial palsy is typically classified from static 2D images. These approaches fail to capture crucial facial attributes, such as the depth of the nasolabial fold. We present a novel technique that uses 3D video recordings to overcome this limitation. Our method automatically characterizes the facial structure, calculates volumetric disparities between the affected and contralateral side, and includes an intuitive visualization.

Material: 35 patients (mean age 51 years, min. 25, max. 72; 7 ♂, 28 ♀) with unilateral chronic synkinetic facial palsy were enrolled. We utilized the 3dMD face system (3dMD LLC, Georgia, USA) to record their facial movements while they mimicked happy facial expressions four times. Each recording lasted 6.5 seconds, for a total of 140 videos.

Results: We found a difference in volume between the neutral and the happy expression: 11.7 ± 9.1 mm³ and 13.73 ± 10.0 mm³, respectively. This suggests a higher level of asymmetry during movements. Our process is fully automatic without human intervention, highlights the impacted areas, and emphasizes the differences between the affected and contralateral side.

Discussion: Our data-driven method allows healthcare professionals to track and visualize patients' volumetric changes automatically, facilitating personalized treatments. It mitigates the risk of human bias in therapeutic evaluations and effectively transitions from static 2D images to dynamic 4D assessments of the facial palsy state.

Supported by DFG DE-735/15-1 and DFG GU-463/12-1
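As a minimal illustration of the kind of computation involved (a hypothetical sketch, not the authors' implementation): the volume enclosed by a closed, consistently wound triangle mesh can be obtained with the divergence theorem, and the volume change between two expressions of the same face follows by differencing two such volumes, assuming both scans share the same vertex topology. All names and the demo mesh below are illustrative.

```python
def signed_volume(vertices, faces):
    """Enclosed volume of a closed triangle mesh as the sum of signed
    tetrahedra v0 . (v1 x v2) / 6 (divergence theorem)."""
    total = 0.0
    for i, j, k in faces:
        x0, y0, z0 = vertices[i]
        x1, y1, z1 = vertices[j]
        x2, y2, z2 = vertices[k]
        total += (x0 * (y1 * z2 - z1 * y2)
                  - y0 * (x1 * z2 - z1 * x2)
                  + z0 * (x1 * y2 - y1 * x2)) / 6.0
    return total

def volume_change(neutral_vertices, expression_vertices, faces):
    """Volume gained (or lost) going from the neutral scan to an
    expression scan; both meshes must share the same face list."""
    return (signed_volume(expression_vertices, faces)
            - signed_volume(neutral_vertices, faces))

# Tiny demo mesh: a unit cube with outward-facing triangles.
CUBE_VERTICES = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                 (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
CUBE_FACES = [(0, 2, 1), (0, 3, 2),   # bottom
              (4, 5, 6), (4, 6, 7),   # top
              (0, 1, 5), (0, 5, 4),   # front
              (2, 3, 7), (2, 7, 6),   # back
              (0, 4, 7), (0, 7, 3),   # left
              (1, 2, 6), (1, 6, 5)]   # right
```

Evaluated per facial region on registered scans (e.g. affected vs. contralateral side), such a difference would yield asymmetry volumes of the kind reported above; mesh registration and region segmentation are omitted from this sketch.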