CS Seminar

Title: Crowdsourcing and Semi-Supervised Learning for Detection and Prediction of Hospital Acquired Pressure Ulcer Injury
Defense: Computer Science
Speaker: Mani Sotoodeh, Emory University
Contact: Joyce Ho, joyce.c.ho@emory.edu
Date: 2021-07-27 at 1:00PM
Venue: https://emory.zoom.us/j/93643444080
Abstract:
Pressure ulcer injury (PUI), or bedsore, is “a localized injury to the skin and/or underlying tissue due to pressure.” More than 2.5 million Americans develop PUIs annually, and the incidence of hospital-acquired PUI (HAPUI) is around 5% to 6%. Bedsores are associated with reduced quality of life, higher mortality and readmission rates, and longer hospital stays. The Centers for Medicare and Medicaid Services considers PUI the most frequent preventable event, and PUIs are the second most common claim in lawsuits. The current practice of estimating PUI rates through one-day manual assessments each quarter has many disadvantages, including high cost, subjectivity, and substantial disagreement among nurses, not to mention missed opportunities to adjust practices and improve care immediately. The biggest challenge in HAPUI detection using electronic health records (EHRs) is assigning ground truth for HAPUI classification, which requires consideration of multiple clinical criteria from nursing guidelines. However, these criteria do not map explicitly to EHR data sources. Furthermore, there is no consistent cohort definition across research on HAPUI detection. Since labels significantly impact a model’s performance, inconsistent labels complicate the comparison of research results. Multiple opinions for the same HAPUI classification task can remedy this uncertainty in labeling, but methods for learning with multiple uncertain labels have mainly been developed for computer vision. Unfortunately, acquiring images of PUIs at hospitals is not standard practice, so we must resort to tabular or time-series data. Finally, acquiring expert nursing annotations to establish accurate labels is costly; if unlabelled samples can be utilized, however, a combination of annotated and unlabelled samples could yield a robust classifier.
To overcome these challenges, we make the following contributions: 1) we propose a new standardized HAPUI cohort definition, applicable to EHR data and faithful to clinical guidelines; 2) we introduce CrowdTeacher, a novel model for learning with unreliable crowdsourced labels using sample-specific perturbations, suitable for the sparse annotations of HAPUI detection; 3) we explore unstructured notes to enhance CrowdTeacher and to glean better feature representations for HAPUI detection; and 4) we incorporate unlabelled data into HAPUI detection via semi-supervised learning to reduce annotation costs.
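To make the core ideas in contribution 2 concrete, the minimal sketch below illustrates two ingredients in isolation: aggregating multiple noisy annotators' labels into a single proxy label (here by simple majority vote) and augmenting tabular samples with sample-specific perturbations before training. This is an illustrative toy on synthetic data, not the actual CrowdTeacher algorithm; the noise rates, the per-sample perturbation scale heuristic, and the logistic-regression classifier are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic tabular data: 200 samples, 5 features, labels from a linear rule.
X = rng.normal(size=(200, 5))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Three simulated annotators, each flipping the true label at its own rate
# (hypothetical noise rates chosen for the example).
noise_rates = [0.1, 0.2, 0.3]
annotations = np.stack([
    np.where(rng.random(len(y_true)) < p, 1 - y_true, y_true)
    for p in noise_rates
])

# Majority vote aggregates the crowdsourced labels into one proxy label.
y_agg = (annotations.mean(axis=0) >= 0.5).astype(int)

# Sample-specific perturbation: Gaussian noise scaled per sample by its own
# mean feature magnitude (a toy heuristic, not the thesis's scheme).
scales = 0.1 * np.abs(X).mean(axis=1, keepdims=True)
X_pert = X + rng.normal(size=X.shape) * scales

# Train on the original samples plus their perturbed copies, both carrying
# the aggregated proxy labels.
clf = LogisticRegression().fit(
    np.vstack([X, X_pert]), np.concatenate([y_agg, y_agg])
)
acc = clf.score(X, y_true)
```

Even with individual annotators wrong 10–30% of the time, majority voting yields a cleaner training signal, and the perturbed copies act as a simple consistency-style augmentation for tabular data.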
