Faculty Mentor(s)

Dr. Omofolakunmi Olagbemi, Computer Science; Dr. Brooke Odle, Engineering

Document Type

Poster

Event Date

4-12-2024

Abstract

Explainability is a crucial component of machine learning (ML) models because it promotes trust by providing insight into how predictions are determined. Our study applies the explainability component of the classification model XCM to identify the critical features driving classification decisions on data collected while participants performed patient-handling tasks on manikins. Studies show that nurses sustain musculoskeletal injuries early in their careers, attributable in part to the posture adopted during patient-handling tasks. The ML models classify the posture adopted during tasks as "good", "poor", or, in some cases, "neutral", where good posture minimizes the risk of musculoskeletal injuries (especially low back pain) in the participants. Features deemed important by XCM (e.g., lumbar rotation, hip flexion) aligned with expectations from a biomechanical standpoint. Training another ML model using only those features improved model accuracy. Further analysis will assist with determining metrics indicative of good posture.
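The sketch below illustrates the retraining step described in the abstract: a model is fit on all features, the features ranked most influential by an explainability step are kept, and a second model is trained on that subset. XCM itself is not shown; the stand-in RandomForest classifier, its impurity-based importances, the synthetic data, and the feature names are illustrative assumptions, not the authors' pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-trial posture features; the study uses sensor data
# recorded during patient-handling tasks, and these names are placeholders.
feature_names = ["lumbar_rotation", "hip_flexion", "trunk_flexion",
                 "knee_flexion", "shoulder_elevation", "elbow_flexion"]
X = rng.normal(size=(200, len(feature_names)))
# Label "good" (1) vs "poor" (0) posture from two of the synthetic features.
y = (X[:, 0] + 0.8 * X[:, 1] + 0.2 * rng.normal(size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model trained on all features.
full_model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy, all features:", accuracy_score(y_te, full_model.predict(X_te)))

# Keep only the features an explainability method ranks as most influential
# (XCM's attribution maps in the study; built-in importances here).
top = np.argsort(full_model.feature_importances_)[::-1][:2]
print("selected features:", [feature_names[i] for i in top])

# Retrain using only the selected features and compare accuracy.
reduced_model = RandomForestClassifier(random_state=0).fit(X_tr[:, top], y_tr)
print("accuracy, selected features:", accuracy_score(y_te, reduced_model.predict(X_te[:, top])))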

Comments

This research was supported in part by funding from the National Aeronautics and Space Administration (NASA) under award number 80NSSC20M0124 through the Michigan Space Grant Consortium (MSGC), by the RESTORE Center of Stanford University, supported by NICHD of the National Institutes of Health under award number 5P2CHD101913, and by the Hope College Computer Science Department.