ML4H 2021

Interpretability Aware Model Training to Improve Robustness against OOD Magnetic Resonance Images in Alzheimer's Disease Classification

Merel Kuijs, Catherine R. Jutzeler, Bastian Rieck, Sarah C. Brueningk

Abstract: Owing to its pristine soft-tissue contrast and high resolution, structural magnetic resonance imaging (MRI) is widely applied in neurology, making it a valuable data source for image-based machine learning (ML) and deep learning applications. The physical nature of MRI acquisition and reconstruction, however, causes variations in image intensity, resolution, and signal-to-noise ratio. Since ML models are sensitive to such variations, performance on out-of-distribution data, which is inherent to the setting of a deployed healthcare ML application, typically drops below acceptable levels. We propose an interpretability-aware adversarial training regime to improve robustness against out-of-distribution samples originating from different MRI hardware. The approach is applied to 1.5T and 3T MRIs obtained from the Alzheimer's Disease Neuroimaging Initiative database. We present preliminary results showing promising performance on out-of-distribution samples.
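The abstract does not spell out the training objective, but a minimal sketch of how an interpretability-aware adversarial regime is commonly instantiated is given below in PyTorch: an FGSM-style perturbation stands in for scanner-induced distribution shift, and an extra penalty keeps input-gradient saliency maps consistent between clean and perturbed images. The function name, the choice of perturbation, and the loss weighting "lam" are illustrative assumptions, not the paper's actual method.

    import torch
    import torch.nn as nn

    def interpretability_aware_adversarial_loss(model, x, y, eps=0.01, lam=1.0):
        # Hypothetical sketch: the FGSM-style perturbation and the
        # saliency-consistency penalty are assumptions, not the
        # authors' published training objective.
        ce = nn.CrossEntropyLoss()

        # Clean forward pass; keep the graph so the saliency term
        # remains differentiable with respect to the model parameters.
        x = x.clone().requires_grad_(True)
        loss_clean = ce(model(x), y)
        sal_clean = torch.autograd.grad(loss_clean, x, create_graph=True)[0]

        # FGSM-style perturbation standing in for hardware-induced shift.
        x_adv = (x + eps * sal_clean.sign()).detach().requires_grad_(True)
        loss_adv = ce(model(x_adv), y)
        sal_adv = torch.autograd.grad(loss_adv, x_adv, create_graph=True)[0]

        # Penalize divergence between clean and perturbed saliency maps,
        # encouraging the model to attend to the same regions under shift.
        saliency_penalty = (sal_clean - sal_adv).pow(2).mean()

        return loss_clean + loss_adv + lam * saliency_penalty

A training loop would call this in place of a plain cross-entropy loss, e.g. loss = interpretability_aware_adversarial_loss(model, batch_x, batch_y) followed by loss.backward(); note that the saliency penalty requires a double backward pass, so the model's layers must support second-order gradients.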

Poster
