April 21, 2023

The Goal of Explaining Black Boxes in EEG Seizure Prediction is Not to Explain Models’ Decisions

Abstract found on Wiley Online Library

Many state-of-the-art methods for seizure prediction, using the electroencephalogram, are based on machine learning models that are black boxes, which weakens clinicians' trust in them for high-risk decisions. Seizure prediction is a multidimensional time-series problem tackled with continuous sliding-window analysis and classification. In this work, we make a critical review of which explanations increase trust in models' decisions for predicting seizures. We developed three machine learning methodologies to explore their explainability potential. These span different levels of model transparency: a logistic regression, an ensemble of fifteen Support Vector Machines, and an ensemble of three Convolutional Neural Networks. For each methodology, we quasi-prospectively evaluated performance in 40 patients (testing data comprised 2055 hours and 104 seizures). We selected patients with good and poor performance to explain the models' decisions. Then, using Grounded Theory, we evaluated how these explanations helped specialists (data scientists and clinicians working in epilepsy) understand the obtained model dynamics. We obtained four lessons for better communication between data scientists and clinicians. We found that the goal of explainability is not to explain the system's decisions but to improve the system itself. Model transparency is not the most significant factor in explaining a model decision for seizure prediction. Even when using intuitive and state-of-the-art features, it is hard to understand brain dynamics and their relationship with the developed models. We achieved a better understanding by developing, in parallel, several systems that explicitly deal with changes in signal dynamics, which helped us arrive at a more complete problem formulation.
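
To make the setup concrete, below is a minimal sketch (not the authors' code) of the continuous sliding-window classification the abstract describes, using a logistic regression as the most transparent of the three model families compared. The window length, feature choices, labels, and data are illustrative assumptions, not values reported in the paper.

```python
# Sketch of sliding-window EEG classification for seizure prediction.
# All numbers (sampling rate, window size, labels) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def sliding_windows(signal, window_size, step):
    """Yield consecutive windows over a (samples, channels) EEG array."""
    for start in range(0, signal.shape[0] - window_size + 1, step):
        yield signal[start:start + window_size]

def window_features(window):
    """Toy per-channel features (mean power and line length) for one window."""
    power = np.mean(window ** 2, axis=0)
    line_length = np.mean(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([power, line_length])

# Synthetic stand-in for multichannel EEG: 10 minutes at 256 Hz, 19 channels.
fs, n_channels = 256, 19
eeg = rng.standard_normal((10 * 60 * fs, n_channels))

# 5-second windows with 50% overlap (illustrative choices).
window_size, step = 5 * fs, 5 * fs // 2
X = np.array([window_features(w) for w in sliding_windows(eeg, window_size, step)])

# Fake labels: the last quarter of windows marked "preictal" for demonstration only.
y = np.zeros(len(X), dtype=int)
y[3 * len(X) // 4:] = 1

# Logistic regression is the transparent baseline: its coefficients can be
# inspected per feature, unlike the SVM and CNN ensembles.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print("First few per-feature coefficients:", model[-1].coef_.ravel()[:5])
```

In a real quasi-prospective evaluation, the classifier would be trained on a patient's earlier seizures and then run continuously over later, unseen recordings, raising alarms when preictal probability stays high; the sketch above only illustrates the windowing and feature-extraction stage.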