ABSTRACT: Background
Translation of predictive and prognostic image-based learning models to clinical applications is challenging due in part to their lack of interpretability. Some deep-learning-based methods provide information about the regions driving the model output. Yet, due to the high-level abstraction of deep features, these methods do not completely solve the interpretation challenge. In addition, low sample size cohorts can lead to instabilities and suboptimal convergence for models involving a large number of parameters, such as convolutional neural networks.
Purpose
Here, we propose a method for designing radiomic models that combines the interpretability of handcrafted radiomics with a sub-regional analysis.
Materials and methods
Our approach relies on voxel-wise engineered radiomic features with average global aggregation and logistic regression. The method is illustrated using a small dataset of 51 soft tissue sarcoma (STS) patients, where the task is to predict the risk of lung metastasis occurrence during the follow-up period.
Results
Using positron emission tomography/computed tomography and two magnetic resonance imaging sequences separately to build two radiomic models, we show that our approach produces quantitative maps that highlight the signal contributing to the decision within the tumor region of interest. In our STS example, the analysis of these maps identified two biological patterns that are consistent with STS grading systems and knowledge: necrosis development and glucose metabolism of the tumor.
Conclusions
We demonstrate how this method makes it possible to spatially and quantitatively interpret radiomic models, enabling sub-region identification and biological interpretation for patient stratification.
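A minimal sketch of the aggregation scheme described in the Materials and methods: because global average aggregation and the logistic regression score are both linear operations before the sigmoid, the model's decision decomposes exactly into per-voxel contributions, which is what makes the quantitative maps possible. All weights, feature counts, and data below are hypothetical toy values, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features = 1000, 4                 # toy ROI size and feature count
F = rng.normal(size=(n_voxels, n_features))    # voxel-wise radiomic features

w = np.array([0.8, -0.5, 0.3, 0.1])            # hypothetical trained weights
b = -0.2                                       # hypothetical intercept

# Global average aggregation over the ROI, then the linear logistic score
score = F.mean(axis=0) @ w + b
risk = 1.0 / (1.0 + np.exp(-score))            # predicted metastasis risk

# Equivalent per-voxel contribution map: each voxel's share of the score.
# Summing the map (plus the intercept) recovers the global score exactly.
contrib_map = (F @ w) / n_voxels               # shape (n_voxels,)
assert np.isclose(contrib_map.sum() + b, score)
```

Reshaping `contrib_map` back to the ROI geometry yields a spatial map in which high-contribution sub-regions can be inspected against biological patterns, as done in the Results.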
SUBMITTER: Escobar T
PROVIDER: S-EPMC9325536 | biostudies-literature | 2022 Jun
REPOSITORIES: biostudies-literature
Thibault Escobar, Sébastien Vauclin, Fanny Orlhac, Christophe Nioche, Pascal Pineau, Laurence Champion, Hervé Brisse, Irène Buvat
Medical physics 20220421 6