
Dataset Information


Comparison of different feature extraction methods for applicable automated ICD coding.


ABSTRACT:

Background

Automated ICD coding on medical texts via machine learning has been a hot topic. Related studies from the medical field rely heavily on conventional bag-of-words (BoW) as the feature extraction method, and do not commonly use more sophisticated methods, such as word2vec (W2V) and large pretrained models like BERT. This study aimed at uncovering the most effective feature extraction methods for coding models by comparing BoW, W2V and BERT variants.

Methods

We experimented with a Chinese dataset from Fuwai Hospital, which contains 6947 records and 1532 unique ICD codes, and a public Spanish dataset, which contains 1000 records and 2557 unique ICD codes. We designed coding tasks with different code frequency thresholds (denoted as [Formula: see text]), with a lower threshold indicating a more complex task. Using traditional classifiers, we compared BoW, W2V and BERT variants on accomplishing these coding tasks.
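The experimental setup above, filtering the label space by a code frequency threshold and building bag-of-words features for traditional classifiers, can be sketched as follows. This is a minimal illustration with hypothetical toy records, not the paper's actual pipeline or data:

```python
from collections import Counter

# Hypothetical toy records for illustration: (text, assigned ICD codes).
records = [
    ("chest pain and hypertension", ["I10", "R07.9"]),
    ("essential hypertension follow up", ["I10"]),
    ("type 2 diabetes mellitus", ["E11.9"]),
    ("hypertension with diabetes", ["I10", "E11.9"]),
]

def filter_codes_by_frequency(records, f):
    """Keep only ICD codes appearing at least f times.

    A lower threshold f keeps rarer codes, giving a larger label
    space and hence a more complex coding task.
    """
    counts = Counter(code for _, codes in records for code in codes)
    return {code for code, n in counts.items() if n >= f}

def bow_features(texts):
    """Plain bag-of-words: map each text to a word-count vector."""
    vocab = sorted({w for t in texts for w in t.split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for t in texts:
        v = [0] * len(vocab)
        for w in t.split():
            v[index[w]] += 1
        vectors.append(v)
    return vocab, vectors

# Raising f shrinks the target codeset to frequent codes only.
kept = filter_codes_by_frequency(records, f=2)
vocab, X = bow_features([t for t, _ in records])
```

The resulting count vectors `X` (restricted to labels in `kept`) would then be fed to a traditional classifier; W2V or BERT features would simply replace `bow_features` in the same pipeline.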

Results

When [Formula: see text] was equal to or greater than 140 for the Fuwai dataset, and 60 for the Spanish dataset, fine-tuning the whole network of the BERT variants was the best method, leading to a Micro-F1 of 93.9% for the Fuwai dataset when [Formula: see text], and a Micro-F1 of 85.41% for the Spanish dataset when [Formula: see text]. When [Formula: see text] fell below 140 for the Fuwai dataset, and 60 for the Spanish dataset, BoW turned out to be the best, leading to a Micro-F1 of 83% for the Fuwai dataset when [Formula: see text], and a Micro-F1 of 39.1% for the Spanish dataset when [Formula: see text]. Our experiments also showed that both the BERT variants and BoW possessed good interpretability, which is important for medical applications of coding models.

Conclusions

This study shed light on building promising machine learning models for automated ICD coding by revealing the most effective feature extraction methods. Concretely, our results indicated that fine-tuning the whole network of the BERT variants was the optimal method for tasks covering only frequent codes, especially codes that represented unspecified diseases, while BoW was the best for tasks involving both frequent and infrequent codes. The frequency threshold at which the best-performing method changed differed between datasets due to factors such as language and codeset.

SUBMITTER: Shuai Z 

PROVIDER: S-EPMC8756659 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC6779285 | biostudies-literature
| PRJEB29363 | ENA
| S-EPMC4842096 | biostudies-literature
| S-EPMC10018467 | biostudies-literature
| S-EPMC3916103 | biostudies-other
2024-01-01 | GSE246437 | GEO
| S-EPMC8432850 | biostudies-literature
| S-EPMC6515685 | biostudies-literature
| S-EPMC8374470 | biostudies-literature
| S-EPMC6258844 | biostudies-literature