
Dataset Information


Adapting Bidirectional Encoder Representations from Transformers (BERT) to Assess Clinical Semantic Textual Similarity: Algorithm Development and Validation Study.


ABSTRACT:

Background

Natural Language Understanding enables automatic extraction of relevant information from clinical text data, which are acquired every day in hospitals. In 2018, the language model Bidirectional Encoder Representations from Transformers (BERT) was introduced, generating new state-of-the-art results on several downstream tasks. The National NLP Clinical Challenges (n2c2) is an initiative that strives to tackle such downstream tasks on domain-specific clinical data. In this paper, we present the results of our participation in the 2019 n2c2 and related work completed thereafter.

Objective

The objective of this study was to optimally leverage BERT for the task of assessing the semantic textual similarity of clinical text data.

Methods

We used BERT as an initial baseline and analyzed its results, which served as the starting point for three different approaches. First, we added handcrafted sentence similarity features to the classifier token of BERT and combined the results with further features in multiple regression estimators. Second, we incorporated a built-in ensembling method, M-Heads, into BERT by duplicating the regression head and applying an adapted training strategy that encourages the heads to focus on different input patterns of the medical sentences. Third, we developed a graph-based similarity approach for medications, which allows similarities to be extrapolated across known entities from the training set. All approaches were evaluated with the Pearson correlation coefficient between the predicted scores and the ground truth of the official training and test datasets.
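
The M-Heads idea can be pictured with a minimal PyTorch/Hugging Face sketch. Everything in it is an illustrative assumption rather than the authors' exact implementation: the class name MHeadsRegressor, the number of heads, and the winner-takes-all loss stand in for whatever adapted training strategy the paper actually uses.

    # Minimal sketch of M-Heads on top of BERT (illustrative; not the paper's
    # exact implementation). Each head is an independent linear regressor over
    # the [CLS] token; training updates only the best-fitting head per sample,
    # which encourages the heads to specialize on different input patterns.
    import torch
    import torch.nn as nn
    from transformers import AutoModel

    class MHeadsRegressor(nn.Module):
        def __init__(self, model_name="bert-base-uncased", num_heads=5):
            super().__init__()
            self.bert = AutoModel.from_pretrained(model_name)
            hidden = self.bert.config.hidden_size
            # M duplicated regression heads, each mapping [CLS] to one score
            self.heads = nn.ModuleList(
                [nn.Linear(hidden, 1) for _ in range(num_heads)]
            )

        def forward(self, input_ids, attention_mask):
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]  # [CLS] representation
            # Stack every head's prediction: shape (batch, num_heads)
            return torch.stack([h(cls).squeeze(-1) for h in self.heads], dim=1)

    def mheads_loss(scores, targets):
        # Winner-takes-all: backpropagate only through the head whose
        # prediction is closest to the gold similarity for each sample
        # (one plausible "adapted training strategy"; the paper's may differ).
        per_head = (scores - targets.unsqueeze(1)) ** 2
        return per_head.min(dim=1).values.mean()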

Results

We improved the performance of BERT on the test dataset from a Pearson correlation coefficient of 0.859 to 0.883 using a combination of the M-Heads method and the graph-based similarity approach. We also show differences between the training and test datasets and how these differences influenced the results.
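
For a concrete reference on the metric behind these numbers, the snippet below computes a Pearson correlation with scipy; the score lists are placeholders, not data from the paper.

    # Pearson correlation between predicted and gold similarity scores,
    # the official evaluation metric of the task (placeholder values).
    from scipy.stats import pearsonr

    y_true = [4.5, 1.0, 3.2, 0.5, 5.0]  # gold similarity annotations (0-5 scale)
    y_pred = [4.1, 1.3, 3.0, 0.9, 4.7]  # model predictions
    r, _ = pearsonr(y_true, y_pred)
    print(f"Pearson r = {r:.3f}")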

Conclusions

We found that a graph-based similarity approach has the potential to extrapolate domain-specific knowledge to unseen sentences. We also observed that deceptive results are easily obtained on the test dataset, especially when the distribution of data samples differs between the training and test datasets.
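
One way to picture the extrapolation such a graph enables is the sketch below. It assumes networkx and a multiplicative propagation rule; both the helper names and the rule are illustrative assumptions, not the paper's algorithm.

    # Illustrative medication-similarity graph (assumed propagation rule, not
    # the paper's exact algorithm). Known training-set similarities become
    # weighted edges; for an unseen pair, similarities are multiplied along
    # the cheapest path by using -log(similarity) as the edge cost.
    import math
    import networkx as nx

    def build_graph(known_pairs):
        g = nx.Graph()
        for med_a, med_b, sim in known_pairs:  # sim normalized to (0, 1]
            g.add_edge(med_a, med_b, cost=-math.log(sim))
        return g

    def extrapolated_similarity(g, med_a, med_b):
        try:
            cost = nx.shortest_path_length(g, med_a, med_b, weight="cost")
            return math.exp(-cost)  # product of edge similarities on the path
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            return None  # no training evidence to extrapolate from

    # Placeholder training pairs; the ibuprofen-aspirin pair is unseen but
    # reachable through naproxen:
    graph = build_graph([("ibuprofen", "naproxen", 0.9),
                         ("naproxen", "aspirin", 0.8)])
    print(extrapolated_similarity(graph, "ibuprofen", "aspirin"))  # ~0.72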

SUBMITTER: Kades K 

PROVIDER: S-EPMC7889424 | biostudies-literature | 2021 Feb

REPOSITORIES: biostudies-literature


Publications

Adapting Bidirectional Encoder Representations from Transformers (BERT) to Assess Clinical Semantic Textual Similarity: Algorithm Development and Validation Study.

Klaus Kades, Jan Sellner, Gregor Koehler, Peter M Full, T Y Emmy Lai, Jens Kleesiek, Klaus H Maier-Hein

JMIR Medical Informatics, 2021 Feb 3 (Issue 2)



Similar Datasets

| S-EPMC10909178 | biostudies-literature
| S-EPMC6746103 | biostudies-literature
| S-EPMC8294940 | biostudies-literature
| S-EPMC7566510 | biostudies-literature
| S-EPMC7837998 | biostudies-literature
| S-EPMC7221648 | biostudies-literature
| S-EPMC9338483 | biostudies-literature
| S-EPMC9371328 | biostudies-literature
| S-EPMC7721552 | biostudies-literature
| S-EPMC11339519 | biostudies-literature