
Dataset Information


Data preparation and interannotator agreement: BioCreAtIvE task 1B.


ABSTRACT:

Background

We prepared and evaluated training and test materials for an assessment of text mining methods in molecular biology. The goal of the assessment was to evaluate the ability of automated systems to generate a list of unique gene identifiers from PubMed abstracts for the three model organisms Fly, Mouse, and Yeast. This paper describes the preparation and evaluation of answer keys for training and testing. These consisted of lists of normalized gene names found in the abstracts, generated by adapting the gene lists compiled for the full journal articles in the model organism databases. For the training dataset, the gene list was pruned automatically to remove gene names not found in the abstract; for the test dataset, it was further refined manually by annotators working from written guidelines. A critical step in interpreting the results of an assessment is evaluating the quality of the data preparation. We did this through careful assessment of interannotator agreement and by pooling participant answers to improve the quality of the final test dataset.
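As an illustration of the automatic pruning step described above, the sketch below keeps only those gene identifiers for which a known synonym can be matched in the abstract text. This is a minimal, hypothetical reconstruction in Python; the function names, the synonym lexicon, and the whole-token matching rule are assumptions, not the procedure actually used for BioCreAtIvE task 1B.

import re

def prune_gene_list(abstract_text, gene_ids, synonym_lexicon):
    """Keep only gene identifiers with at least one synonym present in the abstract.

    synonym_lexicon maps a gene identifier (e.g. a FlyBase/MGI/SGD id) to a
    list of known names and symbols for that gene.
    """
    kept = set()
    for gene_id in gene_ids:
        for synonym in synonym_lexicon.get(gene_id, []):
            # Match the synonym as a whole token, case-insensitively.
            if re.search(r"\b" + re.escape(synonym) + r"\b", abstract_text, re.IGNORECASE):
                kept.add(gene_id)
                break
    return kept

# Example with made-up identifiers and synonyms:
abstract = "The white gene regulates eye pigmentation in Drosophila."
full_article_genes = {"FBgn0003996", "FBgn0000490"}
lexicon = {"FBgn0003996": ["white", "w"], "FBgn0000490": ["decapentaplegic", "dpp"]}
print(prune_gene_list(abstract, full_article_genes, lexicon))  # {'FBgn0003996'}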

Results

Interannotator analysis on a small dataset showed that our gene lists for Fly and Yeast were good (87% and 91% three-way agreement, respectively), but the Mouse gene list had many conflicts (mostly omissions), which resulted in errors (69% interannotator agreement). By comparing and pooling answers from the participant systems, we were able to add a further check on the test data; this allowed us to find additional errors, especially for Mouse. Pooling led to a 1% change in the Yeast and Fly "gold standard" answer keys, but to an 8% change in the Mouse answer key.
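As a purely illustrative sketch (not the exact metric or pooling procedure used in the paper), three-way agreement over per-abstract gene lists can be scored as the fraction of identifiers, taken over the union proposed by any annotator, that all three annotators included; answer pooling can be approximated by flagging identifiers that several participant systems returned but the draft answer key lacks. All names below are hypothetical.

from collections import Counter

def three_way_agreement(ann_a, ann_b, ann_c):
    """Fraction of proposed gene identifiers on which all three annotators agree."""
    union = ann_a | ann_b | ann_c
    if not union:
        return 1.0  # trivially agree on an empty gene list
    return len(ann_a & ann_b & ann_c) / len(union)

def pooled_candidates(system_outputs, answer_key, min_systems=2):
    """Identifiers returned by at least min_systems participants but missing from the key."""
    counts = Counter(g for output in system_outputs for g in set(output))
    return {g for g, n in counts.items() if n >= min_systems and g not in answer_key}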

Conclusion

We found that clear annotation guidelines, together with careful interannotator experiments, are important for validating the generated gene lists. Abstracts alone are also a poor resource for identifying the genes discussed in a paper, containing only a fraction of the genes mentioned in the full text (25% for Fly, 36% for Mouse). We found intrinsic differences between the model organism databases, related both to the number of synonymous terms and to curation criteria. Finally, answer pooling was much faster than interannotator analysis and allowed us to identify more conflicting genes.

SUBMITTER: Colosimo ME 

PROVIDER: S-EPMC1869005 | biostudies-literature | 2005

REPOSITORIES: biostudies-literature


Publications

Data preparation and interannotator agreement: BioCreAtIvE task 1B.

Colosimo Marc E, Morgan Alexander A, Yeh Alexander S, Colombe Jeffrey B, Hirschman Lynette

BMC Bioinformatics, 2005-05-24



Similar Datasets

| S-EPMC3269939 | biostudies-literature
| S-EPMC1869008 | biostudies-literature
| S-EPMC3269937 | biostudies-literature
| S-EPMC5009325 | biostudies-literature
| S-EPMC4112614 | biostudies-literature
| S-EPMC5009341 | biostudies-literature
| S-EPMC3625048 | biostudies-literature
| S-EPMC6196310 | biostudies-literature
| S-EPMC4799720 | biostudies-literature
| S-EPMC3148238 | biostudies-literature