
Dataset Information


Objective and automated protocols for the evaluation of biomedical search engines using No Title Evaluation protocols.


ABSTRACT:

Background

The evaluation of information retrieval techniques has traditionally relied on human judges to determine which documents are relevant to a query and which are not. This protocol is used in the Text REtrieval Conference (TREC), organized annually for the past 15 years, to support the unbiased evaluation of novel information retrieval approaches. The TREC Genomics Track was recently introduced to measure the performance of information retrieval for biomedical applications.

Results

We describe two protocols for evaluating biomedical information retrieval techniques without human relevance judgments. We call these protocols No Title Evaluation (NT Evaluation). The first protocol measures performance for focused searches, where only one relevant document exists for each query. The second protocol measures performance for queries expected to have many relevant documents (high-recall searches). Both protocols take advantage of the clear separation of titles and abstracts found in Medline. We compare the performance obtained with these evaluation protocols to results obtained by reusing the relevance judgments produced in the 2004 and 2005 TREC Genomics Track and observe significant correlations between performance rankings generated by our approach and by TREC. Spearman's correlation coefficients in the range 0.79-0.92 are observed when comparing bpref measured with NT Evaluation to bpref measured with TREC relevance judgments. For comparison, coefficients in the range 0.86-0.94 are observed when evaluating the same set of methods with data from two independent TREC Genomics Track evaluations. We discuss the advantages of NT Evaluation over the TRels and data fusion evaluation protocols introduced recently.
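The focused-search variant of the protocol can be sketched as follows: each record's title is used as a query against the title-stripped abstracts, and the single relevant document is taken to be the record the title came from; comparing two evaluation regimes then reduces to a rank correlation over the methods they score. This is a minimal illustrative sketch, assuming toy records, a simple term-overlap retrieval model, and mean reciprocal rank in place of bpref; none of these stand in for the paper's exact implementation.

```python
import re
from collections import Counter

# Toy "Medline" records: each entry maps an identifier to (title, abstract).
# These records and the scoring function are illustrative assumptions.
RECORDS = {
    "pmid1": ("Protein folding kinetics", "We study folding rates of small proteins."),
    "pmid2": ("Gene expression in yeast", "Microarray analysis of yeast gene expression."),
    "pmid3": ("Sequence alignment heuristics", "Heuristic methods speed up sequence alignment."),
}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query_tokens, doc_tokens):
    # Simple term-overlap score, a stand-in for a real retrieval model.
    doc_counts = Counter(doc_tokens)
    return sum(doc_counts[t] for t in query_tokens)

def nt_focused_eval(records):
    """Focused-search NT Evaluation sketch: use each title as a query over
    title-stripped abstracts; the relevant document is the source record.
    Returns mean reciprocal rank (MRR) as the performance measure."""
    rr_sum = 0.0
    for pmid, (title, _) in records.items():
        q = tokenize(title)
        ranked = sorted(records,
                        key=lambda p: score(q, tokenize(records[p][1])),
                        reverse=True)
        rr_sum += 1.0 / (ranked.index(pmid) + 1)
    return rr_sum / len(records)

def spearman(xs, ys):
    """Spearman rank correlation (no ties), via the rank-difference formula."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

if __name__ == "__main__":
    print("MRR under NT focused evaluation:", nt_focused_eval(RECORDS))
    # Hypothetical per-method scores under the two regimes; correlating them
    # mirrors how the abstract compares NT Evaluation to TREC judgments.
    nt_scores = [0.41, 0.55, 0.32, 0.61]
    trec_scores = [0.38, 0.58, 0.35, 0.60]
    print("Spearman correlation:", spearman(nt_scores, trec_scores))
```

In this sketch, each title retrieves its own abstract first, so the MRR is 1.0; with a real corpus and retrieval model, the score would separate methods, and the Spearman coefficient over method rankings plays the role of the 0.79-0.92 correlations reported in the abstract.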

Conclusion

Our results suggest that the NT Evaluation protocols described here could be used to optimize some search engine parameters before human evaluation. Further research is needed to determine if NT Evaluation or variants of these protocols can fully substitute for human evaluations.

SUBMITTER: Campagne F 

PROVIDER: S-EPMC2292696 | biostudies-literature | 2008 Feb

REPOSITORIES: biostudies-literature


Publications

Objective and automated protocols for the evaluation of biomedical search engines using No Title Evaluation protocols.

Campagne, Fabien F.

BMC Bioinformatics, 2008-02-29



Similar Datasets

| S-EPMC3374816 | biostudies-other
| S-EPMC4184451 | biostudies-literature
| S-EPMC5918684 | biostudies-literature
2014-09-01 | PXD001118 | Pride
| S-EPMC5540613 | biostudies-literature
| S-EPMC6238235 | biostudies-literature
| S-EPMC3769318 | biostudies-literature
| S-EPMC10282050 | biostudies-literature
| S-EPMC6819997 | biostudies-literature
| S-EPMC7727334 | biostudies-literature