Source Information
Abstract
English
The performance of a novel medical question-answering engine called CliniCluster and existing search engines, such as CQA-1.0, Google, and Google Scholar, was evaluated using known-item searching. In known-item searching, the known item is a document that has been critically appraised as highly relevant to a therapy question. Results show that, using CliniCluster, known items were retrieved on average at rank 2 (MRR@10 ≈ 0.50), and most of the known items could be identified from the top-10 document lists. In response to ill-defined questions, the known items were ranked lower by CliniCluster and CQA-1.0, whereas for Google and Google Scholar, no significant difference in ranking was found between well- and ill-defined questions. Less than 40% of the known items could be identified from the top-10 documents retrieved by CQA-1.0, Google, and Google Scholar. An analysis of the top-ranked documents by strength of evidence revealed that CliniCluster outperformed the other search engines by returning a higher number of recent publications with the strongest study designs. In conclusion, the overall results support the use of CliniCluster for answering therapy questions by ranking highly relevant documents in the top positions of the search results.
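The MRR@10 figure cited above can be illustrated with a minimal sketch. The function and the per-question ranks below are hypothetical, not taken from the study: a known item at rank 2 contributes a reciprocal rank of 1/2, and an item absent from the top 10 contributes 0.

```python
def mrr_at_10(ranks):
    """Mean reciprocal rank over a list of per-question ranks.

    Each entry is the rank (1-based) at which the known item was
    retrieved, or None if it did not appear in the top-10 results.
    """
    return sum(1.0 / r if r is not None and r <= 10 else 0.0
               for r in ranks) / len(ranks)

# Hypothetical ranks for five therapy questions, for illustration only.
example_ranks = [2, 2, 1, None, 5]
print(round(mrr_at_10(example_ranks), 3))  # prints 0.44
```

Under this measure, retrieving every known item at rank 2 would yield exactly 0.50, matching the average rank reported for CliniCluster.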
Table of Contents
Ⅰ. Introduction
Ⅱ. Background
2.1. Question Formulation
2.2. Document Appraisal
Ⅲ. MedQA Systems
3.1. Question Processing
3.2. Document Processing
3.3. Answer Processing
Ⅳ. Known-Item Search
4.1. Search Engines
4.2. Known-Item Search
Ⅴ. Performance Measures
5.1. Mean Reciprocal Rank
5.2. Percentage Gain
5.3. Strength of Evidence
Ⅵ. Results and Discussion
6.1. Mean Reciprocal Rank
6.2. Percentage Gain
6.3. Strength of Evidence
Ⅶ. Conclusion