Concept recognition, information retrieval, and machine learning in genomics question-answering

James G Caporaso, William A. Baumgartner, Hyunmin Kim, Zhiyong Lu, Helen L. Johnson, Olga Medvedeva, Anna Lindemann, Lynne M. Fox, Elizabeth K. White, K. Bretonnel Cohen, Lawrence Hunter

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

TREC Genomics 2006 presented a genomics question-answering challenge with questions on twenty-seven topics, and a corpus of 162,259 full-text biomedical journal articles from which to derive answers. Questions were formulated from actual information needs of biomedical researchers, and performance was based on human evaluation of the answers. The University of Colorado approach to this task involved three key components: semantic analysis, document zoning, and a promiscuous retrieval approach followed by pruning by classifiers trained to identify near-misses. We began by parsing the document HTML, splitting it into paragraph-length passages and classifying each passage with respect to a model of the sections (zones) of scientific publications. We filtered out certain sections, and built a search index for these passages using the Lemur system. Next, for each query, we semi-automatically created a set of expansions using ontological resources, including MeSH and the Gene Ontology. This expansion included not only synonyms, but terms related to concepts that were both more specific and (in some cases) more general than the query. We searched the passage collection for these expanded queries using the Indri search engine from the Lemur package, with pseudo-relevance feedback. We also tried expanding the retrieved passages by adding passages that had a small cosine distance to the initial retrievals in an LSA-defined vector space. Our final step was to filter this expanded retrieval set with document classifiers whose input features included word stems and recognized concepts. Three separate runs were constructed using varying components of the above set, allowing us to explore the utility of each. The system produced the best result for at least one query in each of the three evaluations (document, passage and aspect diversity).
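The LSA-based passage expansion described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the toy vocabulary, term counts, reduced rank, and distance threshold are all assumptions made for the example.

```python
import numpy as np

# Sketch of LSA-based passage expansion: project a term-passage matrix into
# a low-rank "latent semantic" space via truncated SVD, then add to the
# retrieval set any passage whose cosine distance to an initially retrieved
# passage falls below a threshold. Vocabulary, counts, rank, and threshold
# here are illustrative assumptions only.

# Rows = passages, columns = term counts over a tiny vocabulary:
# [brca1, mutation, breast, cancer, mitochondrion, chaperone]
term_counts = np.array([
    [2.0, 1.0, 1.0, 1.0, 0.0, 0.0],  # passage 0: BRCA1 mutations ...
    [1.0, 1.0, 1.0, 2.0, 0.0, 0.0],  # passage 1: same topic, different wording
    [0.0, 0.0, 0.0, 0.0, 2.0, 0.0],  # passage 2: unrelated
    [0.0, 0.0, 0.0, 0.0, 0.0, 3.0],  # passage 3: unrelated
])

# Rank-k truncated SVD; passage vectors in the reduced space are U_k * S_k.
u, s, _ = np.linalg.svd(term_counts, full_matrices=False)
k = 3
passages_lsa = u[:, :k] * s[:k]

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def expand_retrieval(retrieved_idx, threshold=0.3):
    """Indices of all passages within `threshold` cosine distance of the
    retrieved passage in the LSA space (including the passage itself)."""
    return [j for j in range(len(passages_lsa))
            if cosine_distance(passages_lsa[retrieved_idx],
                               passages_lsa[j]) <= threshold]

# Expanding the retrieval for passage 0 pulls in passage 1, which shares
# its concepts; the unrelated passages stay out.
print(expand_retrieval(0))  # → [0, 1]
```

In the actual system this expansion ran over the Indri retrievals against the full passage index, with the expanded set subsequently pruned by the trained classifiers.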

Original language: English (US)
Title of host publication: NIST Special Publication
State: Published - 2006
Externally published: Yes
Event: 15th Text REtrieval Conference, TREC 2006 - Gaithersburg, MD, United States
Duration: Nov 14, 2006 - Nov 17, 2006



ASJC Scopus subject areas

  • Engineering (all)

Cite this

Caporaso, J. G., Baumgartner, W. A., Kim, H., Lu, Z., Johnson, H. L., Medvedeva, O., ... Hunter, L. (2006). Concept recognition, information retrieval, and machine learning in genomics question-answering. In NIST Special Publication.

