Universal IR Evaluation
In computer science, Universal IR Evaluation (information retrieval evaluation) aims to develop measures of database retrieval performance that are comparable across all information retrieval tasks.
Measures of "relevance"
IR (information retrieval) evaluation begins whenever a user submits a query (search term) to a database. If the user is able to determine the relevance of each document in the database (relevant or not relevant), then for each query, the complete set of documents is naturally divided into four distinct (mutually exclusive) subsets: relevant documents that are retrieved, not relevant documents that are retrieved, relevant documents that are not retrieved, and not relevant documents that are not retrieved. These four subsets (of documents) are denoted by the letters a, b, c, d respectively and are called Swets variables, named after their inventor.[1]
In addition to the Swets definitions, four relevance metrics have also been defined: Recall refers to the fraction of relevant documents that are retrieved (a/(a+c)), and Precision refers to the fraction of retrieved documents that are relevant (a/(a+b)). These are the most commonly used and well-known relevance metrics in the IR evaluation literature. Two less commonly used metrics are the Fallout, i.e., the fraction of not relevant documents that are retrieved (b/(b+d)), and the Miss, i.e., the fraction of documents not retrieved during a given search that are relevant (c/(c+d)).
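The four ratios above can be sketched directly from the Swets counts. The function name and the example counts below are illustrative, not from the source:

```python
from fractions import Fraction

def ir_metrics(a, b, c, d):
    """Compute (Precision, Recall, Fallout, Miss) from the Swets counts:
    a = relevant retrieved, b = not relevant retrieved,
    c = relevant not retrieved, d = not relevant not retrieved.
    Returns None where a ratio takes the undetermined form 0/0."""
    ratio = lambda num, den: None if den == 0 else Fraction(num, den)
    precision = ratio(a, a + b)  # fraction of retrieved documents that are relevant
    recall    = ratio(a, a + c)  # fraction of relevant documents that are retrieved
    fallout   = ratio(b, b + d)  # fraction of not relevant documents that are retrieved
    miss      = ratio(c, c + d)  # fraction of non-retrieved documents that are relevant
    return precision, recall, fallout, miss

# Hypothetical example: 10 relevant and 90 not relevant documents;
# a search retrieves 8 of the relevant ones and 12 of the not relevant ones.
p, r, f, m = ir_metrics(a=8, b=12, c=2, d=78)
# p = 8/20, r = 8/10, f = 12/90, m = 2/80
```

Using exact fractions rather than floats keeps the undetermined 0/0 case distinguishable from a genuine zero.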
Universal IR evaluation techniques
Universal IR evaluation addresses the mathematical possibilities and relationships among the four relevance metrics Precision, Recall, Fallout and Miss, denoted by P, R, F and M, respectively. One aspect of the problem involves finding a mathematical derivation of a complete set of universal IR evaluation points.[2] The complete set of 16 points, each a quadruple of the form (P, R, F, M), describes all the possible universal IR outcomes. For example, a user may query a database and retrieve no documents at all. In this case, the Precision takes the undetermined form 0/0, the Recall and Fallout are both zero, and the Miss is some value greater than zero and less than one (assuming the database contains a mix of relevant and not relevant documents, none of which were retrieved). This universal IR evaluation point is thus denoted by (0/0, 0, 0, M), and it represents only one of the 16 possible universal IR outcomes.
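The empty-retrieval outcome described above can be checked numerically. This is a minimal sketch; the function and the document counts are hypothetical:

```python
from fractions import Fraction

def quadruple(a, b, c, d):
    """Return the universal IR evaluation point (P, R, F, M) for the
    Swets counts a, b, c, d; the string "0/0" marks an undetermined ratio."""
    ratio = lambda num, den: "0/0" if den == 0 else Fraction(num, den)
    return (ratio(a, a + b),   # Precision
            ratio(a, a + c),   # Recall
            ratio(b, b + d),   # Fallout
            ratio(c, c + d))   # Miss

# Nothing retrieved from a database holding 3 relevant and
# 7 not relevant documents: a = b = 0, c = 3, d = 7.
P, R, F, M = quadruple(0, 0, 3, 7)
# P is the undetermined form "0/0"; R = F = 0; and 0 < M = 3/10 < 1,
# matching the universal IR evaluation point (0/0, 0, 0, M).
```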
The mathematics of universal IR evaluation is a relatively new subject, since the relevance metrics P, R, F, M were not analyzed collectively until recently (within the past decade). Much of the theoretical groundwork has been formulated, but new insights in this area await discovery. For a detailed mathematical analysis, a query in the ScienceDirect database for "universal IR evaluation" retrieves several relevant peer-reviewed papers.
References
- ^ Swets, J. A. (1969). "Effectiveness of information retrieval methods". American Documentation, 20(1), 72–89.
- ^ Schatkun, M. (2010). "A second look at Egghe's universal IR surface and a simple derivation of a complete set of universal IR evaluation points". Information Processing & Management, 46(1), 110–114.