# Extended Boolean model

The Extended Boolean model was described in a 1983 Communications of the ACM article by Gerard Salton, Edward A. Fox, and Harry Wu. Its goal is to overcome the drawbacks of the Boolean model that has been used in information retrieval: the Boolean model does not consider term weights in queries, and the result set of a Boolean query is often either too small or too large. The idea of the extended model is to make use of partial matching and term weights, as in the vector space model. It combines the characteristics of the vector space model with the properties of Boolean algebra and ranks the similarity between queries and documents. In this way, a document that matches only some of the queried terms may still be considered somewhat relevant and returned as a result, whereas in the Standard Boolean model it would not be.[1]

Thus, the extended Boolean model can be considered a generalization of both the Boolean and vector space models; those two become special cases under suitable settings and definitions. Further, research has shown that its retrieval effectiveness improves on that of standard Boolean query processing. Other research has shown that relevance feedback and query expansion can be integrated with extended Boolean query processing.

## Definitions

In the Extended Boolean model, a document is represented as a vector (similarly to the vector space model). Each dimension i corresponds to a separate term associated with the document.

The weight of term Kx associated with document dj is measured by its normalized term frequency and can be defined as:

$w_{x,j}=f_{x,j}*\frac{Idf_{x}}{max_{i}Idf_{i}}$

where Idfx is inverse document frequency.

The weight vector associated with document dj can be represented as:

$\mathbf{v}_{d_j} = [w_{1,j}, w_{2,j}, \ldots, w_{t,j}]$
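As a minimal sketch, the weighting above can be computed directly. The helper below is hypothetical (not from the original article) and assumes term frequencies and inverse document frequencies are already available as dictionaries:

```python
def term_weights(doc_tf, idf):
    """Compute w_{x,j} = f_{x,j} * Idf_x / max_i Idf_i for one document.

    doc_tf: dict mapping term -> normalized term frequency f_{x,j} in d_j
    idf:    dict mapping term -> inverse document frequency Idf_x
    (Both inputs are assumed precomputed; names are illustrative.)
    """
    max_idf = max(idf.values())  # normalization by the largest Idf in the collection
    return {t: doc_tf[t] * idf[t] / max_idf for t in doc_tf}
```

With this normalization every weight lies in [0, 1], which the similarity formulas below rely on.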

## The Two-Dimensional Example

Figure 1: The similarities of q = (Kx ∨ Ky) with documents dj and dj+1.
Figure 2: The similarities of q = (Kx ∧ Ky) with documents dj and dj+1.

Considering the space composed of only two terms Kx and Ky, with corresponding term weights w1 and w2,[2] for the query qor = (Kx ∨ Ky) we can calculate the similarity with the following formula:

$sim(q_{or},d)=\sqrt{\frac{w_1^2+w_2^2}{2}}$

For the query qand = (Kx ∧ Ky), we can use:

$sim(q_{and},d)=1-\sqrt{\frac{(1-w_1)^2+(1-w_2)^2}{2}}$
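The two formulas have a geometric reading: the disjunctive similarity measures the (normalized) distance from the point (0, 0), where neither term appears, and the conjunctive similarity measures the complement of the distance from the ideal point (1, 1). A small sketch, assuming weights in [0, 1]:

```python
import math

def sim_or(w1, w2):
    # Disjunctive query (Kx OR Ky): normalized distance from the origin (0, 0)
    return math.sqrt((w1**2 + w2**2) / 2)

def sim_and(w1, w2):
    # Conjunctive query (Kx AND Ky): complement of the distance from (1, 1)
    return 1 - math.sqrt(((1 - w1)**2 + (1 - w2)**2) / 2)
```

A document containing both terms with full weight scores 1 under both queries, and one containing neither scores 0, matching Boolean intuition while allowing intermediate scores.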

## Generalizing the idea and P-norms

We can generalize the previous 2D extended Boolean model example to higher t-dimensional space using Euclidean distances.

This can be done using p-norms, which extend the notion of distance to p-distances, where 1 ≤ p ≤ ∞ is a new parameter.[3]

• A generalized disjunctive query is given by:
$q_{or}=k_1 \lor^p k_2 \lor^p \cdots \lor^p k_t$
• The similarity of $q_{or}$ and $d_j$ can be defined as:

$sim(q_{or},d_j)=\sqrt[p]{\frac{w_1^p+w_2^p+\cdots+w_t^p}{t}}$

• A generalized conjunctive query is given by:
$q_{and}=k_1 \land^p k_2 \land^p \cdots \land^p k_t$
• The similarity of $q_{and}$ and $d_j$ can be defined as:
$sim(q_{and},d_j)=1-\sqrt[p]{\frac{(1-w_1)^p+(1-w_2)^p+\cdots+(1-w_t)^p}{t}}$
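The generalized formulas can be sketched as two small functions. This is an illustrative implementation, not code from the original article; it assumes a list of term weights in [0, 1] and a finite p ≥ 1:

```python
def sim_or_p(weights, p):
    # Generalized disjunction: p-norm distance from the all-zeros point
    t = len(weights)
    return (sum(w**p for w in weights) / t) ** (1 / p)

def sim_and_p(weights, p):
    # Generalized conjunction: complement of the p-norm distance from the all-ones point
    t = len(weights)
    return 1 - (sum((1 - w)**p for w in weights) / t) ** (1 / p)
```

With p = 1 both formulas reduce to a simple average of the weights, recovering vector-space-like behavior; as p grows large they approach the max and min of the weights, recovering standard Boolean behavior.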

## Examples

Consider the query q = (K1 ∧ K2) ∨ K3. The similarity between query q and document d can be computed using the formula:

$sim(q,d)=\sqrt[p]{\frac{(1-\sqrt[p]{(\frac{(1-w_1)^p+(1-w_2)^p}{2}}))^p+w_3^p}{2}}$
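The nesting mirrors the query structure: the inner conjunction (K1 ∧ K2) is evaluated first, and its score is then combined disjunctively with w3. A self-contained sketch of this worked example (illustrative code, not from the original article):

```python
def sim_query(w1, w2, w3, p):
    # Inner conjunction (K1 AND K2): complement of p-distance from (1, 1)
    inner = 1 - (((1 - w1)**p + (1 - w2)**p) / 2) ** (1 / p)
    # Outer disjunction with K3: p-norm combination of the two subscores
    return ((inner**p + w3**p) / 2) ** (1 / p)
```

For instance, with p = 2, a document matching only K3 (w1 = w2 = 0, w3 = 1) gets the score sqrt((0 + 1)/2) ≈ 0.707, i.e. partially relevant rather than excluded.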

## Improvements over the Standard Boolean Model

Lee and Fox[4] compared the Standard and Extended Boolean models with three test collections, CISI, CACM and INSPEC. Using P-norms they obtained an average precision improvement of 79%, 106% and 210% over the Standard model, for the CISI, CACM and INSPEC collections, respectively.
The p-norm model is computationally expensive because of the number of exponentiation operations it requires, but it achieves much better retrieval results than the Standard model and even fuzzy retrieval techniques. The Standard Boolean model nevertheless remains the most efficient.