package org.apache.lucene.search;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.lucene.index.FieldInvertState;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Explanation.IDFExplanation;
import org.apache.lucene.util.SmallFloat;
import org.apache.lucene.util.VirtualMethod;

import java.io.IOException;
import java.io.Serializable;
import java.util.Collection;
/**
 * Expert: Scoring API.
 *
 * <p>Similarity defines the components of Lucene scoring.
 * Overriding the computation of these components is a convenient
 * way to alter Lucene scoring.
 *
 * <p>Suggested reading:
 * <a href="http://nlp.stanford.edu/IR-book/html/htmledition/queries-as-vectors-1.html">
 * Introduction To Information Retrieval, Chapter 6</a>.
 *
 * <p>The following describes how Lucene scoring evolves from
 * underlying information retrieval models to an (efficient) implementation.
 * We first give a brief overview of the <i>VSM Score</i>,
 * then derive from it <i>Lucene's Conceptual Scoring Formula</i>,
 * from which, finally, evolves <i>Lucene's Practical Scoring Function</i>
 * (the latter is connected directly with Lucene classes and methods).
 *
 * <p>Lucene combines the
 * <a href="http://en.wikipedia.org/wiki/Standard_Boolean_model">
 * Boolean model (BM) of Information Retrieval</a>
 * with the
 * <a href="http://en.wikipedia.org/wiki/Vector_Space_Model">
 * Vector Space Model (VSM) of Information Retrieval</a> -
 * documents "approved" by BM are scored by VSM.
 *
 * <p>In VSM, documents and queries are represented as
 * weighted vectors in a multi-dimensional space,
 * where each distinct index term is a dimension,
 * and weights are
 * <a href="http://en.wikipedia.org/wiki/Tfidf">Tf-idf</a> values.
 *
 * <p>VSM does not require weights to be <i>Tf-idf</i> values,
 * but <i>Tf-idf</i> values are believed to produce search results of high quality,
 * and so Lucene uses <i>Tf-idf</i>.
 * <i>Tf</i> and <i>Idf</i> are described in more detail below,
 * but for now, for completeness, let's just say that
 * for a given term <i>t</i> and document (or query) <i>x</i>,
 * <i>Tf(t,x)</i> varies with the number of occurrences of term <i>t</i> in <i>x</i>
 * (when one increases so does the other) and
 * <i>idf(t)</i> similarly varies with the inverse of the
 * number of index documents containing term <i>t</i>.
 *
 * <p>The <i>VSM score</i> of document <i>d</i> for query <i>q</i> is the
 * <a href="http://en.wikipedia.org/wiki/Cosine_similarity">
 * Cosine Similarity</a>
 * of the weighted query vectors <i>V(q)</i> and <i>V(d)</i>:
 *
 * <pre>
 *
 *                                V(q) · V(d)
 *   cosine-similarity(q,d)  =  –––––––––––––––
 *                              |V(q)| |V(d)|
 *
 * </pre>
 * <center><small><u>VSM Score</u></small></center>
 *
 * Where <i>V(q)</i> · <i>V(d)</i> is the
 * <a href="http://en.wikipedia.org/wiki/Dot_product">dot product</a>
 * of the weighted vectors,
 * and <i>|V(q)|</i> and <i>|V(d)|</i> are their
 * <a href="http://en.wikipedia.org/wiki/Euclidean_norm#Euclidean_norm">Euclidean norms</a>.
 *
 * <p>Note: the above equation can be viewed as the dot product of
 * the normalized weighted vectors, in the sense that dividing
 * <i>V(q)</i> by its Euclidean norm is normalizing it to a unit vector.
 *
 * <p>Lucene refines <i>VSM score</i> for both search quality and usability:
 * <ul>
 * <li>Normalizing <i>V(d)</i> to the unit vector is known to be problematic in that
 * it removes all document length information.
 * For some documents removing this info is probably ok,
 * e.g. a document made by duplicating a certain paragraph <i>10</i> times,
 * especially if that paragraph is made of distinct terms.
 * But for a document which contains no duplicated paragraphs,
 * this might be wrong.
 * To avoid this problem, a different document length normalization
 * factor is used, which normalizes to a vector equal to or larger
 * than the unit vector: <i>doc-len-norm(d)</i>.
 * </li>
 *
 * <li>At indexing time, users can specify that certain documents are more
 * important than others, by assigning a document boost.
 * For this, the score of each document is also multiplied by its boost value
 * <i>doc-boost(d)</i>.
 * </li>
 *
 * <li>Lucene is field based, hence each query term applies to a single
 * field, document length normalization is by the length of that field,
 * and in addition to the document boost there are also document field boosts.
 * </li>
 *
 * <li>The same field can be added to a document during indexing several times,
 * and so the boost of that field is the multiplication of the boosts of
 * the separate additions (or parts) of that field within the document.
 * </li>
 *
 * <li>At search time users can specify boosts to each query, sub-query, and
 * each query term, hence the contribution of a query term to the score of
 * a document is multiplied by the boost of that query term <i>query-boost(q)</i>.
 * </li>
 *
 * <li>A document may match a multi-term query without containing all
 * the terms of that query (this is correct for some of the queries),
 * and users can further reward documents matching more query terms
 * through a coordination factor, which is usually larger when
 * more terms are matched: <i>coord-factor(q,d)</i>.
 * </li>
 * </ul>
 *
 * <p>Under the simplifying assumption of a single field in the index,
 * we get <i>Lucene's Conceptual scoring formula</i>:
 *
 * <pre>
 *
 *                                                        V(q) · V(d)
 *   score(q,d)  =  coord-factor(q,d) · query-boost(q) · ––––––––––––– · doc-len-norm(d) · doc-boost(d)
 *                                                          |V(q)|
 *
 * </pre>
 * <center><small><u>Lucene Conceptual Scoring Formula</u></small></center>
 *
 * <p>The conceptual formula is a simplification in the sense that (1) terms and documents
 * are fielded and (2) boosts are usually per query term rather than per query.
 *
 * <p>We now describe how Lucene implements this conceptual scoring formula, and
 * derive from it <i>Lucene's Practical Scoring Function</i>.
 *
 * <p>For efficient score computation some scoring components
 * are computed and aggregated in advance:
 * <ul>
 * <li><i>Query-boost</i> for the query (actually for each query term)
 * is known when search starts.
 * </li>
 *
 * <li>The query Euclidean norm <i>|V(q)|</i> can be computed when search starts,
 * as it is independent of the document being scored.
 * From a search optimization perspective, it is a valid question
 * why bother to normalize the query at all, because all
 * scored documents will be multiplied by the same <i>|V(q)|</i>,
 * and hence document ranks (their order by score) will not
 * be affected by this normalization.
 * There are two good reasons to keep this normalization:
 * <ul>
 * <li><a href="http://en.wikipedia.org/wiki/Cosine_similarity">
 * Cosine Similarity</a> can be used to find how similar
 * two documents are. One can use Lucene for e.g.
 * clustering, and use a document as a query to compute
 * its similarity to other documents.
 * In this use case it is important that the score of document <i>d3</i>
 * for query <i>d1</i> is comparable to the score of document <i>d3</i>
 * for query <i>d2</i>. In other words, scores of a document for two
 * distinct queries should be comparable.
 * There are other applications that may require this.
 * And this is exactly what normalizing the query vector <i>V(q)</i>
 * provides: comparability (to a certain extent) of two or more queries.
 * </li>
 *
 * <li>Applying query normalization to the scores helps to keep the
 * scores around the unit vector, hence preventing loss of score data
 * because of floating point precision limitations.
 * </li>
 * </ul>
 * </li>
 *
 * <li>Document length norm <i>doc-len-norm(d)</i> and document
 * boost <i>doc-boost(d)</i> are known at indexing time.
 * They are computed in advance and their multiplication
 * is saved as a single value in the index: <i>norm(d)</i>.
 * (In the equations below, <i>norm(t in d)</i> means <i>norm(field(t) in doc d)</i>
 * where <i>field(t)</i> is the field associated with term <i>t</i>.)
 * </li>
 * </ul>
 *
 * <p><i>Lucene's Practical Scoring Function</i> is derived from the above.
 * Each of its factors corresponds to a component
 * of the <i>conceptual</i> formula:
 *
 * <pre>
 *
 *   score(q,d)  =  <A HREF="#formula_coord">coord(q,d)</A> · <A HREF="#formula_queryNorm">queryNorm(q)</A> · ∑ ( <A HREF="#formula_tf">tf(t in d)</A> · <A HREF="#formula_idf">idf(t)</A><sup>2</sup> · <A HREF="#formula_termBoost">t.getBoost()</A> · <A HREF="#formula_norm">norm(t,d)</A> )
 *                                              t in q
 *
 * </pre>
 * <center><small><u>Lucene Practical Scoring Function</u></small></center>
 *
 * <p> where
 * <ol>
 * <li>
 * <A NAME="formula_tf"></A>
 * <b><i>tf(t in d)</i></b>
 * correlates to the term's <i>frequency</i>,
 * defined as the number of times term <i>t</i> appears in the currently scored document <i>d</i>.
 * Documents that have more occurrences of a given term receive a higher score.
 * Note that <i>tf(t in q)</i> is assumed to be <i>1</i> and therefore it does not appear in this equation.
 * However, if a query contains the same term twice, there will be
 * two term-queries with that same term, and hence the computation would still be correct (although
 * not very efficient).
 * The default computation for <i>tf(t in d)</i> in
 * {@link org.apache.lucene.search.DefaultSimilarity#tf(float) DefaultSimilarity} is:
 *
 * <pre>
 *   {@link org.apache.lucene.search.DefaultSimilarity#tf(float) tf(t in d)}  =  frequency<sup>½</sup>
 * </pre>
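 *
 * <p>For example (a worked sketch using this default formula, not a normative value):
 * a term appearing 9 times in a document contributes
 * <i>tf = 9<sup>½</sup> = 3</i>, while a single occurrence contributes <i>1</i>,
 * so nine repetitions only triple the term's contribution rather than multiplying it ninefold.
 *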
 * </li>
 *
 * <li>
 * <A NAME="formula_idf"></A>
 * <b><i>idf(t)</i></b> stands for Inverse Document Frequency. This value
 * correlates to the inverse of <i>docFreq</i>
 * (the number of documents in which the term <i>t</i> appears).
 * This means rarer terms give a higher contribution to the total score.
 * <i>idf(t)</i> appears for <i>t</i> in both the query and the document,
 * hence it is squared in the equation.
 * The default computation for <i>idf(t)</i> in
 * {@link org.apache.lucene.search.DefaultSimilarity#idf(int, int) DefaultSimilarity} is:
 *
 * <pre>
 *                                                                   numDocs
 *   {@link org.apache.lucene.search.DefaultSimilarity#idf(int, int) idf(t)}  =  1 + log ( ––––––––– )
 *                                                                  docFreq+1
 * </pre>
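 *
 * <p>As a worked example (an illustration, assuming the natural logarithm used by
 * {@link org.apache.lucene.search.DefaultSimilarity}): with <i>numDocs = 1000</i>
 * and <i>docFreq = 9</i>, <i>idf(t) = 1 + ln(1000/10) ≈ 5.6</i>;
 * since idf is squared in the score, this rare term contributes a factor of roughly <i>31</i>.
 *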
 * </li>
 *
 * <li>
 * <A NAME="formula_coord"></A>
 * <b><i>coord(q,d)</i></b>
 * is a score factor based on how many of the query terms are found in the specified document.
 * Typically, a document that contains more of the query's terms will receive a higher score
 * than another document with fewer query terms.
 * This is a search time factor computed in
 * {@link #coord(int, int) coord(q,d)}
 * by the Similarity in effect at search time.
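 *
 * <p>For instance, {@link org.apache.lucene.search.DefaultSimilarity} computes this as
 * <code>overlap / (float)maxOverlap</code>, so a document matching 2 of 3 query terms
 * has its score multiplied by about <i>0.67</i>.
 *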
 * </li>
 *
 * <li>
 * <A NAME="formula_queryNorm"></A>
 * <b><i>queryNorm(q)</i></b>
 * is a normalizing factor used to make scores between queries comparable.
 * This factor does not affect document ranking (since all ranked documents are multiplied by the same factor),
 * but rather just attempts to make scores from different queries (or even different indexes) comparable.
 * This is a search time factor computed by the Similarity in effect at search time.
 *
 * The default computation in
 * {@link org.apache.lucene.search.DefaultSimilarity#queryNorm(float) DefaultSimilarity}
 * produces a <a href="http://en.wikipedia.org/wiki/Euclidean_norm#Euclidean_norm">Euclidean norm</a>:
 *
 * <pre>
 *   queryNorm(q)  =  {@link org.apache.lucene.search.DefaultSimilarity#queryNorm(float) queryNorm(sumOfSquaredWeights)}
 *                 =  1 / sumOfSquaredWeights<sup>½</sup>
 * </pre>
 *
 * The sum of squared weights (of the query terms) is
 * computed by the query {@link org.apache.lucene.search.Weight} object.
 * For example, a {@link org.apache.lucene.search.BooleanQuery}
 * computes this value as:
 *
 * <pre>
 *   {@link org.apache.lucene.search.Weight#sumOfSquaredWeights() sumOfSquaredWeights}  =  {@link org.apache.lucene.search.Query#getBoost() q.getBoost()}<sup>2</sup> · ∑ ( <A HREF="#formula_idf">idf(t)</A> · <A HREF="#formula_termBoost">t.getBoost()</A> )<sup>2</sup>
 *                                                   t in q
 * </pre>
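 *
 * <p>Putting the two together (a worked sketch with made-up numbers): for a two-term
 * query with <i>q.getBoost() = 1</i>, term boosts of <i>1</i>, and idf values of
 * <i>2</i> and <i>3</i>, <i>sumOfSquaredWeights = 2<sup>2</sup> + 3<sup>2</sup> = 13</i>,
 * and so <i>queryNorm(q) = 1/13<sup>½</sup> ≈ 0.277</i>.
 *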
 * </li>
 *
 * <li>
 * <A NAME="formula_termBoost"></A>
 * <b><i>t.getBoost()</i></b>
 * is a search time boost of term <i>t</i> in the query <i>q</i> as
 * specified in the query text
 * (see <A HREF="../../../../../../queryparsersyntax.html#Boosting a Term">query syntax</A>),
 * or as set by application calls to
 * {@link org.apache.lucene.search.Query#setBoost(float) setBoost()}.
 * Notice that there is really no direct API for accessing the boost of one term in a multi-term query;
 * rather, multiple terms are represented in a query as multiple
 * {@link org.apache.lucene.search.TermQuery TermQuery} objects,
 * and so the boost of a term in the query is accessible by calling
 * {@link org.apache.lucene.search.Query#getBoost() getBoost()} on that sub-query.
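 *
 * <p>For illustration, a minimal sketch of setting a per-term boost programmatically
 * (the field and term names here are made up):
 * <pre>
 *   TermQuery tq = new TermQuery(new Term("title", "lucene"));
 *   tq.setBoost(2.0f); // doubles this term's contribution to the score
 * </pre>
 *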
 * </li>
 *
 * <li>
 * <A NAME="formula_norm"></A>
 * <b><i>norm(t,d)</i></b> encapsulates a few (indexing time) boost and length factors:
 * <ul>
 * <li><b>Document boost</b> - set by calling
 * {@link org.apache.lucene.document.Document#setBoost(float) doc.setBoost()}
 * before adding the document to the index.
 * </li>
 * <li><b>Field boost</b> - set by calling
 * {@link org.apache.lucene.document.Fieldable#setBoost(float) field.setBoost()}
 * before adding the field to a document.
 * </li>
 * <li><b>lengthNorm</b> - computed
 * when the document is added to the index in accordance with the number of tokens
 * of this field in the document, so that shorter fields contribute more to the score.
 * LengthNorm is computed by the Similarity class in effect at indexing time.
 * </li>
 * </ul>
 * The {@link #computeNorm} method is responsible for
 * combining all of these factors into a single float.
 *
 * <p>When a document is added to the index, all the above factors are multiplied.
 * If the document has multiple fields with the same name, all their boosts are multiplied together:
 *
 * <pre>
 *   norm(t,d)  =  {@link org.apache.lucene.document.Document#getBoost() doc.getBoost()} · lengthNorm · ∏ {@link org.apache.lucene.document.Fieldable#getBoost() f.getBoost}()
 *
 *                 (product over each field <i>f</i> in <i>d</i> named as <i>t</i>)
 * </pre>
 *
 * <p>However, the resulting <i>norm</i> value is {@link #encodeNormValue(float) encoded} as a single byte
 * before being stored.
 * At search time, the norm byte value is read from the index
 * {@link org.apache.lucene.store.Directory directory} and
 * {@link #decodeNormValue(byte) decoded} back to a float <i>norm</i> value.
 * This encoding/decoding, while reducing index size, comes with the price of
 * precision loss - it is not guaranteed that <i>decode(encode(x)) = x</i>.
 * For instance, <i>decode(encode(0.89)) = 0.75</i>.
 *
 * <p>Compression of norm values to a single byte saves memory at search time,
 * because once a field is referenced at search time, its norms - for
 * all documents - are maintained in memory.
 *
 * <p>The rationale supporting such lossy compression of norm values is that,
 * given the difficulty (and inaccuracy) users have in expressing their true information
 * need in a query, only big differences matter.
 *
 * <p>Last, note that search time is too late to modify this <i>norm</i> part of scoring, e.g. by
 * using a different {@link Similarity} for search.
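 *
 * <p>A small sketch of the lossy round trip (using this class's own encode/decode
 * methods; the 0.75 result mirrors the example above):
 * <pre>
 *   Similarity sim = Similarity.getDefault();
 *   byte b = sim.encodeNormValue(0.89f);   // lossy 1-byte encoding
 *   float norm = sim.decodeNormValue(b);   // 0.75f, not 0.89f
 * </pre>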
 * </li>
 * </ol>
 *
 * @see #setDefault(Similarity)
 * @see org.apache.lucene.index.IndexWriter#setSimilarity(Similarity)
 * @see Searcher#setSimilarity(Similarity)
 */
public abstract class Similarity implements Serializable {

  // NOTE: this static code must precede setting the static defaultImpl:
  private static final VirtualMethod<Similarity> withoutDocFreqMethod =
    new VirtualMethod<Similarity>(Similarity.class, "idfExplain", Term.class, Searcher.class);
  private static final VirtualMethod<Similarity> withDocFreqMethod =
    new VirtualMethod<Similarity>(Similarity.class, "idfExplain", Term.class, Searcher.class, int.class);

  private final boolean hasIDFExplainWithDocFreqAPI =
    VirtualMethod.compareImplementationDistance(getClass(),
      withDocFreqMethod, withoutDocFreqMethod) >= 0; // it's ok for both to be overridden

  /** The Similarity implementation used by default. */
  private static Similarity defaultImpl = new DefaultSimilarity();

  public static final int NO_DOC_ID_PROVIDED = -1;

  /** Set the default Similarity implementation used by indexing and search
   * code.
   *
   * @see Searcher#setSimilarity(Similarity)
   * @see org.apache.lucene.index.IndexWriter#setSimilarity(Similarity)
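   *
   * <p>For illustration, a sketch of installing a custom default (the binary-tf
   * override here is a made-up example, not a recommendation):
   * <pre>
   *   Similarity.setDefault(new DefaultSimilarity() {
   *     public float tf(float freq) {
   *       return freq &gt; 0 ? 1.0f : 0.0f; // ignore repetition within a document
   *     }
   *   });
   * </pre>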
   */
  public static void setDefault(Similarity similarity) {
    Similarity.defaultImpl = similarity;
  }

  /** Return the default Similarity implementation used by indexing and search
   * code.
   *
   * <p>This is initially an instance of {@link DefaultSimilarity}.
   *
   * @see Searcher#setSimilarity(Similarity)
   * @see org.apache.lucene.index.IndexWriter#setSimilarity(Similarity)
   */
  public static Similarity getDefault() {
    return Similarity.defaultImpl;
  }

  /** Cache of decoded bytes. */
  private static final float[] NORM_TABLE = new float[256];

  static {
    for (int i = 0; i < 256; i++)
      NORM_TABLE[i] = SmallFloat.byte315ToFloat((byte)i);
  }

  /**
   * Decodes a normalization factor stored in an index.
   * @see #decodeNormValue(byte)
   * @deprecated Use {@link #decodeNormValue} instead.
   */
  @Deprecated
  public static float decodeNorm(byte b) {
    return NORM_TABLE[b & 0xFF];  // & 0xFF maps negative bytes to positive above 127
  }

  /** Decodes a normalization factor stored in an index.
   *
   * <p><b>WARNING: If you override this method, you should change the default
   *    Similarity to your implementation with {@link Similarity#setDefault(Similarity)}.
   *    Otherwise, your method may not always be called, especially if you omit norms
   *    for some fields.</b>
   * @see #encodeNormValue(float)
   */
  public float decodeNormValue(byte b) {
    return NORM_TABLE[b & 0xFF];  // & 0xFF maps negative bytes to positive above 127
  }

  /** Returns a table for decoding normalization bytes.
   * @see #encodeNormValue(float)
   * @see #decodeNormValue(byte)
   *
   * @deprecated Use instance methods for encoding/decoding norm values to enable customization.
   */
  @Deprecated
  public static float[] getNormDecoder() {
    return NORM_TABLE;
  }

  /**
   * Computes the normalization value for a field, given the accumulated
   * state of term processing for this field (see {@link FieldInvertState}).
   *
   * <p>Implementations should calculate a float value based on the field
   * state and then return that value.
   *
   * <p>Matches in longer fields are less precise, so implementations of this
   * method usually return smaller values when <code>state.getLength()</code> is large,
   * and larger values when <code>state.getLength()</code> is small.
   *
   * <p>Note that the return values are computed under
   * {@link org.apache.lucene.index.IndexWriter#addDocument(org.apache.lucene.document.Document)}
   * and then stored using
   * {@link #encodeNormValue(float)}.
   * Thus they have limited precision, and documents
   * must be re-indexed if this method is altered.
   *
   * <p>For backward compatibility this method by default calls
   * {@link #lengthNorm(String, int)} passing
   * {@link FieldInvertState#getLength()} as the second argument, and
   * then multiplies this value by {@link FieldInvertState#getBoost()}.</p>
   *
   * @lucene.experimental
   *
   * @param field field name
   * @param state current processing state for this field
   * @return the calculated float norm
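   *
   * <p>For illustration, a sketch of a typical implementation (this mirrors the usual
   * <code>boost / sqrt(length)</code> shape of {@link DefaultSimilarity}; treat the
   * exact formula as an assumption, not a contract):
   * <pre>
   *   public float computeNorm(String field, FieldInvertState state) {
   *     // shorter fields get larger norms; the field boost scales the result
   *     return state.getBoost() * (float)(1.0 / Math.sqrt(state.getLength()));
   *   }
   * </pre>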
   */
  public abstract float computeNorm(String field, FieldInvertState state);

  /** Computes the normalization value for a field given the total number of
   * terms contained in a field.  These values, together with field boosts, are
   * stored in an index and multiplied into scores for hits on each field by the
   * search code.
   *
   * <p>Matches in longer fields are less precise, so implementations of this
   * method usually return smaller values when <code>numTokens</code> is large,
   * and larger values when <code>numTokens</code> is small.
   *
   * <p>Note that the return values are computed under
   * {@link org.apache.lucene.index.IndexWriter#addDocument(org.apache.lucene.document.Document)}
   * and then stored using
   * {@link #encodeNormValue(float)}.
   * Thus they have limited precision, and documents
   * must be re-indexed if this method is altered.
   *
   * @param fieldName the name of the field
   * @param numTokens the total number of tokens contained in fields named
   *        <i>fieldName</i> of <i>doc</i>.
   * @return a normalization factor for hits on this field of this document
   *
   * @see org.apache.lucene.document.Field#setBoost(float)
   *
   * @deprecated Please override computeNorm instead
   */
  @Deprecated
  public final float lengthNorm(String fieldName, int numTokens) {
    throw new UnsupportedOperationException("please use computeNorm instead");
  }

  /** Computes the normalization value for a query given the sum of the squared
   * weights of each of the query terms.  This value is multiplied into the
   * weight of each query term. While the classic query normalization factor is
   * computed as 1/sqrt(sumOfSquaredWeights), other implementations might
   * completely ignore sumOfSquaredWeights (i.e. return 1).
   *
   * <p>This does not affect ranking, but the default implementation does make scores
   * from different queries more comparable than they would otherwise be, by eliminating
   * the magnitude of the query vector as a factor in the score.
   *
   * @param sumOfSquaredWeights the sum of the squares of query term weights
   * @return a normalization factor for query weights
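   *
   * <p>A minimal sketch of the classic implementation (matching the
   * <code>1/sqrt(sumOfSquaredWeights)</code> form described above):
   * <pre>
   *   public float queryNorm(float sumOfSquaredWeights) {
   *     return (float)(1.0 / Math.sqrt(sumOfSquaredWeights));
   *   }
   * </pre>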
   */
  public abstract float queryNorm(float sumOfSquaredWeights);

  /** Encodes a normalization factor for storage in an index.
   *
   * <p>The encoding uses a three-bit mantissa, a five-bit exponent, and
   * the zero-exponent point at 15, thus
   * representing values from around 7x10^9 to 2x10^-9 with about one
   * significant decimal digit of accuracy.  Zero is also represented.
   * Negative numbers are rounded up to zero.  Values too large to represent
   * are rounded down to the largest representable value.  Positive values too
   * small to represent are rounded up to the smallest positive representable
   * value.
   *
   * <p><b>WARNING: If you override this method, you should change the default
   *    Similarity to your implementation with {@link Similarity#setDefault(Similarity)}.
   *    Otherwise, your method may not always be called, especially if you omit norms
   *    for some fields.</b>
   * @see org.apache.lucene.document.Field#setBoost(float)
   * @see org.apache.lucene.util.SmallFloat
   */
  public byte encodeNormValue(float f) {
    return SmallFloat.floatToByte315(f);
  }

  /**
   * Static accessor kept for backwards compatibility reasons; use encodeNormValue instead.
   * @param f norm-value to encode
   * @return byte representing the given float
   * @deprecated Use {@link #encodeNormValue} instead.
   *
   * @see #encodeNormValue(float)
   */
  @Deprecated
  public static byte encodeNorm(float f) {
    return SmallFloat.floatToByte315(f);
  }

  /** Computes a score factor based on a term or phrase's frequency in a
   * document.  This value is multiplied by the {@link #idf(int, int)}
   * factor for each term in the query and these products are then summed to
   * form the initial score for a document.
   *
   * <p>Terms and phrases repeated in a document indicate the topic of the
   * document, so implementations of this method usually return larger values
   * when <code>freq</code> is large, and smaller values when <code>freq</code>
   * is small.
   *
   * <p>The default implementation calls {@link #tf(float)}.
   *
   * @param freq the frequency of a term within a document
   * @return a score factor based on a term's within-document frequency
   */
  public float tf(int freq) {
    return tf((float)freq);
  }

  /** Computes the amount of a sloppy phrase match, based on an edit distance.
   * This value is summed for each sloppy phrase match in a document to form
   * the frequency that is passed to {@link #tf(float)}.
   *
   * <p>A phrase match with a small edit distance to a document passage more
   * closely matches the document, so implementations of this method usually
   * return larger values when the edit distance is small and smaller values
   * when it is large.
   *
   * @see PhraseQuery#setSlop(int)
   * @param distance the edit distance of this sloppy phrase match
   * @return the frequency increment for this match
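   *
   * <p>For example, {@link DefaultSimilarity} implements this as
   * <code>1.0f / (distance + 1)</code>: an exact match (distance 0) counts as a full
   * occurrence, while a match requiring one edit counts as half an occurrence.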
   */
  public abstract float sloppyFreq(int distance);

  /** Computes a score factor based on a term or phrase's frequency in a
   * document.  This value is multiplied by the {@link #idf(int, int)}
   * factor for each term in the query and these products are then summed to
   * form the initial score for a document.
   *
   * <p>Terms and phrases repeated in a document indicate the topic of the
   * document, so implementations of this method usually return larger values
   * when <code>freq</code> is large, and smaller values when <code>freq</code>
   * is small.
   *
   * @param freq the frequency of a term within a document
   * @return a score factor based on a term's within-document frequency
   */
  public abstract float tf(float freq);

  /**
   * Computes a score factor for a simple term and returns an explanation
   * for that score factor.
   *
   * <p>The default implementation uses:
   *
   * <pre>
   *   idf(searcher.docFreq(term), searcher.maxDoc());
   * </pre>
   *
   * Note that {@link Searcher#maxDoc()} is used instead of
   * {@link org.apache.lucene.index.IndexReader#numDocs() IndexReader#numDocs()} because
   * {@link Searcher#docFreq(Term)} is also used, and when the latter
   * is inaccurate, so is {@link Searcher#maxDoc()}, and in the same direction.
   * In addition, {@link Searcher#maxDoc()} is more efficient to compute.
   *
   * @param term the term in question
   * @param searcher the document collection being searched
   * @return an IDFExplanation object that includes both an idf score factor
   *         and an explanation for the term.
   * @throws IOException
   */
  public IDFExplanation idfExplain(final Term term, final Searcher searcher, int docFreq) throws IOException {

    if (!hasIDFExplainWithDocFreqAPI) {
      // Fallback to slow impl
      return idfExplain(term, searcher);
    }
    final int df = docFreq;
    final int max = searcher.maxDoc();
    final float idf = idf(df, max);
    return new IDFExplanation() {
      @Override
      public String explain() {
        return "idf(docFreq=" + df +
          ", maxDocs=" + max + ")";
      }
      @Override
      public float getIdf() {
        return idf;
      }
    };
  }

  /**
   * This method forwards to {@link
   * #idfExplain(Term,Searcher,int)} by passing
   * <code>searcher.docFreq(term)</code> as the docFreq.
   *
   * <p>WARNING: if you subclass Similarity and override this
   * method then you may incur a performance penalty for certain
   * queries.  It is better to override {@link
   * #idfExplain(Term,Searcher,int)} instead.
   */
  public IDFExplanation idfExplain(final Term term, final Searcher searcher) throws IOException {
    return idfExplain(term, searcher, searcher.docFreq(term));
  }

  /**
   * Computes a score factor for a phrase.
   *
   * <p>The default implementation sums the idf factor for
   * each term in the phrase.
   *
   * @param terms the terms in the phrase
   * @param searcher the document collection being searched
   * @return an IDFExplanation object that includes both an idf
   *         score factor for the phrase and an explanation
   *         for each term.
   * @throws IOException
   */
  public IDFExplanation idfExplain(Collection<Term> terms, Searcher searcher) throws IOException {
    final int max = searcher.maxDoc();
    float idf = 0.0f;
    final StringBuilder exp = new StringBuilder();
    for (final Term term : terms) {
      final int df = searcher.docFreq(term);
      idf += idf(df, max);
      exp.append(" ").append(term.text()).append("=").append(df);
    }
    final float fIdf = idf;
    return new IDFExplanation() {
      @Override
      public float getIdf() {
        return fIdf;
      }
      @Override
      public String explain() {
        return exp.toString();
      }
    };
  }

  /** Computes a score factor based on a term's document frequency (the number
   * of documents which contain the term).  This value is multiplied by the
   * {@link #tf(int)} factor for each term in the query and these products are
   * then summed to form the initial score for a document.
   *
   * <p>Terms that occur in fewer documents are better indicators of topic, so
   * implementations of this method usually return larger values for rare terms,
   * and smaller values for common terms.
   *
   * @param docFreq the number of documents which contain the term
   * @param numDocs the total number of documents in the collection
   * @return a score factor based on the term's document frequency
   */
  public abstract float idf(int docFreq, int numDocs);

  /** Computes a score factor based on the fraction of all query terms that a
   * document contains.  This value is multiplied into scores.
   *
   * <p>The presence of a large portion of the query terms indicates a better
   * match with the query, so implementations of this method usually return
   * larger values when the ratio between these parameters is large and smaller
   * values when the ratio between them is small.
   *
   * @param overlap the number of query terms matched in the document
   * @param maxOverlap the total number of terms in the query
   * @return a score factor based on term overlap with the query
   */
  public abstract float coord(int overlap, int maxOverlap);

  /**
   * Calculates a scoring factor based on the data in the payload.  Overriding implementations
   * are responsible for interpreting what is in the payload.  Lucene makes no assumptions about
   * what is in the byte array.
   *
   * <p>The default implementation returns 1.
   *
   * @param docId The docId currently being scored.  If this value is {@link #NO_DOC_ID_PROVIDED},
   *              then it should be assumed that the PayloadQuery implementation does not provide document information
   * @param fieldName The fieldName of the term this payload belongs to
   * @param start The start position of the payload
   * @param end The end position of the payload
   * @param payload The payload byte array to be scored
   * @param offset The offset into the payload array
   * @param length The length of the payload data in the array
   * @return An implementation-dependent float to be used as a scoring factor
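   *
   * <p>For illustration, a sketch of an override that reads a float weight stored in the
   * payload ({@link org.apache.lucene.analysis.payloads.PayloadHelper#decodeFloat} is an
   * existing Lucene helper; storing a float in the payload is this example's assumption):
   * <pre>
   *   public float scorePayload(int docId, String fieldName, int start, int end,
   *                             byte[] payload, int offset, int length) {
   *     if (payload == null || length == 0)
   *       return 1.0f; // fall back to the neutral default
   *     return PayloadHelper.decodeFloat(payload, offset);
   *   }
   * </pre>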
   */
  public float scorePayload(int docId, String fieldName, int start, int end, byte [] payload, int offset, int length)
  {
    return 1;
  }
}