Learning Lucene: Search


Lucene Queries
For the full list of query types, see the Lucene query API documentation; the test below walks through the most common ways to search:

public void testIndexSearcher() throws Exception {

  try (Directory directory = FSDirectory.open(new File(FILE_PATH));
      DirectoryReader indexReader = DirectoryReader.open(directory)) {

    IndexSearcher searcher = new IndexSearcher(indexReader);

    // there are different ways to search in Lucene
    // 1. Use QueryParser
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_4_9);
    QueryParser parser = new QueryParser(Version.LUCENE_4_9,
        "description", analyzer);
    Query query = parser.parse("description");

    TopDocs hits = searcher.search(query, 10);
    printSearchResult(searcher, hits);

    // 2. Use TermQuery
    query = new TermQuery(new Term("description", "description"));
    hits = searcher.search(query, 10);
    printSearchResult(searcher, hits);

    // 3. Search multiple fields using MultiFieldQueryParser; the default
    // operator is OR
    query = new MultiFieldQueryParser(Version.LUCENE_4_9, new String[] {
        "title", "description" }, new StandardAnalyzer(
        Version.LUCENE_4_9)).parse("title");
    hits = searcher.search(query, 10);
    printSearchResult(searcher, hits);

    // 4. Use MultiFieldQueryParser with the operator changed to AND; returns
    // 0 hits
    query = MultiFieldQueryParser.parse(Version.LUCENE_4_9,
        "description", new String[] { "title", "description" },
        new BooleanClause.Occur[] { BooleanClause.Occur.MUST,
            BooleanClause.Occur.MUST }, new StandardAnalyzer(
            Version.LUCENE_4_9));
    hits = searcher.search(query, 10);
    printSearchResult(searcher, hits);

    // 5. Use BooleanQuery to combine queries: title term AND price range
    BooleanQuery booleanQuery = new BooleanQuery();
    booleanQuery.add(new TermQuery(new Term("title", "title")),
        BooleanClause.Occur.MUST);
    Query priceQuery = NumericRangeQuery.newIntRange("price", 20, 80,
        true, true);
    booleanQuery.add(priceQuery, BooleanClause.Occur.MUST);
    hits = searcher.search(booleanQuery, 10);
    printSearchResult(searcher, hits);
  }
}
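
The printSearchResult helper is not shown here; a minimal version (assuming the indexed documents store a "title" field) could look something like this:

private void printSearchResult(IndexSearcher searcher, TopDocs hits)
    throws IOException {
  System.out.println("Total hits: " + hits.totalHits);
  for (ScoreDoc scoreDoc : hits.scoreDocs) {
    // Load the stored fields of this matching document
    Document doc = searcher.doc(scoreDoc.doc);
    System.out.println(scoreDoc.score + " : " + doc.get("title"));
  }
}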
How Search Works
score(q,d) = queryNorm(q) · coord(q,d) · ∑ (t in q) [ tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) ]
This is the practical scoring function defined by the abstract class TFIDFSimilarity; each factor is described below.
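
Each factor can be inspected for a concrete hit with IndexSearcher.explain(), which prints the full score breakdown (tf, idf, fieldNorm, queryNorm, coord). A small sketch, reusing the searcher and query from the test above:

// Ask Lucene to explain how the score of the first hit was computed
TopDocs topDocs = searcher.search(query, 10);
if (topDocs.scoreDocs.length > 0) {
  Explanation explanation = searcher.explain(query, topDocs.scoreDocs[0].doc);
  System.out.println(explanation.toString());
}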
tf(t in d) correlates to the term's frequency, defined as the number of times term t appears in the currently scored document d. Documents that have more occurrences of a given term receive a higher score.
Default implementation in DefaultSimilarity: Math.sqrt(freq)

idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms contribute more to the total score.
Default implementation in DefaultSimilarity: Math.log(numDocs/(double)(docFreq+1)) + 1.0
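
To get a feel for these two formulas, here is a quick worked example with assumed numbers (an index of 1,000 documents, a term that appears in 9 of them and 4 times in the current document):

int numDocs = 1000; // assumed index size
int docFreq = 9;    // assumed: the term appears in 9 documents
int freq = 4;       // assumed: the term appears 4 times in this document
double tf = Math.sqrt(freq);                                   // = 2.0
double idf = Math.log(numDocs / (double) (docFreq + 1)) + 1.0; // = ln(100) + 1 ≈ 5.61
System.out.println("tf=" + tf + ", idf=" + idf);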

Query Coordination
coord(q,d) is a score factor based on how many of the query terms are found in the specified document. Typically, a document that contains more of the query's terms will receive a higher score than another document with fewer query terms. This factor is computed at search time. For example, for a three-term query, a document that matches two of the terms gets coord = 2/3.
Default implementation in DefaultSimilarity: overlap / (float) maxOverlap

The coordination factor (coord) is used to reward documents that contain a higher percentage of the query terms. The more query terms that appear in the document, the greater the chances that the document is a good match for the query.

queryNorm(q) is a normalizing factor used to make scores between queries comparable. This factor does not affect document ranking (since all ranked documents are multiplied by the same factor), but rather just attempts to make scores from different queries (or even different indexes) comparable.
Default implementation in DefaultSimilarity: 1.0 / Math.sqrt(sumOfSquaredWeights)
The sumOfSquaredWeights is calculated by adding together the IDF of each term in the query, squared.

t.getBoost() is a search-time boost of term t in the query q, as specified in the query text or set programmatically via setBoost().
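
In Lucene this boost can be written in the query text (for example title:lucene^2 when using QueryParser) or set on any Query; the term value here is just illustrative:

// Make matches of this term count twice as much in the final score
Query titleQuery = new TermQuery(new Term("title", "lucene"));
titleQuery.setBoost(2.0f);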

Index-Time Field-Level Boosting
The Elasticsearch guide strongly recommends against using field-level index-time boosts; prefer query-time boosting instead (see the Stack Overflow discussion below).

norm(t,d) encapsulates a few (indexing time) boost and length factors:
Field boost - set by calling field.setBoost() before adding the field to a document.
lengthNorm (field-length norm) - computed when the document is added to the index, based on the number of tokens in this field of the document, so that shorter fields contribute more to the score. The lengthNorm is computed by the Similarity class in effect at indexing time.
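
For completeness, this is where an index-time field boost would be set in Lucene 4.x (keeping in mind the recommendation above to prefer query-time boosting); both the boost and the lengthNorm end up folded into norm(t,d) when the document is indexed. The writer variable is assumed to be an open IndexWriter:

Document doc = new Document();
Field title = new TextField("title", "some title text", Field.Store.YES);
title.setBoost(1.5f); // index-time boost, combined with lengthNorm into the norm
doc.add(title);
writer.addDocument(doc); // assumes an open IndexWriter named writer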

DefaultSimilarity extends TFIDFSimilarity
https://lucene.apache.org/core/5_1_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
https://www.elastic.co/guide/en/elasticsearch/guide/master/practical-scoring-function.html
Boolean Model - the query's boolean clauses (MUST/SHOULD/MUST_NOT) first decide which documents match at all; only those documents are scored.
Vector Space Model - used to rank the matching documents: both the query and each document are represented as vectors of term weights.
A vector is really just a one-dimensional array containing numbers
The nice thing about vectors is that they can be compared. By measuring the angle between the query vector and the document vector, it is possible to assign a relevance score to each document.
https://www.elastic.co/guide/en/elasticsearch/guide/master/scoring-theory.html
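
A tiny sketch of the idea behind the vector space model (plain Java, not Lucene code): each position in the vector holds one term's weight, e.g. its TF/IDF value, and documents are ranked by the cosine of the angle between their vector and the query vector:

// Cosine similarity between a query vector and a document vector
static double cosineSimilarity(double[] query, double[] doc) {
  double dot = 0, queryLen = 0, docLen = 0;
  for (int i = 0; i < query.length; i++) {
    dot += query[i] * doc[i];
    queryLen += query[i] * query[i];
    docLen += doc[i] * doc[i];
  }
  return dot / (Math.sqrt(queryLen) * Math.sqrt(docLen));
}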

Query Normalization Factor (queryNorm)
The query normalization factor is an attempt to normalize a query so that the results from one query may be compared with the results of another.

http://stackoverflow.com/questions/14512885/is-there-a-way-to-remove-the-calculation-of-length-norms-for-fields-in-elastic-s

From the answers there: the lengthNorm and field-level boosting are both stored in the norm, so no, you can't have one without the other.

But you don't actually need field boosting at index time. You can apply it at search time instead, and that way you have more flexibility when you want to tweak the boost level later on.

Not only that, by setting omit_norms you reduce the amount of data you have to store at index time by quite a lot, so it is recommended where appropriate.
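
The Lucene-level equivalent of omit_norms is to turn off norms on the field type, so that neither the lengthNorm nor any index-time field boost is stored for that field. A sketch for Lucene 4.x:

FieldType type = new FieldType(TextField.TYPE_STORED);
type.setOmitNorms(true); // drop lengthNorm and index-time boost for this field
type.freeze();
Field description = new Field("description", "some description text", type);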

TODO
https://www.elastic.co/guide/en/elasticsearch/guide/master/query-time-boosting.html
