Chapter 20. Querying

20.1. Querying

Infinispan Query can execute Lucene queries and retrieve domain objects from a Red Hat JBoss Data Grid cache.

Prepare and Execute a Query

  1. Get the SearchManager of an indexing-enabled cache as follows:

    SearchManager manager = Search.getSearchManager(cache);
  2. Create a QueryBuilder to build queries for Myth.class as follows:

    final org.hibernate.search.query.dsl.QueryBuilder queryBuilder =
        manager.buildQueryBuilderForClass(Myth.class).get();
  3. Create an Apache Lucene query that queries the Myth.class class' attributes as follows:

    org.apache.lucene.search.Query query = queryBuilder.keyword()
        .onField("history").boostedTo(3)
        .matching("storm")
        .createQuery();
    
    // wrap Lucene query in a org.infinispan.query.CacheQuery
    CacheQuery cacheQuery = manager.getQuery(query);
    
    // Get query result
    List<Object> result = cacheQuery.list();

20.2. Building Queries

20.2.1. Building Queries

Query Module queries are built on Lucene queries, allowing users to use any Lucene query type. When the query is built, Infinispan Query uses org.infinispan.query.CacheQuery as the query manipulation API for further query processing.

20.2.2. Building a Lucene Query Using the Lucene-based Query API

With the Lucene API, use either the query parser (simple queries) or the Lucene programmatic API (complex queries). For details, see the online Lucene documentation or a copy of Lucene in Action or Hibernate Search in Action.

20.2.3. Building a Lucene Query

20.2.3.1. Building a Lucene Query

Using the Lucene programmatic API, it is possible to write full-text queries. However, when using the Lucene programmatic API, the parameters must be converted to their string equivalent, and the correct analyzer must be applied to the right field. An ngram analyzer, for example, uses several ngrams as the tokens for a given word and should be searched as such. It is recommended to use the QueryBuilder for this task.

The Lucene-based query API is fluent, with the following key characteristics:

  • Method names are in English. As a result, API operations can be read and understood as a series of English phrases and instructions.
  • It works well with IDE autocompletion, which suggests possible completions for the current input prefix and allows the user to choose the right option.
  • It often uses the method chaining pattern.
  • The API operations are easy to use and read.

To use the API, first create a query builder that is attached to a given indexed type. This QueryBuilder knows what analyzer to use and what field bridge to apply. Several QueryBuilders (one for each type involved in the root of your query) can be created. The QueryBuilder is derived from the SearchManager.

QueryBuilder mythQB = Search.getSearchManager(cache).buildQueryBuilderForClass(Myth.class).get();

The analyzer used for a given field or fields can also be overridden.

SearchManager searchManager = Search.getSearchManager(cache);
QueryBuilder mythQB = searchManager.buildQueryBuilderForClass(Myth.class)
    .overridesForField("history","stem_analyzer_definition")
    .get();

The query builder is now used to build Lucene queries.

20.2.3.2. Keyword Queries

The following example shows how to search for a specific word:

Keyword Search

Query luceneQuery = mythQB.keyword().onField("history").matching("storm").createQuery();

Table 20.1. Keyword query parameters

  • keyword(): use this parameter to find a specific word.
  • onField(): use this parameter to specify which Lucene field to search the word in.
  • matching(): use this parameter to specify the string to match against the field.
  • createQuery(): creates the Lucene query object.

  • The value "storm" is passed through the "history" FieldBridge. This is useful when numbers or dates are involved.
  • The field bridge value is then passed to the analyzer used to index the field "history". This ensures that the query uses the same term transformation as the indexing (lower case, ngram, stemming, and so on). If the analyzing process generates several terms for a given word, a boolean query is used with the SHOULD logic (roughly an OR logic).

To search a property that is not of type string, map and query it as in the following example:

@Indexed
public class Myth {
    @Field(analyze = Analyze.NO)
    @DateBridge(resolution = Resolution.YEAR)
    public Date getCreationDate() { return creationDate; }
    public void setCreationDate(Date creationDate) { this.creationDate = creationDate; }
    private Date creationDate;
}

Date birthdate = ...;
Query luceneQuery = mythQB.keyword()
    .onField("creationDate")
    .matching(birthdate)
    .createQuery();

Note

In plain Lucene, the Date object had to be converted to its string representation (in this case the year).

This conversion works for any object, provided that the FieldBridge has an objectToString method (and all built-in FieldBridge implementations do).
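
For illustration, such a two-way bridge might look like the following minimal sketch. The URLBridge class is hypothetical, not part of the product API; it stores a java.net.URL as its external string form:

import java.net.MalformedURLException;
import java.net.URL;

import org.hibernate.search.bridge.TwoWayStringBridge;

// Hypothetical bridge mapping a java.net.URL to and from its string form
public class URLBridge implements TwoWayStringBridge {

    @Override
    public String objectToString(Object object) {
        // called at both index and query time, so both sides stay consistent
        return object == null ? null : ((URL) object).toExternalForm();
    }

    @Override
    public Object stringToObject(String stringValue) {
        try {
            return stringValue == null ? null : new URL(stringValue);
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException("Invalid URL in index: " + stringValue, e);
        }
    }
}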

The next example searches a field that uses ngram analyzers. An ngram analyzer indexes successions of ngrams of a word, which helps to compensate for user typos. For example, the 3-grams of the word hibernate are hib, ibe, ber, ern, rna, nat, ate.

Searching Using Ngram Analyzers

@AnalyzerDef(name = "ngram",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = StandardFilterFactory.class),
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = StopFilterFactory.class),
        @TokenFilterDef(factory = NGramFilterFactory.class,
            params = {
                @Parameter(name = "minGramSize", value = "3"),
                @Parameter(name = "maxGramSize", value = "3")})
    })
public class Myth {
    @Field(analyzer = @Analyzer(definition = "ngram"))
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    private String name;
}

Query luceneQuery = mythQB.keyword()
    .onField("name")
    .matching("Sisiphus")
    .createQuery();

The matching word "Sisiphus" will be lower-cased and then split into 3-grams: sis, isi, sip, iph, phu, hus. Each of these ngrams will be part of the query. The user is then able to find the Sysiphus myth (with a y). All of this is done transparently for the user.

Note

To prevent a specific field from using the field bridge or the analyzer, call the ignoreAnalyzer() or ignoreFieldBridge() functions, as in the sketch below.
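
For example, a minimal sketch (reusing mythQB from above) that bypasses the ngram analyzer for an exact term match on the name field:

Query luceneQuery = mythQB.keyword()
    .onField("name")
    .ignoreAnalyzer()
    .matching("Sisiphus")
    .createQuery();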

To search for multiple possible words in the same field, add them all in the matching clause.

Searching for Multiple Words

//search documents with storm or lightning in their history
Query luceneQuery =
    mythQB.keyword().onField("history").matching("storm lightning").createQuery();

To search the same word on multiple fields, use the onFields method.

Searching Multiple Fields

Query luceneQuery = mythQB
    .keyword()
    .onFields("history","description","name")
    .matching("storm")
    .createQuery();

In some cases, one field must be treated differently from another field even if searching the same term. In this case, use the andField() method.

Using the andField Method

Query luceneQuery = mythQB.keyword()
    .onField("history")
    .andField("name")
    .boostedTo(5)
    .andField("description")
    .matching("storm")
    .createQuery();

In the previous example, only the name field is boosted to 5.

20.2.3.3. Fuzzy Queries

To execute a fuzzy query (based on the Levenshtein distance algorithm), start like a keyword query and add the fuzzy flag.

Fuzzy Query

Query luceneQuery = mythQB.keyword()
    .fuzzy()
    .withEditDistanceUpTo(1)
    .withPrefixLength(1)
    .onField("history")
    .matching("starm")
    .createQuery();

The withEditDistanceUpTo is the maximum edit distance (Levenshtein distance) at which two terms are considered matching. It is an integer value between 0 and 2, with a default value of 2. The withPrefixLength is the length of the prefix ignored by the "fuzziness". While the default value is 0, a non-zero value is recommended for indexes containing a large number of distinct terms.

20.2.3.4. Wildcard Queries

Wildcard queries can also be executed (queries where some parts of the word are unknown). The ? represents a single character and * represents any character sequence. Note that for performance purposes, it is recommended that the query does not start with either ? or *.

Wildcard Query

Query luceneQuery = mythQB.keyword()
    .wildcard()
    .onField("history")
    .matching("sto*")
    .createQuery();

Note

Wildcard queries do not apply the analyzer to the matching terms. Otherwise the risk of * or ? being mangled is too high.

20.2.3.5. Phrase Queries

So far we have been searching for words or sets of words; the user can also search exact or approximate sentences. Use phrase() to do so.

Phrase Query

Query luceneQuery = mythQB.phrase()
    .onField("history")
    .sentence("Thou shalt not kill")
    .createQuery();

Approximate sentences can be searched by adding a slop factor. The slop factor represents the number of other words permitted in the sentence: this works like a within or near operator.

Adding Slop Factor

Query luceneQuery = mythQB.phrase()
    .withSlop(3)
    .onField("history")
    .sentence("Thou kill")
    .createQuery();

20.2.3.6. Range Queries

A range query searches for a value in between given boundaries (included or not) or for a value below or above a given boundary (included or not).

Range Query

//look for 0 <= starred < 3
Query luceneQuery = mythQB.range()
    .onField("starred")
    .from(0).to(3).excludeLimit()
    .createQuery();

//look for myths strictly BC
Date beforeChrist = ...;
Query luceneQuery = mythQB.range()
    .onField("creationDate")
    .below(beforeChrist).excludeLimit()
    .createQuery();

20.2.3.7. Combining Queries

Queries can be aggregated (combined) to create more complex queries. The following aggregation operators are available:

  • SHOULD: the query should contain the matching elements of the subquery.
  • MUST: the query must contain the matching elements of the subquery.
  • MUST NOT: the query must not contain the matching elements of the subquery.

The subqueries can be any Lucene query, including a boolean query itself. The following are some examples:

Combining Subqueries

//look for popular modern myths that are not urban
Date twentiethCentury = ...;
Query luceneQuery = mythQB.bool()
    .must(mythQB.keyword().onField("description").matching("urban").createQuery())
    .not()
    .must(mythQB.range().onField("starred").above(4).createQuery())
    .must(mythQB.range()
        .onField("creationDate")
        .above(twentiethCentury)
        .createQuery())
    .createQuery();

//look for popular myths that are preferably urban
Query luceneQuery = mythQB
    .bool()
    .should(mythQB.keyword()
        .onField("description")
        .matching("urban")
        .createQuery())
    .must(mythQB.range().onField("starred").above(4).createQuery())
    .createQuery();

//look for all myths except religious ones
Query luceneQuery = mythQB.all()
    .except(mythQB.keyword()
        .onField("description_stem")
        .matching("religion")
        .createQuery())
    .createQuery();

20.2.3.8. Query Options

The following is a summary of query options for query types and fields:

  • boostedTo (on query type and on field) boosts the query or field to a provided factor.
  • withConstantScore (on query) returns all results that match the query and have a constant score equal to the boost.
  • filteredBy(Filter) (on query) filters query results using the Filter instance.
  • ignoreAnalyzer (on field) ignores the analyzer when processing this field.
  • ignoreFieldBridge (on field) ignores the field bridge when processing this field.

The following example illustrates how to use these options:

Querying Options

Query luceneQuery = mythQB
    .bool()
    .should(mythQB.keyword().onField("description").matching("urban").createQuery())
    .should(mythQB
        .keyword()
        .onField("name")
        .boostedTo(3)
        .ignoreAnalyzer()
        .matching("urban").createQuery())
    .must(mythQB
        .range()
        .boostedTo(5)
        .withConstantScore()
        .onField("starred")
        .above(4).createQuery())
    .createQuery();

20.2.4. Build a Query with Infinispan Query

20.2.4.1. Generality

After building the Lucene query, wrap it within an Infinispan CacheQuery. The query searches all indexed entities and returns all types of indexed classes unless explicitly configured not to do so.

Wrapping a Lucene Query in an Infinispan CacheQuery

CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery);

For improved performance, restrict the returned types as follows:

Filtering the Search Result by Entity Type

CacheQuery cacheQuery =
    Search.getSearchManager(cache).getQuery(luceneQuery, Customer.class);
// or
CacheQuery cacheQuery =
    Search.getSearchManager(cache).getQuery(luceneQuery, Item.class, Actor.class);

The first query in the example returns only the matching Customer instances. The second returns matching Item and Actor instances. The type restriction is polymorphic. As a result, to return the two subclasses Salesman and Customer of the base class Person, specify Person.class to filter on result types.
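
For instance, a minimal sketch relying on this polymorphism, using the Person, Salesman, and Customer classes named above:

// returns matching Salesman and Customer instances
CacheQuery cacheQuery =
    Search.getSearchManager(cache).getQuery(luceneQuery, Person.class);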

20.2.4.2. Pagination

To avoid performance degradation, it is recommended to restrict the number of returned objects per query. A user navigating from one page to another is a very common use case. The way to define pagination is similar to defining pagination in a plain HQL or Criteria query.

Defining pagination for a search query

CacheQuery cacheQuery = Search.getSearchManager(cache)
                              .getQuery(luceneQuery, Customer.class);
cacheQuery.firstResult(15); //start from the 15th element
cacheQuery.maxResults(10); //return 10 elements

Note

The total number of matching elements, despite the pagination, is accessible via cacheQuery.getResultSize().

20.2.4.3. Sorting

Apache Lucene contains a flexible and powerful result sorting mechanism. The default sorting is by relevance and is appropriate for a large variety of use cases. The sorting mechanism can be changed to sort by other properties using the Lucene Sort object to apply a Lucene sorting strategy.

Specifying a Lucene Sort

org.infinispan.query.CacheQuery cacheQuery =
    Search.getSearchManager(cache).getQuery(luceneQuery, Book.class);
org.apache.lucene.search.Sort sort = new Sort(
    new SortField("title", SortField.Type.STRING));
cacheQuery.sort(sort);
List results = cacheQuery.list();

Note

Fields used for sorting must not be tokenized. For more information about tokenizing, see @Field.
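
For instance, a field intended for sorting might be mapped without analysis; a minimal sketch (the field name is illustrative):

// analyze = Analyze.NO keeps the whole title as a single, sortable term
@Field(analyze = Analyze.NO)
private String title;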

20.2.4.4. Projection

In some cases, only a small subset of the properties is required. Use Infinispan Query to return a subset of properties as follows:

Using Projection Instead of Returning the Full Domain Object

SearchManager searchManager = Search.getSearchManager(cache);
CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class);
cacheQuery.projection("id", "summary", "body", "mainAuthor.name");
List results = cacheQuery.list();
Object[] firstResult = (Object[]) results.get(0);
Integer id = (Integer) firstResult[0];
String summary = (String) firstResult[1];
String body = (String) firstResult[2];
String authorName = (String) firstResult[3];

The Query Module extracts properties from the Lucene index, converts them to their object representation, and returns a list of Object[]. Projections prevent a time-consuming database round-trip. However, they have the following constraints:

  • The properties projected must be stored in the index (@Field(store = Store.YES)), which increases the index size; see the mapping sketch after this list.
  • The properties projected must use a FieldBridge implementing org.infinispan.query.bridge.TwoWayFieldBridge or org.infinispan.query.bridge.TwoWayStringBridge, the latter being the simpler version.

    Note

    All Lucene-based Query API built-in types are two-way.

  • Only the simple properties of the indexed entity or its embedded associations can be projected. Therefore a whole embedded entity cannot be projected.
  • Projection does not work on collections or maps which are indexed via @IndexedEmbedded.
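
A minimal mapping sketch for a projectable property, reusing the Book entity from the examples above (the field name is illustrative):

// store = Store.YES keeps the value in the index so it can be projected
@Field(store = Store.YES)
private String summary;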

Lucene provides metadata information about query results. Use projection constants to retrieve the metadata.

Using Projection to Retrieve Metadata

SearchManager searchManager = Search.getSearchManager(cache);
CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class);
cacheQuery.projection("mainAuthor.name");
List results = cacheQuery.list();
Object[] firstResult = (Object[]) results.get(0);
float score = (Float) firstResult[0];
Book book = (Book) firstResult[1];
String authorName = (String) firstResult[2];

Fields can be mixed with the following projection constants:

  • FullTextQuery.THIS returns the initialized and managed entity as a non-projected query does.
  • FullTextQuery.DOCUMENT returns the Lucene Document related to the projected object.
  • FullTextQuery.OBJECT_CLASS returns the indexed entity’s class.
  • FullTextQuery.SCORE returns the document score in the query. Use scores to compare one result against another for a given query. However, scores are not relevant to compare the results of two different queries.
  • FullTextQuery.ID is the ID property value of the projected object.
  • FullTextQuery.DOCUMENT_ID is the Lucene document ID. The Lucene document ID changes between two IndexReader openings.
  • FullTextQuery.EXPLANATION returns the Lucene Explanation object for the matching object/document in the query. This is not suitable for retrieving large amounts of data. Running FullTextQuery.EXPLANATION is as expensive as running a Lucene query for each matching element. As a result, projection is recommended.

20.2.4.5. Limiting the Time of a Query

Limit the time a query takes in Infinispan Query as follows:

  • Raise an exception when arriving at the limit.
  • Limit the number of results retrieved when the time limit is reached.

20.2.4.6. Raise an Exception on Time Limit

If a query takes longer than the defined amount of time, a custom exception can be thrown.

To define the limit when using the CacheQuery API, use the following approach:

Defining a Timeout in Query Execution

SearchManagerImplementor searchManager = (SearchManagerImplementor) Search.getSearchManager(cache);
searchManager.setTimeoutExceptionFactory(new MyTimeoutExceptionFactory());
CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class);

//define the timeout in seconds
cacheQuery.timeout(2, TimeUnit.SECONDS);

try {
    cacheQuery.list();
}
catch (MyTimeoutException e) {
    //do something, too slow
}

private static class MyTimeoutExceptionFactory implements TimeoutExceptionFactory {
    @Override
    public RuntimeException createTimeoutException(String message, String queryDescription) {
        return new MyTimeoutException();
    }
}

public static class MyTimeoutException extends RuntimeException {
}

The getResultSize(), iterate(), and scroll() methods honor the timeout until the end of the method call. As a result, the returned Iterable or ScrollableResults ignores the timeout. Additionally, explain() does not honor this timeout period. This method is used for debugging and to check the reasons for slow query performance.

Important

The example code does not guarantee that the query stops exactly at the specified time limit.

20.3. Retrieving the Results

20.3.1. Retrieving the Results

After building the Infinispan Query, it can be executed in the same way as an HQL or Criteria query. The same paradigm and object semantics apply to a Lucene query, including common operations like list().

20.3.2. Performance Considerations

Use list() to receive a reasonable number of results (for example when using pagination) and to work on them all. list() works best if the entity batch-size is set up correctly. If list() is used, the Query Module processes all Lucene Hits elements within the pagination.

20.3.3. Result Size

Some use cases require information about the total number of matching documents. Consider the following examples:

Retrieving all matching documents is costly in terms of resources. Instead, the Lucene-based Query API can return the total number of matching documents, regardless of the pagination parameters, without triggering any object loads.

Determining the Result Size of a Query

CacheQuery cacheQuery =
    Search.getSearchManager(cache).getQuery(luceneQuery, Book.class);
//return the number of matching books without loading a single one
assert 3245 == cacheQuery.getResultSize();

CacheQuery cacheQueryLimited =
    Search.getSearchManager(cache).getQuery(luceneQuery, Book.class);
cacheQueryLimited.maxResults(10);
List results = cacheQueryLimited.list();
assert 10 == results.size();
//return the total number of matching books regardless of pagination
assert 3245 == cacheQueryLimited.getResultSize();

The number of results is an approximation if the index is not correctly synchronized with the data. An asynchronous cluster is an example of this scenario.

20.3.4. Understanding Results

Luke can be used to determine why a result appears (or does not appear) in the expected query result. The Query Module also offers the Lucene Explanation object for a given result (in a given query). This is an advanced class. Access the Explanation object as follows:

cacheQuery.explain(int) method

This method requires a document ID as a parameter and returns the Explanation object.
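
A minimal sketch, assuming the Lucene document ID was obtained beforehand (for example by projecting FullTextQuery.DOCUMENT_ID, as described under Projection):

// documentId is the Lucene document ID of the result to analyze
org.apache.lucene.search.Explanation explanation = cacheQuery.explain(documentId);
System.out.println(explanation.toString());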

Note

In terms of resources, building an explanation object is as expensive as running the Lucene query. Do not build an explanation object unless it is necessary for the implementation.

20.4. Filters

20.4.1. Filters

Apache Lucene is able to filter query results according to a custom filtering process. This is a powerful way to apply additional data restrictions, especially since filters can be cached and reused. Applicable use cases include:

  • security
  • temporal data (for example, view only last month's data)
  • population filter (for example, search limited to a given category)
  • and many more

20.4.2. Defining and Implementing a Filter

The Lucene-based Query API includes named, parameterizable filters that are cached transparently. The API is similar to the Hibernate Core filters:

Enabling Fulltext Filters for a Query

cacheQuery = Search.getSearchManager(cache).getQuery(query, Driver.class);
cacheQuery.enableFullTextFilter("bestDriver");
cacheQuery.enableFullTextFilter("security").setParameter("login", "andre");
cacheQuery.list(); //returns only best drivers where andre has credentials

In the provided example, two filters are enabled in the query. Enable or disable filters to customize the query.

Declare filters using the @FullTextFilterDef annotation. This annotation applies to @Indexed entities irrespective of the query the filter is later applied to. Filter definitions are global; therefore, each filter must have a unique name. If two @FullTextFilterDef annotations with the same name are defined, a SearchException is thrown. Each named filter must specify its filter implementation.

Defining and Implementing a Filter

@FullTextFilterDefs({
    @FullTextFilterDef(name = "bestDriver", impl = BestDriversFilter.class),
    @FullTextFilterDef(name = "security", impl = SecurityFilterFactory.class)
})
public class Driver { ... }

public class BestDriversFilter extends org.apache.lucene.search.Filter {

    public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
        OpenBitSet bitSet = new OpenBitSet(reader.maxDoc());
        TermDocs termDocs = reader.termDocs(new Term("score", "5"));
        while (termDocs.next()) {
            bitSet.set(termDocs.doc());
        }
        return bitSet;
    }
}

BestDriversFilter is a Lucene filter that reduces the result set to drivers whose score is 5. In the example, the filter implements org.apache.lucene.search.Filter directly and has a no-arg constructor.

20.4.3. The @Factory Filter

Use the following factory pattern if the filter creation requires further steps, or if the filter does not have a no-arg constructor:

Creating a filter using the factory pattern

@FullTextFilterDef(name = "bestDriver", impl = BestDriversFilterFactory.class)
public class Driver { ... }

public class BestDriversFilterFactory {

    @Factory
    public Filter getFilter() {
        //some additional steps to cache the filter results per IndexReader
        Filter bestDriversFilter = new BestDriversFilter();
        return new CachingWrapperFilter(bestDriversFilter);
    }
}

The Lucene-based Query API uses a @Factory annotated method to build the filter instance. The factory must have a no-argument constructor.

Named filters come in handy when parameters must be passed to the filter. For example, a security filter might need to know which security level to apply:

Passing parameters to a defined filter

cacheQuery = Search.getSearchManager(cache).getQuery(query, Driver.class);
cacheQuery.enableFullTextFilter("security").setParameter("level", 5);

Each parameter name should have an associated setter on either the filter or filter factory of the targeted named filter definition.

Using parameters in the actual filter implementation

public class SecurityFilterFactory {
    private Integer level;

    /**
     * injected parameter
     */
    public void setLevel(Integer level) {
        this.level = level;
    }

    @Key
    public FilterKey getKey() {
        StandardFilterKey key = new StandardFilterKey();
        key.addParameter(level);
        return key;
    }

    @Factory
    public Filter getFilter() {
        Query query = new TermQuery(new Term("level", level.toString()));
        return new CachingWrapperFilter(new QueryWrapperFilter(query));
    }
}

Note that the method annotated with @Key returns a FilterKey object. The returned object has a special contract: the key object must implement equals() / hashCode() so that two keys are equal if and only if the given Filter types are the same and the sets of parameters are the same. In other words, two filter keys are equal if and only if the filters from which the keys are generated can be interchanged. The key object is used as a key in the cache mechanism.

20.4.4. Key Objects

@Key methods are needed only if:

  • the filter caching system is enabled (enabled by default)
  • the filter has parameters

The StandardFilterKey delegates the equals() / hashCode() implementation to the equals() and hashCode() methods of each of the parameters.

Defined filters are cached by default. The cache uses a combination of hard and soft references to allow memory to be reclaimed when needed. The hard reference cache keeps track of the most recently used filters and converts the least used ones to SoftReferences when needed. Once the limit of the hard reference cache is reached, additional filters are cached as SoftReferences. To adjust the size of the hard reference cache, use default.filter.cache_strategy.size (defaults to 128). For advanced use of filter caching, implement your own FilterCachingStrategy; the class name is defined by default.filter.cache_strategy.
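
For example, when configuring the cache programmatically, the property can be passed along with the other indexing properties; a minimal sketch using the standard ConfigurationBuilder API (the size value 256 is arbitrary):

import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.Index;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.indexing()
       .index(Index.ALL)
       // enlarge the hard reference filter cache from its default of 128
       .addProperty("default.filter.cache_strategy.size", "256");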

This filter caching mechanism should not be confused with caching the actual filter results. In Lucene it is common practice to wrap a filter in a CachingWrapperFilter. The wrapper caches the DocIdSet returned from the getDocIdSet(IndexReader reader) method to avoid expensive recomputation. It is important to mention that the computed DocIdSet is only cacheable for the same IndexReader instance, because the reader effectively represents the state of the index at the moment it was opened. The document list cannot change within an opened IndexReader. A different or new IndexReader instance, however, potentially works on a different set of Documents (either from a different index or simply because the index has changed), so the cached DocIdSet must be recomputed.

20.4.5. Full Text Filter

The Lucene-based Query API uses the cache flag of @FullTextFilterDef, set by default to FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS, which automatically caches the filter instance and wraps the filter in a Hibernate-specific implementation of CachingWrapperFilter. Unlike Lucene's version of this class, SoftReferences are used together with a hard reference count (see the discussion about the filter cache). The hard reference count is adjusted using default.filter.cache_docidresults.size (defaults to 5). Wrapping is controlled using the @FullTextFilterDef.cache parameter. There are three different values for this parameter:

  • FilterCacheModeType.NONE: No filter instance and no result is cached by Hibernate Search. For every filter call, a new filter instance is created. This setting might be useful for rapidly changing data sets or heavily memory-constrained environments.
  • FilterCacheModeType.INSTANCE_ONLY: The filter instance is cached and reused across concurrent Filter.getDocIdSet() calls. DocIdSet results are not cached. This setting is useful when a filter uses its own specific caching mechanism, or when the filter results change dynamically due to application-specific events; in both cases DocIdSet caching is unnecessary.
  • FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS: Both the filter instance and the DocIdSet results are cached. This is the default value.

Filters should be cached in the following situations:

  • The system does not update the targeted entity index often (in other words, the IndexReader is reused a lot).
  • The Filter’s DocIdSet is expensive to compute (compared to the time spent to execute the query).

20.4.6. Using Filters in a Sharded Environment

Execute queries on a subset of the available shards in a sharded environment as follows:

  1. Create a sharding strategy to select a subset of IndexManagers depending on filter configurations.
  2. Activate the filter when running the query.

The following is an example of a sharding strategy that queries a specific shard if the customer filter is activated:

Querying a Specific Shard

public class CustomerShardingStrategy implements IndexShardingStrategy {

    // IndexManagers stored in an array indexed by customerID
    private IndexManager[] indexManagers;

    public void initialize(Properties properties, IndexManager[] indexManagers) {
        this.indexManagers = indexManagers;
    }

    public IndexManager[] getIndexManagersForAllShards() {
        return indexManagers;
    }

    public IndexManager getIndexManagerForAddition(
        Class<?> entity, Serializable id, String idInString, Document document) {
        Integer customerID = Integer.parseInt(document.getFieldable("customerID")
                                                      .stringValue());
        return indexManagers[customerID];
    }

    public IndexManager[] getIndexManagersForDeletion(
        Class<?> entity, Serializable id, String idInString) {
        return getIndexManagersForAllShards();
    }

    /**
     * Optimization; don't search ALL shards and union the results; in this case, we
     * can be certain that all the data for a particular customer Filter is in a single
     * shard; return that shard by customerID.
     */
    public IndexManager[] getIndexManagersForQuery(
        FullTextFilterImplementor[] filters) {
        FullTextFilter filter = getCustomerFilter(filters, "customer");
        if (filter == null) {
            return getIndexManagersForAllShards();
        }
        else {
            return new IndexManager[] { indexManagers[Integer.parseInt(
                filter.getParameter("customerID").toString())] };
        }
    }

    private FullTextFilter getCustomerFilter(FullTextFilterImplementor[] filters,
                                             String name) {
        for (FullTextFilterImplementor filter: filters) {
            if (filter.getName().equals(name)) return filter;
        }
        return null;
    }
}

If the customer filter is present in the example, the query only uses the shard dedicated to the customer. If the customer filter is not found, the query is run against all shards. The sharding strategy reacts to each filter depending on the provided parameters.

Activate the filter when the query must be run. The filter is a regular filter (as defined in Filters), which filters Lucene results after the query. Alternatively, use a special filter that is passed to the sharding strategy and then ignored for the duration of the query. Use the ShardSensitiveOnlyFilter class to declare the filter.

Using the ShardSensitiveOnlyFilter Class

@Indexed
@FullTextFilterDef(name = "customer", impl = ShardSensitiveOnlyFilter.class)
public class Customer {
   ...
}

CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(query,
    Customer.class);
cacheQuery.enableFullTextFilter("customer").setParameter("CustomerID", 5);
@SuppressWarnings("unchecked")
List results = cacheQuery.list();

If the ShardSensitiveOnlyFilter filter is used, Lucene filters do not need to be implemented. Use filters and sharding strategies that react to these filters for faster query execution in a sharded environment.

20.5. Continuous Queries

20.5.1. Continuous Query

Continuous Querying allows an application to receive the entries that currently match a query, and be continuously notified of any changes to the queried data set. This includes both incoming matches, for values that have joined the set, and outgoing matches, for values that have left the set, that resulted from further cache operations. By using a Continuous Query the application receives a steady stream of events instead of repeatedly executing the same query to look for changes, resulting in a more efficient use of resources.

For instance, all of the following use cases could utilize Continuous Queries:

  1. Return all persons with an age between 18 and 25 (assuming the Person entity has an age property and is updated by the user application).
  2. Return all transactions higher than $2000.
  3. Return all laps in which the lap time of F1 racers was less than 1:45.00s (assuming the cache contains Lap entries and that laps are entered live during the race).

20.5.2. Continuous Query Evaluation

A Continuous Query uses a listener that receives a notification when:

  • An entry starts matching the specified query, represented by a Join event.
  • An entry stops matching the specified query, represented by a Leave event.

When a client registers a Continuous Query Listener it immediately begins to receive the results currently matching the query, received as Join events as described above. In addition, it will receive subsequent notifications when other entries begin matching the query, as Join events, or stop matching the query, as Leave events, as a consequence of any cache operations that would normally generate creation, modification, removal, or expiration events.

To determine if the listener receives a Join or Leave event the following logic is used:

  1. If the query on both the old and new values evaluate false, then the event is suppressed.
  2. If the query on both the old and new values evaluate true, then the event is suppressed.
  3. If the query on the old value evaluates false and the query on the new value evaluates true, then a Join event is sent.
  4. If the query on the old value evaluates true and the query on the new value evaluates false, then a Leave event is sent.
  5. If the query on the old value evaluates true and the entry is removed, then a Leave event is sent.

Note

Continuous Queries cannot use grouping, aggregation, or sorting operations.

20.5.3. Using Continuous Queries

The following instructions apply to both Library and Remote Client-Server modes.

Adding Continuous Queries

To create a Continuous Query, create the Query object as with other querying methods; then register it with an org.infinispan.query.api.continuous.ContinuousQuery and an org.infinispan.query.api.continuous.ContinuousQueryListener.

The ContinuousQuery object associated to a cache can be obtained by calling the static method org.infinispan.client.hotrod.Search.getContinuousQuery(RemoteCache<K, V> cache) if running in Client-Server mode or org.infinispan.query.Search.getContinuousQuery(Cache<K, V> cache) when running in Library mode.
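
For example, in Remote Client-Server mode the only difference is the entry point; a minimal sketch, assuming an existing RemoteCache named remoteCache:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.query.api.continuous.ContinuousQuery;

// Hot Rod client entry point for continuous queries
ContinuousQuery<Integer, Person> continuousQuery =
    org.infinispan.client.hotrod.Search.getContinuousQuery(remoteCache);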

Once the ContinuousQueryListener has been defined it may be added by using the addContinuousQueryListener method of ContinuousQuery:

continuousQuery.addContinuousQueryListener(query, listener)

The following example demonstrates a simple method of implementing and adding a Continuous Query in Library mode:

Defining and Adding a Continuous Query

import org.infinispan.query.api.continuous.ContinuousQuery;
import org.infinispan.query.api.continuous.ContinuousQueryListener;
import org.infinispan.query.Search;
import org.infinispan.query.dsl.QueryFactory;
import org.infinispan.query.dsl.Query;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

[...]

// To begin we create a ContinuousQuery instance on the cache
ContinuousQuery<Integer, Person> continuousQuery = Search.getContinuousQuery(cache);

// Define our query. In this case we will be looking for any
// Person instances under 21 years of age.
QueryFactory queryFactory = Search.getQueryFactory(cache);
Query query = queryFactory.from(Person.class)
    .having("age").lt(21)
    .build();

final Map<Integer, Person> matches = new ConcurrentHashMap<Integer, Person>();

// Define the ContinuousQueryListener
ContinuousQueryListener<Integer, Person> listener = new ContinuousQueryListener<Integer, Person>() {
    @Override
    public void resultJoining(Integer key, Person value) {
        matches.put(key, value);
    }

    @Override
    public void resultLeaving(Integer key) {
        matches.remove(key);
    }
};

// Add the listener and generated query
continuousQuery.addContinuousQueryListener(query, listener);

[...]

// Remove the listener to stop receiving notifications
continuousQuery.removeContinuousQueryListener(listener);

As Person instances with an age less than 21 are added to the cache, they will be placed into matches, and when these entries are removed from the cache they will also be removed from matches.

Removing Continuous Queries

To stop the query from further execution remove the listener:

continuousQuery.removeContinuousQueryListener(listener);

20.5.4. C++ and C# Continuous Queries

In addition to native Java based continuous queries, JBoss Data Grid also supports C++ and C# based continuous queries.

20.5.4.1. C++ Continuous Queries

C++ continuous queries can be set up using the following code:

C++ Continuous Query setup

ContinuousQueryListener<int, sample_bank_account::User> cql(testCache,"select id from sample_bank_account.User");
std::function<void(int, sample_bank_account::User)> join = [](int k, sample_bank_account::User u) {
    std::cout << "JOINING: key="<< u.id() << " value="<< u.name() << std::endl;
};
std::function<void(int, sample_bank_account::User)> leave =[](int k, sample_bank_account::User u) {
    std::cout << "LEAVING: key="<< u.id() << " value="<< u.name() << std::endl;
};
std::function<void(int, sample_bank_account::User)> change =[](int k, sample_bank_account::User u) {
    std::cout << "CHANGING: key="<< u.id() << " value="<< u.name() << std::endl;
};

cql.setJoiningListener(join);
cql.setLeavingListener(leave);
cql.setUpdatedListener(change);
testCache.addContinuousQueryListener(cql);

C++ continuous queries can be removed using the following code:

C++ Continuous Query Removal

testCache.addContinuousQueryListener(cql);

[...]

// Remove the listener to stop receiving notifications
testCache.removeContinuousQueryListener(cql);

20.5.4.2. C# Continuous Queries

C# continuous queries can be set up using the following code:

C# Continuous Query setup

qr.QueryString = "from sample_bank_account.User";

Event.ContinuousQueryListener<int, User> cql = new Event.ContinuousQueryListener<int, User>(qr.QueryString);
cql.JoiningCallback = (int k, User v) => { Console.WriteLine("JOINING: " + k + ", " + v); s.Release(); };
cql.LeavingCallback = (int k, User v) => { Console.WriteLine("LEAVING: " + k + ", " + v); };
cql.UpdatedCallback = (int k, User v) => { Console.WriteLine("UPDATED: " + k + ", " + v); };
userCache.AddContinuousQueryListener(cql);

C# continuous queries can be removed using the following code:

C# Continuous Query Removal

userCache.AddContinuousQueryListener(cql);

[...]

// Remove the listener to stop receiving notifications
userCache.RemoveContinuousQueryListener(cql);

20.5.5. Performance Considerations with Continuous Queries

Continuous Queries are designed to keep the applications that use them constantly updated, which can result in a large number of events being generated for particularly broad queries. In addition, a new memory allocation is made for each event. This behavior may result in memory pressure, including potential errors, if queries are not carefully designed.

To prevent these issues, it is strongly recommended to ensure that each query captures only the information needed and that each ContinuousQueryListener is designed to quickly process all received events; one way to limit event payloads is shown below.
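
For example, projecting only the required fields keeps event payloads small; a minimal sketch of the Person query from section 20.5.3, assuming projection support in continuous queries (listener callbacks then receive an Object[] instead of whole entities; queryFactory and continuousQuery are as in that example):

// Project only the fields the listener needs instead of whole Person objects
Query projectedQuery = queryFactory.from(Person.class)
    .select("name", "age")
    .having("age").lt(21)
    .build();

final Map<Integer, String> matchedNames = new ConcurrentHashMap<Integer, String>();

ContinuousQueryListener<Integer, Object[]> projectionListener =
        new ContinuousQueryListener<Integer, Object[]>() {
    @Override
    public void resultJoining(Integer key, Object[] projection) {
        // projection[0] = name, projection[1] = age
        matchedNames.put(key, (String) projection[0]);
    }

    @Override
    public void resultLeaving(Integer key) {
        matchedNames.remove(key);
    }
};

continuousQuery.addContinuousQueryListener(projectedQuery, projectionListener);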

20.6. Broadcast Queries

20.6.1. Broadcast Queries

The broadcast query feature allows each node to index its own data during writes; at query time, the query is sent, or "broadcast", to each node, and the results from each node are combined before being returned to the caller. This is ideal for DIST caches with large indices, since the only data transferred over the network is the query itself and the results.

20.6.1.1. Using Broadcast Queries

To use broadcast queries, include IndexedQueryMode.BROADCAST as an argument to your query. An example is shown below:

CacheQuery<Person> broadcastQuery = Search.getSearchManager(cache)
    .getQuery(new MatchAllDocsQuery(), IndexedQueryMode.BROADCAST);

List<Person> result = broadcastQuery.list();