Chapter 14. Hibernate Search
14.1. Getting Started with Hibernate Search
14.1.1. About Hibernate Search
Hibernate Search provides full-text search capability to Hibernate applications. It is especially suited to applications for which SQL-based solutions fall short, such as full-text, fuzzy, and geolocation searches. Hibernate Search uses Apache Lucene as its full-text search engine, but is designed to minimize the maintenance overhead. Once it is configured, indexing, clustering, and data synchronization are maintained transparently, allowing you to focus on meeting your business requirements.
14.1.2. First Steps with Hibernate Search
To get started with Hibernate Search for your application, follow these topics.
- See Configuration in the JBoss EAP Administration and Configuration Guide to configure Hibernate Search.
14.1.3. Enable Hibernate Search using Maven
Use the following configuration in your Maven project to add hibernate-search-orm dependencies:
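A minimal dependency entry might look as follows. This is a sketch: the version property name and the provided scope are assumptions; use the artifact version that matches your JBoss EAP release, which typically supplies the Hibernate Search modules at runtime.

```xml
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-search-orm</artifactId>
    <!-- assumption: a version property defined in your POM or imported BOM -->
    <version>${version.org.hibernate.search}</version>
    <!-- provided scope assumes the application server supplies the modules -->
    <scope>provided</scope>
</dependency>
```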
14.1.4. Add Annotations
For this section, consider the example in which you have a database containing details of books. Your application contains the Hibernate managed classes example.Book and example.Author, and you want to add free text search capabilities to your application to enable searching for books.
Example 14.1. Entities Book and Author Before Adding Hibernate Search Specific Annotations
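The classes might look as follows before any Hibernate Search annotations are added. This is a sketch: the field names follow the upstream Hibernate Search getting-started example, and getters, setters, and the second class's separate source file are abbreviated for brevity.

```java
package example;

import java.util.Date;
import java.util.HashSet;
import java.util.Set;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToMany;

// Plain JPA mapping, before adding any Hibernate Search annotations.
@Entity
public class Book {

    @Id
    @GeneratedValue
    private Integer id;

    private String title;

    private String subtitle;

    @ManyToMany
    private Set<Author> authors = new HashSet<Author>();

    private Date publicationDate;

    public Book() {
    }

    // getters and setters ...
}

// Normally declared in its own source file Author.java.
@Entity
class Author {

    @Id
    @GeneratedValue
    private Integer id;

    private String name;

    public Author() {
    }

    // getters and setters ...
}
```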
To achieve this you have to add a few annotations to the Book and Author classes. The first annotation, @Indexed, marks Book as indexable. By design Hibernate Search stores an untokenized ID in the index to ensure index uniqueness for a given entity. @DocumentId marks the property to use for this purpose and is in most cases the same as the database primary key. The @DocumentId annotation is optional when an @Id annotation exists.
Next, the fields you want to make searchable must be marked as such. In this example, start with title and subtitle and annotate both with @Field. The parameter index=Index.YES ensures that the text will be indexed, while analyze=Analyze.YES ensures that the text will be analyzed using the default Lucene analyzer. Usually, analyzing means chunking a sentence into individual words and potentially excluding common words like 'a' or 'the'. We will talk more about analyzers a little later on. The third parameter we specify within @Field, store=Store.NO, ensures that the actual data will not be stored in the index. Whether this data is stored in the index or not has nothing to do with the ability to search for it. From Lucene's perspective it is not necessary to keep the data once the index is created. The benefit of storing it is the ability to retrieve it via projections (see Section 14.3.1.10.5, “Projection”).
Without projections, Hibernate Search will by default execute a Lucene query in order to find the database identifiers of the entities matching the query criteria and use these identifiers to retrieve managed objects from the database. The decision for or against projection has to be made on a case-by-case basis. The default behavior is recommended since it returns managed objects, whereas projections only return object arrays.
Note that index=Index.YES, analyze=Analyze.YES and store=Store.NO are the default values for these parameters and could be omitted.
Another annotation not yet discussed is @DateBridge. This annotation is one of the built-in field bridges in Hibernate Search. The Lucene index is purely string based. For this reason Hibernate Search must convert the data types of the indexed fields to strings and vice versa. A range of predefined bridges is provided, including the DateBridge, which will convert a java.util.Date into a String with the specified resolution. For more details see Section 14.2.4, “Bridges”.
This leaves us with @IndexedEmbedded. This annotation is used to index associated entities (@ManyToMany, @*ToOne, @Embedded and @ElementCollection) as part of the owning entity. This is needed since a Lucene index document is a flat data structure which does not know anything about object relations. To ensure that the authors' names will be searchable you have to ensure that the names are indexed as part of the book itself. On top of @IndexedEmbedded you will also have to mark all fields of the associated entity you want to have included in the index with @Field. For more details see Section 14.2.1.3, “Embedded and Associated Objects”.
These settings should be sufficient for now. For more details on entity mapping see Section 14.2.1, “Mapping an Entity”.
Example 14.2. Entities After Adding Hibernate Search Annotations
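Applying the annotations described above, the entities might look as follows. This is a sketch mirroring the upstream getting-started example; accessors are abbreviated, and the Author class would normally live in its own source file.

```java
package example;

import java.util.Date;
import java.util.HashSet;
import java.util.Set;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToMany;

import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.DateBridge;
import org.hibernate.search.annotations.DocumentId;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Index;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.IndexedEmbedded;
import org.hibernate.search.annotations.Resolution;
import org.hibernate.search.annotations.Store;

// @Indexed marks Book as indexable; @DocumentId marks the index identifier.
@Entity
@Indexed
public class Book {

    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    // index and analyze the text, but do not store it in the index
    @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO)
    private String title;

    @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO)
    private String subtitle;

    // the DateBridge converts the Date to an indexable String
    @Field(index = Index.YES, analyze = Analyze.NO, store = Store.YES)
    @DateBridge(resolution = Resolution.DAY)
    private Date publicationDate;

    // index the authors' names as part of the Book document
    @IndexedEmbedded
    @ManyToMany
    private Set<Author> authors = new HashSet<Author>();

    // getters and setters ...
}

// Normally declared in its own source file Author.java.
@Entity
class Author {

    @Id
    @GeneratedValue
    private Integer id;

    // @Field makes the name searchable via the embedding Book
    @Field
    private String name;

    // getters and setters ...
}
```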
14.1.5. Indexing
Hibernate Search will transparently index every entity persisted, updated or removed through Hibernate Core. However, you have to create an initial Lucene index for the data already present in your database. Once you have added the above properties and annotations it is time to trigger an initial batch index of your books. You can achieve this by using one of the following code snippets (see also Section 14.4.3, “Rebuilding the Index”):
Example 14.3. Using the Hibernate Session to Index Data
FullTextSession fullTextSession = org.hibernate.search.Search.getFullTextSession(session);
fullTextSession.createIndexer().startAndWait();
Example 14.4. Using JPA to Index Data
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager = org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
fullTextEntityManager.createIndexer().startAndWait();
After executing the above code, you should be able to see a Lucene index under /var/lucene/indexes/example.Book. Go ahead and inspect this index with Luke. It will help you to understand how Hibernate Search works.
14.1.6. Searching
To execute a search, create a Lucene query using either the Lucene API (Section 14.3.1.1, “Building a Lucene Query Using the Lucene API”) or the Hibernate Search query DSL (Section 14.3.1.2, “Building a Lucene Query”). Wrap the query in an org.hibernate.Query to get the required functionality from the Hibernate API. The following code prepares a query against the indexed fields. Executing the code returns a list of Books.
Example 14.5. Using a Hibernate Search Session to Create and Execute a Search
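A sketch of such a search using the query DSL against the fields mapped earlier. This assumes an open org.hibernate.Session named session; the search term is illustrative, and the snippet requires imports for org.hibernate.Transaction, org.hibernate.search.FullTextSession, org.hibernate.search.Search, org.hibernate.search.query.dsl.QueryBuilder, and java.util.List.

```java
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();

// build a Lucene query using the Hibernate Search query DSL
QueryBuilder qb = fullTextSession.getSearchFactory()
        .buildQueryBuilder().forEntity(Book.class).get();
org.apache.lucene.search.Query query = qb
        .keyword()
        .onFields("title", "subtitle", "authors.name")
        .matching("Java rocks!")   // illustrative search term
        .createQuery();

// wrap the Lucene query in an org.hibernate.Query
org.hibernate.Query hibQuery =
        fullTextSession.createFullTextQuery(query, Book.class);

// execute the search; returns managed Book instances
List result = hibQuery.list();

tx.commit();
session.close();
```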
Example 14.6. Using JPA to Create and Execute a Search
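The JPA variant might look as follows. This is a sketch assuming an existing EntityManagerFactory named entityManagerFactory; the search term is illustrative, and imports for javax.persistence.EntityManager, org.hibernate.search.jpa.FullTextEntityManager, org.hibernate.search.query.dsl.QueryBuilder, and java.util.List are required.

```java
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager =
        org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
em.getTransaction().begin();

// build a Lucene query using the Hibernate Search query DSL
QueryBuilder qb = fullTextEntityManager.getSearchFactory()
        .buildQueryBuilder().forEntity(Book.class).get();
org.apache.lucene.search.Query query = qb
        .keyword()
        .onFields("title", "subtitle", "authors.name")
        .matching("Java rocks!")   // illustrative search term
        .createQuery();

// wrap the Lucene query in a javax.persistence.Query
javax.persistence.Query persistenceQuery =
        fullTextEntityManager.createFullTextQuery(query, Book.class);

// execute the search; returns managed Book instances
List result = persistenceQuery.getResultList();

em.getTransaction().commit();
em.close();
```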
14.1.7. Analyzer
Assume that the title of an indexed book entity is Refactoring: Improving the Design of Existing Code and that hits are required for the queries refactor, refactors, refactored, and refactoring. Select an analyzer class in Lucene that applies word stemming when indexing and searching. Hibernate Search offers several ways to configure the analyzer (see Section 14.2.3.1, “Default Analyzer and Analyzer by Class” for more information):
- Set the analyzer property in the configuration file. The specified class becomes the default analyzer.
- Set the @Analyzer annotation at the entity level.
- Set the @Analyzer annotation at the field level.
Specify the fully qualified classname of the analyzer to use, or reference an analyzer defined by the @AnalyzerDef annotation with the @Analyzer annotation. The Solr analyzer framework with its factories is utilized for the latter option. For more information about factory classes, see the Solr JavaDoc or read the corresponding section on the Solr Wiki (http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters).
In the example, a StandardTokenizerFactory is used, followed by two filter factories: LowerCaseFilterFactory and SnowballPorterFilterFactory. The tokenizer splits words at punctuation characters and hyphens while keeping email addresses and internet hostnames intact. The standard tokenizer is ideal for this and other general operations. The lowercase filter converts all letters in the token into lowercase and the snowball filter applies language specific stemming.
If using the Solr framework, use the tokenizer with an arbitrary number of filters.
Example 14.7. Using @AnalyzerDef and the Solr Framework to Define and Use an Analyzer
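A sketch of such a definition on the Book entity, combining the tokenizer and filter factories described above. The factory package (org.apache.solr.analysis here) and the "language" parameter value are assumptions that vary with the Hibernate Search and Solr versions in use; remaining fields and accessors are as in the earlier example.

```java
package example;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.apache.solr.analysis.LowerCaseFilterFactory;
import org.apache.solr.analysis.SnowballPorterFilterFactory;
import org.apache.solr.analysis.StandardTokenizerFactory;
import org.hibernate.search.annotations.Analyzer;
import org.hibernate.search.annotations.AnalyzerDef;
import org.hibernate.search.annotations.DocumentId;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.Parameter;
import org.hibernate.search.annotations.TokenFilterDef;
import org.hibernate.search.annotations.TokenizerDef;

@Entity
@Indexed
// define the analyzer globally; it is not applied to the entity itself
@AnalyzerDef(name = "customanalyzer",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
                @TokenFilterDef(factory = LowerCaseFilterFactory.class),
                @TokenFilterDef(factory = SnowballPorterFilterFactory.class,
                        params = { @Parameter(name = "language", value = "English") })
        })
public class Book {

    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    // apply the named analyzer only to these two properties
    @Field
    @Analyzer(definition = "customanalyzer")
    private String title;

    @Field
    @Analyzer(definition = "customanalyzer")
    private String subtitle;

    // remaining fields and accessors as in the earlier example ...
}
```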
Use @AnalyzerDef to define an analyzer, then apply it to entities and properties using @Analyzer. In the example, the customanalyzer is defined but not applied on the entity. The analyzer is only applied to the title and subtitle properties. An analyzer definition is global. Define the analyzer for an entity and reuse the definition for other entities as required.