14.2. Mapping Entities to the Index Structure
14.2.1. Mapping an Entity
14.2.1.1. Basic Mapping
- @Indexed
- @Field
- @NumericField
- @Id
14.2.1.1.1. @Indexed
First, we must declare a persistent class as indexable. This is done by annotating the class with @Indexed (all entities not annotated with @Indexed will be ignored by the indexing process):
Example 14.8. Making a class indexable with @Indexed
@Entity
@Indexed
public class Essay {
    ...
}
Use the index attribute of the @Indexed annotation to change the default name of the index.
14.2.1.1.2. @Field
The @Field annotation declares a property as indexed and allows you to configure several aspects of the indexing process by setting one or more of the following attributes:
- name: describes under which name the property should be stored in the Lucene Document. The default value is the property name (following the JavaBeans convention).
- store: describes whether or not the property is stored in the Lucene index. You can store the value with Store.YES (consuming more space in the index but allowing projection, see Section 14.3.1.10.5, “Projection”), store it in a compressed way with Store.COMPRESS (this does consume more CPU), or avoid any storage with Store.NO (this is the default value). When a property is stored, you can retrieve its original value from the Lucene Document. This is not related to whether the element is indexed or not.
- index: describes whether the property is indexed or not. The different values are Index.NO (no indexing, that is, the value cannot be found by a query) and Index.YES (the element gets indexed and is searchable). The default value is Index.YES. Index.NO can be useful for cases where a property is not required to be searchable, but should be available for projection.

Note

Index.NO in combination with Analyze.YES or Norms.YES is not useful, since analyze and norms require the property to be indexed.

- analyze: determines whether the property is analyzed (Analyze.YES) or not (Analyze.NO). The default value is Analyze.YES.

Note

Whether or not you want to analyze a property depends on whether you wish to search the element as is, or by the words it contains. It makes sense to analyze a text field, but probably not a date field.

Note

Fields used for sorting must not be analyzed.

- norms: describes whether index time boosting information should be stored (Norms.YES) or not (Norms.NO). Not storing it can save a considerable amount of memory, but there won't be any index time boosting information available. The default value is Norms.YES.
- termVector: describes collections of term-frequency pairs. This attribute enables the storing of the term vectors within the documents during indexing. The default value is TermVector.NO. The different values of this attribute are:

| Value | Definition |
|---|---|
| TermVector.YES | Store the term vectors of each document. This produces two synchronized arrays, one contains document terms and the other contains the term's frequency. |
| TermVector.NO | Do not store term vectors. |
| TermVector.WITH_OFFSETS | Store the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms. |
| TermVector.WITH_POSITIONS | Store the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document. |
| TermVector.WITH_POSITION_OFFSETS | Store the term vector, token position and offset information. This is a combination of YES, WITH_OFFSETS and WITH_POSITIONS. |

- indexNullAs: by default, null values are ignored and not indexed. However, using indexNullAs you can specify a string which will be inserted as a token for the null value. By default this value is set to Field.DO_NOT_INDEX_NULL, indicating that null values should not be indexed. You can set this value to Field.DEFAULT_NULL_TOKEN to indicate that a default null token should be used. This default null token can be specified in the configuration using hibernate.search.default_null_token. If this property is not set and you specify Field.DEFAULT_NULL_TOKEN, the string "_null_" will be used as default.

Note

When the indexNullAs parameter is used, it is important to use the same token in the search query to search for null values. It is also advisable to use this feature only with un-analyzed fields (analyze=Analyze.NO).

Warning

When implementing a custom FieldBridge or TwoWayFieldBridge it is up to the developer to handle the indexing of null values (see the JavaDoc of LuceneOptions.indexNullAs()).
14.2.1.1.3. @NumericField
There is a companion annotation to @Field called @NumericField that can be specified in the same scope as @Field or @DocumentId. It can be specified for Integer, Long, Float, and Double properties. At index time the value will be indexed using a Trie structure. When a property is indexed as a numeric field, it enables efficient range queries and sorting, orders of magnitude faster than doing the same query on standard @Field properties. The @NumericField annotation accepts the following parameters:
| Value | Definition |
|---|---|
| forField | (Optional) Specifies the name of the related @Field that will be indexed as numeric. It is only mandatory when the property contains more than one @Field declaration. |
| precisionStep | (Optional) Changes the way that the Trie structure is stored in the index. Smaller precisionSteps lead to more disk space usage, and faster range and sort queries. Larger values lead to less space used, and range query performance closer to the range query in normal @Fields. The default value is 4. |
@NumericField supports only Double, Long, Integer, and Float. It is not possible to take advantage of similar functionality in Lucene for the other numeric types, so the remaining types should use string encoding via the default or a custom TwoWayFieldBridge.
For those types you can use a custom NumericFieldBridge, assuming you can deal with the approximation during type transformation:
Example 14.9. Defining a custom NumericFieldBridge
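A sketch of such a bridge: it scales a BigDecimal to a long before indexing it numerically, accepting the approximation. The store factor and class name are illustrative; the imports come from Hibernate Search and Lucene.

```java
import java.math.BigDecimal;

import org.apache.lucene.document.Document;
import org.hibernate.search.bridge.LuceneOptions;
import org.hibernate.search.bridge.TwoWayFieldBridge;

/* Indexes a BigDecimal as a scaled long so that Lucene's numeric
 * (Trie) encoding can be used for range queries and sorting. */
public class BigDecimalNumericFieldBridge implements TwoWayFieldBridge {

    private static final BigDecimal STORE_FACTOR = BigDecimal.valueOf(100);

    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        if (value != null) {
            BigDecimal decimal = (BigDecimal) value;
            // store "cents" as a long, accepting the precision loss
            long indexedValue = decimal.multiply(STORE_FACTOR).longValue();
            luceneOptions.addNumericFieldToDocument(name, indexedValue, document);
        }
    }

    @Override
    public Object get(String name, Document document) {
        // scale the stored value back to the original magnitude
        String fromLucene = document.get(name);
        return new BigDecimal(fromLucene).divide(STORE_FACTOR);
    }

    @Override
    public String objectToString(Object object) {
        return object == null ? null : object.toString();
    }
}
```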
14.2.1.1.4. @Id
The id (identifier) property of an entity is a special property used by Hibernate Search to ensure index uniqueness of a given entity. By design, an id must be stored and must not be tokenized. To mark a property as an index identifier, use the @DocumentId annotation. If you are using JPA and have specified @Id, you can omit @DocumentId. The chosen entity identifier will also be used as the document identifier.
Example 14.10. Specifying indexed properties
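A sketch of a mapping matching this description (JPA and Hibernate Search annotation imports are omitted):

```java
@Entity
@Indexed
public class Essay {

    // the entity id doubles as the Lucene document id
    @Id
    @DocumentId
    private Long id;

    // stored under the explicit name "Abstract" and kept in the index for projection
    @Field(name = "Abstract", store = Store.YES)
    private String summary;

    @Lob
    @Field
    private String text;

    // indexed numerically, with a larger precision step than the default of 4
    @Field
    @NumericField(precisionStep = 6)
    private float grade;
}
```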
This example defines an index with four fields: id, Abstract, text, and grade. Note that by default the field name is not capitalized, following the JavaBean specification. The grade field is annotated as numeric with a slightly larger precision step than the default.
14.2.1.2. Mapping Properties Multiple Times
Example 14.11. Using @Fields to map a property multiple times
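A sketch of such a mapping: the property is indexed once analyzed for full-text search, and once un-analyzed (and stored) for sorting. The owning entity name is illustrative.

```java
@Entity
@Indexed
public class Book {

    @Id
    @DocumentId
    private Long id;

    @Fields({
        // tokenized field for full-text search, stored under the default name "summary"
        @Field,
        // un-analyzed field usable for sorting
        @Field(name = "summary_forSort", analyze = Analyze.NO, store = Store.YES)
    })
    private String summary;
}
```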
In this example, summary is indexed twice: once as summary in a tokenized way, and once as summary_forSort in an untokenized way.
14.2.1.3. Embedded and Associated Objects
Associated objects, as well as embedded objects, can be indexed as part of the root entity index. This is useful when you want to search an entity based on properties of an associated object, for example searching for places whose city is Atlanta (in Lucene query syntax, address.city:Atlanta). The Place fields will be indexed in the Place index. The Place index documents will also contain the fields address.id, address.street, and address.city, which you will be able to query.
Example 14.12. Indexing associations
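A sketch of the association mapping, assuming a bidirectional one-to-one between Place and Address (the cascade settings are illustrative):

```java
@Entity
@Indexed
public class Place {

    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String name;

    // embed the Address fields into the Place index as address.*
    @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    @IndexedEmbedded
    private Address address;
}

@Entity
public class Address {

    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String street;

    @Field
    private String city;

    // keeps the Place index up to date when this Address changes
    @ContainedIn
    @OneToOne(mappedBy = "address")
    private Place place;
}
```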
When using the @IndexedEmbedded technique, Hibernate Search must be aware of any change in the Place object and any change in the Address object to keep the index up to date. To ensure the Place Lucene document is updated when its Address changes, mark the other side of the bidirectional relationship with @ContainedIn.
Note
@ContainedIn is useful on both associations pointing to entities and on embedded (collection of) objects.
Example 14.13. Nested usage of @IndexedEmbedded and @ContainedIn
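A sketch of the nested mapping, assuming Owner is an embeddable component (imports are omitted):

```java
@Entity
@Indexed
public class Place {

    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String name;

    @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    @IndexedEmbedded
    private Address address;
}

@Entity
public class Address {

    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String street;

    @Field
    private String city;

    // embed the owner one level deep, under the "ownedBy_" prefix
    @IndexedEmbedded(depth = 1, prefix = "ownedBy_")
    @Embedded
    private Owner ownedBy;

    @ContainedIn
    @OneToOne(mappedBy = "address")
    private Place place;
}

@Embeddable
public class Owner {

    @Field
    private String name;
}
```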
Any @*ToMany, @*ToOne, or @Embedded attribute can be annotated with @IndexedEmbedded. The attributes of the associated class will then be added to the main entity index. In Example 14.13, “Nested usage of @IndexedEmbedded and @ContainedIn” the index will contain the following fields:
- id
- name
- address.street
- address.city
- address.ownedBy_name
The default prefix is propertyName., following the traditional object navigation convention. You can override it using the prefix attribute, as shown on the ownedBy property.
Note
The depth property is necessary when the object graph contains a cyclic dependency of classes (not instances). For example, if Owner points to Place. Hibernate Search will stop including indexed embedded attributes after reaching the expected depth (or the object graph boundaries are reached). A class having a self reference is an example of cyclic dependency. In our example, because depth is set to 1, any @IndexedEmbedded attribute in Owner (if any) will be ignored.
Using @IndexedEmbedded for object associations allows you to express queries (using Lucene's query syntax) such as:
- Return places where name contains JBoss and where address city is Atlanta. In Lucene query syntax this would be:

  +name:jboss +address.city:atlanta

- Return places where name contains JBoss and where owner's name contains Joe. In Lucene query syntax this would be:

  +name:jboss +address.ownedBy_name:joe
Note
When @IndexedEmbedded points to an entity, the other side of the association has to be annotated with @ContainedIn (as seen in the previous example). If not, Hibernate Search has no way to update the root index when the associated entity is updated (in our example, a Place index document has to be updated when the associated Address instance is updated).
Sometimes, the object type annotated by @IndexedEmbedded is not the object type targeted by Hibernate and Hibernate Search. This is especially the case when interfaces are used in lieu of their implementation. For this reason you can override the object type targeted by Hibernate Search using the targetElement parameter.
Example 14.14. Using the targetElement property of @IndexedEmbedded
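A sketch, assuming the ownedBy property is declared against a Person interface implemented by Owner (the Hibernate-specific @Target annotation is used for the ORM side, targetElement for the search side):

```java
@Entity
@Indexed
public class Address {

    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String street;

    // the property is typed to the Person interface; tell Hibernate Search
    // to index it as its concrete Owner implementation
    @IndexedEmbedded(depth = 1, prefix = "ownedBy_", targetElement = Owner.class)
    @Target(Owner.class)
    @Embedded
    private Person ownedBy;
}

@Embeddable
public class Owner implements Person {

    @Field
    private String name;
}
```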
14.2.1.4. Limiting Object Embedding to Specific Paths

The @IndexedEmbedded annotation also provides an attribute, includePaths, which can be used as an alternative to depth, or be combined with it.

When using only depth, all indexed fields of the embedded type are added recursively at the same depth. This makes it harder to select only a specific path without adding all other fields as well, which might not be needed.
Example 14.15. Using the includePaths property of @IndexedEmbedded
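A sketch of such a mapping: only the parents' name is included in the embedded paths, not their surname.

```java
@Entity
@Indexed
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String name;

    @Field
    private String surname;

    // only the "name" path of each parent is embedded in the index
    @OneToMany
    @IndexedEmbedded(includePaths = { "name" })
    private Set<Person> parents;

    @ContainedIn
    @ManyToOne
    private Person child;
}
```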
Using the mapping in Example 14.15, “Using the includePaths property of @IndexedEmbedded”, you would be able to search on a Person by name and/or surname, and/or the name of the parent. It will not index the surname of the parent, so searching on parents' surnames will not be possible, but it speeds up indexing, saves space, and improves overall performance.

The @IndexedEmbedded attribute includePaths will include the specified paths in addition to what you would index normally when specifying a limited value for depth. When using includePaths and leaving depth undefined, the behavior is equivalent to setting depth=0: only the included paths are indexed.
Example 14.16. Using the includePaths property of @IndexedEmbedded
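A sketch combining depth with includePaths: name and surname are embedded up to depth 2, and additionally the name alone at the third level.

```java
@Entity
@Indexed
public class Human {

    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String name;

    @Field
    private String surname;

    // embed all indexed fields up to depth 2, plus the name only
    // at the third level via includePaths
    @OneToMany
    @IndexedEmbedded(depth = 2, includePaths = { "parents.parents.name" })
    private Set<Human> parents;

    @ContainedIn
    @ManyToOne
    private Human child;
}
```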
Using the mapping in Example 14.16, “Using the includePaths property of @IndexedEmbedded”, every human will have his name and surname attributes indexed. The name and surname of parents will also be indexed, recursively up to the second line because of the depth attribute. It will be possible to search by name or surname of the person directly, of his parents, or of his grandparents. Beyond the second level, we will in addition index one more level, but only the name, not the surname. The index will contain the following fields:

- id: as primary key
- _hibernate_class: stores the entity type
- name: as direct field
- surname: as direct field
- parents.name: as embedded field at depth 1
- parents.surname: as embedded field at depth 1
- parents.parents.name: as embedded field at depth 2
- parents.parents.surname: as embedded field at depth 2
- parents.parents.parents.name: as an additional path specified by includePaths. The first parents. is inferred from the field name, the remaining path is the attribute of includePaths
14.2.2. Boosting

14.2.2.1. Static Index Time Boosting

To define a static boost value for an indexed class or property, use the @Boost annotation. You can use this annotation within @Field or specify it directly at the method or class level.
Example 14.17. Different ways of using @Boost
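A sketch consistent with the explanation that follows (class boost 1.7, summary boosted 2.0 * 1.5, text boosted 1.2, isbn unboosted):

```java
@Entity
@Indexed
@Boost(1.7f)
public class Essay {

    @Id
    @DocumentId
    private Long id;

    // @Field.boost and @Boost on a property are cumulative: 2.0 * 1.5 = 3.0
    @Field(name = "Abstract", store = Store.YES, boost = @Boost(2f))
    @Boost(1.5f)
    private String summary;

    @Lob
    @Field(boost = @Boost(1.2f))
    private String text;

    @Field
    private String isbn;
}
```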
In this example, Essay's probability to reach the top of the search list will be multiplied by 1.7. The summary field will be 3.0 times (2.0 * 1.5, because @Field.boost and @Boost on a property are cumulative) more important than the isbn field. The text field will be 1.2 times more important than the isbn field. Note that this explanation is wrong in strictest terms, but it is simple and close enough to reality for all practical purposes.
14.2.2.2. Dynamic Index Time Boosting

The @Boost annotation used in Section 14.2.2.1, “Static Index Time Boosting” defines a static boost factor which is independent of the state of the indexed entity at runtime. However, there are use cases in which the boost factor may depend on the actual state of the entity. In this case you can use the @DynamicBoost annotation together with an accompanying custom BoostStrategy.
Example 14.18. Dynamic boost example
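A sketch of a class-level dynamic boost, assuming a Person entity with a type property distinguishing VIPs:

```java
@Entity
@Indexed
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {

    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private PersonType type;

    public PersonType getType() {
        return type;
    }
}

public class VIPBoostStrategy implements BoostStrategy {

    // the whole entity is passed in because the annotation is class-level
    public float defineBoost(Object value) {
        Person person = (Person) value;
        if (person.getType() == PersonType.VIP) {
            return 2.0f;
        }
        return 1.0f;
    }
}
```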
In Example 14.18, a dynamic boost is defined on the class level, specifying VIPBoostStrategy as the implementation of the BoostStrategy interface to be used at indexing time. You can place the @DynamicBoost either at the class or field level. Depending on the placement of the annotation, either the whole entity or just the annotated field/property value is passed to the defineBoost method. It is up to you to cast the passed object to the correct type. In the example, all indexed values of a VIP person would be twice as important as the values of a normal person.
Note
A BoostStrategy implementation must define a public no-arg constructor.
You can mix and match @Boost and @DynamicBoost annotations in your entity. All defined boost factors are cumulative.
14.2.3. Analysis
Analysis is the process of converting text into single terms (words) and can be considered as one of the key features of a full-text search engine. Lucene uses the concept of Analyzers to control this process. In the following section we cover the multiple ways Hibernate Search offers to configure the analyzers.
14.2.3.1. Default Analyzer and Analyzer by Class

The default analyzer class used to index tokenized fields is configurable through the hibernate.search.analyzer property. The default value for this property is org.apache.lucene.analysis.standard.StandardAnalyzer.
Example 14.19. Different ways of using @Analyzer
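A sketch of the three placement options; the analyzer class names (EntityAnalyzer, PropertyAnalyzer, FieldAnalyzer) are placeholders for your own implementations:

```java
@Entity
@Indexed
@Analyzer(impl = EntityAnalyzer.class)
public class MyEntity {

    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    // indexed with EntityAnalyzer, inherited from the class level
    @Field
    private String name;

    // overrides the class-level analyzer for this property
    @Field
    @Analyzer(impl = PropertyAnalyzer.class)
    private String summary;

    // overrides the analyzer for this particular field
    @Field(analyzer = @Analyzer(impl = FieldAnalyzer.class))
    private String body;
}
```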
In this example, EntityAnalyzer is used to index all tokenized properties (for example, name), except summary and body, which are indexed with PropertyAnalyzer and FieldAnalyzer respectively.
Warning

Mixing different analyzers in the same entity is, most of the time, a bad practice. It makes query building more complex and the results less predictable, especially if you are using a QueryParser, which uses the same analyzer for the whole query.
14.2.3.2. Named Analyzers

Analyzers can become quite complex to deal with. For this reason Hibernate Search introduces the notion of analyzer definitions. An analyzer definition can be reused by many @Analyzer declarations and is composed of:
- a name: the unique string used to refer to the definition
- a list of char filters: each char filter is responsible for pre-processing the input characters before tokenization. Char filters can add, change, or remove characters; one common usage is character normalization
- a tokenizer: responsible for tokenizing the input stream into individual words
- a list of filters: each filter is responsible for removing, modifying, or sometimes even adding words into the stream provided by the tokenizer
The tokenizer starts the tokenizing process by turning the character input into tokens, which are then further processed by the TokenFilters. Hibernate Search supports this infrastructure by utilizing the Solr analyzer framework.
Note
Some of the available analyzers and filters require additional dependencies. For example, to use the snowball stemmer you have to also include the lucene-snowball jar, and for the PhoneticFilterFactory you need the commons-codec jar. Your distribution of Hibernate Search provides these dependencies in its lib/optional directory.
Let us review a concrete example, Example 14.20, “@AnalyzerDef and the Solr framework”. First, a char filter is defined by its factory. In our example, a mapping char filter is used, which will replace characters in the input based on the rules specified in the mapping file. Next a tokenizer is defined. This example uses the standard tokenizer. Last but not least, a list of filters is defined by their factories. In our example, the StopFilter filter is built reading the dedicated words property file. The filter is also expected to ignore case.
Example 14.20. @AnalyzerDef and the Solr framework
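A sketch of such a definition; the analyzer name and the resource file paths are illustrative:

```java
@AnalyzerDef(name = "customanalyzer",
    charFilters = {
        // replace characters according to the rules in the mapping file
        @CharFilterDef(factory = MappingCharFilterFactory.class, params = {
            @Parameter(name = "mapping", value = "mapping-chars.properties")
        })
    },
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = ISOLatin1AccentFilterFactory.class),
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        // remove stop words listed in the dedicated property file, ignoring case
        @TokenFilterDef(factory = StopFilterFactory.class, params = {
            @Parameter(name = "words", value = "stoplist.properties"),
            @Parameter(name = "ignoreCase", value = "true")
        })
    })
public class Team {
    // ...
}
```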
Note
Filters and char filters are applied in the order they are defined in the @AnalyzerDef annotation. Order matters!
Some filters, such as StopFilterFactory, load resource property files. By default these files are loaded using the UTF-8 charset. To load a property file using a different charset, use the resource_charset parameter.
Example 14.21. Use a specific charset to load the property file
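A sketch, assuming the stop word file is encoded in UTF-16BE (the file name is illustrative):

```java
@AnalyzerDef(name = "customanalyzer",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = StopFilterFactory.class, params = {
            @Parameter(name = "words", value = "stoplist.properties"),
            // load the stop word file using a specific charset
            @Parameter(name = "resource_charset", value = "UTF-16BE"),
            @Parameter(name = "ignoreCase", value = "true")
        })
    })
public class Team {
    // ...
}
```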
Once defined, an analyzer definition can be referenced by an @Analyzer declaration, as seen in Example 14.22, “Referencing an analyzer by name”.
Example 14.22. Referencing an analyzer by name
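A sketch of referencing the definition by name rather than by implementation class:

```java
@Entity
@Indexed
@AnalyzerDef(name = "customanalyzer",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class)
    })
public class Team {

    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field
    private String name;

    // references the analyzer definition declared above by its name
    @Field
    @Analyzer(definition = "customanalyzer")
    private String description;
}
```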
Analyzer definitions declared by @AnalyzerDef are also available by their name in the SearchFactory, which is quite useful when building queries.
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer("customanalyzer");
14.2.3.3. Available Analyzers
Available char filters:

| Factory | Description | Parameters | Additional dependencies |
|---|---|---|---|
| MappingCharFilterFactory | Replaces one or more characters with one or more characters, based on mappings specified in the resource file | mapping: points to a resource file containing the mappings | none |
| HTMLStripCharFilterFactory | Remove HTML standard tags, keeping the text | none | none |

Available tokenizers:

| Factory | Description | Parameters | Additional dependencies |
|---|---|---|---|
| StandardTokenizerFactory | Use the Lucene StandardTokenizer | none | none |
| HTMLStripCharFilterFactory | Remove HTML tags, keep the text and pass it to a StandardTokenizer | none | solr-core |
| PatternTokenizerFactory | Breaks text at the specified regular expression pattern | pattern: the regular expression to use for tokenizing; group: says which pattern group to extract into tokens | solr-core |

Available token filters:

| Factory | Description | Parameters | Additional dependencies |
|---|---|---|---|
| StandardFilterFactory | Remove dots from acronyms and 's from words | none | solr-core |
| LowerCaseFilterFactory | Lowercases all words | none | solr-core |
| StopFilterFactory | Remove words (tokens) matching a list of stop words | words: points to a resource file containing the stop words; ignoreCase: true if case should be ignored when comparing stop words, false otherwise | solr-core |
| SnowballPorterFilterFactory | Reduces a word to its root in a given language (for example, protect, protects, and protection share the same root). Using such a filter allows searches to match related words | language: Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Russian, Spanish, Swedish and a few more | solr-core |
| ISOLatin1AccentFilterFactory | Remove accents for languages like French | none | solr-core |
| PhoneticFilterFactory | Inserts phonetically similar tokens into the token stream | encoder: one of DoubleMetaphone, Metaphone, Soundex or RefinedSoundex; inject: true will add tokens to the stream, false will replace the existing token; maxCodeLength: sets the maximum length of the code to be generated, supported only for Metaphone and DoubleMetaphone encodings | solr-core and commons-codec |
| CollationKeyFilterFactory | Converts each token into its java.text.CollationKey and then encodes it with IndexableBinaryStringTools, to allow it to be stored as an index term | custom, language, country, variant, strength, decomposition; for more information, see Lucene's CollationKeyFilter javadocs | solr-core and commons-io |
We recommend checking the implementations of org.apache.solr.analysis.TokenizerFactory and org.apache.solr.analysis.TokenFilterFactory in your IDE to see the implementations available.
14.2.3.4. Dynamic Analyzer Selection

So far, all the presented ways to specify an analyzer have been static. However, there are use cases where it is useful to select an analyzer depending on the current state of the entity to be indexed. In the BlogEntry class, for example, the analyzer could depend on the language property of the entry. Depending on this property, the correct language-specific stemmer should be chosen to index the actual text.

To enable this dynamic analyzer selection, Hibernate Search introduces the AnalyzerDiscriminator annotation. Example 14.23, “Usage of @AnalyzerDiscriminator” demonstrates the usage of this annotation.
Example 14.23. Usage of @AnalyzerDiscriminator
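A sketch, assuming "en" and "de" analyzer definitions built on stemming filter factories from the Solr framework:

```java
@Entity
@Indexed
@AnalyzerDefs({
    @AnalyzerDef(name = "en",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = { @TokenFilterDef(factory = EnglishPorterFilterFactory.class) }),
    @AnalyzerDef(name = "de",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = { @TokenFilterDef(factory = GermanStemFilterFactory.class) })
})
public class BlogEntry {

    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    // the analyzer is selected per document based on this property's value
    @Field
    @AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
    private String language;

    @Field
    private String text;
}

public class LanguageDiscriminator implements Discriminator {

    // value holds the language property since the annotation is on property level
    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        if (value == null || !(entity instanceof BlogEntry)) {
            return null; // fall back to the default analyzer
        }
        return (String) value; // "en" or "de"
    }
}
```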
The prerequisite for using @AnalyzerDiscriminator is that all analyzers which are going to be used dynamically are predefined via @AnalyzerDef definitions. If this is the case, one can place the @AnalyzerDiscriminator annotation either on the class or on a specific property of the entity for which to dynamically select an analyzer. Via the impl parameter of the @AnalyzerDiscriminator you specify a concrete implementation of the Discriminator interface. It is up to you to provide an implementation for this interface. The only method you have to implement is getAnalyzerDefinitionName(), which gets called for each field added to the Lucene document. The entity which is getting indexed is also passed to the interface method. The value parameter is only set if the @AnalyzerDiscriminator is placed on the property level instead of the class level. In this case the value represents the current value of this property.
The implementation of the Discriminator interface has to return the name of an existing analyzer definition, or null if the default analyzer should not be overridden. Example 14.23, “Usage of @AnalyzerDiscriminator” assumes that the language parameter is either 'de' or 'en', which matches the specified names in the @AnalyzerDefs.
14.2.3.5. Retrieving an Analyzer
Note
Example 14.24. Using the scoped analyzer when building a full-text query
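A sketch of such a query, assuming a Song entity whose title is mapped both to title (standard analyzer) and title_stemmed (stemming analyzer); depending on your Lucene version, the QueryParser constructor may additionally require a Version argument:

```java
// the scoped analyzer applies, per field, the analyzer used at index time
SearchFactory searchFactory = fullTextSession.getSearchFactory();
Analyzer songAnalyzer = searchFactory.getAnalyzer(Song.class);

QueryParser parser = new QueryParser("title", songAnalyzer);
org.apache.lucene.search.Query luceneQuery =
        parser.parse("title:sky OR title_stemmed:diamond");

org.hibernate.Query fullTextQuery =
        fullTextSession.createFullTextQuery(luceneQuery, Song.class);
List result = fullTextQuery.list(); // the list of matching songs
```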
In this example, the standard analyzer is used for the field title and a stemming analyzer is used for the field title_stemmed. By using the analyzer provided by the search factory, the query uses the appropriate analyzer depending on the field targeted.
Note
Analyzers defined via @AnalyzerDef can also be retrieved by their definition name using searchFactory.getAnalyzer(String).
14.2.4. Bridges

In Lucene, all index fields have to be represented as strings. Consequently, all entity properties annotated with @Field have to be converted to strings to be indexed. The reason we have not mentioned this so far is that, for most of your properties, Hibernate Search does the translation job for you thanks to a set of built-in bridges. However, in some cases you need more fine-grained control over the translation process.
14.2.4.1. Built-in Bridges
- null: By default, null elements are not indexed; Lucene does not support null elements. However, in some situations it can be useful to insert a custom token representing the null value. See Section 14.2.1.1.2, “@Field” for more information.
- java.lang.String: Strings are indexed as they are.
- short, Short, integer, Integer, long, Long, float, Float, double, Double, BigInteger, BigDecimal: Numbers are converted into their string representation. Note that numbers cannot be compared by Lucene (that is, used in ranged queries) out of the box: they have to be padded.

Note

Using a Range query has drawbacks; an alternative approach is to use a Filter query which will filter the result query to the appropriate range. Hibernate Search also supports the use of a custom StringBridge as described in Section 14.2.4.2, “Custom Bridges”.

- java.util.Date: Dates are stored as yyyyMMddHHmmssSSS in GMT time (200611072203012 for Nov 7th of 2006, 4:03PM and 12ms EST). You shouldn't really bother with the internal format. What is important is that when using a TermRangeQuery, you should know that the dates have to be expressed in GMT time. Usually, storing the date up to the millisecond is not necessary. @DateBridge defines the appropriate resolution you are willing to store in the index (for example, @DateBridge(resolution=Resolution.DAY)). The date pattern will then be truncated accordingly.

Warning

A Date whose resolution is lower than MILLISECOND cannot be a @DocumentId.

Important

The default Date bridge uses Lucene's DateTools to convert from and to String. This means that all dates are expressed in GMT time. If your requirements are to store dates in a fixed time zone, you have to implement a custom date bridge. Make sure you understand the requirements of your application regarding date indexing and searching.

- java.net.URI, java.net.URL: URI and URL are converted to their string representation.
- java.lang.Class: Classes are converted to their fully qualified class name. The thread context class loader is used when the class is rehydrated.
14.2.4.2. Custom Bridges

14.2.4.2.1. StringBridge

The simplest custom solution is to give Hibernate Search an implementation of your expected Object to String bridge. To do so you need to implement the org.hibernate.search.bridge.StringBridge interface. All implementations have to be thread-safe as they are used concurrently.
Example 14.25. Custom StringBridge implementation
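A sketch of such a bridge: it pads an integer so that lexicographic order in the index matches numeric order (the padding width is illustrative):

```java
import org.hibernate.search.bridge.StringBridge;

/* Pads an integer to 5 digits so that the lexicographic order of the
 * indexed strings matches the numeric order of the values. */
public class PaddedIntegerBridge implements StringBridge {

    private int padding = 5;

    public String objectToString(Object object) {
        String rawInteger = ((Integer) object).toString();
        if (rawInteger.length() > padding) {
            throw new IllegalArgumentException("Number too big to be padded");
        }
        StringBuilder paddedInteger = new StringBuilder();
        // prepend zeroes until the target width is reached
        for (int padIndex = rawInteger.length(); padIndex < padding; padIndex++) {
            paddedInteger.append('0');
        }
        return paddedInteger.append(rawInteger).toString();
    }
}
```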
Given the custom bridge in Example 14.25, “Custom StringBridge implementation”, any property or field can use it thanks to the @FieldBridge annotation:
@FieldBridge(impl = PaddedIntegerBridge.class)
private Integer length;
14.2.4.2.2. Parameterized Bridge

Parameters can also be passed to the bridge implementation, making it more flexible. The bridge implementation implements the ParameterizedBridge interface, and parameters are passed through the @FieldBridge annotation.
Example 14.26. Passing parameters to your bridge implementation
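A sketch extending the padding bridge with a configurable width; the parameter name is illustrative:

```java
import java.util.Map;

import org.hibernate.search.bridge.ParameterizedBridge;
import org.hibernate.search.bridge.StringBridge;

public class PaddedIntegerBridge implements StringBridge, ParameterizedBridge {

    public static final String PADDING_PROPERTY = "padding";
    private int padding = 5; // default

    // called by Hibernate Search with the params declared on @FieldBridge
    public void setParameterValues(Map<String, String> parameters) {
        String padding = parameters.get(PADDING_PROPERTY);
        if (padding != null) {
            this.padding = Integer.parseInt(padding);
        }
    }

    public String objectToString(Object object) {
        String rawInteger = ((Integer) object).toString();
        if (rawInteger.length() > padding) {
            throw new IllegalArgumentException("Number too big to be padded");
        }
        StringBuilder paddedInteger = new StringBuilder();
        for (int padIndex = rawInteger.length(); padIndex < padding; padIndex++) {
            paddedInteger.append('0');
        }
        return paddedInteger.append(rawInteger).toString();
    }
}
```

The parameters are then declared where the bridge is used:

```java
@FieldBridge(impl = PaddedIntegerBridge.class,
             params = @Parameter(name = "padding", value = "10"))
private Integer length;
```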
The ParameterizedBridge interface can be implemented by StringBridge, TwoWayStringBridge, and FieldBridge implementations.
14.2.4.2.3. Type Aware Bridge

It is sometimes useful to get the base type the bridge is applied on:

- the return type of the property for field/getter-level bridges
- the class type for class-level bridges

A bridge implementing AppliedOnTypeAwareBridge will get the type the bridge is applied on injected. Like parameters, the type injected needs no particular care with regard to thread-safety.
14.2.4.2.4. Two-Way Bridge

If you expect to use your bridge implementation on an id property (that is, annotated with @DocumentId), you need to use a slightly extended version of StringBridge named TwoWayStringBridge. Hibernate Search needs to read the string representation of the identifier and generate the object out of it. There is no difference in the way the @FieldBridge annotation is used.
Example 14.27. Implementing a TwoWayStringBridge usable for id properties
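A sketch: the padding bridge is extended with the reverse conversion so it can be used on an id property:

```java
import java.util.Map;

import org.hibernate.search.bridge.ParameterizedBridge;
import org.hibernate.search.bridge.TwoWayStringBridge;

public class PaddedIntegerBridge implements TwoWayStringBridge, ParameterizedBridge {

    public static final String PADDING_PROPERTY = "padding";
    private int padding = 5; // default

    public void setParameterValues(Map<String, String> parameters) {
        String padding = parameters.get(PADDING_PROPERTY);
        if (padding != null) {
            this.padding = Integer.parseInt(padding);
        }
    }

    public String objectToString(Object object) {
        String rawInteger = ((Integer) object).toString();
        if (rawInteger.length() > padding) {
            throw new IllegalArgumentException("Number too big to be padded");
        }
        StringBuilder paddedInteger = new StringBuilder();
        for (int padIndex = rawInteger.length(); padIndex < padding; padIndex++) {
            paddedInteger.append('0');
        }
        return paddedInteger.append(rawInteger).toString();
    }

    // converts the indexed string representation back into the id object
    public Object stringToObject(String stringValue) {
        return Integer.valueOf(stringValue);
    }
}
```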
Important

It is important for the two-way process to be idempotent (that is, object = stringToObject(objectToString(object))).
14.2.4.2.5. FieldBridge

Some use cases require more than a simple object-to-string translation when mapping a property to a Lucene index. To gain this flexibility you can implement the FieldBridge interface. This interface gives you a property value and lets you map it the way you want in your Lucene Document. You can, for example, store a property in two different document fields. The interface is very similar in its concept to the Hibernate UserTypes.
Example 14.28. Implementing the FieldBridge Interface
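A sketch: the bridge splits a Date into separate year, month, and day fields so that each component can be searched independently (the field name suffixes are illustrative):

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.TimeZone;

import org.apache.lucene.document.Document;
import org.hibernate.search.bridge.FieldBridge;
import org.hibernate.search.bridge.LuceneOptions;

/* Splits a date into year, month, and day fields, enabling queries
 * on each component separately. */
public class DateSplitBridge implements FieldBridge {

    private static final TimeZone GMT = TimeZone.getTimeZone("GMT");

    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        Date date = (Date) value;
        Calendar cal = GregorianCalendar.getInstance(GMT);
        cal.setTime(date);
        int year = cal.get(Calendar.YEAR);
        int month = cal.get(Calendar.MONTH) + 1;
        int day = cal.get(Calendar.DAY_OF_MONTH);

        // delegate to LuceneOptions so the @Field settings are honored
        luceneOptions.addFieldToDocument(name + ".year", String.valueOf(year), document);
        luceneOptions.addFieldToDocument(name + ".month", String.valueOf(month), document);
        luceneOptions.addFieldToDocument(name + ".day", String.valueOf(day), document);
    }
}
```

The bridge is then applied like any other field bridge:

```java
@FieldBridge(impl = DateSplitBridge.class)
private Date date;
```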
It is recommended that you use the LuceneOptions helper; it will apply the options you have selected on @Field, like Store or TermVector, or apply the chosen @Boost value. It is especially useful to encapsulate the complexity of COMPRESS implementations. Even though it is recommended to delegate to LuceneOptions to add fields to the Document, nothing stops you from editing the Document directly and ignoring the LuceneOptions in case you need to.
Note
LuceneOptions are created to shield your application from changes in Lucene API and simplify your code. Use them if you can, but if you need more flexibility you're not required to.
14.2.4.2.6. ClassBridge

The @ClassBridge and @ClassBridges annotations can be defined at the class level, as opposed to the property level. In this case the custom field bridge implementation receives the entity instance as the value parameter, instead of a particular property. Though not shown in Example 14.29, “Implementing a class bridge”, @ClassBridge supports the termVector attribute discussed in Section 14.2.1.1, “Basic Mapping”.
Example 14.29. Implementing a class bridge
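A sketch: a class-level bridge concatenates two properties of a Department entity into a single indexed field (the entity, getters, and separator parameter are illustrative):

```java
@Entity
@Indexed
@ClassBridge(name = "branchnetwork",
             store = Store.YES,
             impl = CatFieldsClassBridge.class,
             params = @Parameter(name = "sepChar", value = " "))
public class Department {

    @Id
    @GeneratedValue
    private Integer id;

    private String network;
    private String branch;

    public String getNetwork() { return network; }
    public String getBranch() { return branch; }
}

public class CatFieldsClassBridge implements FieldBridge, ParameterizedBridge {

    private String sepChar;

    public void setParameterValues(Map<String, String> parameters) {
        this.sepChar = parameters.get("sepChar");
    }

    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        // the value parameter is the entire Department instance
        Department dep = (Department) value;
        String branch = dep.getBranch() == null ? "" : dep.getBranch();
        String network = dep.getNetwork() == null ? "" : dep.getNetwork();
        // index the concatenation under the field name declared on @ClassBridge
        luceneOptions.addFieldToDocument(name, branch + sepChar + network, document);
    }
}
```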
In this example, the CatFieldsClassBridge is applied to the department instance; the field bridge then concatenates both branch and network and indexes the concatenation.