Developing Hibernate Applications

Red Hat JBoss Enterprise Application Platform 7.3

Instructions and information for developers and administrators who want to develop and deploy Jakarta Persistence API (JPA) or Hibernate applications for Red Hat JBoss Enterprise Application Platform.

Red Hat Customer Content Services

Abstract

This document provides information for developers and administrators who want to develop and deploy Jakarta Persistence or Hibernate applications with Red Hat JBoss Enterprise Application Platform.

Chapter 1. Introduction

1.1. About Hibernate Core

Hibernate Core is an object-relational mapping framework for the Java language. It provides a framework for mapping an object-oriented domain model to a relational database, allowing applications to avoid direct interaction with the database. Hibernate solves object-relational impedance mismatch problems by replacing direct, persistent database accesses with high-level object handling functions.

1.2. Hibernate EntityManager

Hibernate EntityManager implements the programming interfaces and lifecycle rules defined by the Jakarta Persistence 2.2 specification. Together with Hibernate Annotations, this wrapper implements a standalone Jakarta Persistence solution on top of the mature Hibernate Core. You can use a combination of all three together (Hibernate Core, Hibernate Annotations, and Hibernate EntityManager), annotations without the Jakarta Persistence programming interfaces and lifecycle, or even pure native Hibernate Core, depending on the business and technical needs of your project. You can at all times fall back to Hibernate native APIs, or, if required, even to native JDBC and SQL. Hibernate EntityManager provides JBoss EAP with a complete Jakarta Persistence solution.

The 7.3 release of JBoss EAP is compliant with the Jakarta Persistence 2.2 specification defined in Jakarta EE 8.

Hibernate also provides additional features to the specification. To get started with Jakarta Persistence and JBoss EAP, see the bean-validation, greeter, and kitchensink quickstarts that ship with JBoss EAP.

Jakarta Persistence is available in containers like Jakarta Enterprise Beans 3 or the more modern Jakarta Contexts and Dependency Injection, as well as in standalone Java SE applications that execute outside of a particular container. The following programming interfaces and artifacts are available in both environments; a minimal Java SE bootstrap sketch follows the list.

Important

If you plan to use a security manager with Hibernate, be aware that Hibernate supports it only when EntityManagerFactory is bootstrapped by the JBoss EAP server. It is not supported when the EntityManagerFactory or SessionFactory is bootstrapped by the application.

EntityManagerFactory
An entity manager factory provides entity manager instances. All instances are configured to connect to the same database and to use the same default settings as defined by the particular implementation. You can prepare several entity manager factories to access several data stores. This interface is similar to the SessionFactory in native Hibernate.
EntityManager
The EntityManager API is used to access a database in a particular unit of work. It is used to create and remove persistent entity instances, to find entities by their primary key identity, and to query over all entities. This interface is similar to the Session in Hibernate.
Persistence context
A persistence context is a set of entity instances in which for any persistent entity identity there is a unique entity instance. Within the persistence context, the entity instances and their lifecycle are managed by a particular entity manager. The scope of this context can either be the transaction, or an extended unit of work.
Persistence unit
The set of entity types that can be managed by a given entity manager is defined by a persistence unit. A persistence unit defines the set of all classes that are related or grouped by the application, and which must be collocated in their mapping to a single data store.
Container-managed entity manager
An entity manager whose lifecycle is managed by the container.
Application-managed entity manager
An entity manager whose lifecycle is managed by the application.
Jakarta Transactions entity manager
Entity manager involved in a Jakarta Transactions transaction.
Resource-local entity manager
Entity manager using a resource transaction (not a Jakarta Transactions transaction).
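
The interfaces above are most visible in a standalone Java SE application, where the application bootstraps the entity manager factory itself. The following is a minimal sketch of that bootstrap, assuming a persistence unit named "myapp" is defined in persistence.xml; the unit name and the work done inside the transaction are illustrative only.

Example: Bootstrapping an Application-Managed Entity Manager in Java SE

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Bootstrap {
    public static void main(String[] args) {
        // Create the factory once per data store; it is expensive to build and safe to share.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myapp");
        // Each EntityManager represents a single unit of work and is not thread-safe.
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // ... persist, find, or query entities here ...
            em.getTransaction().commit();
        } finally {
            em.close();
            emf.close();
        }
    }
}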

Additional Resources

  • For information about how to download and run the quickstarts, see Using the Quickstart Examples in the JBoss EAP Getting Started Guide.
  • For more information about security managers, see Java Security Manager in the JBoss EAP How to Configure Server Security.

Chapter 2. Hibernate Configuration

2.1. Hibernate Configuration

The configuration for entity managers, both inside an application server and in a standalone application, resides in a persistence archive. A persistence archive is a JAR file which must define a persistence.xml file that resides in the META-INF/ folder.

You can connect to the database using the persistence.xml file. There are two ways of doing this:

  • Specifying a data source which is configured in the datasources subsystem in JBoss EAP.

    The jta-data-source element points to the Java Naming and Directory Interface name of the data source this persistence unit maps to. The java:jboss/datasources/ExampleDS data source here points to the H2 database embedded in JBoss EAP.

    Example of Specifying a Data Source in the persistence.xml File

    <persistence>
       <persistence-unit name="myapp">
          <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
          <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
          <properties>
             ... ...
          </properties>
       </persistence-unit>
    </persistence>

  • Explicitly configuring the persistence.xml file by specifying the connection properties.

    Example of Specifying Connection Properties in the persistence.xml file

    <property name="javax.persistence.jdbc.driver" value="org.hsqldb.jdbcDriver"/>
    <property name="javax.persistence.jdbc.user" value="sa"/>
    <property name="javax.persistence.jdbc.password" value=""/>
    <property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:."/>

    For the complete list of connection properties, see Connection Properties Configurable in the persistence.xml File.

There are a number of properties that control the behavior of Hibernate at runtime. All are optional and have reasonable default values. These Hibernate properties are all used in the persistence.xml file. For the complete list of all configurable Hibernate properties, see Hibernate Properties.
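
In addition to setting these properties in the persistence.xml file, they can be supplied programmatically when the application bootstraps the entity manager factory itself. The following is a minimal sketch; the persistence unit name "myapp" and the chosen property values are illustrative assumptions, not required settings.

Example: Passing Hibernate Properties Programmatically

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class PropertiesExample {
    public static void main(String[] args) {
        Map<String, String> properties = new HashMap<>();
        // All Hibernate properties are optional and have reasonable defaults.
        properties.put("hibernate.show_sql", "true");       // log the generated SQL
        properties.put("hibernate.format_sql", "true");     // pretty-print the logged SQL
        properties.put("hibernate.hbm2ddl.auto", "update"); // schema management strategy

        // Properties passed here take precedence over the values in persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myapp", properties);
        emf.close();
    }
}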

2.2. Second-Level Caches

2.2.1. About Second-level Caches

A second-level cache is a local data store that holds information persisted outside the application session. The cache is managed by the persistence provider and improves performance by keeping data available to the application without repeated trips to the database.

JBoss EAP supports caching for the following purposes:

  • Web Session Clustering
  • Stateful Session Bean Clustering
  • SSO Clustering
  • Hibernate Second-level Cache
  • Jakarta Persistence Second-level Cache
Warning

Each cache container defines a repl and a dist cache. These caches should not be used directly by user applications.

2.2.2. Configure a Second-level Cache for Hibernate

Infinispan can be configured to act as the second-level cache for Hibernate in more than one way. The following procedure covers Hibernate native applications; an entity-level caching sketch follows the procedure.

Configuring a Second-level Cache for Hibernate Using Hibernate Native Applications
  1. Create the hibernate.cfg.xml file in the deployment’s class path.
  2. Add the following XML to the hibernate.cfg.xml file. The XML needs to be within the <session-factory> tag:

    <property name="hibernate.cache.use_second_level_cache">true</property>
    <property name="hibernate.cache.use_query_cache">true</property>
    <property name="hibernate.cache.region.factory_class">org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory</property>
  3. In order to use the Hibernate native APIs within your application, you must add the following dependencies to the MANIFEST.MF file:

    Dependencies: org.infinispan,org.hibernate
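
With the second-level cache enabled, individual entities still need to opt in to caching. The following is a minimal sketch using the standard @Cacheable annotation together with Hibernate's @Cache annotation to choose a concurrency strategy; the Country entity and the region name are assumptions used for illustration.

Example: Marking an Entity as Cacheable

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "country-cache")
public class Country {

    @Id
    private String isoCode;

    private String displayName;

    // getters and setters omitted
}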

Chapter 3. Hibernate Annotations

3.1. Hibernate Annotations

The org.hibernate.annotations package contains some annotations which are offered by Hibernate, on top of the standard Jakarta Persistence annotations.

Table 3.1. General Annotations
Annotation | Description

Check

Arbitrary SQL check constraints which can be defined at the class, property or collection level.

Immutable

Mark an Entity or a Collection as immutable. No annotation means the element is mutable.

An immutable entity may not be updated by the application. Updates to an immutable entity are ignored, but no exception is thrown (see the sketch after this table).

@Immutable placed on a collection makes the collection immutable, meaning additions and deletions to and from the collection are not allowed. A HibernateException is thrown in this case.
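
The following is a minimal sketch of an immutable entity; the CurrencyRate entity and its attributes are assumptions used only to illustrate the annotation.

Example: An Immutable Entity

import java.math.BigDecimal;

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Immutable;

@Entity
@Immutable
public class CurrencyRate {

    @Id
    private Long id;

    private String currencyCode;

    private BigDecimal rate;

    // getters omitted; updates made by the application to instances of this entity are ignored
}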

Table 3.2. Caching Entities
Annotation | Description

Cache

Add caching strategy to a root entity or a collection.

Table 3.3. Collection Related Annotations
Annotation | Description

MapKeyType

Defines the type of key of a persistent map.

ManyToAny

Defines a ToMany association pointing to different entity types. Matching the entity type is done through a metadata discriminator column. Use this kind of mapping only in exceptional cases.

OrderBy

Order a collection using SQL ordering (not HQL ordering).

OnDelete

Strategy to use on collections, arrays and on joined subclasses delete. OnDelete of secondary tables is currently not supported.

Persister

Specify a custom persister.

Sort

Collection sort (Java level sorting).

Where

Where clause to add to the element Entity or target entity of a collection. The clause is written in SQL.

WhereJoinTable

Where clause to add to the collection join table. The clause is written in SQL.

Table 3.4. Custom SQL for CRUD Operations
Annotation | Description

Loader

Overwrites Hibernate default FIND method.

SQLDelete

Overwrites the Hibernate default DELETE method.

SQLDeleteAll

Overwrites the Hibernate default DELETE ALL method.

SQLInsert

Overwrites the Hibernate default INSERT INTO method.

SQLUpdate

Overwrites the Hibernate default UPDATE method.

Subselect

Maps an immutable and read-only entity to a given SQL subselect expression.

Synchronize

Ensures that auto-flush happens correctly and that queries against the derived entity do not return stale data. Mostly used with Subselect.

Table 3.5. Entity
Annotation | Description

Cascade

Apply a cascade strategy on an association.

Entity

Adds additional metadata that may be needed beyond what is defined in the standard @Entity.

  • mutable: whether this entity is mutable or not
  • dynamicInsert: allow dynamic SQL for inserts
  • dynamicUpdate: allow dynamic SQL for updates
  • selectBeforeUpdate: Specifies that Hibernate should never perform an SQL UPDATE unless it is certain that an object is actually modified.
  • polymorphism: whether the entity polymorphism is of PolymorphismType.IMPLICIT (default) or PolymorphismType.EXPLICIT
  • optimisticLock: optimistic locking strategy (OptimisticLockType.VERSION, OptimisticLockType.NONE, OptimisticLockType.DIRTY or OptimisticLockType.ALL)

    Note

    The Entity annotation is deprecated and scheduled for removal in future releases. Its individual attributes and values are being replaced by dedicated annotations.

Polymorphism

Used to define the type of polymorphism Hibernate will apply to entity hierarchies.

Proxy

Lazy and proxy configuration of a particular class.

Table

Complementary information to a table either primary or secondary.

Tables

Plural annotation of Table.

Target

Defines an explicit target, avoiding reflection and generics resolving.

Tuplizer

Defines a tuplizer for an entity or a component.

Tuplizers

Defines a set of tuplizers for an entity or a component.

Table 3.6. Fetching
Annotation | Description

BatchSize

Batch size for SQL loading.

FetchProfile

Defines the fetching strategy profile.

FetchProfiles

Plural annotation for @FetchProfile.

LazyGroup

Specifies that an entity attribute should be fetched along with all the other attributes belonging to the same group. In order to load entity attributes lazily, bytecode enhancement is needed. By default, all non-collection attributes are loaded in one group named DEFAULT. This annotation allows defining different groups of attributes to be initialized together when accessing one attribute in the group.

Table 3.7. Filters
Annotation | Description

Filter

Adds filters to an entity or a target entity of a collection.

FilterDef

Filter definition.

FilterDefs

Array of filter definitions.

FilterJoinTable

Adds filters to a join table collection.

FilterJoinTables

Adds multiple @FilterJoinTable to a collection.

Filters

Adds multiple @Filter.

ParamDef

A parameter definition.

Table 3.8. Primary Keys
Annotation | Description

Generated

This annotated property is generated by the database.

GenericGenerator

Generator annotation describing any kind of Hibernate generator in a detyped manner.

GenericGenerators

Array of generic generator definitions.

NaturalId

Specifies that a property is part of the natural id of the entity.

Parameter

Key/value pattern.

RowId

Support for ROWID mapping feature of Hibernate.

Table 3.9. Inheritance
Annotation | Description

DiscriminatorFormula

Discriminator formula to be placed at the root entity.

DiscriminatorOptions

Optional annotation to express Hibernate specific discriminator properties.

MetaValue

Maps a given discriminator value to the corresponding entity type.

Table 3.10. Mapping JP-QL/HQL Queries
Annotation | Description

NamedNativeQueries

Extends NamedNativeQueries to hold Hibernate NamedNativeQuery objects.

NamedNativeQuery

Extends NamedNativeQuery with Hibernate features.

NamedQueries

Extends NamedQueries to hold Hibernate NamedQuery objects.

NamedQuery

Extends NamedQuery with Hibernate features.

Table 3.11. Mapping Simple Properties
Annotation | Description

AccessType

Property access type.

Columns

Support an array of columns. Useful for component user type mappings.

ColumnTransformer

Custom SQL expression used to read the value from and write a value to a column. Use for direct object loading/saving as well as queries. The write expression must contain exactly one '?' placeholder for the value.

ColumnTransformers

Plural annotation for @ColumnTransformer. Useful when more than one column is using this behavior.

Table 3.12. Property
Annotation | Description

Formula

To be used as a replacement for @Column in most places. The formula has to be a valid SQL fragment.

Index

Defines a database index.

JoinFormula

To be used as a replacement for @JoinColumn in most places. The formula has to be a valid SQL fragment.

Parent

Reference the property as a pointer back to the owner (generally the owning entity).

Type

Hibernate type.

TypeDef

Hibernate type definition.

TypeDefs

Hibernate type definition array.

Table 3.13. Single Association Related Annotations
Annotation | Description

Any

Defines a ToOne association pointing to several entity types. Matching the entity type is done through a metadata discriminator column. Use this kind of mapping only in exceptional cases.

AnyMetaDef

Defines @Any and @ManyToAny metadata.

AnyMetaDefs

Defines @Any and @ManyToAny set of metadata. Can be defined at the entity level or the package level.

Fetch

Defines the fetching strategy used for the given association.

LazyCollection

Defines the lazy status of a collection.

LazyToOne

Defines the lazy status of a ToOne association (i.e. OneToOne or ManyToOne).

NotFound

Action to do when an element is not found on an association.

Table 3.14. Optimistic Locking
Annotation | Description

OptimisticLock

Whether or not a change of the annotated property will trigger an entity version increment. If the annotation is not present, the property is involved in the optimistic lock strategy (default).

OptimisticLocking

Used to define the style of optimistic locking to be applied to an entity. In a hierarchy, only valid on the root entity.

Source

Optional annotation in conjunction with Version and timestamp version properties. The annotation value decides where the timestamp is generated.

Chapter 4. Hibernate Query Language

4.1. About Hibernate Query Language

Introduction to Java Persistence query language

The Java Persistence query language is a platform-independent object-oriented query language defined as part of the Java Persistence API specification. The Jakarta equivalent of Java Persistence query language is Jakarta Persistence query language, and it is defined as part of the Jakarta Persistence specification.

Java Persistence query language is used to make queries against entities stored in a relational database. It is heavily inspired by SQL, and its queries resemble SQL queries in syntax, but operate against Java Persistence API entity objects rather than directly with database tables.

Introduction to HQL

The Hibernate Query Language (HQL) is a powerful query language, similar in appearance to SQL. Compared with SQL, however, HQL is fully object-oriented and understands notions like inheritance, polymorphism and association.

HQL is a superset of Java Persistence query language. An HQL query is not always a valid Java Persistence query language query, but a Java Persistence query language query is always a valid HQL query.

Both HQL and Java Persistence query language are non-type-safe ways to perform query operations. Criteria queries offer a type-safe approach to querying.
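
For comparison, the following is a minimal sketch of a type-safe criteria query using the standard Criteria API; the assumption that the queried entity has a singular "name" attribute is made purely for illustration.

Example: A Type-Safe Criteria Query

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

public class CriteriaExample {

    // Equivalent to: select e from <entityType> e where e.name = :name
    public <T> List<T> findByName(EntityManager em, Class<T> entityType, String name) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<T> query = cb.createQuery(entityType);
        Root<T> root = query.from(entityType);
        query.select(root).where(cb.equal(root.get("name"), name));
        return em.createQuery(query).getResultList();
    }
}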

4.2. About HQL Statements

Both HQL and Java Persistence query language allow SELECT, UPDATE, and DELETE statements. HQL additionally allows INSERT statements, in a form similar to a SQL INSERT-SELECT.

The following table shows the syntax in Backus-Naur Form (BNF) notation for the various HQL statements.

Table 4.1. HQL Statements
Statement | Description

SELECT

The BNF for SELECT statements in HQL is:

select_statement ::=
        [select_clause]
        from_clause
        [where_clause]
        [groupby_clause]
        [having_clause]
        [orderby_clause]

UPDATE

The BNF for UPDATE statements in HQL is the same as it is in Java Persistence query language.

update_statement ::= update_clause [where_clause]

update_clause ::= UPDATE entity_name [[AS] identification_variable]
        SET update_item {, update_item}*

update_item ::= [identification_variable.]{state_field | single_valued_object_field}
        = new_value

new_value ::= scalar_expression |
                simple_entity_expression |
                NULL

DELETE

The BNF for DELETE statements in HQL is the same as it is in Java Persistence query language.

delete_statement ::= delete_clause [where_clause]

delete_clause ::= DELETE FROM entity_name [[AS] identification_variable]

INSERT

The BNF for INSERT statements in HQL is:

insert_statement ::= insert_clause select_statement

insert_clause ::= INSERT INTO entity_name (attribute_list)

attribute_list ::= state_field[, state_field ]*

There is no Java Persistence query language equivalent to this.

Warning

Hibernate allows the use of Data Manipulation Language (DML) to bulk insert, update and delete data directly in the mapped database through the Hibernate Query Language (HQL).

Using DML may violate the object/relational mapping and may affect object state. Object state stays in memory: depending on the operation performed, DML changes the underlying database directly without updating the corresponding in-memory objects. In-memory data must therefore be used with care when DML is used.

About the UPDATE and DELETE Statements

The pseudo-syntax for UPDATE and DELETE statements is:

( UPDATE | DELETE ) FROM? EntityName (WHERE where_conditions)?.

Note

The FROM keyword and the WHERE clause are optional. The FROM clause is responsible for defining the scope of object model types available to the rest of the query. It is also responsible for defining all the identification variables available to the rest of the query. The WHERE clause allows you to refine the list of instances returned.

The result of execution of an UPDATE or DELETE statement is the number of rows that are actually affected (updated or deleted).

Example: Bulk Update Statement

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

String hqlUpdate = "update Company set name = :newName where name = :oldName";
int updatedEntities = session.createQuery( hqlUpdate )
        .setString( "newName", newName )
        .setString( "oldName", oldName )
        .executeUpdate();
tx.commit();
session.close();

Example: Bulk Delete Statement

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

String hqlDelete = "delete Company where name = :oldName";
int deletedEntities = session.createQuery( hqlDelete )
        .setString( "oldName", oldName )
        .executeUpdate();
tx.commit();
session.close();

The int value returned by the Query.executeUpdate() method indicates the number of entities within the database that were affected by the operation.

Internally, the database might use multiple SQL statements to execute the operation in response to a DML Update or Delete request. This might be because of relationships that exist between tables and the join tables that need to be updated or deleted.

For example, issuing a delete statement, as in the example above, may actually result in deletes being executed against not just the Company table for companies that are named with oldName, but also against joined tables. Therefore a Company table in a bidirectional, many-to-many relationship with an Employee table would also lose rows from the corresponding join table, Company_Employee, as a result of the successful execution of the previous example.

The deletedEntities value contains a count of all the rows affected due to this operation, including the rows in the join tables.

Important

Care should be taken when executing bulk update or delete operations because they may result in inconsistencies between the database and the entities in the active persistence context. In general, bulk update and delete operations should only be performed within a transaction in a new persistence context or before fetching or accessing entities whose state might be affected by such operations.

About the INSERT Statement

HQL adds the ability to define INSERT statements. There is no Java Persistence query language equivalent to this. The Backus-Naur Form (BNF) for an HQL INSERT statement is:

insert_statement ::= insert_clause select_statement

insert_clause ::= INSERT INTO entity_name (attribute_list)

attribute_list ::= state_field[, state_field ]*

The attribute_list is analogous to the column specification in the SQL INSERT statement. For entities involved in mapped inheritance, only attributes directly defined on the named entity can be used in the attribute_list. Superclass properties are not allowed and subclass properties do not make sense. In other words, INSERT statements are inherently non-polymorphic.

Warning

The select_statement can be any valid HQL select query, with the caveat that the return types must match the types expected by the insert. Currently, this is checked during query compilation rather than allowing the check to relegate to the database. This can cause problems with Hibernate Types that are equivalent as opposed to equal. For example, this might cause mismatch issues between an attribute mapped as an org.hibernate.type.DateType and an attribute defined as a org.hibernate.type.TimestampType, even though the database might not make a distinction or might be able to handle the conversion.

For the id attribute, the insert statement gives you two options. You can either explicitly specify the id property in the attribute_list, in which case its value is taken from the corresponding select expression, or omit it from the attribute_list in which case a generated value is used. This latter option is only available when using id generators that operate "in the database"; attempting to use this option with any "in memory" type generators will cause an exception during parsing.

For optimistic locking attributes, the insert statement again gives you two options. You can either specify the attribute in the attribute_list in which case its value is taken from the corresponding select expressions, or omit it from the attribute_list in which case the seed value defined by the corresponding org.hibernate.type.VersionType is used.

Example: INSERT Query Statements

String hqlInsert = "insert into DelinquentAccount (id, name) select c.id, c.name from Customer c where ...";
int createdEntities = session.createQuery(hqlInsert).executeUpdate();

Example: Bulk Insert Statement

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

String hqlInsert = "insert into Account (id, name) select c.id, c.name from Customer c where ...";
int createdEntities = session.createQuery( hqlInsert )
        .executeUpdate();
tx.commit();
session.close();

If you do not supply the value for the id attribute using the SELECT statement, an identifier is generated for you, as long as the underlying database supports auto-generated keys. The return value of this bulk insert operation is the number of entries actually created in the database.

4.3. About HQL Ordering

The results of the query can also be ordered. The ORDER BY clause is used to specify the selected values to be used to order the result. The types of expressions considered valid as part of the order-by clause include:

  • state fields
  • component/embeddable attributes
  • scalar expressions such as arithmetic operations, functions, etc.
  • identification variable declared in the select clause for any of the previous expression types

HQL does not mandate that all values referenced in the order-by clause must be named in the select clause, but it is required by Java Persistence query language. Applications desiring database portability should be aware that not all databases support referencing values in the order-by clause that are not referenced in the select clause.

Individual expressions in the order-by can be qualified with either ASC (ascending) or DESC (descending) to indicate the desired ordering direction.

Example: Order By

// legal because p.name is implicitly part of p
select p
from Person p
order by p.name

select c.id, sum( o.total ) as t
from Order o
    inner join o.customer c
group by c.id
order by t

4.4. About Collection Member References

References to collection-valued associations actually refer to the values of that collection.

Example: Collection References

select c
from Customer c
    join c.orders o
    join o.lineItems l
    join l.product p
where o.status = 'pending'
  and p.status = 'backorder'

// alternate syntax
select c
from Customer c,
    in(c.orders) o,
    in(o.lineItems) l
    join l.product p
where o.status = 'pending'
  and p.status = 'backorder'

In the example, the identification variable o actually refers to the object model type Order which is the type of the elements of the Customer#orders association.

The example also shows the alternate syntax for specifying collection association joins using the IN syntax. Both forms are equivalent. Which form an application chooses to use is simply a matter of taste.

4.5. About Qualified Path Expressions

It was previously stated that collection-valued associations actually refer to the values of that collection. Based on the type of collection, a set of explicit qualification expressions is also available.

Table 4.2. Qualified Path Expressions
Expression | Description

VALUE

Refers to the collection value. Same as not specifying a qualifier. Useful to explicitly show intent. Valid for any type of collection-valued reference.

INDEX

According to HQL rules, this is valid for both Maps and Lists which specify a javax.persistence.OrderColumn annotation to refer to the Map key or the List position (aka the OrderColumn value). Java Persistence query language however, reserves this for use in the List case and adds KEY for the MAP case. Applications interested in Jakarta Persistence provider portability should be aware of this distinction.

KEY

Valid only for Maps. Refers to the map’s key. If the key is itself an entity, can be further navigated.

ENTRY

Valid only for Maps. Refers to the Map’s logical java.util.Map.Entry tuple (the combination of its key and value). ENTRY is only valid as a terminal path and only valid in the select clause.

Example: Qualified Collection References

// Product.images is a Map<String,String> : key = a name, value = file path

// select all the image file paths (the map value) for Product#123
select i
from Product p
    join p.images i
where p.id = 123

// same as above
select value(i)
from Product p
    join p.images i
where p.id = 123

// select all the image names (the map key) for Product#123
select key(i)
from Product p
    join p.images i
where p.id = 123

// select all the image names and file paths (the 'Map.Entry') for Product#123
select entry(i)
from Product p
    join p.images i
where p.id = 123

// total the value of the initial line items for all orders for a customer
select sum( li.amount )
from Customer c
        join c.orders o
        join o.lineItems li
where c.id = 123
  and index(li) = 1

4.6. About HQL Functions

HQL defines some standard functions that are available regardless of the underlying database in use. HQL can also understand additional functions defined by the dialect and the application.

4.6.1. About HQL Standardized Functions

The following functions are available in HQL regardless of the underlying database in use; a usage sketch follows the table.

Table 4.3. HQL Standardized Functions
Function | Description

BIT_LENGTH

Returns the length of binary data.

CAST

Performs an SQL cast. The cast target should name the Hibernate mapping type to use.

EXTRACT

Performs an SQL extraction on datetime values. An extraction returns a part of the date/time value, for example, the year. See the abbreviated forms below.

SECOND

Abbreviated extract form for extracting the second.

MINUTE

Abbreviated extract form for extracting the minute.

HOUR

Abbreviated extract form for extracting the hour.

DAY

Abbreviated extract form for extracting the day.

MONTH

Abbreviated extract form for extracting the month.

YEAR

Abbreviated extract form for extracting the year.

STR

Abbreviated form for casting a value as character data.
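
A hedged usage sketch of a few of these functions follows; the Customer entity and its inceptionDate attribute are assumptions carried over from the other query examples in this chapter.

Example: Using Standardized HQL Functions

import java.util.List;

import org.hibernate.Session;

public class StandardFunctionExamples {

    @SuppressWarnings("unchecked")
    public List<Object[]> inceptionParts(Session session) {
        // year() and month() are abbreviated extract forms; str() casts a value to character data.
        return session.createQuery(
                "select year(c.inceptionDate), month(c.inceptionDate), str(c.id) "
                + "from Customer c "
                + "where cast(c.id as string) like '1%'")
            .list();
    }
}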

4.6.2. About HQL Non-Standardized Functions

Hibernate dialects can register additional functions known to be available for that particular database product. They would only be available when using that database or dialect. Applications that aim for database portability should avoid using functions in this category.

Application developers can also supply their own set of functions. This would usually represent either custom SQL functions or aliases for snippets of SQL. Such function declarations are made by using the addSqlFunction method of org.hibernate.cfg.Configuration.
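
The following is a minimal sketch of such a registration using the native bootstrap API; the title_case function name is an assumption standing in for a function that actually exists in the target database.

Example: Registering a Custom SQL Function

import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.function.StandardSQLFunction;
import org.hibernate.type.StandardBasicTypes;

public class CustomFunctionRegistration {

    public Configuration configure() {
        Configuration configuration = new Configuration();
        // Expose the database's TITLE_CASE function to HQL as title_case(...), returning a string.
        configuration.addSqlFunction(
                "title_case",
                new StandardSQLFunction("TITLE_CASE", StandardBasicTypes.STRING));
        return configuration;
    }
}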

4.6.3. About the Concatenation Operation

HQL defines a concatenation operator in addition to supporting the concatenation (CONCAT) function. This is not defined by Java Persistence query language, so portable applications should avoid using it. The concatenation operator is taken from the SQL concatenation operator (||).

Example: Concatenation Operation Example

select 'Mr. ' || c.name.first || ' ' || c.name.last
from Customer c
where c.gender = Gender.MALE

4.7. About Dynamic Instantiation

There is a particular expression type that is only valid in the select clause. Hibernate calls this "dynamic instantiation". Java Persistence query language supports some of this feature and calls it a "constructor expression".

Example: Dynamic Instantiation Example - Constructor

select new Family( mother, mate, offspr )
from DomesticCat as mother
    join mother.mate as mate
    left join mother.kittens as offspr

So rather than dealing with the Object[] here, we are wrapping the values in a type-safe Java object that will be returned as the results of the query. The class reference must be fully qualified and it must have a matching constructor.

The class here does not need to be mapped. If it does represent an entity, the resulting instances are returned in the NEW state (not managed!).

This is the part Java Persistence query language supports as well. HQL supports additional "dynamic instantiation" features. First, the query can specify to return a List rather than an Object[] for scalar results:

Example: Dynamic Instantiation Example - List

select new list(mother, offspr, mate.name)
from DomesticCat as mother
    inner join mother.mate as mate
    left outer join mother.kittens as offspr

The results from this query will be a List<List> as opposed to a List<Object[]>.

HQL also supports wrapping the scalar results in a Map.

Example: Dynamic Instantiation Example - Map

select new map( mother as mother, offspr as offspr, mate as mate )
from DomesticCat as mother
    inner join mother.mate as mate
    left outer join mother.kittens as offspr

select new map( max(c.bodyWeight) as max, min(c.bodyWeight) as min, count(*) as n )
from Cat c

The results from this query will be a List<Map<String,Object>> as opposed to a List<Object[]>. The keys of the map are defined by the aliases given to the select expressions.

4.8. About HQL Predicates

Predicates form the basis of the where clause, the having clause and searched case expressions. They are expressions which resolve to a truth value, generally TRUE or FALSE, although boolean comparisons involving NULL values generally resolve to UNKNOWN.

HQL Predicates
  • Null Predicate

    Check a value for null. Can be applied to basic attribute references, entity references and parameters. HQL additionally allows it to be applied to component/embeddable types.

    Example: NULL Check

    // select everyone with an associated address
    select p
    from Person p
    where p.address is not null
    
    // select everyone without an associated address
    select p
    from Person p
      where p.address is null

  • Like Predicate

    Performs a like comparison on string values. The syntax is:

    like_expression ::=
           string_expression
           [NOT] LIKE pattern_value
           [ESCAPE escape_character]

    The semantics follow that of the SQL like expression. The pattern_value is the pattern to attempt to match in the string_expression. Just like SQL, pattern_value can use _ (underscore) and % (percent) as wildcards. The meanings are the same. The _ matches any single character. The % matches any number of characters.

    The optional escape_character is used to specify an escape character used to escape the special meaning of _ and % in the pattern_value. This is useful when needing to search on patterns including either _ or %.

    Example: LIKE Predicate

    select p
    from Person p
    where p.name like '%Schmidt'
    
    select p
    from Person p
    where p.name not like 'Jingleheimmer%'
    
    // find any with name starting with "sp_"
    select sp
    from StoredProcedureMetadata sp
    where sp.name like 'sp|_%' escape '|'

  • Between Predicate

    Analogous to the SQL BETWEEN expression. Performs an evaluation that a value is within the range of two other values. All the operands should have comparable types.

    Example: BETWEEN Predicate

    select p
    from Customer c
        join c.paymentHistory p
    where c.id = 123
      and index(p) between 0 and 9
    
    select c
    from Customer c
    where c.president.dateOfBirth
            between {d '1945-01-01'}
                and {d '1965-01-01'}
    
    select o
    from Order o
    where o.total between 500 and 5000
    
    select p
    from Person p
    where p.name between 'A' and 'E'

  • IN Predicate

    The IN predicate performs a check that a particular value is in a list of values. Its syntax is:

    in_expression ::= single_valued_expression
                [NOT] IN single_valued_list
    
    single_valued_list ::= constructor_expression |
                (subquery) |
                collection_valued_input_parameter
    
    constructor_expression ::= (expression[, expression]*)

    The types of the single_valued_expression and the individual values in the single_valued_list must be consistent. Java Persistence query language limits the valid types here to string, numeric, date, time, timestamp, and enum types. In Java Persistence query language, single_valued_expression can only refer to:

    • "state fields", which is its term for simple attributes. Specifically this excludes association and component/embedded attributes.
    • entity type expressions.

      In HQL, single_valued_expression can refer to a far broader set of expression types. Single-valued associations are allowed, as are component/embedded attributes, although that feature depends on the level of support for tuple or "row value constructor syntax" in the underlying database. Additionally, HQL does not limit the value type in any way, though application developers should be aware that different types may incur limited support based on the underlying database vendor. This is largely the reason for the Java Persistence query language limitations.

      The list of values can come from a number of different sources. In the constructor_expression and collection_valued_input_parameter, the list of values must not be empty; it must contain at least one value.

      Example: IN Predicate

      select p
      from Payment p
      where type(p) in (CreditCardPayment, WireTransferPayment)
      
      select c
      from Customer c
      where c.hqAddress.state in ('TX', 'OK', 'LA', 'NM')
      
      select c
      from Customer c
      where c.hqAddress.state in ?
      
      select c
      from Customer c
      where c.hqAddress.state in (
          select dm.state
          from DeliveryMetadata dm
          where dm.salesTax is not null
      )
      
      // Not Java Persistence query language compliant!
      select c
      from Customer c
      where c.name in (
          ('John','Doe'),
          ('Jane','Doe')
      )
      
      // Not Java Persistence query language compliant!
      select c
      from Customer c
      where c.chiefExecutive in (
          select p
          from Person p
          where ...
      )

4.9. About Relational Comparisons

Comparisons involve one of the comparison operators - =, >, >=, <, <=, <>. HQL also defines != as a comparison operator synonymous with <>. The operands should be of the same type.

Example: Relational Comparison Examples

// numeric comparison
select c
from Customer c
where c.chiefExecutive.age < 30

// string comparison
select c
from Customer c
where c.name = 'Acme'

// datetime comparison
select c
from Customer c
where c.inceptionDate < {d '2000-01-01'}

// enum comparison
select c
from Customer c
where c.chiefExecutive.gender = com.acme.Gender.MALE

// boolean comparison
select c
from Customer c
where c.sendEmail = true

// entity type comparison
select p
from Payment p
where type(p) = WireTransferPayment

// entity value comparison
select c
from Customer c
where c.chiefExecutive = c.chiefTechnologist

Comparisons can also involve subquery qualifiers - ALL, ANY, SOME. SOME and ANY are synonymous.

The ALL qualifier resolves to true if the comparison is true for all of the values in the result of the subquery. It resolves to false if the subquery result is empty.

Example: ALL Subquery Comparison Qualifier Example

// select all players that scored at least 3 points
// in every game.
select p
from Player p
where 3 > all (
   select spg.points
   from StatsPerGame spg
   where spg.player = p
)

The ANY/SOME qualifier resolves to true if the comparison is true for at least one of the values in the result of the subquery. It resolves to false if the subquery result is empty.

4.10. Bytecode Enhancement

4.10.1. Lazy Attribute Loading

Lazy attribute loading is a bytecode enhancement that allows you to tell Hibernate that only certain parts of an entity should be loaded upon fetching from the database, and that the remaining parts should be loaded only when accessed. This is different from the proxy-based idea of lazy loading, which is entity-centric and loads the entity's state all at once as needed. With bytecode enhancement, individual attributes or groups of attributes are loaded as needed.

Lazy attributes can be designated to be loaded together; this is called a lazy group. By default, all singular attributes are part of a single group, so when one lazy singular attribute is accessed, all lazy singular attributes are loaded. In contrast to the single lazy singular group, lazy plural attributes are each a discrete lazy group. This behavior is explicitly controllable through the @org.hibernate.annotations.LazyGroup annotation.

@Entity
public class Customer {

    @Id
    private Integer id;

    private String name;

    @Basic( fetch = FetchType.LAZY )
    private UUID accountsPayableXrefId;

    @Lob
    @Basic( fetch = FetchType.LAZY )
    @LazyGroup( "lobs" )
    private Blob image;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public UUID getAccountsPayableXrefId() {
        return accountsPayableXrefId;
    }

    public void setAccountsPayableXrefId(UUID accountsPayableXrefId) {
        this.accountsPayableXrefId = accountsPayableXrefId;
    }

    public Blob getImage() {
        return image;
    }

    public void setImage(Blob image) {
        this.image = image;
    }
}

In the example above, there are two lazy attributes: accountsPayableXrefId and image. Each of these attributes is part of a different fetch group. The accountsPayableXrefId attribute is a part of the default fetch group, which means that accessing accountsPayableXrefId will not force the loading of the image attribute, and vice versa.

Chapter 5. Hibernate Services

5.1. About Hibernate Services

Services are classes that provide Hibernate with pluggable implementations of various types of functionality. Specifically they are implementations of certain service contract interfaces. The interface is known as the service role; the implementation class is known as the service implementation. Generally speaking, users can plug in alternate implementations of all standard service roles (overriding); they can also define additional services beyond the base set of service roles (extending).

5.2. About Service Contracts

The basic requirement for a service is to implement the marker interface org.hibernate.service.Service. Hibernate uses this internally for some basic type safety.

Optionally, the service can also implement the org.hibernate.service.spi.Startable and org.hibernate.service.spi.Stoppable interfaces to receive notifications of being started and stopped. Another optional service contract is org.hibernate.service.spi.Manageable which marks the service as manageable in Jakarta Management provided the Jakarta Management integration is enabled.
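
The following is a minimal sketch of a custom service contract and implementation under these rules; the CustomLoggingService role and its implementation are assumptions used only for illustration.

Example: A Custom Service Contract and Implementation

import org.hibernate.service.Service;
import org.hibernate.service.spi.Startable;
import org.hibernate.service.spi.Stoppable;

// The service role: a contract extending the org.hibernate.service.Service marker interface.
public interface CustomLoggingService extends Service {
    void log(String message);
}

// The service implementation, optionally reacting to start and stop notifications.
class CustomLoggingServiceImpl implements CustomLoggingService, Startable, Stoppable {

    @Override
    public void log(String message) {
        System.out.println("[custom-service] " + message);
    }

    @Override
    public void start() {
        System.out.println("[custom-service] started");
    }

    @Override
    public void stop() {
        System.out.println("[custom-service] stopped");
    }
}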

5.3. Types of Service Dependencies

Services are allowed to declare dependencies on other services using either of the following approaches; a sketch of both approaches follows the descriptions:

@org.hibernate.service.spi.InjectService

Any method on the service implementation class accepting a single parameter and annotated with @InjectService is considered requesting injection of another service.

By default the type of the method parameter is expected to be the service role to be injected. If the parameter type is different than the service role, the serviceRole attribute of @InjectService should be used to explicitly name the role.

By default injected services are considered required; that is, startup will fail if a named dependent service is missing. If the service to be injected is optional, the required attribute of @InjectService should be set to false. The default is true.

org.hibernate.service.spi.ServiceRegistryAwareService

The second approach is a pull approach where the service implements the optional service interface org.hibernate.service.spi.ServiceRegistryAwareService which declares a single injectServices method.

During startup, Hibernate will inject the org.hibernate.service.ServiceRegistry itself into services which implement this interface. The service can then use the ServiceRegistry reference to locate any additional services it needs.
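
A sketch of both dependency styles follows, reusing the hypothetical CustomLoggingService role from the previous section; the choice of ConnectionProvider as the injected dependency is an assumption made for illustration.

Example: Declaring Service Dependencies

import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;
import org.hibernate.service.spi.InjectService;
import org.hibernate.service.spi.ServiceRegistryAwareService;
import org.hibernate.service.spi.ServiceRegistryImplementor;

public class DependentLoggingServiceImpl implements CustomLoggingService, ServiceRegistryAwareService {

    private ConnectionProvider connectionProvider;
    private ServiceRegistryImplementor serviceRegistry;

    // Push approach: Hibernate injects the ConnectionProvider service into this method.
    @InjectService(required = false)
    public void setConnectionProvider(ConnectionProvider connectionProvider) {
        this.connectionProvider = connectionProvider;
    }

    // Pull approach: Hibernate hands over the registry so the service can look up what it needs.
    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {
        this.serviceRegistry = serviceRegistry;
    }

    @Override
    public void log(String message) {
        System.out.println("[custom-service] " + message);
    }
}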

5.3.1. The Service Registry

5.3.1.1. About the ServiceRegistry

The central service API, aside from the services themselves, is the org.hibernate.service.ServiceRegistry interface. The main purpose of a service registry is to hold, manage and provide access to services.

Service registries are hierarchical. Services in one registry can depend on and utilize services in that same registry as well as any parent registries.

Use org.hibernate.service.ServiceRegistryBuilder to build a org.hibernate.service.ServiceRegistry instance.

Example Using ServiceRegistryBuilder to Create a ServiceRegistry

ServiceRegistryBuilder registryBuilder =
    new ServiceRegistryBuilder( bootstrapServiceRegistry );
ServiceRegistry serviceRegistry = registryBuilder.buildServiceRegistry();

5.3.2. Custom Services

5.3.2.1. About Custom Services

Once an org.hibernate.service.ServiceRegistry is built it is considered immutable; the services themselves might accept reconfiguration, but immutability here means that services cannot be added or replaced. So another role provided by the org.hibernate.service.ServiceRegistryBuilder is to allow tweaking of the services that will be contained in the org.hibernate.service.ServiceRegistry generated from it.

There are two means to tell a org.hibernate.service.ServiceRegistryBuilder about custom services.

  • Implement an org.hibernate.service.spi.BasicServiceInitiator class to control on-demand construction of the service class and add it to the org.hibernate.service.ServiceRegistryBuilder using its addInitiator method.
  • Just instantiate the service class and add it to the org.hibernate.service.ServiceRegistryBuilder using its addService method.

Either approach is valid for extending a registry, such as adding new service roles, and overriding services, such as replacing service implementations.

Example: Use ServiceRegistryBuilder to Replace an Existing Service with a Custom Service

ServiceRegistryBuilder registryBuilder =
    new ServiceRegistryBuilder(bootstrapServiceRegistry);
registryBuilder.addService(JdbcServices.class, new MyCustomJdbcService());
ServiceRegistry serviceRegistry = registryBuilder.buildServiceRegistry();

public class MyCustomJdbcService implements JdbcServices{

   @Override
   public ConnectionProvider getConnectionProvider() {
       return null;
   }

   @Override
   public Dialect getDialect() {
       return null;
   }

   @Override
   public SqlStatementLogger getSqlStatementLogger() {
       return null;
   }

   @Override
   public SqlExceptionHelper getSqlExceptionHelper() {
       return null;
   }

   @Override
   public ExtractedDatabaseMetaData getExtractedMetaDataSupport() {
       return null;
   }

   @Override
   public LobCreator getLobCreator(LobCreationContext lobCreationContext) {
       return null;
   }

   @Override
   public ResultSetWrapper getResultSetWrapper() {
       return null;
   }
}

5.3.3. The Boot-Strap Registry

5.3.3.1. About the Boot-strap Registry

The boot-strap registry holds services that absolutely have to be available for most things to work. The main service here is the ClassLoaderService, which is a perfect example: even resolving configuration files needs access to class loading services, that is, resource lookups. This is the root registry (it has no parent) in normal use.

Instances of boot-strap registries are built using the org.hibernate.service.BootstrapServiceRegistryBuilder class.

Using BootstrapServiceRegistryBuilder

Example: Using BootstrapServiceRegistryBuilder

BootstrapServiceRegistry bootstrapServiceRegistry =
    new BootstrapServiceRegistryBuilder()
    // pass in org.hibernate.integrator.spi.Integrator instances which are not
    // auto-discovered (for whatever reason) but which should be included
    .with(anExplicitIntegrator)
    // pass in a class loader that Hibernate should use to load application classes
    .with(anExplicitClassLoaderForApplicationClasses)
    // pass in a class loader that Hibernate should use to load resources
    .with(anExplicitClassLoaderForResources)
    // see BootstrapServiceRegistryBuilder for rest of available methods
    ...
    // finally, build the bootstrap registry with all the above options
    .build();

5.3.3.2. BootstrapRegistry Services
org.hibernate.service.classloading.spi.ClassLoaderService

Hibernate needs to interact with class loaders. However, the manner in which Hibernate, or any library, should interact with class loaders varies based on the runtime environment that is hosting the application. Application servers, OSGi containers, and other modular class loading systems impose very specific class loading requirements. This service provides Hibernate an abstraction from this environmental complexity. And just as importantly, it does so in a single-swappable-component manner.

In terms of interacting with a class loader, Hibernate needs the following capabilities:

  • the ability to locate application classes
  • the ability to locate integration classes
  • the ability to locate resources, such as properties files and XML files
  • the ability to load java.util.ServiceLoader

    Note

    Currently, the ability to load application classes and the ability to load integration classes are combined into a single load class capability on the service. That may change in a later release.

org.hibernate.integrator.spi.IntegratorService

Applications, add-ons and other modules need to integrate with Hibernate. The previous approach required a component, usually an application, to coordinate the registration of each individual module. This registration was conducted on behalf of each module’s integrator.

This service focuses on the discovery aspect. It leverages the standard Java java.util.ServiceLoader capability provided by the org.hibernate.service.classloading.spi.ClassLoaderService in order to discover implementations of the org.hibernate.integrator.spi.Integrator contract.

Integrators would simply define a file named /META-INF/services/org.hibernate.integrator.spi.Integrator and make it available on the class path.

This file is used by the java.util.ServiceLoader mechanism. It lists, one per line, the fully qualified names of classes which implement the org.hibernate.integrator.spi.Integrator interface.

5.3.4. SessionFactory Registry

While it is best practice to treat instances of all the registry types as targeting a given org.hibernate.SessionFactory, the instances of services in this group explicitly belong to a single org.hibernate.SessionFactory.

The difference is a matter of timing in when they need to be initiated. Generally they need access to the org.hibernate.SessionFactory to be initiated. This special registry is org.hibernate.service.spi.SessionFactoryServiceRegistry.

5.3.4.1. SessionFactory Services

org.hibernate.event.service.spi.EventListenerRegistry

Description
Service for managing event listeners.
Initiator
org.hibernate.event.service.internal.EventListenerServiceInitiator
Implementations
org.hibernate.event.service.internal.EventListenerRegistryImpl

5.3.5. Integrators

The org.hibernate.integrator.spi.Integrator is intended to provide a simple means for allowing developers to hook into the process of building a functioning SessionFactory. The org.hibernate.integrator.spi.Integrator interface defines two methods of interest:

  • integrate allows us to hook into the building process
  • disintegrate allows us to hook into a SessionFactory shutting down.
Note

There is a third method defined in org.hibernate.integrator.spi.Integrator, an overloaded form of integrate, accepting an org.hibernate.metamodel.source.MetadataImplementor instead of an org.hibernate.cfg.Configuration.

In addition to the discovery approach provided by the IntegratorService, applications can manually register Integrator implementations when building the BootstrapServiceRegistry.

5.3.5.1. Integrator Use Cases

The main use cases for an org.hibernate.integrator.spi.Integrator are registering event listeners and providing services, see org.hibernate.integrator.spi.ServiceContributingIntegrator.

Example: Registering Event Listeners

public class MyIntegrator implements org.hibernate.integrator.spi.Integrator {

    public void integrate(
            Configuration configuration,
            SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {
        // As you might expect, an EventListenerRegistry is the thing with which event listeners are registered. It is a
        // service so we look it up using the service registry
        final EventListenerRegistry eventListenerRegistry = serviceRegistry.getService(EventListenerRegistry.class);

        // If you wish to have custom determination and handling of "duplicate" listeners, you would have to add an
        // implementation of the org.hibernate.event.service.spi.DuplicationStrategy contract like this
        eventListenerRegistry.addDuplicationStrategy(myDuplicationStrategy);

        // EventListenerRegistry defines 3 ways to register listeners:
        //     1) This form overrides any existing registrations with
        eventListenerRegistry.setListeners(EventType.AUTO_FLUSH, myCompleteSetOfListeners);
        //     2) This form adds the specified listener(s) to the beginning of the listener chain
        eventListenerRegistry.prependListeners(EventType.AUTO_FLUSH, myListenersToBeCalledFirst);
        //     3) This form adds the specified listener(s) to the end of the listener chain
        eventListenerRegistry.appendListeners(EventType.AUTO_FLUSH, myListenersToBeCalledLast);
    }
}

Chapter 6. Hibernate Envers

6.1. About Hibernate Envers

Hibernate Envers is an auditing and versioning system, providing JBoss EAP with a means to track historical changes to persistent classes. Audit tables are created for entities annotated with @Audited, which store the history of changes made to the entity. The data can then be retrieved and queried.

Envers allows developers to:

  • audit all mappings defined by the Jakarta Persistence specification
  • audit all hibernate mappings that extend the Jakarta Persistence specification
  • audit entities mapped by or using the native Hibernate API
  • log data for each revision using a revision entity
  • query historical data (a query sketch follows this list)
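
As a hedged sketch of querying historical data, the following uses the Envers AuditReader API; it reuses the Person entity from the auditing example later in this chapter and assumes its getters are in place.

Example: Reading Revisions with AuditReader

import java.util.List;

import javax.persistence.EntityManager;

import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

public class PersonHistory {

    public void printHistory(EntityManager entityManager, int personId) {
        AuditReader reader = AuditReaderFactory.get(entityManager);

        // All revision numbers at which this Person instance was created, modified, or deleted.
        List<Number> revisions = reader.getRevisions(Person.class, personId);

        for (Number revision : revisions) {
            // The state of the entity as it was at the given revision.
            Person historical = reader.find(Person.class, personId, revision);
            System.out.println("revision " + revision + ": " + historical.getName());
        }
    }
}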

6.2. About Auditing Persistent Classes

Auditing of persistent classes is done in JBoss EAP through Hibernate Envers and the @Audited annotation. When the annotation is applied to a class, a table is created, which stores the revision history of the entity.

Each time a change is made to the class, an entry is added to the audit table. The entry contains the changes to the class, and is given a revision number. This means that changes can be rolled back, or previous revisions can be viewed.

6.3. Auditing Strategies

6.3.1. About Auditing Strategies

Auditing strategies define how audit information is persisted, queried and stored. There are currently two audit strategies available with Hibernate Envers:

Default Audit Strategy
  • This strategy persists the audit data together with a start revision. For each row that is inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, along with the start revision of its validity.
  • Rows in the audit tables are never updated after insertion. Queries of audit information use subqueries to select the applicable rows in the audit tables; these subqueries are slow and difficult to index.
Validity Audit Strategy
  • This strategy stores the start revision, as well as the end revision of the audit information. For each row that is inserted, updated or deleted in an audited table, one or more rows are inserted in the audit tables, along with the start revision of its validity.
  • At the same time, the end revision field of the previous audit rows (if available) is set to this revision. Queries on the audit information can then use between start and end revision, instead of subqueries. This means that persisting audit information is a little slower because of the extra updates, but retrieving audit information is a lot faster.
  • This can also be improved by adding extra indexes.

For more information on auditing, see About Auditing Persistent Classes. To set the auditing strategy for the application, see Set the Auditing Strategy.

6.3.2. Set the Auditing Strategy

There are two audit strategies supported by JBoss EAP:

  • The default audit strategy
  • The validity audit strategy
Define an Auditing Strategy

Configure the org.hibernate.envers.audit_strategy property in the persistence.xml file of the application. If the property is not set in the persistence.xml file, then the default audit strategy is used.

Set the Default Audit Strategy

<property name="org.hibernate.envers.audit_strategy" value="org.hibernate.envers.strategy.DefaultAuditStrategy"/>

Set the Validity Audit Strategy

<property name="org.hibernate.envers.audit_strategy" value="org.hibernate.envers.strategy.ValidityAuditStrategy"/>

6.3.3. Adding Auditing Support to a Jakarta Persistence Entity

JBoss EAP uses entity auditing through Hibernate Envers to track the historical changes of a persistent class. This section covers adding auditing support for a Jakarta Persistence entity.

Procedure

Add Auditing Support to a Jakarta Persistence Entity

  1. Configure the available auditing parameters to suit the deployment. See Configure Envers Parameters for details.
  2. Open the Jakarta Persistence entity to be audited.
  3. Import the org.hibernate.envers.Audited annotation.
  4. Apply the @Audited annotation to each field or property to be audited, or apply it once to the whole class.

    Example: Audit Two Fields

    import org.hibernate.envers.Audited;
    
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Column;
    import javax.persistence.ManyToOne;
    
    @Entity
    public class Person {
        @Id
        @GeneratedValue
        private int id;
    
        @Audited
        private String name;
    
        private String surname;
    
        @ManyToOne
        @Audited
        private Address address;
    
        // add getters, setters, constructors, equals and hashCode here
    }

    Example: Audit an Entire Class

    import org.hibernate.envers.Audited;
    
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Column;
    import javax.persistence.ManyToOne;
    
    @Entity
    @Audited
    public class Person {
        @Id
        @GeneratedValue
        private int id;
    
        private String name;
    
        private String surname;
    
        @ManyToOne
        private Address address;
    
        // add getters, setters, constructors, equals and hashCode here
    }

When the Jakarta Persistence entity has been configured for auditing, a table with the _AUD suffix, for example Person_AUD, is created to store the historical changes.
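
The stored history can then be read back through the org.hibernate.envers.AuditReader API, described in Querying Audit Information. The following is a minimal sketch, assuming an injected or otherwise available EntityManager and the Person entity from the examples above; the revision number passed to find() is illustrative.

import javax.persistence.EntityManager;

import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

public class PersonAuditViewer {

    // Returns the state of a Person as it was at the given revision,
    // or null if the entity did not yet exist at that revision.
    public Person findAtRevision(EntityManager entityManager, int personId, Number revision) {
        AuditReader reader = AuditReaderFactory.get(entityManager);
        return reader.find(Person.class, personId, revision);
    }
}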

6.4. Configuration

6.4.1. Configure Envers Parameters

JBoss EAP uses entity auditing, through Hibernate Envers, to track the historical changes of a persistent class.

Configuring the Available Envers Parameters

  1. Open the persistence.xml file for the application.
  2. Add, remove or configure Envers properties as required. For a list of available properties, see Envers Configuration Properties.

    Example: Envers Parameters

    <persistence-unit name="mypc">
      <description>Persistence Unit.</description>
      <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
      <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
      <properties>
        <property name="hibernate.hbm2ddl.auto" value="create-drop" />
        <property name="hibernate.show_sql" value="true" />
        <property name="hibernate.cache.use_second_level_cache" value="true" />
        <property name="hibernate.cache.use_query_cache" value="true" />
        <property name="hibernate.generate_statistics" value="true" />
        <property name="org.hibernate.envers.versionsTableSuffix" value="_V" />
        <property name="org.hibernate.envers.revisionFieldName" value="ver_rev" />
      </properties>
    </persistence-unit>

6.4.2. Enable or Disable Auditing at Runtime

Enable or Disable Entity Version Auditing at Runtime

  1. Subclass the AuditEventListener class.
  2. Override the following methods that are called on Hibernate events:

    • onPostInsert
    • onPostUpdate
    • onPostDelete
    • onPreUpdateCollection
    • onPreRemoveCollection
    • onPostRecreateCollection
  3. Specify the subclass as the listener for the events.
  4. Determine if the change should be audited.
  5. Pass the call to the superclass if the change should be audited.
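
The following is a minimal sketch of this procedure. It assumes that the AuditEventListener class named above and the onPostInsert callback from Hibernate's post-insert listener contract are available in your Envers version; the import package names and the isAuditingEnabled() helper are assumptions for illustration.

import org.hibernate.envers.event.AuditEventListener; // assumed package; adjust to your Envers version
import org.hibernate.event.spi.PostInsertEvent;       // assumed package; adjust to your Hibernate version

public class ConditionalAuditEventListener extends AuditEventListener {

    @Override
    public void onPostInsert(PostInsertEvent event) {
        // Step 4: determine whether this change should be audited.
        if (isAuditingEnabled()) {
            // Step 5: pass the call to the superclass so that Envers records the change.
            super.onPostInsert(event);
        }
    }

    // Hypothetical application-specific switch, for example a runtime flag.
    private boolean isAuditingEnabled() {
        return Boolean.getBoolean("app.audit.enabled");
    }
}

The same pattern applies to the other overridden methods in the list above. Register the subclass as the listener for the corresponding events in place of the default listener, as described in Configure Conditional Auditing.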

6.4.3. Configure Conditional Auditing

Hibernate Envers persists audit data in reaction to various Hibernate events, using a series of event listeners. These listeners are registered automatically if the Envers JAR is in the class path.

Implement Conditional Auditing

  1. Set the hibernate.listeners.envers.autoRegister Hibernate property to false in the persistence.xml file.
  2. Subclass each event listener to be overridden. Place the conditional auditing logic in the subclass, and call the super method if auditing should be performed.
  3. Create a custom implementation of org.hibernate.integrator.spi.Integrator, similar to org.hibernate.envers.event.EnversIntegrator. Use the event listener subclasses created in step two, rather than the default classes.
  4. Add a META-INF/services/org.hibernate.integrator.spi.Integrator file to the JAR. This file should contain the fully qualified name of the class implementing the interface.
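
The following is a minimal sketch of steps 3 and 4, modeled on the event listener registration shown in the previous chapter. ConditionalPostInsertListener stands in for one of the listener subclasses created in step 2 and is an assumption for illustration; a complete integrator registers subclasses for all of the Envers event types.

import org.hibernate.boot.Metadata;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

public class ConditionalEnversIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {
        final EventListenerRegistry listenerRegistry =
                serviceRegistry.getService(EventListenerRegistry.class);
        // Register the conditional subclass instead of the default Envers listener.
        listenerRegistry.appendListeners(EventType.POST_INSERT, new ConditionalPostInsertListener());
        // ... register the remaining subclassed listeners (update, delete and collection events)
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {
        // Nothing to clean up in this sketch.
    }
}

The META-INF/services/org.hibernate.integrator.spi.Integrator file then contains the fully qualified name of this class on a single line, for example com.example.auditing.ConditionalEnversIntegrator.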

6.4.4. Envers Configuration Properties

Table 6.1. Entity Data Versioning Configuration Parameters
Property Name | Default Value | Description

org.hibernate.envers.audit_table_prefix

 

A string that is prepended to the name of an audited entity, to create the name of the entity that will hold the audit information.

org.hibernate.envers.audit_table_suffix

_AUD

A string that is appended to the name of an audited entity to create the name of the entity that will hold the audit information. For example, if an entity with a table name of Person is audited, Envers will generate a table called Person_AUD to store the historical data.

org.hibernate.envers.revision_field_name

REV

The name of the field in the audit entity that holds the revision number.

org.hibernate.envers.revision_type_field_name

REVTYPE

The name of the field in the audit entity that holds the type of revision. The current types of revisions possible are: add, mod and del for inserting, modifying or deleting respectively.

org.hibernate.envers.revision_on_collection_change

true

This property determines if a revision should be generated if a relation field that is not owned changes. This can either be a collection in a one-to-many relation, or the field using the mappedBy attribute in a one-to-one relation.

org.hibernate.envers.do_not_audit_optimistic_locking_field

true

When true, properties used for optimistic locking (annotated with @Version) will automatically be excluded from auditing.

org.hibernate.envers.store_data_at_delete

false

This property defines whether or not entity data should be stored in the revision when the entity is deleted, instead of storing only the ID with all other properties marked as null. Storing the data is not usually necessary, as it is present in the last-but-one revision. Sometimes, however, it is easier and more efficient to access it in the last revision, at the cost of storing the data the entity contained before deletion twice.

org.hibernate.envers.default_schema

null (same as normal tables)

The default schema name used for audit tables. Can be overridden using the @AuditTable(schema="…​") annotation. If not present, the schema will be the same as the schema of the normal tables.

org.hibernate.envers.default_catalog

null (same as normal tables)

The default catalog name that should be used for audit tables. Can be overridden using the @AuditTable(catalog="…​") annotation. If not present, the catalog will be the same as the catalog of the normal tables.

org.hibernate.envers.audit_strategy

org.hibernate.envers.strategy.DefaultAuditStrategy

This property defines the audit strategy that should be used when persisting audit data. By default, only the revision where an entity was modified is stored. Alternatively, org.hibernate.envers.strategy.ValidityAuditStrategy stores both the start revision and the end revision. Together, these define when an audit row was valid.

org.hibernate.envers.audit_strategy_validity_end_rev_field_name

REVEND

The column name that will hold the end revision number in audit entities. This property is only valid if the validity audit strategy is used.

org.hibernate.envers.audit_strategy_validity_store_revend_timestamp

false

This property defines whether the timestamp of the end revision, where the data was last valid, should be stored in addition to the end revision itself. This is useful to be able to purge old audit records out of a relational database by using table partitioning. Partitioning requires a column that exists within the table. This property is only evaluated if the ValidityAuditStrategy is used.

org.hibernate.envers.audit_strategy_validity_revend_timestamp_field_name

REVEND_TSTMP

Column name of the timestamp of the end revision at which point the data was still valid. Only used if the ValidityAuditStrategy is used, and org.hibernate.envers.audit_strategy_validity_store_revend_timestamp evaluates to true.

6.5. Querying Audit Information

6.5.1. Retrieve Auditing Information Through Queries

Hibernate Envers provides the functionality to retrieve audit information through queries.

Note

Queries on the audited data will be, in many cases, much slower than corresponding queries on live data, as they involve correlated subselects.

Querying for Entities of a Class at a Given Revision

The entry point for this type of query is:

AuditQuery query = getAuditReader()
    .createQuery()
    .forEntitiesAtRevision(MyEntity.class, revisionNumber);

Constraints can then be specified, using the AuditEntity factory class. The query below only selects entities where the name property is equal to John:

query.add(AuditEntity.property("name").eq("John"));

The queries below only select entities that are related to a given entity:

query.add(AuditEntity.property("address").eq(relatedEntityInstance));
// or
query.add(AuditEntity.relatedId("address").eq(relatedEntityId));

The results can then be ordered, limited, and have aggregations and projections (except grouping) set. The example below is a full query.

List personsAtAddress = getAuditReader().createQuery()
    .forEntitiesAtRevision(Person.class, 12)
    .addOrder(AuditEntity.property("surname").desc())
    .add(AuditEntity.relatedId("address").eq(addressId))
    .setFirstResult(4)
    .setMaxResults(2)
    .getResultList();

Query Revisions where Entities of a Given Class Changed

The entry point for this type of query is:

AuditQuery query = getAuditReader().createQuery()
    .forRevisionsOfEntity(MyEntity.class, false, true);

Constraints can be added to this query in the same way as the previous example. There are additional possibilities for this query:

AuditEntity.revisionNumber()
Specify constraints, projections and order on the revision number in which the audited entity was modified.
AuditEntity.revisionProperty(propertyName)
Specify constraints, projections and order on a property of the revision entity, corresponding to the revision in which the audited entity was modified.
AuditEntity.revisionType()
Provides access to the type of the revision (ADD, MOD, DEL).

The query results can then be adjusted as necessary. The query below selects the smallest revision number greater than 42 at which the entity of the MyEntity class with the ID entityId was changed:

Number revision = (Number) getAuditReader().createQuery()
    .forRevisionsOfEntity(MyEntity.class, false, true)
    .setProjection(AuditEntity.revisionNumber().min())
    .add(AuditEntity.id().eq(entityId))
    .add(AuditEntity.revisionNumber().gt(42))
    .getSingleResult();

Queries for revisions can also minimize/maximize a property. The query below selects the revision at which the value of actualDate for a given entity was greater than or equal to a given value, but as small as possible:

Number revision = (Number) getAuditReader().createQuery()
    .forRevisionsOfEntity(MyEntity.class, false, true)
    // We are only interested in the first revision
    .setProjection(AuditEntity.revisionNumber().min())
    .add(AuditEntity.property("actualDate").minimize()
        .add(AuditEntity.property("actualDate").ge(givenDate))
        .add(AuditEntity.id().eq(givenEntityId)))
    .getSingleResult();

The minimize() and maximize() methods return a criterion to which constraints can be added; these constraints must be met by the entities with the minimized or maximized properties.

There are two boolean parameters passed when creating the query.

selectEntitiesOnly

This parameter is only valid when an explicit projection is not set.
If true, the result of the query will be a list of entities that changed at revisions satisfying the specified constraints.
If false, the result will be a list of three-element arrays. The first element will be the changed entity instance. The second will be an entity containing revision data. If no custom entity is used, this will be an instance of DefaultRevisionEntity. The third element will be the type of the revision (ADD, MOD or DEL).
selectDeletedEntities
This parameter specifies if revisions in which the entity was deleted must be included in the results. If true, the entities will have the revision type DEL, and all fields, except id, will have the value null.

Query Revisions of an Entity that Modified a Given Property

The query below will return all revisions of MyEntity with a given id, where the actualDate property has been changed.

AuditQuery query = getAuditReader().createQuery()
  .forRevisionsOfEntity(MyEntity.class, false, true)
  .add(AuditEntity.id().eq(id))
  .add(AuditEntity.property("actualDate").hasChanged());

The hasChanged condition can be combined with additional criteria. The query below will return a horizontal slice for MyEntity at the time the revisionNumber was generated. It will be limited to the revisions that modified prop1, but not prop2.

AuditQuery query = getAuditReader().createQuery()
  .forEntitiesAtRevision(MyEntity.class, revisionNumber)
  .add(AuditEntity.property("prop1").hasChanged())
  .add(AuditEntity.property("prop2").hasNotChanged());

The result set will also contain revisions with numbers lower than the revisionNumber. This means that this query cannot be read as "Return all MyEntities changed in revisionNumber with prop1 modified and prop2 untouched."

The query below shows how this result can be returned, using the forEntitiesModifiedAtRevision query:

AuditQuery query = getAuditReader().createQuery()
  .forEntitiesModifiedAtRevision(MyEntity.class, revisionNumber)
  .add(AuditEntity.property("prop1").hasChanged())
  .add(AuditEntity.property("prop2").hasNotChanged());

Query Entities Modified in a Given Revision

The example below shows the basic query for entities modified in a given revision. It allows entity names and corresponding Java classes changed in a specified revision to be retrieved:

Set<Pair<String, Class>> modifiedEntityTypes = getAuditReader()
    .getCrossTypeRevisionChangesReader().findEntityTypes(revisionNumber);

There are a number of other queries that are also accessible from org.hibernate.envers.CrossTypeRevisionChangesReader:

List<Object> findEntities(Number)
Returns snapshots of all audited entities changed (added, updated and removed) in a given revision. Executes n+1 SQL queries, where n is the number of different entity classes modified within the specified revision.
List<Object> findEntities(Number, RevisionType)
Returns snapshots of all audited entities changed (added, updated or removed) in a given revision, filtered by modification type. Executes n+1 SQL queries, where n is the number of different entity classes modified within the specified revision.
Map<RevisionType, List<Object>> findEntitiesGroupByRevisionType(Number)
Returns a map containing lists of entity snapshots grouped by modification operation, for example, addition, update or removal. Executes 3n+1 SQL queries, where n is the number of different entity classes modified within the specified revision.
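
The following is a minimal sketch of the grouped query, assuming an AuditReader obtained as in the previous examples and an illustrative revision number.

import java.util.List;
import java.util.Map;

import org.hibernate.envers.AuditReader;
import org.hibernate.envers.RevisionType;

public class RevisionChangeInspector {

    // Prints the snapshots of entities modified in the given revision, grouped by operation type.
    public void printChanges(AuditReader auditReader, Number revisionNumber) {
        Map<RevisionType, List<Object>> changes = auditReader
                .getCrossTypeRevisionChangesReader()
                .findEntitiesGroupByRevisionType(revisionNumber);

        for (Map.Entry<RevisionType, List<Object>> entry : changes.entrySet()) {
            System.out.println(entry.getKey() + ": " + entry.getValue());
        }
    }
}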

6.5.2. Traversing Entity Associations Using Properties of Referenced Entities

You can use the properties of a referenced entity to traverse entities in a query. This enables you to query for one-to-one and many-to-one associations.

The examples below demonstrate some of the ways you can traverse entities in a query.

  • In revision number 1, find cars where the owner is age 20 or lives at address number 30, then order the result set by car make.

    List<Car> resultList = auditReader.createQuery()
                    .forEntitiesAtRevision( Car.class, 1 )
                    .traverseRelation( "owner", JoinType.INNER, "p" )
                    .traverseRelation( "address", JoinType.INNER, "a" )
                    .up().up().add( AuditEntity.disjunction().add(AuditEntity.property( "p", "age" )
                           .eq( 20 ) ).add( AuditEntity.property( "a", "number" ).eq( 30 ) ) )
                    .addOrder( AuditEntity.property( "make" ).asc() ).getResultList();
  • In revision number 1, find the car where the owner age is equal to the owner address number.

    Car result = (Car) auditReader.createQuery()
                    .forEntitiesAtRevision( Car.class, 1 )
                    .traverseRelation( "owner", JoinType.INNER, "p" )
                    .traverseRelation( "address", JoinType.INNER, "a" )
                    .up().up().add(AuditEntity.property( "p", "age" )
                            .eqProperty( "a", "number" ) ).getSingleResult();
  • In revision number 1, find all cars where the owner is age 20 or where there is no owner.

    List<Car> resultList = auditReader.createQuery()
                    .forEntitiesAtRevision( Car.class, 1 )
                    .traverseRelation( "owner", JoinType.LEFT, "p" )
                    .up().add( AuditEntity.or( AuditEntity.property( "p", "age").eq( 20 ),
                            AuditEntity.relatedId( "owner" ).eq( null ) ) )
                    .addOrder( AuditEntity.property( "make" ).asc() ).getResultList();
  • In revision number 1, find all cars where the make equals "car3", and where the owner is age 30 or there is no no owner.

    List<Car> resultList = auditReader.createQuery()
                    .forEntitiesAtRevision( Car.class, 1 )
                    .traverseRelation( "owner", JoinType.LEFT, "p" )
                    .up().add( AuditEntity.and( AuditEntity.property( "make" ).eq( "car3" ), AuditEntity.property( "p", "age" ).eq( 30 ) ) )
                    .getResultList();
  • In revision number 1, find all cars where the make equals "car3", or where the owner is age 10, or where there is no owner.

    List<Car> resultList = auditReader.createQuery()
                    .forEntitiesAtRevision( Car.class, 1 )
                    .traverseRelation( "owner", JoinType.LEFT, "p" )
                    .up().add( AuditEntity.or( AuditEntity.property( "make" ).eq( "car3" ), AuditEntity.property( "p", "age" ).eq( 10 ) ) )
                    .getResultList();

6.6. Performance Tuning

6.6.1. Alternative Batch Loading Algorithms

Hibernate allows you to load data for associations using one of four fetching strategies: join, select, subselect and batch. Out of these four strategies, batch loading allows for the biggest performance gains as it is an optimization strategy for select fetching. In this strategy, Hibernate retrieves a batch of entity instances or collections in a single SELECT statement by specifying a list of primary or foreign keys. Batch fetching is an optimization of the lazy select fetching strategy.

There are two ways to configure batch fetching: per-class level or per-collection level.

  • Per-class Level

    When Hibernate loads data on a per-class level, you specify the batch size of the association, that is, the number of instances to pre-load when one of them is queried. For example, consider that at runtime you have 30 instances of a car object loaded in session. Each car object belongs to an owner object. If you were to iterate through all the car objects and request their owners, with lazy loading, Hibernate will issue 30 select statements - one for each owner. This is a performance bottleneck.

    You can instead tell Hibernate to pre-load the data for the next batch of owners before they have been sought via a query. When an owner object has been queried, Hibernate will query many more of these objects in the same SELECT statement.

    The number of owner objects to query in advance depends upon the batch-size parameter specified at configuration time:

    <class name="owner" batch-size="10"></class>

    This tells Hibernate to query at least 10 more owner objects in expectation of them being needed in the near future. When a user queries the owner of car A, the owner of car B may already have been loaded as part of batch loading. When the user actually needs the owner of car B, instead of going to the database (and issuing a SELECT statement), the value can be retrieved from the current session.

    In addition to the batch-size parameter, Hibernate 4.2.0 introduced a new configuration item to improve batch loading performance: the batch fetch style, specified by the hibernate.batch_fetch_style parameter.

    Three different batch fetch styles are supported: LEGACY, PADDED and DYNAMIC. To specify which style to use, use org.hibernate.cfg.AvailableSettings#BATCH_FETCH_STYLE. A configuration sketch follows this list.

    • LEGACY: In the legacy style of loading, a set of pre-built batch sizes based on ArrayHelper.getBatchSizes(int) are utilized. Batches are loaded using the next-smaller pre-built batch size from the number of existing batchable identifiers.

      Continuing with the above example, with a batch-size setting of 30, the pre-built batch sizes would be [30, 15, 10, 9, 8, 7, .., 1]. An attempt to batch load 29 identifiers would result in batches of 15, 10, and 4. There will be 3 corresponding SQL queries, each loading 15, 10 and 4 owners from the database.

    • PADDED: This style is similar to the LEGACY style of batch loading. It still utilizes pre-built batch sizes, but uses the next-bigger batch size and pads the extra identifier placeholders.

      As with the example above, if 30 owner objects are to be initialized, there will only be one query executed against the database.

      However, if 29 owner objects are to be initialized, Hibernate will still execute only one SQL select statement of batch size 30, with the extra space padded with a repeated identifier.

    • DYNAMIC: While still conforming to batch-size restrictions, this style of batch loading dynamically builds its SQL SELECT statement using the actual number of objects to be loaded.

      For example, for 30 owner objects and a maximum batch size of 30, a call to retrieve 30 owner objects will result in one SQL SELECT statement, and a call to retrieve 35 will result in two SQL statements, of batch sizes 30 and 5 respectively. Hibernate dynamically builds the second SQL statement for exactly the 5 identifiers required, while still remaining within the batch-size limit of 30. This is different from the PADDED style, as the second SQL statement is not padded, and unlike the LEGACY style there is no fixed size for the second statement - it is created dynamically.

      For a query of less than 30 identifiers, this style will dynamically only load the number of identifiers requested.

  • Per-collection Level

    Hibernate can also batch load collections honoring the batch fetch size and styles as listed in the per-class section above.

    To reverse the example used in the previous section, consider that you need to load all the car objects owned by each owner object. If 10 owner objects are loaded in the current session, iterating through all owners will generate 10 SELECT statements, one for every call to the getCars() method. If you enable batch fetching for the cars collection in the mapping of Owner, Hibernate can pre-fetch these collections, as shown below.

    <class name="Owner"><set name="cars" batch-size="5"></set></class>

    Thus, with a batch size of five and using legacy batch style to load 10 collections, Hibernate will execute two SELECT statements, each retrieving five collections.
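
Both the batch fetch style and the default batch size can also be supplied as persistence unit properties. The following is a minimal sketch of a standalone bootstrap, assuming a hypothetical persistence unit named examplePU; in a deployment on JBoss EAP, the same properties can be placed in the persistence.xml file instead.

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class BatchFetchStyleBootstrap {

    // Creates an EntityManagerFactory that uses DYNAMIC batch fetching with a default batch size of 16.
    public EntityManagerFactory create() {
        Map<String, Object> properties = new HashMap<>();
        properties.put("hibernate.batch_fetch_style", "DYNAMIC");
        properties.put("hibernate.default_batch_fetch_size", "16");
        return Persistence.createEntityManagerFactory("examplePU", properties);
    }
}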

6.6.2. Second Level Caching of Object References for Non-mutable Data

Hibernate automatically caches data within memory for improved performance. This is accomplished by an in-memory cache which reduces the number of times that database lookups are required, especially for data that rarely changes.

Hibernate maintains two types of caches. The primary cache, also called the first-level cache, is mandatory. This cache is associated with the current session and all requests must pass through it. The secondary cache, also called the second-level cache, is optional, and is only consulted after the primary cache has been consulted.

Data is stored in the second-level cache by first disassembling it into a state array. This array is deep copied, and that deep copy is put into the cache. The reverse is done for reading from the cache. This works well for data that changes (mutable data), but is inefficient for immutable data.

Deep copying data is an expensive operation in terms of memory usage and processing speed. For large data sets, memory and processing speed become a performance-limiting factor. Hibernate allows you to specify that immutable data be referenced rather than copied. Instead of copying entire data sets, Hibernate can now store the reference to the data in the cache.

This can be done by changing the value of the configuration setting hibernate.cache.use_reference_entries to true. By default, hibernate.cache.use_reference_entries is set to false.

When hibernate.cache.use_reference_entries is set to true, an immutable data object that does not have any associations is not copied into the second-level cache, and only a reference to it is stored.

Warning

When hibernate.cache.use_reference_entries is set to true, immutable data objects with associations are still deep copied into the second-level cache.
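
The following is a minimal sketch of an entity that qualifies for reference storage when hibernate.cache.use_reference_entries is set to true; the entity and field names are assumptions for illustration. The entity is immutable, cacheable and has no associations, so only a reference to it is stored in the second-level cache.

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Immutable;

@Entity
@Immutable   // the data never changes
@Cacheable   // eligible for the second-level cache
public class CountryCode {

    @Id
    private String isoCode;     // no associations, so the cache stores only a reference

    private String displayName;

    protected CountryCode() {
    }

    public String getIsoCode() {
        return isoCode;
    }

    public String getDisplayName() {
        return displayName;
    }
}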

Appendix A. Reference Material

A.1. Hibernate Properties

Table A.1. Connection Properties Configurable in the persistence.xml File
Property Name | Value | Description

javax.persistence.jdbc.driver

org.hsqldb.jdbcDriver

The class name of the JDBC driver to be used.

javax.persistence.jdbc.user

sa

The username.

javax.persistence.jdbc.password

 

The password.

javax.persistence.jdbc.url

jdbc:hsqldb:.

The JDBC connection URL.

Table A.2. Hibernate Configuration Properties
Property Name | Description

hibernate.dialect

The class name of a Hibernate org.hibernate.dialect.Dialect. Allows Hibernate to generate SQL optimized for a particular relational database.

In most cases Hibernate will be able to choose the correct org.hibernate.dialect.Dialect implementation, based on the JDBC metadata returned by the JDBC driver.

hibernate.show_sql

Boolean. Writes all SQL statements to console. This is an alternative to setting the log category org.hibernate.SQL to debug.

hibernate.format_sql

Boolean. Pretty print the SQL in the log and console.

hibernate.default_schema

Qualify unqualified table names with the given schema/tablespace in generated SQL.

hibernate.default_catalog

Qualifies unqualified table names with the given catalog in generated SQL.

hibernate.session_factory_name

The org.hibernate.SessionFactory will be automatically bound to this name in Java Naming and Directory Interface after it has been created. For example, jndi/composite/name.

hibernate.max_fetch_depth

Sets a maximum depth for the outer join fetch tree for single-ended associations (one-to-one, many-to-one). A 0 disables default outer join fetching. The recommended value is between 0 and 3.

hibernate.default_batch_fetch_size

Sets a default size for Hibernate batch fetching of associations. The recommended values are 4, 8, and 16.

hibernate.default_entity_mode

Sets a default mode for entity representation for all sessions opened from this SessionFactory. Values include: dynamic-map, dom4j, pojo.

hibernate.order_updates

Boolean. Forces Hibernate to order SQL updates by the primary key value of the items being updated. This will result in fewer transaction deadlocks in highly concurrent systems.

hibernate.generate_statistics

Boolean. If enabled, Hibernate will collect statistics useful for performance tuning.

hibernate.use_identifier_rollback

Boolean. If enabled, generated identifier properties will be reset to default values when objects are deleted.

hibernate.use_sql_comments

Boolean. If turned on, Hibernate will generate comments inside the SQL, for easier debugging. Default value is false.

hibernate.id.new_generator_mappings

Boolean. This property is relevant when using @GeneratedValue. It indicates whether or not the new IdentifierGenerator implementations are used for javax.persistence.GenerationType.AUTO, javax.persistence.GenerationType.TABLE and javax.persistence.GenerationType.SEQUENCE. Default value is true.

hibernate.ejb.naming_strategy

Chooses the org.hibernate.cfg.NamingStrategy implementation when using Hibernate EntityManager. hibernate.ejb.naming_strategy is no longer supported in Hibernate 5.0. If used, a deprecation message will be logged indicating that it is no longer supported and has been removed in favor of the split ImplicitNamingStrategy and PhysicalNamingStrategy.

If the application does not use EntityManager, follow the instructions here to configure the NamingStrategy: Hibernate Reference Documentation - Naming Strategies.

For an example on native bootstrapping using MetadataBuilder and applying the implicit naming strategy, see http://docs.jboss.org/hibernate/orm/5.0/userguide/html_single/Hibernate_User_Guide.html#bootstrap-native-metadata in the Hibernate 5.0 documentation. The physical naming strategy can be applied by using MetadataBuilder.applyPhysicalNamingStrategy(). For further details on org.hibernate.boot.MetadataBuilder, see https://docs.jboss.org/hibernate/orm/5.0/javadocs/.

hibernate.implicit_naming_strategy

Specifies the org.hibernate.boot.model.naming.ImplicitNamingStrategy class to be used. hibernate.implicit_naming_strategy can also be used to configure a custom class that implements ImplicitNamingStrategy. The following short names are defined for this setting:

  • default - ImplicitNamingStrategyJpaCompliantImpl
  • jpa - ImplicitNamingStrategyJpaCompliantImpl
  • legacy-jpa - ImplicitNamingStrategyLegacyJpaImpl
  • legacy-hbm - ImplicitNamingStrategyLegacyHbmImpl
  • component-path - ImplicitNamingStrategyComponentPathImpl

The default is the strategy registered under the default short name. If that registration is empty, ImplicitNamingStrategyJpaCompliantImpl is used as the fallback.

hibernate.physical_naming_strategy

Pluggable strategy contract for applying physical naming rules for database object names. Specifies the PhysicalNamingStrategy class to be used. PhysicalNamingStrategyStandardImpl is used by default. hibernate.physical_naming_strategy can also be used to configure a custom class that implements PhysicalNamingStrategy.

Important

For hibernate.id.new_generator_mappings, new applications should keep the default value of true. Existing applications that used Hibernate 3.3.x may need to change it to false to continue using a sequence object or table based generator, and maintain backward compatibility.

Table A.3. Hibernate JDBC and Connection Properties
Property Name | Description

hibernate.jdbc.fetch_size

A non-zero value that determines the JDBC fetch size (calls Statement.setFetchSize()).

hibernate.jdbc.batch_size

A non-zero value enables use of JDBC2 batch updates by Hibernate. The recommended values are between 5 and 30.

hibernate.jdbc.batch_versioned_data

Boolean. Set this property to true if the JDBC driver returns correct row counts from executeBatch(). Hibernate will then use batched DML for automatically versioned data. Default value is false.

hibernate.jdbc.factory_class

Select a custom org.hibernate.jdbc.Batcher. Most applications will not need this configuration property.

hibernate.jdbc.use_scrollable_resultset

Boolean. Enables use of JDBC2 scrollable resultsets by Hibernate. This property is only necessary when using user-supplied JDBC connections. Hibernate uses connection metadata otherwise.

hibernate.jdbc.use_streams_for_binary

Boolean. This is a system-level property. Use streams when writing/reading binary or serializable types to/from JDBC.

hibernate.jdbc.use_get_generated_keys

Boolean. Enables use of JDBC3 PreparedStatement.getGeneratedKeys() to retrieve natively generated keys after insert. Requires JDBC3+ driver and JRE1.4+. Set to false if JDBC driver has problems with the Hibernate identifier generators. By default, it tries to determine the driver capabilities using connection metadata.

hibernate.connection.provider_class

The class name of a custom org.hibernate.connection.ConnectionProvider which provides JDBC connections to Hibernate.

hibernate.connection.isolation

Sets the JDBC transaction isolation level. Check java.sql.Connection for meaningful values, but note that most databases do not support all isolation levels and some define additional, non-standard isolations. Standard values are 1, 2, 4, 8.

hibernate.connection.autocommit

Boolean. This property is not recommended for use. Enables autocommit for JDBC pooled connections.

hibernate.connection.release_mode

Specifies when Hibernate should release JDBC connections. By default, a JDBC connection is held until the session is explicitly closed or disconnected. The default value auto chooses after_statement for the Jakarta Transactions and CMT transaction strategies, and after_transaction for the JDBC transaction strategy.

Available values are auto (default), on_close, after_transaction, after_statement.

This setting only affects the session returned from SessionFactory.openSession. For the session obtained through SessionFactory.getCurrentSession, the CurrentSessionContext implementation configured for use controls the connection release mode for that session.

hibernate.connection.<propertyName>

Pass the JDBC property <propertyName> to DriverManager.getConnection().

hibernate.jndi.<propertyName>

Pass the property <propertyName> to the Java Naming and Directory Interface InitialContextFactory.

Table A.4. Hibernate Cache Properties
Property Name | Description

hibernate.cache.region.factory_class

The class name of a custom org.hibernate.cache.spi.RegionFactory implementation that provides second-level cache regions.

hibernate.cache.use_minimal_puts

Boolean. Optimizes second-level cache operation to minimize writes, at the cost of more frequent reads. This setting is most useful for clustered caches and, in Hibernate3, is enabled by default for clustered cache implementations.

hibernate.cache.use_query_cache

Boolean. Enables the query cache. Individual queries still have to be set cacheable.

hibernate.cache.use_second_level_cache

Boolean. Used to completely disable the second level cache, which is enabled by default for classes that specify a <cache> mapping.

hibernate.cache.query_cache_factory

The class name of a custom QueryCache interface. The default value is the built-in StandardQueryCache.

hibernate.cache.region_prefix

A prefix to use for second-level cache region names.

hibernate.cache.use_structured_entries

Boolean. Forces Hibernate to store data in the second-level cache in a more human-friendly format.

hibernate.cache.default_cache_concurrency_strategy

Setting used to give the name of the default org.hibernate.annotations.CacheConcurrencyStrategy to use when either @Cacheable or @Cache is used. @Cache(usage="..") is used to override this default.

Table A.5. Hibernate Transaction Properties
Property Name | Description

hibernate.transaction.factory_class

The class name of a TransactionFactory to use with the Hibernate Transaction API. Defaults to JDBCTransactionFactory.

jta.UserTransaction

A Java Naming and Directory Interface name used by JTATransactionFactory to obtain the Jakarta Transactions UserTransaction from the application server.

hibernate.transaction.manager_lookup_class

The class name of a TransactionManagerLookup. It is required when JVM-level caching is enabled or when using the hilo generator in a Jakarta Transactions environment.

hibernate.transaction.flush_before_completion

Boolean. If enabled, the session will be automatically flushed during the before completion phase of the transaction. Built-in and automatic session context management is preferred.

hibernate.transaction.auto_close_session

Boolean. If enabled, the session will be automatically closed during the after completion phase of the transaction. Built-in and automatic session context management is preferred.

Table A.6. Miscellaneous Hibernate Properties
Property Name | Description

hibernate.current_session_context_class

Supply a custom strategy for the scoping of the "current" Session. Values include jta, thread, managed, custom.Class.

hibernate.query.factory_class

Chooses the HQL parser implementation: org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory or org.hibernate.hql.internal.classic.ClassicQueryTranslatorFactory.

hibernate.query.substitutions

Used to map from tokens in Hibernate queries to SQL tokens (tokens might be function or literal names). For example, hqlLiteral=SQL_LITERAL, hqlFunction=SQLFUNC.

hibernate.query.conventional_java_constants

Indicates whether the Java constants follow the Java naming conventions or not. Default is false. Existing applications may set it to true only if conventional Java constants are being used in the applications.

Setting this to true can significantly improve performance, because Hibernate can then determine whether an alias should be treated as a Java constant simply by checking whether the alias follows the Java naming conventions.

When this property is set to false, Hibernate determines whether an alias should be treated as a Java constant by attempting to load the alias as a class, which is an overhead for the application. If the alias fails to load as a class, then Hibernate treats the alias as a Java constant.

hibernate.hbm2ddl.auto

Automatically validates or exports schema DDL to the database when the SessionFactory is created. With create-drop, the database schema will be dropped when the SessionFactory is closed explicitly. Property value options are validate, update, create, and create-drop.

hibernate.hbm2ddl.import_files

Comma-separated names of the optional files containing SQL DML statements executed during the SessionFactory creation. This is useful for testing or demonstrating. For example, by adding INSERT statements, the database can be populated with a minimal set of data when it is deployed. An example value is /humans.sql,/dogs.sql.

File order matters, as the statements of a given file are executed before the statements of the following files. These statements are only executed if the schema is created, for example if hibernate.hbm2ddl.auto is set to create or create-drop.

hibernate.hbm2ddl.import_files_sql_extractor

The classname of a custom ImportSqlCommandExtractor. Defaults to the built-in SingleLineSqlCommandExtractor. This is useful for implementing a dedicated parser that extracts a single SQL statement from each import file. Hibernate also provides MultipleLinesSqlCommandExtractor, which supports instructions/comments and quoted strings spread over multiple lines (mandatory semicolon at the end of each statement).

hibernate.bytecode.use_reflection_optimizer

Boolean. This is a system-level property, which cannot be set in the hibernate.cfg.xml file. Enables the use of bytecode manipulation instead of runtime reflection. Reflection can sometimes be useful when troubleshooting. Hibernate always requires either cglib or javassist even if the optimizer is turned off.

hibernate.bytecode.provider

Either javassist or cglib can be used as the bytecode manipulation engine; the default is javassist. The value is either javassist or cglib.

Table A.7. Hibernate SQL Dialects (hibernate.dialect)
RDBMS | Dialect

DB2

org.hibernate.dialect.DB2Dialect

DB2 AS/400

org.hibernate.dialect.DB2400Dialect

DB2 OS390

org.hibernate.dialect.DB2390Dialect

Firebird

org.hibernate.dialect.FirebirdDialect

FrontBase

org.hibernate.dialect.FrontbaseDialect

H2 Database

org.hibernate.dialect.H2Dialect

HypersonicSQL

org.hibernate.dialect.HSQLDialect

Informix

org.hibernate.dialect.InformixDialect

Ingres

org.hibernate.dialect.IngresDialect

Interbase

org.hibernate.dialect.InterbaseDialect

MariaDB 10

org.hibernate.dialect.MariaDB10Dialect

MariaDB Galera Cluster 10

org.hibernate.dialect.MariaDB10Dialect

Mckoi SQL

org.hibernate.dialect.MckoiDialect

Microsoft SQL Server 2000

org.hibernate.dialect.SQLServerDialect

Microsoft SQL Server 2005

org.hibernate.dialect.SQLServer2005Dialect

Microsoft SQL Server 2008

org.hibernate.dialect.SQLServer2008Dialect

Microsoft SQL Server 2012

org.hibernate.dialect.SQLServer2012Dialect

Microsoft SQL Server 2014

org.hibernate.dialect.SQLServer2012Dialect

Microsoft SQL Server 2016

org.hibernate.dialect.SQLServer2012Dialect

MySQL5

org.hibernate.dialect.MySQL5Dialect

MySQL5.5

org.hibernate.dialect.MySQL55Dialect

MySQL5.7

org.hibernate.dialect.MySQL57Dialect

Oracle (any version)

org.hibernate.dialect.OracleDialect

Oracle 9i

org.hibernate.dialect.Oracle9iDialect

Oracle 10g

org.hibernate.dialect.Oracle10gDialect

Oracle 11g

org.hibernate.dialect.Oracle10gDialect

Oracle 12c

org.hibernate.dialect.Oracle12cDialect

Pointbase

org.hibernate.dialect.PointbaseDialect

PostgreSQL

org.hibernate.dialect.PostgreSQLDialect

PostgreSQL 9.2

org.hibernate.dialect.PostgreSQL9Dialect

PostgreSQL 9.3

org.hibernate.dialect.PostgreSQL9Dialect

PostgreSQL 9.4

org.hibernate.dialect.PostgreSQL94Dialect

Postgres Plus Advanced Server

org.hibernate.dialect.PostgresPlusDialect

Progress

org.hibernate.dialect.ProgressDialect

SAP DB

org.hibernate.dialect.SAPDBDialect

Sybase

org.hibernate.dialect.SybaseASE15Dialect

Sybase 15.7

org.hibernate.dialect.SybaseASE157Dialect

Sybase 16

org.hibernate.dialect.SybaseASE157Dialect

Sybase Anywhere

org.hibernate.dialect.SybaseAnywhereDialect

Important

The hibernate.dialect property should be set to the correct org.hibernate.dialect.Dialect subclass for the application database. If a dialect is specified, Hibernate will use sensible defaults for some of the other properties. This means that they do not have to be specified manually.
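
The following is a minimal sketch of setting the dialect as a persistence unit property in a standalone bootstrap, assuming a hypothetical persistence unit named examplePU and a PostgreSQL 9.4 database; the same property can be placed in the persistence.xml file instead.

import java.util.Collections;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class DialectBootstrap {

    // Sets the dialect explicitly; in most cases Hibernate can also detect it from the JDBC metadata.
    public EntityManagerFactory create() {
        return Persistence.createEntityManagerFactory(
                "examplePU",
                Collections.singletonMap("hibernate.dialect",
                        "org.hibernate.dialect.PostgreSQL94Dialect"));
    }
}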





Revised on 2022-02-01 13:02:21 UTC

Legal Notice

Copyright © 2022 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.