Chapter 19. Logging
The logging mechanism allows you to store information about the execution of a process instance. It is provided by a special event listener that receives the relevant events from the Process Engine, so that the log information can be stored separately from other, non-log information, either in the built-in server database (H2) or in a connected data source, using JPA or Hibernate.
The jbpm-audit module provides the event listener and allows you to store process-related information directly in a database using JPA or Hibernate. Data for the following entities is stored in the tables listed below:
- Process instance as processinstancelog
- Element instance as nodeinstancelog
- Variable instance as variableinstancelog
The process instance data is stored in the processinstancelog table with the following fields:
Field | Description | Nullable |
---|---|---|
id | The primary key of the log entity | No |
end_date | The end date of the process instance | Yes |
processid | The name (id) of the underlying process | Yes |
processinstanceid | The id of the process instance | No |
start_date | The start date of the process instance | Yes |
status | The status of the process instance | Yes |
parentProcessInstanceId | The process instance id of the parent process instance if applicable | Yes |
outcome | The outcome of the process instance (details on the process finish, such as error code) | Yes |
The node instance data is stored in the nodeinstancelog table with the following fields:
Field | Description | Nullable |
---|---|---|
id | The primary key of the log entity | No |
log_date | The date of the event | Yes |
nodeid | The node id of the underlying Process Element | Yes |
nodeinstanceid | The id of the node instance | Yes |
nodename | The name of the underlying node | Yes |
processid | The id of the underlying process | Yes |
processinstanceid | The id of the process instance the node instance belongs to | No |
type | The type of the event (0 = node entered, 1 = node exited) | No |
The variable instance data is stored in the variableinstancelog table with the following fields:
Field | Description | Nullable |
---|---|---|
id | The primary key of the log entity | No |
log_date | The date of the event | Yes |
processid | The name (id) of the underlying process | Yes |
processinstanceid | The id of the process instance | No |
value | The value of the variable at log time | Yes |
variableid | The variable id as defined in the process definition | Yes |
variableinstanceid | The id of the variable instance | Yes |
outcome | The outcome of the process instance (details on the process finish, such as error code) | Yes |
If necessary, you can define your own data model for custom information and use process event listeners to extract and store that information.
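For example, a custom process event listener can capture additional data and store it in your own model. The following is a minimal sketch; the CustomAuditListener class and its saveCustomEntry method are hypothetical placeholders, while DefaultProcessEventListener and the event accessors are part of the public KIE API.
```java
import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessCompletedEvent;
import org.kie.api.event.process.ProcessStartedEvent;

// A minimal sketch of a custom audit listener. How and where the data is
// persisted (the saveCustomEntry method below) is entirely up to you.
public class CustomAuditListener extends DefaultProcessEventListener {

    @Override
    public void afterProcessStarted(ProcessStartedEvent event) {
        saveCustomEntry(event.getProcessInstance().getId(),
                        event.getProcessInstance().getProcessId(),
                        "STARTED");
    }

    @Override
    public void afterProcessCompleted(ProcessCompletedEvent event) {
        saveCustomEntry(event.getProcessInstance().getId(),
                        event.getProcessInstance().getProcessId(),
                        "COMPLETED");
    }

    // Hypothetical persistence call; replace with JPA, JDBC, or any other storage.
    private void saveCustomEntry(long processInstanceId, String processId, String state) {
        // store the custom audit entry in your own data model
    }
}
```
Such a listener is registered on the Kie Session in the same way as any other event listener, for example with ksession.addEventListener(new CustomAuditListener()).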
19.1. Logging events to database
To log events that occur at runtime in a process instance, a node instance, or a variable instance, do the following:
- Map the log classes to the data source so that the given data source accepts the log entries. On Red Hat JBoss EAP, edit the data source properties in the persistence.xml file.
Example 19.1. The ProcessInstanceLog, NodeInstanceLog and VariableInstanceLog classes enabled for processInstanceDS
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence version="1.0"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd
                                 http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_1_0.xsd"
             xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns="http://java.sun.com/xml/ns/persistence">

  <persistence-unit name="org.jbpm.persistence.jpa">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/processInstanceDS</jta-data-source>
    <class>org.drools.persistence.info.SessionInfo</class>
    <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
    <class>org.drools.persistence.info.WorkItemInfo</class>
    <class>org.jbpm.process.audit.ProcessInstanceLog</class>
    <class>org.jbpm.process.audit.NodeInstanceLog</class>
    <class>org.jbpm.process.audit.VariableInstanceLog</class>
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
      <property name="hibernate.max_fetch_depth" value="3"/>
      <property name="hibernate.hbm2ddl.auto" value="update"/>
      <property name="hibernate.show_sql" value="true"/>
      <property name="hibernate.transaction.manager_lookup_class"
                value="org.hibernate.transaction.BTMTransactionManagerLookup"/>
    </properties>
  </persistence-unit>
</persistence>
```
- Register a logger on your Kie Session. A sketch that reads the stored data back through the audit API follows this procedure.
Example 19.2. Import the Loggers
```java
import org.jbpm.process.audit.AuditLogService;
import org.jbpm.process.audit.AuditLoggerFactory;
import org.jbpm.process.audit.AuditLoggerFactory.Type;
import org.jbpm.process.audit.JPAAuditLogService;
...
```
Example 19.3. Registering a Logger to a Kie Session
```java
@PersistenceUnit(unitName = PERSISTENCE_UNIT_NAME)
private EntityManagerFactory emf;

private AuditLogService auditLogService;

@PostConstruct
public void configure() {
    auditLogService = new JPAAuditLogService(emf);
    ((JPAAuditLogService) auditLogService).setPersistenceUnitName(PERSISTENCE_UNIT_NAME);

    RuntimeEngine runtime = singletonManager.getRuntimeEngine(EmptyContext.get());
    KieSession ksession = runtime.getKieSession();
    AuditLoggerFactory.newInstance(Type.JPA, ksession, null);
}
```
- Optionally, call the addFilter method on the logger to filter out irrelevant information. Only information accepted by all filters appears in the database. Logger classes can be viewed in the Audit View. The classes are provided by the jbpm-audit module, which you can add to your project as the following dependency:
```xml
<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-audit</artifactId>
  <version>6.5.0.Final-redhat-2</version>
</dependency>
```
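Once the logger is registered and process instances have run, the stored data can be read back through the audit API. The following is a minimal sketch, assuming the same EntityManagerFactory (emf) that was used in Example 19.3; the AuditQuerySample class name and the printAuditData method are illustrative only.
```java
import javax.persistence.EntityManagerFactory;

import org.jbpm.process.audit.AuditLogService;
import org.jbpm.process.audit.JPAAuditLogService;
import org.kie.api.runtime.manager.audit.ProcessInstanceLog;
import org.kie.api.runtime.manager.audit.VariableInstanceLog;

public class AuditQuerySample {

    // Prints basic audit information for every instance of the given process.
    // The EntityManagerFactory is the same one used to configure the logger in Example 19.3.
    public void printAuditData(EntityManagerFactory emf, String processId) {
        AuditLogService auditLogService = new JPAAuditLogService(emf);

        for (ProcessInstanceLog processLog : auditLogService.findProcessInstances(processId)) {
            System.out.println("instance " + processLog.getProcessInstanceId()
                    + " started " + processLog.getStart()
                    + " ended " + processLog.getEnd()
                    + " status " + processLog.getStatus());

            // Variables recorded for this process instance.
            for (VariableInstanceLog variableLog
                    : auditLogService.findVariableInstances(processLog.getProcessInstanceId())) {
                System.out.println("  " + variableLog.getVariableId() + " = " + variableLog.getValue());
            }
        }
    }
}
```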
19.2. Logback Functionality
Red Hat JBoss BPM Suite provides logback functionality for logging configuration. Everything is logged to the Simple Logging Facade for Java (SLF4J), which delegates the log messages to Logback, Apache Commons Logging, Log4j, or java.util.logging. Add a dependency on the logging adaptor for your logging framework of choice. If you are not using any logging framework yet, you can use Logback by adding this Maven dependency:
```xml
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.x</version>
</dependency>
```
The slf4j-nop and slf4j-simple bindings are ideal for a lightweight environment.
19.3. Configuring Logging
To configure the logging level of the packages, create a logback.xml file at business-central.war/WEB-INF/classes/logback.xml. For example, to set the logging level of the org.drools package to debug for verbose logging, add the following to the file:
```xml
<configuration>
  <logger name="org.drools" level="debug"/>
  ...
</configuration>
```
Similarly, you can configure logging for packages such as the following:
- org.guvnor
- org.jbpm
- org.kie
- org.slf4j
- org.dashbuilder
- org.uberfire
- org.errai
- etc.
If you configure logging with log4j, place the log4j.xml file at business-central.war/WEB-INF/classes/log4j.xml and configure it in the following way:
```xml
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <category name="org.drools">
    <priority value="debug" />
  </category>
  ...
</log4j:configuration>
```
Additional logging can be configured in the individual container. To configure logging for Red Hat JBoss Enterprise Application Platform, see the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide.
19.4. Managing log files
Red Hat JBoss BPM Suite manages most of the required maintenance. Automatically cleaned runtime data includes:
- Process instances data, which is removed upon process instance completion.
- Work items data, which is removed upon work item completion.
- Task instances data, which is removed upon completion of the process to which the given task belongs.
Runtime data, which may not be automatically cleaned, includes session information data. This depends on the selected runtime strategy:
- Singleton strategy ensures that session information runtime data will not be automatically removed.
- Per request strategy allows automatic removal when a given request terminates.
- Per process instance strategy removes the data automatically when the process instance mapped to the given session completes or is aborted.
Red Hat JBoss BPM Suite does not remove executor request and error information.
In order not to lose track of process instances, Red Hat JBoss BPM Suite offers audit data tables. These are used by default and keep track of the BPM Suite environment. JBoss BPM Suite offers two ways to manage and maintain the audit data tables:
- Automatic clean-up
- Manual clean-up
19.4.1. Automatic Clean-Up
Automatic clean-up uses the LogCleanupCommand executor command, which contains the logic to clean up all or selected data automatically. An advantage of the automatic clean-up method is the ability to schedule repeated clean-ups by using the recurring job feature of the JBoss BPM Suite executor: when one job completes, it informs the JBoss BPM Suite executor whether and when the next instance of the job should be executed. By default, LogCleanupCommand is executed once a day, but it can be reconfigured to run at different intervals.
There are several important configuration options that can be used with the LogCleanupCommand command:
Name | Description | Is Exclusive |
---|---|---|
SkipProcessLog | Indicates whether the clean-up of process instance, node instance, and variable logs should be omitted (default: false) | No, can be used with other parameters |
SkipTaskLog | Indicates whether the task audit and task event log clean-up should be omitted (default: false) | No, can be used with other parameters |
SkipExecutorLog | Indicates whether the clean-up of JBoss BPM Suite executor entries should be omitted (default: false) | No, can be used with other parameters |
SingleRun | Indicates whether the job routine should run only once (default: false) | No, can be used with other parameters |
NextRun | Sets when the next job instance runs. For example, set 12h for the job to be executed every 12 hours. If the option is not given, the next job runs 24 hours after the completion of the current job | No, can be used with other parameters |
OlderThan | Causes logs older than the given date to be removed. The date format is YYYY-MM-DD | Yes, cannot be used when the OlderThanPeriod parameter is used |
OlderThanPeriod | Causes logs older than the given timer expression to be removed. For example, set 30d to remove logs older than 30 days from the current time | Yes, cannot be used when the OlderThan parameter is used |
ForProcess | Specifies process definition ID for which logs should be removed | No, can be used with other parameters |
ForDeployment | Specifies deployment ID for which logs should be removed | No, can be used with other parameters |
EmfName | The name of the persistence unit to be used to perform the delete operations | N/A |
LogCleanupCommand does not remove any active instances, such as running process instances, task instances, or executor jobs.
While all audit tables have a time stamp, some may be missing other parameters, such as the process id or the deployment id. For that reason, it is recommended to use the date-based parameters when managing the clean-up job routine.
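If you prefer to schedule the clean-up from code rather than through Business Central (described in the next section), the command can be scheduled through the executor service API. The following is a minimal sketch, assuming an org.kie.api.executor.ExecutorService instance is already available in your application (for example, injected); the LogCleanupScheduler class name and the parameter values are illustrative only.
```java
import org.kie.api.executor.CommandContext;
import org.kie.api.executor.ExecutorService;

public class LogCleanupScheduler {

    // Schedules the LogCleanupCommand using an existing executor service.
    // The parameter values below are illustrative only.
    public void schedule(ExecutorService executorService) {
        CommandContext ctx = new CommandContext();
        // Remove logs older than 30 days from the current time.
        ctx.setData("OlderThanPeriod", "30d");
        // Reschedule the next run 12 hours after this one completes.
        ctx.setData("NextRun", "12h");
        // Leave executor request and error entries untouched in this example.
        ctx.setData("SkipExecutorLog", "true");

        executorService.scheduleRequest("org.jbpm.executor.commands.LogCleanupCommand", ctx);
    }
}
```
Because NextRun is set, the executor reschedules the command after each completed run, so the clean-up repeats at the given interval.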
19.4.2. Setting up Automatic Clean-up Job
To set up an automatic clean-up job, do the following:
- Open Business Central in your web browser (if running locally, http://localhost:8080/business-central) and log in as a user with the admin role.
- Go to Deploy → Jobs.
- Click the button in the top right-hand corner of the page to create a new job.
- Enter a name, due date, and time. Enter the following into the Type text field:
org.jbpm.executor.commands.LogCleanupCommand
- To use any of the parameters listed above, add them to the job: in the key section, enter a parameter name; in the value section, enter true or false, depending on the desired outcome.
- Click Create to finalize the job creation wizard. You have successfully created an automatic clean-up job.
19.4.3. Manual Clean-Up
You can use the audit API to perform the clean-up manually, which gives you more control over the parameters and thus over what is removed. The audit API is divided into the following areas:
- Process audit, which is used to clean up process, node, and variable logs, accessible in the jbpm-audit module
- Task audit, which is used to clean up tasks and task events, accessible in the jbpm-human-task-audit module
- Executor jobs, which is used to clean up executor jobs and errors, accessible in the jbpm-executor module
Modules are sorted hierarchically and can be accessed as follows:
- org.jbpm.process.audit.JPAAuditLogService
- org.jbpm.services.task.audit.service.TaskJPAAuditService
- org.jbpm.executor.impl.jpa.ExecutorJPAAuditService
Several examples of manual clean-up follow:
Example 19.4. Removal of completed process instance logs
```java
import org.jbpm.process.audit.JPAAuditLogService;
import org.kie.internal.runtime.manager.audit.query.ProcessInstanceLogDeleteBuilder;
import org.kie.api.runtime.process.ProcessInstance;

JPAAuditLogService auditService = new JPAAuditLogService(emf);

ProcessInstanceLogDeleteBuilder updateBuilder = auditService.processInstanceLogDelete()
        .status(ProcessInstance.STATE_COMPLETED);
int result = updateBuilder.build().execute();
```
Example 19.5. Task audit logs removal for the org.jbpm:HR:1.0 deployment
```java
import org.jbpm.services.task.audit.service.TaskJPAAuditService;
import org.kie.internal.task.query.AuditTaskDeleteBuilder;

TaskJPAAuditService auditService = new TaskJPAAuditService(emf);

AuditTaskDeleteBuilder updateBuilder = auditService.auditTaskDelete()
        .deploymentId("org.jbpm:HR:1.0");
int result = updateBuilder.build().execute();
```
Example 19.6. Executor error and requests removal
```java
import java.util.Date;

import org.jbpm.executor.impl.jpa.ExecutorJPAAuditService;
import org.kie.internal.runtime.manager.audit.query.ErrorInfoDeleteBuilder;
import org.kie.internal.runtime.manager.audit.query.RequestInfoLogDeleteBuilder;

ExecutorJPAAuditService auditService = new ExecutorJPAAuditService(emf);

// Error information must be removed before the request information (see the note below).
ErrorInfoDeleteBuilder updateBuilder = auditService.errorInfoLogDeleteBuilder()
        .dateRangeEnd(new Date());
int result = updateBuilder.build().execute();

RequestInfoLogDeleteBuilder updateBuilder2 = auditService.requestInfoLogDeleteBuilder()
        .dateRangeEnd(new Date());
result = updateBuilder2.build().execute();
```
When removing executor entries, ensure that the error information is removed before the request information because of constraints set up on the database.
Parts of the code utilize internal API. While this does not have any direct impact on the functionality of our product, internal API is subject to change and Red Hat cannot guarantee backward compatibility.