Chapter 20. Java APIs
Red Hat JBoss BRMS and Red Hat JBoss BPM Suite provide various Java APIs which enable you to embed runtime engines into your application.
It is recommended to use the services described in Section 20.3, “KIE Services”. These high-level APIs deal with low-level details and enable you to focus solely on business logic.
20.1. KIE API
The KIE (Knowledge Is Everything) API is used to load and execute business processes. To interact with the process engine—for example to start a process—you need to set up a session, which is used to communicate with the process engine. A session must have a reference to a knowledge base, which contains references to all the relevant process definitions and searches the definitions whenever necessary.
To create a session:
- First, create a knowledge base and load all the necessary process definitions. Process definitions can be loaded from various sources, such as the class path, file system, or a process repository.
- Instantiate a session.
Once a session is set up, you can use the session to execute processes. Every time a process is started, a new process instance of that particular process definition is created. The process instance maintains its state throughout the process life cycle.
For example, to write an application that processes sales orders, define one or more process definitions that specify how the orders must be processed. When starting the application, create a knowledge base that contains the specified process definitions. Based on the knowledge base, instantiate a session such that each time a new sales order comes in, a new process instance is started for that sales order. The process instance then contains the state of the process for that specific sales request.
A knowledge base can be shared across sessions and is usually created once, at the start of the application. Knowledge bases can be dynamically changed, which allows you to add or remove processes at runtime.
You can create multiple independent sessions; for example, to separate all processes for one customer from processes for another customer, create an independent session for each customer. Multiple sessions can also be used for scalability reasons.
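As a sketch of the pattern described above, a single knowledge base can be created once and then used to spawn an independent session per customer. The process ID here is illustrative:

```java
import org.kie.api.KieBase;
import org.kie.api.runtime.KieSession;

// one knowledge base, typically created once at application start
KieBase kbase = ...;  // loaded from a KieContainer or KieHelper

// one independent session per customer; each maintains its own runtime state
KieSession customerASession = kbase.newKieSession();
KieSession customerBSession = kbase.newKieSession();

// the same process definition executes independently in each session
customerASession.startProcess("com.sample.SalesOrder");  // illustrative process ID
customerBSession.startProcess("com.sample.SalesOrder");
```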
The Red Hat JBoss BPM Suite projects have a clear separation between the APIs users interact with and the actual implementation classes. The public API exposes most of the features users can safely use; experienced users can still access internal classes, but keep in mind that the internal APIs may change in the future.
20.1.1. KIE Framework
In the Red Hat JBoss BPM Suite environment, the life cycle of KIE systems is divided into the following phases:
- Author: Knowledge authoring: creating DRLs, BPMN2 sources, decision tables, and class models.
- Build: Building the authored knowledge into deployable units (kJARs).
- Test: Testing the knowledge artifacts before they are deployed to the application.
- Deploy: Deploying the artifacts to be used to a Maven repository.
- Utilize: Loading a kJAR exposed at runtime using a KIE container. A session, which the application can interact with, is created from the KIE container.
- Run: Interacting with a session using the KIE API.
- Work: Interacting with a session using the user interface.
- Manage: Managing any session or KIE container.
20.1.2. KIE Base
The KIE API enables you to create a knowledge base that includes all the process definitions that may need to be executed. To create a knowledge base, use KieHelper to load processes from various resources (for example, from the class path or from the file system), and then create a new knowledge base from that helper. The following code snippet shows how to manually create a simple knowledge base consisting of only one process definition, using a resource from the class path:
KieBase kBase = new KieHelper()
    .addResource(ResourceFactory.newClassPathResource("MyProcess.bpmn"))
    .build();
The code snippet above uses org.kie.internal.utils.KieHelper and org.kie.internal.io.ResourceFactory, which are part of the internal API. Using RuntimeManager is the recommended way of creating a knowledge base and a knowledge session.
KieBase or KiePackage serialization is not supported in Red Hat JBoss BPM Suite 6.4. For more information, see Is serialization of kbase/package supported in BRMS 6/BPM Suite 6/RHDM 7?.
The classes belonging to the internal API (org.kie.internal) are not supported because they are subject to change.
KieBase is a repository that contains all knowledge definitions of the application—rules, processes, forms, and data models—but does not contain any runtime data. Knowledge sessions are created based on a particular KieBase. While creating a knowledge base can be onerous, creating a knowledge session is very light. Therefore, it is recommended to cache knowledge bases as much as possible to allow repeated session creation. The caching mechanism is automatically provided by KieContainer.
See the following KieBase attributes:
- name
  The name used to retrieve the KieBase from the KieContainer. This attribute is mandatory.
  Default value: none. Admitted values: any.
- includes
  A comma-separated list of other KieBase objects contained in this kmodule. The artifacts of the included KieBase objects are included as well. A knowledge base can be contained in multiple KIE modules, assuming that it is declared as a dependency in the pom.xml file of the modules.
  Default value: none. Admitted values: a comma-separated list.
- packages
  By default, all artifacts (such as rules and processes) in the resources directory are included in the knowledge base. This attribute enables you to limit the number of compiled artifacts: only the packages listed in this attribute are compiled.
  Default value: all. Admitted values: a comma-separated list.
- default
  Defines whether the knowledge base is the default knowledge base for a module, and therefore can be created from the KIE container without passing any name. Each module can have at most one default knowledge base.
  Default value: false. Admitted values: true or false.
- scope
  The CDI bean scope that is set for the CDI bean representing the KieBase, for example ApplicationScoped, SessionScoped, or RequestScoped. See the CDI specification for more information about CDI scope definitions.
  The scope can be specified in two ways:
  - As javax.enterprise.context.INTERFACE, for example.
  - As INTERFACE. The javax.enterprise.context package is added automatically if no package is specified.
  Default value: javax.enterprise.context.ApplicationScoped. Admitted values: the name of an interface in the javax.enterprise.context package representing a valid CDI bean scope.
- equalsBehavior
  Defines the behavior of Red Hat JBoss BRMS when a new fact is inserted into the working memory.
  If set to identity, a new FactHandle is always created unless the same object is already present in the working memory.
  If set to equality, a new FactHandle is created only if the newly inserted object is not equal, according to its equals() method, to an existing fact.
  Default value: identity. Admitted values: identity or equality.
- eventProcessingMode
  If set to cloud, the KieBase treats events as normal facts.
  If set to stream, temporal reasoning on events is allowed. See Section 7.6, “Temporal Operations” for more information.
  Default value: cloud. Admitted values: cloud or stream.
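The attributes above correspond to attributes of the kbase and ksession elements in the kmodule.xml descriptor; a minimal sketch with illustrative names:

```xml
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <kbase name="KBase1" default="true" packages="com.sample"
         equalsBehavior="equality" eventProcessingMode="stream">
    <ksession name="KSession1" default="true" type="stateful"/>
  </kbase>
</kmodule>
```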
The following example shows how to update assets using the KieBase object:
import java.util.Collections;
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

// build kbase with the replace-version-1.bpmn process
KieBase kbase = KieServices.Factory.get().newKieClasspathContainer().getKieBase();
kbase.addKnowledgePackages(getProcessPackages("replace-version-1.bpmn"));

KieSession ksession = kbase.newStatefulKnowledgeSession();
try {
    // start a replace-version-1.bpmn process instance
    ksession.startProcess("com.sample.process",
        Collections.<String, Object>singletonMap("name", "process1"));

    // add the replace-version-2.bpmn process and start its instance
    kbase.addKnowledgePackages(getProcessPackages("replace-version-2.bpmn"));
    ksession.startProcess("com.sample.process",
        Collections.<String, Object>singletonMap("name", "process2"));

    // signal all processes in the session to continue (both instances finish)
    ksession.signalEvent("continue", null);
} finally {
    ksession.dispose();
}
20.1.3. KIE Session
Once the knowledge base is loaded, create a session to interact with the engine. The session can then be used to start new processes and signal events. The following code snippet shows how to create a session and start a new process instance:
KieSession ksession = kbase.newKieSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess");
KieSession stores and executes runtime data. It is created from a knowledge base, or, more easily, directly from KieContainer if it is defined in the kmodule.xml file.
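For example, assuming a session named KSession1 is declared in the kmodule.xml file of the module, it can be created directly from the KIE container:

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();

// fetch the session declared in kmodule.xml by name;
// calling newKieSession() with no argument returns the default session
KieSession ksession = kContainer.newKieSession("KSession1");
```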
- name
  A unique name of the KieSession, used to fetch the KieSession from the KieContainer. This attribute is mandatory.
  Default value: none. Admitted values: any.
- type
  A session set to stateful enables you to work iteratively with the working memory, while a session set to stateless is used for a one-off execution of rules only.
  A stateful session stores the knowledge state: the state changes every time a fact is added, updated, or deleted, and every time a rule is fired. An execution in a stateless session has no information about previous actions, such as rule firings.
  Default value: stateful. Admitted values: stateful or stateless.
- default
  Defines whether the KieSession is the default session for a module, and therefore can be created from the KieContainer without passing any name. There can be at most one default KieSession of each type in a module.
  Default value: false. Admitted values: true or false.
- clockType
  Defines whether event time stamps are determined by the system clock or by a pseudo clock controlled by the application. The pseudo clock is especially useful for unit testing temporal rules.
  Default value: realtime. Admitted values: realtime or pseudo.
- beliefSystem
  Defines the type of belief system used by the KieSession. A belief system is a truth maintenance system. For more information, see Section 6.4, “Truth Maintenance”.
  A belief system tries to deduce the truth from knowledge (facts). For example, if a new fact is inserted based on another fact which is later removed from the engine, the system can determine that the newly inserted fact should be removed as well.
  Default value: simple. Admitted values: simple, jtms, or defeasible.
Alternatively, you can get a KIE session from the Runtime Manager:
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.runtime.manager.context.ProcessInstanceIdContext;

...
RuntimeManager manager = RuntimeManagerFactory.Factory.get()
    .newPerProcessInstanceRuntimeManager(environment);
RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get());
KieSession ksession = runtime.getKieSession();

// do something here, for example:
ksession.startProcess("org.jbpm.hello");

manager.disposeRuntimeEngine(runtime);
manager.close();
For Maven dependencies, see Embedded jBPM Engine Dependencies. For further information about the Runtime Manager, see Section 20.2, “Runtime Manager”.
20.1.3.1. Process Runtime Interface
The ProcessRuntime
interface, which is extended by KieSession
, defines methods for interacting with processes. See the interface below:
package org.kie.api.runtime.process;

interface ProcessRuntime {

    /**
     * Start a new process instance. The process (definition) that should
     * be used is referenced by the given process ID.
     *
     * @param processId the ID of the process that should be started
     * @return the ProcessInstance that represents the instance
     *         of the process that was started
     */
    ProcessInstance startProcess(String processId);

    /**
     * Start a new process instance. The process (definition) that should
     * be used is referenced by the given process ID. Parameters can be passed
     * to the process instance (as name-value pairs), and these will be set
     * as variables of the process instance.
     *
     * @param processId the ID of the process that should be started
     * @param parameters the process variables that should be set when
     *        starting the process instance
     * @return the ProcessInstance that represents the instance
     *         of the process that was started
     */
    ProcessInstance startProcess(String processId, Map<String, Object> parameters);

    /**
     * Signals the engine that an event has occurred. The type parameter defines
     * which type of event it is and the event parameter can contain additional
     * information related to the event. All process instances that are listening
     * to this type of (external) event will be notified. For performance reasons,
     * this type of event signaling should only be used if one process instance
     * should be able to notify other process instances. For an internal event
     * within one process instance, use the signalEvent method that also includes
     * the processInstanceId of the process instance in question.
     *
     * @param type the type of event
     * @param event the data associated with this event
     */
    void signalEvent(String type, Object event);

    /**
     * Signals the process instance that an event has occurred. The type parameter
     * defines which type of event it is and the event parameter can contain
     * additional information related to the event. All node instances inside the
     * given process instance that are listening to this type of (internal) event
     * will be notified. Note that the event will only be processed inside the
     * given process instance. All other process instances waiting for this type
     * of event will not be notified.
     *
     * @param type the type of event
     * @param event the data associated with this event
     * @param processInstanceId the ID of the process instance that should be signaled
     */
    void signalEvent(String type, Object event, long processInstanceId);

    /**
     * Returns a collection of currently active process instances. Note that only
     * process instances that are currently loaded and active inside the engine
     * will be returned. When using persistence, it is likely not all running
     * process instances will be loaded, as their state will be stored persistently.
     * It is recommended not to use this method to collect information about the
     * state of your process instances but to use a history log for that purpose.
     *
     * @return a collection of process instances currently active in the session
     */
    Collection<ProcessInstance> getProcessInstances();

    /**
     * Returns the process instance with the given ID. Note that only active
     * process instances will be returned. If a process instance has been
     * completed already, this method will return null.
     *
     * @param processInstanceId the ID of the process instance
     * @return the process instance with the given ID, or null if it cannot be found
     */
    ProcessInstance getProcessInstance(long processInstanceId);

    /**
     * Aborts the process instance with the given ID. If the process instance has
     * been completed (or aborted), or the process instance cannot be found, this
     * method will throw an IllegalArgumentException.
     *
     * @param processInstanceId the ID of the process instance
     */
    void abortProcessInstance(long processInstanceId);

    /**
     * Returns the WorkItemManager related to this session. This can be used to
     * register new WorkItemHandlers or to complete (or abort) WorkItems.
     *
     * @return the WorkItemManager related to this session
     */
    WorkItemManager getWorkItemManager();
}
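A short sketch of how these methods are typically combined; the process ID, variable name, and event type below are hypothetical:

```java
import java.util.Collections;
import org.kie.api.runtime.process.ProcessInstance;

// start an instance of a hypothetical order process with one variable
ProcessInstance instance = ksession.startProcess("com.sample.order",
        Collections.<String, Object>singletonMap("orderId", "ORD-1"));

// signal only this instance that the (hypothetical) "orderShipped" event occurred
ksession.signalEvent("orderShipped", null, instance.getId());

// abort the instance if it is still active
if (ksession.getProcessInstance(instance.getId()) != null) {
    ksession.abortProcessInstance(instance.getId());
}
```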
20.1.3.2. Event Listeners
A knowledge session provides methods for registering and removing listeners.
The KieRuntimeEventManager interface is implemented by KieRuntime. KieRuntime provides two interfaces: RuleRuntimeEventManager and ProcessEventManager.
20.1.3.2.1. Process Event Listeners
Use the ProcessEventListener class to listen to process-related events, such as starting and completing processes, entering and leaving nodes, or changing values of process variables. An event object provides access to related information, such as the process instance and node instance linked to the event.
Use this API to register your own event listeners. See the methods of the ProcessEventListener interface:
package org.kie.api.event.process;

public interface ProcessEventListener {

    void beforeProcessStarted(ProcessStartedEvent event);
    void afterProcessStarted(ProcessStartedEvent event);

    void beforeProcessCompleted(ProcessCompletedEvent event);
    void afterProcessCompleted(ProcessCompletedEvent event);

    void beforeNodeTriggered(ProcessNodeTriggeredEvent event);
    void afterNodeTriggered(ProcessNodeTriggeredEvent event);

    void beforeNodeLeft(ProcessNodeLeftEvent event);
    void afterNodeLeft(ProcessNodeLeftEvent event);

    void beforeVariableChanged(ProcessVariableChangedEvent event);
    void afterVariableChanged(ProcessVariableChangedEvent event);
}
The before and after events follow the structure of a stack. For example, if a node is triggered as a result of leaving a different node, the ProcessNodeTriggeredEvent occurs in between the BeforeNodeLeftEvent and AfterNodeLeftEvent of the first node. Similarly, all the NodeTriggered and NodeLeft events that are a direct result of starting a process occur in between the beforeProcessStarted and afterProcessStarted events. This feature enables you to derive cause relationships between events more easily.
In general, to be notified when a particular event happens, consider only the before events, as they occur immediately before the event actually occurs. If you consider only the after events, it may appear that the events arise in the wrong order. Because the after events are executed in the same order as items in a stack, these events are triggered only after all the events executed as a result of this event have already triggered. Use the after events to ensure that a process-related action has ended; for example, use the after event to be notified when starting of a particular process instance has ended.
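For example, a listener extending DefaultProcessEventListener (which provides empty implementations of all the callbacks) can override only the methods of interest; the printed messages here are illustrative:

```java
import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessNodeTriggeredEvent;
import org.kie.api.event.process.ProcessStartedEvent;

ksession.addEventListener(new DefaultProcessEventListener() {
    @Override
    public void afterProcessStarted(ProcessStartedEvent event) {
        // fires after the whole start sequence (including nested node events) completes
        System.out.println("Started: " + event.getProcessInstance().getProcessId());
    }

    @Override
    public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
        System.out.println("Node: " + event.getNodeInstance().getNodeName());
    }
});
```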
Not all nodes always generate the NodeTriggered or NodeLeft events; depending on the type of a node, some nodes might only generate the NodeLeft events or the NodeTriggered events.
Catching intermediate events is similar to generating the NodeLeft events, as they are not triggered by another node, but activated from outside. Similarly, throwing intermediate events is similar to generating the NodeTriggered events, as they have no outgoing connection.
20.1.3.2.2. Rule Event Listeners
The RuleRuntimeEventManager interface enables you to add and remove listeners for working memory and agenda events.
The following code snippet shows how to declare a simple agenda listener and attach the listener to a session. The code prints the events after they fire.
Example 20.1. Adding AgendaEventListener
import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;

ksession.addEventListener(new DefaultAgendaEventListener() {
    public void afterMatchFired(AfterMatchFiredEvent event) {
        super.afterMatchFired(event);
        System.out.println(event);
    }
});
Red Hat JBoss BRMS also provides the DebugRuleRuntimeEventListener and DebugAgendaEventListener classes, which implement each method of the RuleRuntimeEventListener and AgendaEventListener interfaces, respectively, with a debug print statement. To print all the working memory events, add a listener as shown below:
Example 20.2. Adding DebugRuleRuntimeEventListener
ksession.addEventListener(new DebugRuleRuntimeEventListener());
Each event implements the KieRuntimeEvent interface, which can be used to retrieve the KnowledgeRuntime from which the event originated.
The supported events are as follows:
- MatchCreatedEvent
- MatchCancelledEvent
- BeforeMatchFiredEvent
- AfterMatchFiredEvent
- AgendaGroupPushedEvent
- AgendaGroupPoppedEvent
- ObjectInsertedEvent
- ObjectDeletedEvent
- ObjectUpdatedEvent
- ProcessCompletedEvent
- ProcessNodeLeftEvent
- ProcessNodeTriggeredEvent
- ProcessStartedEvent
20.1.3.3. Loggers
Red Hat JBoss BPM Suite provides a listener for creating an audit log in the console or in a file on the file system. You can use these logs for debugging purposes, as they contain all the events occurring at runtime. Red Hat JBoss BPM Suite provides the following logger implementations:
- Console logger
  This logger prints all the events to the console. The KieServices object provides a KieRuntimeLogger logger that you can add to your session. When you create a console logger, pass the knowledge session as an argument.
- File logger
  This logger writes all events to a file using an XML representation. You can use this log file in your IDE to generate a tree-based visualization of the events that occur during execution. For the file logger, you need to provide a file name.
- Threaded file logger
  A file logger writes the events to disk only when closing the logger or when the number of events in the logger reaches a predefined level, so you cannot use it when debugging processes at runtime. A threaded file logger writes the events to a file after a specified time interval, making it possible to use the logger to visualize the progress in real time while debugging processes. For the threaded file logger, you need to provide the interval (in milliseconds) after which the events must be saved. You must always close the logger at the end of your application.
The following example shows how to use the FileLogger logger:
Example 20.3. FileLogger
import org.kie.api.KieServices;
import org.kie.api.logger.KieRuntimeLogger;

...
KieRuntimeLogger logger = KieServices.Factory.get()
    .getLoggers().newFileLogger(ksession, "test");

// Add invocations to the process engine here,
// for example ksession.startProcess(processId);
...
logger.close();
KieRuntimeLogger uses the comprehensive event system in Red Hat JBoss BRMS to create an audit log that can be used to log the execution of an application for later inspection, using tools such as the Red Hat JBoss Developer Studio audit viewer.
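The console and threaded file loggers are created in the same way; a sketch in which the file name and flush interval are illustrative:

```java
import org.kie.api.KieServices;
import org.kie.api.logger.KieRuntimeLogger;

KieServices ks = KieServices.Factory.get();

// console logger: prints each event as it occurs
KieRuntimeLogger consoleLogger = ks.getLoggers().newConsoleLogger(ksession);

// threaded file logger: flushes events to the "audit" log file roughly every 1000 ms
KieRuntimeLogger threadedLogger =
        ks.getLoggers().newThreadedFileLogger(ksession, "audit", 1000);

// ... interact with the engine here ...

threadedLogger.close();
consoleLogger.close();
```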
20.1.3.4. Correlation Keys
When working with processes, you may require to assign a given process instance a business identifier for later reference without knowing the generated process instance ID. To provide such capabilities, Red Hat JBoss BPM Suite enables you to use the CorrelationKey
interface that is composed of CorrelationProperties
. CorrelationKey
can have a single property describing it. Alternatively, CorrelationKey
can be represented as multi-valued property set. Note that CorrelationKey
is a unique identifier for an active process instance, and is not passed on to the subprocesses.
Correlation is usually used with long-running processes and therefore requires persistence to be enabled in order to permanently store correlation information. Correlation capabilities are provided as part of the CorrelationAwareProcessRuntime interface.
The CorrelationAwareProcessRuntime interface exposes the following methods:
package org.kie.internal.process;

interface CorrelationAwareProcessRuntime {

    /**
     * Start a new process instance. The process (definition) that should
     * be used is referenced by the given process ID. Parameters can be passed
     * to the process instance (as name-value pairs), and these will be set
     * as variables of the process instance.
     *
     * @param processId the ID of the process that should be started
     * @param correlationKey custom correlation key that can be used to identify the process instance
     * @param parameters the process variables that should be set
     *        when starting the process instance
     * @return the ProcessInstance that represents the instance of the process that was started
     */
    ProcessInstance startProcess(String processId, CorrelationKey correlationKey,
                                 Map<String, Object> parameters);

    /**
     * Creates a new process instance (but does not yet start it). The process
     * (definition) that should be used is referenced by the given process ID.
     * Parameters can be passed to the process instance (as name-value pairs),
     * and these will be set as variables of the process instance. You should only
     * use this method if you need a reference to the process instance before actually
     * starting it. Otherwise, use startProcess.
     *
     * @param processId the ID of the process that should be started
     * @param correlationKey custom correlation key that can be used to identify the process instance
     * @param parameters the process variables that should be set
     *        when creating the process instance
     * @return the ProcessInstance that represents the instance of the process
     *         that was created (but not yet started)
     */
    ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey,
                                          Map<String, Object> parameters);

    /**
     * Returns the process instance with the given correlationKey.
     * Note that only active process instances will be returned.
     * If a process instance has been completed already, this method will return null.
     *
     * @param correlationKey the custom correlation key assigned
     *        when the process instance was created
     * @return the process instance with the given key, or null if it cannot be found
     */
    ProcessInstance getProcessInstance(CorrelationKey correlationKey);
}
You can create and use a correlation key with single or multiple properties. In the case of correlation keys with multiple properties, you do not need to know all parts of the correlation key to search for a process instance: Red Hat JBoss BPM Suite enables you to set a part of the correlation key properties and get a list of entities that match them. That is, you can search for process instances even with partial correlation keys.
For example, consider a scenario where you have a unique identifier customerId per customer. Each customer can have many applications (process instances) running simultaneously. To retrieve a list of all the currently running applications and choose to continue any one of them, use a correlation key with multiple properties (such as customerId and applicationId) and use only customerId to retrieve the entire list.
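As a sketch, a multi-property correlation key can be built with the internal CorrelationKeyFactory and used when starting the process; the process ID and business identifiers below are hypothetical:

```java
import java.util.Arrays;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.internal.KieInternalServices;
import org.kie.internal.process.CorrelationAwareProcessRuntime;
import org.kie.internal.process.CorrelationKey;
import org.kie.internal.process.CorrelationKeyFactory;

CorrelationKeyFactory factory =
        KieInternalServices.Factory.get().newCorrelationKeyFactory();

// hypothetical business identifiers: customerId and applicationId
CorrelationKey key =
        factory.newCorrelationKey(Arrays.asList("customer-42", "application-7"));

ProcessInstance pi = ((CorrelationAwareProcessRuntime) ksession)
        .startProcess("com.sample.application", key, null);

// later, the instance can be looked up again by its business key
ProcessInstance found =
        ((CorrelationAwareProcessRuntime) ksession).getProcessInstance(key);
```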
The Red Hat JBoss BPM Suite runtime provides operations to find a single process instance by a complete correlation key and multiple process instances by a partial correlation key. The following methods of RuntimeDataService can be used (see Section 20.3.4, “Runtime Data Service”):
/**
 * Returns active process instance description found for given correlation key
 * if found, otherwise null. At the same time it will
 * fetch all active tasks (in status: Ready, Reserved, InProgress) to provide
 * information about what user task is keeping the instance and who owns them
 * (if they were already claimed).
 *
 * @param correlationKey correlation key assigned to the process instance
 * @return process instance information, in the form of
 *         a {@link ProcessInstanceDesc} instance
 */
ProcessInstanceDesc getProcessInstanceByCorrelationKey(CorrelationKey correlationKey);

/**
 * Returns process instance descriptions (regardless of their states)
 * found for the given correlation key, otherwise an empty list.
 * This query uses 'like' to match the correlation key, so it allows passing
 * only partial keys, though matching is done based on 'starts with'.
 *
 * @param correlationKey correlation key assigned to the process instance
 * @return a list of {@link ProcessInstanceDesc} instances representing the process
 *         instances that match the given correlation key
 */
Collection<ProcessInstanceDesc> getProcessInstancesByCorrelationKey(CorrelationKey correlationKey);
20.1.3.5. Threads
Multi-threading is divided into technical and logical multi-threading.
- Technical multi-threading
  Occurs when multiple threads or processes are started on a computer.
- Logical multi-threading
  Occurs in a BPM process, for example after a process reaches a parallel gateway. The original process then splits into two processes that are executed in parallel.
The Red Hat JBoss BPM Suite engine supports logical multi-threading, which is implemented using only one technical thread. The logical implementation was chosen because multiple technical threads working on the same process would need to communicate state information with each other. While multi-threading can provide performance benefits, the extra logic needed to ensure that the different threads work together correctly means that such benefits are not guaranteed, and there is additional overhead in avoiding race conditions and deadlocks.
The Red Hat JBoss BPM Suite engine executes actions serially. For example, if a process encounters a parallel gateway, it sequentially triggers each of the outgoing branches, one after the other. This is possible because execution is usually instantaneous; as a result, you may not even notice this behavior. Similarly, when the engine encounters a script task in a process, it synchronously executes that script and waits for it to complete before continuing execution.
For example, calling a Thread.sleep(…) method as a part of a script does not make the engine continue execution elsewhere, but blocks the engine thread during that period. The same principle applies to service tasks.
When a service task is reached in a process, the engine invokes the handler of the service synchronously and waits for the completeWorkItem(…) method to return before continuing execution. It is important that your service handler executes your service asynchronously if its execution is not instantaneous, for example a service task that invokes an external service. Because the delay in invoking the service remotely and waiting for the results can be long, invoking this service asynchronously is advised. An asynchronous call invokes the service and notifies the engine later, when the results are available. After invoking the service, the process engine continues execution of the process.
Human tasks are a typical example of a service that needs to be invoked asynchronously, as the engine does not have to wait until a human actor responds to the request. The human task handler only creates a new task when the human task node is triggered. The engine is then able to continue the execution of the process (if necessary), and the handler notifies the engine asynchronously when the user completes the task.
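A minimal sketch of an asynchronous service handler, assuming the external call is made on a separate executor thread; the handler name and the service call are hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class AsyncServiceTaskHandler implements WorkItemHandler {

    private final ExecutorService executor = Executors.newCachedThreadPool();

    public void executeWorkItem(final WorkItem workItem, final WorkItemManager manager) {
        // return immediately so the engine thread is not blocked
        executor.submit(new Runnable() {
            public void run() {
                // invoke the (hypothetical) remote service here,
                // then notify the engine once the results are available
                manager.completeWorkItem(workItem.getId(), null);
            }
        });
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this sketch
    }
}
```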
20.1.3.6. Globals
Globals are named objects that are visible to the engine differently from facts; changes in a global do not trigger reevaluation of rules. Globals are useful for providing static information, as an object offering services that are used in the RHS of a rule, or as a means to return objects from the rule engine. When you use a global on the LHS of a rule, make sure it is immutable, or, at least, do not expect changes to have any effect on the behavior of your rules.
A global must be declared as a Java object in a rules file:
global java.util.List list
With the knowledge base now aware of the global identifier and its type, it is possible to call the ksession.setGlobal() method with the global's name and an object, for any session, to associate the object with the global. Failure to declare the global type and identifier in DRL code results in an exception being thrown from this call.
List list = new ArrayList();
ksession.setGlobal("list", list);
Set any global before it is used in the evaluation of a rule. Failure to do so results in a NullPointerException.
You can also initialize global variables while instantiating a process:
- Define the variables as a Map of String and Object values.
- Provide the map as a parameter to the startProcess() method.

Map<String, Object> params = new HashMap<String, Object>();
params.put("VARIABLE_NAME", "variable value");
ksession.startProcess("my.process.id", params);
To access your global variable, use the getVariable() method:
processInstance.getContextInstance().getVariable("globalStatus");
20.1.4. KIE File System
You can define the a KIE base and a KIE session that belong to a KIE module programmatically instead of using definitions in the kmodule.xml
file. The API also enables you to add the file that contains the KIE artifacts instead of automatically reading the files from the resources folder of your project. To add KIE artifacts manually, create a KieFileSystem
object, which is a sort of virtual file system, and add all the resources contained in your project to it.
To use the KIE file system:
- Create a KieModuleModel instance from KieServices.
- Configure your KieModuleModel instance with the desired KIE base and KIE session.
- Convert your KieModuleModel instance into XML and add the XML to KieFileSystem.
This process is shown by the following example:
Example 20.4. Creating kmodule.xml Programmatically and Adding It to KieFileSystem
import org.kie.api.KieServices;
import org.kie.api.builder.model.KieModuleModel;
import org.kie.api.builder.model.KieBaseModel;
import org.kie.api.builder.model.KieSessionModel;
import org.kie.api.builder.KieFileSystem;

KieServices kieServices = KieServices.Factory.get();
KieModuleModel kieModuleModel = kieServices.newKieModuleModel();

KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel("KBase1")
  .setDefault(true)
  .setEqualsBehavior(EqualityBehaviorOption.EQUALITY)
  .setEventProcessingMode(EventProcessingOption.STREAM);

KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel("KSession1")
  .setDefault(true)
  .setType(KieSessionModel.KieSessionType.STATEFUL)
  .setClockType(ClockTypeOption.get("realtime"));

KieFileSystem kfs = kieServices.newKieFileSystem();
kfs.writeKModuleXML(kieModuleModel.toXML());
Add the remaining KIE artifacts that you use in your project to your KieFileSystem instance. The artifacts must be in a Maven project file structure.
Example 20.5. Adding Kie Artifacts to KieFileSystem
import org.kie.api.builder.KieFileSystem;

KieFileSystem kfs = ...

kfs.write("src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL)
   .write("src/main/resources/dtable.xls",
          kieServices.getResources().newInputStreamResource(dtableFileStream));
The example above shows that it is possible to add the KIE artifacts both as a String variable and as a Resource instance. The Resource instance can be created by the KieResources factory, also provided by the KieServices instance. The KieResources class provides factory methods to convert InputStream, URL, and File objects, or a String representing a path on your file system, to a Resource instance that can be managed by the KieFileSystem.
The type of a Resource can be inferred from the extension of the name used to add it to the KieFileSystem instance. However, it is also possible not to follow the KIE conventions about file extensions and to explicitly assign a ResourceType property to a Resource object as shown below:
Example 20.6. Creating and Adding Resource with Explicit Type
import org.kie.api.builder.KieFileSystem;

KieFileSystem kfs = ...

kfs.write("src/main/resources/myDrl.txt",
          kieServices.getResources().newInputStreamResource(drlStream)
                     .setResourceType(ResourceType.DRL));
Add all the resources to your KieFileSystem instance and build it by passing the KieFileSystem instance to KieBuilder.
When you build a KieFileSystem, the resulting KieModule is automatically added to the KieRepository singleton, which acts as a repository for all the available KieModule instances.
20.1.5. KIE Module
Red Hat JBoss BRMS and Red Hat JBoss BPM Suite use Maven and align with Maven practices. A KIE project or a KIE module is a Maven project or module with an additional metadata file, META-INF/kmodule.xml. This file is a descriptor that assigns resources to knowledge bases and configures sessions. There is also alternative XML support through Spring and OSGi Blueprints.
While Maven can build and package KIE resources, it does not provide validation at build time by default. The kie-maven-plugin Maven plug-in is recommended to get build-time validation. The plug-in also generates many classes, making runtime loading faster. See Section 20.1.7, “KIE Maven Plug-in” for more information about the kie-maven-plugin plug-in.
KIE uses default values to minimize the amount of required configuration; an empty kmodule.xml file is the simplest configuration. The kmodule.xml file is required, even if it is empty, as it is used for discovery of the JAR and its contents.
Maven can use the following commands:
- mvn install deploys a KIE module to the local machine, where all other applications on the local machine use it.
- mvn deploy pushes the KIE module to a remote Maven repository. Building the application will pull in the KIE module and populate the local Maven repository in the process.
JAR files and libraries can be deployed in one of two ways:
- Added to the class path, similar to a standard JAR in a Maven dependency listing
- Dynamically loaded at runtime.
KIE scans the class path to find all the JAR files that contain a kmodule.xml file. Each found JAR is represented by the KieModule interface. The terms class path KIE module and dynamic KIE module refer to these two loading approaches. While dynamic modules support side-by-side versioning, class path modules do not. Once a module is on the class path, no other version may be loaded dynamically.
The kmodule.xml file enables you to define and configure one or more KIE bases. Additionally, you can create one or more KIE sessions from each KIE base, as shown in the following example. For more information about KieBase attributes, see Section 20.1.2, “KIE Base”. For more information about KieSession attributes, see Section 20.1.3, “KIE Session”.
Example 20.7. Sample kmodule.xml File
<kmodule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://www.drools.org/xsd/kmodule">
  <kbase name="KBase1" default="true" eventProcessingMode="cloud"
         equalsBehavior="equality" declarativeAgenda="enabled"
         packages="org.domain.pkg1">
    <ksession name="KSession1_1" type="stateful" default="true" />
    <ksession name="KSession1_2" type="stateless" default="false" beliefSystem="jtms" />
  </kbase>
  <kbase name="KBase2" default="false" eventProcessingMode="stream"
         equalsBehavior="equality" declarativeAgenda="enabled"
         packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1">
    <ksession name="KSession2_1" type="stateful" default="false" clockType="realtime">
      <fileLogger file="debugInfo" threaded="true" interval="10" />
      <workItemHandlers>
        <workItemHandler name="name" type="new org.domain.WorkItemHandler()" />
      </workItemHandlers>
      <listeners>
        <ruleRuntimeEventListener type="org.domain.RuleRuntimeListener" />
        <agendaEventListener type="org.domain.FirstAgendaListener" />
        <agendaEventListener type="org.domain.SecondAgendaListener" />
        <processEventListener type="org.domain.ProcessListener" />
      </listeners>
    </ksession>
  </kbase>
</kmodule>
The example above defines two KIE bases. It is possible to instantiate a different number of KIE sessions from each KIE base. In this example, two KIE sessions are instantiated from the KBase1 KIE base, while only one KIE session is instantiated from the second KIE base.
You can specify properties in the <configuration> element of the kmodule.xml file:

<kmodule>
  ...
  <configuration>
    <property key="drools.dialect.default" value="java"/>
    ...
  </configuration>
  ...
</kmodule>
The following properties are supported:
- drools.dialect.default
  Sets the default Drools dialect. Possible values are java and mvel.
- drools.accumulate.function.FUNCTION
  Links a class that implements an accumulate function to the specified function name, which allows you to add custom accumulate functions into the engine. For example:
  <property key="drools.accumulate.function.hyperMax" value="org.drools.custom.HyperMaxAccumulate"/>
- drools.evaluator.EVALUATION
  Links a class that implements an evaluator definition to the specified evaluator name, which allows you to add custom evaluators into the engine. An evaluator is similar to a custom operator. For example:
  <property key="drools.evaluator.soundslike" value="org.drools.core.base.evaluators.SoundslikeEvaluatorsDefinition"/>
- drools.dump.dir
  Sets a path to the Drools dump/log directory.
- drools.defaultPackageName
  Sets the default package.
- drools.parser.processStringEscapes
  Sets the String escape function. Possible values are true and false. If set to false, the \n character will not be interpreted as the newline character. The default value is true.
- drools.kbuilder.severity.SEVERITY
  Sets the severity of problems in a knowledge definition. Possible severities are duplicateRule, duplicateProcess, and duplicateFunction. Possible values are, for example, ERROR and WARNING. The default value is INFO. When you build a KIE base, it uses this setting for reporting found problems. For example, if there are two function definitions with the same name in a DRL file and the property is set as follows, then building the KIE base throws an error:
  <property key="drools.kbuilder.severity.duplicateFunction" value="ERROR"/>
- drools.propertySpecific
  Sets the property reactivity of the engine. Possible values are DISABLED, ALLOWED, and ALWAYS.
- drools.lang.level
  Sets the DRL language level. Possible values are DRL5, DRL6, and DRL6_STRICT. The default value is DRL6_STRICT.
For more information about the kmodule.xml file, download the Red Hat JBoss BPM Suite 6.4.0 Source Distribution ZIP file from the Red Hat Customer Portal and see the kmodule.xsd XML schema located at FILE_HOME/jboss-bpmsuite-6.4.0.GA-sources/kie-api-parent-6.5.0.Final-redhat-2/kie-api/src/main/resources/org/kie/api/.
Since default values have been provided for all configuration aspects, the simplest kmodule.xml file can contain just an empty kmodule tag, such as:
Example 20.8. Empty kmodule.xml File
<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>
In this way, the KIE module contains a single default KIE base. All KIE assets stored in the resources directory, or any subdirectory of it, are compiled and added to the default KIE base. To build the artifacts, it is sufficient to create a KIE container for them.
20.1.6. KIE Container
The following example shows how to build a KieContainer object that reads resources built from the class path:
Example 20.9. Creating KieContainer From Classpath
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
After defining named KIE bases and sessions in the kmodule.xml file, you can retrieve KieBase and KieSession objects from KieContainer using their names. For example:
Example 20.10. Retrieving KieBases and KieSessions from KieContainer
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.KieBase;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.StatelessKieSession;

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();

KieBase kBase1 = kContainer.getKieBase("KBase1");
KieSession kieSession1 = kContainer.newKieSession("KSession2_1");
StatelessKieSession kieSession2 = kContainer.newStatelessKieSession("KSession2_2");
Because KSession2_1 is stateful and KSession2_2 is stateless, the example uses different methods to create the two objects. Use the method corresponding to the session type when creating a KIE session; otherwise, KieContainer throws a RuntimeException exception. Additionally, because kmodule.xml has default KieBase and KieSession definitions, you can instantiate them from KieContainer without specifying their names:
Example 20.11. Retrieving Default KieBases and KieSessions from KieContainer
import org.kie.api.runtime.KieContainer;
import org.kie.api.KieBase;
import org.kie.api.runtime.KieSession;

KieContainer kContainer = ...

KieBase kBase1 = kContainer.getKieBase();            // returns KBase1
KieSession kieSession1 = kContainer.newKieSession(); // returns KSession2_1
Because a KIE project is also a Maven project, the groupId, artifactId, and version values declared in the pom.xml file are used to generate a ReleaseId object that uniquely identifies your project inside your application. You can create a new KieContainer from the project by passing its ReleaseId to the KieServices.
Example 20.12. Creating KieContainer of Existing Project by ReleaseId
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId("org.acme", "myartifact", "1.0");
KieContainer kieContainer = kieServices.newKieContainer( releaseId );
Use the KieServices interface to access KIE building and runtime facilities. The example shows how to compile all the Java sources and the KIE resources and deploy them into your KIE container, which makes its content available for use at runtime.
20.1.6.1. KIE Base Configuration
Sometimes, for instance in an OSGi environment, the KieBase object needs to resolve types that are not in the default class loader. To do so, create a KieBaseConfiguration instance with an additional class loader and pass it to KieContainer when creating a new KieBase object. For example:
Example 20.13. Creating a New KieBase with Custom Class Loader
import org.kie.api.KieServices;
import org.kie.api.KieServices.Factory;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieBase;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
KieBaseConfiguration kbaseConf = kieServices
  .newKieBaseConfiguration( null, MyType.class.getClassLoader());
KieBase kbase = kieContainer.newKieBase(kbaseConf);
The KieBase object can create, and optionally keep references to, KieSession objects. When you modify a KieBase, the modifications are applied to the data in its sessions. The reference to a session is a weak reference, and it is also optional, controlled by a boolean flag.
If you are using Oracle WebLogic Server, note how it finds and loads application class files at runtime. When using a non-exploded WAR deployment, Oracle WebLogic Server packs the contents of WEB-INF/classes into WEB-INF/lib/_wl_cls_gen.jar. Consequently, when you use KIE-Spring to create KieBase and KieSession objects from resources stored in WEB-INF/classes, KIE-Spring fails to locate these resources. For this reason, the recommended deployment method on Oracle WebLogic Server is to use the exploded archives contained within the product ZIP file.
20.1.7. KIE Maven Plug-in
The KIE Maven plug-in validates and pre-compiles artifact resources. It is recommended to use the plug-in at all times. To use the plug-in, add it to the build section of your Maven pom.xml file:
Example 20.14. Adding KIE Plug-in to Maven pom.xml
<build>
  <plugins>
    <plugin>
      <groupId>org.kie</groupId>
      <artifactId>kie-maven-plugin</artifactId>
      <version>${project.version}</version>
      <extensions>true</extensions>
    </plugin>
  </plugins>
</build>
For the supported Maven artifact version, see Supported Component Versions of the Red Hat JBoss BPM Suite Installation Guide.
The kie-maven-plugin artifact requires Maven version 3.1.1 or later due to the migration from sonatype-aether to eclipse-aether. The Sonatype implementation of Aether is no longer maintained or supported, and because eclipse-aether requires Maven 3.1.1 or later, kie-maven-plugin requires it as well.
Building a KIE module without the Maven plug-in copies all the resources into the resulting JAR file, and all the resources are built when the JAR file is loaded at runtime. In case of compilation issues, a null KieContainer is returned; the compilation overhead is also pushed to run time. To prevent these issues, it is recommended that you use the Maven plug-in.
To compile decision tables and processes, add their dependencies to the project dependencies (as compile scope) or as plug-in dependencies. For decision tables, the dependency is org.drools:drools-decisiontables; for processes, org.jbpm:jbpm-bpmn2.
20.1.8. KIE Repository
When you build the content of a KieFileSystem, the resulting KieModule is automatically added to KieRepository, a singleton acting as a repository for all the available KIE modules.
After this, you can create a new KieContainer for the KieModule using its ReleaseId identifier. However, because the KieFileSystem does not contain a pom.xml file (it is possible to add one using the KieFileSystem.writePomXML method), KIE cannot determine the ReleaseId of the KieModule and assigns a default ReleaseId to it. The default ReleaseId can be obtained from the KieRepository and used to identify the KieModule inside the KieRepository itself.
The following example shows this process.
Example 20.15. Building Content of KieFileSystem and Creating KieContainer
import org.kie.api.KieServices;
import org.kie.api.KieServices.Factory;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.KieBuilder;
import org.kie.api.runtime.KieContainer;

KieServices kieServices = KieServices.Factory.get();
KieFileSystem kfs = ...

kieServices.newKieBuilder( kfs ).buildAll();
KieContainer kieContainer = kieServices
  .newKieContainer(kieServices.getRepository().getDefaultReleaseId());
At this point, you can get KIE bases and create new KIE sessions from this KieContainer in the same way as with a KieContainer created directly from the class path.
It is a best practice to check the compilation results. The KieBuilder reports compilation results of three different severities:
- ERROR
- WARNING
- INFO
An ERROR indicates that the compilation of the project failed, no KieModule is produced, and nothing is added to the KieRepository singleton. WARNING and INFO results can be ignored, but are available for inspection.
Example 20.16. Checking that Compilation Did Not Produce Any Error
import org.kie.api.builder.KieBuilder;
import org.kie.api.KieServices;

KieBuilder kieBuilder = kieServices.newKieBuilder( kfs ).buildAll();
assertEquals(0, kieBuilder.getResults().getMessages(Message.Level.ERROR).size());
20.1.9. KIE Scanner
The KIE scanner continuously monitors your Maven repository to check for a new release of your KIE project. A new release is deployed into the KieContainer wrapping that project. Using the KieScanner requires kie-ci.jar to be on the class path.
Avoid using a KIE scanner with business processes. Using a KIE scanner with processes can lead to unforeseen updates that can then cause errors in long-running processes when changes are not compatible with running process instances.
A KieScanner can be registered on a KieContainer as in the following example.
Example 20.17. Registering and Starting KieScanner on KieContainer
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.builder.KieScanner;

...

KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices
  .newReleaseId("org.acme", "myartifact", "1.0-SNAPSHOT");
KieContainer kContainer = kieServices.newKieContainer(releaseId);
KieScanner kScanner = kieServices.newKieScanner(kContainer);

// Start the KieScanner polling the Maven repository every 10 seconds:
kScanner.start(10000L);
In this example, the KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it. If the KieScanner finds an updated version of the KIE project used by the KieContainer for which it is configured in the Maven repository, it automatically downloads the new version and triggers an incremental build of the new project. From this moment, all new KieBase and KieSession objects created from the KieContainer use the new project version.
Since the KieScanner relies on Maven, configure Maven with the updatePolicy of always, as shown in the following example:
<profile>
  <id>guvnor-m2-repo</id>
  <repositories>
    <repository>
      <id>guvnor-m2-repo</id>
      <name>BRMS Repository</name>
      <url>http://10.10.10.10:8080/business-central/maven2/</url>
      <layout>default</layout>
      <releases>
        <enabled>true</enabled>
        <updatePolicy>always</updatePolicy>
      </releases>
      <snapshots>
        <enabled>true</enabled>
        <updatePolicy>always</updatePolicy>
      </snapshots>
    </repository>
  </repositories>
</profile>
20.1.10. Command Executor
The CommandExecutor interface enables commands to be executed on both stateful and stateless KIE sessions. A stateless KIE session executes fireAllRules() at the end, before disposing of the session.
SetGlobalCommand and GetGlobalCommand are two commands relevant to Red Hat JBoss BRMS. SetGlobalCommand calls the setGlobal method on a KIE session.
The optional Boolean indicates whether the command should return the value of the global as a part of the ExecutionResults. If true, it uses the same name as the global name. A String can be used instead of the Boolean if an alternative name is desired.
Example 20.18. Set Global Command
import org.kie.api.runtime.StatelessKieSession;
import org.kie.api.runtime.ExecutionResults;

StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults results = ksession.execute
  (CommandFactory.newSetGlobal("stilton", new Cheese("stilton"), true));
Cheese stilton = (Cheese) results.getValue("stilton");
Example 20.19. Get Global Command
import org.kie.api.runtime.StatelessKieSession;
import org.kie.api.runtime.ExecutionResults;

StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults results = ksession.execute(CommandFactory.newGetGlobal("stilton"));
Cheese stilton = (Cheese) results.getValue("stilton");
All the above examples execute single commands. The BatchExecution represents a composite command, created from a list of commands. The execution engine iterates over the list and executes each command in turn. This means you can insert objects, start a process, call fireAllRules, and execute a query in a single execute(…) call.
The StatelessKieSession session executes fireAllRules() automatically at the end. The FireAllRules command is allowed even for a stateless session, because using it disables the automatic execution at the end; it is similar to manually overriding the function.
Any command in the batch that has an out identifier set adds its results to the returned ExecutionResults instance.
Example 20.20. BatchExecution Command
import org.kie.api.runtime.StatelessKieSession;
import org.kie.api.runtime.ExecutionResults;

StatelessKieSession ksession = kbase.newStatelessKieSession();

List cmds = new ArrayList();
cmds.add(CommandFactory.newInsert(new Cheese("stilton", 1), "stilton"));
cmds.add(CommandFactory.newStartProcess("process cheeses"));
cmds.add(CommandFactory.newQuery("cheeses", "cheeses"));
ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
Cheese stilton = (Cheese) results.getValue("stilton");
QueryResults qresults = (QueryResults) results.getValue("cheeses");
In the example above, multiple commands are executed, two of which populate the ExecutionResults. The query command uses the same identifier as the query name by default, but you can map it to a different identifier.
All commands support XML (using XStream or JAXB marshallers) and JSON marshalling. For more information, see Section 20.1.10.1, “Marshalling”.
20.1.10.1. Marshalling
XML marshalling and unmarshalling of the JBoss BRMS Commands requires the use of special classes. This section describes these classes.
20.1.10.1.1. XStream
To use the XStream commands marshaller, use the DroolsHelperProvider to obtain an XStream instance. This is required because it has the commands converters registered. Also ensure that the drools-compiler library is present on the class path.
BatchExecutionHelper.newXStreamMarshaller().toXML(command);
BatchExecutionHelper.newXStreamMarshaller().fromXML(xml);
The fully qualified class name of the BatchExecutionHelper class is org.kie.internal.runtime.helper.BatchExecutionHelper.
JSON
The JSON API for marshalling and unmarshalling is similar to the XStream API:
BatchExecutionHelper.newJSonMarshaller().toXML(command);
BatchExecutionHelper.newJSonMarshaller().fromXML(xml);
JAXB
There are two options for using JAXB: you can define your model in an XSD file or have a POJO model. In both cases you have to declare your model inside a JAXBContext. In order to do this, you need to use Drools helper classes. Once you have the JAXBContext, create the Unmarshaller and Marshaller as needed.
XSD File
With your model defined in an XSD file, you need a KBase that has your XSD model added as a resource. To do this, add the XSD file as an XSD ResourceType into the KBase. Finally, you can create the JAXBContext using the KBase (created with the KnowledgeBuilder). Ensure that the drools-compiler and jaxb-xjc libraries are present on the class path.
import org.kie.api.conf.Option;
import org.kie.api.KieBase;

Options xjcOpts = new Options();
xjcOpts.setSchemaLanguage(Language.XMLSCHEMA);
JaxbConfiguration jaxbConfiguration = KnowledgeBuilderFactory.newJaxbConfiguration( xjcOpts, "xsd");

kbuilder.add(ResourceFactory.newClassPathResource("person.xsd", getClass()),
             ResourceType.XSD, jaxbConfiguration);

KieBase kbase = kbuilder.newKnowledgeBase();

List<String> classesName = new ArrayList<String>();
classesName.add("org.drools.compiler.test.Person");

JAXBContext jaxbContext = KnowledgeBuilderHelper
  .newJAXBContext(classesName.toArray(new String[classesName.size()]), kbase);
Using POJO Model
Use DroolsJaxbHelperProviderImpl to create the JAXBContext. DroolsJaxbHelperProviderImpl.createDroolsJaxbContext() has two parameters:

- classNames: a list with the canonical names of the classes that you want to use in the marshalling/unmarshalling process.
- properties: JAXB custom properties.
List<String> classNames = new ArrayList<String>();
classNames.add("org.drools.compiler.test.Person");
JAXBContext jaxbContext = DroolsJaxbHelperProviderImpl
  .createDroolsJaxbContext(classNames, null);
Marshaller marshaller = jaxbContext.createMarshaller();
Ensure that the drools-compiler and jaxb-xjc libraries are present on the class path. The fully qualified class name of the DroolsJaxbHelperProviderImpl class is org.drools.compiler.runtime.pipeline.impl.DroolsJaxbHelperProviderImpl.
20.1.10.2. Supported Commands
Red Hat JBoss BRMS supports the following commands:

- BatchExecutionCommand
- InsertObjectCommand
- RetractCommand
- ModifyCommand
- GetObjectCommand
- InsertElementsCommand
- FireAllRulesCommand
- StartProcessCommand
- SignalEventCommand
- CompleteWorkItemCommand
- AbortWorkItemCommand
- QueryCommand
- SetGlobalCommand
- GetGlobalCommand
- GetObjectsCommand

The code snippets provided in the examples for these commands use a POJO org.drools.compiler.test.Person with the following fields:

- name: String
- age: Integer
20.1.10.2.1. BatchExecutionCommand
The BatchExecutionCommand command wraps multiple commands to be executed together. It has the following attributes (names taken from the example below):

Name | Description | Required
---|---|---
lookup | Sets the knowledge session ID on which the commands are going to be executed. | true
commands | List of commands to be executed. | false
Creating BatchExecutionCommand
BatchExecutionCommand command = new BatchExecutionCommand();
command.setLookup("ksession1");
InsertObjectCommand insertObjectCommand =
  new InsertObjectCommand(new Person("john", 25));
FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand();
command.getCommands().add(insertObjectCommand);
command.getCommands().add(fireAllRulesCommand);
ksession.execute(command);
XML Output
XStream:
<batch-execution lookup="ksession1">
  <insert>
    <org.drools.compiler.test.Person>
      <name>john</name>
      <age>25</age>
    </org.drools.compiler.test.Person>
  </insert>
  <fire-all-rules/>
</batch-execution>
JSON:
{"batch-execution":{"lookup":"ksession1","commands":[{"insert":{"object":{"org.drools.compiler.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]}}
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<batch-execution lookup="ksession1">
  <insert>
    <object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <age>25</age>
      <name>john</name>
    </object>
  </insert>
  <fire-all-rules max="-1"/>
</batch-execution>
20.1.10.2.2. InsertObjectCommand
The InsertObjectCommand command is used to insert an object in the knowledge session. It has the following attributes:

Name | Description | Required
---|---|---
object | The object to be inserted. | true
outIdentifier | ID to identify the FactHandle created in the object insertion and added to the execution results. | false
returnObject | Boolean to establish if the object must be returned in the execution results. Default value is true. | false
entryPoint | Entrypoint for the insertion. | false
Creating InsertObjectCommand
Command insertObjectCommand =
  CommandFactory.newInsert(new Person("john", 25), "john", false, null);
ksession.execute(insertObjectCommand);
XML Output
XStream:
<insert out-identifier="john" entry-point="my stream" return-object="false">
  <org.drools.compiler.test.Person>
    <name>john</name>
    <age>25</age>
  </org.drools.compiler.test.Person>
</insert>
JSON:
{ "insert": { "entry-point": "my stream", "object": { "org.drools.compiler.test.Person": { "age": 25, "name": "john" } }, "out-identifier": "john", "return-object": false } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<insert out-identifier="john" entry-point="my stream">
  <object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <age>25</age>
    <name>john</name>
  </object>
</insert>
20.1.10.2.3. RetractCommand
The RetractCommand command is used to retract an object from the knowledge session. It has the following attributes:

Name | Description | Required
---|---|---
handle | The FactHandle associated to the object to be retracted. | true
Creating RetractCommand
There are two ways to create a RetractCommand. You can either create the FactHandle from a string, with the same output result as shown below:

RetractCommand retractCommand = new RetractCommand();
retractCommand.setFactHandleFromString("123:234:345:456:567");

Or set the FactHandle that you received when the object was inserted, as shown below:

RetractCommand retractCommand = new RetractCommand(factHandle);
XML Output
XStream:
<retract fact-handle="0:234:345:456:567"/>
JSON:
{ "retract": { "fact-handle": "0:234:345:456:567" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <retract fact-handle="0:234:345:456:567"/>
20.1.10.2.4. ModifyCommand
The ModifyCommand command allows you to modify a previously inserted object in the knowledge session. It has the following attributes:

Name | Description | Required
---|---|---
handle | The FactHandle associated to the object to be modified. | true
setters | List of setters for the object's modifications. | true
Creating ModifyCommand
ModifyCommand modifyCommand = new ModifyCommand();
modifyCommand.setFactHandleFromString("123:234:345:456:567");
List<Setter> setters = new ArrayList<Setter>();
setters.add(new SetterImpl("age", "30"));
modifyCommand.setSetters(setters);
XML Output
XStream:
<modify fact-handle="0:234:345:456:567">
  <set accessor="age" value="30"/>
</modify>
JSON:
{ "modify": { "fact-handle": "0:234:345:456:567", "setters": { "accessor": "age", "value": 30 } } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<modify fact-handle="0:234:345:456:567">
  <set value="30" accessor="age"/>
</modify>
20.1.10.2.5. GetObjectCommand
The GetObjectCommand command is used to get an object from a knowledge session. It has the following attributes:

Name | Description | Required
---|---|---
factHandle | The FactHandle associated to the object to be retrieved. | true
outIdentifier | ID to identify the object added to the execution results. | false
Creating GetObjectCommand
GetObjectCommand getObjectCommand = new GetObjectCommand();
getObjectCommand.setFactHandleFromString("123:234:345:456:567");
getObjectCommand.setOutIdentifier("john");
XML Output
XStream:
<get-object fact-handle="0:234:345:456:567" out-identifier="john"/>
JSON:
{ "get-object": { "fact-handle": "0:234:345:456:567", "out-identifier": "john" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <get-object out-identifier="john" fact-handle="0:234:345:456:567"/>
20.1.10.2.6. InsertElementsCommand
The InsertElementsCommand command is used to insert a list of objects. It has the following attributes:

Name | Description | Required
---|---|---
objects | The list of objects to be inserted in the knowledge session. | true
outIdentifier | ID to identify the FactHandle instances created in the insertion and added to the execution results. | false
returnObject | Boolean to establish if the objects must be returned in the execution results. Default value: true. | false
entryPoint | Entrypoint for the insertion. | false
Creating InsertElementsCommand
List<Object> objects = new ArrayList<Object>();
objects.add(new Person("john", 25));
objects.add(new Person("sarah", 35));
Command insertElementsCommand = CommandFactory.newInsertElements(objects);
XML Output
XStream:
<insert-elements>
  <org.drools.compiler.test.Person>
    <name>john</name>
    <age>25</age>
  </org.drools.compiler.test.Person>
  <org.drools.compiler.test.Person>
    <name>sarah</name>
    <age>35</age>
  </org.drools.compiler.test.Person>
</insert-elements>
JSON:
{ "insert-elements": { "objects": [ { "containedObject": { "@class": "org.drools.compiler.test.Person", "age": 25, "name": "john" } }, { "containedObject": { "@class": "Person", "age": 35, "name": "sarah" } } ] } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<insert-elements return-objects="true">
  <list>
    <element xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <age>25</age>
      <name>john</name>
    </element>
    <element xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <age>35</age>
      <name>sarah</name>
    </element>
  </list>
</insert-elements>
20.1.10.2.7. FireAllRulesCommand
The FireAllRulesCommand command is used to execute the rule activations that have been created. It has the following attributes:

Name | Description | Required |
---|---|---|
max | The maximum number of rule activations to be executed. The default is -1, meaning no limit. | false |
outIdentifier | Adds the number of rule activations fired to the execution results under this identifier. | false |
agendaFilter | Allows rule execution to be restricted with an AgendaFilter. | false |
Creating FireAllRulesCommand
FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand();
fireAllRulesCommand.setMax(10);
fireAllRulesCommand.setOutIdentifier("firedActivations");
XML Output
XStream:
<fire-all-rules max="10" out-identifier="firedActivations"/>
JSON:
{ "fire-all-rules": { "max": 10, "out-identifier": "firedActivations" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <fire-all-rules out-identifier="firedActivations" max="10"/>
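A typical batch places the fire-all-rules command last, after the inserts; the number of fired activations can then be read from the execution results under the out identifier. A sketch, where the session ksession and the Person fact class are assumptions carried over from the earlier examples:

```java
List<Command> cmds = new ArrayList<Command>();
cmds.add(CommandFactory.newInsert(new Person("john", 25)));
// Fire the resulting activations and expose the fired count as "firedActivations".
cmds.add(CommandFactory.newFireAllRules("firedActivations"));

ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
int fired = (Integer) results.getValue("firedActivations");
```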
20.1.10.2.8. StartProcessCommand
The StartProcessCommand command allows you to start a process using its ID. Additionally, you can pass parameters and initial data to be inserted. It has the following attributes:

Name | Description | Required |
---|---|---|
processId | The ID of the process to be started. | true |
parameters | A Map<String, Object> used to pass parameters at process startup. | false |
data | A list of objects to be inserted into the knowledge session before the process starts. | false |
Creating StartProcessCommand
StartProcessCommand startProcessCommand = new StartProcessCommand();
startProcessCommand.setProcessId("org.drools.task.processOne");
XML Output
XStream:
<start-process processId="org.drools.task.processOne"/>
JSON:
{ "start-process": { "process-id": "org.drools.task.processOne" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <start-process processId="org.drools.task.processOne"> <parameter/> </start-process>
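Parameters and initial facts can also be supplied before the process starts. A sketch, where the parameter name employee and the Person fact are illustrative only and the process ID is taken from the example above:

```java
StartProcessCommand startProcessCommand = new StartProcessCommand();
startProcessCommand.setProcessId("org.drools.task.processOne");

// Hypothetical process variables, for illustration only.
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("employee", "john");
startProcessCommand.setParameters(parameters);

// Facts inserted into the knowledge session before the process starts.
List<Object> data = new ArrayList<Object>();
data.add(new Person("john", 25));
startProcessCommand.setData(data);

ksession.execute(startProcessCommand);
```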
20.1.10.2.9. SignalEventCommand
The SignalEventCommand command is used to send a signal event. It has the following attributes:

Name | Description | Required |
---|---|---|
event-type | The type of the incoming event. | true |
process-instance-id | The ID of the process instance to be signalled. | false |
event | The data of the incoming event. | false |
Creating SignalEventCommand
SignalEventCommand signalEventCommand = new SignalEventCommand();
signalEventCommand.setProcessInstanceId(1001);
signalEventCommand.setEventType("start");
signalEventCommand.setEvent(new Person("john", 25));
XML Output
XStream:
<signal-event process-instance-id="1001" event-type="start">
  <org.drools.pipeline.camel.Person>
    <name>john</name>
    <age>25</age>
  </org.drools.pipeline.camel.Person>
</signal-event>
JSON:
{ "signal-event": { "@event-type": "start", "event-type": "start", "object": { "org.drools.pipeline.camel.Person": { "age": 25, "name": "john" } }, "process-instance-id": 1001 } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<signal-event event-type="start" process-instance-id="1001">
  <event xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <age>25</age>
    <name>john</name>
  </event>
</signal-event>
20.1.10.2.10. CompleteWorkItemCommand
The CompleteWorkItemCommand command allows you to complete a WorkItem. It has the following attributes:

Name | Description | Required |
---|---|---|
workItemId | The ID of the WorkItem to be completed. | true |
results | The results of the WorkItem. | false |
Creating CompleteWorkItemCommand
CompleteWorkItemCommand completeWorkItemCommand = new CompleteWorkItemCommand();
completeWorkItemCommand.setWorkItemId(1001);
XML Output
XStream:
<complete-work-item id="1001"/>
JSON:
{ "complete-work-item": { "id": 1001 } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <complete-work-item id="1001"/>
20.1.10.2.11. AbortWorkItemCommand
The AbortWorkItemCommand command enables you to abort a work item in the same way as ksession.getWorkItemManager().abortWorkItem(workItemId). It has the following attributes:

Name | Description | Required |
---|---|---|
workItemId | The ID of the WorkItem to be aborted. | true |
Creating AbortWorkItemCommand
AbortWorkItemCommand abortWorkItemCommand = new AbortWorkItemCommand();
abortWorkItemCommand.setWorkItemId(1001);
XML Output
XStream:
<abort-work-item id="1001"/>
JSON:
{ "abort-work-item": { "id": 1001 } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <abort-work-item id="1001"/>
20.1.10.2.12. QueryCommand
The QueryCommand command executes a query defined in the knowledge base. It has the following attributes:

Name | Description | Required |
---|---|---|
name | The query name. | true |
outIdentifier | The identifier of the query results. The query results are added to the execution results under this identifier. | false |
arguments | A list of objects to be passed as query parameters. | false |
Creating QueryCommand
QueryCommand queryCommand = new QueryCommand();
queryCommand.setName("persons");
queryCommand.setOutIdentifier("persons");
XML Output
XStream:
<query out-identifier="persons" name="persons"/>
JSON:
{ "query": { "name": "persons", "out-identifier": "persons" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <query name="persons" out-identifier="persons"/>
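When the batch is executed, the query results come back as a QueryResults instance under the out identifier. A sketch, assuming the knowledge base defines a query named "persons" with a bound variable $person (both names illustrative) and a session ksession as in the earlier examples:

```java
QueryCommand queryCommand = new QueryCommand();
queryCommand.setName("persons");
queryCommand.setOutIdentifier("persons");

List<Command> cmds = new ArrayList<Command>();
cmds.add(queryCommand);
ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));

// Iterate the result rows and read the bound query variable.
QueryResults persons = (QueryResults) results.getValue("persons");
for (QueryResultsRow row : persons) {
    System.out.println(row.get("$person"));
}
```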
20.1.10.2.13. SetGlobalCommand
The SetGlobalCommand command enables you to set an object into a global. It has the following attributes:

Name | Description | Required |
---|---|---|
identifier | The identifier of the global defined in the knowledge base. | true |
object | The object to be set into the global. | false |
out | A boolean that determines whether the global is added to the execution results. | false |
outIdentifier | The identifier of the global in the execution results. | false |
Creating SetGlobalCommand
SetGlobalCommand setGlobalCommand = new SetGlobalCommand();
setGlobalCommand.setIdentifier("helper");
setGlobalCommand.setObject(new Person("kyle", 30));
setGlobalCommand.setOut(true);
setGlobalCommand.setOutIdentifier("output");
XML Output
XStream:
<set-global identifier="helper" out-identifier="output">
  <org.drools.compiler.test.Person>
    <name>kyle</name>
    <age>30</age>
  </org.drools.compiler.test.Person>
</set-global>
JSON:
{ "set-global": { "identifier": "helper", "object": { "org.drools.compiler.test.Person": { "age": 30, "name": "kyle" } }, "out-identifier": "output" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<set-global out="true" out-identifier="output" identifier="helper">
  <object xsi:type="person" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <age>30</age>
    <name>kyle</name>
  </object>
</set-global>
20.1.10.2.14. GetGlobalCommand
The GetGlobalCommand command allows you to get a previously defined global object. It has the following attributes:

Name | Description | Required |
---|---|---|
identifier | The identifier of the global defined in the knowledge base. | true |
outIdentifier | The identifier to be used in the execution results. | false |
Creating GetGlobalCommand
GetGlobalCommand getGlobalCommand = new GetGlobalCommand();
getGlobalCommand.setIdentifier("helper");
getGlobalCommand.setOutIdentifier("helperOutput");
XML Output
XStream:
<get-global identifier="helper" out-identifier="helperOutput"/>
JSON:
{ "get-global": { "identifier": "helper", "out-identifier": "helperOutput" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <get-global out-identifier="helperOutput" identifier="helper"/>
20.1.10.2.15. GetObjectsCommand
The GetObjectsCommand command returns all the objects from the current session as a Collection. It has the following attributes:

Name | Description | Required |
---|---|---|
objectFilter | An ObjectFilter to filter the objects returned from the current session. | false |
outIdentifier | The identifier to be used in the execution results. | false |
Creating GetObjectsCommand
GetObjectsCommand getObjectsCommand = new GetObjectsCommand();
getObjectsCommand.setOutIdentifier("objects");
XML Output
XStream:
<get-objects out-identifier="objects"/>
JSON:
{ "get-objects": { "out-identifier": "objects" } }
JAXB:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <get-objects out-identifier="objects"/>
20.1.11. KIE Configuration
20.1.11.1. Build Result Severity
In some cases, it is possible to change the default severity of a type of build result. For instance, when a new rule with the same name as an existing rule is added to a package, the default behavior is to replace the old rule with the new rule and report it as an INFO message. This is ideal for most use cases, but in some deployments you might want to prevent the rule update and report it as an error.
The default severity for a result type can be changed, like any other option in BRMS, through API calls, system properties, or configuration files. As of this version, BRMS supports configurable result severity for rule updates and function updates. To configure it using system properties or configuration files, use the following properties:
Example 20.21. Setting the severity using properties
// Sets the severity of rule updates:
drools.kbuilder.severity.duplicateRule = <INFO|WARNING|ERROR>
// Sets the severity of function updates:
drools.kbuilder.severity.duplicateFunction = <INFO|WARNING|ERROR>
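The same severities can also be set programmatically through the knowledge builder configuration. A sketch using the knowledge-api builder (the property names are the ones listed above; whether you use this builder or another entry point depends on your setup):

```java
import org.kie.internal.builder.KnowledgeBuilder;
import org.kie.internal.builder.KnowledgeBuilderConfiguration;
import org.kie.internal.builder.KnowledgeBuilderFactory;

// Report duplicate rule definitions as errors instead of INFO messages.
KnowledgeBuilderConfiguration config = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration();
config.setProperty("drools.kbuilder.severity.duplicateRule", "ERROR");
config.setProperty("drools.kbuilder.severity.duplicateFunction", "WARNING");
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(config);
```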
20.1.11.2. StatelessKieSession
The StatelessKieSession wraps the KieSession instead of extending it. Its main focus is on decision service type scenarios, and it avoids the need to call dispose(). Stateless sessions do not support iterative insertions or calling fireAllRules() from Java code; calling execute() is a single-shot operation that internally instantiates a KieSession, adds all the user data, executes the user commands, calls fireAllRules(), and then calls dispose(). While the main way to work with this class is through BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that is required. The CommandExecutor and BatchExecution are discussed in detail in their own section.
Our simple example shows a stateless session executing a given collection of Java objects using the convenience API. It will iterate the collection, inserting each element in turn.
Example 20.22. Simple StatelessKieSession Execution with Collection
import org.kie.api.runtime.StatelessKieSession;

StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.execute(collection);
If this was done as a single command it would be as follows:
Example 20.23. Simple StatelessKieSession Execution with InsertElements Command
ksession.execute(CommandFactory.newInsertElements(collection));
If you wanted to insert the collection itself, and the collection’s individual elements, then CommandFactory.newInsert(collection)
would do the job.
Methods of the CommandFactory create the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. BatchExecutionHelper provides details on the XML format as well as how to use BRMS Pipeline to automate the marshalling of BatchExecution and ExecutionResults.
StatelessKieSession supports globals, scoped in a number of ways. We cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways.

- The StatelessKieSession method getGlobals() returns a Globals instance which provides access to the session's globals. These are shared across all execution calls. Exercise caution regarding mutable globals, because execution calls can be executing simultaneously in different threads.

Example 20.24. Session Scoped Global
import org.kie.api.runtime.StatelessKieSession;

StatelessKieSession ksession = kbase.newStatelessKieSession();
// Set a global hbnSession that can be used for DB interactions in the rules.
ksession.setGlobal("hbnSession", hibernateSession);
// Execute while being able to resolve the "hbnSession" identifier.
ksession.execute(collection);
- Using a delegate is another way of global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection that maps identifiers to values. Identifiers in this internal collection have priority over any supplied delegate. Only if an identifier cannot be found in the internal collection is the delegate global (if any) used.
- The third way of resolving globals is to use execution scoped globals. Here, a Command to set a global is passed to the CommandExecutor.
The CommandExecutor interface also offers the ability to export data through "out" parameters. Inserted facts, globals, and query results can all be returned.
Example 20.25. Out Identifiers
import org.kie.api.runtime.ExecutionResults;

// Set up a list of commands:
List cmds = new ArrayList();
cmds.add(CommandFactory.newSetGlobal("list1", new ArrayList(), true));
cmds.add(CommandFactory.newInsert(new Person("jon", 102), "person"));
cmds.add(CommandFactory.newQuery("Get People", "getPeople"));

// Execute the list:
ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));

// Retrieve the ArrayList:
results.getValue("list1");
// Retrieve the inserted Person fact:
results.getValue("person");
// Retrieve the query as a QueryResults instance:
results.getValue("Get People");
20.1.11.2.1. Sequential Mode
In a stateless session, the initial data set cannot be modified, and rules cannot be added or removed with the ReteOO algorithm. See the section called “PHREAK and Sequential Mode” for more information about PHREAK and sequential mode. Sequential mode can be used with stateless sessions only.
- Sequential Mode Workflow
If you enable sequential mode, the rule engine executes the following:
- Rules are ordered by salience and position in the ruleset.
- An element for each possible rule match is created. The element position indicates the firing order.
- Node memory is disabled, with the exception of the right-input object memory.
- The left-input adapter node propagation is disconnected, and the object with the node is referenced in a Command object. The Command object is put into a list in the working memory for later execution.
- All objects are asserted. Afterwards, the list of Command objects is checked and executed.
- All matches resulting from executing the list are placed into elements based on the sequence number of the rule.
- The elements containing matches are executed in a sequence.
- If you set the maximum number of rule executions, the evaluation network may exit too early.
In sequential mode, the LeftInputAdapterNode node creates a Command object and adds it to a list in the working memory. This Command object holds a reference to the LeftInputAdapterNode node and the propagated object. This stops any left-input propagations at insertion time, so the right-input propagation never needs to attempt a join with the left inputs. This removes the need for the left-input memory.

All nodes have their memory turned off, including the left-input tuple memory but excluding the right-input object memory. Once all the assertions are finished and the right-input memory of all the objects is populated, the list of LeftInputAdapterNode Command objects is iterated over. The objects propagate down the network, attempting to join with the right-input objects, but they are not remembered in the left input.

The agenda with a priority queue to schedule the tuples is replaced by an element for each rule. The sequence number of the RuleTerminalNode node indicates the element where the match is placed. Once all Command objects have finished, the elements are checked and existing matches are fired. To improve performance, the first and the last populated cell in the elements are remembered.

When the network is constructed, each RuleTerminalNode node receives a sequence number based on its salience number and the order in which it was added to the network.

The right-input node memories are typically hash maps, for fast object deletion. Because object deletion is not supported, a list is used when the values of the object are not indexed. For a large number of objects, indexed hash maps provide a performance increase. If an object has only a few instances, indexing may not be advantageous, and a list can be used.
- Advantages of Sequential Mode
- The rule execution is faster because the data does not change after the initial data set insertion.
- Limitations of Sequential Mode
- The insert, update, delete, and modify operations in the right-hand side (RHS) of rules are not supported with the ReteOO algorithm. With the PHREAK algorithm, the modify and update operations are supported.
- How to Enable Sequential Mode
Sequential mode is disabled by default. To enable it, do one of the following:
- Set the system property drools.sequential to true.
- Enable sequential mode while creating the KIE Base in the client code.
For example:
KieServices services = KieServices.Factory.get();
KieContainer container = services.newKieContainer(releaseId);
KieBaseConfiguration conf = services.newKieBaseConfiguration();
conf.setOption(SequentialOption.YES);
KieBase kieBase = container.newKieBase(conf);
For sequential mode to use a dynamic agenda, do one of the following:
- Set the system property drools.sequential.agenda to dynamic.
- Set the sequential agenda option while creating the KIE Base in the client code.
For example:
KieServices services = KieServices.Factory.get();
KieContainer container = services.newKieContainer(releaseId);
KieBaseConfiguration conf = services.newKieBaseConfiguration();
conf.setOption(SequentialAgendaOption.DYNAMIC);
KieBase kieBase = container.newKieBase(conf);
20.1.11.3. Marshalling
The KieMarshallers are used to marshal and unmarshal KieSessions. An instance of the KieMarshallers can be retrieved from the KieServices. A simple example is shown below:
Example 20.26. Simple Marshaller Example
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.marshalling.Marshaller;
import org.kie.api.runtime.KieSession;

// ksession is the KieSession
// kbase is the KieBase
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Marshaller marshaller = KieServices.Factory.get().getMarshallers().newMarshaller(kbase);
marshaller.marshall(baos, ksession);
baos.close();
However, with marshalling you often need more flexibility when dealing with referenced user data. To achieve this, use the ObjectMarshallingStrategy interface. Two implementations are provided, but users can implement their own. The two supplied strategies are IdentityMarshallingStrategy and SerializeMarshallingStrategy. SerializeMarshallingStrategy is the default, as shown in the example above; it simply calls the Serializable or Externalizable methods on a user instance. IdentityMarshallingStrategy creates an integer ID for each user object and stores the objects in a Map, while the ID is written to the stream. When unmarshalling, it accesses the IdentityMarshallingStrategy map to retrieve the instance. This means that if you use the IdentityMarshallingStrategy, it is stateful for the life of the Marshaller instance and will create IDs and keep references to all objects that it attempts to marshal. Below is the code to use an identity marshalling strategy.
Example 20.27. IdentityMarshallingStrategy
import org.kie.api.KieServices;
import org.kie.api.marshalling.KieMarshallers;
import org.kie.api.marshalling.Marshaller;
import org.kie.api.marshalling.ObjectMarshallingStrategy;

ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategy oms = kMarshallers.newIdentityMarshallingStrategy();
Marshaller marshaller = kMarshallers.newMarshaller(kbase, new ObjectMarshallingStrategy[]{ oms });
marshaller.marshall(baos, ksession);
baos.close();
In most cases, a single strategy is insufficient. For added flexibility, the ObjectMarshallingStrategyAcceptor interface can be used. This marshaller has a chain of strategies, and while reading or writing a user object it iterates the strategies, asking if they accept responsibility for marshalling the user object. One of the provided implementations is ClassFilterAcceptor. This allows strings and wildcards to be used to match class names. The default is "*.*", so in the above example the identity marshalling strategy is used with a default "*.*" acceptor.
Assuming that we want to serialize all classes except for one given package, where we will use identity lookup, we could do the following:
Example 20.28. IdentityMarshallingStrategy with Acceptor
import org.kie.api.KieServices;
import org.kie.api.marshalling.KieMarshallers;
import org.kie.api.marshalling.Marshaller;
import org.kie.api.marshalling.ObjectMarshallingStrategy;
import org.kie.api.marshalling.ObjectMarshallingStrategyAcceptor;

ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategyAcceptor identityAcceptor =
    kMarshallers.newClassFilterAcceptor(new String[] { "org.domain.pkg1.*" });
ObjectMarshallingStrategy identityStrategy =
    kMarshallers.newIdentityMarshallingStrategy(identityAcceptor);
ObjectMarshallingStrategy sms = kMarshallers.newSerializeMarshallingStrategy();
Marshaller marshaller =
    kMarshallers.newMarshaller(kbase, new ObjectMarshallingStrategy[]{ identityStrategy, sms });
marshaller.marshall(baos, ksession);
baos.close();
Note that the acceptance checking order is in the natural order of the supplied elements.
Also note that if you are using scheduled matches (for example, if some of your rules use timers or calendars), they are marshallable only if you configure your KieSession to use a trackable timer job factory manager before use, as follows:
Example 20.29. Configuring a trackable timer job factory manager
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.TimerJobFactoryOption;

KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption(TimerJobFactoryOption.get("trackable"));
KieSession ksession = kbase.newKieSession(ksconf, null);
20.1.11.4. KIE Persistence
Long-term, out-of-the-box persistence with the Java Persistence API (JPA) is possible with BRMS. It is necessary to have an implementation of the Java Transaction API (JTA) installed. For development purposes the Bitronix Transaction Manager is suggested, as it is simple to set up and works embedded, but for production use JBoss Transactions is recommended.
Example 20.30. Simple example using transactions
import javax.naming.InitialContext;
import javax.persistence.Persistence;
import javax.transaction.UserTransaction;
import bitronix.tm.TransactionManagerServices;
import org.kie.api.KieServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.EnvironmentName;
import org.kie.api.runtime.KieSession;

KieServices kieServices = KieServices.Factory.get();
Environment env = kieServices.newEnvironment();
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY,
        Persistence.createEntityManagerFactory("emf-name"));
env.set(EnvironmentName.TRANSACTION_MANAGER,
        TransactionManagerServices.getTransactionManager());

// KieSessionConfiguration may be null, and a default will be used:
KieSession ksession = kieServices.getStoreServices().newKieSession(kbase, null, env);
int sessionId = ksession.getId();

UserTransaction ut = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
ut.begin();
ksession.insert(data1);
ksession.insert(data2);
ksession.startProcess("process1");
ut.commit();
To use JPA, the Environment must be set with both the EntityManagerFactory and the TransactionManager. If a rollback occurs, the ksession state is also rolled back, so it is possible to continue to use it after a rollback. To load a previously persisted KieSession you need the ID, as shown below:
Example 20.31. Loading a KieSession
import org.kie.api.runtime.KieSession;

KieSession ksession = kieServices.getStoreServices().loadKieSession(sessionId, kbase, null, env);
To enable persistence, several classes must be added to your persistence.xml, as in the example below:
Example 20.32. Configuring JPA
<persistence-unit name="org.drools.persistence.jpa" transaction-type="JTA">
  <provider>org.hibernate.ejb.HibernatePersistence</provider>
  <jta-data-source>jdbc/BitronixJTADataSource</jta-data-source>
  <class>org.drools.persistence.info.SessionInfo</class>
  <class>org.drools.persistence.info.WorkItemInfo</class>
  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
    <property name="hibernate.max_fetch_depth" value="3"/>
    <property name="hibernate.hbm2ddl.auto" value="update"/>
    <property name="hibernate.show_sql" value="true"/>
    <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.BTMTransactionManagerLookup"/>
  </properties>
</persistence-unit>
The JDBC JTA data source must be configured first. Bitronix provides a number of ways of doing this, and its documentation should be consulted for details. For a quick start, here is the programmatic approach:
Example 20.33. Configuring JTA DataSource
PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName("jdbc/BitronixJTADataSource");
ds.setClassName("org.h2.jdbcx.JdbcDataSource");
ds.setMaxPoolSize(3);
ds.setAllowLocalTransactions(true);
ds.getDriverProperties().put("user", "sa");
ds.getDriverProperties().put("password", "sasa");
ds.getDriverProperties().put("URL", "jdbc:h2:mem:mydb");
ds.init();
Bitronix also provides a simple embedded JNDI service, ideal for testing. To use it, add a jndi.properties file to your META-INF folder and add the following line to it:
Example 20.34. JNDI Properties
java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory
20.1.12. KIE Sessions
20.1.12.1. Stateless KIE Sessions
A stateless KIE session is a session without inference. A stateless session can be called like a function: you pass it data and then receive a result back.
Stateless KIE sessions are useful in situations requiring validation, calculation, routing, and filtering.
20.1.12.1.1. Configuring Rules in Stateless Session
Create a data model like the driver’s license example below:
public class Applicant {
    private String name;
    private int age;
    private boolean valid;
    // getter and setter methods here
}
Write the first rule. In this example, a rule is added to disqualify any applicant younger than 18:
package com.company.license

rule "Is of valid age"
when
    $a : Applicant(age < 18)
then
    $a.setValid(false);
end
When the Applicant object is inserted into the rule engine, each rule's constraints evaluate it and search for a match. There is always an implied constraint of "object type", after which there can be any number of explicit field constraints. $a is a binding variable. It exists to make it possible to reference the matched object in the rule's consequence (from which the object's properties can be updated).

Note: Use of the dollar sign ($) is optional. It helps to differentiate between variable names and field names.

In the "Is of valid age" rule there are two constraints:

- The fact being matched must be of type Applicant.
- The value of age must be less than eighteen.

- To use this rule, save it in a file with the .drl extension (for example, licenseApplication.drl), and store it in a KIE Project. A KIE Project has the structure of a normal Maven project with an additional kmodule.xml file defining the KieBases and KieSessions. Place this file in the resources/META-INF folder of the Maven project. Store all the other artifacts, such as the licenseApplication.drl containing the former rule, in the resources folder or in any other subfolder under it.
- Create a KieContainer that reads the files to be built from the classpath:

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();

This compiles all the rule files found on the classpath and puts the result of this compilation, a KieModule, into the KieContainer.
- If there are no errors, you can go ahead and create your session from the KieContainer and execute it against some data:

StatelessKieSession ksession = kContainer.newStatelessKieSession();
Applicant applicant = new Applicant("Mr John Smith", 16);
assertTrue(applicant.isValid());
ksession.execute(applicant);
assertFalse(applicant.isValid());
Here, since the applicant is under the age of eighteen, their application will be marked as invalid.
20.1.12.1.2. Configuring Rules with Multiple Objects
To execute rules against any object implementing Iterable (such as a collection), add another class as shown in the example code below:

public class Applicant {
    private String name;
    private int age;
    // getter and setter methods here
}

public class Application {
    private Date dateApplied;
    private boolean valid;
    // getter and setter methods here
}
In order to check that the application was made within a legitimate time-frame, add this rule:
package com.company.license

rule "Is of valid age"
when
    Applicant(age < 18)
    $a : Application()
then
    $a.setValid(false);
end

rule "Application was made this year"
when
    $a : Application(dateApplied > "01-jan-2009")
then
    $a.setValid(false);
end
Use the JDK converter Arrays.asList(...) to turn an array into an Iterable list. The code shown below executes the rules against such a list. Every collection element is inserted before any matched rules are fired:

StatelessKieSession ksession = kbase.newStatelessKieSession();
Applicant applicant = new Applicant("Mr John Smith", 16);
Application application = new Application();
assertTrue(application.isValid());
ksession.execute(Arrays.asList(new Object[] { application, applicant }));
assertFalse(application.isValid());
Note: The execute(Object object) and execute(Iterable objects) methods are actually "wrappers" around a further method called execute(Command command), which comes from the BatchExecutor interface.

Use the CommandFactory to create instructions, so that the following is equivalent to execute(Iterable it):

ksession.execute(CommandFactory.newInsertIterable(new Object[] { application, applicant }));
Use the BatchExecutor and CommandFactory when working with many different commands or result output identifiers:

List<Command> cmds = new ArrayList<Command>();
cmds.add(CommandFactory.newInsert(new Person("Mr John Smith"), "mrSmith"));
cmds.add(CommandFactory.newInsert(new Person("Mr John Doe"), "mrDoe"));
BatchExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
assertEquals(new Person("Mr John Smith"), results.getValue("mrSmith"));
Note: CommandFactory supports many other commands that can be used in the BatchExecutor, such as StartProcess, Query, and SetGlobal.
20.1.12.2. Stateful KIE Sessions
A stateful session allows you to make iterative changes to facts over time. As with the StatelessKnowledgeSession, the StatefulKnowledgeSession supports the BatchExecutor interface. The only difference is that the FireAllRules command is not automatically called at the end.
Ensure that the dispose() method is called after running a stateful session, to avoid memory leaks: knowledge bases obtain references to stateful knowledge sessions when the sessions are created.
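Because FireAllRules is not called automatically, a stateful batch must include it explicitly. A minimal sketch, reusing the Applicant and Application classes from the stateless example above:

```java
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
try {
    List<Command> cmds = new ArrayList<Command>();
    cmds.add(CommandFactory.newInsert(new Applicant("Mr John Smith", 16)));
    cmds.add(CommandFactory.newInsert(new Application(), "application"));
    // Unlike in a stateless session, rules do not fire unless requested:
    cmds.add(CommandFactory.newFireAllRules());

    BatchExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
    Application application = (Application) results.getValue("application");
} finally {
    // Always dispose stateful sessions to avoid memory leaks:
    ksession.dispose();
}
```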
20.1.12.2.1. Common Use Cases for Stateful Sessions
- Monitoring
- For example, you can monitor a stock market and automate the buying process.
- Diagnostics
- Stateful sessions can be used to run fault-finding processes. They could also be used for medical diagnostic processes.
- Logistical
- For example, they could be applied to problems involving parcel tracking and delivery provisioning.
- Ensuring compliance
- For example, to validate the legality of market trades.
20.1.12.2.2. Stateful Session Monitoring Example
Create a model of what you want to monitor. In this example involving fire alarms, the rooms in a house have been listed. Each has one sprinkler. A fire can start in any of the rooms:
public class Room {
    private String name;
    // getter and setter methods here
}

public class Sprinkler {
    private Room room;
    private boolean on;
    // getter and setter methods here
}

public class Fire {
    private Room room;
    // getter and setter methods here
}

public class Alarm {
}
- The rules must express the relationships between multiple objects (to define things such as the presence of a sprinkler in a certain room). To do this, use a binding variable as a constraint in a pattern. This results in a cross-product.
Create an instance of the Fire class and insert it into the session. The rule below adds a binding to the Fire object's room field to constrain matches, so that only the sprinkler for that room is checked. When this rule fires and the consequence executes, the sprinkler activates:

rule "When there is a fire turn on the sprinkler"
when
    Fire($room : room)
    $sprinkler : Sprinkler(room == $room, on == false)
then
    modify($sprinkler) { setOn(true) };
    System.out.println("Turn on the sprinkler for room " + $room.getName());
end
Whereas the stateless session employed standard Java syntax to modify a field, the rule above uses the modify statement, which acts much like a "with" statement.
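Putting the model and rule together, a session interaction can be sketched as follows (a minimal sketch; it assumes the rule above is loaded into kbase, and the room name is illustrative):

```java
// A stateful session created from the knowledge base containing the fire rules:
KieSession ksession = kbase.newKieSession();

// Insert a room and its sprinkler as facts:
Room kitchen = new Room();
kitchen.setName("kitchen");
Sprinkler kitchenSprinkler = new Sprinkler();
kitchenSprinkler.setRoom(kitchen);
ksession.insert(kitchen);
ksession.insert(kitchenSprinkler);

// A fire starts in the kitchen; inserting the Fire fact and firing the rules
// activates only the sprinkler bound to that room:
Fire fire = new Fire();
fire.setRoom(kitchen);
FactHandle fireHandle = ksession.insert(fire);
ksession.fireAllRules();

// Later, when the fire is extinguished, retract the fact and fire again so
// that rules reacting to the absence of Fire can run:
ksession.delete(fireHandle);
ksession.fireAllRules();
```

Keeping the FactHandle returned by insert() is what allows the fact to be retracted later in the session's life cycle.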
20.2. Runtime Manager
The RuntimeManager interface enables and simplifies the usage of the KIE API. The interface provides configurable strategies that control actual runtime execution. The strategies are as follows:

- Singleton: the runtime manager maintains a single KieSession regardless of the number of processes available.
- Per Process Instance: the runtime manager maintains a mapping between a process instance and a KieSession, and always provides the same KieSession when working with the original process instance.
- Per Request: the runtime manager delivers a new KieSession for every request.
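Each strategy has a corresponding factory method on RuntimeManagerFactory. A sketch, assuming a RuntimeEnvironment named environment has already been built (the identifiers are hypothetical; a real application normally picks one strategy, and distinct managers must use distinct identifiers):

```java
RuntimeManagerFactory factory = RuntimeManagerFactory.Factory.get();

// One shared, synchronized KieSession for the whole application:
RuntimeManager singleton =
    factory.newSingletonRuntimeManager(environment, "app:singleton");

// A fresh KieSession for every request (one or more calls in one transaction):
RuntimeManager perRequest =
    factory.newPerRequestRuntimeManager(environment, "app:per-request");

// A dedicated KieSession per process instance, kept for its whole life cycle:
RuntimeManager perProcess =
    factory.newPerProcessInstanceRuntimeManager(environment, "app:per-process");
```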
See the following fragment of the RuntimeManager interface:

package org.kie.api.runtime.manager;

public interface RuntimeManager {

    /**
     * Returns a fully initialized RuntimeEngine instance:
     *  - KieSession is created or loaded depending on the strategy.
     *  - TaskService is initialized and attached to a ksession (using a listener).
     *  - WorkItemHandlers are initialized and registered on the ksession.
     *  - EventListeners (Process, Agenda, WorkingMemory) are initialized
     *    and added to the ksession.
     *
     * @param context a concrete implementation of a context
     *                supported by the given RuntimeManager
     * @return an instance of the RuntimeEngine
     */
    RuntimeEngine getRuntimeEngine(Context<?> context);

    ...
}
The runtime manager is responsible for managing and delivering instances of RuntimeEngine to the caller. The RuntimeEngine interface contains two important parts of the process engine, KieSession and TaskService:
public interface RuntimeEngine {

    /**
     * Returns the KieSession configured for this RuntimeEngine.
     */
    KieSession getKieSession();

    /**
     * Returns the TaskService configured for this RuntimeEngine.
     */
    TaskService getTaskService();
}
Both these components are configured to work with each other without any additional changes from the end user; it is therefore not required to register a human task handler or keep track of its connection to the service. Regardless of the strategy, the runtime manager provides the same capabilities when initializing and configuring the RuntimeEngine:

- KieSession is loaded with the same factories, either in-memory or JPA based.
- Work item handlers and event listeners are registered on each KieSession.
- TaskService is configured with:
  - the JTA transaction manager,
  - the same entity manager factory as the KieSession,
  - the UserGroupCallback from the environment.
Additionally, the runtime manager provides dedicated methods to dispose of a RuntimeEngine when it is no longer required, releasing any resources it might have acquired.
20.2.1. Usage
20.2.1.1. Usage Scenario
A regular usage scenario for the RuntimeManager is:

At application startup:

- Build the RuntimeManager and keep it for the entire lifetime of the application. It is thread safe and you can access it concurrently.

At request:

- Get a RuntimeEngine from the RuntimeManager, using the context instance appropriate for the strategy of the RuntimeManager.
- Get the KieSession or TaskService from the RuntimeEngine.
- Perform operations on the KieSession or TaskService, such as startProcess and completeTask.
- Once done with processing, dispose of the RuntimeEngine using the RuntimeManager.disposeRuntimeEngine method.

At application shutdown:

- Close the RuntimeManager.
When the RuntimeEngine is obtained from the RuntimeManager within an active JTA transaction, there is no need to dispose of the RuntimeEngine at the end, as the runtime manager automatically disposes of it on transaction completion (regardless of whether the transaction commits or rolls back).
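Outside of a JTA transaction, disposal is the caller's responsibility; a common sketch is to guard it with try/finally (the process id is hypothetical):

```java
RuntimeEngine engine = manager.getRuntimeEngine(EmptyContext.get());
try {
    KieSession ksession = engine.getKieSession();
    ksession.startProcess("com.sample.process"); // hypothetical process id
} finally {
    // Without an active JTA transaction, the engine is not disposed of
    // automatically, so release it explicitly:
    manager.disposeRuntimeEngine(engine);
}
```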
20.2.1.2. Building Runtime Manager
Here is how you can build a RuntimeManager (with a RuntimeEnvironment) and get a RuntimeEngine (which encapsulates KieSession and TaskService) from it:
// First, configure the environment that will be used by the RuntimeManager:
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultInMemoryBuilder()
    .addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"),
              ResourceType.BPMN2)
    .get();

// Next, create the RuntimeManager - in this case the singleton strategy is chosen:
RuntimeManager manager = RuntimeManagerFactory.Factory.get()
    .newSingletonRuntimeManager(environment);

// Then, get a RuntimeEngine from the manager - using an empty context, as the
// singleton does not keep track of runtime engines because there is only one:
RuntimeEngine runtimeEngine = manager.getRuntimeEngine(EmptyContext.get());

// Get the KieSession from the runtime engine - already initialized with all
// handlers, listeners, and other components configured on the environment:
KieSession ksession = runtimeEngine.getKieSession();

// Add invocations of the process engine here,
// for example ksession.startProcess(processId);

// Finally, dispose of the runtime engine:
manager.disposeRuntimeEngine(runtimeEngine);
Runtime Manager Identifier
During runtime execution, the identifier of the runtime manager is the deploymentId. If a task is persisted, the identifier of the task is persisted as its deploymentId as well. The deploymentId of the task is then used to identify the runtime manager after the task is completed and its process instance is resumed. The deploymentId is also persisted as externalId in the history log.
If the identifier is not specified during the creation of the runtime manager, a default value is used, and therefore the same deployment is used throughout the application's lifecycle. It is possible to maintain multiple runtime managers in one application, but then you must specify their identifiers. For example, Deployment Service (see Section 20.3.1, “Deployment Service”) maintains multiple runtime managers with identifiers based on the kJAR's GAV. The Business Central web application depends on Deployment Service, so it has multiple runtime managers as well.
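The identifier can be supplied when the manager is created; the factory methods accept it as an additional argument. A sketch using a GAV-style identifier (the value is illustrative):

```java
// "com.sample:evaluation:1.0" is a hypothetical GAV-based identifier; it becomes
// the deploymentId under which tasks and history log entries are stored.
RuntimeManager manager = RuntimeManagerFactory.Factory.get()
    .newSingletonRuntimeManager(environment, "com.sample:evaluation:1.0");
```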
20.2.2. Runtime Environment
The complexity of knowing when to create, dispose of, and register handlers is taken away from the end user and moved to the runtime manager, which knows when and how to perform such operations. You can still have fine-grained control over this process through the comprehensive configuration options of the RuntimeEnvironment.
The RuntimeEnvironment interface provides access to the data kept as part of the environment. Use the RuntimeEnvironmentBuilder, which provides a fluent API to configure a RuntimeEnvironment with predefined settings. You can obtain instances of RuntimeEnvironmentBuilder through the RuntimeEnvironmentBuilderFactory, which provides preconfigured builders to simplify building the environment for the RuntimeManager.
Besides KieSession, the Runtime Manager also provides access to TaskService. The default builder comes with a predefined set of elements that consists of:

- Persistence unit name: set to org.jbpm.persistence.jpa (for both the process engine and the task service).
- Human task handler: automatically registered on the KieSession.
- JPA-based history log event listener: automatically registered on the KieSession.
- Event listener to trigger rule task evaluation (fireAllRules): automatically registered on the KieSession.
The MVELUserGroupCallback class fails to initialize in an OSGi environment. Do not use or include MVELUserGroupCallback, as it is not designed for production purposes.
20.2.3. Strategies
There are multiple strategies of managing KIE sessions that can be used when working with the Runtime Manager.
20.2.3.1. Singleton Strategy
This strategy instructs the RuntimeManager to maintain a single instance of RuntimeEngine, and in turn a single instance of KieSession and TaskService. Access to the RuntimeEngine is synchronized and therefore thread safe, although this comes with a performance penalty. This strategy is considered the easiest one and is recommended to start with. It has the following characteristics:
- Small memory footprint, that is, a single instance of the runtime engine and task service.
- Simple and compact in design and usage.
- A good fit for low to medium load on the process engine, due to synchronized access.
- Due to the single KieSession instance, all state objects (such as facts) are directly visible to all process instances and vice versa.
- Not contextual: when retrieving instances of RuntimeEngine from a singleton RuntimeManager, the context instance is not important and usually the EmptyContext.get() method is used, although a null argument is acceptable as well.
- Keeps track of the ID of the KieSession used between RuntimeManager restarts, to ensure it uses the same session. This ID is stored as a serialized file on disk, in a temporary location that depends on the environment.
Consider the following warnings when using the Singleton strategy:
- Do not use the Singleton runtime strategy with the EJB Timer Scheduler (the default scheduler in Process Server) in a production environment. This combination can result in Hibernate problems under load. For more information about this limitation, see Hibernate issues with Singleton strategy and EJBTimerScheduler.
Do not use the Singleton runtime strategy with JTA transactions (UserTransaction or CMT). This combination can result in an IllegalStateException error with a message similar to "Process instance X is disconnected". For more information about this limitation, see Hibernate errors with Singleton RuntimeManager and outer transaction. To avoid this problem, put the transaction invocations into synchronized blocks, as shown in the following example:
synchronized (ksession) {
    try {
        tx.begin();

        // use ksession
        // application logic

        tx.commit();
    } catch (Exception e) {
        ...
    }
}
20.2.3.2. Per Request Strategy
This strategy instructs the RuntimeManager to provide a new instance of RuntimeEngine for every request. The RuntimeManager considers one or more invocations within a single transaction to be one request, and it must return the same instance of RuntimeEngine within a single transaction to ensure correctness of state; otherwise, an operation performed in one call would not be visible in the next. This is a kind of stateless strategy that provides only request-scoped state. Once the request is completed, the RuntimeEngine is permanently destroyed. If persistence is used, the KieSession information is also removed from the database. The strategy has the following characteristics:
- Completely isolated process engine and task service operations for every request.
- Completely stateless, storing facts makes sense only for the duration of the request.
- A good fit for high load, stateless processes (no facts or timers that must be preserved between requests).
- KieSession is only available for the lifetime of the request and is destroyed at the end.
- Not contextual: when retrieving instances of RuntimeEngine from a per-request RuntimeManager, the context instance is not important and usually the EmptyContext.get() method is used, although a null argument is also acceptable.
20.2.3.3. Per Process Instance Strategy
This strategy instructs the RuntimeManager to maintain a strict relationship between a KieSession and a ProcessInstance: the KieSession is available for as long as the ProcessInstance it belongs to is active. This strategy provides the most flexible approach for using advanced capabilities of the engine, such as rule evaluation in isolation (for the given process instance only). It provides maximum performance and reduces potential bottlenecks introduced by synchronization. Additionally, it reduces the number of KieSessions to the actual number of process instances, rather than the number of requests (in contrast to the per-request strategy). It has the following characteristics:
- Most advanced strategy to provide isolation to given process instance only.
- The most advanced strategy, providing isolation for the given process instance only.
- Maintains a strict relationship between KieSession and ProcessInstance to ensure it always delivers the same KieSession for a given ProcessInstance.
- Merges the life cycle of the KieSession with the ProcessInstance, so both are disposed of on process instance completion (complete or abort).
- Allows maintaining data (such as facts and timers) in the scope of the process instance, that is, only the process instance has access to that data.
- Introduces a small overhead due to the need to look up and load the KieSession for the process instance.
- Validates the usage of the KieSession, so it cannot be used for other process instances. In such cases, an exception is thrown.
- Is contextual: it accepts EmptyContext, ProcessInstanceIdContext, and CorrelationKeyContext context instances.
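Since this strategy is contextual, the context carries the process instance ID. A typical sketch is to start with an empty context (no instance exists yet) and later come back to the same session with a ProcessInstanceIdContext (the process id is hypothetical):

```java
// No process instance exists yet, so start from an empty process instance context:
RuntimeEngine engine = manager.getRuntimeEngine(ProcessInstanceIdContext.get());
KieSession ksession = engine.getKieSession();
ProcessInstance pi = ksession.startProcess("com.sample.process"); // hypothetical id
manager.disposeRuntimeEngine(engine);

// Later, retrieve the very same KieSession for that process instance:
RuntimeEngine sameEngine = manager.getRuntimeEngine(
        ProcessInstanceIdContext.get(pi.getId()));
```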
20.2.4. Handlers and Listeners
The Runtime Manager provides various ways to register work item handlers and process event listeners.
20.2.4.1. Registering Through Registerable Items Factory
Implementations of the RegisterableItemsFactory interface provide a dedicated mechanism for creating your own handlers and listeners:
/**
 * Returns new instances of WorkItemHandler that will be registered on RuntimeEngine.
 *
 * @param runtime provides RuntimeEngine in case a handler needs to make use of it internally
 * @return map of handlers to be registered - in case of no handlers,
 *         an empty map shall be returned
 */
Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime);

/**
 * Returns new instances of ProcessEventListener that will be registered on RuntimeEngine.
 *
 * @param runtime provides RuntimeEngine in case listeners need to make use of it internally
 * @return list of listeners to be registered - in case of no listeners,
 *         an empty list shall be returned
 */
List<ProcessEventListener> getProcessEventListeners(RuntimeEngine runtime);

/**
 * Returns new instances of AgendaEventListener that will be registered on RuntimeEngine.
 *
 * @param runtime provides RuntimeEngine in case listeners need to make use of it internally
 * @return list of listeners to be registered - in case of no listeners,
 *         an empty list shall be returned
 */
List<AgendaEventListener> getAgendaEventListeners(RuntimeEngine runtime);

/**
 * Returns new instances of WorkingMemoryEventListener that will be registered
 * on RuntimeEngine.
 *
 * @param runtime provides RuntimeEngine in case listeners need to make use of it internally
 * @return list of listeners to be registered - in case of no listeners,
 *         an empty list shall be returned
 */
List<WorkingMemoryEventListener> getWorkingMemoryEventListeners(RuntimeEngine runtime);
Extending an out-of-the-box implementation and adding your own handlers or listeners is good practice. You may not always need extensions, as the default implementations of RegisterableItemsFactory already provide a mechanism to define custom handlers and listeners. The following is a list of the available implementations, ordered in the hierarchy of inheritance:

- org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory: the simplest possible implementation. It comes empty and is based on reflection to produce instances of handlers and listeners from given class names.
- org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory: an extension of the simple implementation that introduces the defaults described above and still provides the same capabilities as SimpleRegisterableItemsFactory.
- org.jbpm.runtime.manager.impl.KModuleRegisterableItemsFactory: an extension of the default implementation (DefaultRegisterableItemsFactory) that provides specific capabilities for KIE modules and still provides the same capabilities as the simple implementation.
- org.jbpm.runtime.manager.impl.cdi.InjectableRegisterableItemsFactory: an extension of the default implementation (DefaultRegisterableItemsFactory) that is tailored for CDI environments and provides a CDI-style approach to finding handlers and listeners through producers.
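For example, a custom factory might extend DefaultRegisterableItemsFactory to keep all the defaults while adding one handler (the "Notification" name and the NotificationWorkItemHandler class are hypothetical):

```java
public class MyRegisterableItemsFactory extends DefaultRegisterableItemsFactory {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime) {
        // Keep the default handlers (for example, the human task handler):
        Map<String, WorkItemHandler> handlers = super.getWorkItemHandlers(runtime);
        // Register a custom handler under the name used in the process definition:
        handlers.put("Notification", new NotificationWorkItemHandler());
        return handlers;
    }
}
```

An instance of such a factory can then be passed to the environment builder, for example through RuntimeEnvironmentBuilder's registerableItemsFactory(..) method.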
20.2.4.2. Registering Through Configuration Files
Alternatively, you can register simple work item handlers (stateless, or requiring only a KieSession) by defining them in a CustomWorkItemHandlers.conf file on the class path. To use this approach, do the following:

- Create a file called drools.session.conf inside the META-INF folder at the root of the class path (WEB-INF/classes/META-INF for web applications).
- Add the following line to the drools.session.conf file:

drools.workItemHandlers = CustomWorkItemHandlers.conf

- Create a file called CustomWorkItemHandlers.conf inside the META-INF folder at the root of the class path (WEB-INF/classes/META-INF for web applications).
- Define custom work item handlers in MVEL format inside the CustomWorkItemHandlers.conf file:

[
  "Log": new org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler(),
  "WebService": new org.jbpm.process.workitem.webservice.WebServiceWorkItemHandler(ksession),
  "Rest": new org.jbpm.process.workitem.rest.RESTWorkItemHandler(),
  "Service Task" : new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession)
]

These steps register the work item handlers for any KieSession created by the application, regardless of whether it uses the RuntimeManager or not.
20.2.4.3. Registering in CDI Environment
When you use the RuntimeManager in a CDI environment, you can use dedicated interfaces to provide custom WorkItemHandlers and EventListeners to the RuntimeEngine.
public interface WorkItemHandlerProducer {

    /**
     * Returns a map of (key = work item name, value = work item handler instance)
     * of work items to be registered on KieSession.
     * Parameters that might be given are as follows:
     *  - ksession
     *  - taskService
     *  - runtimeManager
     *
     * @param identifier identifier of the owner - usually RuntimeManager - that allows
     *                   the producer to filter out and provide valid instances
     *                   for the given owner
     * @param params     the owner might provide some parameters, usually KieSession,
     *                   TaskService, RuntimeManager instances
     * @return map of work item handler instances (the recommendation is to always
     *         return new instances when this method is invoked)
     */
    Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params);
}
Event listener producers are annotated with the proper qualifier to indicate what type of listeners they provide. Select one of the following to indicate the type:

- @Process for ProcessEventListener
- @Agenda for AgendaEventListener
- @WorkingMemory for WorkingMemoryEventListener

public interface EventListenerProducer<T> {

    /**
     * Returns a list of instances for the given (T) type of listeners.
     * Parameters that might be given are as follows:
     *  - ksession
     *  - taskService
     *  - runtimeManager
     *
     * @param identifier identifier of the owner - usually RuntimeManager - that allows
     *                   the producer to filter out and provide valid instances
     *                   for the given owner
     * @param params     the owner might provide some parameters, usually KieSession,
     *                   TaskService, RuntimeManager instances
     * @return list of listener instances (the recommendation is to always return new
     *         instances when this method is invoked)
     */
    List<T> getEventListeners(String identifier, Map<String, Object> params);
}
Package these interface implementations as a bean archive that includes beans.xml inside the META-INF folder, and add it to the application class path (for example, WEB-INF/lib for a web application). This enables the CDI-based RuntimeManager to discover them and register them on every KieSession that is created or loaded from the data store.
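A producer for process event listeners might be sketched like this (the class and listener names are illustrative; imports are omitted, as in the other listings, and the @Process qualifier is the one described above):

```java
@Process
public class CustomProcessListenerProducer
        implements EventListenerProducer<ProcessEventListener> {

    @Override
    public List<ProcessEventListener> getEventListeners(String identifier,
            Map<String, Object> params) {
        // Return new instances on every invocation, as recommended by the SPI:
        List<ProcessEventListener> listeners = new ArrayList<ProcessEventListener>();
        listeners.add(new CustomProcessEventListener()); // hypothetical listener
        return listeners;
    }
}
```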
All the components (KieSession, TaskService, and RuntimeManager) are provided to the producers to allow handlers and listeners to be stateful and to do more advanced things with the engine. You can also apply filtering, based on the identifier that is given as an argument to the methods, to decide whether the given RuntimeManager receives handlers or listeners.
Whenever there is a need to interact with the process engine or task service from within a handler or listener, the recommended approach is to use the RuntimeManager and retrieve the RuntimeEngine (and then the KieSession or TaskService) from it, as that ensures a proper state.
20.2.5. Control Parameters
The following control parameters are available to alter engine default behavior:
Engine Behavior Bootstrap Switches
- jbpm.business.calendar.properties
  The location of the configuration file with Business Calendar properties.
  Default value: /jbpm.business.calendar.properties
  Admitted values: Path
- jbpm.data.dir
  The location where data files produced by Red Hat JBoss BPM Suite are stored.
  Default value: ${jboss.server.data.dir} if available, otherwise ${java.io.tmpdir}
- jbpm.enable.multi.con
  Allows Web Designer to use multiple incoming or outgoing connections for tasks. If not enabled, such tasks are marked as invalid.
  Default value: false
  Admitted values: true or false
- jbpm.loop.level.disabled
  Enables or disables loop iteration tracking, which allows advanced loop support when using XOR gateways.
  Default value: true
  Admitted values: true or false
- jbpm.overdue.timer.delay
  Specifies the delay for overdue timers to allow proper initialization, in milliseconds.
  Default value: 2000
  Admitted values: Number (Long)
- jbpm.process.name.comparator
  An alternative comparator class to empower the Start Process by Name feature.
  Default value: org.jbpm.process.instance.StartProcessHelper.NumberVersionComparator
  Admitted values: Fully qualified name
- jbpm.usergroup.callback.properties
  The location of the user group callback property file when org.jbpm.ht.callback is set to jaas or db.
  Default value: classpath:/jbpm.usergroup.callback.properties
  Admitted values: Path
- jbpm.user.group.mapping
  An alternative classpath location of the user information configuration (used by LDAPUserInfoImpl).
  Default value: ${jboss.server.config.dir}/roles.properties
  Admitted values: Path
- jbpm.user.info.properties
  An alternative classpath location for the user group callback implementation (LDAP, DB). For more information, see org.jbpm.ht.userinfo.
  Default value: classpath:/userinfo.properties
  Admitted values: Path
- jbpm.ut.jndi.lookup
  An alternative JNDI name to be used when there is no access to the default one for user transactions (java:comp/UserTransaction).
  Default value: N/A
  Admitted values: JNDI name
- org.jbpm.ht.callback
  Specifies the implementation of user group callback to be used:
  - mvel: default; mostly used for testing.
  - ldap: LDAP; requires additional configuration in the jbpm.usergroup.callback.properties file.
  - db: database; requires additional configuration in the jbpm.usergroup.callback.properties file.
  - jaas: JAAS; delegates to the container to fetch information about user data.
  - props: a simple property file; requires an additional file that keeps all the information (users and groups).
  - custom: a custom implementation; you must specify the fully qualified name of the class in org.jbpm.ht.custom.callback.
  Default value: jaas
  Admitted values: mvel, ldap, db, jaas, props, or custom
- org.jbpm.ht.custom.callback
  A custom implementation of the UserGroupCallback interface in case the org.jbpm.ht.callback property is set to custom.
  Default value: N/A
  Admitted values: Fully qualified name
- org.jbpm.ht.custom.userinfo
  A custom implementation of the UserInfo interface in case the org.jbpm.ht.userinfo property is set to custom.
  Default value: N/A
  Admitted values: Fully qualified name
- org.jbpm.ht.userinfo
  Specifies which implementation of the UserInfo interface to use for user or group information providers:
  - ldap: LDAP; needs to be configured in the file specified in jbpm.user.info.properties.
  - db: database; needs to be configured in the file specified in jbpm.user.info.properties.
  - props: a simple property file; set the jbpm.user.info.properties property to specify the path to the file.
  - custom: a custom implementation; you must specify the fully qualified name of the class in the org.jbpm.ht.custom.userinfo property.
  Default value: N/A
  Admitted values: ldap, db, props, or custom
- org.jbpm.ht.user.separator
  An alternative separator used when loading actors and groups for user tasks from a String.
  Default value: , (comma)
  Admitted values: String
- org.kie.executor.disabled
  Disables the asynchronous job executor.
  Default value: false
  Admitted values: true or false
- org.kie.executor.jms
  Enables or disables JMS support in the executor. Set to false to disable JMS support.
  Default value: true
  Admitted values: true or false
- org.kie.executor.interval
  The time between the moment the asynchronous job executor finishes a job and the moment it starts a new one, in the time unit specified by org.kie.executor.timeunit.
  Default value: 3
  Admitted values: Number (Integer)
- org.kie.executor.pool.size
  The number of threads used by the asynchronous job executor.
  Default value: 1
  Admitted values: Number (Integer)
- org.kie.executor.retry.count
  The number of retries the asynchronous job executor attempts on a failed job.
  Default value: 3
  Admitted values: Number (Integer)
- org.kie.executor.timeunit
  The time unit in which org.kie.executor.interval is specified.
  Default value: SECONDS
  Admitted values: a java.util.concurrent.TimeUnit constant
- org.kie.mail.session
  The JNDI name of the mail session as registered in the application server, for use by EmailWorkItemHandler.
  Default value: mail/jbpmMailSession
  Admitted values: String
- org.quartz.properties
  The location of the Quartz configuration file, required to activate the Quartz timer service.
  Default value: N/A
  Admitted values: Path
These parameters allow you to fine-tune the execution for environment needs and actual requirements. All of these parameters are set as JVM system properties, usually with -D when starting a program such as an application server.
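Because the switches are plain JVM system properties, they can also be set programmatically before the engine bootstraps; the values below are for illustration only:

```java
public class EngineConfig {
    public static void main(String[] args) {
        // Equivalent to -Dorg.kie.executor.disabled=true -Djbpm.overdue.timer.delay=5000
        // on the command line; this must happen before the engine reads the properties.
        System.setProperty("org.kie.executor.disabled", "true");
        System.setProperty("jbpm.overdue.timer.delay", "5000");

        System.out.println(System.getProperty("org.kie.executor.disabled"));
        // The second argument is the documented default, used when the property is unset:
        System.out.println(System.getProperty("jbpm.overdue.timer.delay", "2000"));
    }
}
```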
20.2.6. Variable Persistence Strategy
Objects in Red Hat JBoss BPM Suite that are used as process variables must be serializable, that is, they must implement the java.io.Serializable interface. Objects that are not serializable can also be used as process variables, but for these you must implement a marshalling strategy and register it, because the default strategy cannot convert such variables into bytes. By default, all objects need to be serializable.
For internal objects, which are modified only by the engine, it is sufficient to implement java.io.Serializable. The variable is transformed into a byte stream and stored in a database.
For external data that can be modified by external systems and people (like documents from a CMS, or other database entities), other strategies need to be implemented.
Red Hat JBoss BPM Suite uses what is known as the pluggable Variable Persistence Strategy: it uses serialization for objects that implement the java.io.Serializable interface, and the JPA-based JPAPlaceholderResolverStrategy class for objects that are entities (not implementing the java.io.Serializable interface).
JPA Placeholder Resolver Strategy
To use this strategy, configure it in the Runtime Environment used for creating your knowledge sessions. Set this strategy first and the serialization-based strategy last, as the default. The following example shows how:
// Create the entity manager factory:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("com.redhat.sample");

RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get().newDefaultBuilder()
    .entityManagerFactory(emf)
    .addEnvironmentEntry(EnvironmentName.OBJECT_MARSHALLING_STRATEGIES,
        new ObjectMarshallingStrategy[] {
            // Set the entity manager factory on the JPA strategy so it knows
            // how to store and read entities:
            new JPAPlaceholderResolverStrategy(emf),
            // Set the serialization-based strategy as the last one to deal
            // with non-entity classes:
            new SerializablePlaceholderResolverStrategy(
                ClassObjectMarshallingStrategyAcceptor.DEFAULT)
        })
    .addAsset(ResourceFactory.newClassPathResource("example.bpmn"), ResourceType.BPMN2)
    .get();

// Now create the runtime manager and start using entities as part of your process:
RuntimeManager manager = RuntimeManagerFactory.Factory.get()
    .newSingletonRuntimeManager(environment);
Make sure to add your entity classes to the persistence.xml configuration file that will be used by the JPA strategy.
At runtime, process variables that need persisting are evaluated using the available strategy. It is up to the strategy to accept or reject the variable. If the variable is rejected by the first strategy, it is passed on till it reaches the default strategy.
A JPA-based strategy accepts only classes that declare a field with the @Id annotation (javax.persistence.Id). This annotation provides the unique ID that is used to retrieve the variable. A serialization-based strategy, on the other hand, simply accepts all variables by default.
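A process variable class accepted by the JPA strategy might therefore be sketched as follows (OrderDetails is a hypothetical entity):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical entity used as a process variable. It does not implement
// java.io.Serializable, so the serialization strategy rejects it, while the
// JPA strategy accepts it because of the @Id field.
@Entity
public class OrderDetails {

    @Id
    @GeneratedValue
    private Long id; // the unique id the strategy stores and later uses to reload the entity

    private String customer;

    // getter and setter methods here
}
```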
Once the variable has been accepted, the JPA marshalling operation to store the variable is performed by the marshal() method, while the unmarshal() method retrieves the variable from the storage.
Creating Custom Strategy
The previous section alluded to the two methods that are used to marshal() and unmarshal() objects. These methods are part of the org.kie.api.marshalling.ObjectMarshallingStrategy interface, and you can implement this interface to create a custom persistence strategy.
public interface ObjectMarshallingStrategy {

    public boolean accept(Object object);

    public void write(ObjectOutputStream os, Object object) throws IOException;

    public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException;

    public byte[] marshal(Context context, ObjectOutputStream os, Object object)
            throws IOException;

    public Object unmarshal(Context context, ObjectInputStream is, byte[] object,
            ClassLoader classloader) throws IOException, ClassNotFoundException;

    public Context createContext();
}
The read() and write() methods are provided for backwards compatibility. Use the accept(), marshal(), and unmarshal() methods to create your strategy.
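A custom strategy skeleton might look as follows; the Document class and the DocumentStorage calls are hypothetical placeholders for an external system such as a CMS:

```java
public class DocumentMarshallingStrategy implements ObjectMarshallingStrategy {

    @Override
    public boolean accept(Object object) {
        // Only handle our external document type; everything else falls
        // through to the next (default, serialization-based) strategy:
        return object instanceof Document;
    }

    @Override
    public byte[] marshal(Context context, ObjectOutputStream os, Object object)
            throws IOException {
        Document document = (Document) object;
        // Store the document in the external system (hypothetical call) and
        // persist only its identifier as the variable value:
        String id = DocumentStorage.getInstance().save(document);
        return id.getBytes("UTF-8");
    }

    @Override
    public Object unmarshal(Context context, ObjectInputStream is, byte[] object,
            ClassLoader classloader) throws IOException, ClassNotFoundException {
        String id = new String(object, "UTF-8");
        // Reload the document from the external system by its identifier:
        return DocumentStorage.getInstance().load(id);
    }

    @Override
    public Context createContext() {
        return null; // no strategy-specific context needed
    }

    // read() and write() exist only for backwards compatibility:
    @Override
    public void write(ObjectOutputStream os, Object object) throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override
    public Object read(ObjectInputStream os) throws IOException, ClassNotFoundException {
        throw new UnsupportedOperationException();
    }
}
```

The registered strategy is then placed before the serialization-based default in the OBJECT_MARSHALLING_STRATEGIES array, exactly as shown for the JPA strategy above.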
20.3. KIE Services
Red Hat JBoss BPM Suite provides a set of high level services on top of the Runtime Manager API. These services are the easiest way to embed BPM capabilities into a custom application. These services are split into several modules to ease their adoption in various environments:
- jbpm-services-api
- Service interfaces and other common classes
- jbpm-kie-services
- Core implementation of the services API in pure Java (without any framework-specific dependencies)
- jbpm-services-cdi
- CDI wrappers of the core services implementation
- jbpm-services-ejb
- EJB wrappers of the core services implementation including EJB remote client implementation
- jbpm-executor
- Executor Service core implementation
- jbpm-executor-cdi
- CDI wrapper of the Executor Service core implementation
When working with KIE Services, you do not have to create your own wrappers around Runtime Manager, Runtime Engine, and KIE Session. KIE Services follow the Runtime Manager API best practices and thus eliminate various risks of working with that API directly.
20.3.1. Deployment Service
The Deployment Service is responsible for managing deployment units which include resources such as rules, processes, and forms. It can be used to:
- Deploy and undeploy deployment units
- Activate and deactivate deployments
- List all deployed units
- Get deployment unit for a given deployment and check its status
- Retrieve Runtime Manager instance dedicated to a given deployment
The EJB remote client is restricted so that it does not expose the Runtime Manager, because a serialized Runtime Manager makes no sense on the client side.
A typical use case for this service is to add dynamic behavior to your system, so that multiple KJARs can be active at the same time and executed simultaneously.
// create a deployment unit by giving its GAV
DeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION);

// deploy
deploymentService.deploy(deploymentUnit);

// retrieve the deployed unit
DeployedUnit deployedUnit = deploymentService.getDeployedUnit(deploymentUnit.getIdentifier());

// get the runtime manager
RuntimeManager manager = deployedUnit.getRuntimeManager();
20.3.2. Definition Service
The Definition Service provides details about processes extracted from their BPMN2 definitions. Before using any method to get information, you must invoke the buildProcessDefinition method to populate the repository with process information taken from the BPMN2 content.
The Definition Service provides access to the following BPMN2 data:
- Process definitions, reusable subprocesses, and process variables
- Java classes and rules referred to in a given process
- All organizational entities involved in a given process
- Service tasks defined in a given process
- User task definitions, task input and output mappings
Depending on the actual process definition, the returned values for users and groups can contain either the actual user or group name, or the name of a process variable that is resolved to the actual user or group name at runtime.
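A sketch of how this service might be used follows. The deployment ID, file path, process ID, and task name are assumptions for illustration, and the exact method signatures should be verified against the DefinitionService interface in your version:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

// Read the BPMN2 content of the process (path and identifiers are hypothetical)
String bpmn2Content = new String(
        Files.readAllBytes(Paths.get("src/main/resources/HiringProcess.bpmn2")));

// Populate the repository with information extracted from the BPMN2 content;
// this must happen before any of the query methods are called
definitionService.buildProcessDefinition("org.jbpm:HR:1.0", bpmn2Content,
        getClass().getClassLoader(), true);

// Retrieve details about the process definition
ProcessDefinition definition =
        definitionService.getProcessDefinition("org.jbpm:HR:1.0", "HiringProcess");

// Input mappings of a user task named "Interview" (assumed task name)
Map<String, String> inputs =
        definitionService.getTaskInputMappings("org.jbpm:HR:1.0", "HiringProcess", "Interview");
```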
20.3.3. Process Service
The Process Service provides access to the execution environment. Before using this service, a deployment unit containing process definitions needs to be created (see Section 20.3.1, “Deployment Service”). The Process Service can be used to:
- Start new process instances and abort the existing ones
- Get process instance information
- Get and modify process variables
- Signal a single process instance or all instances in a given deployment
- List all available signals in the current state of a given process instance
- List, complete, and abort work items
- Execute commands on the underlying command executor
The Process Service is mostly focused on runtime operations that affect process execution, not on read operations, for which there is the dedicated Runtime Data Service (see Section 20.3.4, “Runtime Data Service”).
The following example shows how to deploy and run a process:
KModuleDeploymentUnit deploymentUnit = new KModuleDeploymentUnit(groupId, artifactId, version);
deploymentService.deploy(deploymentUnit);

long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "HiringProcess");
ProcessInstance pi = processService.getProcessInstance(processInstanceId);
20.3.4. Runtime Data Service
The Runtime Data Service provides access to actual data that is available at runtime, such as:
- Process definitions by various query parameters
- Active process instances by various query parameters
- Current and previous values of process variables
- List of active tasks by various parameters
- Active and completed nodes of a given process instance
Use this service as the main source of information when building list-based user interfaces that show process definitions, process instances, and tasks for a given user.
The Runtime Data Service provides only basic querying capabilities. Use the Query Service to create and execute more advanced queries (see Section 20.3.6, “Query Service”).
There are two important arguments that most of the Runtime Data Service operations support:
- QueryContext
- This provides capabilities for efficient result-set management, such as pagination and sorting.
- QueryFilter
- This applies additional filtering to task queries in order to provide more advanced capabilities when searching for user tasks.
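For example, the two arguments might be used as follows. The method names follow the Runtime Data Service API, but the column name used for ordering is an assumption and should be checked against your environment:

```java
import java.util.Collection;
import java.util.List;

// Page through active process instances, 10 at a time,
// ordered by an (assumed) "log.processName" column, ascending
QueryContext ctx = new QueryContext(0, 10, "log.processName", true);
Collection<ProcessInstanceDesc> instances = runtimeDataService.getProcessInstances(ctx);

// Apply additional filtering when listing tasks for a user;
// here only the paging part of QueryFilter is used
QueryFilter filter = new QueryFilter(0, 10);
List<TaskSummary> tasks = runtimeDataService.getTasksAssignedAsPotentialOwner("john", filter);
```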
20.3.5. User Task Service
The User Task Service covers the complete life cycle of a task, so that a task can be managed from start to end. It also provides a way to manipulate task content and other task properties.
The User Task Service allows you to:
- Execute task operations (such as claim, start, and complete)
- Change various task properties (such as priority and expiration date)
- Manipulate task content, comments, and attachments
- Execute various task commands
The User Task Service focuses on executing task operations and manipulating task content rather than on task querying. Use the Runtime Data Service to get task details or to list tasks based on some parameter (see Section 20.3.4, “Runtime Data Service”).
Example of how to start a process and complete a user task:
long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "HiringProcess");

List<Long> taskIds = runtimeDataService.getTasksByProcessInstanceId(processInstanceId);
Long taskId = taskIds.get(0);

userTaskService.start(taskId, "john");
UserTaskInstanceDesc task = runtimeDataService.getTaskById(taskId);
// do something with the task data

Map<String, Object> results = new HashMap<String, Object>();
results.put("Result", "some document data");
userTaskService.complete(taskId, "john", results);
20.3.6. Query Service
The Query Service provides advanced search capabilities based on DashBuilder Data Sets. You have control over how data is retrieved from the underlying data store, including complex joins with external tables, such as JPA entity tables or custom system database tables.
The Query Service is built around two parts:
- Management operations
- Registering, unregistering, replacing, and getting query definitions
- Runtime operations
- Executing simple and advanced queries
DashBuilder Data Sets support multiple data sources (such as CSV, SQL, and Elasticsearch), while the process engine focuses on SQL-based data sets, as its back end is RDBMS-based. The Query Service is therefore a subset of the DashBuilder Data Sets capabilities that enables efficient queries with a simple API.
20.3.6.1. Terminology
The Query Service uses the following four classes describing queries and their results:
- QueryDefinition
- Represents the definition of a data set, which consists of a unique name, an SQL expression (the query), and the source: the JNDI name of the data source to use when performing queries.
- QueryParam
- A basic structure that represents an individual query parameter (a condition), consisting of a column name, an operator, and the expected value or values.
- QueryResultMapper
- Responsible for mapping raw data set data (rows and columns) into an object representation.
- QueryParamBuilder
- Responsible for building query filters that are applied to the query definition for a given query invocation.
While using the QueryDefinition and QueryParam classes is straightforward, the QueryResultMapper and QueryParamBuilder classes are more advanced and require more attention to make full use of their capabilities.
20.3.6.2. Query Result Mapper
The Query Result Mapper maps data taken from the database (the data set) into an object representation, much like ORM providers such as Hibernate map tables to entities. Because many object types can represent data set results, it is almost impossible to provide a mapper for each one out of the box. Mappers are therefore pluggable: you can implement your own mapper to transform the result into any type. Red Hat JBoss BPM Suite comes with the following mappers out of the box:
- org.jbpm.kie.services.impl.query.mapper.ProcessInstanceQueryMapper, registered with name ProcessInstances
- org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithVarsQueryMapper, registered with name ProcessInstancesWithVariables
- org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithCustomVarsQueryMapper, registered with name ProcessInstancesWithCustomVariables
- org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceQueryMapper, registered with name UserTasks
- org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithVarsQueryMapper, registered with name UserTasksWithVariables
- org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithCustomVarsQueryMapper, registered with name UserTasksWithCustomVariables
- org.jbpm.kie.services.impl.query.mapper.TaskSummaryQueryMapper, registered with name TaskSummaries
- org.jbpm.kie.services.impl.query.mapper.RawListQueryMapper, registered with name RawList
Each mapper is registered under a given name to allow simple lookup by name instead of referencing its class. This is especially important when using the EJB remote flavor of the services, where it is important to reduce the number of dependencies and avoid relying on implementation classes on the client side. To reference a QueryResultMapper by name, use the NamedQueryMapper class, which is part of the KIE Services API. It acts as a lazy delegate, looking up the actual mapper when the query is performed.
queryService.query("my query def",
        new NamedQueryMapper<Collection<ProcessInstanceDesc>>("ProcessInstances"),
        new QueryContext());
20.3.6.3. Query Parameter Builder
The QueryParamBuilder class provides an advanced way of building filters for data sets. By default, when you use a query method of the Query Service that accepts zero or more QueryParam instances, all of the parameters are joined with an AND operator, so all of them must match. That is not always the desired behavior, and in such cases you can use QueryParamBuilder to provide custom filters at the time the query is issued.
The QueryParamBuilder implementation available out of the box covers the default QueryParams. The default QueryParams are based on core functions, which are SQL-based conditions and include the following:
- IS_NULL
- NOT_NULL
- EQUALS_TO
- NOT_EQUALS_TO
- LIKE_TO
- GREATER_THAN
- GREATER_OR_EQUALS_TO
- LOWER_THAN
- LOWER_OR_EQUALS_TO
- BETWEEN
- IN
- NOT_IN
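The core functions above correspond to static factory methods on the QueryParam class. For example, conditions might be built as sketched below; the column names are assumptions for illustration, and the factory method names should be verified against the QueryParam class in your version:

```java
import java.util.Date;

// Equality condition on an (assumed) process ID column
QueryParam byProcessId = QueryParam.equalsTo("processid", "org.jboss.example.HiringProcess");

// Range condition on an (assumed) start date column
Date from = new Date(0);  // placeholder range start
Date to = new Date();     // placeholder range end
QueryParam byDateRange = QueryParam.between("start_date", from, to);

// Null check on an (assumed) end date column, that is, still-running instances
QueryParam stillRunning = QueryParam.isNull("end_date");
```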
QueryParamBuilder is a simple interface whose build method is invoked repeatedly before the query is performed, as long as it returns a non-null value. This allows you to build up complex filter options that cannot be simply expressed as a list of QueryParams. Here is a basic implementation of QueryParamBuilder to give you a jump start on implementing your own (note that it relies on the DashBuilder Data Set API):
public class TestQueryParamBuilder implements QueryParamBuilder<ColumnFilter> {

    private Map<String, Object> parameters;
    private boolean built = false;

    public TestQueryParamBuilder(Map<String, Object> parameters) {
        this.parameters = parameters;
    }

    @Override
    public ColumnFilter build() {
        // return null if the filter was already built
        if (built) {
            return null;
        }

        String columnName = "processInstanceId";
        ColumnFilter filter = FilterFactory.OR(
                FilterFactory.greaterOrEqualsTo((Long) parameters.get("min")),
                FilterFactory.lowerOrEqualsTo((Long) parameters.get("max")));
        filter.setColumnId(columnName);

        built = true;
        return filter;
    }
}
Once you have implemented a QueryParamBuilder, you can use an instance of it when performing a query through QueryService:
queryService.query("my query def", ProcessInstanceQueryMapper.get(), new QueryContext(), paramBuilder);
20.3.6.4. Typical Usage Scenario
First, define a data set (the view of the data you want to work with) using the QueryDefinition class in the KIE Services API:
SqlQueryDefinition query = new SqlQueryDefinition("getAllProcessInstances", "java:jboss/datasources/ExampleDS");
query.setExpression("select * from processinstancelog");
This is the simplest possible query definition. The constructor takes a unique name that identifies the definition at runtime and the JNDI name of the data source to use when performing queries on this definition. The expression is the SQL statement that builds up the view to be filtered when performing queries.
Once you create the SQL query definition, you can register it to be used later for actual queries:
queryService.registerQuery(query);
From now on, you can use this query definition to perform actual queries (or data look-ups, to use the data set terminology). The following basic query collects data as is, without any filtering:
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext());
The above query uses the defaults from QueryContext (paging and sorting). However, you can change these defaults:
QueryContext ctx = new QueryContext(0, 100, "start_date", true);

Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), ctx);
You can perform the data filtering in the following way:
// single filter parameter
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances",
        ProcessInstanceQueryMapper.get(),
        new QueryContext(),
        QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jboss%"));

// multiple filter parameters (AND)
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances",
        ProcessInstanceQueryMapper.get(),
        new QueryContext(),
        QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jboss%"),
        QueryParam.in(COLUMN_STATUS, 1, 3));
With this mechanism, you can define what data is retrieved and how it is fetched, without being limited by the JPA provider. This also promotes the use of queries tailored for a given environment: since in most cases a single database is used, specific features of that database can be utilized to increase performance.
20.3.7. Process Instance Migration Service
Process instance migration is available only with Red Hat JBoss BPM Suite 6.4 and higher.
The Process Instance Migration Service provides an administrative utility to move given process instances from one deployment to another, or from one process definition to another. Its main responsibility is to allow a basic upgrade of the process definition behind a given process instance. This may include mapping of currently active nodes to nodes in the new definition.
Process and task variables are not affected by migration. Process instance migration means a change of the underlying process definition that the process engine uses to continue with a process instance.
Even though process instance migration is available, it is recommended to let active process instances finish and then start new instances with the new version whenever possible. If you cannot use this approach, carefully plan the migration of active process instances before executing it, as it might lead to unexpected issues.
Make sure to take the following points into account:
- Is the new process definition backward compatible?
- Are there any data changes (variables that could affect process instance decisions after migration)?
- Is there a need for node mapping?
Answering these questions up front can prevent many production problems after migration. Opt for backward compatible processes, for example extending the process definition rather than removing nodes. However, that may not always be possible, and in some cases certain nodes must be removed from a process definition. In that situation, the migration needs to be instructed how to map nodes that were removed in the new definition, in case an active process instance is currently in such a node.
Node mapping is given as a map of node IDs (unique IDs that are set in the definition), where the key is the source node ID (from the process definition used by the process instance) and the value is the target node ID (in the new process definition).
Node mapping can only be used to map the same type of nodes, for example user task to user task.
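As a sketch, a migration with node mapping might look as follows. The node IDs and deployment identifiers are illustrative, and the existence of a migrate variant accepting a node mapping should be verified against the migration service API in your version:

```java
import java.util.HashMap;
import java.util.Map;

// Map the removed "_ApproveTask" node to the new "_ReviewTask" node
// (node IDs are hypothetical; use the unique IDs from your definitions)
Map<String, String> nodeMapping = new HashMap<String, String>();
nodeMapping.put("_ApproveTask", "_ReviewTask");

// Migrate a single instance, supplying the node mapping
// (deployment units and process instance ID as in the example below)
MigrationReport report = migrationService.migrate(
        deploymentUnitV1.getIdentifier(), processInstanceId,
        deploymentUnitV2.getIdentifier(), "processID-V2",
        nodeMapping);
```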
Migration can be performed either for a single process instance or for multiple process instances at the same time. Multiple process instance migration is a utility method on top of the single instance variant: instead of calling it multiple times, you call it once and the service takes care of migrating the individual process instances.
Multiple instance migration migrates each instance separately to ensure that one does not affect another, and then produces a dedicated migration report for each process instance.
20.3.7.1. Migration report
Migration always concludes with a migration report for each process instance. The migration report provides the following information:
- Start and end date of the migration
- Outcome of the migration (success or failure)
- Complete log entry: all steps performed during the migration; each entry can be INFO, WARN, or ERROR (there is at most one ERROR, as an error terminates the migration immediately)
20.3.7.2. Known limitations
There are some process instance migration scenarios which are not supported at the moment:
- A new or modified task requires inputs which are not available in the migrated process instance.
- Modifying tasks prior to the active task, where the changes have an impact on further processing.
- Removing a human task that is currently active. A human task can only be replaced, and it must be mapped to another human task.
- Adding a new task parallel to the single active task. As all branches in a parallel gateway are not activated, the process gets stuck.
- Changing or removing active recurring timer events. These are not changed in the database.
- Fixing or updating inputs and outputs in an active task. Task data is not migrated.
- Node mapping updates only the task node name and description. Other task fields, including the TaskName variable, are not mapped.
20.3.7.3. Example
The following example shows how to invoke the migration:
// first deploy both versions
deploymentUnitV1 = new KModuleDeploymentUnit(MIGRATION_GROUP_ID, MIGRATION_ARTIFACT_ID, MIGRATION_VERSION_V1);
deploymentService.deploy(deploymentUnitV1);

// ... version 2
deploymentUnitV2 = new KModuleDeploymentUnit(MIGRATION_GROUP_ID, MIGRATION_ARTIFACT_ID, MIGRATION_VERSION_V2);
deploymentService.deploy(deploymentUnitV2);

// next, start a process instance in version 1
long processInstanceId = processService.startProcess(deploymentUnitV1.getIdentifier(), "processID-V1");

// once the instance is active, it can be migrated
MigrationReport report = migrationService.migrate(deploymentUnitV1.getIdentifier(), processInstanceId, deploymentUnitV2.getIdentifier(), "processID-V2");

// as the last step, check whether the migration finished successfully
if (report.isSuccessful()) {
    // do something
}
20.3.8. Form Provider Service
The Form Provider Service provides access to the process and task forms. It is built on the concept of isolated form providers.
Implementations of the FormProvider interface must define a priority, as this is the main driver for the Form Provider Service when asking a given provider for form content. The Form Provider Service collects all available providers and iterates over them, asking for form content in order of priority. The lower the priority number, the higher the priority during evaluation: for example, a provider with priority 5 is evaluated before a provider with priority 10. The Form Provider Service iterates over the available providers until one of them delivers content. In the worst-case scenario, simple text-based forms are returned.
The FormProvider interface shown below describes the contract for implementations:
public interface FormProvider {

    int getPriority();

    String render(String name, ProcessDesc process, Map<String, Object> renderContext);

    String render(String name, ProcessDesc process, Task task, Map<String, Object> renderContext);
}
Red Hat JBoss BPM Suite comes with the following FormProvider
implementations out of the box:
- Additional form provider available with the form modeler. The priority number of this form provider is 2.
- Freemarker based implementation to support process and task forms. The priority number of this form provider is 3.
- Default form provider that provides simplest possible forms. It has the lowest priority and is the last option if none of the other providers delivers content.
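A minimal sketch of a custom provider might look as follows; the priority value and the trivial HTML rendering are assumptions for illustration, not a production-ready form provider:

```java
import java.util.Map;

// Hypothetical provider that renders a trivial HTML form for any process or task;
// priority 1 makes it evaluated before the built-in providers listed above.
public class SimpleHtmlFormProvider implements FormProvider {

    @Override
    public int getPriority() {
        return 1; // lower number means higher priority during evaluation
    }

    @Override
    public String render(String name, ProcessDesc process, Map<String, Object> renderContext) {
        // returning null lets the next provider in the chain try
        if (process == null) {
            return null;
        }
        return "<form><h1>" + name + "</h1></form>";
    }

    @Override
    public String render(String name, ProcessDesc process, Task task, Map<String, Object> renderContext) {
        if (task == null) {
            return null;
        }
        return "<form><h1>" + name + "</h1></form>";
    }
}
```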
20.3.9. Executor Service
The Executor Service gives you access to the Job Executor, which provides advanced features for asynchronous execution (see Section 11.12.3, “Job Executor for Asynchronous Execution” for more details).
Executor Service provides:
- Scheduling and cancelling requests (execution of commands)
- Executor configuration (interval, number of retries, thread pool size)
- Administration operations (clearing requests and errors)
- Queries to access runtime data by various parameters (requests and errors)
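For example, scheduling a command for asynchronous execution might look as sketched below. PrintOutCommand is a simple diagnostic command shipped with the executor module; the "businessKey" and "retries" context keys are assumptions that should be verified against your version:

```java
// Pass configuration data to the command through its context;
// the "retries" key is assumed to control the number of retry attempts
CommandContext ctx = new CommandContext();
ctx.setData("businessKey", "order-12345"); // hypothetical correlation key
ctx.setData("retries", 2);

// Schedule the command for asynchronous execution by the Job Executor
Long requestId = executorService.scheduleRequest(
        "org.jbpm.executor.commands.PrintOutCommand", ctx);

// The request can later be cancelled if it has not yet run:
// executorService.cancelRequest(requestId);
```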
20.4. CDI Integration
Apart from the API-based approach, Red Hat JBoss BPM Suite 6 also provides Contexts and Dependency Injection (CDI) support for building your custom applications.
The jbpm-services-cdi
module provides CDI wrappers for the services described in Section 20.3, “KIE Services”, which enable these services to be injected into any CDI bean.
A workaround is needed on the Oracle WebLogic Server for CDI to work. For more information, see Additional Notes in the Red Hat JBoss BPM Suite Oracle WebLogic Installation and Configuration Guide.
20.4.1. Configuring CDI Integration
To use the KIE Services in your CDI container, you must provide several CDI beans for these services to satisfy their dependencies. For example:
- Entity manager and entity manager factory.
- User group callback for human tasks.
- Identity provider to pass authenticated user information to the services.
Here is an example of a producer bean that satisfies all the requirements of KIE Services in a Java EE environment, such as Red Hat JBoss Enterprise Application Platform (EAP):
public class EnvironmentProducer {

    @PersistenceUnit(unitName = "org.jbpm.domain")
    private EntityManagerFactory emf;

    @Inject
    @Selectable
    private UserGroupInfoProducer userGroupInfoProducer;

    @Inject
    @Kjar
    private DeploymentService deploymentService;

    @Produces
    public EntityManagerFactory getEntityManagerFactory() {
        return this.emf;
    }

    @Produces
    public org.kie.api.task.UserGroupCallback produceSelectedUserGroupCallback() {
        return userGroupInfoProducer.produceCallback();
    }

    @Produces
    public UserInfo produceUserInfo() {
        return userGroupInfoProducer.produceUserInfo();
    }

    @Produces
    @Named("Logs")
    public TaskLifeCycleEventListener produceTaskAuditListener() {
        return new JPATaskLifeCycleEventListener(true);
    }

    @Produces
    public DeploymentService getDeploymentService() {
        return this.deploymentService;
    }

    @Produces
    public IdentityProvider produceIdentityProvider() {
        return new IdentityProvider() {
            // implement identity provider
        };
    }
}
Provide an alternative for the user group callback in the beans.xml configuration file. For example, the org.jbpm.kie.services.cdi.producer.JAASUserGroupInfoProducer class allows Red Hat JBoss EAP to reuse the security settings of the application server, regardless of the backing store used (such as LDAP or a database):
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://docs.jboss.org/cdi/beans_1_0.xsd">

  <alternatives>
    <class>org.jbpm.kie.services.cdi.producer.JAASUserGroupInfoProducer</class>
  </alternatives>

</beans>
Optionally, you can use several other provided producers to deliver components such as process, agenda, and WorkingMemory event listeners, and WorkItemHandlers. To provide these components, implement the following interfaces:
- org.kie.internal.runtime.manager.WorkItemHandlerProducer
- org.kie.internal.runtime.manager.EventListenerProducer
CDI beans that implement the above-mentioned interfaces are collected at runtime and used by the RuntimeManager when building a KieSession.
20.4.2. Deployment Service as CDI Bean
The Deployment Service fires CDI events when deployment units are deployed or undeployed. This allows application components to react to these events in real time and store or remove deployment details from memory. An event with the @Deploy qualifier is fired on deployment; an event with the @Undeploy qualifier is fired on undeployment. You can use the CDI observer mechanism to get notified of these events.
20.4.2.1. Saving and Removing Deployments from Database
The deployment service stores the deployed units in memory by default. To save deployments in the data store of your choice:
public void saveDeployment(@Observes @Deploy DeploymentEvent event) {
    DeployedUnit deployedUnit = event.getDeployedUnit();
    // store the deployed unit info for further needs
}
To remove a saved deployment when undeployed:
public void removeDeployment(@Observes @Undeploy DeploymentEvent event) {
    // remove the deployment with ID event.getDeploymentId()
}
The deployment service contains deployment synchronization mechanisms that enable you to persist deployed units into a database.
20.4.2.2. Available Deployment Services
You can use qualifiers to instruct the CDI container which deployment service to use. Red Hat JBoss BPM Suite contains the following Deployment Services:
- @Kjar: A KIE module deployment service configured to work with KModuleDeploymentUnit, a small descriptor on top of a KJAR.
- @Vfs: A VFS deployment service that enables you to deploy assets from a Virtual File System (VFS).
Note that every implementation of the deployment service must have a dedicated implementation of the deployment unit, as with the services mentioned above.
20.4.3. Runtime Manager as CDI Bean
You can inject RuntimeManager as a CDI bean into any other CDI bean within your application. RuntimeManager comes with the following predefined strategies, each of which has a CDI qualifier:
- @Singleton
- @PerRequest
- @PerProcessInstance
Though you can inject RuntimeManager directly as a CDI bean, it is recommended to use the KIE services when frameworks like CDI, EJB, or Spring are used. The KIE services provide a significant amount of features that encapsulate best practices for using RuntimeManager.
Here is an example of a producer method implementation that provides RuntimeEnvironment
:
public class EnvironmentProducer {

    // add the same producers as mentioned above in the configuration section

    @Produces
    @Singleton
    @PerRequest
    @PerProcessInstance
    public RuntimeEnvironment produceEnvironment(EntityManagerFactory emf) {

        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
            .newDefaultBuilder()
            .entityManagerFactory(emf)
            .userGroupCallback(getUserGroupCallback())
            .registerableItemsFactory(InjectableRegisterableItemsFactory.getFactory(beanManager, null))
            .addAsset(ResourceFactory.newClassPathResource("HiringProcess.bpmn2"), ResourceType.BPMN2)
            .addAsset(ResourceFactory.newClassPathResource("FiringProcess.bpmn2"), ResourceType.BPMN2)
            .get();
        return environment;
    }
}
In the example above, a single producer method is capable of providing RuntimeEnvironment for all strategies of RuntimeManager by specifying all the qualifiers on the method level. Once a complete producer is available, you can inject RuntimeManager into an application CDI bean as shown below:
public class ProcessEngine {

    @Inject
    @Singleton
    private RuntimeManager singletonManager;

    public void startProcess() {

        RuntimeEngine runtime = singletonManager.getRuntimeEngine(EmptyContext.get());
        KieSession ksession = runtime.getKieSession();

        ProcessInstance processInstance = ksession.startProcess("HiringProcess");

        singletonManager.disposeRuntimeEngine(runtime);
    }
}
It is recommended to use DeploymentService instead of a single RuntimeManager when you need multiple RuntimeManager instances active in your application.
As an alternative to DeploymentService, the application can inject RuntimeManagerFactory and create RuntimeManager instances manually. In such cases, EnvironmentProducer remains the same as for the DeploymentService approach. Here is an example of a simple ProcessEngine bean:
public class ProcessEngine {

    @Inject
    private RuntimeManagerFactory managerFactory;

    @Inject
    private EntityManagerFactory emf;

    @Inject
    private BeanManager beanManager;

    public void startProcess() {

        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
            .newDefaultBuilder()
            .entityManagerFactory(emf)
            .addAsset(ResourceFactory.newClassPathResource("HiringProcess.bpmn2"), ResourceType.BPMN2)
            .addAsset(ResourceFactory.newClassPathResource("FiringProcess.bpmn2"), ResourceType.BPMN2)
            .registerableItemsFactory(InjectableRegisterableItemsFactory.getFactory(beanManager, null))
            .get();

        RuntimeManager manager = managerFactory.newSingletonRuntimeManager(environment);
        RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get());
        KieSession ksession = runtime.getKieSession();

        ProcessInstance processInstance = ksession.startProcess("HiringProcess");

        manager.disposeRuntimeEngine(runtime);
        manager.close();
    }
}