Chapter 4. Business Process Model and Notation Version 2.0
The Business Process Model and Notation Version 2.0 (BPMN2) specification is an Object Management Group (OMG) specification that defines standards for graphically representing a business process, defines execution semantics for the elements, and provides process definitions in XML format.
A process is defined by its process definition, which exists in a knowledge base and is identified by its ID.
Label | Description |
---|---|
Name | Enter the name of the process. |
Documentation | Describes the process. The text in this field is included in the process documentation, if applicable. |
ID | Enter an identifier for this process. |
Package | Enter the package location for this process in your Red Hat Process Automation Manager project. |
ProcessType | Specify whether the process is public or private. (Currently not supported.) |
Version | Enter the artifact version for the process. |
Ad hoc | Select this option if this process is an ad hoc sub-process. |
Process Instance Description | Enter a description of the purpose of the process. |
Imports | Click to open the Imports window and add any data type classes required for your process. |
Executable | Select this option to make the process an executable part of your Red Hat Process Automation Manager project. |
SLA Due Date | Enter the service level agreement (SLA) expiration date. |
Process Variables | Add any process variables for the process. Process variables are visible within the specific process instance. Process variables are initialized at process creation and destroyed on process completion. Variable tags provide greater control over variable behavior. |
Metadata Attributes | Add any custom metadata attribute name and value that you want to use for custom event listeners, such as a listener to implement some action when a metadata attribute is present. |
Global Variables | Add any global variables for the process. Global variables are visible to all process instances and assets in a project. Global variables are typically used by business rules and constraints and are created dynamically by the rules or constraints. |
A process is a container for a set of modeling elements. It contains elements that specify the execution workflow of a business process or its parts using flow objects and flows. Each process has its own BPMN2 diagram. Red Hat Process Automation Manager contains the new process designer for creating BPMN2 diagrams and the legacy process designer for opening old BPMN2 diagrams with the `.bpmn2` extension. The new process designer has an improved layout and feature set and continues to be developed. By default, new diagrams are created in the new process designer.
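A process definition is serialized as BPMN2 XML. The following is a minimal sketch of an executable process; the IDs, names, and target namespace are illustrative, not taken from a real project:

```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             id="Definitions_1"
             targetNamespace="http://www.example.com/bpmn2">
  <!-- A process is a container for flow objects and the flows between them -->
  <process id="example.hello" name="Hello Process" isExecutable="true">
    <startEvent id="start"/>
    <sequenceFlow id="flow1" sourceRef="start" targetRef="sayHello"/>
    <scriptTask id="sayHello" name="Say Hello" scriptFormat="http://www.java.com/java">
      <script>System.out.println("Hello");</script>
    </scriptTask>
    <sequenceFlow id="flow2" sourceRef="sayHello" targetRef="end"/>
    <endEvent id="end"/>
  </process>
</definitions>
```

The designer generates equivalent XML (plus diagram interchange elements) when you model the same flow graphically.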
4.1. Red Hat Process Automation Manager support for BPMN2
With Red Hat Process Automation Manager, you can model your business processes using the BPMN 2.0 standard. You can then use Red Hat Process Automation Manager to run, manage, and monitor these business processes. The full BPMN 2.0 specification also includes details on how to represent items such as choreographies and collaboration. However, Red Hat Process Automation Manager uses only the parts of the specification that you can use to specify executable processes. This includes almost all elements and attributes as defined in the Common Executable subclass of the BPMN2 specification, extended with some additional elements and attributes.
The following table contains a list of icons used to indicate whether a BPMN2 element is supported in the legacy process designer, the legacy and new process designer, or not supported.
Key | Description |
---|---|
| Supported in the legacy and new process designer |
| Supported in the legacy process designer only |
| Not supported |
Elements that have no icon do not exist in the BPMN2 specification.
Element Name | Start | Intermediate |
---|---|---|
None | | |
Message | | |
Timer | | |
Error | | |
Escalation | | |
Cancel | | |
Compensation | | |
Conditional | | |
Link | | |
Signal | | |
Multiple | | |
Parallel Multiple | | |
Element Name | Throwing (End) | Throwing (Intermediate) | Non-interrupting (Start) | Non-interrupting (Intermediate) |
---|---|---|---|---|
None | | | | |
Message | | | | |
Timer | | | | |
Error | | | | |
Escalation | | | | |
Cancel | | | | |
Compensation | | | | |
Conditional | | | | |
Link | | | | |
Signal | | | | |
Terminate | | | | |
Multiple | | | | |
Parallel Multiple | | | | |
Element type | Element | Supported |
---|---|---|
Task | Business rule | |
 | Script | |
 | User task | |
 | Service task | |
Sub-processes, including multiple instance sub-processes | Embedded | |
 | Ad hoc | |
 | Reusable | |
 | Event | |
Gateways | Inclusive | |
 | Exclusive | |
 | Parallel | |
 | Event-based | |
 | Complex | |
Connecting objects | Sequence flows | |
 | Association flows | |
Swimlanes | Swimlanes | |
Artifacts | Group | |
 | Text annotation | |
 | Data object | |
For more information about the background and applications of BPMN2, see the OMG Business Process Model and Notation (BPMN) Version 2.0 specification.
4.2. BPMN2 events in process designer
An event is something that happens to a business process. BPMN2 supports three categories of events:
- Start
- End
- Intermediate
A start event catches an event trigger, an end event throws an event trigger, and an intermediate event can both catch and throw event triggers.
The following business process diagram shows examples of events:
In this example, the following events occur:
- The ATM Card Inserted signal start event is triggered when the signal is received.
- The timeout intermediate event is an interrupting event based on a timer trigger. This means that the Wait for PIN sub-process is canceled when the timer event is triggered.
- Depending on the inputs to the process, either the end event associated with the Validate User Pin task or the end event associated with the Inform User of Timeout task ends the process.
4.2.1. Start events
Use start events to indicate the start of a business process. A start event cannot have an incoming sequence flow and must have only one outgoing sequence flow. You can use none start events in top-level processes, embedded sub-processes, callable sub-processes, and event sub-processes.
All start events, with the exception of the none start event, are catch events. For example, a signal start event starts the process only when the referenced signal (event trigger) is received. You can configure start events in event sub-processes to be interrupting or non-interrupting. An interrupting start event for an event sub-process stops or interrupts the execution of the containing or parent process. A non-interrupting start event does not stop or interrupt the execution of the containing or parent process.
Start event type | Top-level | Sub-processes (Interrupt) | Sub-processes (Non-interrupt) |
---|---|---|---|
None | | | |
Conditional | | | |
Compensation | | | |
Error | | | |
Escalation | | | |
Message | | | |
Signal | | | |
Timer | | | |
None
The none start event is a start event without a trigger condition. A process or a sub-process can contain at most one none start event, which is triggered on process or sub-process start by default, and the outgoing flow is taken immediately.
When you use a none start event in a sub-process, the execution of the process flow is transferred from the parent process into the sub-process and the none start event is triggered. This means that the token (the current location within the process flow) is passed from the parent process into the sub-process activity and the none start event of the sub-process generates a token of its own.
Conditional
The conditional start event is a start event with a Boolean condition definition. The execution is triggered when the condition first evaluates to `false` and then to `true`. The process execution starts only if the condition evaluates to `true` after the start event has been instantiated.
A process can contain multiple conditional start events.
Compensation
A compensation start event is used to start a compensation event sub-process when using a sub-process as the target activity of a compensation intermediate event.
Error
A process or sub-process can contain multiple error start events, which are triggered when an error object with a particular `ErrorRef` property is received. The error object can be produced by an error end event and indicates an incorrect process ending. The process instance with the error start event starts execution after it has received the respective error object. The error start event is executed immediately upon receiving the error object and its outgoing flow is taken.
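As a sketch, an error start event in an event sub-process can be expressed in BPMN2 XML as follows; the error code and IDs are hypothetical, and the `<error>` declaration belongs at the definitions level:

```xml
<error id="OrderError" errorCode="ORDER-FAILED"/>

<subProcess id="errorHandler" triggeredByEvent="true">
  <!-- Catches the OrderError object produced by an error end event -->
  <startEvent id="errorStart" isInterrupting="true">
    <errorEventDefinition errorRef="OrderError"/>
  </startEvent>
  <!-- error-handling nodes and flows follow here -->
</subProcess>
```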
Escalation
The escalation start event is a start event that is triggered by an escalation with a particular escalation code. Processes can contain multiple escalation start events. The process instance with an escalation start event starts its execution when it receives the defined escalation object. The process is instantiated and the escalation start event is executed immediately and its outgoing flow is taken.
Message
A process or an event sub-process can contain multiple message start events, which are triggered by a particular message. The process instance with a message start event only starts its execution from this event after it has received the respective message. After the message is received, the process is instantiated and its message start event is executed immediately (its outgoing flow is taken).
Because a message can be consumed by an arbitrary number of processes and process elements, including no elements, one message can trigger multiple message start events and therefore instantiate multiple processes.
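In BPMN2 XML, a message start event references a `<message>` declared at the definitions level. A minimal sketch with hypothetical names:

```xml
<message id="NewOrderMsg" name="NewOrder"/>

<startEvent id="msgStart">
  <messageEventDefinition messageRef="NewOrderMsg"/>
</startEvent>
```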
Signal
The signal start event is triggered by a signal with a particular signal code. A process can contain multiple signal start events. The signal start event only starts its execution within the process instance after the instance has received the respective signal. Then, the signal start event is executed and its outgoing flow is taken.
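The ATM example shown earlier could declare its signal start event along these lines in BPMN2 XML (the IDs are illustrative):

```xml
<signal id="CardInserted" name="ATM Card Inserted"/>

<startEvent id="sigStart">
  <signalEventDefinition signalRef="CardInserted"/>
</startEvent>
```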
Timer
The timer start event is a start event with a timing mechanism. A process can contain multiple timer start events, which are triggered at the start of the process, after which the timing mechanism is applied.
When you use a timer start event in a sub-process, execution of the process flow is transferred from the parent process into the sub-process and the timer start event is triggered. The token is passed from the parent process into the sub-process activity, and the timer start event of the sub-process waits for the timer to trigger. After the time defined by the timing definition has been reached, the outgoing flow is taken.
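A timing definition is typically an ISO-8601 expression. The following sketch shows a timer start event that fires 10 seconds after activation, assuming the `xsi` namespace is declared on the definitions element:

```xml
<startEvent id="timerStart">
  <timerEventDefinition>
    <timeDuration xsi:type="tFormalExpression">PT10S</timeDuration>
  </timerEventDefinition>
</startEvent>
```

For periodic triggering, a `timeCycle` element (for example, a repeating ISO-8601 interval) can be used instead of `timeDuration`.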
4.2.2. Intermediate events
Intermediate events drive the flow of a business process. Intermediate events are used to either catch or throw an event during the execution of the business process. These events are placed between the start and end events and can also be used on the boundary of an activity, such as a sub-process or a human task, as a catch event. In the BPMN modeler, you can set a data output in the Data Output and Assignments field for a boundary event, which can be used later in the process to access the process instance details. Note that compensation events do not support setting a data output variable.
For example, you can set the following data output variables for a boundary event:
- `nodeInstance`: Carries the node instance details to use later in the process when the boundary event is triggered.
- `signal`: Carries the name of the signal.
- `event`: Carries the event details.
- `workItem`: Carries the work item details. This variable can be set for a work item or user task.
The boundary catch events can be configured as interrupting or non-interrupting. An interrupting boundary catch event cancels the bound activity whereas a non-interrupting event does not.
An intermediate event handles a particular situation that occurs during process execution. The situation is a trigger for an intermediate event. In a process, intermediate events with one outgoing flow can be placed on an activity boundary.
If the event occurs while the activity is being executed, the event is triggered and its outgoing flow is taken. One activity may have multiple boundary intermediate events. Depending on the behavior you require from the activity with the boundary intermediate event, you can use either of the following intermediate event types:
- Interrupting: The activity execution is interrupted and the execution of the intermediate event is triggered.
- Non-interrupting: The intermediate event is triggered and the activity execution continues.
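In BPMN2 XML, the interrupting behavior of a boundary event is controlled by the `cancelActivity` attribute. A sketch of a timeout similar to the Wait for PIN example, with hypothetical IDs:

```xml
<!-- cancelActivity="true": interrupting, so the attached activity is canceled
     when the timer fires; "false" would leave the activity running -->
<boundaryEvent id="timeout" attachedToRef="waitForPin" cancelActivity="true">
  <timerEventDefinition>
    <timeDuration xsi:type="tFormalExpression">PT30S</timeDuration>
  </timerEventDefinition>
</boundaryEvent>
<sequenceFlow id="toTimeoutHandler" sourceRef="timeout" targetRef="informUser"/>
```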
Intermediate event type | Catching | Boundary (Interrupt) | Boundary (Non-interrupt) | Throwing |
---|---|---|---|---|
Message | | | | |
Timer | | | | |
Conditional | | | | |
Signal | | | | |
Error | | | | |
Compensation | | | | |
Escalation | | | | |
Link | | | | |
Message
A message intermediate event is an intermediate event that enables you to manage a message object. Use one of the following events:
- A throwing message intermediate event produces a message object based on the defined properties.
- A catching message intermediate event listens for a message object with the defined properties.
Timer
A timer intermediate event enables you to delay workflow execution or to trigger the workflow execution periodically. It represents a timer that can trigger one or multiple times after a specified period of time. When the timer intermediate event is triggered, the timer condition, which is the defined time, is checked and the outgoing flow is taken. When the timer intermediate event is placed in the process workflow, it has one incoming flow and one outgoing flow. Its execution starts when the incoming flow transfers to the event. When a timer intermediate event is placed on an activity boundary, the execution is triggered at the same time as the activity execution.
The timer is canceled if the timer element is canceled, for example by completing or aborting the enclosing process instance.
Conditional
A conditional intermediate event is an intermediate event with a Boolean condition as its trigger. The event triggers further workflow execution when the condition evaluates to `true` and its outgoing flow is taken.
The event must define the `Expression` property. When a conditional intermediate event is placed in the process workflow, it has one incoming flow and one outgoing flow, and its execution starts when the incoming flow transfers to the event. When a conditional intermediate event is placed on an activity boundary, the execution is triggered at the same time as the activity execution. Note that if the event is non-interrupting, the event triggers continuously while the condition is `true`.
Signal
A signal intermediate event enables you to produce or consume a signal object. Use either of the following options:
- A throwing signal intermediate event produces a signal object based on the defined properties.
- A catching signal intermediate event listens for a signal object with the defined properties.
Error
An error intermediate event is an intermediate event that can be used only on an activity boundary. It enables the process to react to an error end event in the respective activity. The activity must not be atomic. When the activity finishes with an error end event that produces an error object with the respective `ErrorCode` property, the error intermediate event catches the error object and execution continues to its outgoing flow.
Compensation
A compensation intermediate event is a boundary event attached to an activity in a transaction sub-process. It can finish with a compensation end event or a cancel end event. The compensation intermediate event must be associated with a flow, which is connected to the compensation activity.
The activity associated with the boundary compensation intermediate event is executed if the transaction sub-process finishes with the compensation end event. The execution continues with the respective flow.
Escalation
An escalation intermediate event is an intermediate event that enables you to produce or consume an escalation object. Depending on the action the event element should perform, you need to use either of the following options:
- A throwing escalation intermediate event produces an escalation object based on the defined properties.
- A catching escalation intermediate event listens for an escalation object with the defined properties.
Link
A link intermediate event is an intermediate event that makes the process diagram easier to understand without adding additional logic to the process. A link intermediate event is limited to a single process level; for example, it cannot connect a parent process with a sub-process.
Use either of the following options:
- A throwing link intermediate event produces a link object based on the defined properties.
- A catching link intermediate event listens for a link object with the defined properties.
4.2.3. End events
End events are used to end a business process and may not have any outgoing sequence flows. There may be multiple end events in a business process. All end events, with the exception of the none and terminate end events, are throw events.
End events indicate the completion of a business process. An end event is a node that ends a particular workflow. It has one or more incoming sequence flows and no outgoing flow.
A process must contain at least one end event.
During run time, an end event finishes the process workflow. The end event can finish only the workflow that reached it, or all workflows in the process instance, depending on the end event type.
End event | Icon |
---|---|
None | |
Message | |
Signal | |
Error | |
Compensation | |
Escalation | |
Terminate | |
None
The none end event specifies that no other special behavior is associated with the end of the process.
Message
When a flow enters a message end event, the flow finishes and the end event produces a message as defined in its properties.
Signal
A throwing signal end event is used to finish a process or sub-process flow. When the execution flow enters the element, the execution flow finishes and produces a signal identified by its `SignalRef` property.
Error
The throwing error end event finishes the incoming workflow (that is, consumes the incoming token) and produces an error object. Any other running workflows in the process or sub-process remain unaffected.
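A sketch of this pairing in BPMN2 XML: the error end event below produces an error object with the hypothetical code `ORDER-FAILED`, which an error start or error boundary event referencing the same `<error>` can then catch:

```xml
<error id="OrderError" errorCode="ORDER-FAILED"/>

<endEvent id="errEnd">
  <errorEventDefinition errorRef="OrderError"/>
</endEvent>
```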
Compensation
A compensation end event is used to finish a transaction sub-process and trigger the compensation defined by the compensation intermediate event attached to the boundary of the sub-process activities.
Escalation
The escalation end event finishes the incoming workflow (that is, consumes the incoming token) and produces an escalation signal as defined in its properties, triggering the escalation process.
Terminate
The terminate end event finishes all execution flows in the specified process instance. Activities being executed are canceled. The sub-process instance terminates if it reaches a terminate end event.
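The difference between the none and terminate end events is visible in the XML only through the event definition. A sketch with illustrative IDs:

```xml
<!-- None end event: finishes only the workflow that reaches it -->
<endEvent id="done"/>

<!-- Terminate end event: finishes all execution flows in the process instance -->
<endEvent id="terminated">
  <terminateEventDefinition/>
</endEvent>
```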
4.3. BPMN2 tasks in process designer
A task is an automatic activity that is defined in the process model and the smallest unit of work in a process flow. The following task types defined in the BPMN2 specification are available in the Red Hat Process Automation Manager process designer palette:
- Business rule task
- Script task
- User task
- Service task
- None task
Task type | Task node |
---|---|
Business rule task | |
Script task | |
User task | |
Service task | |
None task | |
In addition, the BPMN2 specification provides the ability to create custom tasks. For more information about custom tasks, see Section 4.4, “BPMN2 custom tasks in process designer”.
Business rule task
A business rule task defines a way to make a decision either through a DMN model or a rule flow group.
When a process reaches a business rule task defined by a DMN model, the process engine executes the DMN model decision with the inputs provided.
When a process reaches a business rule task defined by a rule flow group, the process engine begins executing the rules in the defined rule flow group. When there are no more active rules in the rule flow group, the execution continues to the next element. During the rule flow group execution, new activations belonging to the active rule flow group can be added to the agenda because they are created by changes made by other rules.
Script task
A script task represents a script to be executed during the process execution.
The associated script can access process variables and global variables. Review the following list before using a script task:
- Avoid low-level implementation details in the process. A script task can be used to manipulate variables, but consider using a service task or a custom task when modelling more complex operations.
- Ensure that the script is executed immediately, otherwise use an asynchronous service task.
- Avoid contacting external services through a script task. Use a service task to model communication with an external service.
- Ensure scripts do not throw exceptions. Runtime exceptions should be caught and managed, for example, inside the script or transformed into signals or errors that can then be handled inside the process.
When a script task is reached during execution, the script is executed and the outgoing flow is taken.
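In Red Hat Process Automation Manager, a Java script can read and write process variables through the predefined `kcontext` (process context) variable. A sketch, assuming a process variable named `total` is defined on the process:

```xml
<scriptTask id="updateTotal" name="Update Total" scriptFormat="http://www.java.com/java">
  <script>
    // kcontext gives scripts access to the process instance and its variables
    Integer total = (Integer) kcontext.getVariable("total");
    kcontext.setVariable("total", total == null ? 1 : total + 1);
  </script>
</scriptTask>
```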
User task
User tasks are tasks in the process workflow that cannot be performed automatically by the system and therefore require the intervention of a human user, the actor.
On execution, the User task element is instantiated as a task that appears in the list of tasks of one or more actors. If a User task element defines the `Groups` attribute, it is displayed in the task lists of all users that are members of the group. Any user who is a member of the group can claim the task.
After it is claimed, the task disappears from the task list of the other users.
User tasks are implemented as domain-specific tasks and serve as a base for custom tasks.
Service task
Service tasks are tasks that do not require human interaction. They are completed automatically by an external software service.
None task
None tasks are completed on activation. This is a conceptual model only. A none task is never actually executed by an IT system.
4.4. BPMN2 custom tasks in process designer
The BPMN2 specification supports the ability to extend the `bpmn2:task` element to create custom tasks in a software implementation. Similar to standard BPMN tasks, custom tasks identify actions to be completed in a business process model, but they also include specialized functionality, such as compatibility with an external service of a specific type (REST, email, or web service) or checkpoint behavior within a process (milestone).
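In the BPMN2 XML that the designer produces, a custom task is typically serialized as a standard task carrying a vendor extension attribute that names the work item handler to invoke. A hedged sketch, assuming the jBPM `drools` namespace (`xmlns:drools="http://www.jboss.org/drools"`) is declared on the definitions element and using an illustrative ID:

```xml
<task id="restCall" name="Fetch order status" drools:taskName="Rest">
  <!-- data inputs such as Url and Method are mapped through an ioSpecification -->
</task>
```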
Red Hat Process Automation Manager provides the following predefined custom tasks under Custom Tasks in the BPMN modeler palette:
Custom task type | Custom task node |
---|---|
Rest | |
Email | |
Log | |
WebService | |
Milestone | |
DecisionTask | |
BusinessRuleTask | |
KafkaPublishMessages | |
For more information about enabling or disabling custom tasks in Business Central, see Chapter 58, Managing custom tasks in Business Central.
In the BPMN modeler, you can configure the following general properties for a selected custom task:
Label | Description |
---|---|
Name | Identifies the name of the task. You can also double-click the task node to edit the name. |
Documentation | Describes the task. The text in this field is included in the process documentation, if applicable. |
Is Async | Determines whether this task is invoked asynchronously. |
AdHoc Autostart | Determines whether this is an ad hoc task that is started automatically. This option enables the task to automatically start when the process is created instead of being started by a signal event. |
On Entry Action | Defines a Java, JavaScript, or MVEL script that directs an action at the start of the task. |
On Exit Action | Defines a Java, JavaScript, or MVEL script that directs an action at the end of the task. |
SLA Due Date | Specifies the duration (string type) when the service level agreement (SLA) expires. You can specify the duration in days, minutes, seconds, and milliseconds. |
Assignments | Defines data input and output for the task. |
Rest
A rest custom task is used to invoke a remote RESTful service or perform an HTTP request from a process.
To use the rest custom task, you can set the URL, HTTP method, and credentials in the process modeler. When a process reaches a rest custom task, it generates an HTTP request and returns the response as a string.
You can click Assignments in the Properties panel to open the REST Data I/O window. In the REST Data I/O window, you can configure the data input and output as required. For example, to execute a rest custom task, enter the following data inputs in Data Inputs and Assignments fields:
- Url: Endpoint URL for the REST service. This attribute is mandatory.
- Method: Method of the endpoint called, such as `GET` and `POST`. The default value is `GET`.
- ContentType: Data type when sending data. This attribute is mandatory for `POST` and `PUT` requests.
- ContentTypeCharset: Character set for the `ContentType`.
- Content: Data you want to send. This attribute supports backward compatibility; use the ContentData attribute instead.
- ContentData: Data you want to send. This attribute is mandatory for `POST` and `PUT` requests.
- ConnectTimeout: Connection timeout. The default value is 60000 milliseconds. You must provide the input value in milliseconds.
- ReadTimeout: Timeout on the response. The default value is 60000 milliseconds. You must provide the input value in milliseconds.
- Username: User name for authentication.
- Password: Password for authentication.
- AuthUrl: URL that is handling authentication.
- AuthType: Type of URL that is handling authentication.
- HandleResponseErrors (Optional): Instructs the handler to throw errors for unsuccessful response codes (other than 2XX).
- ResultClass: Valid name of the class to which the response is unmarshalled. If not provided, the raw response is returned in string format.
- AcceptHeader: Value of the accept header.
- AcceptCharset: Character set of the accept header.
- Headers: Headers to pass for the REST call, such as `content-type=text/html`.
You can add the following data output in Data Outputs and Assignments to store the output of the task execution:
- Result: Output variable (object type) of the rest custom task.
Email
An email custom task is used to send an email from a process. It contains an email body associated with it.
When an email custom task is activated, the email data is assigned to the data input property of the task. An email custom task completes when the associated email is sent.
You can click Assignments in the Properties panel to open the Email Data I/O window. In the Email Data I/O window, you can configure the data input as required. For example, to execute an email custom task, enter the following data inputs in Data Inputs and Assignments fields:
- Body: Body of the email.
- From: Email address of the sender.
- Subject: Subject of the email.
- To: Email address of the recipient. You can specify multiple email addresses separated by semicolon (;).
- Template (Optional): Template to generate the body of the email. The `Template` attribute overrides the `Body` parameter, if entered.
- Reply-To: Email address to which the reply message is sent.
- Cc: Email address of the copied recipient. You can specify multiple email addresses separated by semicolon (;).
- Bcc: Email address of the blind copied recipient. You can specify multiple email addresses separated by semicolon (;).
- Attachments: Email attachment to send along with the email.
- Debug: Flag to enable the debug logging.
Log
A log custom task is used to log a message from a process. When a business process reaches a log custom task, the message data is assigned to the data input property.
A log custom task completes when the associated message is logged. You can click Assignments in the Properties panel to open the Log Data I/O window. In the Log Data I/O window, you can configure the data input as required. For example, to execute a log custom task, enter the following data inputs in Data Inputs and Assignments fields:
- Message: Log message from the process.
WebService
A web service custom task is used to invoke a web service from a process. This custom task serves as a web service client with the web service response stored as a string.
To invoke a web service from a process, you must use the correct task type. You can click Assignments in the Properties panel to open the WS Data I/O window. In the WS Data I/O window, you can configure the data input and output as required. For example, to execute a web service task, enter the following data inputs in Data Inputs and Assignments fields:
- Endpoint: Endpoint location of the web service to invoke.
- Interface: Name of a service, such as `Weather`.
- Mode: Mode of a service, such as `SYNC`, `ASYNC`, or `ONEWAY`.
- Namespace: Namespace of the web service, such as `http://ws.cdyne.com/WeatherWS/`.
- Operation: Method name to call.
- Parameter: Object or array to be sent for the operation.
- Url: URL of the web service, such as `http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL`.
You can add the following data output in Data Outputs and Assignments to store the output of the task execution:
- Result: Output variable (object type) of the web service task.
Milestone
A milestone represents a single point of achievement within a process instance. You can use milestones to flag certain events to trigger other tasks or track the progress of the process.
Milestones are useful for Key Performance Indicator (KPI) tracking or for identifying the tasks that are still to be completed. Milestones can occur at the end of a stage in a process or they can be the result of achieving other milestones.
Milestones can reach the following states during process execution:
- `Active`: A milestone condition has been defined for the milestone node but it has not been met.
- `Completed`: A milestone condition has been met (if applicable), the milestone has been achieved, and the process can proceed to the next task or can end.
You can click Assignments in the Properties panel to open the Milestone Data I/O window. In the Milestone Data I/O window, you can configure the data input as required. For example, to execute a milestone, enter the following data inputs in Data Inputs and Assignments fields:
- Condition: Condition for the milestone to meet. For example, you can enter a Java expression (string data type) that uses a process variable.
DecisionTask
A decision task is used to execute a DMN diagram and invoke a decision engine service from a process. By default, a decision task maps to the DMN decision.
You can use decision tasks to make an operational decision in a process. Decision tasks are useful for identifying key decisions in a process that need to be made.
You can click Assignments in the Properties panel to open the Decision Task Data I/O window. In the Decision Task Data I/O window, you can configure the data input as required. For example, to execute a decision task, enter the following data inputs in Data Inputs and Assignments fields:
- Decision: Decision for a process to make.
- Language: Language of the decision task, defaults to DMN.
- Model: Name of the DMN model.
- Namespace: Namespace of the DMN model.
BusinessRuleTask
A business rule task is used to evaluate a DRL rule and invoke a decision engine service from a process. By default, a business rule task maps to the DRL rules.
You can use business rule tasks to evaluate key business rules in a business process. You can click Assignments in the Properties panel to open the Business Rule Task Data I/O window. In the Business Rule Task Data I/O window, you can configure the data input as required. For example, to execute a business rule task, enter the following data inputs in Data Inputs and Assignments fields:
- KieSessionName: Name of the KIE session.
- KieSessionType: Type of the KIE session.
- Language: Language of the business rule task, defaults to DRL.
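A common way to wire a business rule task to DRL rules is through a rule flow group in the underlying BPMN2 XML. The sketch below uses the drools extension attribute; the ID, task name, and group name are hypothetical:

```xml
<!-- Sketch only: the ID, task name, and rule flow group name are hypothetical. -->
<bpmn2:businessRuleTask id="_validateOrder" name="Validate Order"
    drools:ruleFlowGroup="order-validation"/>
```

Rules in the project that declare the matching `ruleflow-group` are then evaluated when the task executes.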
KafkaPublishMessages
A Kafka work item is used to send events to a Kafka topic. This custom task includes a work item handler, which uses the Kafka producer to send messages to a specific Kafka server topic. For example, a KafkaPublishMessages task publishes messages from a process to a Kafka topic.
You can click Assignments in the Properties panel to open the KafkaPublishMessages Data I/O window. In the KafkaPublishMessages Data I/O window, you can configure the data input and output as required. For example, to execute a Kafka work item, enter the following data inputs in Data Inputs and Assignments fields:
- Key: Key of the Kafka message to be sent.
- Topic: Name of a Kafka topic.
- Value: Value of the Kafka message to be sent.
You can add the following data output in Data Outputs and Assignments to store the output of the work item execution:
- Result: Output variable (string type) of the work item.
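As an illustration, the data inputs and output above might appear in the underlying BPMN2 XML roughly as follows; all IDs are hypothetical:

```xml
<!-- Sketch only: all IDs are hypothetical. -->
<bpmn2:task id="_publishEvent" name="KafkaPublishMessages">
  <bpmn2:ioSpecification>
    <bpmn2:dataInput id="_keyInput" name="Key"/>
    <bpmn2:dataInput id="_topicInput" name="Topic"/>
    <bpmn2:dataInput id="_valueInput" name="Value"/>
    <bpmn2:dataOutput id="_resultOutput" name="Result"/>
  </bpmn2:ioSpecification>
</bpmn2:task>
```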
For more information about KafkaPublishMessages in a business process, see Integrating Red Hat Process Automation Manager with Red Hat AMQ Streams.
4.5. BPMN2 sub-processes in process designer
A sub-process is an activity that contains nodes. You can embed part of the main process within a sub-process. You can also include variable definitions within the sub-process. These variables are accessible to all nodes inside the sub-process.
A sub-process must have at least one incoming connection and one outgoing connection. A terminate end event inside a sub-process ends the sub-process instance but does not automatically end the parent process instance. A sub-process ends when there are no more active elements in it.
The following sub-process types are supported in Red Hat Process Automation Manager:
- Embedded sub-process: A sub-process that is a part of the parent process execution and shares the parent process data, along with declaring its own local sub-process variables.
- Ad hoc sub-process: A sub-process that has no strict element execution order.
- Reusable sub-process: A sub-process that is independent of its parent process.
- Event sub-process: A sub-process that is only triggered on a start event or a timer.
- Multi-instance sub-process: A sub-process that is instantiated multiple times.
In the following example, the Place order sub-process checks whether sufficient stock is available to place the order and updates the stock information if the order can be placed. The customer is then notified through the main process based on whether or not the order was placed.
Embedded sub-process
An embedded sub-process encapsulates a part of the process. It must contain a start event and at least one end event. Note that an embedded sub-process enables you to define local sub-process variables that are accessible to all elements inside this container.
AdHoc sub-process
An ad hoc sub-process or process contains a number of embedded inner activities and is intended to be executed with a more flexible ordering compared to the typical process flow. Unlike regular processes, an ad hoc sub-process does not contain a complete, structured BPMN2 diagram description, for example, from start event to end event. Instead, the ad hoc sub-process contains only activities, sequence flows, gateways, and intermediate events. An ad hoc sub-process can also contain data objects and data associations. The activities within the ad hoc sub-processes are not required to have incoming and outgoing sequence flows. However, you can specify sequence flows between some of the contained activities. When used, sequence flows provide the same ordering constraints as in a regular process. To have any meaning, intermediate events must have outgoing sequence flows and they can be triggered multiple times while the ad hoc sub-process is active.
Reusable sub-process
Reusable sub-processes appear collapsed within the parent process. To configure a reusable sub-process, select the reusable sub-process, click the Properties icon, and expand Implementation/Execution. Set the following properties:
- Called Element: The ID of the sub-process that the activity calls and instantiates.
- Independent: If selected, the sub-process is started as an independent process. If not selected, the active sub-process is canceled when the parent process is terminated.
- Abort Parent: If selected, a non-independent reusable sub-process can abort the parent process when an error occurs during the execution of the called process instance, for example, when the sub-process cannot be invoked or when the sub-process instance is aborted. This property is visible only when the Independent property is not selected. The following rules apply:
  - If the reusable sub-process is independent, Abort parent is not available.
  - If the reusable sub-process is not independent, Abort parent is available.
- Wait for completion: If selected, the specified On Exit Action is not performed until the called sub-process instance is terminated. The parent process execution continues when the On Exit Action completes. This property is selected (set to true) by default.
- Is Async: Select if the task should be invoked asynchronously and cannot be executed instantly.
- Multiple Instance: Select to execute the sub-process elements a specified number of times. If selected, the following options are available:
- MI Execution mode: Indicates if the multiple instances execute in parallel or sequentially. If set to Sequential, new instances are not created until the previous instance completes.
- MI Collection input: Select a variable that represents a collection of elements for which new instances are created. The sub-process is instantiated as many times as the size of the collection.
- MI Data Input: Specifies the name of the variable containing the selected element in the collection. The variable is used to access elements in the collection.
- MI Collection output: Optional variable that represents the collection of elements that will gather the output of the multi-instance node.
- MI Data Output: Specifies the name of the variable that is added to the output collection that you selected in the MI Collection output property.
- MI Completion Condition (mvel): MVEL expression that is evaluated on each completed instance to check whether the specified multiple instance node can complete. If it evaluates to true, all remaining instances are canceled.
- On Entry Action: A Java or MVEL script that specifies an action at the start of the task.
- On Exit Action: A Java or MVEL script that specifies an action at the end of the task.
- SLA Due Date: The date that the service level agreement (SLA) expires. You can specify the duration in days, minutes, seconds, and milliseconds. For example, a value of 1m in the SLA Due Date field indicates one minute.
Figure 4.1. Reusable sub-process properties
You can open the sub-process in a new editor in Business Central by clicking the Place order task in the main process and then clicking the Open Sub-process task icon.
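In the BPMN2 XML, a reusable sub-process is represented as a call activity. The following hedged sketch shows how the Called Element, Independent, and Wait for completion properties might be serialized; the IDs, called element, and attribute values are hypothetical, and the drools-prefixed attributes follow the jBPM extension convention:

```xml
<!-- Sketch only: IDs, called element, and attribute values are hypothetical. -->
<bpmn2:callActivity id="_placeOrder" name="Place order"
    calledElement="org.example.PlaceOrder"
    drools:independent="false"
    drools:waitForCompletion="true">
  <bpmn2:incoming>_flowToPlaceOrder</bpmn2:incoming>
  <bpmn2:outgoing>_flowFromPlaceOrder</bpmn2:outgoing>
</bpmn2:callActivity>
```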
Event sub-process
An event sub-process becomes active when its start event is triggered. It can interrupt the parent process context or run in parallel with it.
With no outgoing or incoming connections, only an event or a timer can trigger the sub-process. The sub-process is not part of the regular control flow. Although self-contained, it is executed in the context of the bounding process.
Use an event sub-process within a process flow to handle events that happen outside of the main process flow. For example, while booking a flight, two events may occur:
- Cancel booking (interrupting)
- Check booking status (non-interrupting)
You can model both of these events using the event sub-process.
Multiple instance sub-process
A multiple instances sub-process is instantiated multiple times when its execution is triggered. The instances are created sequentially or in parallel. In sequential mode, a new sub-process instance is created only after the previous instance has finished. In parallel mode, all the sub-process instances are created at once.
A multiple instances sub-process has one incoming connection and one outgoing connection.
4.6. BPMN2 gateways in process designer
Gateways are used to create or synchronize branches in the workflow using a set of conditions called the gating mechanism. BPMN2 supports two types of gateways:
- Converging gateways, merging multiple flows into one flow
- Diverging gateways, splitting one flow into multiple flows
A single gateway cannot have both multiple incoming and multiple outgoing flows.
In the following business process diagram, the XOR gateway evaluates only the outgoing flow whose condition evaluates to true:
In this example, the customer details are verified by a user and the process is assigned to a user for approval. If approved, an approval notification is sent to the user. If the request is rejected, a rejection notification is sent to the user.
Element type | Icon |
---|---|
Exclusive (XOR) | |
Inclusive | |
Parallel | |
Event | |
Exclusive
In an exclusive diverging gateway, only the first outgoing flow whose condition evaluates to true is chosen. In a converging gateway, the next node is triggered for each triggered incoming flow.
The gateway triggers exactly one outgoing flow: the flow whose constraint evaluates to true and that has the lowest priority number is taken.
Ensure that at least one of the outgoing flows evaluates to true at run time. Otherwise, the process instance terminates with a runtime exception.
The converging gateway enables a workflow branch to continue to its outgoing flow as soon as it reaches the gateway. When one of the incoming flows triggers the gateway, the workflow continues to the outgoing flow of the gateway. If it is triggered from more than one incoming flow, it triggers the next node for each trigger.
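For illustration, an exclusive diverging gateway with two prioritized conditional flows might be serialized as follows in the BPMN2 XML; the IDs, target nodes, and the `approved` process variable are hypothetical, and priority uses the drools extension attribute:

```xml
<!-- Sketch only: IDs, targets, and the "approved" variable are hypothetical. -->
<bpmn2:exclusiveGateway id="_approvalDecision" name="Approved?"/>
<bpmn2:sequenceFlow id="_toApproved" sourceRef="_approvalDecision"
    targetRef="_notifyApproved" drools:priority="1">
  <bpmn2:conditionExpression xsi:type="bpmn2:tFormalExpression"
      language="http://www.java.com/java">return approved;</bpmn2:conditionExpression>
</bpmn2:sequenceFlow>
<bpmn2:sequenceFlow id="_toRejected" sourceRef="_approvalDecision"
    targetRef="_notifyRejected" drools:priority="2">
  <bpmn2:conditionExpression xsi:type="bpmn2:tFormalExpression"
      language="http://www.java.com/java">return !approved;</bpmn2:conditionExpression>
</bpmn2:sequenceFlow>
```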
Inclusive
With an inclusive diverging gateway, the incoming flow is taken and all outgoing flows that evaluate to true are taken. Connections with lower priority numbers are triggered before connections with higher priority numbers. Priorities are evaluated, but the BPMN2 specification does not guarantee the priority order. Avoid depending on the priority attribute in your workflow.
Ensure that at least one of the outgoing flows evaluates to true at run time. Otherwise, the process instance terminates with a runtime exception.
A converging inclusive gateway merges all incoming flows previously created by an inclusive diverging gateway. It acts as a synchronizing entry point for the inclusive gateway branches.
Parallel
Use a parallel gateway to synchronize and create parallel flows. With a parallel diverging gateway, the incoming flow is taken and all outgoing flows are taken simultaneously. With a converging parallel gateway, the gateway waits until all incoming flows have entered and only then triggers the outgoing flow.
Event
An event-based gateway is only diverging and enables you to react to possible events as opposed to the data-based exclusive gateway, which reacts to the process data. The outgoing flow is taken based on the event that occurs. Only one outgoing flow is taken at a time. The gateway might act as a start event, where the process is instantiated only if one of the intermediate events connected to the event-based gateway occurs.
4.7. BPMN2 connecting objects in process designer
Connecting objects create an association between two BPMN2 elements. When a connecting object is directed, the association is sequential and indicates that one of the elements is executed immediately before the other, within an instance of the process. Connecting objects can start and end at the top, bottom, right, or left of the process elements being associated. The OMG BPMN2 specification allows you to use your discretion, placing connecting objects in a way that makes the process behavior easy to understand and follow.
BPMN2 supports two main types of connecting objects:
- Sequence flows: Connect elements of a process and define the order in which those elements are executed within an instance.
- Association flows: Connect the elements of a process without execution semantics. Association flows can be undirected or unidirectional.
The new process designer supports only undirected association flows. The legacy designer supports both undirected and unidirectional association flows.
4.8. BPMN2 swimlanes in process designer
Swimlanes are process elements that visually group tasks related to one group or user. You can use user tasks in combination with swimlanes to assign multiple user tasks to the same actor through the Autoclaim property of the swimlanes. When a potential owner of a group claims the first task in a swimlane, the other tasks are directly assigned to the same owner, so the remaining owners of the group do not need to claim them. The Autoclaim property enables the auto-assignment of the tasks that are related to a swimlane.
If the remaining user tasks in a swimlane contain multiple predefined ActorIds, the user tasks are not assigned automatically.
In the following example, an analyst lane consists of two user tasks:
The Group field in the Update Customer Details and Resolve Customer Issue tasks contains the value analyst. When the process is started and the Update Customer Details task is claimed, started, or completed by an analyst, the Resolve Customer Issue task is automatically claimed and assigned to the user who completed the first task. However, if only the Update Customer Details task has the analyst group assigned and the second task contains no user or group assignments, the process stops after the first task completes.
You can disable the Autoclaim property of the swimlanes. If the Autoclaim property is disabled, the tasks related to a swimlane are not assigned automatically. By default, the value of the Autoclaim property is set to true. If needed, you can also change the default value for the Autoclaim property from the project settings in Business Central or using the deployment descriptor file.
To change the default value of the Autoclaim property of swimlanes in Business Central:
- Go to project Settings.
- Open Deployment → Environment entries. Enter the following values in the given fields:
  - Name - Autoclaim
  - Value - "false"

If you want to set the environment entry in the XML deployment descriptor, add the following code to the kie-deployment-descriptor.xml file:
<environment-entries>
  ..
  <environment-entry>
    <resolver>mvel</resolver>
    <identifier>new String ("false")</identifier>
    <parameters/>
    <name>Autoclaim</name>
  </environment-entry>
  ..
</environment-entries>
4.9. BPMN2 artifacts in process designer
Artifacts are used to provide additional information about a process. An artifact is any object depicted in the BPMN2 diagram that is not part of the process workflow. Artifacts have no incoming or outgoing flow objects. The purpose of artifacts is to provide the additional information required to understand the diagram. The artifacts table lists the artifacts supported in the legacy process designer.
Artifact type | Description |
---|---|
Group | Organizes tasks or processes that have significance in the overall process. Group artifacts are not supported in the new process designer. |
Text annotation | Provides additional textual information for the BPMN2 diagram. |
Data object | Displays the data flowing through a process in the BPMN2 diagram. |
4.9.1. Creating a data object
Data objects represent, for example, documents used in a process in physical and digital form. Data objects appear as a page with a folded top right corner. The following procedure is a generic overview of creating a data object.
In Red Hat Process Automation Manager 7.11.0, limited support for data objects is provided that excludes support for data inputs, data outputs, and associations.
Procedure
- Create a business process.
- In the process designer, select Artifacts → Data Object from the tool palette.
- Either drag and drop a data object onto the process designer canvas or click a blank area of the canvas.
- If necessary, in the upper-right corner of the screen, click the Properties icon.
- Add or define the data object information listed in the following table as required.

Table 4.14. Data object parameters

Label | Description |
---|---|
Name | The name of the data object. You can also double-click the data object shape to edit the name. |
Type | Select a type of the data object. |
- Click Save.