Chapter 14. Reference


14.1. MicroProfile Config reference

14.1.1. Default MicroProfile Config attributes

The MicroProfile Config specification defines three ConfigSources by default.

ConfigSources are sorted according to their ordinal value. If the same configuration property is defined in more than one ConfigSource, the value from the ConfigSource with the higher ordinal takes precedence.

Table 14.1. Default MicroProfile Config attributes

System properties (ordinal: 400)
Environment variables (ordinal: 300)
Property files META-INF/microprofile-config.properties found on the classpath (ordinal: 100)
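
For context, here is a minimal sketch of an application class that reads one of these properties through MicroProfile Config. The property name greeting.message and the class are illustrative, and on Jakarta-based releases the CDI imports use jakarta.* instead of javax.*.

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class GreetingService {

    // Resolved from the highest-ordinal ConfigSource that defines it: a system
    // property (400) wins over an environment variable (300), which wins over
    // META-INF/microprofile-config.properties (100).
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;

    public String message() {
        return message;
    }
}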

14.1.2. MicroProfile Config SmallRye ConfigSources

The microprofile-config-smallrye project defines more ConfigSources you can use in addition to the default MicroProfile Config ConfigSources.

Table 14.2. Additional MicroProfile Config attributes

config-source in the Subsystem (ordinal: 100)
ConfigSource from the Directory (ordinal: 100)
ConfigSource from Class (ordinal: 100)

An explicit ordinal is not specified for these ConfigSources. They inherit the default ordinal value found in the MicroProfile Config specification.
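
For illustration, the following management CLI sketch adds a properties-backed config-source in the subsystem and a directory-backed ConfigSource. The resource names app-props and file-props, the property, and the path are placeholders; verify the exact syntax against your server version.

/subsystem=microprofile-config-smallrye/config-source=app-props:add(properties={"greeting.message" => "Hello from the subsystem"})
/subsystem=microprofile-config-smallrye/config-source=file-props:add(dir={path=/etc/config/app-props})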

14.2. MicroProfile Fault Tolerance reference

14.2.1. MicroProfile Fault Tolerance configuration properties

The SmallRye Fault Tolerance project defines the following properties in addition to the properties defined in the MicroProfile Fault Tolerance specification.

Table 14.3. MicroProfile Fault Tolerance configuration properties

io.smallrye.faulttolerance.mainThreadPoolSize (default: 100)

Maximum number of threads in the thread pool.

io.smallrye.faulttolerance.mainThreadPoolQueueSize (default: -1, unbounded)

Size of the queue that the thread pool should use.
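
For example, you could tune these values in a microprofile-config.properties file. The numbers below are purely illustrative.

io.smallrye.faulttolerance.mainThreadPoolSize=150
io.smallrye.faulttolerance.mainThreadPoolQueueSize=1000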

14.3. MicroProfile JWT reference

14.3.1. MicroProfile Config JWT standard properties

The microprofile-jwt-smallrye subsystem supports the following MicroProfile Config standard properties.

Table 14.4. MicroProfile Config JWT standard properties

mp.jwt.verify.publickey (default: NONE)

String representation of the public key, encoded in one of the supported formats. Do not set this property if you have set mp.jwt.verify.publickey.location.

mp.jwt.verify.publickey.location (default: NONE)

The location of the public key; this can be a relative path or a URL. Do not set this property if you have set mp.jwt.verify.publickey.

mp.jwt.verify.issuer (default: NONE)

The expected value of the iss claim in any JWT being validated.

Example microprofile-config.properties configuration:

mp.jwt.verify.publickey.location=META-INF/public.pem
mp.jwt.verify.issuer=jwt-issuer
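
A minimal sketch of a resource that consumes the verified token is shown below. The class name and path are illustrative, and on Jakarta EE 9+ based releases the imports use the jakarta.* namespace instead of javax.*.

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("/whoami")
@RequestScoped
public class WhoAmIResource {

    // Populated only when the incoming Bearer token is signed by the key configured
    // with mp.jwt.verify.publickey(.location) and carries the configured iss value.
    @Inject
    JsonWebToken jwt;

    @GET
    public String whoAmI() {
        return jwt.getName();
    }
}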

14.4. MicroProfile OpenAPI reference

14.4.1. MicroProfile OpenAPI configuration properties

In addition to the standard MicroProfile OpenAPI configuration properties, JBoss EAP supports the following additional MicroProfile OpenAPI properties. These properties can be applied in both the global and the application scope.

Table 14.5. MicroProfile OpenAPI properties in JBoss EAP

mp.openapi.extensions.enabled (default: true)

Enables or disables registration of an OpenAPI endpoint.

When set to false, disables generation of OpenAPI documentation. You can set the value globally using the microprofile-config-smallrye subsystem, or for each application in a configuration file such as /META-INF/microprofile-config.properties.

You can parameterize this property to selectively enable or disable microprofile-openapi-smallrye in different environments, such as production or development.

You can use this property to control which application associated with a given virtual host should generate a MicroProfile OpenAPI model.

mp.openapi.extensions.path (default: /openapi)

You can use this property to generate OpenAPI documentation for multiple applications associated with a virtual host.

Set a distinct mp.openapi.extensions.path on each application associated with the same virtual host.

mp.openapi.extensions.servers.relative (default: true)

Indicates whether auto-generated server records are absolute or relative to the location of the OpenAPI endpoint.

Server records are necessary to ensure, in the presence of a non-root context path, that consumers of an OpenAPI document can construct valid URLs to REST services relative to the host of the OpenAPI endpoint.

The value true indicates that the server records are relative to the location of the OpenAPI endpoint; the generated record contains the context path of the deployment.

When set to false, JBoss EAP XP generates server records that include all the protocols, hosts, and ports at which the deployment is accessible.
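
For example, one application on a shared virtual host might ship a /META-INF/microprofile-config.properties like the following; the path value is illustrative.

mp.openapi.extensions.enabled=true
mp.openapi.extensions.path=/app1/openapi
mp.openapi.extensions.servers.relative=true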

14.5. MicroProfile Reactive Messaging reference

14.5.1. MicroProfile reactive messaging connectors for integrating with external messaging systems

The following is a list of reactive messaging property key prefixes required by the MicroProfile Reactive Messaging specification:

  • mp.messaging.incoming.[channel-name].[attribute]=[value]
  • mp.messaging.outgoing.[channel-name].[attribute]=[value]
  • mp.messaging.connector.[connector-name].[attribute]=[value]

Note that channel-name is either the @Incoming.value() or the @Outgoing.value(). For clarification, look at this example of a pair of connector methods:

@Outgoing("to")
public int send() {
   int i = ThreadLocalRandom.current().nextInt(); // Randomly generated value
   return i;
}

@Incoming("from")
public void receive(int i) {
   // Process payload
}

In this example, the required property prefixes are as follows:

  • mp.messaging.incoming.from. This prefix configures the receive() method.
  • mp.messaging.outgoing.to. This prefix configures the send() method.

Remember that this is an example. Because different connectors recognize different properties, the attributes you set under these prefixes depend on the connector you want to configure.

14.5.2. Example of the data exchange between reactive messaging streams and user-initialized code

The following is an example of data exchange between reactive messaging streams and code that a user triggered through the @Channel and Emitter constructs:

@Path("/")
@ApplicationScoped
class MyBean {
    @Inject @Channel("my-stream")
    Emitter<String> emitter;

    Publisher<String> dest; 1

    public MyBean() { 2
    }

    @Inject
    public MyBean(@Channel("my-stream") Publisher<String> dest) {
        this.dest = subscribeAndAllowMultipleSubscriptions(dest);
    }

    private Publisher<String> subscribeAndAllowMultipleSubscriptions(Publisher<String> delegate) {
        // Stub: see callouts 3, 4, and 5; one possible implementation is sketched later in this section.
    } 3 4 5

    @POST
    public CompletionStage<Void> publish(@FormParam("value") String value) {
        return emitter.send(value);
    }

    @GET
    public Publisher<String> poll() {
        return dest;
    }

    @PreDestroy
    public void close() { 6

    }
}

In-line details:

1
Wraps the constructor-injected publisher.
2
You need this empty constructor to satisfy the Contexts and Dependency Injection (CDI) for Java specification.
3
Subscribe to the delegate.
4
Wrap the delegate in a publisher that can handle multiple subscriptions.
5
The wrapping publisher forwards data from the delegate.
6
Unsubscribe from the reactive messaging-provided publisher.

In this example, MicroProfile Reactive Messaging is listening to the my-stream in-memory stream, so messages sent through the Emitter are received on this injected publisher. Note, though, that the following conditions must be true for this data exchange to succeed:

  1. There must be an active subscription on the channel before you call Emitter.send(). In this example, notice that the subscribeAndAllowMultipleSubscriptions() method called by the constructor ensures that there’s an active subscription by the time the bean is available for user code calls.
  2. You can have only one Subscription on the injected Publisher. If you want to expose the receiving publisher with a REST call, where each call to the poll() method results in a new subscription to the dest publisher, you have to implement your own publisher to broadcast data from the injected publisher to each client; one possible approach is sketched after this list.
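
The following is a minimal sketch of such a broadcasting publisher, assuming SmallRye Mutiny's BroadcastProcessor (package name as in Mutiny 1.x) is available on the classpath; a production implementation would also keep a reference to the upstream subscription so that close() can cancel it.

import io.smallrye.mutiny.operators.multi.processors.BroadcastProcessor;
import org.reactivestreams.Publisher;

private Publisher<String> subscribeAndAllowMultipleSubscriptions(Publisher<String> delegate) {
    // BroadcastProcessor is both a Subscriber and a Publisher: it subscribes to the
    // delegate once, satisfying condition 1, and re-publishes every item to each of
    // its own subscribers, satisfying condition 2. Items that arrive while no client
    // is subscribed are dropped.
    BroadcastProcessor<String> broadcast = BroadcastProcessor.create();
    delegate.subscribe(broadcast);
    return broadcast;
}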

14.5.3. The Apache Kafka user API

You can use the Apache Kafka user API to get more information about messages Kafka received, and to influence how Kafka handles messages. This API is contained in the io.smallrye.reactive.messaging.kafka.api package, and it consists of the following classes:

  • IncomingKafkaRecordMetadata. This metadata contains the following information:

    • The key of the Kafka record that the Message was created from.
    • The Kafka topic and partition the Message was read from, and the offset within the partition.
    • The Message timestamp and timestampType.
    • The Message headers. These are pieces of information that the application can attach on the producing side, and receive on the consuming side.
  • OutgoingKafkaRecordMetadata. With this metadata, you can specify or override how Kafka handles messages. It contains the following information:

    • The key, which Kafka treats as the message key.
    • The topic you want Kafka to use.
    • The partition.
    • The timestamp, if you don’t want the one that Kafka generates.
    • The headers.
  • KafkaMetadataUtil contains utility methods to write OutgoingKafkaRecordMetadata to a Message, and to read IncomingKafkaRecordMetadata from a Message.
Important

If you write OutgoingKafkaRecordMetadata to a Message sent to a channel that’s not mapped to Kafka, the reactive messaging framework ignores it. Conversely, if you read IncomingKafkaRecordMetadata from a Message that arrived on a channel not mapped to Kafka, the metadata is null.

Example of how to write and read a message key
@Inject
@Channel("from-user")
Emitter<Integer> emitter;

@Incoming("from-user")
@Outgoing("to-kafka")
public Message<Integer> send(Message<Integer> msg) {
    // Set the key in the metadata
    OutgoingKafkaRecordMetadata<String> md =
            OutgoingKafkaRecordMetadata.<String>builder()
                .withKey("KEY-" + i)
                .build();
    // Note that Message is immutable so the copy returned by this method
    // call is not the same as the parameter to the method
    return KafkaMetadataUtil.writeOutgoingKafkaMetadata(msg, md);
}

@Incoming("from-kafka")
public CompletionStage<Void> receive(Message<Integer> msg) {
    IncomingKafkaRecordMetadata<String, Integer> metadata =
        KafkaMetadataUtil.readIncomingKafkaMetadata(msg).get();

    // We can now read the Kafka record key
    String key = metadata.getKey();

    // When using the Message wrapper around the payload we need to explicitly ack
    // them
    return msg.ack();
}
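
Beyond the key, IncomingKafkaRecordMetadata also exposes the topic, partition, offset, timestamp, and headers listed earlier. A minimal sketch, assuming the same receive() method shape as above:

// Read the remaining incoming metadata fields
String topic = metadata.getTopic();
int partition = metadata.getPartition();
long offset = metadata.getOffset();
java.time.Instant timestamp = metadata.getTimestamp();
org.apache.kafka.common.header.Headers headers = metadata.getHeaders();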
Example of Kafka mapping in a microprofile-config.properties file
kafka.bootstrap.servers=kafka:9092

mp.messaging.outgoing.to-kafka.connector=smallrye-kafka
mp.messaging.outgoing.to-kafka.topic=some-topic
mp.messaging.outgoing.to-kafka.value.serializer=org.apache.kafka.common.serialization.IntegerSerializer
mp.messaging.outgoing.to-kafka.key.serializer=org.apache.kafka.common.serialization.StringSerializer

mp.messaging.incoming.from-kafka.connector=smallrye-kafka
mp.messaging.incoming.from-kafka.topic=some-topic
mp.messaging.incoming.from-kafka.value.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer
mp.messaging.incoming.from-kafka.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
Note

You must specify the key.serializer for the outgoing channel and the key.deserializer for the incoming channel.

14.5.4. Example MicroProfile Config properties file for the Kafka connector

This is an example of a simple microprofile-config.properties file for a Kafka connector. Its properties correspond to the properties in the example in "MicroProfile reactive messaging connectors for integrating with external messaging systems."

kafka.bootstrap.servers=kafka:9092

mp.messaging.outgoing.to.connector=smallrye-kafka
mp.messaging.outgoing.to.topic=my-topic
mp.messaging.outgoing.to.value.serializer=org.apache.kafka.common.serialization.IntegerSerializer

mp.messaging.incoming.from.connector=smallrye-kafka
mp.messaging.incoming.from.topic=my-topic
mp.messaging.incoming.from.value.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer
Table 14.6. Discussion of entries

to, from

These are "channels."

send, receive

These are "methods."

Note that the to channel is on the send() method and the from channel is on the receive() method.

kafka.bootstrap.servers=kafka:9092

This specifies the URL of the Kafka broker that the application must connect to. You can also specify a URL at the channel level, like this: mp.messaging.outgoing.to.bootstrap.servers=kafka:9092

mp.messaging.outgoing.to.connector=smallrye-kafka

This indicates that you want the to channel to send messages to Kafka.

SmallRye reactive messaging is a framework for building applications. Note that the smallrye-kafka value is SmallRye reactive messaging-specific. If you’re provisioning your own server using Galleon, you can enable the Kafka integration by including the microprofile-reactive-messaging-kafka Galleon layer.

mp.messaging.outgoing.to.topic=my-topic

This indicates that you want to send data to a Kafka topic called my-topic.

A Kafka "topic" is a category or feed name that messages are stored on and published to. All Kafka messages are organized into topics. Producer applications write data to topics and consumer applications read data from topics.

mp.messaging.outgoing.to.value.serializer=org.apache.kafka.common.serialization.IntegerSerializer

This tells the connector to use IntegerSerializer to serialize the values that the send() method outputs when it writes to a topic. Kafka provides serializers for standard Java types. You can implement your own serializer by writing a class that implements org.apache.kafka.common.serialization.Serializer, and then include that class in your deployment.

mp.messaging.incoming.from.connector=smallrye-kafka

This indicates that you want to use the from channel to receive messages from Kafka. Again, the smallrye-kafka value is SmallRye reactive messaging-specific.

mp.messaging.incoming.from.topic=my-topic

This indicates that your connector should read data from the Kafka topic called my-topic.

mp.messaging.incoming.from.value.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer

This tells the connector to use IntegerDeserializer to deserialize the values from the topic before calling the receive() method. You can implement your own deserializer by writing a class that implements org.apache.kafka.common.serialization.Deserializer, and then include that class in your deployment.
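
For illustration, a minimal custom serializer/deserializer pair might look like the following. LocalDateSerializer and LocalDateDeserializer are hypothetical names; you would reference them from the value.serializer and value.deserializer properties using their fully qualified class names.

import java.nio.charset.StandardCharsets;
import java.time.LocalDate;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical example: store LocalDate payloads as ISO-8601 strings in Kafka records.
public class LocalDateSerializer implements Serializer<LocalDate> {
    @Override
    public byte[] serialize(String topic, LocalDate value) {
        return value == null ? null : value.toString().getBytes(StandardCharsets.UTF_8);
    }
}

class LocalDateDeserializer implements Deserializer<LocalDate> {
    @Override
    public LocalDate deserialize(String topic, byte[] data) {
        return data == null ? null : LocalDate.parse(new String(data, StandardCharsets.UTF_8));
    }
}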

Note

This list of properties is not comprehensive. See the SmallRye Reactive Messaging Apache Kafka documentation for more information.

Mandatory MicroProfile Reactive Messaging prefixes

The MicroProfile Reactive Messaging specification requires the following property key prefixes:

  • mp.messaging.incoming.[channel-name].[attribute]=[value]
  • mp.messaging.outgoing.[channel-name].[attribute]=[value]
  • mp.messaging.connector.[connector-name].[attribute]=[value]

Note that channel-name is either the @Incoming.value() or the @Outgoing.value().

Now consider the following method pair example:

@Outgoing("to")
public int send() {
    int i = ThreadLocalRandom.current().nextInt(); // Randomly generated value
    return i;
}

@Incoming("from")
public void receive(int i) {
    // Process payload
}

In this method pair example, note the following required property prefixes:

  • mp.messaging.incoming.from. This prefix selects the property as your configuration of the receive() method.
  • mp.messaging.outgoing.to. This prefix selects the property as your configuration of the send() method.

14.5.5. Example MicroProfile Config properties file for the AMQP connector

This is an example of a simple microprofile-config.properties file for an Advanced Message Queuing Protocol (AMQP) connector. Its properties correspond to the properties in the example in MicroProfile reactive messaging connectors for integrating with external messaging systems.

amqp-host=localhost
amqp-port=5672
amqp-username=artemis
amqp-password=artemis

mp.messaging.outgoing.to.connector=smallrye-amqp
mp.messaging.outgoing.to.address=my-topic

mp.messaging.incoming.from.connector=smallrye-amqp
mp.messaging.incoming.from.address=my-topic
Table 14.7. Discussion of entries

to, from

These are "channels."

send, receive

These are "methods."

Note that the to channel is on the send() method and the from channel is on the receive() method.

amqp-host=localhost

This specifies the host of the AMQP broker that the application must connect to. You can also specify the host at the channel level, like this: mp.messaging.outgoing.to.host=localhost. The value defaults to localhost when no host is specified.

amqp-port=5672

This specifies the port of the AMQP broker.

mp.messaging.outgoing.to.connector=smallrye-amqp

This indicates that you want the to channel to send messages to the AMQP broker.

SmallRye reactive messaging is a framework for building applications. Note that the smallrye-amqp value is SmallRye reactive messaging specific. If you’re provisioning your own server using Galleon, you can enable the AMQP integration by including the microprofile-reactive-messaging-amqp Galleon layer.

mp.messaging.outgoing.to.address=my-topic

This indicates that you want to send data to an AMQP queue on the address my-topic. If you do not specify a value for mp.messaging.outgoing.to.address, the value defaults to the channel name, which in this example is to.

mp.messaging.incoming.from.connector=smallrye-amqp

This indicates that you want to use the from channel to receive messages from the AMQP broker. Again, the smallrye-amqp value is SmallRye reactive messaging-specific.

mp.messaging.incoming.from.address=my-topic

This indicates that you want to read data from the AMQP queue my-topic on the from channel.

For a complete list of properties supported by the SmallRye Reactive Messaging AMQP connector, see SmallRye Reactive Messaging AMQP Connector Configuration Reference.
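
The connector also accepts per-channel connection settings that override the global amqp-* values. For example, a sketch with an illustrative host name:

mp.messaging.outgoing.to.host=broker.example.com
mp.messaging.outgoing.to.port=5672
mp.messaging.outgoing.to.username=artemis
mp.messaging.outgoing.to.password=artemis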

Connecting to a secure AMQP broker

To connect to an AMQ broker secured with SSL/TLS and Simple Authentication and Security Layer (SASL), define the client-ssl-context to use for the connection in the microprofile-config.properties file. You can do this at the connector level or at the channel level.

Example of connector level client-ssl-context definition

amqp-use-ssl=true
mp.messaging.connector.smallrye-amqp.wildfly.elytron.ssl.context=exampleSSLContext

The attribute mp.messaging.connector.smallrye-amqp.wildfly.elytron.ssl.context is only required when you use self-signed certificates.

Important

Do not use self-signed certificates in a production environment. Use only the certificates signed by a certificate authority (CA).

You can also specify the client-ssl-context for a channel as follows:

Example of channel-level client-ssl-context definition

mp.messaging.incoming.from.wildfly.elytron.ssl.context=exampleSSLContext

In the example, the exampleSSLContext is associated only with the incoming channel from.

Table 14.8. Discussion of entries

amqp-use-ssl

This specifies that we want to use a secure connection when connecting to the broker.

mp.messaging.connector.smallrye-amqp.wildfly.elytron.ssl.context

You do not need to specify this attribute if the AMQ broker is secured with a Certificate Authority (CA)-signed certificate.

If you use a self-signed certificate, specify the SSLContext that is defined in the Elytron subsystem under /subsystem=elytron/client-ssl-context=* in the management model.

Important

Do not use self-signed certificates in a production environment. Use only the certificates signed by a certificate authority (CA).

You can define client-ssl-context by using the following management CLI command:

/subsystem=elytron/client-ssl-context=exampleSSLContext:add(key-manager=exampleServerKeyManager,trust-manager=exampleTLSTrustManager)

For more information, see Configuring a trust store and a trust manager for client certificates and Configuring a server certificate for two-way SSL/TLS in the Configuring SSL/TLS in JBoss EAP guide.

14.6. OpenTelemetry reference

14.6.1. OpenTelemetry subsystem attributes

You can modify opentelemetry subsystem attributes to configure its behavior. The attributes are grouped by the aspect they configure: exporter, sampler, and span processor.

Table 14.9. Exporter attribute group

endpoint (default: http://localhost:14250/)

The URL to which OpenTelemetry pushes traces. Set this to the URL where your exporter listens.

exporter-type (default: jaeger)

The exporter to which traces are sent. It can be one of the following:

  • jaeger. The exporter you use is Jaeger.
  • otlp. The exporter you use works with the OpenTelemetry protocol.

Table 14.10. Sampler attribute group

ratio (no default value)

The ratio of traces to export. The value must be between 0.0 and 1.0. For example, to export one trace in every 100 traces created by an application, set the value to 0.01. This attribute takes effect only if you set the sampler-type attribute to ratio.

Table 14.11. Span processor attribute group

batch-delay (default: 5000)

The interval, in milliseconds, between two consecutive exports by JBoss EAP. This attribute takes effect only if you set the span-processor-type attribute to batch.

export-timeout (default: 30000)

The maximum amount of time, in milliseconds, to allow an export to complete before it is cancelled.

max-export-batch-size (default: 512)

The maximum number of traces published in each batch. This number must be less than or equal to the value of max-queue-size. You can set this attribute only if you set the span-processor-type attribute to batch.

max-queue-size (default: 2048)

The maximum number of traces to queue before exporting. If an application creates more traces, they are not recorded. This attribute takes effect only if you set the span-processor-type attribute to batch.

span-processor-type (default: batch)

The type of span processor to use. The value can be one of the following:

  • batch: JBoss EAP exports traces in batches that are defined using the following attributes:

    • batch-delay
    • max-export-batch-size
    • max-queue-size
  • simple: JBoss EAP exports traces as soon as they finish.
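
The following management CLI sketch shows one way to switch to the OTLP exporter and tune the batch span processor. It assumes these attributes are set directly on the opentelemetry subsystem resource and that your collector listens on the conventional OTLP gRPC port 4317; adjust both for your environment. A server reload is required for the changes to take effect.

/subsystem=opentelemetry:write-attribute(name=exporter-type, value=otlp)
/subsystem=opentelemetry:write-attribute(name=endpoint, value=http://localhost:4317)
/subsystem=opentelemetry:write-attribute(name=span-processor-type, value=batch)
/subsystem=opentelemetry:write-attribute(name=max-export-batch-size, value=256)
reload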
