Release Notes for AMQ Streams 1.8 on RHEL
For use with AMQ Streams on Red Hat Enterprise Linux
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Features
The features added in this release, and that were not in previous releases of AMQ Streams, are outlined below.
To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project.
1.1. Kafka 2.8.0 support
AMQ Streams now supports Apache Kafka version 2.8.0.
AMQ Streams uses Kafka 2.8.0. Only Kafka distributions built by Red Hat are supported.
For upgrade instructions, see AMQ Streams and Kafka upgrades.
Refer to the Kafka 2.7.0 and Kafka 2.8.0 Release Notes for additional information.
Kafka 2.7.x is supported only for the purpose of upgrading to AMQ Streams 1.8.
For more information on supported versions, see the Red Hat Knowledgebase article Red Hat AMQ 7 Component Details Page.
Kafka 2.8.0 requires ZooKeeper version 3.5.9. Therefore, you need to upgrade ZooKeeper when upgrading from AMQ Streams 1.7 to AMQ Streams 1.8, as described in the upgrade documentation.
Kafka 2.8.0 provides early access to self-managed mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. Note that self-managed mode is not supported in AMQ Streams.
Chapter 2. Enhancements
The enhancements added in this release are outlined below.
2.1. Kafka 2.8.0 enhancements
For an overview of the enhancements introduced with Kafka 2.8.0, refer to the Kafka 2.8.0 Release Notes.
2.2. OAuth 2.0 authentication enhancements
Configure audience and scope
You can now configure the oauth.audience and oauth.scope properties and pass their values as parameters when obtaining a token. Both properties are configured in the OAuth 2.0 authentication listener configuration.
Use these properties in the following scenarios:
- When obtaining an access token for inter-broker authentication
- In the name of a client for OAuth 2.0 over PLAIN client authentication, using a clientId and secret
These properties affect whether a client can obtain a token and the content of the token. They do not affect token validation rules imposed by the listener.
Example configuration for oauth.audience and oauth.scope properties
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  # ...
  oauth.token.endpoint.uri="https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token" \
  oauth.scope="SCOPE" \
  oauth.audience="AUDIENCE" \
  oauth.check.audience="true" \
  # ...
Your authorization server might provide aud (audience) claims in JWT access tokens. When audience checks are enabled by setting oauth.check.audience="true", the Kafka broker rejects tokens that do not contain the broker’s clientId in their aud claims. Audience checks are disabled by default.
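For illustration, a decoded access token that would pass the audience check for a broker whose clientId is kafka-broker might carry claims like the following (all names and values here are hypothetical):

{
  "iss": "https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME",
  "aud": ["kafka-broker", "AUDIENCE"],
  "scope": "SCOPE",
  "preferred_username": "my-client"
}

With oauth.check.audience="true", the broker accepts this token only because its own clientId appears among the aud values.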
See Configuring OAuth 2.0 support for Kafka brokers
Token endpoint not required with OAuth 2.0 over PLAIN
The oauth.token.endpoint.uri parameter is no longer required when using the "client ID and secret" method for OAuth 2.0 over PLAIN authentication.
Example OAuth 2.0 over PLAIN listener configuration with token endpoint URI specified
listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  oauth.valid.issuer.uri="https://__AUTH-SERVER-ADDRESS__" \
  oauth.jwks.endpoint.uri="https://__AUTH-SERVER-ADDRESS__/jwks" \
  oauth.username.claim="preferred_username" \
  oauth.token.endpoint.uri="http://__AUTH_SERVER__/auth/realms/__REALM__/protocol/openid-connect/token" ;
If the oauth.token.endpoint.uri is not specified, the listener treats the:
- username parameter as the account name
- password parameter as the raw access token, which is passed to the authorization server for validation (the same behavior as for OAUTHBEARER authentication)
The behavior of the "long-lived access token" method for OAuth 2.0 over PLAIN authentication is unchanged. The oauth.token.endpoint.uri is not required when using this method.
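For reference, a minimal sketch of the same listener configuration with the token endpoint omitted, in which case the password parameter must carry a raw access token (the server addresses are placeholders, as above):

listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  oauth.valid.issuer.uri="https://__AUTH-SERVER-ADDRESS__" \
  oauth.jwks.endpoint.uri="https://__AUTH-SERVER-ADDRESS__/jwks" \
  oauth.username.claim="preferred_username" ;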
Chapter 3. Technology Previews
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope.
3.1. Kafka Static Quota plugin configuration
Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers.
Example Kafka Static Quota plugin configuration
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
client.quota.callback.static.storage.soft=400000000000
client.quota.callback.static.storage.hard=500000000000
client.quota.callback.static.storage.check-interval=5
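For reference, an annotated reading of the same settings, assuming the Strimzi Static Quota plugin conventions of byte rates in bytes per second, storage limits in bytes, and the check interval in seconds:

# Produce and fetch byte-rate thresholds: 1000000 bytes (about 1 MB) per second
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
# Soft and hard storage limits of 400 GB and 500 GB, checked every 5 seconds
client.quota.callback.static.storage.soft=400000000000
client.quota.callback.static.storage.hard=500000000000
client.quota.callback.static.storage.check-interval=5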
See Setting limits on brokers using the Kafka Static Quota plugin
3.2. Cruise Control for cluster rebalancing
Cruise Control remains in Technology Preview, with some new enhancements.
You can install Cruise Control and use it to rebalance your Kafka cluster using optimization goals — defined constraints on CPU, disk, network load, and more. In a balanced Kafka cluster, the workload is more evenly distributed across the broker pods.
Cruise Control helps to reduce the time and effort involved in running an efficient and balanced Kafka cluster.
A zipped distribution of Cruise Control is available for download from the Customer Portal. To install Cruise Control, you configure each Kafka broker to use the provided Metrics Reporter. Then, you set Cruise Control properties, including optimization goals, and start Cruise Control using the provided script.
The Cruise Control server is hosted on a single machine for the whole Kafka cluster.
When Cruise Control is running, you can use the REST API to:
- Generate dry run optimization proposals from multiple optimization goals
- Initiate an optimization proposal to rebalance the Kafka cluster
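For example, assuming a Cruise Control server listening on its default port 9090 on localhost, a dry-run proposal and an actual rebalance might be requested as follows (the host, port, and goal names here are illustrative):

curl -X GET 'http://localhost:9090/kafkacruisecontrol/proposals?goals=DiskCapacityGoal,NetworkInboundCapacityGoal'
curl -X POST 'http://localhost:9090/kafkacruisecontrol/rebalance?dryrun=false'

The rebalance endpoint performs a dry run unless dryrun=false is passed.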
Other Cruise Control features are not currently supported, including anomaly detection, notifications, write-your-own goals, and changing the topic replication factor.
See Cruise Control for cluster rebalancing
3.2.1. Enhancements to the Technology Preview
Cruise Control version 2.5.59 provides significant performance improvements, including 10% faster optimization proposal calculations.
A zipped distribution of the latest version is available to download from the Red Hat Customer Portal.
See Customer Portal
Chapter 4. Deprecated features
The features deprecated in this release, and that were supported in previous releases of AMQ Streams, are outlined below.
4.1. Deprecated and removed Kafka features
This section gives advance notice of important deprecations and removals in the Apache Kafka project.
4.1.1. Planned for removal in Kafka version 3.0
Kafka version 3.0 will be shipped with the next major release of AMQ Streams.
The following table shows methods and components that were deprecated in Kafka 2.x or earlier and will be removed in Kafka 3.0. This list is not exhaustive.
API or component | Issue link | Description
---|---|---
Admin API | Remove deprecated Admin.electPreferredLeaders |
Admin API | Reimplement KafkaFuture with CompletableFuture (deprecate KafkaFuture.Function) |
Admin client | Remove deprecated |
All clients | Remove various deprecated methods from clients for 3.0 |
All clients | Remove deprecated config value |
All clients | Remove deprecated security classes/methods |
Broker | Remove deprecated |
Broker | Remove deprecated LogConfig.Compact |
Broker | Remove deprecated SimpleAclAuthorizer |
Broker | Remove PrincipalBuilder and DefaultPrincipalBuilder |
Common | Removed deprecated |
Consumer API | Remove deprecated PartitionAssignor interface |
Connect API | Remove deprecated rest.host.name and rest.port Connect worker configs |
Connect API | Remove port, host.name, and related configs in 3.0 |
Connect API | Remove internal converter config properties |
Streams API | Deprecate eos-alpha |
Streams API | Remove deprecated methods under StreamsMetrics |
Streams API | Remove deprecated options from StreamsResetter |
Streams API | Removal of deprecated classes under |
Streams API | Remove deprecated APIs of Kafka Streams in 3.0 |
Streams API | Remove deprecated methods on WindowStore |
Streams API | Remove deprecated WindowStore#put |
Streams API | Remove deprecated schedule method in ProcessorContext |
Streams API | Remove deprecated methods under Stores |
Streams API | Remove deprecated method StreamsConfig#getConsumerConfig |
Streams API | Deprecate the default.windowed.serde.inner.class configs |
Streams API | Remove deprecated RocksDB#compactRange API |
Streams API | Remove deprecated |
Streams API | Remove deprecated "UsePreviousTimeOnInvalidTimeStamp" |
Streams API | Remove deprecated TopologyDescription.Source#topics |
Streams API | Remove deprecated KafkaClientSupplier#getAdminClient |
Streams API | Deprecated PartitionGrouper config is ignored |
Streams API | Remove deprecated "TopologyTestDriver#pipeInput / readOutput" |
Streams API | Remove deprecated methods StreamsBuilder#addGlobalStore |
Streams API | Remove deprecated overloads for ProcessorContext#forward |
Streams API | Remove deprecated methods from ReadOnlyWindowStore |
Streams API | Remove deprecated Count and SampledTotal in 3.0 |
Streams API | Remove deprecation annotation on long-based read operations in WindowStore |
Streams API | Remove deprecated "KStream#groupBy/join", "Joined#named" overloads |
Streams API | Migrate TaskMetadata to interface with internal implementation |
Streams API | Remove PartitionGrouper interface and its config and move DefaultPartitionGrouper to internal package |
Streams API | Remove segment/segmentInterval from Window definition |
Streams API | Increase Version of RocksDB |
Streams API | Allow users to opt-into spurious left/outer stream-stream join improvement |
Tools | Remove deprecated |
Tools | Remove deprecated --zookeeper in shell commands |
4.1.2. Mirror Maker 1.0 planned for removal in Kafka version 4.0
Kafka version 4.0 will be shipped in a future major release of AMQ Streams.
The following table shows a feature that will be deprecated in Kafka 3.0 and removed in Kafka 4.0.
Component | Link to issue | Summary
---|---|---
Mirror Maker 1.0 | Deprecate MirrorMaker v1 |
Chapter 5. Fixed issues
The following sections list the issues fixed in AMQ Streams 1.8.x. Red Hat recommends that you upgrade to the latest patch release.
For details of the issues fixed in Kafka 2.8.0, refer to the Kafka 2.8.0 Release Notes.
5.1. Fixed issues for AMQ Streams 1.8.4
The AMQ Streams 1.8.4 patch release is now available.
For additional details about the issues resolved in AMQ Streams 1.8.4, see AMQ Streams 1.8.x Resolved Issues.
Log4j2 vulnerability
The 1.8.4 release fixes a remote code execution vulnerability in AMQ Streams components that use log4j2. The vulnerability could allow remote code execution on the server if the system logs a string value from an unauthorized source. The vulnerability affects log4j2 versions 2.0 through 2.14.1.
For more information, see CVE-2021-44228.
5.2. Fixed issues for AMQ Streams 1.8.0
Issue Number | Description
---|---
| The
| Running Kafka Exporter leads to high CPU usage.
| Fine tune the health checks to stop Kafka Exporter restarting during rolling updates.
| File Source Connector stops in the case of a large file.
Issue Number | Title | Description
---|---|---
CVE-2021-34428 | jetty-server: jetty: SessionListener can prevent a session from being invalidated, breaking logout | A flaw was discovered in jetty-server. If an exception is thrown from the SessionListener#sessionDestroyed() method, the session ID is not invalidated in the session ID manager. On deployments with clustered sessions and multiple contexts, this could result in a session not being invalidated and a shared-computer application being left logged in. The highest threat from this vulnerability is to data confidentiality and integrity.
CVE-2021-28169 | jetty-server: jetty: requests to the ConcatServlet and WelcomeFilter are able to access protected resources within the WEB-INF directory | -
CVE-2021-21409 | netty: Request smuggling via content-length header | A flaw was found in Netty, where the content-length header is not validated correctly if the request uses a single Http2HeaderFrame with the endStream set to true. This flaw leads to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. The highest threat from this vulnerability is to integrity.
CVE-2021-27568 | json-smart: uncaught exception may lead to crash or information disclosure | A flaw was found in json-smart. When an exception is thrown from a function but not caught, the program using the library may crash or expose sensitive information. The highest threat from this vulnerability is to data confidentiality and system availability. In OpenShift Container Platform (OCP), the Hive/Presto/Hadoop components that comprise the OCP Metering stack ship the vulnerable version of the json-smart package. Since the release of OCP 4.6, the Metering product has been deprecated, so the affected components are marked as wontfix. This may be fixed in the future.
CVE-2021-21295 | netty: possible request smuggling in HTTP/2 due to missing validation | In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final, there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by
CVE-2021-21290 | netty: Information disclosure via the local system temporary directory | In Netty, there is a vulnerability on Unix-like systems involving an insecure temp file. When Netty’s multipart decoders are used, local information disclosure can occur via the local system temporary directory if temporarily storing uploads on the disk is enabled. On Unix-like systems, the temporary directory is shared between all users. As such, writing to this directory using APIs that do not explicitly set the file or directory permissions can lead to information disclosure.
CVE-2020-13949 | libthrift: potential DoS when processing untrusted payloads | A flaw was found in libthrift. Applications using Thrift would not show an error upon receiving messages declaring containers of sizes larger than the payload. As a result, malicious RPC clients could send short messages that cause a large memory allocation, potentially leading to denial of service. The highest threat from this vulnerability is to system availability.
CVE-2020-9488 | log4j: improper validation of certificate with host mismatch in SMTP appender | -
CVE-2021-28163 | jetty-server: jetty: Symlink directory exposes webapp directory contents | If the
CVE-2021-28164 | jetty-server: jetty: Ambiguous paths can access WEB-INF | In Jetty, the default compliance mode allows requests with URIs that contain
CVE-2021-28165 | jetty-server: jetty: Resource exhaustion when receiving an invalid large TLS frame | When using SSL/TLS with Jetty, either with HTTP/1.1, HTTP/2, or WebSocket, the server may receive an invalid large (greater than 17408) TLS frame that is incorrectly handled, causing high CPU utilization. The highest threat from this vulnerability is to service availability.
CVE-2021-29425 | commons-io: apache-commons-io: Limited path traversal in Apache Commons IO 2.2 to 2.6 | -
CVE-2021-28168 | jersey-common: jersey: Local information disclosure via system temporary directory | -
Chapter 6. Known issues
This section lists the known issues for AMQ Streams 1.8.
6.1. SMTP appender for log4j
AMQ Streams ships with a potentially vulnerable version of log4j (log4j-1.2.17.redhat-3). The vulnerability lies with the SMTP appender functionality, which is not used by AMQ Streams in its default configuration.
Issue Number | Description
---|---
CVE-2020-9488 | log4j: improper validation of certificate with host mismatch in SMTP appender
Workaround
If you are using the SMTP appender, ensure that mail.smtp.ssl.checkserveridentity is set to true.
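How you set this depends on how the appender is configured; one way is to pass the JavaMail property as a JVM system property when starting the affected component, for example (a sketch; KAFKA_OPTS is honored by the Kafka run scripts):

export KAFKA_OPTS="-Dmail.smtp.ssl.checkserveridentity=true"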
Chapter 7. Supported integration products
AMQ Streams 1.8 supports integration with the following Red Hat products.
- Red Hat Single Sign-On 7.4 and later
- Provides OAuth 2.0 authentication and OAuth 2.0 authorization.
For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the AMQ Streams 1.8 documentation.
Additional resources
Chapter 8. Important links
Revised on 2021-12-14 20:09:39 UTC