HawtIO Diagnostic Console Guide
Manage applications with Red Hat build of HawtIO
Abstract
Preface
HawtIO provides enterprise monitoring tools for viewing and managing HawtIO-enabled applications. It is a web-based console, accessed from a browser, for monitoring and managing a running HawtIO-enabled container. HawtIO is based on the open source HawtIO software (https://hawt.io/). The HawtIO Diagnostic Console Guide describes how to manage applications with HawtIO.
The audience for this guide is Apache Camel ecosystem developers and administrators. This guide assumes familiarity with Apache Camel and with the processing requirements of your organization.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Overview of HawtIO
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments.
This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope.
HawtIO is a diagnostic console for the Red Hat build of Apache Camel and the Red Hat build of AMQ. It is a pluggable web diagnostic console built with modern web technologies such as React and PatternFly. HawtIO provides a central interface to examine and manage the details of one or more deployed HawtIO-enabled containers. HawtIO is available when you install HawtIO standalone or use HawtIO on OpenShift. The integrations that you can view and manage in HawtIO depend on the plugins that are running. You can monitor HawtIO and system resources, perform updates, and start or stop services.
The pluggable architecture is based on Webpack Module Federation and is highly extensible; you can dynamically extend HawtIO with your own plugins or automatically discover plugins inside the JVM. HawtIO ships with built-in plugins that make it useful out of the box for your JVM application: Apache Camel, Connect, JMX, Logs, Runtime, Quartz, and Spring Boot. HawtIO is primarily designed to be used with Camel Quarkus and Camel Spring Boot, and it is also a tool for managing microservice applications. HawtIO is cloud-native and ready to run in the cloud: you can deploy it to Kubernetes and OpenShift with the HawtIO Operator.
The benefits of HawtIO include:
- Runtime management of the JVM via JMX, especially of Camel applications and the AMQ broker, with specialized views
- Visualization, debugging, and tracing of Camel routes
- Simple management and monitoring of application metrics
Chapter 2. Installing HawtIO
There are several options to start using the HawtIO console:
2.1. Adding Red Hat repositories to Maven
To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is no user-specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml.
Prerequisite:
You know the location of the settings.xml file in which you want to add the Red Hat repositories.
Procedure:
In the settings.xml file, add repository elements for the Red Hat repositories, as shown in the example below.
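A minimal sketch of such a settings.xml (the profile and repository IDs are illustrative; the URL shown is the commonly documented Red Hat GA repository):

<settings>
  <profiles>
    <profile>
      <id>red-hat</id>
      <repositories>
        <repository>
          <id>redhat-ga</id>
          <url>https://maven.repository.redhat.com/ga</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>redhat-ga</id>
          <url>https://maven.repository.redhat.com/ga</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>red-hat</activeProfile>
  </activeProfiles>
</settings>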
2.2. Running from CLI (JBang)
You can install and run HawtIO from the CLI using JBang.
If you don’t have JBang locally yet, first install it: https://www.jbang.dev/download/
Procedure:
Install the latest HawtIO on your machine using the jbang command:

$ jbang app install -Dhawtio.jbang.version=4.0.0.redhat-00040 hawtio@hawtio/hawtio

Note: This installation method is available only with JBang >= 0.115.0.
This installs the hawtio command. Launch a HawtIO instance with the following command:
$ hawtio

The command automatically opens the console at http://0.0.0.0:8080/hawtio/. To change the port number, run the following command:
$ hawtio --port 8090

For more information on the configuration options of the CLI, run the following command:
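Assuming the JBang-installed launcher supports the standard --help option:

$ hawtio --help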
2.3. Running a Quarkus app
You can attach HawtIO to your Quarkus application in a single step.
Procedure:
Add io.hawt:hawtio-quarkus and the supporting Camel Quarkus extensions to the dependencies in pom.xml; a sample snippet follows this procedure.

Run HawtIO with your Quarkus application in development mode as follows:
mvn compile quarkus:dev

- Open http://localhost:8080/hawtio to view the HawtIO console.
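The pom.xml snippet referenced in the first step above might look like the following (a minimal sketch; versions are assumed to be managed by the Red Hat-provided BOM, and the Camel Quarkus extension shown is just one example):

<dependency>
  <groupId>io.hawt</groupId>
  <artifactId>hawtio-quarkus</artifactId>
</dependency>
<!-- Camel Quarkus management support so that Camel MBeans are exposed over JMX -->
<dependency>
  <groupId>org.apache.camel.quarkus</groupId>
  <artifactId>camel-quarkus-management</artifactId>
</dependency>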
2.4. Running a Spring Boot app
You can attach HawtIO to your Spring Boot application in two steps.
Procedure:
Add io.hawt:hawtio-springboot and the supporting Camel Spring Boot starters to the dependencies in pom.xml; a sample snippet follows this procedure.

Enable the HawtIO and Jolokia endpoints by adding the following lines to application.properties:

spring.jmx.enabled = true
management.endpoints.web.exposure.include = hawtio,jolokia

Run HawtIO with your Spring Boot application in development mode as follows:
mvn spring-boot:run

- Open http://localhost:8080/actuator/hawtio to view the HawtIO console.
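The pom.xml snippet referenced in the first step above might look like the following (a minimal sketch; versions are assumed to come from the Red Hat-provided BOM, and the Camel starter shown is just one example):

<dependency>
  <groupId>io.hawt</groupId>
  <artifactId>hawtio-springboot</artifactId>
</dependency>
<!-- A Camel Spring Boot starter, if your application runs Camel routes -->
<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-spring-boot-starter</artifactId>
</dependency>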
2.4.1. Configuring HawtIO path
If you prefer not to have the /actuator base path for the HawtIO endpoint, you can do the following:
Customize the Spring Boot management base path with the management.endpoints.web.base-path property:

management.endpoints.web.base-path = /

You can also customize the path to the HawtIO endpoint by setting the management.endpoints.web.path-mapping.hawtio property:

management.endpoints.web.path-mapping.hawtio = hawtio/console

Example:
- There is a working example that shows how to monitor a web application that exposes information about Apache Camel routes, metrics, and more: see the HawtIO Spring Boot example.
- A good MBean for real-time values and charts is java.lang/OperatingSystem. Try looking at Camel routes. Notice that as you change selections in the tree, the list of available tabs changes dynamically based on the content.
Chapter 3. Configuration of HawtIO
The behaviour of HawtIO and its plugins can be configured through System properties.
3.1. Configuration properties
The following table lists the configuration properties for the HawtIO core system and various plugins.
| System | Default | Description |
|---|---|---|
| hawtio.disableProxy | false | Set this property to true to disable ProxyServlet (/hawtio/proxy/*). This makes the Connect plugin unavailable, which means HawtIO can no longer connect to remote JVMs; you might want this for security reasons when the Connect plugin is not used. |
| hawtio.localAddressProbing | true | Whether local address probing for the proxy allowlist is enabled upon startup. Set this property to false to disable it. |
| hawtio.proxyAllowlist | localhost, 127.0.0.1 | Comma-separated allowlist of target hosts that the Connect plugin can connect to via ProxyServlet. All hosts not listed in this allowlist are denied connection for security reasons. This option can be set to * to allow all hosts. Prefixing an element of the list with "r:" lets you define a regex (example: localhost,r:myserver[0-9]+.mydomain.com). |
| hawtio.redirect.scheme | | The scheme to use when redirecting to the login page when authentication is required. |
| hawtio.sessionTimeout | | The maximum time interval, in seconds, that the servlet container keeps this session open between client accesses. If this option is not configured, HawtIO uses the default session timeout of the servlet container. |
3.1.1. Quarkus
For Quarkus, all those properties are configurable in application.properties or application.yaml with the quarkus.hawtio prefix.
For example:
quarkus.hawtio.disableProxy = true
3.1.2. Spring Boot
For Spring Boot, all those properties are configurable in application.properties or application.yaml as is.
For example:
hawtio.disableProxy = true
3.2. Configuring Jolokia through system properties
The Jolokia agent is deployed automatically with io.hawt.web.JolokiaConfiguredAgentServlet, which extends the Jolokia native org.jolokia.http.AgentServlet class, as defined in hawtio-war/WEB-INF/web.xml. If you want to customize the Jolokia servlet with the configuration parameters defined in the Jolokia documentation, you can pass them as System properties prefixed with jolokia.
For example:
jolokia.policyLocation = file:///opt/hawtio/my-jolokia-access.xml
3.2.1. RBAC Restrictor
For some runtimes that support HawtIO RBAC (role-based access control), HawtIO provides a custom Jolokia Restrictor implementation that provides an additional layer of protection over JMX operations based on the ACL (access control list) policy.
You cannot use HawtIO RBAC with Quarkus and Spring Boot yet. Enabling the RBAC Restrictor on those runtimes only imposes additional load without any gains.
To activate the HawtIO RBAC Restrictor, configure the Jolokia parameter restrictorClass via System property to use io.hawt.web.RBACRestrictor as follows:
jolokia.restrictorClass = io.hawt.system.RBACRestrictor
Chapter 4. Security and Authentication of HawtIO
HawtIO enables authentication out of the box, depending on the runtime/container it runs with. To use HawtIO with your application, you must either set up authentication for the runtime or disable HawtIO authentication.
4.1. Configuration properties
The following table lists the Security-related configuration properties for the HawtIO core system.
| Name | Default | Description |
|---|---|---|
| hawtio.authenticationContainerDiscoveryClasses | io.hawt.web.tomcat.TomcatAuthenticationContainerDiscovery | Comma-separated list of AuthenticationContainerDiscovery implementations to use. By default, there is just TomcatAuthenticationContainerDiscovery, which authenticates users on Tomcat from the tomcat-users.xml file. Remove it if you want to authenticate users on Tomcat from the configured JAAS login module, or add more classes of your own. |
| hawtio.authenticationContainerTomcatDigestAlgorithm | NONE | When using the Tomcat tomcat-users.xml file, passwords can be hashed instead of plain text. Use this to specify the digest algorithm; valid values are NONE, MD5, SHA, SHA-256, SHA-384, and SHA-512. |
| hawtio.authenticationEnabled | true | Whether or not security is enabled. |
| hawtio.keycloakClientConfig | classpath:keycloak.json | Keycloak configuration file used for the front end. It is mandatory if Keycloak integration is enabled. |
| hawtio.keycloakEnabled | false | Whether to enable or disable Keycloak integration. |
| hawtio.noCredentials401 | false | Whether to return HTTP status 401 when authentication is enabled, but no credentials have been provided. Returning 401 will cause the browser popup window to prompt for credentials. By default this option is false, returning HTTP status 403 instead. |
| hawtio.realm | hawtio | The security realm used to log in. |
| hawtio.rolePrincipalClasses | | Fully qualified principal class name(s). Separate multiple classes with a comma. |
| hawtio.roles | admin, manager, viewer | The user roles required to log in to the console. Separate multiple roles with a comma. Set to * or an empty value to disable role checking when HawtIO authenticates a user. |
| hawtio.tomcatUserFileLocation | conf/tomcat-users.xml | Specify an alternative location for the tomcat-users.xml file, e.g. /production/userlocation/. |
4.2. Quarkus
HawtIO is secured with the authentication mechanisms that Quarkus and Keycloak provide.
If you want to disable HawtIO authentication for Quarkus, add the following configuration to application.properties:
quarkus.hawtio.authenticationEnabled = false
4.2.1. Quarkus authentication mechanisms
In terms of Quarkus, HawtIO is just a web application, so the various mechanisms that Quarkus provides can be used to authenticate HawtIO in the same way they authenticate any web application.
Here we show how you can use the properties-based authentication with HawtIO for demonstration purposes.
The properties-based authentication is not recommended for use in production. This mechanism is for development and testing purposes only.
To use the properties-based authentication with HawtIO, add the following dependency to pom.xml:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-elytron-security-properties-file</artifactId>
</dependency>

You can then define users in application.properties to enable the authentication. For example, defining a user hawtio with password s3cr3t! and role admin would look like the following:

quarkus.security.users.embedded.enabled = true
quarkus.security.users.embedded.plain-text = true
quarkus.security.users.embedded.users.hawtio = s3cr3t!
quarkus.security.users.embedded.roles.hawtio = admin
Example:
See Quarkus example for a working example of the properties-based authentication.
4.2.2. Quarkus with Keycloak
4.3. Spring Boot
In addition to the standard JAAS authentication, HawtIO on Spring Boot can be secured through Spring Security or Keycloak. If you want to disable HawtIO authentication for Spring Boot, add the following configuration to application.properties:
hawtio.authenticationEnabled = false
4.3.1. Spring Security
To use Spring Security with HawtIO:
Add org.springframework.boot:spring-boot-starter-security to the dependencies in pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Spring Security configuration in src/main/resources/application.properties should look like the following:

spring.security.user.name = hawtio
spring.security.user.password = s3cr3t!
spring.security.user.roles = admin,viewer

A security config class has to be defined to set up how to secure the application with Spring Security:
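A minimal sketch of such a class (not necessarily the exact class from the springboot-security example; it simply requires authentication for every request and enables form and HTTP Basic login):

import static org.springframework.security.config.Customizer.withDefaults;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        // Require authentication for every request; users come from application.properties above.
        http.authorizeHttpRequests(authorize -> authorize.anyRequest().authenticated())
            .formLogin(withDefaults())
            .httpBasic(withDefaults());
        return http.build();
    }
}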
Example:
See springboot-security example for a working example.
4.3.1.1. Connecting to a remote application with Spring Security
If you try to connect to a remote Spring Boot application with Spring Security enabled, make sure the Spring Security configuration allows access from the HawtIO console. Most likely, the default CSRF protection prohibits remote access to the Jolokia endpoint and thus causes authentication failures at the HawtIO console.
Be aware that disabling CSRF protection will expose your application to the risk of CSRF attacks.
The easiest solution is to disable CSRF protection for the Jolokia endpoint at the remote application as follows.
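A minimal sketch of what that can look like in the remote application, assuming the Jolokia endpoint is exposed under /actuator/jolokia:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class JolokiaSecurityConfig {

    @Bean
    public SecurityFilterChain jolokiaFilterChain(HttpSecurity http) throws Exception {
        // Keep CSRF protection for the rest of the application, but ignore it for the
        // Jolokia actuator endpoint so that a remote HawtIO console can invoke it.
        http.csrf(csrf -> csrf.ignoringRequestMatchers("/actuator/jolokia/**"));
        return http.build();
    }
}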
To secure the Jolokia endpoint even without Spring Security's CSRF protection, you need to provide a jolokia-access.xml file under src/main/resources/ like the following snippet, so that only trusted nodes can access it:
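A minimal sketch (the allowed hosts below are illustrative placeholders; adjust them to the nodes that should be able to reach the endpoint):

<?xml version="1.0" encoding="utf-8"?>
<restrict>
  <remote>
    <!-- Only requests originating from these hosts/subnets may call Jolokia -->
    <host>127.0.0.1</host>
    <host>10.0.0.0/16</host>
  </remote>
</restrict>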
4.3.2. Spring Boot with Keycloak
Chapter 5. Setting up HawtIO on OpenShift 4
On OpenShift 4.x, setting up HawtIO involves installing and deploying it. The preferred mechanism for this installation is the HawtIO Operator, available from the OperatorHub (see Section 5.1, “Installing and deploying HawtIO on OpenShift 4 by using the OperatorHub”). Optionally, you can customize role-based access control (RBAC) for HawtIO as described in Section 5.2, “Role-based access control for HawtIO on OpenShift 4”.
5.1. Installing and deploying HawtIO on OpenShift 4 by using the OperatorHub
The HawtIO Operator is provided in the OpenShift OperatorHub for the installation of HawtIO. To deploy HawtIO, you install the Operator and then create a HawtIO Custom Resource (CR).
To install and deploy HawtIO:
- Log in to the OpenShift console in the web browser as a user with cluster admin access.
- Click Operators and then click OperatorHub.
- In the search field window, type HawtIO to filter the list of operators. Click HawtIO Operator.
In the HawtIO Operator install window, click Install. The Create Operator Subscription form opens:
- For Update Channel, select stable-v1.
For Installation Mode, accept the default (a specific namespace on the cluster).
Note: This mode determines which namespaces the Operator monitors for HawtIO CRs. This is different from which namespaces HawtIO monitors when it is fully deployed; the latter can be configured via the HawtIO CR.
- For Installed Namespace, select the namespace in which you want to install HawtIO Operator.
For the Update Approval, select Automatic or Manual to configure how OpenShift handles updates to HawtIO Operator.
- If the Automatic updates option is selected and a new version of HawtIO Operator is available, the OpenShift Operator Lifecycle Manager (OLM) automatically upgrades the running instance of HawtIO without human intervention;
- If the Manual updates option is selected and a newer version of an Operator is available, the OLM only creates an update request. A Cluster Administrator must then manually approve the update request to have HawtIO Operator updated to the new version.
- Click Install and OpenShift installs HawtIO Operator into the current namespace.
- To verify the installation, click Operators and then click Installed Operators. HawtIO should be visible in the list of operators.
To deploy HawtIO by using the OpenShift web console:
- In the list of Installed Operators, under the Name column, click HawtIO Operator.
- On the Operator Details page under Provided APIs, click Create HawtIO.
Accept the configuration default values or optionally edit them.
- For Replicas, you can increase the number of pods allocated to HawtIO to improve performance (for example, in a high-availability environment).
- For RBAC (role-based access control), specify a value in the Config Map field only if you want to customize the default RBAC behaviour and if the ConfigMap already exists in the namespace in which you installed HawtIO Operator.
- For Nginx, see Performance tuning for HawtIO Operator installation.
For Type, specify either:
- Cluster: for HawtIO to monitor all namespaces on the OpenShift cluster for any HawtIO-enabled applications;
- Namespace: for HawtIO to monitor only the HawtIO-enabled applications that have been deployed in the same namespace.
- Click Create. The HawtIO Operator Details page opens and shows the status of the deployment.
To open HawtIO:
- For a namespace deployment: In the OpenShift web console, open the project in which the HawtIO operator is installed, and then select Overview. In the Project Overview page, scroll down to the Launcher section and click the HawtIO link.
- For a cluster deployment, in the OpenShift web console’s title bar, click the grid icon. In the popup menu, under Red Hat Applications, click the HawtIO URL link.
- Log into HawtIO. An Authorize Access page opens in the browser listing the required permissions.
- Click Allow selected permissions. HawtIO opens in the browser and shows any HawtIO-enabled application pods that are authorized for access.
- Click Connect to view the monitored application. A new browser window opens showing the application in HawtIO.
5.2. Role-based access control for HawtIO on OpenShift 4
HawtIO offers role-based access control (RBAC) that infers access according to the user authorization provided by OpenShift. In HawtIO, RBAC determines a user’s ability to perform MBean operations on a pod.
For information on OpenShift authorization, see the Using RBAC to define and apply permissions section of the OpenShift documentation.
Role-based access is enabled by default when you use the Operator to install HawtIO on OpenShift. HawtIO RBAC leverages the user’s verb access on a pod resource in OpenShift to determine the user’s access to a pod’s MBean operations in HawtIO. By default, there are two user roles for HawtIO:
- admin: if a user can update a pod in OpenShift, then the user is conferred the admin role for HawtIO. The user can perform write MBean operations in HawtIO for the pod.
- viewer: if a user can get a pod in OpenShift, then the user is conferred the viewer role for HawtIO. The user can perform read-only MBean operations in HawtIO for the pod.
5.2.1. Determining access roles for HawtIO on OpenShift 4
HawtIO role-based access control is inferred from a user's OpenShift permissions for a pod. To determine the HawtIO access role granted to a particular user, obtain the OpenShift permissions granted to the user for that pod.
Prerequisites:
- The user’s name
- The pod’s name
Procedure:
To determine whether a user has the HawtIO admin role for the pod, run the following command to see whether the user can update the pod on OpenShift:

oc auth can-i update pods/<pod> --as <user>

- If the response is yes, the user has the admin role for the pod. The user can perform write operations in HawtIO for the pod.
To determine whether a user has the HawtIO viewer role for the pod, run the following command to see whether the user can get the pod on OpenShift:

oc auth can-i get pods/<pod> --as <user>

- If the response is yes, the user has the viewer role for the pod. The user can perform read-only operations in HawtIO for the pod. Depending on the context, HawtIO prevents a user with the viewer role from performing a write MBean operation, either by disabling an option or by displaying an "operation not allowed for this user" message when the user attempts a write MBean operation.
- If the response is no, the user is not bound to any HawtIO roles and the user cannot view the pod in HawtIO.
5.2.2. Customizing role-based access to HawtIO on OpenShift 4
If you use the OperatorHub to install HawtIO, role-based access control (RBAC) is enabled by default. To customize HawtIO RBAC behaviour, you must provide a ConfigMap resource that defines the custom RBAC behaviour before deploying HawtIO. Enter the name of this ConfigMap in the rbac configuration section of the HawtIO Custom Resource (CR).
The custom ConfigMap resource must be added in the same namespace in which the HawtIO Operator has been installed.
Prerequisite:
- The HawtIO Operator has been installed from the OperatorHub.
Procedure:
To customize HawtIO RBAC roles:
Create an RBAC ConfigMap:
Make sure the current OpenShift project is the project in which you want to install HawtIO. For example, to install HawtIO in the hawtio-test project, run this command:
oc project hawtio-test

To create a HawtIO RBAC ConfigMap from the template, run this command:
oc process -f https://raw.githubusercontent.com/hawtio/hawtio-online/2.x/docker/ACL.yaml -p APP_NAME=custom-hawtio | oc create -f -

Edit the new custom ConfigMap by using the command:
oc edit ConfigMap custom-hawtio-rbac

- When you save the edits, the ConfigMap resource is updated.
- Create a new HawtIO CR, as described above, and edit the rbac section by adding the name of the new ConfigMap under the property configMap (see the sketch after this procedure).
- Click Create. The Operator deploys a new version of HawtIO that uses the custom ConfigMap.
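A sketch of the relevant part of the HawtIO CR (the metadata name is illustrative; the apiVersion and kind follow the v1 CRD described in the next section):

apiVersion: hawt.io/v1
kind: Hawtio
metadata:
  name: hawtio-console
spec:
  type: Namespace
  rbac:
    configMap: custom-hawtio-rbac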
5.3. Migrating from Fuse Console
The version of the HawtIO Custom Resource Definition (CRD) has been upgraded in HawtIO from v1alpha1 to v1 and contains non-backwards-compatible changes. Because the CRD is cluster-wide, this has a detrimental impact on existing installations of Fuse Console if HawtIO is subsequently installed on the same cluster. Users are advised to uninstall all versions of Fuse Console before proceeding with the installation of HawtIO.
Users wishing to migrate their existing HawtIO Custom Resources to HawtIO can store the resource configuration in a file and re-apply it once the HawtIO Operator has been installed. On re-applying, the CR is upgraded to version v1 automatically. An important change in the new specification is that the version property can no longer be specified in the CR; the version is provided as an internal constant of the Operator itself.
5.4. Upgrading HawtIO on OpenShift 4
Red Hat OpenShift 4.x handles updates to Operators, including the HawtIO Operator. For more information, see the Operators OpenShift documentation. In turn, Operator updates can trigger application upgrades, depending on how the application is configured.
5.5. Tuning the performance of HawtIO on OpenShift 4
By default, HawtIO uses the following Nginx settings:
- clientBodyBufferSize: 256k
- proxyBuffers: 16 128k
- subrequestOutputBufferSize: 10m
For descriptions of these settings, see the Nginx documentation.
To tune the performance of HawtIO, you can set any of the clientBodyBufferSize, proxyBuffers, and subrequestOutputBufferSize environment variables. For example, if you are using HawtIO to monitor numerous pods and routes (for instance, 100 routes in total), you can resolve a loading timeout issue by setting HawtIO's subrequestOutputBufferSize environment variable to between 60m and 100m.
5.5.1. Performance tuning for HawtIO Operator installation
On OpenShift 4.x, you can set the Nginx performance tuning environment variables before or after you deploy HawtIO. If you do so afterwards, OpenShift redeploys HawtIO.
Prerequisite:
- You must have cluster admin access to the OpenShift cluster.
Procedure:
You can set the environment variables before or after you deploy HawtIO.
To set the environment variables before deploying HawtIO:
- In the OpenShift web console, in a project that has HawtIO Operator installed, select Operators > Installed Operators > HawtIO Operator.
- Click the HawtIO tab, and then click Create HawtIO.
- On the Create HawtIO page, in the Form view, scroll down to the Config > Nginx section.
Expand the Nginx section and then set the environment variables. For example:
- clientBodyBufferSize: 256k
- proxyBuffers: 16 128k
- subrequestOutputBufferSize: 100m
- Click Create to deploy HawtIO.
- After the deployment completes, open the Deployments > HawtIO-console page, and then click Environment to verify that the environment variables are in the list.
To set the environment variables after you deploy HawtIO:
- In the OpenShift web console, open the project in which HawtIO is deployed.
- Select Operators > Installed Operators > HawtIO Operator.
- Click the HawtIO tab, and then click HawtIO.
- Select Actions > Edit HawtIO.
- In the Editor window, scroll down to the spec section.
- Under the spec section, add a new nginx section and specify one or more environment variables (a sample follows this procedure).
- Click Save. OpenShift redeploys HawtIO.
- After the redeployment completes, open the Workloads > Deployments > HawtIO-console page, and then click Environment to see the environment variables in the list.
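The nginx section referenced in the procedure above, shown with the example values from this chapter (only the relevant part of the CR spec is shown):

spec:
  nginx:
    clientBodyBufferSize: 256k
    proxyBuffers: 16 128k
    subrequestOutputBufferSize: 100m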
5.5.2. Performance tuning for viewing applications on HawtIO
The enhanced performance tuning capability of HawtIO allows viewing applications with a large number of MBeans. To use this capability, perform the following steps.
Prerequisite:
- You must have cluster admin access to the OpenShift cluster.
Procedure:
Increase the memory limit for the applications.
To increase the memory limits after deploying HawtIO:
- In the OpenShift web console, open the project in which HawtIO is deployed.
- Select Operators > Installed Operators > HawtIO Operator.
- Click the HawtIO tab, and then click HawtIO.
- Select Actions > Edit HawtIO.
- In the Editor window, scroll down to the spec.resources section.
- Update the values for both requests and limits to the preferred amounts (see the sketch after this procedure).
- Click Save.
- HawtIO should redeploy using the new resource specification.
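A sketch of the spec.resources section referenced above (the amounts are illustrative, and it is an assumption that the field follows the usual Kubernetes resource-requirements shape):

spec:
  resources:
    requests:
      memory: 512Mi
    limits:
      memory: 1Gi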
Chapter 6. Setting up applications for HawtIO Online
This section shows how to deploy a Camel Quarkus application on OpenShift and make it HawtIO-enabled. Once deployed on OpenShift, it is discovered by HawtIO Online.
- This project uses Quarkus Container Images and Kubernetes extensions to build a container image and deploy it to a Kubernetes/OpenShift cluster (pom.xml).
The most important part of the HawtIO-enabled configuration is defined in the <properties> section. To make the application HawtIO-enabled, the Jolokia agent must be attached to the application with HTTPS and SSL client authentication configured. The client principal should match the one that the HawtIO Online instance provides (the default is hawtio-online.hawtio.svc); a sketch of typical agent options follows the local run commands below.

Running the application locally:
Run in development mode with:
mvn compile quarkus:dev

Or build the project and execute the runnable JAR:
mvn package && java -jar target/quarkus-app/quarkus-run.jar
Running with the Jolokia agent locally:
You can run this example with the Jolokia JVM agent locally as follows:
java -javaagent:target/quarkus-app/lib/main/org.jolokia.jolokia-agent-jvm-2.0.1-javaagent.jar -jar target/quarkus-app/quarkus-run.jar
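For reference, the Jolokia agent options that make the pod connectable by HawtIO Online typically configure HTTPS and SSL client authentication. A hedged sketch of such an invocation (the option values, in particular the CA certificate path and client principal, are assumptions based on the defaults mentioned above and may differ in the example project):

java -javaagent:target/quarkus-app/lib/main/org.jolokia.jolokia-agent-jvm-2.0.1-javaagent.jar=protocol=https,port=8778,useSslClientAuthentication=true,caCert=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt,clientPrincipal=cn=hawtio-online.hawtio.svc,discoveryEnabled=false -jar target/quarkus-app/quarkus-run.jar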
Deploy it to OpenShift:
To deploy it to a cluster, first change the container image parameters in pom.xml to fit the development environment. (The default image name is quay.io/hawtio/hawtio-online-example-camel-quarkus:latest, which should be pushed to the hawtio organisation on Quay.io; see the sample image properties after this procedure.)

Then build the project with the option -Dquarkus.container-image.push=true to push the built image to the preferred container registry:

mvn install -Dquarkus.container-image.push=true

The resources file for deployment is also generated at target/kubernetes/kubernetes.yml.
Use the kubectl or oc command to deploy the application with the resources file:

kubectl apply -f target/kubernetes/kubernetes.yml

- After the deployment is successful and the pod has started, the application log can be seen on the cluster.
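The image parameters mentioned above are standard Quarkus container-image settings; a sketch of what to override (shown here in application.properties form for illustration, while the example project sets the equivalents in pom.xml) might be:

quarkus.container-image.registry=quay.io
quarkus.container-image.group=hawtio
quarkus.container-image.name=hawtio-online-example-camel-quarkus
quarkus.container-image.tag=latest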
Chapter 7. Viewing containers and applications
When you log in to HawtIO for OpenShift, the HawtIO home page shows the available containers.
Procedure:
- To manage (create, edit, or delete) containers, use the OpenShift console.
- To view HawtIO-enabled applications and AMQ Brokers (if applicable) on the OpenShift cluster, click the Online tab.
Chapter 8. Viewing and managing Apache Camel applications
In HawtIO's Camel tab, you can view and manage Apache Camel contexts, routes, and dependencies.
You can view the following details:
- A list of all running Camel contexts
- Detailed information about each Camel context, such as the Camel version number and runtime statistics
- Lists of all routes in each Camel application and their runtime statistics
- Graphical representation of the running routes along with real-time metrics
You can also interact with a Camel application by:
- Starting and suspending contexts
- Managing the lifecycle of all Camel applications and their routes, so you can restart, stop, pause, resume, etc.
- Live tracing and debugging of running routes
- Browsing and sending messages to Camel endpoints
The Camel tab is only available when you connect to a container that uses one or more Camel routes.
8.1. Starting, suspending, or deleting a context
- In the Camel tab’s tree view, click Camel Contexts.
- Check the box next to one or more contexts in the list.
- Click Start or Suspend.
To delete a context:
- Stop the context.
- Click the ellipsis icon and then select Delete from the dropdown menu.
When you delete a context, you remove it from the deployed application.
8.2. Viewing Camel application details
- In the Camel tab’s tree view, click a Camel application.
- To view a list of application attributes and values, click Attributes.
- To view a graphical representation of the application attributes, click Chart and then click Edit to select the attributes that you want to see in the chart.
- To view inflight and blocked exchanges, click Exchanges.
- To view application endpoints, click Endpoints. You can filter the list by URL, Route ID, and direction.
- To view, enable, and disable statistics related to the Camel built-in type conversion mechanism that is used to convert message bodies and message headers to different types, click Type Converters.
- To view and execute JMX operations, such as adding or updating routes from XML or finding all Camel components available in the classpath, click Operations.
8.3. Viewing a list of the Camel routes and interacting with them
To view a list of routes:
- Click the Camel tab.
In the tree view, click the application’s routes folder:
To start, stop, or delete one or more routes:
- Check the box next to one or more routes in the list.
- Click Start or Stop.
To delete a route, you must first stop it. Then click the ellipsis icon and select Delete from the dropdown menu.
Note: When you delete a route, you remove it from the deployed application.
- You can also select a specific route in the tree view and then click the upper-right menu to start, stop, or delete it.
- To view a graphical diagram of the routes, click Route Diagram.
- To view inflight and blocked exchanges, click Exchanges.
- To view endpoints, click Endpoints. You can filter the list by URL, Route ID, and direction.
- Click Type Converters to view, enable, and disable statistics related to the Camel built-in type conversion mechanism, which is used to convert message bodies and message headers to different types.
To interact with a specific route:
- In the Camel tab’s tree view, select a route. To view a list of route attributes and values, click Attributes.
- To view a graphical representation of the route attributes, click Chart. You can click Edit to select the attributes that you want to see in the chart.
- To view inflight and blocked exchanges, click Exchanges.
- Click Operations to view and execute JMX operations on the route, such as dumping the route as XML or getting the route’s Camel ID value.
To trace messages through a route:
- In the Camel tab’s tree view, select a route.
- Select Trace, and then click Start tracing.
To send messages to a route:
- In the Camel tab’s tree view, open the context’s endpoints folder and then select an endpoint.
- Click the Send subtab.
- Configure the message in JSON or XML format.
- Click Send.
- Return to the route’s Trace tab to view the flow of messages through the route.
8.4. Debugging a route
- In the Camel tab’s tree view, select a route.
- Select Debug, and then click Start debugging.
To add a breakpoint, select a node in the diagram and then click Add breakpoint. A red dot appears in the node.
The node is added to the list of breakpoints.
Click the down arrow to step to the next node or the Resume button to resume running the route.
- Click the Pause button to suspend all threads for the route.
- Click Stop debugging when you are done. All breakpoints are cleared.
Chapter 9. Viewing and managing JMX domains and MBeans
Java Management Extensions (JMX) is a Java technology that allows you to manage resources (services, devices, and applications) dynamically at runtime. The resources are represented by objects called MBeans (for Managed Bean). You can manage and monitor resources as soon as they are created, implemented, or installed.
With the JMX plugin on HawtIO, you can view and manage JMX domains and MBeans. You can view MBean attributes, run commands, and create charts that show statistics for the MBeans.
The JMX tab provides a tree view of the active JMX domains and MBeans organized in folders. You can view details and execute commands on the MBeans.
Procedure:
To view and edit MBean attributes:
- In the tree view, select an MBean.
- Click the Attributes tab.
- Click an attribute to see its details.
To perform operations:
- In the tree view, select an MBean.
- Click the Operations tab and expand one of the listed operations.
- Click Execute to run the operation.
To view charts:
- In the tree view, select an item.
- Click the Chart tab.
Chapter 10. Viewing and managing Quartz Schedules
Quartz is a richly featured, open source job scheduling library that you can integrate within most Java applications. You can use Quartz to create simple or complex schedules for executing jobs.
A job is defined as a standard Java component that can execute virtually anything that you program it to do.
HawtIO shows the Quartz tab if your Camel route deploys the camel-quartz component. Note that you can alternatively access Quartz MBeans through the JMX tree view.
Procedure:
- In HawtIO, click the Quartz tab. The Quartz page includes a tree view of the Quartz Schedulers and Scheduler, Triggers, and Jobs tabs.
- To pause or start a scheduler, click the buttons on the Scheduler tab.
Click the Triggers tab to view the triggers that determine when jobs will run. For example, a trigger can specify that a job starts at a certain time of day (to the millisecond), on specified days, repeated a specified number of times, or at specific times.
- To filter the list of triggers, select State, Group, Name, or Type from the drop-down list. You can then further filter the list by selecting or typing in the fill-in field.
- To pause, resume, update, or manually fire a trigger, click the options in the Action column.
- Click the Jobs tab to view the list of running jobs. You can sort the list by the columns in the table: Group, Name, Durable, Recover, Job ClassName, and Description.
Chapter 11. Viewing Threads
You can view and monitor the state of threads.
Procedure:
- Click the Runtime tab and then the Threads subtab.
- The Threads page lists active threads and stack trace details for each thread. By default, the thread list shows all threads in descending ID order.
- To sort the list by increasing ID, click the ID column label.
- Optionally, filter the list by thread state (for example, Blocked) or by thread name.
- To drill down to detailed information for a specific thread, such as the lock class name and full stack trace for that thread, in the Actions column, click More.
Chapter 12. Ensuring correct data displays in HawtIO
If the display of the queues and connections in HawtIO is missing queues, missing connections, or displaying inconsistent icons, adjust the Jolokia collection size parameter that specifies the maximum number of elements in an array that Jolokia marshals in a response.
Procedure:
In the upper right corner of HawtIO, click the user icon and then click Preferences.
- Increase the value of the Maximum collection size option (the default is 50,000).
- Click Close.