
HawtIO Diagnostic Console Guide


Red Hat build of Apache Camel 4.0

Manage applications with Red Hat build of HawtIO

Abstract

When you deploy a HawtIO-enabled application, you can use HawtIO to monitor and interact with the integrations.

Preface

HawtIO provides enterprise monitoring tools for viewing and managing Red Hat HawtIO-enabled applications. It is a web-based console accessed from a browser to monitor and manage a running HawtIO-enabled container. HawtIO is based on the open source HawtIO software (https://hawt.io/). HawtIO Diagnostic Console Guide describes how to manage applications with HawtIO.

The audience for this guide is Apache Camel ecosystem developers and administrators. This guide assumes familiarity with Apache Camel and the processing requirements for your organization.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Overview of HawtIO

Important

Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments.

This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope.

HawtIO is a diagnostic console for the Red Hat build of Apache Camel and the Red Hat build of AMQ. It is a pluggable web diagnostic console built with modern web technologies such as React and PatternFly. HawtIO provides a central interface to examine and manage the details of one or more deployed HawtIO-enabled containers. HawtIO is available when you install HawtIO standalone or use HawtIO on OpenShift. The integrations that you can view and manage in HawtIO depend on the plugins that are running. You can monitor HawtIO and system resources, perform updates, and start or stop services.

The pluggable architecture is based on Webpack Module Federation and is highly extensible; you can dynamically extend HawtIO with your own plugins or automatically discover plugins inside the JVM. HawtIO already ships with built-in plugins that make it highly useful out of the box for your JVM application. The plugins include Apache Camel, Connect, JMX, Logs, Runtime, Quartz, and Spring Boot. HawtIO is primarily designed to be used with Camel Quarkus and Camel Spring Boot. It is also a tool for managing microservice applications. HawtIO is cloud-native and ready to run in the cloud: you can deploy it to Kubernetes and OpenShift with the HawtIO Operator.

The benefits of HawtIO include:

  • Runtime management of JVM via JMX, especially that of Camel applications and AMQ broker, with specialized views
  • Visualization and debugging/tracing of Camel routes
  • Simple managing and monitoring of application metrics

Chapter 2. Installing HawtIO

There are several options for getting started with the HawtIO console, as described in the following sections.

2.1. Adding Red Hat repositories to Maven

To access artifacts that are in Red Hat Maven repositories, you must add those repositories to Maven’s settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user’s home directory. If there is no user-specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml.

Prerequisite:

You know the location of the settings.xml file in which you want to add the Red Hat repositories.

Procedure:

  1. In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example:

    <?xml version="1.0"?>
    <settings>
    
      <profiles>
        <profile>
          <id>extra-repos</id>
          <activation>
            <activeByDefault>true</activeByDefault>
          </activation>
          <repositories>
           <repository>
             <id>redhat-ga-repository</id>
             <url>https://maven.repository.redhat.com/ga</url>
             <releases>
               <enabled>true</enabled>
             </releases>
             <snapshots>
               <enabled>false</enabled>
             </snapshots>
            </repository>
            <repository>
              <id>redhat-ea-repository</id>
              <url>https://maven.repository.redhat.com/earlyaccess/all</url>
              <releases>
                <enabled>true</enabled>
              </releases>
              <snapshots>
                <enabled>false</enabled>
              </snapshots>
            </repository>
          </repositories>
          <pluginRepositories>
            <pluginRepository>
              <id>redhat-ga-repository</id>
              <url>https://maven.repository.redhat.com/ga</url>
              <releases>
                <enabled>true</enabled>
              </releases>
              <snapshots>
                <enabled>false</enabled>
              </snapshots>
            </pluginRepository>
            <pluginRepository>
              <id>redhat-ea-repository</id>
              <url>https://maven.repository.redhat.com/earlyaccess/all</url>
              <releases>
                <enabled>true</enabled>
              </releases>
              <snapshots>
                <enabled>false</enabled>
              </snapshots>
            </pluginRepository>
          </pluginRepositories>
        </profile>
      </profiles>
    
      <activeProfiles>
        <activeProfile>extra-repos</activeProfile>
      </activeProfiles>
    
    </settings>

2.2. Running from CLI (JBang)

You can install and run HawtIO from the CLI by using JBang.

Note

If you don’t have JBang locally yet, first install it: https://www.jbang.dev/download/

Procedure:

  1. Install the latest HawtIO on your machine using the jbang command:

    $ jbang app install -Dhawtio.jbang.version=4.0.0.redhat-00040 hawtio@hawtio/hawtio
    Note

    This installation method is available only with jbang>=0.115.0.

  2. The previous step installs the hawtio command. Launch a HawtIO instance with the following command:

    $ hawtio
  3. The command will automatically open the console at http://0.0.0.0:8080/hawtio/. To change the port number, run the following command:

    $ hawtio --port 8090
  4. For more information on the configuration options of the CLI, run the following command:

    $ hawtio --help
    Usage: hawtio [-hjoV] [-c=<contextPath>] [-d=<plugins>] [-e=<extraClassPath>]
                  [-H=<host>] [-k=<keyStore>] [-l=<warLocation>] [-p=<port>]
                  [-s=<keyStorePass>] [-w=<war>]
    Run Hawtio
      -c, --context-path=<contextPath>
                          Context path.
      -d, --plugins-dir=<plugins>
                          Directory to search for .war files to install as 3rd
                            party plugins.
      -e, --extra-class-path=<extraClassPath>
                          Extra class path.
      -h, --help          Print usage help and exit.
      -H, --host=<host>   Hostname to listen to.
      -j, --join          Join server thread.
      -k, --key-store=<keyStore>
                          JKS keyStore with the keys for https.
      -l, --war-location=<warLocation>
                          Directory to search for .war files.
      -o, --open-url      Open the web console automatic in the web browser.
      -p, --port=<port>   Port number.
      -s, --key-store-pass=<keyStorePass>
                          Password for the JKS keyStore with the keys for https.
      -V, --version       Print Hawtio version
      -w, --war=<war>     War file or directory of the hawtio web application.

2.3. Running a Quarkus app

You can attach HawtIO to your Quarkus application in a few simple steps.

Procedure:

  1. Add io.hawt:hawtio-quarkus and the supporting Camel Quarkus extensions to the dependencies in pom.xml:

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>io.hawt</groupId>
          <artifactId>hawtio-bom</artifactId>
          <version>4.0.0.redhat-00040</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
      <!-- ... other BOMs or dependencies ... -->
    </dependencyManagement>
    
    <dependencies>
      <dependency>
        <groupId>io.hawt</groupId>
        <artifactId>hawtio-quarkus</artifactId>
      </dependency>
    
       <!-- Mandatory for enabling Camel management via JMX / Hawtio -->
      <dependency>
        <groupId>org.apache.camel.quarkus</groupId>
        <artifactId>camel-quarkus-management</artifactId>
      </dependency>
    
      <!-- (Optional) Required for Hawtio Camel route diagram tab -->
      <dependency>
        <groupId>org.apache.camel.quarkus</groupId>
        <artifactId>camel-quarkus-jaxb</artifactId>
      </dependency>
    
      <!-- ... other dependencies ... -->
    </dependencies>
  2. Run HawtIO with your Quarkus application in development mode as follows:

    mvn compile quarkus:dev
  3. Open http://localhost:8080/hawtio to view the HawtIO console.

2.4. Running a Spring Boot app

You can attach HawtIO to your Spring Boot application in a few simple steps.

Procedure:

  1. Add io.hawt:hawtio-springboot and the supporting Camel Spring Boot starters to the dependencies in pom.xml:

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>io.hawt</groupId>
          <artifactId>hawtio-bom</artifactId>
          <version>4.0.0.redhat-00040</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
        <!-- ... other BOMs or dependencies ... -->
      </dependencies>
    </dependencyManagement>
    
    <dependencies>
      <dependency>
        <groupId>io.hawt</groupId>
        <artifactId>hawtio-springboot</artifactId>
      </dependency>
    
       <!-- Mandatory for enabling Camel management via JMX / Hawtio -->
      <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-management-starter</artifactId>
      </dependency>
    
      <!-- (Optional) Required for Hawtio Camel route diagram tab -->
      <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-spring-boot-xml-starter</artifactId>
      </dependency>
    
      <!-- ... other dependencies ... -->
    </dependencies>
  2. Enable the HawtIO and Jolokia endpoints by adding the following lines to application.properties:

    spring.jmx.enabled = true
    management.endpoints.web.exposure.include = hawtio,jolokia
  3. Run HawtIO with your Spring Boot application in development mode as follows:

    mvn spring-boot:run
  4. Open http://localhost:8080/actuator/hawtio to view the HawtIO console.

2.4.1. Configuring HawtIO path

If you prefer not to have the /actuator base path for the HawtIO endpoint, you can customize it as follows:

  1. Customize the Spring Boot management base path with the management.endpoints.web.base-path property:

    management.endpoints.web.base-path = /
  2. You can also customize the path to the HawtIO endpoint by setting the management.endpoints.web.path-mapping.hawtio property:

    management.endpoints.web.path-mapping.hawtio = hawtio/console
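
Combining the two properties above, a minimal application.properties sketch and the resulting console URL (assuming the default server port 8080; shown here only as an illustration) would look like this:

management.endpoints.web.base-path = /
management.endpoints.web.path-mapping.hawtio = hawtio/console
# The HawtIO console is then served at http://localhost:8080/hawtio/console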

Example:

  • The HawtIO Spring Boot example is a working example that shows how to monitor a web application that exposes information about Apache Camel routes, metrics, and more.
  • A good MBean for real-time values and charts is java.lang/OperatingSystem. Try looking at Camel routes. Notice that, as you change selections in the tree, the list of available tabs changes dynamically based on the content.

Chapter 3. Configuration of HawtIO

The behaviour of HawtIO and its plugins can be configured through system properties.
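
For example, you can pass any of the properties described below to the JVM that runs HawtIO with the -D flag at startup (an illustrative command; the application JAR name is a placeholder):

java -Dhawtio.disableProxy=true -jar your-app.jar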

3.1. Configuration properties

The following list describes the configuration properties for the HawtIO core system and various plugins. Each entry shows the property name, its default value, and a description.

  • hawtio.disableProxy (default: false): When set to true, ProxyServlet (/hawtio/proxy/*) is disabled. This makes the Connect plugin unavailable, which means HawtIO can no longer connect to remote JVMs; if you do not use the Connect plugin, you might want to disable the proxy for security reasons.
  • hawtio.localAddressProbing (default: true): Whether local address probing for the proxy allowlist is enabled upon startup. Set this property to false to disable it.
  • hawtio.proxyAllowlist (default: localhost, 127.0.0.1): Comma-separated allowlist of target hosts that the Connect plugin can connect to through ProxyServlet. All hosts not listed in this allowlist are denied for security reasons. This option can be set to * to allow all hosts. Prefixing an element of the list with "r:" defines a regular expression (example: localhost,r:myserver[0-9]+.mydomain.com).
  • hawtio.redirect.scheme (no default): The scheme to use when redirecting to the login page when authentication is required.
  • hawtio.sessionTimeout (no default): The maximum time interval, in seconds, that the servlet container keeps the session open between client accesses. If this option is not configured, HawtIO uses the default session timeout of the servlet container.

3.1.1. Quarkus

For Quarkus, all those properties are configurable in application.properties or application.yaml with the quarkus.hawtio prefix.

For example:

quarkus.hawtio.disableProxy = true

3.1.2. Spring Boot

For Spring Boot, all those properties are configurable in application.properties or application.yaml as is.

For example:

hawtio.disableProxy = true

3.2. Configuring Jolokia through system properties

The Jolokia agent is deployed automatically with io.hawt.web.JolokiaConfiguredAgentServlet, which extends the native Jolokia org.jolokia.http.AgentServlet class and is defined in hawtio-war/WEB-INF/web.xml. If you want to customize the Jolokia servlet with the configuration parameters that are defined in the Jolokia documentation, you can pass them as system properties prefixed with jolokia.

For example:

jolokia.policyLocation = file:///opt/hawtio/my-jolokia-access.xml

3.2.1. RBAC Restrictor

For some runtimes that support HawtIO RBAC (role-based access control), HawtIO provides a custom Jolokia Restrictor implementation that provides an additional layer of protection over JMX operations based on the ACL (access control list) policy.

Warning

You cannot use HawtIO RBAC with Quarkus and Spring Boot yet. Enabling the RBAC Restrictor on those runtimes only imposes additional load without any gains.

To activate the HawtIO RBAC Restrictor, configure the Jolokia restrictorClass parameter via a system property to point to the HawtIO Restrictor implementation, as follows:

jolokia.restrictorClass = io.hawt.system.RBACRestrictor

Chapter 4. Security and Authentication of HawtIO

HawtIO enables authentication out of the box, depending on the runtime or container it runs with. To use HawtIO with your application, you must either set up authentication for the runtime or disable HawtIO authentication.

4.1. Configuration properties

The following list describes the security-related configuration properties for the HawtIO core system. Each entry shows the property name, its default value, and a description.

  • hawtio.authenticationContainerDiscoveryClasses (default: io.hawt.web.tomcat.TomcatAuthenticationContainerDiscovery): Comma-separated list of AuthenticationContainerDiscovery implementations to use. By default, only TomcatAuthenticationContainerDiscovery is used, which authenticates users on Tomcat from the tomcat-users.xml file. Remove it if you want to authenticate users on Tomcat from the configured JAAS login module, or add more classes of your own.
  • hawtio.authenticationContainerTomcatDigestAlgorithm (default: NONE): When using the Tomcat tomcat-users.xml file, passwords can be hashed instead of plain text. Use this property to specify the digest algorithm; valid values are NONE, MD5, SHA, SHA-256, SHA-384, and SHA-512.
  • hawtio.authenticationEnabled (default: true): Whether or not security is enabled.
  • hawtio.keycloakClientConfig (default: classpath:keycloak.json): Keycloak configuration file used for the front end. It is mandatory if Keycloak integration is enabled.
  • hawtio.keycloakEnabled (default: false): Whether to enable or disable Keycloak integration.
  • hawtio.noCredentials401 (default: false): Whether to return HTTP status 401 when authentication is enabled but no credentials have been provided. Returning 401 causes the browser to show a popup window prompting for credentials. By default this option is false, and HTTP status 403 is returned instead.
  • hawtio.realm (default: hawtio): The security realm used to log in.
  • hawtio.rolePrincipalClasses (no default): Fully qualified principal class name(s). Multiple classes can be separated by a comma.
  • hawtio.roles (default: admin, manager, viewer): The user roles required to log in to the console. Multiple roles can be separated by a comma. Set to * or an empty value to disable role checking when HawtIO authenticates a user.
  • hawtio.tomcatUserFileLocation (default: conf/tomcat-users.xml): Specifies an alternative location for the tomcat-users.xml file, for example /production/userlocation/.
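
For example, on Spring Boot, where these properties can be set as-is in application.properties (see Section 4.3, “Spring Boot”), an illustrative configuration that restricts the console to the admin role might look like this (the values are examples only):

hawtio.authenticationEnabled = true
hawtio.roles = admin
hawtio.noCredentials401 = true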

4.2. Quarkus

HawtIO is secured with the authentication mechanisms that Quarkus and, optionally, Keycloak provide.

If you want to disable HawtIO authentication for Quarkus, add the following configuration to application.properties:

quarkus.hawtio.authenticationEnabled = false

4.2.1. Quarkus authentication mechanisms

From the Quarkus point of view, HawtIO is just another web application, so the various mechanisms that Quarkus provides can be used to authenticate HawtIO in the same way as any other web application.

Here we show how you can use properties-based authentication with HawtIO for demonstration purposes.

Important

The properties-based authentication is not recommended for use in production. This mechanism is for development and testing purposes only.

  1. To use the properties-based authentication with HawtIO, add the following dependency to pom.xml:

    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-elytron-security-properties-file</artifactId>
    </dependency>
  2. You can then define users in application.properties to enable the authentication. For example, defining a user hawtio with password s3cr3t! and role admin would look like the following:

    quarkus.security.users.embedded.enabled = true
    quarkus.security.users.embedded.plain-text = true
    quarkus.security.users.embedded.users.hawtio = s3cr3t!
    quarkus.security.users.embedded.roles.hawtio = admin

Example:

See Quarkus example for a working example of the properties-based authentication.

4.2.2. Quarkus with Keycloak

See Keycloak Integration - Quarkus.

4.3. Spring Boot

In addition to the standard JAAS authentication, HawtIO on Spring Boot can be secured through Spring Security or Keycloak. If you want to disable HawtIO authentication for Spring Boot, add the following configuration to application.properties:

hawtio.authenticationEnabled = false

4.3.1. Spring Security

To use Spring Security with HawtIO:

  1. Add org.springframework.boot:spring-boot-starter-security to the dependencies in pom.xml:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
  2. Spring Security configuration in src/main/resources/application.properties should look like the following:

    spring.security.user.name = hawtio
    spring.security.user.password = s3cr3t!
    spring.security.user.roles = admin,viewer
  3. Define a security configuration class to set up how the application is secured with Spring Security:

    import org.springframework.context.annotation.Bean;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
    import org.springframework.security.web.SecurityFilterChain;
    import org.springframework.security.web.csrf.CookieCsrfTokenRepository;

    @EnableWebSecurity
    public class SecurityConfig
    {
        @Bean
        public SecurityFilterChain filterChain(HttpSecurity http) throws Exception
        {
            // Require authentication for all requests, enable form login and HTTP basic auth,
            // and expose the CSRF token in a cookie that the HawtIO front end can read
            http.authorizeRequests().anyRequest().authenticated()
                .and()
                .formLogin()
                .and()
                .httpBasic()
                .and()
                .csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse());
            return http.build();
        }
    }

Example:

See springboot-security example for a working example.

4.3.1.1. Connecting to a remote application with Spring Security

If you try to connect to a remote Spring Boot application with Spring Security enabled, make sure the Spring Security configuration allows access from the HawtIO console. Most likely, the default CSRF protection prohibits remote access to the Jolokia endpoint and thus causes authentication failures at the HawtIO console.

Warning

Be aware that disabling CSRF protection exposes your application to the risk of CSRF attacks.

  1. The easiest solution is to disable CSRF protection for the Jolokia endpoint at the remote application as follows.

    import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint;
    import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
    
    @EnableWebSecurity
    public class SecurityConfig
    {
    
        @Bean
        public SecurityFilterChain filterChain(HttpSecurity http) throws Exception
        {
            ...
            // Disable CSRF protection for the Jolokia endpoint
            http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class));
            return http.build();
        }
    
    }
  2. To secure the Jolokia endpoint even without Spring Security’s CSRF protection, you need to provide a jolokia-access.xml file under src/main/resources/ like the following (snippet) so that only trusted nodes can access it:

    <restrict>
      ...
      <cors>
        <allow-origin>http*://localhost:*</allow-origin>
        <allow-origin>http*://127.0.0.1:*</allow-origin>
        <allow-origin>http*://*.example.com</allow-origin>
        <allow-origin>http*://*.example.com:*</allow-origin>
    
        <strict-checking />
      </cors>
    </restrict>

4.3.2. Spring Boot with Keycloak

See Keycloak Integration - Spring Boot.

Chapter 5. Setting up HawtIO on OpenShift 4

On OpenShift 4.x, setting up HawtIO involves installing and deploying it. The preferred mechanism for this installation is the HawtIO Operator available from the OperatorHub (see Section 5.1, “Installing and deploying HawtIO on OpenShift 4 by using the OperatorHub”). Optionally, you can customize role-based access control (RBAC) for HawtIO as described in Section 5.2, “Role-based access control for HawtIO on OpenShift 4”.

5.1. Installing and deploying HawtIO on OpenShift 4 by using the OperatorHub

The HawtIO Operator is provided in the OpenShift OperatorHub for the installation of HawtIO. To deploy HawtIO, you install the operator and then create a HawtIO Custom Resource (CR).

To install and deploy HawtIO:

  1. Log in to the OpenShift console in the web browser as a user with cluster admin access.
  2. Click Operators and then click OperatorHub.
  3. In the search field window, type HawtIO to filter the list of operators. Click HawtIO Operator.
  4. In the HawtIO Operator install window, click Install. The Create Operator Subscription form opens:

    1. For Update Channel, select stable-v1.
    2. For Installation Mode, accept the default (a specific namespace on the cluster).

      Note

      This mode determines what namespaces the operator will monitor for HawtIO CRs. This is different to what namespaces HawtIO will monitor when it is fully deployed. The latter can be configured via the HawtIO CR.

    3. For Installed Namespace, select the namespace in which you want to install HawtIO Operator.
    4. For the Update Approval, select Automatic or Manual to configure how OpenShift handles updates to HawtIO Operator.

      1. If the Automatic updates option is selected and a new version of HawtIO Operator is available, the OpenShift Operator Lifecycle Manager (OLM) automatically upgrades the running instance of HawtIO Operator without human intervention.
      2. If the Manual updates option is selected and a newer version of an Operator is available, the OLM only creates an update request. A Cluster Administrator must then manually approve the update request to have HawtIO Operator updated to the new version.
  5. Click Install and OpenShift installs HawtIO Operator into the current namespace.
  6. To verify the installation, click Operators and then click Installed Operators. HawtIO should be visible in the list of operators.
  7. To deploy HawtIO by using the OpenShift web console:

    1. In the list of Installed Operators, under the Name column, click HawtIO Operator.
    2. On the Operator Details page under Provided APIs, click Create HawtIO.
    3. Accept the configuration default values or optionally edit them.

      1. For Replicas, you can increase the number of pods allocated to HawtIO to improve performance (for example, in a high-availability environment).
      2. For RBAC (role-based access control), specify a value in the Config Map field only if you want to customize the default RBAC behaviour and if the ConfigMap file already exists in the namespace in which you installed HawtIO Operator.
      3. For Nginx, see Performance tuning for HawtIO Operator installation.
      4. For Type, specify either:

        1. Cluster: for HawtIO to monitor all namespaces on the OpenShift cluster for any HawtIO-enabled applications;
        2. Namespace: for HawtIO to monitor only the HawtIO-enabled applications that have been deployed in the same namespace.
    4. Click Create. The HawtIO Operator Details page opens and shows the status of the deployment.
  8. To open HawtIO:

    1. For a namespace deployment: In the OpenShift web console, open the project in which the HawtIO operator is installed, and then select Overview. In the Project Overview page, scroll down to the Launcher section and click the HawtIO link.
    2. For a cluster deployment, in the OpenShift web console’s title bar, click the grid icon. In the popup menu, under Red Hat Applications, click the HawtIO URL link.
    3. Log into HawtIO. An Authorize Access page opens in the browser listing the required permissions.
    4. Click Allow selected permissions. HawtIO opens in the browser and shows any HawtIO-enabled application pods that are authorized for access.
  9. Click Connect to view the monitored application. A new browser window opens showing the application in HawtIO.

5.2. Role-based access control for HawtIO on OpenShift 4

HawtIO offers role-based access control (RBAC) that infers access according to the user authorization provided by OpenShift. In HawtIO, RBAC determines a user’s ability to perform MBean operations on a pod.

For information on OpenShift authorization, see the Using RBAC to define and apply permissions section of the OpenShift documentation.

Role-based access is enabled by default when you use the Operator to install HawtIO on OpenShift. HawtIO RBAC leverages the user’s verb access on a pod resource in OpenShift to determine the user’s access to a pod’s MBean operations in HawtIO. By default, there are two user roles for HawtIO:

  1. admin: if a user can update a pod in OpenShift, then the user is conferred the admin role for HawtIO. The user can perform write MBean operations in HawtIO for the pod.
  2. viewer: if a user can get a pod in OpenShift, then the user is conferred the viewer role for HawtIO. The user can perform read-only MBean operations in HawtIO for the pod.

5.2.1. Determining access roles for HawtIO on OpenShift 4

HawtIO role-based access control is inferred from a user’s OpenShift permissions for a pod. To determine the HawtIO access role granted to a particular user, obtain the OpenShift permissions granted to that user for a pod.

Prerequisites:

  • The user’s name
  • The pod’s name

Procedure:

  1. To determine whether a user has HawtIO admin role for the pod, run the following command to see whether the user can update the pod on OpenShift:

    oc auth can-i update pods/<pod> --as <user>
  2. If the response is yes, the user has the admin role for the pod. The user can perform write operations in HawtIO for the pod.
  3. To determine whether a user has HawtIO viewer role for the pod, run the following command to see whether the user can get a pod on OpenShift:

    oc auth can-i get pods/<pod> --as <user>
  4. If the response is yes, the user has the viewer role for the pod. The user can perform read-only operations in HawtIO for the pod. Depending on the context, HawtIO prevents the user with the viewer role from performing a write MBean operation, by disabling an option or by displaying an operation not allowed for this user message when the user attempts a write MBean operation.
  5. If the response is no, the user is not bound to any HawtIO roles and the user cannot view the pod in HawtIO.

5.2.2. Customizing role-based access to HawtIO on OpenShift 4

If you use the OperatorHub to install HawtIO, role-based access control (RBAC) is enabled by default. To customize HawtIO RBAC behaviour, before deployment of HawtIO, a ConfigMap resource (that defines the custom RBAC behaviour) must be provided. The name of this ConfigMap should be entered in the rbac configuration section of the HawtIO Custom Resource (CR).

The custom ConfigMap resource must be added in the same namespace in which the HawtIO Operator has been installed.

Prerequisite:

  • The HawtIO Operator has been installed from the OperatorHub.

Procedure:

To customize HawtIO RBAC roles:

  1. Create an RBAC ConfigMap:

    1. Make sure the current OpenShift project is the project to which you want to install HawtIO. For example, to install HawtIO in the hawtio-test project, run this command:

      oc project hawtio-test
    2. Create a HawtIO RBAC ConfigMap file from the template, and run this command:

      oc process -f https://raw.githubusercontent.com/hawtio/hawtio-online/2.x/docker/ACL.yaml -p APP_NAME=custom-hawtio | oc create -f -
    3. Edit the new custom ConfigMap, using the command:

      oc edit ConfigMap custom-hawtio-rbac
    4. Save your edits to update the ConfigMap resource.
  2. Create a new HawtIO CR, as described above, and edit the rbac section by adding the name of the new ConfigMap under the configMap property (see the sketch after this procedure).
  3. Click Create. The operator deploys a new version of HawtIO that uses the custom ConfigMap.
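
The rbac section of the new CR references the custom ConfigMap by name. A minimal sketch, modelled on the CR example shown later in this chapter and assuming the ConfigMap name custom-hawtio-rbac created above:

apiVersion: hawt.io/v1
kind: Hawtio
metadata:
  name: hawtio-console
spec:
  type: Namespace
  rbac:
    configMap: custom-hawtio-rbac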

5.3. Migrating from Fuse Console

The version of the HawtIO Custom Resource Definition (CRD) has been upgraded in HawtIO from v1alpha1 to v1 and contains non-backwards compatible changes. Therefore, since the CRD is cluster-wide, this will have a detrimental impact on existing installations of Fuse Console if HawtIO is subsequently installed on the same cluster. Users are advised at this time to uninstall all versions of Fuse Console before proceeding with the installation of HawtIO.

Users wishing to migrate their existing HawtIO Custom Resources can store the resource configuration in a file and re-apply it once the HawtIO Operator has been installed. On re-applying, the CR is upgraded to version v1 automatically. An important change in the new specification is that the version property can no longer be specified in the CR, because the version is provided as an internal constant of the operator itself.
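
For example, assuming an existing CR named hawtio-console (the name is illustrative), the standard oc commands to save the configuration and re-apply it later would look like this:

oc get hawtio hawtio-console -o yaml > hawtio-console-cr.yaml
# ...uninstall Fuse Console and install the HawtIO Operator...
oc apply -f hawtio-console-cr.yaml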

5.4. Upgrading HawtIO on OpenShift 4

Red Hat OpenShift 4.x handles updates to operators, including the HawtIO Operator. For more information, see the Operators section of the OpenShift documentation. In turn, operator updates can trigger application upgrades, depending on how the application is configured.

5.5. Tuning the performance of HawtIO on OpenShift 4

By default, HawtIO uses the following Nginx settings:

  • clientBodyBufferSize: 256k
  • proxyBuffers: 16 128k
  • subrequestOutputBufferSize: 10m
Note

For descriptions of these settings, see the Nginx documentation.

To tune the performance of HawtIO, you can set any of the clientBodyBufferSize, proxyBuffers, and subrequestOutputBufferSize environment variables. For example, if you are using HawtIO to monitor numerous pods and routes (for instance, 100 routes in total), you can resolve a loading timeout issue by setting HawtIO’s subrequestOutputBufferSize environment variable to a value between 60m and 100m.

5.5.1. Performance tuning for HawtIO Operator installation

On OpenShift 4.x, you can set the Nginx performance tuning environment variables before or after you deploy HawtIO. If you do so afterwards, OpenShift redeploys HawtIO.

Prerequisite:

  • You must have cluster admin access to the OpenShift cluster.

Procedure:

You can set the environment variables before or after you deploy HawtIO.

  1. To set the environment variables before deploying HawtIO:

    1. In the OpenShift web console, in a project that has HawtIO Operator installed, select Operators > Installed Operators > HawtIO Operator.
    2. Click the HawtIO tab, and then click Create HawtIO.
    3. On the Create HawtIO page, in the Form view, scroll down to the Config > Nginx section.
    4. Expand the Nginx section and then set the environment variables. For example:

      1. clientBodyBufferSize: 256k
      2. proxyBuffers: 16 128k
      3. subrequestOutputBufferSize: 100m
    5. Click Create to deploy HawtIO.
    6. After the deployment completes, open the Deployments > HawtIO-console page, and then click Environment to verify that the environment variables are in the list.
  2. To set the environment variables after you deploy HawtIO:

    1. In the OpenShift web console, open the project in which HawtIO is deployed.
    2. Select Operators > Installed Operators > HawtIO Operator.
    3. Click the HawtIO tab, and then click HawtIO.
    4. Select Actions > Edit HawtIO.
    5. In the Editor window, scroll down to the spec section.
    6. Under the spec section, add a new nginx section and specify one or more environment variables, for example:

      apiVersion: hawt.io/v1
      kind: Hawtio
      metadata:
        name: hawtio-console
      spec:
        type: Namespace
        nginx:
          clientBodyBufferSize: 256k
          proxyBuffers: 16 128k
          subrequestOutputBufferSize: 100m
    7. Click Save. OpenShift redeploys HawtIO.
    8. After the redeployment completes, open the Workloads > Deployments > HawtIO-console page, and then click Environment to see the environment variables in the list.

5.5.2. Performance tuning for viewing applications on HawtIO

The enhanced performance tuning capability of HawtIO allows you to view applications with a large number of MBeans. To use this capability, perform the following steps.

Prerequisite:

  • You must have cluster admin access to the OpenShift cluster.

Procedure:

Increase the memory limit for the applications.

  1. To increase the memory limits after deploying HawtIO:

    1. In the OpenShift web console, open the project in which HawtIO is deployed.
    2. Select Operators > Installed Operators > HawtIO Operator.
    3. Click the HawtIO tab, and then click HawtIO.
    4. Select Actions > Edit HawtIO.
    5. In the Editor window, scroll down to the spec.resources section.
    6. Update the values for both requests and limits to the preferred amounts (see the sketch after this procedure).
    7. Click Save.
    8. HawtIO should redeploy using the new resource specification.
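
A minimal sketch of the spec.resources section with illustrative memory values (requests and limits follow the standard Kubernetes resource format; adjust the amounts to your environment):

spec:
  resources:
    requests:
      memory: 256Mi
    limits:
      memory: 1Gi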

Chapter 6. Setting up applications for HawtIO Online

This section shows how to deploy a Camel Quarkus application on OpenShift and make it HawtIO-enabled. Once deployed on OpenShift, the application is discovered by HawtIO Online.

  1. This project uses the Quarkus Container Image and Kubernetes extensions to build a container image and deploy it to a Kubernetes/OpenShift cluster (see pom.xml).
  2. The most important part of the HawtIO-enabled configuration is defined in the <properties> section. To make the application HawtIO-enabled, the Jolokia agent must be attached to it with HTTPS and SSL client authentication configured. The client principal should match the one that the HawtIO Online instance provides (the default is hawtio-online.hawtio.svc).

    <properties>
        <jolokia.protocol>https</jolokia.protocol>
        <jolokia.host>*</jolokia.host>
        <jolokia.port>8778</jolokia.port>
        <jolokia.useSslClientAuthentication>true</jolokia.useSslClientAuthentication>
        <jolokia.caCert>/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt</jolokia.caCert>
        <jolokia.clientPrincipal.1>cn=hawtio-online.hawtio.svc</jolokia.clientPrincipal.1>
        <jolokia.extendedClientCheck>true</jolokia.extendedClientCheck>
        <jolokia.discoveryEnabled>false</jolokia.discoveryEnabled>
    </properties>
  3. Running the application locally:

    1. Run in development mode with:

      mvn compile quarkus:dev
    2. Or build the project and execute the runnable JAR:

      mvn package && java -jar target/quarkus-app/quarkus-run.jar
  4. Running with the Jolokia agent locally:

    1. You can run this example with Jolokia JVM agent locally as follows:

      java -javaagent:target/quarkus-app/lib/main/org.jolokia.jolokia-agent-jvm-2.0.1-javaagent.jar -jar target/quarkus-app/quarkus-run.jar
  5. Deploy it to OpenShift:

    1. To deploy it to a cluster, first change the container image parameters in pom.xml to fit your environment. (The default image name is quay.io/hawtio/hawtio-online-example-camel-quarkus:latest, which would be pushed to the hawtio organisation on Quay.io.)

      <!--
        Container registry and group should be changed to those which your application uses.
      -->
      <quarkus.container-image.registry>quay.io</quarkus.container-image.registry>
      <quarkus.container-image.group>hawtio</quarkus.container-image.group>
      <quarkus.container-image.tag>latest</quarkus.container-image.tag>
    2. Then build the project with option -Dquarkus.container-image.push=true to push the build image to the preferred container registry:

      mvn install -Dquarkus.container-image.push=true
    3. The resource file for deployment is also generated at target/kubernetes/kubernetes.yml. Use the kubectl or oc command to deploy the application with the resource file:

      kubectl apply -f target/kubernetes/kubernetes.yml
    4. After the deployment is successful and the pod has started, you can view the application log on the cluster.

Chapter 7. Viewing containers and applications

When you log in to HawtIO on OpenShift, the HawtIO home page shows the available containers.

Procedure:

  1. To manage (create, edit, or delete) containers, use the OpenShift console.
  2. To view HawtIO-enabled applications and AMQ Brokers (if applicable) on the OpenShift cluster, click the Online tab.

Chapter 8. Viewing and managing Apache Camel applications

In HawtIO’s Camel tab, you can view and manage Apache Camel contexts, routes, and dependencies.

You can view the following details:

  1. A list of all running Camel contexts
  2. Detailed information about each Camel context, such as the Camel version number and runtime statistics
  3. Lists of all routes in each Camel application and their runtime statistics
  4. Graphical representation of the running routes along with real time metrics

You can also interact with a Camel application by:

  1. Starting and suspending contexts
  2. Managing the lifecycle of all Camel applications and their routes, so you can restart, stop, pause, resume, etc.
  3. Live tracing and debugging of running routes
  4. Browsing and sending messages to Camel endpoints
Note

The Camel tab is only available when you connect to a container that uses one or more Camel routes.

8.1. Starting, suspending, or deleting a context

  1. In the Camel tab’s tree view, click Camel Contexts.
  2. Check the box next to one or more contexts in the list.
  3. Click Start or Suspend.
  4. To delete a context:

    1. Stop the context.
    2. Click the ellipsis icon and then select Delete from the dropdown menu.
Note

When you delete a context, you remove it from the deployed application.

8.2. Viewing Camel application details

  1. In the Camel tab’s tree view, click a Camel application.
  2. To view a list of application attributes and values, click Attributes.
  3. To view a graphical representation of the application attributes, click Chart and then click Edit to select the attributes that you want to see in the chart.
  4. To view inflight and blocked exchanges, click Exchanges.
  5. To view application endpoints, click Endpoints. You can filter the list by URL, Route ID, and direction.
  6. To view, enable, and disable statistics related to the Camel built-in type conversion mechanism that is used to convert message bodies and message headers to different types, click Type Converters.
  7. To view and execute JMX operations, such as adding or updating routes from XML or finding all Camel components available in the classpath, click Operations.

8.3. Viewing a list of the Camel routes and interacting with them

  1. To view a list of routes:

    1. Click the Camel tab.
    2. In the tree view, click the application’s routes folder.

  2. To start, stop, or delete one or more routes:

    1. Check the box next to one or more routes in the list.
    2. Click Start or Stop.
    3. To delete a route, you must first stop it. Then click the ellipsis icon and select Delete from the dropdown menu.

      Note
      • When you delete a route, you remove it from the deployed application.
      • You can also select a specific route in the tree view and then click the upper-right menu to start, stop, or delete it.
  3. To view a graphical diagram of the routes, click Route Diagram.
  4. To view inflight and blocked exchanges, click Exchanges.
  5. To view endpoints, click Endpoints. You can filter the list by URL, Route ID, and direction.
  6. Click Type Converters to view, enable, and disable statistics related to the Camel built-in type conversion mechanism, which is used to convert message bodies and message headers to different types.
  7. To interact with a specific route:

    1. In the Camel tab’s tree view, select a route. To view a list of route attributes and values, click Attributes.
    2. To view a graphical representation of the route attributes, click Chart. You can click Edit to select the attributes that you want to see in the chart.
    3. To view inflight and blocked exchanges, click Exchanges.
    4. Click Operations to view and execute JMX operations on the route, such as dumping the route as XML or getting the route’s Camel ID value.
  8. To trace messages through a route:

    1. In the Camel tab’s tree view, select a route.
    2. Select Trace, and then click Start tracing.
  9. To send messages to a route:

    1. In the Camel tab’s tree view, open the context’s endpoints folder and then select an endpoint.
    2. Click the Send subtab.
    3. Configure the message in JSON or XML format.
    4. Click Send.
    5. Return to the route’s Trace tab to view the flow of messages through the route.

8.4. Debugging a route

  1. In the Camel tab’s tree view, select a route.
  2. Select Debug, and then click Start debugging.
  3. To add a breakpoint, select a node in the diagram and then click Add breakpoint. A red dot appears in the node.

  4. The node is added to the list of breakpoints.

  5. Click the down arrow to step to the next node or the Resume button to resume running the route.

  6. Click the Pause button to suspend all threads for the route.
  7. Click Stop debugging when you are done. All breakpoints are cleared.

Chapter 9. Viewing and managing JMX domains and MBeans

Java Management Extensions (JMX) is a Java technology that allows you to manage resources (services, devices, and applications) dynamically at runtime. The resources are represented by objects called MBeans (for Managed Bean). You can manage and monitor resources as soon as they are created, implemented, or installed.

With the JMX plugin on HawtIO, you can view and manage JMX domains and MBeans. You can view MBean attributes, run commands, and create charts that show statistics for the MBeans.

The JMX tab provides a tree view of the active JMX domains and MBeans organized in folders. You can view details and execute commands on the MBeans.

Procedure:

  1. To view and edit MBean attributes:

    1. In the tree view, select an MBean.
    2. Click the Attributes tab.
    3. Click an attribute to see its details.
  2. To perform operations:

    1. In the tree view, select an MBean.
    2. Click the Operations tab, expand one of the listed operations.
    3. Click Execute to run the operation.
  3. To view charts:

    1. In the tree view, select an item.
    2. Click the Chart tab.

Chapter 10. Viewing and managing Quartz Schedules

Quartz is a richly featured, open source job scheduling library that you can integrate within most Java applications. You can use Quartz to create simple or complex schedules for executing jobs.

A job is defined as a standard Java component that can execute virtually anything that you program it to do.

HawtIO shows the Quartz tab if your Camel route deploys the camel-quartz component. Note that you can alternatively access Quartz MBeans through the JMX tree view.

Procedure:

  1. In HawtIO, click the Quartz tab. The Quartz page includes a tree view of the Quartz Schedulers and Scheduler, Triggers, and Jobs tabs.
  2. To pause or start a scheduler, click the buttons on the Scheduler tab.
  3. Click the Triggers tab to view the triggers that determine when jobs will run. For example, a trigger can specify to start a job at a certain time of day (to the millisecond), on specified days, or repeated a specified number of times or at specific times.

    1. To filter the list of triggers, select State, Group, Name, or Type from the drop-down list. You can then further filter the list by selecting or typing in the fill-in field.
    2. To pause, resume, update, or manually fire a trigger, click the options in the Action column.
  4. Click the Jobs tab to view the list of running jobs. You can sort the list by the columns in the table: Group, Name, Durable, Recover, Job ClassName, and Description.

Chapter 11. Viewing Threads

You can view and monitor the state of threads.

Procedure:

  1. Click the Runtime tab and then the Threads subtab.
  2. The Threads page lists active threads and stack trace details for each thread. By default, the thread list shows all threads in descending ID order.
  3. To sort the list by increasing ID, click the ID column label.
  4. Optionally, filter the list by thread state (for example, Blocked) or by thread name.
  5. To drill down to detailed information for a specific thread, such as the lock class name and full stack trace for that thread, in the Actions column, click More.

Chapter 12. Ensuring correct data displays in HawtIO

If the display of the queues and connections in HawtIO is missing queues, missing connections, or displaying inconsistent icons, adjust the Jolokia collection size parameter that specifies the maximum number of elements in an array that Jolokia marshals in a response.

Procedure:

  1. In the upper right corner of HawtIO, click the user icon and then click Preferences.

  2. Increase the value of the Maximum collection size option (the default is 50,000).
  3. Click Close.

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.