Chapter 6. Known issues
Sometimes a Cryostat release might contain an issue or issues that Red Hat acknowledges and might fix at a later stage during the product’s development. Review each known issue for its description and its resolution.
OpenShift SDN default CNI network provider
Cryostat 2.0 cannot connect to JVMs located on nodes other than the node that runs the Cryostat instance when the OpenShift cluster has the following configuration:
- Uses the software-defined networking (SDN) method as the unified cluster network for communication between pods on the cluster.
- Uses the default Container Network Interface (CNI) network provider for writing plug-ins to configure network interfaces for containers.
You can resolve this known issue by configuring your cluster to use the Open Virtual Network (OVN) method instead of the SDN method. The OVN method is configured similarly to the SDN method:
- Uses the Open vSwitch (OVS) to manage network traffic.
- Uses a plug-in that sets the CNI network provider as default.
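To determine which method your cluster currently uses, you can query the cluster network configuration. The following command is a sketch that assumes the standard network.config/cluster resource; it prints OpenShiftSDN or OVNKubernetes depending on the configured network type:

```shell
# Query the cluster-wide network configuration and print the CNI network provider in use
oc get network.config/cluster -o jsonpath='{.spec.networkType}'
```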
Additional resources
- For more information about the SDN method, see About the OpenShift SDN default CNI network provider in the Red Hat OpenShift documentation.
- For more information about the OVN method, see About the OVN-Kubernetes default Container Network Interface (CNI) network provider in the Red Hat OpenShift documentation.
Archive upload API Out-Of-Memory (OOM)
Cryostat 2.0 consumes more memory than expected when a client sends a request to the Cryostat HTTP POST /api/v1/recordings handler. This handler points to the /opt/cryostat.d/recordings.d directory on your Cryostat 2.0 instance, and you can use the handler to upload .jfr binary files to this directory.
The Cryostat Operator sets a default memory limit of 512 MB for a Cryostat instance deployed in an OpenShift project. If you upload a .jfr file of 150 MB or larger to your Cryostat 2.0 instance, the OpenShift cluster’s Out of Memory (OOM) killer stops the pod that contains your deployed Cryostat instance.
You can resolve this known issue by copying your .jfr binary file to the persistent storage location on your Cryostat 2.0 instance. By using this method, you do not need to send a client request to the Cryostat HTTP POST /api/v1/recordings handler to store the .jfr binary in the /opt/cryostat.d/recordings.d directory.
You can issue the following commands to copy your .jfr binary file to the persistent storage location on your Cryostat 2.0 instance:

oc exec -i -n <your_namespace> -c <cryostat_container_name> <cryostat_pod_name> -- mkdir /opt/cryostat.d/recordings.d/unlabelled
oc cp vertx-fib-demo-6f4775cdbf-82dvl_150mb_20211006t152006z.jfr <cryostat_pod_name>:/opt/cryostat.d/recordings.d/unlabelled/vertx-fib-demo-6f4775cdbf-82dvl_150mb_20211006t152006z.jfr -c <cryostat_container_name>
The previously stated oc exec command fails if an unlabelled directory already exists in your /opt/cryostat.d/recordings.d/ path. You can ignore the failed command message and continue with the oc cp command.
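After the copy completes, you can confirm that the file landed in the expected directory before querying the Cryostat API. The following command is a sketch that reuses the same placeholder namespace, container, and pod names as the commands above:

```shell
# List the copied archive files inside the Cryostat container's persistent storage
oc exec -n <your_namespace> -c <cryostat_container_name> <cryostat_pod_name> -- \
  ls -l /opt/cryostat.d/recordings.d/unlabelled
```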
After you copy the .jfr binary file directly into the PVC archives location, you can use a curl, an httpie, or a wget command to verify that the .jfr file exists on your Cryostat 2.0 instance.
The following example demonstrates using a curl command to verify that Cryostat recognizes the uploaded file that was copied to the persistent storage location with the oc cp command. The <cryostat_url> value in the example resolves to https://cryostat-sample-myproject.apps-crc.testing:443, but you can replace the <cryostat_url> value with your application’s URL. You can determine your application’s URL by issuing the oc status command.
$ curl -kv -H "Authorization: Bearer $(oc whoami -t)" <cryostat_url>/api/v1/recordings
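If you prefer httpie or wget, the following equivalent requests are sketches that assume the same <cryostat_url> placeholder and bearer-token authentication as the curl example:

```shell
# httpie: skip TLS verification and pass the OpenShift token as a header
http --verify=no <cryostat_url>/api/v1/recordings "Authorization: Bearer $(oc whoami -t)"

# wget: skip TLS verification and print the JSON response to stdout
wget --no-check-certificate \
  --header="Authorization: Bearer $(oc whoami -t)" \
  -O - <cryostat_url>/api/v1/recordings
```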
Additional resources
- For more information about the commands that you can use to resolve the Cryostat container memory limit issue, see OPENJDK-495 in the Red Hat OpenJDK Jira project.
File upload limit for the integrated Grafana component
For Cryostat 2.0, the integrated View in Grafana component cannot accept JFR files larger than 10 MB because of a configuration issue with the jfr-datasource container.
A deployed Cryostat 2.0 pod’s jfr-datasource container uses default Quarkus settings, which include the default quarkus.http.limits.max-body-size parameter. This parameter sets the maximum request body size that Quarkus accepts, and it has a default value of 10 MB.
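For reference, this limit is an ordinary Quarkus configuration property. The following fragment is a sketch of how the default appears in a Quarkus application.properties file; the jfr-datasource container in Cryostat 2.0 does not expose a supported override for it, which is why the workaround below is recommended instead:

```
# Quarkus default maximum HTTP request body size (10 MB);
# requests larger than this are rejected with HTTP 413
quarkus.http.limits.max-body-size=10M
```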
If a client attempts to upload a JFR file larger than 10 MB, the jfr-datasource web server rejects the file and returns an HTTP 413 error message.
You can resolve this known issue by completing the following steps:
- Navigate to your listed active or archive recording in the Recordings menu on your Cryostat 2.0 instance.
- From the overflow menu for your target recording, click the Download Recording option.
- Save the file to your preferred location on your local system.
- Open the downloaded JFR file in your Java Mission Control (JMC) desktop application.
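The steps above retrieve the file through the Cryostat web console. If you prefer the command line, a request of the following shape may also download an archived recording; the /api/v1/recordings/<recording_name> path, the <cryostat_url> and <recording_name> placeholders, and the output file name are assumptions based on the archive handler discussed earlier in this chapter:

```shell
# Download an archived recording from the Cryostat instance for analysis in JMC
curl -k -H "Authorization: Bearer $(oc whoami -t)" \
  -o <recording_name>.jfr \
  <cryostat_url>/api/v1/recordings/<recording_name>
```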