Chapter 5. Searching in the console introduction
For Red Hat Advanced Cluster Management for Kubernetes, search provides visibility into your Kubernetes resources across all of your clusters. Search also indexes the Kubernetes resources and their relationships to other resources.
5.1. Search components
The search architecture is composed of the following components:
Component name | Metrics | Metric type | Description
---|---|---|---
search-collector | | | Watches the Kubernetes resources, collects the resource metadata, computes relationships for resources across all of your managed clusters, and sends the collected data to the search-indexer.
search-indexer | | | Receives resource metadata from the collectors and writes it to the PostgreSQL database.
 | | Histogram | Time (seconds) the search indexer takes to process a request (from managed cluster).
 | | Histogram | Total changes (add, update, delete) in the search indexer request (from managed cluster).
 | | Counter | Total requests received by the search indexer (from managed clusters).
 | | Gauge | Total requests the search indexer is processing at a given time.
search-api | | | Provides access to all cluster data in the search-postgres database.
 | | Histogram | Histogram of HTTP request duration in seconds.
 | | Histogram | Latency of database requests in seconds.
 | | Counter | The total number of database connection attempts that failed.
search-postgres | | | Stores collected data from all managed clusters in an instance of the PostgreSQL database.
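To verify that these components are running on your hub cluster, you can list the search pods. The following command is a minimal check that assumes the default open-cluster-management installation namespace; the exact pod names can differ between releases:

# List the search pods on the hub cluster (assumes the open-cluster-management namespace)
oc get pods -n open-cluster-management | grep search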
Search is configured by default on the hub cluster. When you provision or manually import a managed cluster, the klusterlet-addon-search is enabled. If you want to disable search on your managed cluster, see Modifying the klusterlet add-ons settings of your cluster for more information.
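For reference, one way to disable the search collector on a managed cluster is to set searchCollector.enabled to false in that cluster's KlusterletAddonConfig resource. The following command is only a sketch that assumes a managed cluster named cluster1 with a KlusterletAddonConfig of the same name in its cluster namespace; see the linked topic for the complete procedure:

# Hypothetical example: disable the search collector for the cluster1 managed cluster
oc patch klusterletaddonconfig cluster1 -n cluster1 --type merge -p '{"spec":{"searchCollector":{"enabled":false}}}'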
5.2. Search customization and configurations
You can modify the default values in the search-v2-operator
custom resource. To view details of the custom resource, run the following command:
oc get search search-v2-operator -o yaml
The search operator watches the search-v2-operator
custom resource, reconciles the changes and updates active pods. View the following descriptions of the configurations:
PostgreSQL database storage:
When you install Red Hat Advanced Cluster Management, the PostgreSQL database is configured to save the PostgreSQL data in an empty directory (emptyDir) volume. If the empty directory size is limited, you can save the PostgreSQL data on a Persistent Volume Claim (PVC) to improve search performance. You can select a storageclass from your Red Hat Advanced Cluster Management hub cluster to back up your search data. For example, if you select the gp2 storageclass, your configuration might resemble the following example:

apiVersion: search.open-cluster-management.io/v1alpha1
kind: Search
metadata:
  name: search-v2-operator
  namespace: open-cluster-management
  labels:
    cluster.open-cluster-management.io/backup: ""
spec:
  dbStorage:
    size: 10Gi
    storageClassName: gp2
This configuration creates a PVC named gp2-search that is mounted to the search-postgres pod. By default, the storage size is 10Gi. You can modify the storage size. For example, 20Gi might be sufficient for about 200 managed clusters.

Optimize cost by tuning the pod memory or CPU requirements, replica count, and log levels for any of the four search pods (indexer, database, queryapi, or collector pod). Update the deployments section of the search-v2-operator custom resource. There are four deployments managed by the search-v2-operator, which can be updated individually. Your search-v2-operator custom resource might resemble the following file:

apiVersion: search.open-cluster-management.io/v1alpha1
kind: Search
metadata:
  name: search-v2-operator
  namespace: open-cluster-management
spec:
  deployments:
    collector:
      resources: 1
        limits:
          cpu: 500m
          memory: 128Mi
        requests:
          cpu: 250m
          memory: 64Mi
    indexer:
      replicaCount: 3
    database: 2
      envVar:
        - name: POSTGRESQL_EFFECTIVE_CACHE_SIZE
          value: 1024MB
        - name: POSTGRESQL_SHARED_BUFFERS
          value: 512MB
        - name: WORK_MEM
          value: 128MB
    queryapi:
      arguments: 3
        - -v=3
1. You can apply resources to an indexer, database, queryapi, or collector pod.
2. You can add multiple environment variables in the envVar section to specify a value for each variable that you name.
3. You can control the log level verbosity for any of the previous four pods by adding the -v=3 argument.
See the following example where memory resources are applied to the indexer pod:
indexer:
  resources:
    limits:
      memory: 5Gi
    requests:
      memory: 1Gi
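If you prefer not to edit the custom resource interactively, you can apply the same change with a patch. The following command is a sketch that assumes the Search custom resource is named search-v2-operator in the open-cluster-management namespace, as shown in the previous examples:

# Sketch: set memory requests and limits for the indexer deployment
oc patch search search-v2-operator -n open-cluster-management --type merge -p '{"spec":{"deployments":{"indexer":{"resources":{"limits":{"memory":"5Gi"},"requests":{"memory":"1Gi"}}}}}}'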
Node placement for search pods:
You can update the placement of search pods by using the nodeSelector parameter or the tolerations parameter. View the following example configuration:

spec:
  dbStorage:
    size: 10Gi
  deployments:
    collector: {}
    database: {}
    indexer: {}
    queryapi: {}
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    operator: Exists
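After you update the placement settings, you can confirm where the search pods are scheduled. The following check is a sketch and assumes the open-cluster-management installation namespace:

# Sketch: show which nodes the search pods are running on
oc get pods -n open-cluster-management -o wide | grep search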
5.3. Additional resources
- For instructions about how to manage search, see Managing search.
- For more topics about the Red Hat Advanced Cluster Management for Kubernetes console, see Web console.
5.4. Managing search
Use search to query resource data from your clusters.
Required access: Cluster administrator
Continue reading the following topics:
5.4.1. Creating search configurable collection
Create the search-collector-config
ConfigMap to define which Kubernetes resources get collected from the cluster by listing the resources in the allow and deny list section. List the resources in the data.AllowedResources
and data.DeniedResources
sections within the ConfigMap. Run the following command to create the resource:
oc apply -f yourconfigMapFile.yaml
Your ConfigMap might resemble the following YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: search-collector-config
  namespace: <namespace where search-collector add-on is deployed>
data:
  AllowedResources: |-
    - apiGroups:
        - "*"
      resources:
        - services
        - pods
    - apiGroups:
        - admission.k8s.io
        - authentication.k8s.io
      resources:
        - "*"
  DeniedResources: |-
    - apiGroups:
        - "*"
      resources:
        - secrets
    - apiGroups:
        - admission.k8s.io
      resources:
        - policies
        - iampolicies
        - certificatepolicies
The previous ConfigMap example allows services and pods to be collected from all apiGroups, while allowing all resources to be collected from the admission.k8s.io and authentication.k8s.io apiGroups. At the same time, the ConfigMap example also prevents the central collection of secrets from all apiGroups, while preventing the collection of policies, iampolicies, and certificatepolicies from the apiGroup admission.k8s.io.
Note: If you do not provide a ConfigMap, all resources are collected by default. If you only provide AllowedResources
, all resources not listed in AllowedResources
are automatically excluded. Resources listed in AllowedResources
and DeniedResources
at the same time are also excluded.
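For example, if you only want services and pods collected from every apiGroup, a ConfigMap with only the AllowedResources section is enough, because everything that is not listed is excluded automatically, as described in the previous note. The following YAML is a sketch; replace the namespace placeholder with the namespace where the search-collector add-on is deployed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: search-collector-config
  namespace: <namespace where search-collector add-on is deployed>
data:
  AllowedResources: |-
    - apiGroups:
        - "*"
      resources:
        - services
        - pods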
5.4.2. Customizing the search console
You can customize the search result limit from the OpenShift Container Platform console. Update the console-mce-config ConfigMap in the multicluster-engine namespace. These settings apply to all users and might affect performance. View the following performance parameter descriptions:
- SAVED_SEARCH_LIMIT - The maximum number of saved searches for each user. The default value is 10. To update the limit, add the following key value to the console-mce-config ConfigMap: SAVED_SEARCH_LIMIT: x.
- SEARCH_RESULT_LIMIT - The maximum number of search results displayed in the console. The default value is 1000. To remove this limit, set the value to -1.
- SEARCH_AUTOCOMPLETE_LIMIT - The maximum number of suggestions retrieved for the search bar typeahead. The default value is 10,000. To remove this limit, set the value to -1.
Run the following patch command to change the search result limit to 100 items:
oc patch configmap console-mce-config -n multicluster-engine --type merge -p '{"data":{"SEARCH_RESULT_LIMIT":"100"}}'
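The other keys follow the same pattern. For example, the following sketch raises the saved search limit to 20 for each user, assuming the same console-mce-config ConfigMap in the multicluster-engine namespace:

# Sketch: allow each user to keep up to 20 saved searches
oc patch configmap console-mce-config -n multicluster-engine --type merge -p '{"data":{"SAVED_SEARCH_LIMIT":"20"}}'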
5.4.3. Querying in the console
You can type any text value in the Search box, and the results include anything with that value from any property, such as a name or namespace. You cannot search for values that contain a space.
For more specific search results, include the property selector in your search. You can combine related values for the property for a more precise scope of your search. For example, search for cluster:dev red
to receive results that match the string "red" in the dev
cluster.
Complete the following steps to make queries with search:
- Click Search in the navigation menu.
- Type a word in the Search box, then Search finds your resources that contain that value.
- As you search for resources, you receive other resources that are related to your original search result, which help you visualize how the resources interact with other resources in the system.
- Search returns and lists each cluster with the resource that you search. For resources in the hub cluster, the cluster name is displayed as local-cluster.
- Your search results are grouped by kind, and each resource kind is grouped in a table.
- Your search options depend on your cluster objects.
- You can refine your results with specific labels. Search is case-sensitive when you query labels. See the following examples that you can select for filtering: name, namespace, status, and other resource fields. Auto-complete provides suggestions to refine your search.
- Search for a single field, such as kind:pod, to find all pod resources. Search for multiple fields, such as kind:pod namespace:default, to find the pods in the default namespace.
  Notes:
  - You can also search with conditions by using characters, such as >, >=, <, <=, !=.
  - When you search for more than one property selector with multiple values, the search returns either of the values that were queried. View the following examples:
    - When you search for kind:pod name:a, any pod named a is returned.
    - When you search for kind:pod name:a,b, any pod named a or b is returned.
    - Search for kind:pod status:!Running to find all pod resources where the status is not Running.
    - Search for kind:pod restarts:>1 to find all pods that restarted at least twice.
- If you want to save your search, click the Save search icon.
5.4.3.1. Querying ArgoCD applications
When you search for an ArgoCD application, you are directed to the Applications page. Complete the following steps to access the ArgoCD application from the Search page:
- Log in to your Red Hat Advanced Cluster Management hub cluster.
- From the console header, select the Search icon.
- Filter your query with the following values: kind:application and apigroup:argoproj.io.
. - Select an application to view. The Application page displays an overview of information for the application.
5.4.4. Updating klusterlet-addon-search deployments on managed clusters
To collect the Kubernetes objects from the managed clusters, the klusterlet-addon-search
pod is run on all the managed clusters where search is enabled. This deployment is run in the open-cluster-management-agent-addon
namespace. A managed cluster with a high number of resources might require more memory for the klusterlet-addon-search
deployment to function.
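To review the resource requests and limits that are currently in effect, you can inspect the deployment from the managed cluster. The following command is a sketch that assumes the deployment is named klusterlet-addon-search, matching the pod described previously:

# Sketch: run against the managed cluster to show the current container resources
oc get deployment klusterlet-addon-search -n open-cluster-management-agent-addon -o jsonpath='{.spec.template.spec.containers[0].resources}'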
Resource requirements for the klusterlet-addon-search
pod in a managed cluster can be specified in the ManagedClusterAddon
custom resource in your Red Hat Advanced Cluster Management hub cluster. There is a namespace for each managed cluster with the managed cluster name. Edit the ManagedClusterAddon
custom resource from the namespace matching the managed cluster name. Run the following command to update the resource requirement in the xyz managed cluster:
oc edit managedclusteraddon search-collector -n xyz
Append the resource requirements as annotations. View the following example:
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  annotations:
    addon.open-cluster-management.io/search_memory_limit: 2048Mi
    addon.open-cluster-management.io/search_memory_request: 512Mi
The annotations override the resource requirements on the managed cluster and automatically restart the pod with the new resource requirements.
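If you prefer a single command instead of editing the resource, you can set the same annotations with oc annotate. The following command is a sketch that reuses the xyz managed cluster namespace from the previous example:

# Sketch: set the memory limit and request annotations on the search-collector add-on
oc annotate managedclusteraddon search-collector -n xyz addon.open-cluster-management.io/search_memory_limit=2048Mi addon.open-cluster-management.io/search_memory_request=512Mi --overwrite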
Return to Observing environments introduction.