Chapter 5. Configuring the Network Observability Operator
You can update the FlowCollector API resource to configure the Network Observability Operator and its managed components. The FlowCollector is explicitly created during installation. Since this resource operates cluster-wide, only a single FlowCollector is allowed, and it must be named cluster.
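To inspect the resource from the command line, you can retrieve it directly; this is a minimal check, assuming the oc CLI is logged in to the cluster:

```
$ oc get flowcollector cluster -o yaml
```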
5.1. View the FlowCollector resource
You can view and edit YAML directly in the OpenShift Container Platform web console.
Procedure
- In the web console, navigate to Operators → Installed Operators.
- Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.
- Select cluster and then select the YAML tab. There, you can modify the FlowCollector resource to configure the Network Observability operator.
The following example shows a sample FlowCollector resource for the OpenShift Container Platform Network Observability operator:

Sample FlowCollector resource

```yaml
apiVersion: flows.netobserv.io/v1beta1
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  deploymentModel: DIRECT
  agent:
    type: EBPF # 1
    ebpf:
      sampling: 50 # 2
      logLevel: info
      privileged: false
      resources:
        requests:
          memory: 50Mi
          cpu: 100m
        limits:
          memory: 800Mi
  processor:
    logLevel: info
    resources:
      requests:
        memory: 100Mi
        cpu: 100m
      limits:
        memory: 800Mi
    conversationEndTimeout: 10s
    logTypes: FLOWS # 3
    conversationHeartbeatInterval: 30s
  loki: # 4
    url: 'https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network'
    statusUrl: 'https://loki-query-frontend-http.netobserv.svc:3100/'
    authToken: FORWARD
    tls:
      enable: true
      caCert:
        type: configmap
        name: loki-gateway-ca-bundle
        certFile: service-ca.crt
        namespace: loki-namespace # 5
  consolePlugin:
    register: true
    logLevel: info
    portNaming:
      enable: true
      portNames:
        "3100": loki
    quickFilters: # 6
    - name: Applications
      filter:
        src_namespace!: 'openshift-,netobserv'
        dst_namespace!: 'openshift-,netobserv'
      default: true
    - name: Infrastructure
      filter:
        src_namespace: 'openshift-,netobserv'
        dst_namespace: 'openshift-,netobserv'
    - name: Pods network
      filter:
        src_kind: 'Pod'
        dst_kind: 'Pod'
      default: true
    - name: Services network
      filter:
        dst_kind: 'Service'
```
1. The Agent specification, spec.agent.type, must be EBPF. eBPF is the only OpenShift Container Platform supported option.
2. You can set the Sampling specification, spec.agent.ebpf.sampling, to manage resources. Lower sampling values might consume a large amount of computational, memory, and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means that 1 flow in every 100 is sampled. A value of 0 or 1 means all flows are captured. The lower the value, the greater the number of returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow in every 50 is sampled. Note that more sampled flows also means more storage needed. It is recommended to start with the default values and refine empirically, to determine which setting your cluster can manage.
3. The optional specifications spec.processor.logTypes, spec.processor.conversationHeartbeatInterval, and spec.processor.conversationEndTimeout can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The values for spec.processor.logTypes are as follows: FLOWS, CONVERSATIONS, ENDED_CONVERSATIONS, or ALL. Storage requirements are highest for ALL and lowest for ENDED_CONVERSATIONS.
4. The Loki specification, spec.loki, specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install.
5. The original certificates are copied to the Network Observability instance namespace and watched for updates. When not provided, the namespace defaults to the same value as spec.namespace. If you chose to install Loki in a different namespace, you must specify it in the spec.loki.tls.caCert.namespace field. Similarly, the spec.exporters.kafka.tls.caCert.namespace field is available for Kafka installed in a different namespace.
6. The spec.quickFilters specification defines filters that show up in the web console. The Applications filter keys, src_namespace and dst_namespace, are negated (!), so the Applications filter shows all traffic that does not originate from, or have a destination to, any openshift- or netobserv namespaces. For more information, see Configuring quick filters below.
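As an illustration of callout 3, the following minimal sketch enables conversation tracking; the interval values are the same as in the sample above and should be tuned for your cluster:

```yaml
spec:
  processor:
    logTypes: CONVERSATIONS            # record conversation events; see callout 3 for the other values
    conversationHeartbeatInterval: 30s # periodic heartbeat event for long-lived conversations
    conversationEndTimeout: 10s        # inactivity delay before a conversation is considered ended
```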
Additional resources
For more information about conversation tracking, see Working with conversations.
5.2. Configuring the Flow Collector resource with Kafka
You can configure the FlowCollector resource to use Kafka for high-throughput and low-latency data feeds. A Kafka instance needs to be running, and a Kafka topic dedicated to OpenShift Container Platform Network Observability must be created in that instance. For more information, see Kafka documentation with AMQ Streams.
Prerequisites
- Kafka is installed. Red Hat supports Kafka with the AMQ Streams Operator.
Procedure
- In the web console, navigate to Operators → Installed Operators.
- Under the Provided APIs heading for the Network Observability Operator, select Flow Collector.
- Select cluster and then click the YAML tab.
- Modify the FlowCollector resource for OpenShift Container Platform Network Observability Operator to use Kafka, as shown in the following sample YAML:
Sample Kafka configuration in FlowCollector resource

```yaml
apiVersion: flows.netobserv.io/v1beta1
kind: FlowCollector
metadata:
  name: cluster
spec:
  deploymentModel: KAFKA # 1
  kafka:
    address: "kafka-cluster-kafka-bootstrap.netobserv" # 2
    topic: network-flows # 3
    tls:
      enable: false # 4
```
1. Set spec.deploymentModel to KAFKA instead of DIRECT to enable the Kafka deployment model.
2. spec.kafka.address refers to the Kafka bootstrap server address. You can specify a port if needed, for instance kafka-cluster-kafka-bootstrap.netobserv:9093 for using TLS on port 9093.
3. spec.kafka.topic should match the name of a topic created in Kafka.
4. spec.kafka.tls can be used to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv) and where the eBPF agents are deployed (default: netobserv-privileged). It must be referenced with spec.kafka.tls.caCert. When using mTLS, client secrets must be available in these namespaces as well (they can be generated, for instance, using the AMQ Streams User Operator) and referenced with spec.kafka.tls.userCert.
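The topic referenced by callout 3 must exist in Kafka. If you installed Kafka with the AMQ Streams Operator, one way to create it is with a KafkaTopic resource; the following is a sketch that assumes a Kafka cluster named kafka-cluster running in the netobserv namespace:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: network-flows                  # must match spec.kafka.topic in the FlowCollector
  namespace: netobserv                 # assumption: the namespace of the Kafka cluster
  labels:
    strimzi.io/cluster: kafka-cluster  # assumption: the name of your Kafka cluster
spec:
  partitions: 24                       # illustrative; size for your expected throughput
  replicas: 3
```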
5.3. Export enriched network flow data
You can send network flows to Kafka, IPFIX, or both at the same time. Any processor or storage that supports Kafka or IPFIX input, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data.
Prerequisites
- Your Kafka or IPFIX collector endpoint(s) are available from Network Observability flowlogs-pipeline pods.
Procedure
- In the web console, navigate to Operators → Installed Operators.
- Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.
- Select cluster and then select the YAML tab.
- Edit the FlowCollector to configure spec.exporters as follows:

```yaml
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowCollector
metadata:
  name: cluster
spec:
  exporters:
  - type: KAFKA # 1
    kafka:
      address: "kafka-cluster-kafka-bootstrap.netobserv"
      topic: netobserv-flows-export # 2
      tls:
        enable: false # 3
  - type: IPFIX # 4
    ipfix:
      targetHost: "ipfix-collector.ipfix.svc.cluster.local"
      targetPort: 4739
      transport: tcp or udp # 5
```
- 1, 4: You can export flows to IPFIX instead of, or in conjunction with, exporting flows to Kafka.
- 2: The Network Observability Operator exports all flows to the configured Kafka topic.
- 3: You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv). It must be referenced with spec.exporters.tls.caCert. When using mTLS, client secrets must be available in this namespace as well (they can be generated, for instance, using the AMQ Streams User Operator) and referenced with spec.exporters.tls.userCert.
- 5: You have the option to specify transport. The default value is tcp, but you can also specify udp.
- After configuration, network flow data can be sent to an available output in JSON format. For more information, see Network flows format reference.
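As a command-line alternative, a JSON patch can append an exporter to an existing spec.exporters list; this is a sketch, assuming the list already exists and reusing the topic name from the sample above:

```
$ oc patch flowcollector cluster --type=json \
    -p '[{"op": "add", "path": "/spec/exporters/-", "value": {"type": "KAFKA", "kafka": {"address": "kafka-cluster-kafka-bootstrap.netobserv", "topic": "netobserv-flows-export"}}}]'
```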
Additional resources
For more information about specifying flow format, see Network flows format reference.
5.4. Updating the Flow Collector resource
As an alternative to editing YAML in the OpenShift Container Platform web console, you can configure specifications, such as eBPF sampling, by patching the flowcollector custom resource (CR):
Procedure
Run the following command to patch the flowcollector CR and update the spec.agent.ebpf.sampling value:

```
$ oc patch flowcollector cluster --type=json -p '[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": <new value>}]' -n netobserv
```
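For example, to sample 1 flow in every 10 (the value is illustrative) and then verify the change:

```
$ oc patch flowcollector cluster --type=json -p '[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": 10}]' -n netobserv
$ oc get flowcollector cluster -o jsonpath='{.spec.agent.ebpf.sampling}'
```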
5.5. Configuring quick filters
You can modify the filters in the FlowCollector resource. Exact matches are possible using double quotes around values. Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample FlowCollector resource for more context about modifying the YAML.

The filter matching types, "all of" or "any of", are UI settings that users can modify from the query options. They are not part of this resource configuration.
Here is a list of all available filter keys:
| Universal* | Source | Destination | Description |
|---|---|---|---|
| namespace | src_namespace | dst_namespace | Filter traffic related to a specific namespace. |
| name | src_name | dst_name | Filter traffic related to a given leaf resource name, such as a specific pod, service, or node (for host-network traffic). |
| kind | src_kind | dst_kind | Filter traffic related to a given resource kind. The resource kinds include the leaf resource (Pod, Service, or Node) and the owner resource (Deployment and StatefulSet). |
| owner_name | src_owner_name | dst_owner_name | Filter traffic related to a given resource owner; that is, a workload or a set of pods. For example, it can be a Deployment name or a StatefulSet name. |
| resource | src_resource | dst_resource | Filter traffic related to a specific resource that is denoted by its canonical name, which identifies it uniquely. The canonical notation is kind.namespace.name for namespaced kinds, or node.name for nodes. For example, Deployment.my-namespace.my-web-server. |
| address | src_address | dst_address | Filter traffic related to an IP address. IPv4 and IPv6 are supported. CIDR ranges are also supported. |
| mac | src_mac | dst_mac | Filter traffic related to a MAC address. |
| port | src_port | dst_port | Filter traffic related to a specific port. |
| host_address | src_host_address | dst_host_address | Filter traffic related to the host IP address where the pods are running. |
| protocol | N/A | N/A | Filter traffic related to a protocol, such as TCP or UDP. |

* Universal keys filter for any of source or destination. For example, filtering name: 'my-pod' means all traffic from my-pod and all traffic to my-pod, regardless of the matching type used, whether Match all or Match any.
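For example, the following sketch adds a quick filter that combines an exact match (double quotes) with a negated key (trailing bang); the namespace name my-app and the filter name are hypothetical:

```yaml
spec:
  consolePlugin:
    quickFilters:
    - name: My app external traffic  # hypothetical filter name
      filter:
        src_namespace: '"my-app"'    # double quotes force an exact match
        dst_kind!: 'Service'         # trailing bang negates: exclude traffic to Services
      default: false
```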
5.6. Configuring monitoring for SR-IOV interface traffic
In order to collect traffic from a cluster with a Single Root I/O Virtualization (SR-IOV) device, you must set the FlowCollector spec.agent.ebpf.privileged field to true. Then, the eBPF agent monitors other network namespaces in addition to the host network namespaces, which are monitored by default. When a pod with a virtual function (VF) interface is created, a new network namespace is created. With SRIOVNetwork policy IPAM configurations specified, the VF interface is migrated from the host network namespace to the pod network namespace.
Prerequisites
- Access to an OpenShift Container Platform cluster with an SR-IOV device.
- The SRIOVNetwork custom resource (CR) spec.ipam configuration must be set with an IP address from the range that the interface lists or from other plugins.
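For reference, a minimal sketch of such a SRIOVNetwork resource with a static spec.ipam configuration follows; the name, target namespace, resource name, and address are assumptions to adapt to your environment:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-sriov-network          # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource       # hypothetical SR-IOV resource name
  networkNamespace: my-app-namespace   # hypothetical namespace where VFs are attached to pods
  ipam: |
    {
      "type": "static",
      "addresses": [
        { "address": "192.168.10.10/24" }
      ]
    }
```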
Procedure
- In the web console, navigate to Operators → Installed Operators.
- Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.
- Select cluster and then select the YAML tab.
- Configure the FlowCollector custom resource. A sample configuration is as follows:

Configure FlowCollector for SR-IOV monitoring

```yaml
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  deploymentModel: DIRECT
  agent:
    type: EBPF
    ebpf:
      privileged: true # 1
```
1. The spec.agent.ebpf.privileged field value must be set to true to enable SR-IOV monitoring.
Additional resources
For more information about creating the SriovNetwork custom resource, see Creating an additional SR-IOV network attachment with the CNI VRF plugin.
5.7. Resource management and performance considerations
The amount of resources required by Network Observability depends on the size of your cluster and your requirements for the cluster to ingest and store observability data. To manage resources and set performance criteria for your cluster, consider configuring the following settings. Configuring these settings can help you achieve an optimal setup for your observability needs.
The following settings can help you manage resources and performance from the outset:
- eBPF Sampling: You can set the Sampling specification, spec.agent.ebpf.sampling, to manage resources. Smaller sampling values might consume a large amount of computational, memory, and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means that 1 flow in every 100 is sampled. A value of 0 or 1 means all flows are captured. Smaller values result in an increase in returned flows and in the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow in every 50 is sampled. Note that more sampled flows also means more storage needed. Consider starting with the default values and refining empirically, to determine which setting your cluster can manage.
- Restricting or excluding interfaces: Reduce the overall observed traffic by setting the values for spec.agent.ebpf.interfaces and spec.agent.ebpf.excludeInterfaces. By default, the agent fetches all the interfaces in the system, except the ones listed in excludeInterfaces and lo (the local interface). Note that the interface names might vary according to the Container Network Interface (CNI) used; see the sketch after this list.
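For example, this sketch restricts collection to one interface and excludes another; the interface names eth0 and br-ex are assumptions that depend on your CNI:

```yaml
spec:
  agent:
    ebpf:
      interfaces: [ "eth0" ]          # hypothetical: observe only eth0
      excludeInterfaces: [ "br-ex" ]  # hypothetical: ignore the br-ex bridge
```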
The following settings can be used to fine-tune performance after Network Observability has been running for a while:
- Resource requirements and limits: Adapt the resource requirements and limits to the load and memory usage you expect on your cluster by using the spec.agent.ebpf.resources and spec.processor.resources specifications. The default limits of 800MB might be sufficient for most medium-sized clusters.
- Cache max flows timeout: Control how often flows are reported by the agents by using the eBPF agent’s spec.agent.ebpf.cacheMaxFlows and spec.agent.ebpf.cacheActiveTimeout specifications. A larger value results in less traffic being generated by the agents, which correlates with a lower CPU load. However, a larger value leads to slightly higher memory consumption, and might generate more latency in flow collection. A combined sketch follows this list.
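Combining the two settings above, the following sketch raises the agent memory limit and batches flows more aggressively; the values are illustrative starting points rather than recommendations:

```yaml
spec:
  agent:
    ebpf:
      cacheMaxFlows: 100000    # flush a batch once 100,000 flows are cached
      cacheActiveTimeout: 5s   # or after 5 seconds, whichever comes first
      resources:
        limits:
          memory: 1600Mi       # raised from the 800Mi default
  processor:
    resources:
      limits:
        memory: 800Mi          # default limit kept for the processor
```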
5.7.1. Resource considerations
The following table outlines examples of resource considerations for clusters with certain workload sizes.
The examples outlined in the table demonstrate scenarios that are tailored to specific workloads. Consider each example only as a baseline from which adjustments can be made to accommodate your workload needs.
| | Extra small (10 nodes) | Small (25 nodes) | Medium (65 nodes) [2] | Large (120 nodes) [2] |
|---|---|---|---|---|
| Worker Node vCPU and memory | 4 vCPUs, 16GiB mem [1] | 16 vCPUs, 64GiB mem [1] | 16 vCPUs, 64GiB mem [1] | 16 vCPUs, 64GiB mem [1] |
| LokiStack size | 1x.extra-small | 1x.small | 1x.small | 1x.medium |
| Network Observability controller memory limit | 400Mi (default) | 400Mi (default) | 400Mi (default) | 800Mi |
| eBPF sampling rate | 50 (default) | 50 (default) | 50 (default) | 50 (default) |
| eBPF memory limit | 800Mi (default) | 800Mi (default) | 2000Mi | 800Mi (default) |
| FLP memory limit | 800Mi (default) | 800Mi (default) | 800Mi (default) | 800Mi (default) |
| FLP Kafka partitions | N/A | 48 | 48 | 48 |
| Kafka consumer replicas | N/A | 24 | 24 | 24 |
| Kafka brokers | N/A | 3 (default) | 3 (default) | 3 (default) |
1. Tested with AWS M6i instances.
2. In addition to this worker and its controller, 3 infra nodes (size M6i.12xlarge) and 1 workload node (size M6i.8xlarge) were tested.