Red Hat Best Practices Test Suite for Kubernetes configuration

The Red Hat Best Practices Test Suite for Kubernetes uses a YAML configuration file to certify a specific workload. This file specifies the workload’s resources to be certified, as well as any exceptions or other general configuration options.

By default, a file named tnf_config.yml is used. For a description of each config option, see the section Config File options below.

Config Generator

The Config File can be created using the Config Generator, which is part of the TNF tool shipped with the Test Suite. The purpose of this tool is to help users configure the Test Suite by presenting the available options in a logical structure, along with the information required to use them. The result is a Config File in YAML format that is parsed to adapt the verification process to a specific workload.

To compile the TNF tool:

make build-tnf-tool

To launch the Config Generator:

./tnf generate config

Config File options

Workload resources

These options allow configuring the workload resources to be verified. Only the resources that the workload uses need to be configured; the rest can be left empty. A basic configuration usually includes at least Namespaces and Pods.

Note

The presence of labels determines how the resources under test are discovered.
If labels are defined, the list of Pods, StatefulSets, Deployments and CSVs is obtained by fetching the resources that match those labels. If no labels are defined, only the resources in the namespaces under test (defined in tnf_config.yml) are tested.

targetNameSpaces

The namespaces in which the workload under test will be deployed.

targetNameSpaces:
  - name: tnf

podsUnderTestLabels

The labels that each Pod of the workload under test must have to be verified by the Test Suite.

Highly recommended

The labels should be defined in the Pod definition rather than added after the Pod is created, as labels added later will be lost if the Pod gets rescheduled. For Pods defined as part of a Deployment, it’s best to use the same label as the one defined in the spec.selector.matchLabels section of the Deployment YAML (see the sketch below). The prefix field can be used to avoid naming collisions with other labels.

podsUnderTestLabels:
  - "test-network-function.com/generic: target"

operatorsUnderTestLabels

The labels that each operator’s CSV of the workload under test must have to be verified by the Test Suite.

If a new label is used for this purpose, make sure it is added to the workload operator’s CSVs.

operatorsUnderTestLabels:
  - "test-network-function.com/operator: target" 

targetCrdFilters

The CRD name suffix used to filter the workload’s CRDs among all the CRDs present in the cluster. For each CRD, it can also be specified whether it is scalable or not, so that some lifecycle test cases can be skipped.

targetCrdFilters:
 - nameSuffix: "group1.tnf.com"
   scalable: false
 - nameSuffix: "anydomain.com"
   scalable: true

With the config shown above, all CRDs in the cluster whose names have the suffix group1.tnf.com or anydomain.com (e.g. crd1.group1.tnf.com or mycrd.mygroup.anydomain.com) will be tested.

managedDeployments / managedStatefulSets

The Deployments/StatefulSets managed by a Custom Resource whose scaling is controlled using the “scale” subresource of the CR.

The CRD defining that CR should be included in the CRD filters with the scalable property set to true (a sketch of such a CRD follows the example below). If it is, the test case lifecycle-{deployment/statefulset}-scaling will be skipped; otherwise it will fail.

managedDeployments:
  - name: jack
managedStatefulsets:
  - name: jack
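
For context, here’s a minimal sketch of a CRD that enables the “scale” subresource (group, kind and replica paths are hypothetical, though .spec.replicas/.status.replicas are the conventional choices):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.group1.tnf.com        # matches the group1.tnf.com nameSuffix filter above
spec:
  group: group1.tnf.com
  names:
    kind: MyApp
    plural: myapps
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        # The "scale" subresource lets replicas be driven through the CR
        # instead of directly through the Deployment/StatefulSet.
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas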

JUnit XML File Creation

The test suite can create a JUnit XML file containing the test ID and the corresponding test result.

To enable this, set:

export TNF_ENABLE_XML_CREATION=true

This will create a file named cnf-certification-test/cnf-certification-tests_junit.xml.

Enable running container against OpenShift Local

While running the test suite as a container, you can enable the container to reach the local CRC instance by setting:

export TNF_ENABLE_CRC_TESTING=true

This uses Docker’s --add-host flag to point api.crc.testing to the host gateway, as sketched below.
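
For reference, a sketch of the equivalent manual invocation, assuming Docker 20.10+ (which supports the special host-gateway value); the image name is a placeholder:

docker run --rm \
  --add-host api.crc.testing:host-gateway \
  <test-suite-image>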

Exceptions

These options allow adding exceptions to skip several checks for different resources. The exceptions must be justified in order to satisfy the Red Hat Best Practices for Kubernetes.

acceptedKernelTaints

The list of kernel modules loaded by the workload that make the Linux kernel mark itself as tainted, but whose taints should be skipped during verification.

Test cases affected: platform-alteration-tainted-node-kernel.

acceptedKernelTaints:
  - module: vboxsf
  - module: vboxguest
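
To check whether a node’s kernel is currently tainted, you can read the taint bitmask directly on the node (0 means untainted):

cat /proc/sys/kernel/tainted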

skipHelmChartList

The list of Helm charts that the workload uses whose certification status will not be verified.

If no exception is configured, the certification status of all Helm charts will be checked in the OpenShift Helm Charts repository.

Test cases affected: affiliated-certification-helmchart-is-certified.

skipHelmChartList:
  - name: coredns
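
The names to use here can be listed from the cluster with Helm (assuming the releases were installed with Helm 3):

helm list -A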

validProtocolNames

The list of allowed protocol names to be used for container port names.

The name field of a container port must be of the form protocol[-suffix] where protocol must be allowed by default or added to this list. The optional suffix can be chosen by the application. Protocol names allowed by default: grpc, grpc-web, http, http2, tcp, udp.

Test cases affected: manageability-container-port-name-format.

validProtocolNames:
  - "http3"
  - "sctp"

servicesIgnoreList

The list of Services that will skip verification.

Services included in this list will be filtered out at the autodiscovery stage and will not be subject to checks in any test case.

Test cases affected: networking-dual-stack-service, access-control-service-type.

servicesignorelist:
  - "hazelcast-platform-controller-manager-service"
  - "hazelcast-platform-webhook-service"
  - "new-pro-controller-manager-metrics-service"

skipScalingTestDeployments / skipScalingTestStatefulSets

The list of Deployments/StatefulSets that do not support scale in/out operations.

Deployments/StatefulSets included in this list will skip any scaling operation check.

Test cases affected: lifecycle-deployment-scaling, lifecycle-statefulset-scaling.

skipScalingTestDeployments:
  - name: deployment1
    namespace: tnf
skipScalingTestStatefulSetNames:
  - name: statefulset1
    namespace: tnf

Red Hat Best Practices Test Suite settings

debugDaemonSetNamespace

This optional field sets the name of the namespace where a privileged DaemonSet will be deployed. The namespace will be created if it does not exist. If this field is not set, the default namespace for this DaemonSet is cnf-suite.

debugDaemonSetNamespace: cnf-cert

This DaemonSet, called tnf-debug, is deployed and used internally by the Test Suite tool to issue shell commands that are needed in certain test cases. Some of these test cases might fail or be skipped if it was not deployed correctly.

Other settings

The autodiscovery mechanism will attempt to identify the default network device and all the IP addresses of the Pods it needs for network connectivity tests, though that information can be explicitly set using annotations if needed.

Pod IPs

  • The k8s.v1.cni.cncf.io/networks-status annotation is checked and all IPs from it are used. This annotation is automatically managed in OpenShift but may not be present in K8s.
  • If it is not present, then only known IPs associated with the Pod are used (the Pod .status.ips field).

Network Interfaces

  • The k8s.v1.cni.cncf.io/networks-status annotation is checked and the interface from the first entry found with “default”=true is used. This annotation is automatically managed in OpenShift but may not be present in K8s.
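
For reference, a sketch of what this annotation can look like (network names, interfaces and IPs are hypothetical; the exact content depends on the CNI plugins in use). Here autodiscovery would collect both IPs and pick eth0 as the default interface, since its entry has "default": true:

metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": ["10.128.0.15"],
          "default": true
      },{
          "name": "tnf/macvlan-net",
          "interface": "net1",
          "ips": ["192.168.1.10"],
          "default": false
      }]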

The label test-network-function.com/skip_connectivity_tests excludes Pods from all connectivity tests.

The label test-network-function.com/skip_multus_connectivity_tests excludes Pods from Multus connectivity tests. Tests on the default interface are still run.

Affinity requirements

For workloads that require Pods to use Pod or Node Affinity rules, the label AffinityRequired: true must be included in the Pod YAML. This ensures that the affinity best practices are tested and prevents the anti-affinity test cases from failing.
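
A minimal sketch of the label on a Pod (the Pod name is hypothetical; the value is quoted because label values are strings):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod               # hypothetical name
  labels:
    AffinityRequired: "true"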