
OpenShift Dedicated

Chapter 7. Troubleshooting


7.1. Troubleshooting an OpenShift Dedicated on GCP cluster deployment

OpenShift Dedicated on Google Cloud Platform (GCP) cluster deployment errors can occur for several reasons, including insufficient quota limits and settings, incorrectly entered data, and incompatible configurations.

Learn how to resolve common OpenShift Dedicated on GCP cluster installation errors in the following sections.

7.1.1. Troubleshooting OpenShift Dedicated on GCP installation error codes

The following table lists OpenShift Dedicated on Google Cloud Platform (GCP) installation error codes and what you can do to resolve these errors.

Table 7.1. OpenShift Dedicated on GCP installation error codes
Error code | Description | Resolution

OCM3022

Invalid GCP project ID.

Verify the project ID in the Google cloud console and retry cluster creation.

OCM3023

GCP instance type not found.

Verify the instance type and retry cluster creation.

For more information about OpenShift Dedicated on GCP instance types, see Google Cloud instance types in the Additional resources section.

OCM3024

GCP precondition failed.

Verify the organization policy constraints and retry cluster creation.

For more information about organization policy constraints, see Organization policy constraints.

OCM3025

GCP SSD quota limit exceeded.

Check your available persistent disk SSD quota either in the Google Cloud console or in the gcloud CLI. There must be at least 896 GB of SSD available. Increase the SSD quota limit and retry cluster creation.

For more information about managing persistent disk SSD quota, see Allocation quotas.

OCM3026

GCP compute quota limit exceeded.

Increase your CPU compute quota and retry cluster installation.

For more information about the CPU compute quota, see Compute Engine quota and limits overview.

OCM3027

GCP service account quota limit exceeded.

Ensure your quota allows for additional unused service accounts. Check your current usage for quotas in your GCP account and try again.

For more information about managing your quotas, see Manage your quotas using the console.

Additional resources

7.2. Verifying node health

7.2.1. Reviewing node status, resource usage, and configuration

Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  • List the name, status, and role for all nodes in the cluster:

    $ oc get nodes
  • Summarize CPU and memory usage for each node within the cluster:

    $ oc adm top nodes
  • Summarize CPU and memory usage for a specific node:

    $ oc adm top node my-node

7.3. Troubleshooting Operator issues

Operators are a method of packaging, deploying, and managing an OpenShift Dedicated application. They act like an extension of the software vendor’s engineering team, watching over an OpenShift Dedicated environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time.

OpenShift Dedicated 4 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO).

As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Dedicated web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM).

If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis.

7.3.1. Operator subscription condition types

Subscriptions can report the following condition types:

Table 7.2. Subscription condition types
Condition | Description

CatalogSourcesUnhealthy

Some or all of the catalog sources to be used in resolution are unhealthy.

InstallPlanMissing

An install plan for a subscription is missing.

InstallPlanPending

An install plan for a subscription is pending installation.

InstallPlanFailed

An install plan for a subscription has failed.

ResolutionFailed

The dependency resolution for a subscription has failed.

Note

Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
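To check a specific condition without scanning describe output by eye, you can extract the Type and Status fields with standard text tools. The following is a minimal sketch that parses a copy of the example Subscription output shown later in this chapter; against a live cluster you would pipe the output of oc describe sub <subscription_name> -n <operator_namespace> instead.

```shell
# Sample text copied from the example output in this chapter; on a live
# cluster, replace this with the output of `oc describe sub <name> -n <ns>`.
sample='Conditions:
   Last Transition Time:  2019-07-29T13:42:57Z
   Message:               all available catalogsources are healthy
   Reason:                AllCatalogSourcesHealthy
   Status:                False
   Type:                  CatalogSourcesUnhealthy'

# Print each condition as Type=Status. The Status line precedes the Type
# line in describe output, so remember it and emit it when Type is seen.
printf '%s\n' "$sample" |
  awk -F': *' '/^ *Status:/ {s=$2} /^ *Type:/ {print $2 "=" s}'
```

Querying oc get sub with a jsonpath output template is another option when you prefer structured output over text parsing.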


7.3.2. Viewing Operator subscription status by using the CLI

You can view Operator subscription status by using the CLI.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List Operator subscriptions:

    $ oc get subs -n <operator_namespace>
  2. Use the oc describe command to inspect a Subscription resource:

    $ oc describe sub <subscription_name> -n <operator_namespace>
  3. In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:

    Example output

    Name:         cluster-logging
    Namespace:    openshift-logging
    Labels:       operators.coreos.com/cluster-logging.openshift-logging=
    Annotations:  <none>
    API Version:  operators.coreos.com/v1alpha1
    Kind:         Subscription
    # ...
    Conditions:
       Last Transition Time:  2019-07-29T13:42:57Z
       Message:               all available catalogsources are healthy
       Reason:                AllCatalogSourcesHealthy
       Status:                False
       Type:                  CatalogSourcesUnhealthy
    # ...

Note

Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

7.3.3. Viewing Operator catalog source status by using the CLI

You can view the status of an Operator catalog source by using the CLI.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:

    $ oc get catalogsources -n openshift-marketplace

    Example output

    NAME                  DISPLAY               TYPE   PUBLISHER     AGE
    certified-operators   Certified Operators   grpc   Red Hat       55m
    community-operators   Community Operators   grpc   Red Hat       55m
    example-catalog       Example Catalog       grpc   Example Org   2m25s
    redhat-operators      Red Hat Operators     grpc   Red Hat       55m

  2. Use the oc describe command to get more details and status about a catalog source:

    $ oc describe catalogsource example-catalog -n openshift-marketplace

    Example output

    Name:         example-catalog
    Namespace:    openshift-marketplace
    Labels:       <none>
    Annotations:  operatorframework.io/managed-by: marketplace-operator
                  target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
    API Version:  operators.coreos.com/v1alpha1
    Kind:         CatalogSource
    # ...
    Status:
      Connection State:
        Address:              example-catalog.openshift-marketplace.svc:50051
        Last Connect:         2021-09-09T17:07:35Z
        Last Observed State:  TRANSIENT_FAILURE
      Registry Service:
        Created At:         2021-09-09T17:05:45Z
        Port:               50051
        Protocol:           grpc
        Service Name:       example-catalog
        Service Namespace:  openshift-marketplace
    # ...

    In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.

  3. List the pods in the namespace where your catalog source was created:

    $ oc get pods -n openshift-marketplace

    Example output

    NAME                                    READY   STATUS             RESTARTS   AGE
    certified-operators-cv9nn               1/1     Running            0          36m
    community-operators-6v8lp               1/1     Running            0          36m
    marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
    example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
    redhat-operators-smxx8                  1/1     Running            0          36m

    When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index image.

  4. Use the oc describe command to inspect a pod for more detailed information:

    $ oc describe pod example-catalog-bwt8z -n openshift-marketplace

    Example output

    Name:         example-catalog-bwt8z
    Namespace:    openshift-marketplace
    Priority:     0
    Node:         ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
    ...
    Events:
      Type     Reason          Age                From               Message
      ----     ------          ----               ----               -------
      Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
      Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
      Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
      Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
      Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
      Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
      Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull

    In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.


7.3.4. Querying Operator pod status

You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List Operators running in the cluster. The output includes Operator version, availability, and up-time information:

    $ oc get clusteroperators
  2. List Operator pods running in the Operator’s namespace, plus pod status, restarts, and age:

    $ oc get pod -n <operator_namespace>
  3. Output a detailed Operator pod summary:

    $ oc describe pod <operator_pod_name> -n <operator_namespace>

7.3.5. Gathering Operator logs

If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).
  • You have the fully qualified domain names of the control plane nodes.

Procedure

  1. List the Operator pods that are running in the Operator’s namespace, plus the pod status, restarts, and age:

    $ oc get pods -n <operator_namespace>
  2. Review logs for an Operator pod:

    $ oc logs pod/<pod_name> -n <operator_namespace>

    If an Operator pod has multiple containers, the preceding command produces an error that includes the name of each container. Query logs from an individual container:

    $ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>
  3. If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.

    1. List pods on each control plane node:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods
    2. For any Operator pods not showing a Ready status, inspect the pod’s status in detail. Replace <operator_pod_id> with the Operator pod’s ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>
    3. List containers related to an Operator pod:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>
    4. For any Operator container not showing a Ready status, inspect the container’s status in detail. Replace <container_id> with a container ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
    5. Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
      Note

      OpenShift Dedicated 4 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Dedicated API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

7.4. Investigating pod issues

OpenShift Dedicated leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Dedicated 4.

After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed.

The first thing to check when pod issues arise is the pod’s status. If an explicit pod failure has occurred, observe the pod’s error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running pods on the command line, or start a debug pod with root access based on a problematic pod’s deployment configuration.

7.4.1. Understanding pod error states

Pod failures return explicit error states that can be observed in the status field in the output of oc get pods. Pod error states cover image, container, and container network related failures.

The following table provides a list of pod error states along with their descriptions.

Table 7.3. Pod error states
Pod error state | Description

ErrImagePull

Generic image retrieval error.

ErrImagePullBackOff

Image retrieval failed and is backed off.

ErrInvalidImageName

The specified image name was invalid.

ErrImageInspect

Image inspection did not succeed.

ErrImageNeverPull

PullPolicy is set to NeverPullImage and the target image is not present locally on the host.

ErrRegistryUnavailable

When attempting to retrieve an image from a registry, an HTTP error was encountered.

ErrContainerNotFound

The specified container is either not present or not managed by the kubelet, within the declared pod.

ErrRunInitContainer

Container initialization failed.

ErrRunContainer

None of the pod’s containers started successfully.

ErrKillContainer

None of the pod’s containers were killed successfully.

ErrCrashLoopBackOff

A container has terminated. The kubelet will not attempt to restart it.

ErrVerifyNonRoot

A container or image attempted to run with root privileges.

ErrCreatePodSandbox

Pod sandbox creation did not succeed.

ErrConfigPodSandbox

Pod sandbox configuration was not obtained.

ErrKillPodSandbox

A pod sandbox did not stop successfully.

ErrSetupNetwork

Network initialization failed.

ErrTeardownNetwork

Network termination failed.
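The error states above surface in the STATUS column of oc get pods. As a quick triage aid, a sketch like the following filters that output down to pods that are not in a healthy state; the sample rows below are hypothetical, and on a real cluster you would pipe oc get pods into the filter instead.

```shell
# Hypothetical `oc get pods` output used to demonstrate the filter.
sample='NAME                      READY   STATUS             RESTARTS   AGE
example-app-1-build       0/1     Completed          0          10m
example-app-2-vqx5t       1/1     Running            0          8m
example-catalog-bwt8z     0/1     ImagePullBackOff   0          3m'

# Print the name and status of every pod that is neither Running nor
# Completed, i.e. pods likely to be in one of the error states above.
printf '%s\n' "$sample" |
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" {print $1, $3}'
```

Once a pod in an error state is identified, the procedures in the following sections narrow down whether the image, container, or network is at fault.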

7.4.2. Reviewing pod status

You can query pod status and error states. You can also query a pod’s associated deployment configuration and review base image availability.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).
  • You have installed skopeo.

Procedure

  1. Switch into a project:

    $ oc project <project_name>
  2. List pods running within the namespace, as well as pod status, error states, restarts, and age:

    $ oc get pods
  3. Determine whether the namespace is managed by a deployment configuration:

    $ oc status

    If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference.

  4. Inspect the base image referenced in the preceding command’s output:

    $ skopeo inspect docker://<image_reference>
  5. If the base image reference is not correct, update the reference in the deployment configuration:

    $ oc edit deployment/my-deployment
  6. If you changed the deployment configuration, the configuration automatically redeploys when you save your changes and exit the editor. Watch pod status as the deployment progresses to determine whether the issue has been resolved:

    $ oc get pods -w
  7. Review events within the namespace for diagnostic information relating to pod failures:

    $ oc get events

7.4.3. Inspecting pod and container logs

You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Query logs for a specific pod:

    $ oc logs <pod_name>
  2. Query logs for a specific container within a pod:

    $ oc logs <pod_name> -c <container_name>

    Logs retrieved by using the preceding oc logs commands are composed of messages sent to stdout within pods or containers.

  3. Inspect logs contained in /var/log within a pod.

    1. List log files and subdirectories contained in /var/log within a pod:

      $ oc exec <pod_name> -- ls -alh /var/log

      Example output

      total 124K
      drwxr-xr-x. 1 root root   33 Aug 11 11:23 .
      drwxr-xr-x. 1 root root   28 Sep  6  2022 ..
      -rw-rw----. 1 root utmp    0 Jul 10 10:31 btmp
      -rw-r--r--. 1 root root  33K Jul 17 10:07 dnf.librepo.log
      -rw-r--r--. 1 root root  69K Jul 17 10:07 dnf.log
      -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log
      -rw-r--r--. 1 root root  480 Jul 17 10:07 hawkey.log
      -rw-rw-r--. 1 root utmp    0 Jul 10 10:31 lastlog
      drwx------. 2 root root   23 Aug 11 11:14 openshift-apiserver
      drwx------. 2 root root    6 Jul 10 10:31 private
      drwxr-xr-x. 1 root root   22 Mar  9 08:05 rhsm
      -rw-rw-r--. 1 root utmp    0 Jul 10 10:31 wtmp

    2. Query a specific log file contained in /var/log within a pod:

      $ oc exec <pod_name> -- cat /var/log/<path_to_log>

      Example output

      2023-07-10T10:29:38+0000 INFO --- logging initialized ---
      2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms
      2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile
      2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories.
      2023-07-10T10:29:38+0000 INFO Unable to read consumer identity
      2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode.
      2023-07-10T10:29:38+0000 INFO

    3. List log files and subdirectories contained in /var/log within a specific container:

      $ oc exec <pod_name> -c <container_name> -- ls /var/log
    4. Query a specific log file contained in /var/log within a specific container:

      $ oc exec <pod_name> -c <container_name> -- cat /var/log/<path_to_log>

7.4.4. Accessing running pods

You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Switch into the project that contains the pod you would like to access. This is necessary because the oc rsh command does not accept the -n namespace option:

    $ oc project <namespace>
  2. Start a remote shell into a pod:

    $ oc rsh <pod_name>

    If a pod has multiple containers, oc rsh defaults to the first container unless -c <container_name> is specified.
  3. Start a remote shell into a specific container within a pod:

    $ oc rsh -c <container_name> pod/<pod_name>
  4. Create a port forwarding session to a port on a pod:

    $ oc port-forward <pod_name> <host_port>:<pod_port>

    Enter Ctrl+C to cancel the port forwarding session.

7.4.5. Starting debug pods with root access

You can start a debug pod with root access, based on a problematic pod’s deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Start a debug pod with root access, based on a deployment.

    1. Obtain a project’s deployment name:

      $ oc get deployment -n <project_name>
    2. Start a debug pod with root privileges, based on the deployment:

      $ oc debug deployment/my-deployment --as-root -n <project_name>
  2. Start a debug pod with root access, based on a deployment configuration.

    1. Obtain a project’s deployment configuration name:

      $ oc get deploymentconfigs -n <project_name>
    2. Start a debug pod with root privileges, based on the deployment configuration:

      $ oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>
Note

You can append -- <command> to the preceding oc debug commands to run individual commands within a debug pod, instead of running an interactive shell.

7.4.6. Copying files to and from pods and containers

You can copy files to and from a pod to test configuration changes or gather diagnostic information.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Copy a file to a pod:

    $ oc cp <local_path> <pod_name>:/<path> -c <container_name>

    The first container in a pod is selected if the -c option is not specified.
  2. Copy a file from a pod:

    $ oc cp <pod_name>:/<path> -c <container_name> <local_path>

    The first container in a pod is selected if the -c option is not specified.
    Note

    For oc cp to function, the tar binary must be available within the container.

7.5. Troubleshooting the Source-to-Image process

7.5.1. Strategies for Source-to-Image troubleshooting

Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source.

To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages:

  1. During the build configuration stage, a build pod is used to create an application container image from a base image and application source code.
  2. During the deployment configuration stage, a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds.
  3. After the deployment pod has started the application pods, application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in aRunning state. In this scenario, you can access running application pods to investigate application failures within a pod.

When troubleshooting S2I issues, follow this strategy:

  1. Monitor build, deployment, and application pod status
  2. Determine the stage of the S2I process where the problem occurred
  3. Review logs corresponding to the failed stage
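S2I pod names follow a convention that makes the stage recognizable from the name alone: build pods end in -build, deployment pods end in -deploy, and application pods carry a random suffix. A minimal sketch of that mapping, using hypothetical pod names:

```shell
# Map a pod name to its S2I stage based on its suffix. The naming
# convention (<application_name>-<build_number>-build / -deploy) follows
# the log commands shown later in this section.
s2i_stage() {
  case "$1" in
    *-build)  echo "build stage" ;;
    *-deploy) echo "deployment stage" ;;
    *)        echo "application pod" ;;
  esac
}

s2i_stage my-app-1-build
s2i_stage my-app-1-deploy
s2i_stage my-app-1-akdlg
```

Classifying the failed pod this way tells you which of the log commands in the next section to reach for first.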

7.5.2. Gathering Source-to-Image diagnostic data

The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment, and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Watch the pod status throughout the S2I process to determine at which stage a failure occurs:

    $ oc get pods -w

    Use -w to monitor pods for changes until you quit the command by using Ctrl+C.
  2. Review a failed pod’s logs for errors.

    • If the build pod fails, review the build pod’s logs:

      $ oc logs -f pod/<application_name>-<build_number>-build
      Note

      Alternatively, you can review the build configuration’s logs by using oc logs -f bc/<application_name>. The build configuration’s logs include the logs from the build pod.

    • If the deployment pod fails, review the deployment pod’s logs:

      $ oc logs -f pod/<application_name>-<build_number>-deploy
      Note

      Alternatively, you can review the deployment configuration’s logs by using oc logs -f dc/<application_name>. This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy.

    • If an application pod fails, or if an application is not behaving as expected within a running application pod, review the application pod’s logs:

      $ oc logs -f pod/<application_name>-<build_number>-<random_string>

7.5.3. Gathering application diagnostic data to investigate application failures

Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies:

  • Review events relating to the application pods.
  • Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Logging framework.
  • Test application functionality interactively and run diagnostic tools in an application container.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg:

    $ oc describe pod/my-app-1-akdlg
  2. Review logs from an application pod:

    $ oc logs -f pod/my-app-1-akdlg
  3. Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout.

    1. If an application log can be accessed without root privileges within a pod, concatenate the log file as follows:

      $ oc exec my-app-1-akdlg -- cat /var/log/my-application.log
    2. If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project’s DeploymentConfig object. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation:

      $ oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log
      Note

      You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command>.

  4. Test application functionality interactively and run diagnostic tools, in an application container with an interactive shell.

    1. Start an interactive shell on the application container:

      $ oc exec -it my-app-1-akdlg -- /bin/bash
    2. Test application functionality interactively from within the shell. For example, you can run the container’s entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process.
    3. Run diagnostic binaries available within the container.

      Note

      Root privileges are required to run some diagnostic binaries. In these situations, you can start a debug pod with root access, based on a problematic pod’s DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root. Then, you can run diagnostic binaries as root from within the debug pod.

7.6. Troubleshooting storage issues

7.6.1. Resolving multi-attach errors

When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node.

However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume.

A multi-attach error is reported:

Example output

Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition
Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4

Procedure

To resolve the multi-attach issue, use one of the following solutions:

  • Enable multiple attachments by using RWX volumes.

    For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors.

  • Recover or delete the failed node when using an RWO volume.

    For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes.

    If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached.

    $ oc delete pod <old_pod> --force=true --grace-period=0

    This command deletes the volumes stuck on shutdown or crashed nodes after six minutes.
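In claim terms, the RWX approach described above amounts to requesting the ReadWriteMany access mode. The following is an illustrative sketch only, not a resource from this document: the claim name is invented and the storage class name is a placeholder that must refer to RWX-capable storage.

```yaml
# Illustrative PersistentVolumeClaim requesting shared (RWX) access.
# The storageClassName is a placeholder; the backing storage must support RWX.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <rwx-capable-storage-class>
  resources:
    requests:
      storage: 10Gi
```

Because every node can mount an RWX volume, a pod rescheduled after a node failure can attach the volume without waiting for the failed node to release it.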

7.7. Investigating monitoring issues

OpenShift Dedicated includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Dedicated 4, cluster administrators can optionally enable monitoring for user-defined projects.

Use these procedures if the following issues occur:

  • Your own metrics are unavailable.
  • Prometheus is consuming a lot of disk space.
  • The KubePersistentVolumeFillingUp alert is firing for Prometheus.

7.7.1. Investigating why user-defined project metrics are unavailable

ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).
  • You have enabled and configured monitoring for user-defined projects.
  • You have created a ServiceMonitor resource.

Procedure

  1. Ensure that your project and resources are not excluded from user workload monitoring. The following examples use the ns1 project.

    1. Verify that the project does not have the openshift.io/user-monitoring=false label attached:

      $ oc get namespace ns1 --show-labels | grep 'openshift.io/user-monitoring=false'
      Note

      The default label set for user workload projects is openshift.io/user-monitoring=true. However, the label is not visible unless you manually apply it.

    2. Verify that the ServiceMonitor and PodMonitor resources do not have the openshift.io/user-monitoring=false label attached. The following example checks the prometheus-example-monitor service monitor.

      $ oc -n ns1 get servicemonitor prometheus-example-monitor --show-labels | grep 'openshift.io/user-monitoring=false'
    3. If the label is attached, remove the label:

      Example of removing the label from the project

      $ oc label namespace ns1 'openshift.io/user-monitoring-'

      Example of removing the label from the resource

      $ oc -n ns1 label servicemonitor prometheus-example-monitor 'openshift.io/user-monitoring-'

      Example output

      namespace/ns1 unlabeled

  2. Check that the corresponding labels match in the service and ServiceMonitor resource configurations. The following examples use the prometheus-example-app service, the prometheus-example-monitor service monitor, and the ns1 project.

    1. Obtain the label defined in the service.

      $ oc -n ns1 get service prometheus-example-app -o yaml

      Example output

        labels:
          app: prometheus-example-app

    2. Check that the matchLabels definition in the ServiceMonitor resource configuration matches the label output in the previous step.

      $ oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml

      Example output

      apiVersion: v1
      kind: ServiceMonitor
      metadata:
        name: prometheus-example-monitor
        namespace: ns1
      spec:
        endpoints:
        - interval: 30s
          port: web
          scheme: http
        selector:
          matchLabels:
            app: prometheus-example-app

      Note

      You can check service and ServiceMonitor resource labels as a developer with view permissions for the project.

  3. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project.

    1. List the pods in the openshift-user-workload-monitoring project:

      $ oc -n openshift-user-workload-monitoring get pods

      Example output

      NAME                                   READY   STATUS    RESTARTS   AGE
      prometheus-operator-776fcbbd56-2nbfm   2/2     Running   0          132m
      prometheus-user-workload-0             5/5     Running   1          132m
      prometheus-user-workload-1             5/5     Running   1          132m
      thanos-ruler-user-workload-0           3/3     Running   0          132m
      thanos-ruler-user-workload-1           3/3     Running   0          132m

    2. Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm:

      $ oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator

      If there is an issue with the service monitor, the logs might include an error similar to this example:

      level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload
  4. Review the target status for your endpoint on the Metrics targets page in the OpenShift Dedicated web console UI.

    1. Log in to the OpenShift Dedicated web console and go to Observe → Targets.
    2. Locate the metrics endpoint in the list, and review the status of the target in the Status column.
    3. If the Status is Down, click the URL for the endpoint to view more information on the Target Details page for that metrics target.
  5. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project.

    1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
    2. Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheusOperator:
            logLevel: debug
      # ...
    3. Save the file to apply the changes. The affected prometheus-operator pod is automatically redeployed.
    4. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project:

      $ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level"

      Example output

              - --log-level=debug

      Debug level logging will show all calls made by the Prometheus Operator.

    5. Check that the prometheus-operator pod is running:

      $ oc -n openshift-user-workload-monitoring get pods
      Note

      If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully.

    6. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors.


7.7.2. Determining why Prometheus is consuming a lot of disk space

Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values.

Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
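The multiplicative effect described above can be sketched with a short calculation. This is an illustrative example only, not OpenShift code; the label names and value counts are assumptions:

```python
from itertools import product

# Each label's set of possible values. "customer_id" stands in for an
# unbound attribute: its value set grows without limit.
labels = {
    "endpoint": ["/login", "/checkout"],           # bounded: 2 values
    "status": ["200", "404", "500"],               # bounded: 3 values
    "customer_id": [str(i) for i in range(1000)],  # unbound: 1000 and growing
}

# Every unique combination of label values is a distinct time series.
series = list(product(*labels.values()))
print(len(series))  # 2 * 3 * 1000 = 6000 series for one metric
```

Each new customer_id value multiplies the series count again, which is why unbound labels dominate TSDB growth.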

You can use the following measures when Prometheus consumes a lot of disk space:

  • Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges.
  • Check the number of scrape samples that are being collected.
  • Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics.

    Note

    Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.

  • Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges.
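One way to reduce unbound attributes, as recommended above, is to collapse an unbounded value set into a small fixed set of categories before using it as a label value. A minimal sketch; the bucketing rule here is a hypothetical example, not part of OpenShift or Prometheus:

```python
def status_class(code: int) -> str:
    """Collapse an unbounded range of HTTP status codes into a few classes.

    Using "2xx" rather than the raw code as a label value keeps the number
    of possible label values, and therefore time series, small and bounded.
    """
    return f"{code // 100}xx"

print(status_class(200))  # 2xx
print(status_class(404))  # 4xx
print(status_class(503))  # 5xx
```

Apply the same idea to any high-cardinality attribute: record the raw value in logs if needed, but expose only the bounded category as a metric label.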

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. In the OpenShift Dedicated web console, go to Observe → Metrics.
  2. Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption:

    • By running the following query, you can identify the ten jobs that have the highest number of scrape samples:

      topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))
    • By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour:

      topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))
  3. Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts:

    • If the metrics relate to a user-defined project, review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels.
    • If the metrics relate to a core OpenShift Dedicated project, create a Red Hat support case on theRed Hat Customer Portal.
  4. Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a dedicated-admin:

    1. Get the Prometheus API route URL by running the following command:

      $ HOST=$(oc -n openshift-monitoring get route prometheus-k8s -o jsonpath='{.status.ingress[].host}')
    2. Extract an authentication token by running the following command:

      $ TOKEN=$(oc whoami -t)
    3. Query the TSDB status for Prometheus by running the following command:

      $ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/status/tsdb"

      Example output

      "status": "success",
      "data": {
        "headStats": {
          "numSeries": 507473,
          "numLabelPairs": 19832,
          "chunkCount": 946298,
          "minTime": 1712253600010,
          "maxTime": 1712257935346
        },
        "seriesCountByMetricName": [
          {"name": "etcd_request_duration_seconds_bucket", "value": 51840},
          {"name": "apiserver_request_sli_duration_seconds_bucket", "value": 47718},
      ...


7.8. Diagnosing OpenShift CLI (oc) issues

7.8.1. Understanding OpenShift CLI (oc) log levels

With the OpenShift CLI (oc), you can create applications and manage OpenShift Dedicated projects from a terminal.

If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command’s underlying operation, which in turn might provide insight into the nature of a failure.

oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions.

Table 7.4. OpenShift CLI (oc) log levels
Log level   Description
1 to 5      No additional logging to stderr.
6           Log API requests to stderr.
7           Log API requests and headers to stderr.
8           Log API requests, headers, and body, plus API response headers and body to stderr.
9           Log API requests, headers, and body, API response headers and body, plus curl requests to stderr.
10          Log API requests, headers, and body, API response headers and body, plus curl requests to stderr, in verbose detail.

7.8.2. Specifying OpenShift CLI (oc) log levels

You can investigate OpenShift CLI (oc) issues by increasing the command’s log level.

The OpenShift Dedicated user’s current session token is typically included in logged curl requests where required. You can also obtain the current user’s session token manually, for use when testing aspects of an oc command’s underlying process step-by-step.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  • Specify the oc log level when running an oc command:

    $ oc <command> --loglevel <log_level>

    where:

    <command>
    Specifies the command you are running.
    <log_level>
    Specifies the log level to apply to the command.
  • To obtain the current user’s session token, run the following command:

    $ oc whoami -t

    Example output

    sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6...

7.9. Red Hat managed resources

7.9.1. Overview

The following covers all OpenShift Dedicated resources that are managed or protected by the Service Reliability Engineering Platform (SRE-P) Team. Customers must not modify these resources because doing so can lead to cluster instability.

7.9.2. Hive managed resources

The following list displays the OpenShift Dedicated resources managed by OpenShift Hive, the centralized fleet configuration management system. These resources are in addition to the OpenShift Container Platform resources created during installation. OpenShift Hive continually attempts to maintain consistency across all OpenShift Dedicated clusters. Changes to OpenShift Dedicated resources should be made through OpenShift Cluster Manager so that OpenShift Cluster Manager and Hive are synchronized. Contact ocm-feedback@redhat.com if OpenShift Cluster Manager does not support modifying the resources in question.

Example 7.1. List of Hive managed resources

Resources:ConfigMap:-namespace: openshift-configname: rosa-brand-logo-namespace: openshift-consolename: custom-logo-namespace: openshift-deployment-validation-operatorname: deployment-validation-operator-config-namespace: openshift-file-integrityname: fr-aide-conf-namespace: openshift-managed-upgrade-operatorname: managed-upgrade-operator-config-namespace: openshift-monitoringname: cluster-monitoring-config-namespace: openshift-monitoringname: managed-namespaces-namespace: openshift-monitoringname: ocp-namespaces-namespace: openshift-monitoringname: osd-rebalance-infra-nodes-namespace: openshift-monitoringname: sre-dns-latency-exporter-code-namespace: openshift-monitoringname: sre-dns-latency-exporter-trusted-ca-bundle-namespace: openshift-monitoringname: sre-ebs-iops-reporter-code-namespace: openshift-monitoringname: sre-ebs-iops-reporter-trusted-ca-bundle-namespace: openshift-monitoringname: sre-stuck-ebs-vols-code-namespace: openshift-monitoringname: sre-stuck-ebs-vols-trusted-ca-bundle-namespace: openshift-securityname: osd-audit-policy-namespace: openshift-validation-webhookname: webhook-cert-namespace: openshiftname: motdEndpoints:-namespace: openshift-deployment-validation-operatorname: deployment-validation-operator-metrics-namespace: openshift-monitoringname: sre-dns-latency-exporter-namespace: openshift-monitoringname: sre-ebs-iops-reporter-namespace: openshift-monitoringname: sre-stuck-ebs-vols-namespace: openshift-scanningname: loggerservice-namespace: openshift-securityname: audit-exporter-namespace: openshift-validation-webhookname: validation-webhookNamespace:-name: dedicated-admin-name: openshift-addon-operator-name: openshift-aqua-name: openshift-aws-vpce-operator-name: openshift-backplane-name: openshift-backplane-cee-name: openshift-backplane-csa-name: openshift-backplane-cse-name: openshift-backplane-csm-name: openshift-backplane-managed-scripts-name: openshift-backplane-mobb-name: openshift-backplane-srep-name: openshift-backplane-tam-name: 
openshift-cloud-ingress-operator-name: openshift-codeready-workspaces-name: openshift-compliance-name: openshift-compliance-monkey-name: openshift-container-security-name: openshift-custom-domains-operator-name: openshift-customer-monitoring-name: openshift-deployment-validation-operator-name: openshift-managed-node-metadata-operator-name: openshift-file-integrity-name: openshift-logging-name: openshift-managed-upgrade-operator-name: openshift-must-gather-operator-name: openshift-observability-operator-name: openshift-ocm-agent-operator-name: openshift-operators-redhat-name: openshift-osd-metrics-name: openshift-rbac-permissions-name: openshift-route-monitor-operator-name: openshift-scanning-name: openshift-security-name: openshift-splunk-forwarder-operator-name: openshift-sre-pruning-name: openshift-suricata-name: openshift-validation-webhook-name: openshift-velero-name: openshift-monitoring-name: openshift-name: openshift-cluster-version-name: keycloak-name: goalert-name: configure-goalert-operatorReplicationController:-namespace: openshift-monitoringname: sre-ebs-iops-reporter-1-namespace: openshift-monitoringname: sre-stuck-ebs-vols-1Secret:-namespace: openshift-authenticationname: v4-0-config-user-idp-0-file-data-namespace: openshift-authenticationname: v4-0-config-user-template-error-namespace: openshift-authenticationname: v4-0-config-user-template-login-namespace: openshift-authenticationname: v4-0-config-user-template-provider-selection-namespace: openshift-configname: htpasswd-secret-namespace: openshift-configname: osd-oauth-templates-errors-namespace: openshift-configname: osd-oauth-templates-login-namespace: openshift-configname: osd-oauth-templates-providers-namespace: openshift-configname: rosa-oauth-templates-errors-namespace: openshift-configname: rosa-oauth-templates-login-namespace: openshift-configname: rosa-oauth-templates-providers-namespace: openshift-configname: support-namespace: openshift-configname: 
tony-devlab-primary-cert-bundle-secret-namespace: openshift-ingressname: tony-devlab-primary-cert-bundle-secret-namespace: openshift-kube-apiservername: user-serving-cert-000-namespace: openshift-kube-apiservername: user-serving-cert-001-namespace: openshift-monitoringname: dms-secret-namespace: openshift-monitoringname: observatorium-credentials-namespace: openshift-monitoringname: pd-secret-namespace: openshift-scanningname: clam-secrets-namespace: openshift-scanningname: logger-secrets-namespace: openshift-securityname: splunk-authServiceAccount:-namespace: openshift-backplane-managed-scriptsname: osd-backplane-namespace: openshift-backplane-srepname: 6804d07fb268b8285b023bcf65392f0e-namespace: openshift-backplane-srepname: osd-delete-ownerrefs-serviceaccounts-namespace: openshift-backplanename: osd-delete-backplane-serviceaccounts-namespace: openshift-cloud-ingress-operatorname: cloud-ingress-operator-namespace: openshift-custom-domains-operatorname: custom-domains-operator-namespace: openshift-managed-upgrade-operatorname: managed-upgrade-operator-namespace: openshift-machine-apiname: osd-disable-cpms-namespace: openshift-marketplacename: osd-patch-subscription-source-namespace: openshift-monitoringname: configure-alertmanager-operator-namespace: openshift-monitoringname: osd-cluster-ready-namespace: openshift-monitoringname: osd-rebalance-infra-nodes-namespace: openshift-monitoringname: sre-dns-latency-exporter-namespace: openshift-monitoringname: sre-ebs-iops-reporter-namespace: openshift-monitoringname: sre-stuck-ebs-vols-namespace: openshift-network-diagnosticsname: sre-pod-network-connectivity-check-pruner-namespace: openshift-ocm-agent-operatorname: ocm-agent-operator-namespace: openshift-rbac-permissionsname: rbac-permissions-operator-namespace: openshift-splunk-forwarder-operatorname: splunk-forwarder-operator-namespace: openshift-sre-pruningname: bz1980755-namespace: openshift-scanningname: logger-sa-namespace: openshift-scanningname: 
scanner-sa-namespace: openshift-sre-pruningname: sre-pruner-sa-namespace: openshift-suricataname: suricata-sa-namespace: openshift-validation-webhookname: validation-webhook-namespace: openshift-veleroname: managed-velero-operator-namespace: openshift-veleroname: velero-namespace: openshift-backplane-srepname: UNIQUE_BACKPLANE_SERVICEACCOUNT_IDService:-namespace: openshift-deployment-validation-operatorname: deployment-validation-operator-metrics-namespace: openshift-monitoringname: sre-dns-latency-exporter-namespace: openshift-monitoringname: sre-ebs-iops-reporter-namespace: openshift-monitoringname: sre-stuck-ebs-vols-namespace: openshift-scanningname: loggerservice-namespace: openshift-securityname: audit-exporter-namespace: openshift-validation-webhookname: validation-webhookAddonOperator:-name: addon-operatorValidatingWebhookConfiguration:-name: sre-hiveownership-validation-name: sre-namespace-validation-name: sre-pod-validation-name: sre-prometheusrule-validation-name: sre-regular-user-validation-name: sre-scc-validation-name: sre-techpreviewnoupgrade-validationDaemonSet:-namespace: openshift-monitoringname: sre-dns-latency-exporter-namespace: openshift-scanningname: logger-namespace: openshift-scanningname: scanner-namespace: openshift-securityname: audit-exporter-namespace: openshift-suricataname: suricata-namespace: openshift-validation-webhookname: validation-webhookDeploymentConfig:-namespace: openshift-monitoringname: sre-ebs-iops-reporter-namespace: openshift-monitoringname: sre-stuck-ebs-volsClusterRoleBinding:-name: aqua-scanner-binding-name: backplane-cluster-admin-name: backplane-impersonate-cluster-admin-name: bz1980755-name: configure-alertmanager-operator-prom-name: dedicated-admins-cluster-name: dedicated-admins-registry-cas-cluster-name: logger-clusterrolebinding-name: openshift-backplane-managed-scripts-reader-name: osd-cluster-admin-name: osd-cluster-ready-name: osd-delete-backplane-script-resources-name: 
osd-delete-ownerrefs-serviceaccounts-name: osd-patch-subscription-source-name: osd-rebalance-infra-nodes-name: pcap-dedicated-admins-name: splunk-forwarder-operator-name: splunk-forwarder-operator-clusterrolebinding-name: sre-pod-network-connectivity-check-pruner-name: sre-pruner-buildsdeploys-pruning-name: velero-name: webhook-validationClusterRole:-name: backplane-cee-readers-cluster-name: backplane-impersonate-cluster-admin-name: backplane-readers-cluster-name: backplane-srep-admins-cluster-name: backplane-srep-admins-project-name: bz1980755-name: dedicated-admins-aggregate-cluster-name: dedicated-admins-aggregate-project-name: dedicated-admins-cluster-name: dedicated-admins-manage-operators-name: dedicated-admins-project-name: dedicated-admins-registry-cas-cluster-name: dedicated-readers-name: image-scanner-name: logger-clusterrole-name: openshift-backplane-managed-scripts-reader-name: openshift-splunk-forwarder-operator-name: osd-cluster-ready-name: osd-custom-domains-dedicated-admin-cluster-name: osd-delete-backplane-script-resources-name: osd-delete-backplane-serviceaccounts-name: osd-delete-ownerrefs-serviceaccounts-name: osd-get-namespace-name: osd-netnamespaces-dedicated-admin-cluster-name: osd-patch-subscription-source-name: osd-readers-aggregate-name: osd-rebalance-infra-nodes-name: osd-rebalance-infra-nodes-openshift-pod-rebalance-name: pcap-dedicated-admins-name: splunk-forwarder-operator-name: sre-allow-read-machine-info-name: sre-pruner-buildsdeploys-cr-name: webhook-validation-crRoleBinding:-namespace: kube-systemname: cloud-ingress-operator-cluster-config-v1-reader-namespace: kube-systemname: managed-velero-operator-cluster-config-v1-reader-namespace: openshift-aquaname: dedicated-admins-openshift-aqua-namespace: openshift-backplane-managed-scriptsname: backplane-cee-mustgather-namespace: openshift-backplane-managed-scriptsname: backplane-srep-mustgather-namespace: openshift-backplane-managed-scriptsname: 
osd-delete-backplane-script-resources-namespace: openshift-cloud-ingress-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-codeready-workspacesname: dedicated-admins-openshift-codeready-workspaces-namespace: openshift-configname: dedicated-admins-project-request-namespace: openshift-configname: dedicated-admins-registry-cas-project-namespace: openshift-configname: muo-pullsecret-reader-namespace: openshift-configname: oao-openshiftconfig-reader-namespace: openshift-configname: osd-cluster-ready-namespace: openshift-custom-domains-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-customer-monitoringname: dedicated-admins-openshift-customer-monitoring-namespace: openshift-customer-monitoringname: prometheus-k8s-openshift-customer-monitoring-namespace: openshift-dnsname: dedicated-admins-openshift-dns-namespace: openshift-dnsname: osd-rebalance-infra-nodes-openshift-dns-namespace: openshift-image-registryname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-ingress-operatorname: cloud-ingress-operator-namespace: openshift-ingressname: cloud-ingress-operator-namespace: openshift-kube-apiservername: cloud-ingress-operator-namespace: openshift-machine-apiname: cloud-ingress-operator-namespace: openshift-loggingname: admin-dedicated-admins-namespace: openshift-loggingname: admin-system:serviceaccounts:dedicated-admin-namespace: openshift-loggingname: openshift-logging-dedicated-admins-namespace: openshift-loggingname: openshift-logging:serviceaccounts:dedicated-admin-namespace: openshift-machine-apiname: osd-cluster-ready-namespace: openshift-machine-apiname: sre-ebs-iops-reporter-read-machine-info-namespace: openshift-machine-apiname: sre-stuck-ebs-vols-read-machine-info-namespace: openshift-managed-node-metadata-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-machine-apiname: osd-disable-cpms-namespace: openshift-marketplacename: 
dedicated-admins-openshift-marketplace-namespace: openshift-monitoringname: backplane-cee-namespace: openshift-monitoringname: muo-monitoring-reader-namespace: openshift-monitoringname: oao-monitoring-manager-namespace: openshift-monitoringname: osd-cluster-ready-namespace: openshift-monitoringname: osd-rebalance-infra-nodes-openshift-monitoring-namespace: openshift-monitoringname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-monitoringname: sre-dns-latency-exporter-namespace: openshift-monitoringname: sre-ebs-iops-reporter-namespace: openshift-monitoringname: sre-stuck-ebs-vols-namespace: openshift-must-gather-operatorname: backplane-cee-mustgather-namespace: openshift-must-gather-operatorname: backplane-srep-mustgather-namespace: openshift-must-gather-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-network-diagnosticsname: sre-pod-network-connectivity-check-pruner-namespace: openshift-network-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-ocm-agent-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-operators-redhatname: admin-dedicated-admins-namespace: openshift-operators-redhatname: admin-system:serviceaccounts:dedicated-admin-namespace: openshift-operators-redhatname: openshift-operators-redhat-dedicated-admins-namespace: openshift-operators-redhatname: openshift-operators-redhat:serviceaccounts:dedicated-admin-namespace: openshift-operatorsname: dedicated-admins-openshift-operators-namespace: openshift-osd-metricsname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-osd-metricsname: prometheus-k8s-namespace: openshift-rbac-permissionsname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-rbac-permissionsname: prometheus-k8s-namespace: openshift-route-monitor-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-scanningname: 
scanner-rolebinding-namespace: openshift-securityname: osd-rebalance-infra-nodes-openshift-security-namespace: openshift-securityname: prometheus-k8s-namespace: openshift-splunk-forwarder-operatorname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-suricataname: suricata-rolebinding-namespace: openshift-user-workload-monitoringname: dedicated-admins-uwm-config-create-namespace: openshift-user-workload-monitoringname: dedicated-admins-uwm-config-edit-namespace: openshift-user-workload-monitoringname: dedicated-admins-uwm-managed-am-secret-namespace: openshift-user-workload-monitoringname: osd-rebalance-infra-nodes-openshift-user-workload-monitoring-namespace: openshift-veleroname: osd-rebalance-infra-nodes-openshift-pod-rebalance-namespace: openshift-veleroname: prometheus-k8sRole:-namespace: kube-systemname: cluster-config-v1-reader-namespace: kube-systemname: cluster-config-v1-reader-cio-namespace: openshift-aquaname: dedicated-admins-openshift-aqua-namespace: openshift-backplane-managed-scriptsname: backplane-cee-pcap-collector-namespace: openshift-backplane-managed-scriptsname: backplane-srep-pcap-collector-namespace: openshift-backplane-managed-scriptsname: osd-delete-backplane-script-resources-namespace: openshift-codeready-workspacesname: dedicated-admins-openshift-codeready-workspaces-namespace: openshift-configname: dedicated-admins-project-request-namespace: openshift-configname: dedicated-admins-registry-cas-project-namespace: openshift-configname: muo-pullsecret-reader-namespace: openshift-configname: oao-openshiftconfig-reader-namespace: openshift-configname: osd-cluster-ready-namespace: openshift-customer-monitoringname: dedicated-admins-openshift-customer-monitoring-namespace: openshift-customer-monitoringname: prometheus-k8s-openshift-customer-monitoring-namespace: openshift-dnsname: dedicated-admins-openshift-dns-namespace: openshift-dnsname: osd-rebalance-infra-nodes-openshift-dns-namespace: openshift-ingress-operatorname: 
cloud-ingress-operator
- namespace: openshift-ingress
  name: cloud-ingress-operator
- namespace: openshift-kube-apiserver
  name: cloud-ingress-operator
- namespace: openshift-machine-api
  name: cloud-ingress-operator
- namespace: openshift-logging
  name: dedicated-admins-openshift-logging
- namespace: openshift-machine-api
  name: osd-cluster-ready
- namespace: openshift-machine-api
  name: osd-disable-cpms
- namespace: openshift-marketplace
  name: dedicated-admins-openshift-marketplace
- namespace: openshift-monitoring
  name: backplane-cee
- namespace: openshift-monitoring
  name: muo-monitoring-reader
- namespace: openshift-monitoring
  name: oao-monitoring-manager
- namespace: openshift-monitoring
  name: osd-cluster-ready
- namespace: openshift-monitoring
  name: osd-rebalance-infra-nodes-openshift-monitoring
- namespace: openshift-must-gather-operator
  name: backplane-cee-mustgather
- namespace: openshift-must-gather-operator
  name: backplane-srep-mustgather
- namespace: openshift-network-diagnostics
  name: sre-pod-network-connectivity-check-pruner
- namespace: openshift-operators
  name: dedicated-admins-openshift-operators
- namespace: openshift-osd-metrics
  name: prometheus-k8s
- namespace: openshift-rbac-permissions
  name: prometheus-k8s
- namespace: openshift-scanning
  name: scanner-role
- namespace: openshift-security
  name: osd-rebalance-infra-nodes-openshift-security
- namespace: openshift-security
  name: prometheus-k8s
- namespace: openshift-suricata
  name: suricata-role
- namespace: openshift-user-workload-monitoring
  name: dedicated-admins-user-workload-monitoring-create-cm
- namespace: openshift-user-workload-monitoring
  name: dedicated-admins-user-workload-monitoring-manage-am-secret
- namespace: openshift-user-workload-monitoring
  name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring
- namespace: openshift-velero
  name: prometheus-k8s
CronJob:
- namespace: openshift-backplane-managed-scripts
  name: osd-delete-backplane-script-resources
- namespace: openshift-backplane-srep
  name: osd-delete-ownerrefs-serviceaccounts
- namespace: openshift-backplane
  name: osd-delete-backplane-serviceaccounts
- namespace: openshift-machine-api
  name: osd-disable-cpms
- namespace: openshift-marketplace
  name: osd-patch-subscription-source
- namespace: openshift-monitoring
  name: osd-rebalance-infra-nodes
- namespace: openshift-network-diagnostics
  name: sre-pod-network-connectivity-check-pruner
- namespace: openshift-sre-pruning
  name: builds-pruner
- namespace: openshift-sre-pruning
  name: bz1980755
- namespace: openshift-sre-pruning
  name: deployments-pruner
Job:
- namespace: openshift-monitoring
  name: osd-cluster-ready
CredentialsRequest:
- namespace: openshift-cloud-ingress-operator
  name: cloud-ingress-operator-credentials-aws
- namespace: openshift-cloud-ingress-operator
  name: cloud-ingress-operator-credentials-gcp
- namespace: openshift-monitoring
  name: sre-ebs-iops-reporter-aws-credentials
- namespace: openshift-monitoring
  name: sre-stuck-ebs-vols-aws-credentials
- namespace: openshift-velero
  name: managed-velero-operator-iam-credentials-aws
- namespace: openshift-velero
  name: managed-velero-operator-iam-credentials-gcp
APIScheme:
- namespace: openshift-cloud-ingress-operator
  name: rh-api
PublishingStrategy:
- namespace: openshift-cloud-ingress-operator
  name: publishingstrategy
ScanSettingBinding:
- namespace: openshift-compliance
  name: fedramp-high-ocp
- namespace: openshift-compliance
  name: fedramp-high-rhcos
ScanSetting:
- namespace: openshift-compliance
  name: osd
TailoredProfile:
- namespace: openshift-compliance
  name: rhcos4-high-rosa
OAuth:
- name: cluster
EndpointSlice:
- namespace: openshift-deployment-validation-operator
  name: deployment-validation-operator-metrics-rhtwg
- namespace: openshift-monitoring
  name: sre-dns-latency-exporter-4cw9r
- namespace: openshift-monitoring
  name: sre-ebs-iops-reporter-6tx5g
- namespace: openshift-monitoring
  name: sre-stuck-ebs-vols-gmdhs
- namespace: openshift-scanning
  name: loggerservice-zprbq
- namespace: openshift-security
  name: audit-exporter-nqfdk
- namespace: openshift-validation-webhook
  name: validation-webhook-97b8t
FileIntegrity:
- namespace: openshift-file-integrity
  name: osd-fileintegrity
MachineHealthCheck:
- namespace: openshift-machine-api
  name: srep-infra-healthcheck
- namespace: openshift-machine-api
  name: srep-metal-worker-healthcheck
- namespace: openshift-machine-api
  name: srep-worker-healthcheck
MachineSet:
- namespace: openshift-machine-api
  name: sbasabat-mc-qhqkn-infra-us-east-1a
- namespace: openshift-machine-api
  name: sbasabat-mc-qhqkn-worker-us-east-1a
ContainerRuntimeConfig:
- name: custom-crio
KubeletConfig:
- name: custom-kubelet
MachineConfig:
- name: 00-master-chrony
- name: 00-worker-chrony
SubjectPermission:
- namespace: openshift-rbac-permissions
  name: backplane-cee
- namespace: openshift-rbac-permissions
  name: backplane-csa
- namespace: openshift-rbac-permissions
  name: backplane-cse
- namespace: openshift-rbac-permissions
  name: backplane-csm
- namespace: openshift-rbac-permissions
  name: backplane-mobb
- namespace: openshift-rbac-permissions
  name: backplane-srep
- namespace: openshift-rbac-permissions
  name: backplane-tam
- namespace: openshift-rbac-permissions
  name: dedicated-admin-serviceaccounts
- namespace: openshift-rbac-permissions
  name: dedicated-admin-serviceaccounts-core-ns
- namespace: openshift-rbac-permissions
  name: dedicated-admins
- namespace: openshift-rbac-permissions
  name: dedicated-admins-alert-routing-edit
- namespace: openshift-rbac-permissions
  name: dedicated-admins-core-ns
- namespace: openshift-rbac-permissions
  name: dedicated-admins-customer-monitoring
- namespace: openshift-rbac-permissions
  name: osd-delete-backplane-serviceaccounts
VeleroInstall:
- namespace: openshift-velero
  name: cluster
PrometheusRule:
- namespace: openshift-monitoring
  name: rhmi-sre-cluster-admins
- namespace: openshift-monitoring
  name: rhoam-sre-cluster-admins
- namespace: openshift-monitoring
  name: sre-alertmanager-silences-active
- namespace: openshift-monitoring
  name: sre-alerts-stuck-builds
- namespace: openshift-monitoring
  name: sre-alerts-stuck-volumes
- namespace: openshift-monitoring
  name: sre-cloud-ingress-operator-offline-alerts
- namespace: openshift-monitoring
  name: sre-avo-pendingacceptance
- namespace: openshift-monitoring
  name: sre-configure-alertmanager-operator-offline-alerts
- namespace: openshift-monitoring
  name: sre-control-plane-resizing-alerts
- namespace: openshift-monitoring
  name: sre-dns-alerts
- namespace: openshift-monitoring
  name: sre-ebs-iops-burstbalance
- namespace: openshift-monitoring
  name: sre-elasticsearch-jobs
- namespace: openshift-monitoring
  name: sre-elasticsearch-managed-notification-alerts
- namespace: openshift-monitoring
  name: sre-excessive-memory
- namespace: openshift-monitoring
  name: sre-fr-alerts-low-disk-space
- namespace: openshift-monitoring
  name: sre-haproxy-reload-fail
- namespace: openshift-monitoring
  name: sre-internal-slo-recording-rules
- namespace: openshift-monitoring
  name: sre-kubequotaexceeded
- namespace: openshift-monitoring
  name: sre-leader-election-master-status-alerts
- namespace: openshift-monitoring
  name: sre-managed-kube-apiserver-missing-on-node
- namespace: openshift-monitoring
  name: sre-managed-kube-controller-manager-missing-on-node
- namespace: openshift-monitoring
  name: sre-managed-kube-scheduler-missing-on-node
- namespace: openshift-monitoring
  name: sre-managed-node-metadata-operator-alerts
- namespace: openshift-monitoring
  name: sre-managed-notification-alerts
- namespace: openshift-monitoring
  name: sre-managed-upgrade-operator-alerts
- namespace: openshift-monitoring
  name: sre-managed-velero-operator-alerts
- namespace: openshift-monitoring
  name: sre-node-unschedulable
- namespace: openshift-monitoring
  name: sre-oauth-server
- namespace: openshift-monitoring
  name: sre-pending-csr-alert
- namespace: openshift-monitoring
  name: sre-proxy-managed-notification-alerts
- namespace: openshift-monitoring
  name: sre-pruning
- namespace: openshift-monitoring
  name: sre-pv
- namespace: openshift-monitoring
  name: sre-router-health
- namespace: openshift-monitoring
  name: sre-runaway-sdn-preventing-container-creation
- namespace: openshift-monitoring
  name: sre-slo-recording-rules
- namespace: openshift-monitoring
  name: sre-telemeter-client
- namespace: openshift-monitoring
  name: sre-telemetry-managed-labels-recording-rules
- namespace: openshift-monitoring
  name: sre-upgrade-send-managed-notification-alerts
- namespace: openshift-monitoring
  name: sre-uptime-sla
ServiceMonitor:
- namespace: openshift-monitoring
  name: sre-dns-latency-exporter
- namespace: openshift-monitoring
  name: sre-ebs-iops-reporter
- namespace: openshift-monitoring
  name: sre-stuck-ebs-vols
ClusterUrlMonitor:
- namespace: openshift-route-monitor-operator
  name: api
RouteMonitor:
- namespace: openshift-route-monitor-operator
  name: console
NetworkPolicy:
- namespace: openshift-deployment-validation-operator
  name: allow-from-openshift-insights
- namespace: openshift-deployment-validation-operator
  name: allow-from-openshift-olm
ManagedNotification:
- namespace: openshift-ocm-agent-operator
  name: sre-elasticsearch-managed-notifications
- namespace: openshift-ocm-agent-operator
  name: sre-managed-notifications
- namespace: openshift-ocm-agent-operator
  name: sre-proxy-managed-notifications
- namespace: openshift-ocm-agent-operator
  name: sre-upgrade-managed-notifications
OcmAgent:
- namespace: openshift-ocm-agent-operator
  name: ocmagent
- namespace: openshift-security
  name: audit-exporter
Console:
- name: cluster
CatalogSource:
- namespace: openshift-addon-operator
  name: addon-operator-catalog
- namespace: openshift-cloud-ingress-operator
  name: cloud-ingress-operator-registry
- namespace: openshift-compliance
  name: compliance-operator-registry
- namespace: openshift-container-security
  name: container-security-operator-registry
- namespace: openshift-custom-domains-operator
  name: custom-domains-operator-registry
- namespace: openshift-deployment-validation-operator
  name: deployment-validation-operator-catalog
- namespace: openshift-managed-node-metadata-operator
  name: managed-node-metadata-operator-registry
- namespace: openshift-file-integrity
  name: file-integrity-operator-registry
- namespace: openshift-managed-upgrade-operator
  name: managed-upgrade-operator-catalog
- namespace: openshift-monitoring
  name: configure-alertmanager-operator-registry
- namespace: openshift-must-gather-operator
  name: must-gather-operator-registry
- namespace: openshift-observability-operator
  name: observability-operator-catalog
- namespace: openshift-ocm-agent-operator
  name: ocm-agent-operator-registry
- namespace: openshift-osd-metrics
  name: osd-metrics-exporter-registry
- namespace: openshift-rbac-permissions
  name: rbac-permissions-operator-registry
- namespace: openshift-route-monitor-operator
  name: route-monitor-operator-registry
- namespace: openshift-splunk-forwarder-operator
  name: splunk-forwarder-operator-catalog
- namespace: openshift-velero
  name: managed-velero-operator-registry
OperatorGroup:
- namespace: openshift-addon-operator
  name: addon-operator-og
- namespace: openshift-aqua
  name: openshift-aqua
- namespace: openshift-cloud-ingress-operator
  name: cloud-ingress-operator
- namespace: openshift-codeready-workspaces
  name: openshift-codeready-workspaces
- namespace: openshift-compliance
  name: compliance-operator
- namespace: openshift-container-security
  name: container-security-operator
- namespace: openshift-custom-domains-operator
  name: custom-domains-operator
- namespace: openshift-customer-monitoring
  name: openshift-customer-monitoring
- namespace: openshift-deployment-validation-operator
  name: deployment-validation-operator-og
- namespace: openshift-managed-node-metadata-operator
  name: managed-node-metadata-operator
- namespace: openshift-file-integrity
  name: file-integrity-operator
- namespace: openshift-logging
  name: openshift-logging
- namespace: openshift-managed-upgrade-operator
  name: managed-upgrade-operator-og
- namespace: openshift-must-gather-operator
  name: must-gather-operator
- namespace: openshift-observability-operator
  name: observability-operator-og
- namespace: openshift-ocm-agent-operator
  name: ocm-agent-operator-og
- namespace: openshift-osd-metrics
  name: osd-metrics-exporter
- namespace: openshift-rbac-permissions
  name: rbac-permissions-operator
- namespace: openshift-route-monitor-operator
  name: route-monitor-operator
- namespace: openshift-splunk-forwarder-operator
  name: splunk-forwarder-operator-og
- namespace: openshift-velero
  name: managed-velero-operator
Subscription:
- namespace: openshift-addon-operator
  name: addon-operator
- namespace: openshift-cloud-ingress-operator
  name: cloud-ingress-operator
- namespace: openshift-compliance
  name: compliance-operator-sub
- namespace: openshift-container-security
  name: container-security-operator-sub
- namespace: openshift-custom-domains-operator
  name: custom-domains-operator
- namespace: openshift-deployment-validation-operator
  name: deployment-validation-operator
- namespace: openshift-managed-node-metadata-operator
  name: managed-node-metadata-operator
- namespace: openshift-file-integrity
  name: file-integrity-operator-sub
- namespace: openshift-managed-upgrade-operator
  name: managed-upgrade-operator
- namespace: openshift-monitoring
  name: configure-alertmanager-operator
- namespace: openshift-must-gather-operator
  name: must-gather-operator
- namespace: openshift-observability-operator
  name: observability-operator
- namespace: openshift-ocm-agent-operator
  name: ocm-agent-operator
- namespace: openshift-osd-metrics
  name: osd-metrics-exporter
- namespace: openshift-rbac-permissions
  name: rbac-permissions-operator
- namespace: openshift-route-monitor-operator
  name: route-monitor-operator
- namespace: openshift-splunk-forwarder-operator
  name: openshift-splunk-forwarder-operator
- namespace: openshift-velero
  name: managed-velero-operator
PackageManifest:
- namespace: openshift-splunk-forwarder-operator
  name: splunk-forwarder-operator
- namespace: openshift-addon-operator
  name: addon-operator
- namespace: openshift-rbac-permissions
  name: rbac-permissions-operator
- namespace: openshift-cloud-ingress-operator
  name: cloud-ingress-operator
- namespace: openshift-managed-node-metadata-operator
  name: managed-node-metadata-operator
- namespace: openshift-velero
  name: managed-velero-operator
- namespace: openshift-deployment-validation-operator
  name: managed-upgrade-operator
- namespace: openshift-managed-upgrade-operator
  name: managed-upgrade-operator
- namespace: openshift-container-security
  name: container-security-operator
- namespace: openshift-route-monitor-operator
  name: route-monitor-operator
- namespace: openshift-file-integrity
  name: file-integrity-operator
- namespace: openshift-custom-domains-operator
  name: managed-node-metadata-operator
- namespace: openshift-route-monitor-operator
  name: custom-domains-operator
- namespace: openshift-managed-upgrade-operator
  name: managed-upgrade-operator
- namespace: openshift-ocm-agent-operator
  name: ocm-agent-operator
- namespace: openshift-observability-operator
  name: observability-operator
- namespace: openshift-monitoring
  name: configure-alertmanager-operator
- namespace: openshift-must-gather-operator
  name: deployment-validation-operator
- namespace: openshift-osd-metrics
  name: osd-metrics-exporter
- namespace: openshift-compliance
  name: compliance-operator
- namespace: openshift-rbac-permissions
  name: rbac-permissions-operator
Status:
- {}
Project:
- name: dedicated-admin
- name: openshift-addon-operator
- name: openshift-aqua
- name: openshift-backplane
- name: openshift-backplane-cee
- name: openshift-backplane-csa
- name: openshift-backplane-cse
- name: openshift-backplane-csm
- name: openshift-backplane-managed-scripts
- name: openshift-backplane-mobb
- name: openshift-backplane-srep
- name: openshift-backplane-tam
- name: openshift-cloud-ingress-operator
- name: openshift-codeready-workspaces
- name: openshift-compliance
- name: openshift-container-security
- name: openshift-custom-domains-operator
- name: openshift-customer-monitoring
- name: openshift-deployment-validation-operator
- name: openshift-managed-node-metadata-operator
- name: openshift-file-integrity
- name: openshift-logging
- name: openshift-managed-upgrade-operator
- name: openshift-must-gather-operator
- name: openshift-observability-operator
- name: openshift-ocm-agent-operator
- name: openshift-operators-redhat
- name: openshift-osd-metrics
- name: openshift-rbac-permissions
- name: openshift-route-monitor-operator
- name: openshift-scanning
- name: openshift-security
- name: openshift-splunk-forwarder-operator
- name: openshift-sre-pruning
- name: openshift-suricata
- name: openshift-validation-webhook
- name: openshift-velero
ClusterResourceQuota:
- name: loadbalancer-quota
- name: persistent-volume-quota
SecurityContextConstraints:
- name: osd-scanning-scc
- name: osd-suricata-scc
- name: pcap-dedicated-admins
- name: splunkforwarder
SplunkForwarder:
- namespace: openshift-security
  name: splunkforwarder
Group:
- name: cluster-admins
- name: dedicated-admins
User:
- name: backplane-cluster-admin
Backup:
- namespace: openshift-velero
  name: daily-full-backup-20221123112305
- namespace: openshift-velero
  name: daily-full-backup-20221125042537
- namespace: openshift-velero
  name: daily-full-backup-20221126010038
- namespace: openshift-velero
  name: daily-full-backup-20221127010039
- namespace: openshift-velero
  name: daily-full-backup-20221128010040
- namespace: openshift-velero
  name: daily-full-backup-20221129050847
- namespace: openshift-velero
  name: hourly-object-backup-20221128051740
- namespace: openshift-velero
  name: hourly-object-backup-20221128061740
- namespace: openshift-velero
  name: hourly-object-backup-20221128071740
- namespace: openshift-velero
  name: hourly-object-backup-20221128081740
- namespace: openshift-velero
  name: hourly-object-backup-20221128091740
- namespace: openshift-velero
  name: hourly-object-backup-20221129050852
- namespace: openshift-velero
  name: hourly-object-backup-20221129051747
- namespace: openshift-velero
  name: weekly-full-backup-20221116184315
- namespace: openshift-velero
  name: weekly-full-backup-20221121033854
- namespace: openshift-velero
  name: weekly-full-backup-20221128020040
Schedule:
- namespace: openshift-velero
  name: daily-full-backup
- namespace: openshift-velero
  name: hourly-object-backup
- namespace: openshift-velero
  name: weekly-full-backup

7.9.3. OpenShift Dedicated core namespaces

OpenShift Dedicated core namespaces are installed by default during cluster installation.

Example 7.2. List of core namespaces

apiVersion: v1
kind: ConfigMap
metadata:
  name: ocp-namespaces
  namespace: openshift-monitoring
data:
  managed_namespaces.yaml: |
    Resources:
      Namespace:
      - name: kube-system
      - name: openshift-apiserver
      - name: openshift-apiserver-operator
      - name: openshift-authentication
      - name: openshift-authentication-operator
      - name: openshift-cloud-controller-manager
      - name: openshift-cloud-controller-manager-operator
      - name: openshift-cloud-credential-operator
      - name: openshift-cloud-network-config-controller
      - name: openshift-cluster-api
      - name: openshift-cluster-csi-drivers
      - name: openshift-cluster-machine-approver
      - name: openshift-cluster-node-tuning-operator
      - name: openshift-cluster-samples-operator
      - name: openshift-cluster-storage-operator
      - name: openshift-config
      - name: openshift-config-managed
      - name: openshift-config-operator
      - name: openshift-console
      - name: openshift-console-operator
      - name: openshift-console-user-settings
      - name: openshift-controller-manager
      - name: openshift-controller-manager-operator
      - name: openshift-dns
      - name: openshift-dns-operator
      - name: openshift-etcd
      - name: openshift-etcd-operator
      - name: openshift-host-network
      - name: openshift-image-registry
      - name: openshift-ingress
      - name: openshift-ingress-canary
      - name: openshift-ingress-operator
      - name: openshift-insights
      - name: openshift-kni-infra
      - name: openshift-kube-apiserver
      - name: openshift-kube-apiserver-operator
      - name: openshift-kube-controller-manager
      - name: openshift-kube-controller-manager-operator
      - name: openshift-kube-scheduler
      - name: openshift-kube-scheduler-operator
      - name: openshift-kube-storage-version-migrator
      - name: openshift-kube-storage-version-migrator-operator
      - name: openshift-machine-api
      - name: openshift-machine-config-operator
      - name: openshift-marketplace
      - name: openshift-monitoring
      - name: openshift-multus
      - name: openshift-network-diagnostics
      - name: openshift-network-operator
      - name: openshift-nutanix-infra
      - name: openshift-oauth-apiserver
      - name: openshift-openstack-infra
      - name: openshift-operator-lifecycle-manager
      - name: openshift-operators
      - name: openshift-ovirt-infra
      - name: openshift-sdn
      - name: openshift-ovn-kubernetes
      - name: openshift-platform-operators
      - name: openshift-route-controller-manager
      - name: openshift-service-ca
      - name: openshift-service-ca-operator
      - name: openshift-user-workload-monitoring
      - name: openshift-vsphere-infra
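A consumer of this data, for example a pre-deployment check, can extract the namespace names from the `managed_namespaces.yaml` payload and verify that a workload is not targeting a core namespace. The following is a minimal sketch, not part of the product; the helper names and the truncated sample data are illustrative only.

```python
import re

# Abbreviated sample of the managed_namespaces.yaml payload shown above
# (illustrative; the real ConfigMap lists many more namespaces).
MANAGED_NAMESPACES_YAML = """\
Resources:
  Namespace:
  - name: kube-system
  - name: openshift-apiserver
  - name: openshift-monitoring
"""

def managed_core_namespaces(doc: str) -> set:
    """Pull the '- name: <namespace>' entries out of the payload."""
    return set(re.findall(r"-\s*name:\s*(\S+)", doc))

def is_core_namespace(ns: str, managed: set) -> bool:
    """True if the namespace is one of the SRE-managed core namespaces."""
    return ns in managed

managed = managed_core_namespaces(MANAGED_NAMESPACES_YAML)
print(is_core_namespace("openshift-monitoring", managed))  # True
print(is_core_namespace("my-app", managed))                # False
```

Customer workloads deployed into a namespace for which `is_core_namespace` returns `True` would conflict with the namespace-validation webhook described later in this chapter.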

7.9.4. OpenShift Dedicated add-on namespaces

OpenShift Dedicated add-ons are services available for installation after cluster installation. These additional services include AWS CloudWatch, Red Hat OpenShift Dev Spaces, Red Hat OpenShift API Management, and Cluster Logging Operator. Any changes to resources within the following namespaces might be overridden by the add-on during upgrades, which can lead to unsupported configurations for the add-on functionality.

Example 7.3. List of add-on managed namespaces

addon-namespaces:
  ocs-converged-dev: openshift-storage
  managed-api-service-internal: redhat-rhoami-operator
  codeready-workspaces-operator: codeready-workspaces-operator
  managed-odh: redhat-ods-operator
  codeready-workspaces-operator-qe: codeready-workspaces-operator-qe
  integreatly-operator: redhat-rhmi-operator
  nvidia-gpu-addon: redhat-nvidia-gpu-addon
  integreatly-operator-internal: redhat-rhmi-operator
  rhoams: redhat-rhoam-operator
  ocs-converged: openshift-storage
  addon-operator: redhat-addon-operator
  prow-operator: prow
  cluster-logging-operator: openshift-logging
  advanced-cluster-management: redhat-open-cluster-management
  cert-manager-operator: redhat-cert-manager-operator
  dba-operator: addon-dba-operator
  reference-addon: redhat-reference-addon
  ocm-addon-test-operator: redhat-ocm-addon-test-operator
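The mapping above associates each add-on identifier with the namespace it manages. A hypothetical sketch of how tooling might consume it, represented as a plain dictionary lookup (the subset of entries and the helper name are illustrative, not part of the product):

```python
# Subset of the add-on-to-namespace mapping shown above (illustrative).
ADDON_NAMESPACES = {
    "cluster-logging-operator": "openshift-logging",
    "managed-odh": "redhat-ods-operator",
    "rhoams": "redhat-rhoam-operator",
    "ocs-converged": "openshift-storage",
}

def addon_namespace(addon_id):
    """Return the namespace managed by the add-on, or None if unknown."""
    return ADDON_NAMESPACES.get(addon_id)

print(addon_namespace("cluster-logging-operator"))  # openshift-logging
```

Resources created in any namespace returned by such a lookup are subject to being overridden during add-on upgrades, as noted above.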

7.9.5. OpenShift Dedicated validating webhooks

OpenShift Dedicated validating webhooks are a set of dynamic admission controls maintained by the OpenShift SRE team. These HTTP callbacks, also known as webhooks, are called for various types of requests to ensure cluster stability. The webhooks evaluate each request and either accept or reject it. The following list describes each webhook, including the rules that define the registered operations and resources it controls. Any attempt to circumvent these validating webhooks could affect the stability and supportability of the cluster.

Example 7.4. List of validating webhooks

[
  {
    "webhookName": "clusterlogging-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE"],
        "apiGroups": ["logging.openshift.io"],
        "apiVersions": ["v1"],
        "resources": ["clusterloggings"],
        "scope": "Namespaced"
      }
    ],
    "documentString": "Managed OpenShift Customers may set log retention outside the allowed range of 0-7 days"
  },
  {
    "webhookName": "clusterrolebindings-validation",
    "rules": [
      {
        "operations": ["DELETE"],
        "apiGroups": ["rbac.authorization.k8s.io"],
        "apiVersions": ["v1"],
        "resources": ["clusterrolebindings"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift Customers may not delete the cluster role bindings under the managed namespaces: (^openshift-.*|kube-system)"
  },
  {
    "webhookName": "customresourcedefinitions-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE", "DELETE"],
        "apiGroups": ["apiextensions.k8s.io"],
        "apiVersions": ["*"],
        "resources": ["customresourcedefinitions"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift Customers may not change CustomResourceDefinitions managed by Red Hat."
  },
  {
    "webhookName": "hiveownership-validation",
    "rules": [
      {
        "operations": ["UPDATE", "DELETE"],
        "apiGroups": ["quota.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["clusterresourcequotas"],
        "scope": "Cluster"
      }
    ],
    "webhookObjectSelector": {
      "matchLabels": {
        "hive.openshift.io/managed": "true"
      }
    },
    "documentString": "Managed OpenShift customers may not edit certain managed resources. A managed resource has a \"hive.openshift.io/managed\": \"true\" label."
  },
  {
    "webhookName": "imagecontentpolicies-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE"],
        "apiGroups": ["config.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["imagedigestmirrorsets", "imagetagmirrorsets"],
        "scope": "Cluster"
      },
      {
        "operations": ["CREATE", "UPDATE"],
        "apiGroups": ["operator.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["imagecontentsourcepolicies"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift customers may not create ImageContentSourcePolicy, ImageDigestMirrorSet, or ImageTagMirrorSet resources that configure mirrors that would conflict with system registries (e.g. quay.io, registry.redhat.io, registry.access.redhat.com, etc). For more details, see https://docs.openshift.com/"
  },
  {
    "webhookName": "ingress-config-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE", "DELETE"],
        "apiGroups": ["config.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["ingresses"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift customers may not modify ingress config resources because it can degrade cluster operators and can interfere with OpenShift SRE monitoring."
  },
  {
    "webhookName": "ingresscontroller-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE"],
        "apiGroups": ["operator.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["ingresscontroller", "ingresscontrollers"],
        "scope": "Namespaced"
      }
    ],
    "documentString": "Managed OpenShift Customer may create IngressControllers without necessary taints. This can cause those workloads to be provisioned on infra or master nodes."
  },
  {
    "webhookName": "namespace-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE", "DELETE"],
        "apiGroups": [""],
        "apiVersions": ["*"],
        "resources": ["namespaces"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift Customers may not modify namespaces specified in the [openshift-monitoring/managed-namespaces openshift-monitoring/ocp-namespaces] ConfigMaps because customer workloads should be placed in customer-created namespaces. Customers may not create namespaces identified by this regular expression (^com$|^io$|^in$) because it could interfere with critical DNS resolution. Additionally, customers may not set or change the values of these Namespace labels [managed.openshift.io/storage-pv-quota-exempt managed.openshift.io/service-lb-quota-exempt]."
  },
  {
    "webhookName": "networkpolicies-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE", "DELETE"],
        "apiGroups": ["networking.k8s.io"],
        "apiVersions": ["*"],
        "resources": ["networkpolicies"],
        "scope": "Namespaced"
      }
    ],
    "documentString": "Managed OpenShift Customers may not create NetworkPolicies in namespaces managed by Red Hat."
  },
  {
    "webhookName": "node-validation-osd",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE", "DELETE"],
        "apiGroups": [""],
        "apiVersions": ["*"],
        "resources": ["nodes", "nodes/*"],
        "scope": "*"
      }
    ],
    "documentString": "Managed OpenShift customers may not alter Node objects."
  },
  {
    "webhookName": "pod-validation",
    "rules": [
      {
        "operations": ["*"],
        "apiGroups": ["v1"],
        "apiVersions": ["*"],
        "resources": ["pods"],
        "scope": "Namespaced"
      }
    ],
    "documentString": "Managed OpenShift Customers may use tolerations on Pods that could cause those Pods to be scheduled on infra or master nodes."
  },
  {
    "webhookName": "prometheusrule-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE", "DELETE"],
        "apiGroups": ["monitoring.coreos.com"],
        "apiVersions": ["*"],
        "resources": ["prometheusrules"],
        "scope": "Namespaced"
      }
    ],
    "documentString": "Managed OpenShift Customers may not create PrometheusRule in namespaces managed by Red Hat."
  },
  {
    "webhookName": "regular-user-validation",
    "rules": [
      {
        "operations": ["*"],
        "apiGroups": ["cloudcredential.openshift.io", "machine.openshift.io", "admissionregistration.k8s.io", "addons.managed.openshift.io", "cloudingress.managed.openshift.io", "managed.openshift.io", "ocmagent.managed.openshift.io", "splunkforwarder.managed.openshift.io", "upgrade.managed.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["*/*"],
        "scope": "*"
      },
      {
        "operations": ["*"],
        "apiGroups": ["autoscaling.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["clusterautoscalers", "machineautoscalers"],
        "scope": "*"
      },
      {
        "operations": ["*"],
        "apiGroups": ["config.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["clusterversions", "clusterversions/status", "schedulers", "apiservers", "proxies"],
        "scope": "*"
      },
      {
        "operations": ["CREATE", "UPDATE", "DELETE"],
        "apiGroups": [""],
        "apiVersions": ["*"],
        "resources": ["configmaps"],
        "scope": "*"
      },
      {
        "operations": ["*"],
        "apiGroups": ["machineconfiguration.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["machineconfigs", "machineconfigpools"],
        "scope": "*"
      },
      {
        "operations": ["*"],
        "apiGroups": ["operator.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["kubeapiservers", "openshiftapiservers"],
        "scope": "*"
      },
      {
        "operations": ["*"],
        "apiGroups": ["managed.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["subjectpermissions", "subjectpermissions/*"],
        "scope": "*"
      },
      {
        "operations": ["*"],
        "apiGroups": ["network.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["netnamespaces", "netnamespaces/*"],
        "scope": "*"
      }
    ],
    "documentString": "Managed OpenShift customers may not manage any objects in the following APIGroups [autoscaling.openshift.io network.openshift.io machine.openshift.io admissionregistration.k8s.io addons.managed.openshift.io cloudingress.managed.openshift.io splunkforwarder.managed.openshift.io upgrade.managed.openshift.io managed.openshift.io ocmagent.managed.openshift.io config.openshift.io machineconfiguration.openshift.io operator.openshift.io cloudcredential.openshift.io], nor may Managed OpenShift customers alter the APIServer, KubeAPIServer, OpenShiftAPIServer, ClusterVersion, Proxy or SubjectPermission objects."
  },
  {
    "webhookName": "scc-validation",
    "rules": [
      {
        "operations": ["UPDATE", "DELETE"],
        "apiGroups": ["security.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["securitycontextconstraints"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift Customers may not modify the following default SCCs: [anyuid hostaccess hostmount-anyuid hostnetwork hostnetwork-v2 node-exporter nonroot nonroot-v2 privileged restricted restricted-v2]"
  },
  {
    "webhookName": "sdn-migration-validation",
    "rules": [
      {
        "operations": ["UPDATE"],
        "apiGroups": ["config.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["networks"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift customers may not modify the network config type because it can degrade cluster operators and can interfere with OpenShift SRE monitoring."
  },
  {
    "webhookName": "service-mutation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE"],
        "apiGroups": [""],
        "apiVersions": ["v1"],
        "resources": ["services"],
        "scope": "Namespaced"
      }
    ],
    "documentString": "LoadBalancer-type services on Managed OpenShift clusters must contain an additional annotation for managed policy compliance."
  },
  {
    "webhookName": "serviceaccount-validation",
    "rules": [
      {
        "operations": ["DELETE"],
        "apiGroups": [""],
        "apiVersions": ["v1"],
        "resources": ["serviceaccounts"],
        "scope": "Namespaced"
      }
    ],
    "documentString": "Managed OpenShift Customers may not delete the service accounts under the managed namespaces."
  },
  {
    "webhookName": "techpreviewnoupgrade-validation",
    "rules": [
      {
        "operations": ["CREATE", "UPDATE"],
        "apiGroups": ["config.openshift.io"],
        "apiVersions": ["*"],
        "resources": ["featuregates"],
        "scope": "Cluster"
      }
    ],
    "documentString": "Managed OpenShift Customers may not use TechPreviewNoUpgrade FeatureGate that could prevent any future ability to do a y-stream upgrade to their clusters."
  }
]
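Each entry in the list above registers rules in the standard Kubernetes admission-rule shape: a request is intercepted when its operation, API group, and resource all match a rule (with `*` acting as a wildcard). The following is an illustrative sketch of that matching logic, not the actual admission controller; the `rule_matches` helper is hypothetical.

```python
# Illustrative sketch of admission-rule matching: decide whether a request
# (operation, apiGroup, resource) falls under one of a webhook's rules.
# "*" in a rule field matches any value.
def rule_matches(rule, operation, api_group, resource):
    ops = rule["operations"]
    groups = rule["apiGroups"]
    resources = rule["resources"]
    return (
        ("*" in ops or operation in ops)
        and ("*" in groups or api_group in groups)
        and ("*" in resources or resource in resources)
    )

# The scc-validation rule from the list above:
scc_rule = {
    "operations": ["UPDATE", "DELETE"],
    "apiGroups": ["security.openshift.io"],
    "apiVersions": ["*"],
    "resources": ["securitycontextconstraints"],
    "scope": "Cluster",
}

# Deleting a default SCC would be intercepted by scc-validation:
print(rule_matches(scc_rule, "DELETE", "security.openshift.io",
                   "securitycontextconstraints"))  # True
# A read-only request would not:
print(rule_matches(scc_rule, "GET", "security.openshift.io",
                   "securitycontextconstraints"))  # False
```

Whether an intercepted request is then accepted or rejected is decided by the webhook's own policy, summarized in each `documentString` above.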