Chapter 2. OpenShift CLI (oc)
2.1. Getting started with the OpenShift CLI
2.1.1. About the OpenShift CLI
With the OpenShift CLI (oc), you can create applications and manage OpenShift Dedicated projects from a terminal. The OpenShift CLI is ideal in the following situations:
- Working directly with project source code
- Scripting OpenShift Dedicated operations
- Managing projects when bandwidth is limited or the web console is unavailable
2.1.2. Installing the OpenShift CLI
You can install the OpenShift CLI (oc) either by downloading the binary or by using an RPM.
2.1.2.1. Installing the OpenShift CLI
You can install the OpenShift CLI (oc) to interact with OpenShift Dedicated clusters from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Dedicated 4. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Dedicated downloads page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant drop-down list.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4 Linux Clients entry and save the file.
- Unpack the archive:
  $ tar xvf <file>
- Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:
  $ echo $PATH
Verification
- After you install the OpenShift CLI, it is available using the oc command:
  $ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Dedicated downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
- Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:
  C:\> path
Verification
- After you install the OpenShift CLI, it is available using the oc command:
  C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Dedicated downloads page on the Red Hat Customer Portal.
- Select the appropriate version from the Version drop-down list.
- Click Download Now next to the OpenShift v4 macOS Clients entry and save the file.
  Note: For macOS arm64, choose the OpenShift v4 macOS arm64 Client entry.
- Unpack and unzip the archive.
- Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:
  $ echo $PATH
Verification
- Verify your installation by using an oc command:
  $ oc <command>
2.1.2.2. Installing the OpenShift CLI by using the web console
You can install the OpenShift CLI (oc) to interact with OpenShift Dedicated clusters from a web console. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Dedicated 4. Download and install the new version of oc.
2.1.2.2.1. Installing the OpenShift CLI on Linux using the web console
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager.
- Extract the oc binary file from the downloaded archive:
  $ tar xvf <file>
- Move the oc binary to a directory that is on your PATH. To check your PATH, execute the following command:
  $ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager.
- Extract the oc binary file from the downloaded archive.
- Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:
  C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager.
- Extract the oc binary file from the downloaded archive.
- Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:
  $ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
2.1.2.3. Installing the OpenShift CLI by using an RPM
For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI (oc) as an RPM if you have an active OpenShift Dedicated subscription on your Red Hat account.
You must install oc for RHEL 9 by downloading the binary. Installing oc by using an RPM package is not supported on Red Hat Enterprise Linux (RHEL) 9.
Prerequisites
- Must have root or sudo privileges.
Procedure
- Register with Red Hat Subscription Manager:
  # subscription-manager register
- Pull the latest subscription data:
  # subscription-manager refresh
- List the available subscriptions:
  # subscription-manager list --available --matches '*OpenShift*'
- In the output for the previous command, find the pool ID for an OpenShift Dedicated subscription and attach the subscription to the registered system:
  # subscription-manager attach --pool=<pool_id>
- Enable the repositories required by OpenShift Dedicated 4:
  # subscription-manager repos --enable="rhocp-4-for-rhel-8-x86_64-rpms"
- Install the openshift-clients package:
  # yum install openshift-clients
Verification
- Verify your installation by using an oc command:
  $ oc <command>
2.1.2.4. Installing the OpenShift CLI by using Homebrew
For macOS, you can install the OpenShift CLI (oc) by using the Homebrew package manager.
Prerequisites
- You must have Homebrew (brew) installed.
Procedure
- Install the openshift-cli package by running the following command:
  $ brew install openshift-cli
Verification
- Verify your installation by using an oc command:
  $ oc <command>
2.1.3. Logging in to the OpenShift CLI
You can log in to the OpenShift CLI (oc) to access and manage your cluster.
Prerequisites
- You must have access to an OpenShift Dedicated cluster.
- The OpenShift CLI (oc) is installed.
To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy.
Authentication headers are sent only when using HTTPS transport.
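The proxy setup described above can be sketched as follows; the proxy URL and the NO_PROXY entries are placeholders for your environment:

```shell
# Placeholder proxy endpoint and bypass list; substitute your own values.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,.svc,.cluster.local
# With these variables exported, subsequent oc commands in this shell route
# their cluster traffic through the proxy, for example:
#   oc login -u user1 https://openshift.example.com:6443
```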
Procedure
- Enter the oc login command and pass in a user name:
  $ oc login -u user1
- When prompted, enter the required information:
  Example output
  Server [https://localhost:8443]: https://openshift.example.com:6443
  The server uses a certificate signed by an unknown authority.
  You can bypass the certificate check, but any data you send to the server could be intercepted by others.
  Use insecure connections? (y/n): y

  Authentication required for https://openshift.example.com:6443 (openshift)
  Username: user1
  Password:
  Login successful.

  You don't have any projects. You can try to create a new project, by running

      oc new-project <projectname>

  Welcome! See 'oc help' to get started.
If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift Dedicated CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console.
You can now create a project or issue other commands for managing your cluster.
2.1.4. Logging in to the OpenShift CLI using a web browser
You can log in to the OpenShift CLI (oc) with the help of a web browser to access and manage your cluster. This allows users to avoid inserting their access token into the command line.
Logging in to the CLI through the web browser runs a server on localhost with HTTP, not HTTPS; use with caution on multi-user workstations.
Prerequisites
- You must have access to an OpenShift Dedicated cluster.
- You must have installed the OpenShift CLI (oc).
- You must have a browser installed.
Procedure
- Enter the oc login command with the --web flag:
  $ oc login <cluster_url> --web
  Optionally, you can specify the server URL and callback port. For example, oc login <cluster_url> --web --callback-port 8280 localhost:8443.
- The web browser opens automatically. If it does not, click the link in the command output. If you do not specify the OpenShift Dedicated server, oc tries to open the web console of the cluster specified in the current oc configuration file. If no oc configuration exists, oc prompts interactively for the server URL.
  Example output
  Opening login URL in the default browser: https://openshift.example.com
  Opening in existing browser session.
- If more than one identity provider is available, select your choice from the options provided.
- Enter your username and password into the corresponding browser fields. After you are logged in, the browser displays the text access token received successfully; please return to your terminal. Check the CLI for a login confirmation.
  Example output
  Login successful.

  You don't have any projects. You can try to create a new project, by running

      oc new-project <projectname>
The web console defaults to the profile used in the previous session. To switch between Administrator and Developer profiles, log out of the OpenShift Dedicated web console and clear the cache.
You can now create a project or issue other commands for managing your cluster.
2.1.5. Using the OpenShift CLI
Review the following sections to learn how to complete common tasks using the CLI.
2.1.5.1. Creating a project
Use the oc new-project command to create a new project.
$ oc new-project my-project
Example output
Now using project "my-project" on server "https://openshift.example.com:6443".
2.1.5.2. Creating a new app
Use the oc new-app command to create a new application.
$ oc new-app https://github.com/sclorg/cakephp-ex
Example output
--> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php"
...
    Run 'oc status' to view your app.
2.1.5.3. Viewing pods
Use the oc get pods command to view the pods for the current project.
When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default.
$ oc get pods -o wide
Example output
NAME                  READY   STATUS      RESTARTS   AGE     IP            NODE                           NOMINATED NODE
cakephp-ex-1-build    0/1     Completed   0          5m45s   10.131.0.10   ip-10-0-141-74.ec2.internal    <none>
cakephp-ex-1-deploy   0/1     Completed   0          3m44s   10.129.2.9    ip-10-0-147-65.ec2.internal    <none>
cakephp-ex-1-ktz97    1/1     Running     0          3m33s   10.128.2.11   ip-10-0-168-105.ec2.internal   <none>
2.1.5.4. Viewing pod logs
Use the oc logs command to view logs for a particular pod.
$ oc logs cakephp-ex-1-deploy
Example output
--> Scaling cakephp-ex-1 to 1
--> Success
2.1.5.5. Viewing the current project
Use the oc project command to view the current project.
$ oc project
Example output
Using project "my-project" on server "https://openshift.example.com:6443".
2.1.5.6. Viewing the status for the current project
Use the oc status command to view information about the current project, such as services, deployments, and build configs.
$ oc status
Example output
In project my-project on server https://openshift.example.com:6443

svc/cakephp-ex - 172.30.236.80 ports 8080, 8443
  dc/cakephp-ex deploys istag/cakephp-ex:latest <-
    bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2
    deployment #1 deployed 2 minutes ago - 1 pod

3 infos identified, use 'oc status --suggest' to see details.
2.1.5.7. Listing supported API resources
Use the oc api-resources command to view the list of supported API resources on the server.
$ oc api-resources
Example output
NAME                SHORTNAMES   APIGROUP   NAMESPACED   KIND
bindings                                    true         Binding
componentstatuses   cs                      false        ComponentStatus
configmaps          cm                      true         ConfigMap
...
2.1.6. Getting help
You can get help with CLI commands and OpenShift Dedicated resources in the following ways:
- Use oc help to get a list and description of all available CLI commands:
  Example: Get general help for the CLI
  $ oc help
  Example output
  OpenShift Client

  This client helps you develop, build, deploy, and run your applications on any
  OpenShift or Kubernetes compatible platform. It also includes the administrative
  commands for managing a cluster under the 'adm' subcommand.

  Usage:
    oc [flags]

  Basic Commands:
    login           Log in to a server
    new-project     Request a new project
    new-app         Create a new application

  ...
- Use the --help flag to get help about a specific CLI command:
  Example: Get help for the oc create command
  $ oc create --help
  Example output
  Create a resource by filename or stdin

  JSON and YAML formats are accepted.

  Usage:
    oc create -f FILENAME [flags]

  ...
- Use the oc explain command to view the description and fields for a particular resource:
  Example: View documentation for the Pod resource
  $ oc explain pods
  Example output
  KIND:     Pod
  VERSION:  v1

  DESCRIPTION:
      Pod is a collection of containers that can run on a host. This resource is
      created by clients and scheduled onto hosts.

  FIELDS:
     apiVersion	<string>
       APIVersion defines the versioned schema of this representation of an
       object. Servers should convert recognized schemas to the latest internal
       value, and may reject unrecognized values. More info:
       https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

  ...
2.1.7. Logging out of the OpenShift CLI
You can log out of the OpenShift CLI to end your current session.
- Use the oc logout command:
  $ oc logout
  Example output
  Logged "user1" out on "https://openshift.example.com"
This deletes the saved authentication token from the server and removes it from your configuration file.
2.2. Configuring the OpenShift CLI
2.2.1. Enabling tab completion
You can enable tab completion for the Bash or Zsh shells.
2.2.1.1. Enabling tab completion for Bash
After you install the OpenShift CLI (oc), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell.
Prerequisites
- You must have the OpenShift CLI (oc) installed.
- You must have the package bash-completion installed.
Procedure
- Save the Bash completion code to a file:
  $ oc completion bash > oc_bash_completion
- Copy the file to /etc/bash_completion.d/:
  $ sudo cp oc_bash_completion /etc/bash_completion.d/
You can also save the file to a local directory and source it from your .bashrc file instead.
Tab completion is enabled when you open a new terminal.
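The local-directory alternative mentioned above can be sketched as follows; the file name ~/.oc_bash_completion is illustrative, and the oc invocation is guarded because it requires oc on your PATH:

```shell
# Illustrative location for the completion script; any readable path works.
completion_file="$HOME/.oc_bash_completion"
# Generate the completion code (requires oc on your PATH):
command -v oc >/dev/null && oc completion bash > "$completion_file"
# Source the file from ~/.bashrc, appending the line only once:
line="source $completion_file"
grep -qxF "$line" "$HOME/.bashrc" 2>/dev/null || echo "$line" >> "$HOME/.bashrc"
```

As with the /etc/bash_completion.d/ approach, the change takes effect in new terminals.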
2.2.1.2. Enabling tab completion for Zsh
After you install the OpenShift CLI (oc), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell.
Prerequisites
- You must have the OpenShift CLI (oc) installed.
Procedure
- To add tab completion for oc to your .zshrc file, run the following command:
  $ cat >>~/.zshrc<<EOF
  autoload -Uz compinit
  compinit
  if [ \$commands[oc] ]; then
      source <(oc completion zsh)
      compdef _oc oc
  fi
  EOF
Tab completion is enabled when you open a new terminal.
2.3. Usage of oc and kubectl commands
The Kubernetes command-line interface (CLI), kubectl, can be used to run commands against a Kubernetes cluster. Because OpenShift Dedicated is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Dedicated, or you can gain extended functionality by using the oc binary.
2.3.1. The oc binary
The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional OpenShift Dedicated features, including:
- Full support for OpenShift Dedicated resources
  Resources such as DeploymentConfig, BuildConfig, Route, ImageStream, and ImageStreamTag objects are specific to OpenShift Dedicated distributions, and build upon standard Kubernetes primitives.
- Authentication
- Additional commands
  The additional command oc new-app, for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default.
If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in OpenShift Dedicated 4. If you want the latest features, you must download and install the latest version of the oc binary corresponding to your OpenShift Dedicated server version.
Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server.
 | X.Y (oc client) | X.Y+N (oc client) [a] |
---|---|---|
X.Y (Server) | Fully compatible. | oc client might provide options and features that might not be compatible with the accessed server. |
X.Y+N (Server) [a] | oc client might not be able to access server features. | Fully compatible. |

[a] Where N is a number greater than or equal to 1.
2.3.2. The kubectl binary
The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Dedicated users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Dedicated cluster.
You can install the supported kubectl binary by following the steps to Install the OpenShift CLI. The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM.
For more information, see the kubectl documentation.
2.4. Managing CLI profiles
A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools. A context consists of user authentication and OpenShift Dedicated server information associated with a nickname.
2.4.1. About switches between CLI profiles
Contexts allow you to easily switch between multiple users across multiple OpenShift Dedicated servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After a user logs in with the oc CLI for the first time, OpenShift Dedicated creates a ~/.kube/config file if one does not already exist. As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file:
CLI config file

apiVersion: v1
clusters: (1)
- cluster:
    insecure-skip-tls-verify: true
    server: https://openshift1.example.com:8443
  name: openshift1.example.com:8443
- cluster:
    insecure-skip-tls-verify: true
    server: https://openshift2.example.com:8443
  name: openshift2.example.com:8443
contexts: (2)
- context:
    cluster: openshift1.example.com:8443
    namespace: alice-project
    user: alice/openshift1.example.com:8443
  name: alice-project/openshift1.example.com:8443/alice
- context:
    cluster: openshift1.example.com:8443
    namespace: joe-project
    user: alice/openshift1.example.com:8443
  name: joe-project/openshift1.example.com:8443/alice
current-context: joe-project/openshift1.example.com:8443/alice (3)
kind: Config
preferences: {}
users: (4)
- name: alice/openshift1.example.com:8443
  user:
    token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k

(1) The clusters section defines connection details for OpenShift Dedicated clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443.
(2) This contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice, using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice, using the joe-project project, openshift1.example.com:8443 cluster, and alice user.
(3) The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster.
(4) The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token.
The CLI can support multiple configuration files, which are loaded at runtime and merged together along with any override options specified from the command line. After you are logged in, you can use the oc status or oc project command to verify your current working environment:
Verify the current working environment
$ oc status
Example output
In project Joe's Project (joe-project)

service database (172.30.43.12:5434 -> 3306)
  database deploys docker.io/openshift/mysql-55-centos7:latest
    #1 deployed 25 minutes ago - 1 pod

service frontend (172.30.159.137:5432 -> 8080)
  frontend deploys origin-ruby-sample:latest <-
    builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest
    #1 deployed 22 minutes ago - 2 pods

To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'.
You can use 'oc get all' to see lists of each of the types described in this example.
List the current project
$ oc project
Example output
Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443".
You can run the oc login command again and supply the required information during the interactive process to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist. If you are already logged in and want to switch to another project the current user already has access to, use the oc project command and enter the name of the project:
$ oc project alice-project
Example output
Now using project "alice-project" on server "https://openshift1.example.com:8443".
At any time, you can use the oc config view command to view your current, merged CLI configuration. Additional CLI configuration commands are also available for more advanced usage.
If you have access to administrator credentials but are no longer logged in as the default system user, system:admin, you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project:
$ oc login -u system:admin -n default
2.4.2. Manual configuration of CLI profiles
This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects.
If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful subcommands for this purpose:
Subcommand | Usage
---|---
oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] | Sets a cluster entry in the CLI config file. If the referenced cluster nickname already exists, the specified information is merged in.
oc config set-context <context_nickname> [--cluster=<cluster_nickname>] | Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in.
oc config use-context <context_nickname> | Sets the current context using the specified context nickname.
oc config set <property_name> <property_value> | Sets an individual value in the CLI config file. The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set.
oc config unset <property_name> | Unsets individual values in the CLI config file. The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key.
oc config view | Displays the merged CLI configuration currently in use. Use oc config view --config=<specific_filename> to display the result of the specified CLI config file.
Example usage
- Log in as a user that uses an access token. This token is used by the alice user:

$ oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0
- View the cluster entry automatically created:
$ oc config view
Example output
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://openshift1.example.com
  name: openshift1-example-com
contexts:
- context:
    cluster: openshift1-example-com
    namespace: default
    user: alice/openshift1-example-com
  name: default/openshift1-example-com/alice
current-context: default/openshift1-example-com/alice
kind: Config
preferences: {}
users:
- name: alice/openshift1.example.com
  user:
    token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0
- Update the current context to have users log in to the desired namespace:
$ oc config set-context `oc config current-context` --namespace=<project_name>
- Examine the current context, to confirm that the changes are implemented:
$ oc whoami -c
All subsequent CLI operations use the new context, unless otherwise specified by overriding CLI options or until the context is switched.
2.4.3. Load and merge rules
When you issue CLI operations, the following rules determine how the CLI configuration is loaded and merged:

CLI config files are retrieved from your workstation using the following hierarchy and merge rules:
- If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place.
- If the $KUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.
- Otherwise, the ~/.kube/config file is used and no merging takes place.
The context to use is determined based on the first match in the following flow:
- The value of the --context option.
- The current-context value from the CLI config file.
- An empty value is allowed at this stage.
The user and cluster to use are determined. At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster:
- The value of the --user option for the user name and the --cluster option for the cluster name.
- If the --context option is present, then use the context's value.
- An empty value is allowed at this stage.
The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow:
- The values of any of the following command-line options: --server, --api-version, --certificate-authority, --insecure-skip-tls-verify.
- If cluster information and a value for the attribute are present, then use it.
- If you do not have a server location, then there is an error.
The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command-line options take precedence over config file values. Valid command-line options are: --auth-path, --client-certificate, --client-key, --token.
- For any information that is still missing, default values are used and prompts are given for additional information.
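The $KUBECONFIG list rule above can be sketched in Bash. This is an illustrative sketch only, with hypothetical file paths; oc itself parses and merges the list internally:

```shell
# Illustrative sketch of the $KUBECONFIG list rule (hypothetical paths).
# The variable may hold several colon-separated kubeconfig paths: earlier
# files take precedence when merging, and newly created values land in the
# first file in the chain that exists.
KUBECONFIG="$HOME/.kube/config-dev:$HOME/.kube/config-prod"

# Split the list in load order.
IFS=':' read -r -a kube_paths <<< "$KUBECONFIG"

# Find the first existing file, which is where new values would be created.
first_existing=""
for p in "${kube_paths[@]}"; do
  if [ -f "$p" ]; then
    first_existing="$p"
    break
  fi
done

echo "load order: ${kube_paths[*]}"
echo "new values created in: ${first_existing:-<last file in the list>}"
```

Running `oc config view` with such a variable set shows the merged result of all files in the list.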
2.5. Extending the OpenShift CLI with plugins
You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift CLI.
2.5.1. Writing CLI plugins
You can write a plugin for the OpenShift CLI in any programming language or script that allows you to write command-line commands. Note that you cannot use a plugin to overwrite an existing oc command.
Procedure
This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued.
Create a file called oc-foo.

When naming your plugin file, keep the following in mind:
- The file must begin with oc- or kubectl- to be recognized as a plugin.
- The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by the command oc foo bar. You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by the command oc foo-bar.
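The naming rules above can be sketched as a small Bash helper. This is illustrative only (the function name is hypothetical); oc performs the real file-name-to-command mapping internally:

```shell
# Illustrative sketch of how a plugin file name maps to the invoking command.
# The helper name plugin_to_command is hypothetical; oc does this lookup itself.
plugin_to_command() {
  local name="$1"
  name="${name#oc-}"       # strip the oc- prefix
  name="${name#kubectl-}"  # or the kubectl- prefix
  name="${name//-/ }"      # remaining dashes separate command words
  name="${name//_/-}"      # underscores become dashes within one word
  echo "oc $name"
}

plugin_to_command oc-foo-bar   # -> oc foo bar
plugin_to_command oc-foo_bar   # -> oc foo-bar
```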
Add the following contents to the file.
#!/bin/bash

# optional argument handling
if [[ "$1" == "version" ]]
then
    echo "1.0.0"
    exit 0
fi

# optional argument handling
if [[ "$1" == "config" ]]
then
    echo "$KUBECONFIG"
    exit 0
fi

echo "I am a plugin named kubectl-foo"
After you install this plugin for the CLI, it can be invoked using the oc foo command.
Additional resources
- Review the Sample plugin repository for an example of a plugin written in Go.
- Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go.
2.5.2. Installing and using CLI plugins
After you write a custom plugin for the OpenShift CLI, you must install the plugin before use.
Prerequisites
- You must have the oc CLI tool installed.
- You must have a CLI plugin file that begins with oc- or kubectl-.
Procedure
If necessary, update the plugin file to be executable.
$ chmod +x <plugin_file>
Place the file anywhere in your PATH, such as /usr/local/bin/.
$ sudo mv <plugin_file> /usr/local/bin/.
Run oc plugin list to make sure that the plugin is listed.

$ oc plugin list
Example output
The following compatible plugins are available:

/usr/local/bin/<plugin_file>
If your plugin is not listed here, verify that the file begins with oc- or kubectl-, is executable, and is on your PATH.

Invoke the new command or option introduced by the plugin.
For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository, you can use the following command to view the current namespace.

$ oc ns
Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name oc-foo-bar is invoked by the oc foo bar command.
2.6. OpenShift CLI developer command reference
This reference provides descriptions and example commands for OpenShift CLI (oc) developer commands.
Run oc help to list all commands or run oc <command> --help to get additional details for a specific command.
2.6.1. OpenShift CLI (oc) developer commands
2.6.1.1. oc annotate
Update the annotations on a resource
Example usage
# Update pod 'foo' with the annotation 'description' and the value 'my frontend'
# If the same annotation is set multiple times, only the last value will be applied
oc annotate pods foo description='my frontend'

# Update a pod identified by type and name in "pod.json"
oc annotate -f pod.json description='my frontend'

# Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value
oc annotate --overwrite pods foo description='my frontend running nginx'

# Update all pods in the namespace
oc annotate pods --all description='my frontend running nginx'

# Update pod 'foo' only if the resource is unchanged from version 1
oc annotate pods foo description='my frontend running nginx' --resource-version=1

# Update pod 'foo' by removing an annotation named 'description' if it exists
# Does not require the --overwrite flag
oc annotate pods foo description-
2.6.1.2. oc api-resources
Print the supported API resources on the server
Example usage
# Print the supported API resources
oc api-resources

# Print the supported API resources with more information
oc api-resources -o wide

# Print the supported API resources sorted by a column
oc api-resources --sort-by=name

# Print the supported namespaced resources
oc api-resources --namespaced=true

# Print the supported non-namespaced resources
oc api-resources --namespaced=false

# Print the supported API resources with a specific APIGroup
oc api-resources --api-group=rbac.authorization.k8s.io
2.6.1.3. oc api-versions
Print the supported API versions on the server, in the form of "group/version"
Example usage
# Print the supported API versions
oc api-versions
2.6.1.4. oc apply
Apply a configuration to a resource by file name or stdin
Example usage
# Apply the configuration in pod.json to a pod
oc apply -f ./pod.json

# Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
oc apply -k dir/

# Apply the JSON passed into stdin to a pod
cat pod.json | oc apply -f -

# Apply the configuration from all files that end with '.json'
oc apply -f '*.json'

# Note: --prune is still in Alpha
# Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx
oc apply --prune -f manifest.yaml -l app=nginx

# Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file
oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap
2.6.1.5. oc apply edit-last-applied
Edit latest last-applied-configuration annotations of a resource/object
Example usage
# Edit the last-applied-configuration annotations by type/name in YAML
oc apply edit-last-applied deployment/nginx

# Edit the last-applied-configuration annotations by file in JSON
oc apply edit-last-applied -f deploy.yaml -o json
2.6.1.6. oc apply set-last-applied
Set the last-applied-configuration annotation on a live object to match the contents of a file
Example usage
# Set the last-applied-configuration of a resource to match the contents of a file
oc apply set-last-applied -f deploy.yaml

# Execute set-last-applied against each configuration file in a directory
oc apply set-last-applied -f path/

# Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist
oc apply set-last-applied -f deploy.yaml --create-annotation=true
2.6.1.7. oc apply view-last-applied
View the latest last-applied-configuration annotations of a resource/object
Example usage
# View the last-applied-configuration annotations by type/name in YAML
oc apply view-last-applied deployment/nginx

# View the last-applied-configuration annotations by file in JSON
oc apply view-last-applied -f deploy.yaml -o json
2.6.1.8. oc attach
Attach to a running container
Example usage
# Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation
# for selecting the container to be attached or the first container in the pod will be chosen
oc attach mypod

# Get output from ruby-container from pod mypod
oc attach mypod -c ruby-container

# Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod
# and sends stdout/stderr from 'bash' back to the client
oc attach mypod -c ruby-container -i -t

# Get output from the first pod of a replica set named nginx
oc attach rs/nginx
2.6.1.9. oc auth can-i
Check whether an action is allowed
Example usage
# Check to see if I can create pods in any namespace
oc auth can-i create pods --all-namespaces

# Check to see if I can list deployments in my current namespace
oc auth can-i list deployments.apps

# Check to see if service account "foo" of namespace "dev" can list pods in the namespace "prod"
# You must be allowed to use impersonation for the global option "--as"
oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod

# Check to see if I can do everything in my current namespace ("*" means all)
oc auth can-i '*' '*'

# Check to see if I can get the job named "bar" in namespace "foo"
oc auth can-i list jobs.batch/bar -n foo

# Check to see if I can read pod logs
oc auth can-i get pods --subresource=log

# Check to see if I can access the URL /logs/
oc auth can-i get /logs/

# Check to see if I can approve certificates.k8s.io
oc auth can-i approve certificates.k8s.io

# List all allowed actions in namespace "foo"
oc auth can-i --list --namespace=foo
2.6.1.10. oc auth reconcile
Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects
Example usage
# Reconcile RBAC resources from a file
oc auth reconcile -f my-rbac-rules.yaml
2.6.1.11. oc auth whoami
Experimental: Check self subject attributes
Example usage
# Get your subject attributes
oc auth whoami

# Get your subject attributes in JSON format
oc auth whoami -o json
2.6.1.12. oc autoscale
Autoscale a deployment config, deployment, replica set, stateful set, or replication controller
Example usage
# Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used
oc autoscale deployment foo --min=2 --max=10

# Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80%
oc autoscale rc foo --max=5 --cpu-percent=80
2.6.1.13. oc cancel-build
Cancel running, pending, or new builds
Example usage
# Cancel the build with the given name
oc cancel-build ruby-build-2

# Cancel the named build and print the build logs
oc cancel-build ruby-build-2 --dump-logs

# Cancel the named build and create a new one with the same parameters
oc cancel-build ruby-build-2 --restart

# Cancel multiple builds
oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3

# Cancel all builds created from the 'ruby-build' build config that are in the 'new' state
oc cancel-build bc/ruby-build --state=new
2.6.1.14. oc cluster-info
Display cluster information
Example usage
# Print the address of the control plane and cluster services
oc cluster-info
2.6.1.15. oc cluster-info dump
Dump relevant information for debugging and diagnosis
Example usage
# Dump current cluster state to stdout
oc cluster-info dump

# Dump current cluster state to /path/to/cluster-state
oc cluster-info dump --output-directory=/path/to/cluster-state

# Dump all namespaces to stdout
oc cluster-info dump --all-namespaces

# Dump a set of namespaces to /path/to/cluster-state
oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state
2.6.1.16. oc completion
Output shell completion code for the specified shell (bash, zsh, fish, or powershell)
Example usage
# Installing bash completion on macOS using homebrew
## If running Bash 3.2 included with macOS
brew install bash-completion
## or, if running Bash 4.1+
brew install bash-completion@2
## If oc is installed via homebrew, this should start working immediately
## If you've installed via other means, you may need add the completion to your completion directory
oc completion bash > $(brew --prefix)/etc/bash_completion.d/oc

# Installing bash completion on Linux
## If bash-completion is not installed on Linux, install the 'bash-completion' package
## via your distribution's package manager.
## Load the oc completion code for bash into the current shell
source <(oc completion bash)
## Write bash completion code to a file and source it from .bash_profile
oc completion bash > ~/.kube/completion.bash.inc
printf "
# oc shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile

# Load the oc completion code for zsh[1] into the current shell
source <(oc completion zsh)
# Set the oc completion code for zsh[1] to autoload on startup
oc completion zsh > "${fpath[1]}/_oc"

# Load the oc completion code for fish[2] into the current shell
oc completion fish | source
# To load completions for each session, execute once:
oc completion fish > ~/.config/fish/completions/oc.fish

# Load the oc completion code for powershell into the current shell
oc completion powershell | Out-String | Invoke-Expression
# Set oc completion code for powershell to run on startup
## Save completion code to a script and execute in the profile
oc completion powershell > $HOME\.kube\completion.ps1
Add-Content $PROFILE "$HOME\.kube\completion.ps1"
## Execute completion code in the profile
Add-Content $PROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) {
oc completion powershell | Out-String | Invoke-Expression
}"
## Add completion code directly to the $PROFILE script
oc completion powershell >> $PROFILE
2.6.1.17. oc config current-context
Display the current-context
Example usage
# Display the current-context
oc config current-context
2.6.1.18. oc config delete-cluster
Delete the specified cluster from the kubeconfig
Example usage
# Delete the minikube cluster
oc config delete-cluster minikube
2.6.1.19. oc config delete-context
Delete the specified context from the kubeconfig
Example usage
# Delete the context for the minikube cluster
oc config delete-context minikube
2.6.1.20. oc config delete-user
Delete the specified user from the kubeconfig
Example usage
# Delete the minikube user
oc config delete-user minikube
2.6.1.21. oc config get-clusters
Display clusters defined in the kubeconfig
Example usage
# List the clusters that oc knows about
oc config get-clusters
2.6.1.22. oc config get-contexts
Describe one or many contexts
Example usage
# List all the contexts in your kubeconfig file
oc config get-contexts

# Describe one context in your kubeconfig file
oc config get-contexts my-context
2.6.1.23. oc config get-users
Display users defined in the kubeconfig
Example usage
# List the users that oc knows about
oc config get-users
2.6.1.24. oc config new-admin-kubeconfig
Generate, make the server trust, and display a new admin.kubeconfig
Example usage
# Generate a new admin kubeconfig
oc config new-admin-kubeconfig
2.6.1.25. oc config new-kubelet-bootstrap-kubeconfig
Generate, make the server trust, and display a new kubelet /etc/kubernetes/kubeconfig
Example usage
# Generate a new kubelet bootstrap kubeconfig
oc config new-kubelet-bootstrap-kubeconfig
2.6.1.26. oc config refresh-ca-bundle
Update the OpenShift CA bundle by contacting the API server
Example usage
# Refresh the CA bundle for the current context's cluster
oc config refresh-ca-bundle

# Refresh the CA bundle for the cluster named e2e in your kubeconfig
oc config refresh-ca-bundle e2e

# Print the CA bundle from the current OpenShift cluster's API server
oc config refresh-ca-bundle --dry-run
2.6.1.27. oc config rename-context
Rename a context from the kubeconfig file
Example usage
# Rename the context 'old-name' to 'new-name' in your kubeconfig file
oc config rename-context old-name new-name
2.6.1.28. oc config set
Set an individual value in a kubeconfig file
Example usage
# Set the server field on the my-cluster cluster to https://1.2.3.4
oc config set clusters.my-cluster.server https://1.2.3.4

# Set the certificate-authority-data field on the my-cluster cluster
oc config set clusters.my-cluster.certificate-authority-data $(echo "cert_data_here" | base64 -i -)

# Set the cluster field in the my-context context to my-cluster
oc config set contexts.my-context.cluster my-cluster

# Set the client-key-data field in the cluster-admin user using --set-raw-bytes option
oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true
2.6.1.29. oc config set-cluster
Set a cluster entry in kubeconfig
Example usage
# Set only the server field on the e2e cluster entry without touching other values
oc config set-cluster e2e --server=https://1.2.3.4

# Embed certificate authority data for the e2e cluster entry
oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt

# Disable cert checking for the e2e cluster entry
oc config set-cluster e2e --insecure-skip-tls-verify=true

# Set the custom TLS server name to use for validation for the e2e cluster entry
oc config set-cluster e2e --tls-server-name=my-cluster-name

# Set the proxy URL for the e2e cluster entry
oc config set-cluster e2e --proxy-url=https://1.2.3.4
2.6.1.30. oc config set-context
Set a context entry in kubeconfig
Example usage
# Set the user field on the gce context entry without touching other values
oc config set-context gce --user=cluster-admin
2.6.1.31. oc config set-credentials
Set a user entry in kubeconfig
Example usage
# Set only the "client-key" field on the "cluster-admin"
# entry, without touching other values
oc config set-credentials cluster-admin --client-key=~/.kube/admin.key

# Set basic auth for the "cluster-admin" entry
oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif

# Embed client certificate data in the "cluster-admin" entry
oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true

# Enable the Google Compute Platform auth provider for the "cluster-admin" entry
oc config set-credentials cluster-admin --auth-provider=gcp

# Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments
oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar

# Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry
oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-

# Enable new exec auth plugin for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1

# Enable new exec auth plugin for the "cluster-admin" entry with interactive mode
oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never

# Define new exec auth plugin arguments for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2

# Create or update exec auth plugin environment variables for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2

# Remove exec auth plugin environment variables for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-env=var-to-remove-
2.6.1.32. oc config unset
Unset an individual value in a kubeconfig file
Example usage
# Unset the current-context
oc config unset current-context

# Unset namespace in foo context
oc config unset contexts.foo.namespace
2.6.1.33. oc config use-context
Set the current-context in a kubeconfig file
Example usage
# Use the context for the minikube cluster
oc config use-context minikube
2.6.1.34. oc config view
Display merged kubeconfig settings or a specified kubeconfig file
Example usage
# Show merged kubeconfig settings
oc config view

# Show merged kubeconfig settings, raw certificate data, and exposed secrets
oc config view --raw

# Get the password for the e2e user
oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
2.6.1.35. oc cp
Copy files and directories to and from containers
Example usage
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'oc cp' will fail.
#
# For advanced use cases, such as symlinks, wildcard expansion or
# file mode preservation, consider using 'oc exec'.

# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar

# Copy /tmp/foo from a remote pod to /tmp/bar locally
oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar

# Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace
oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir

# Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>

# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar

# Copy /tmp/foo from a remote pod to /tmp/bar locally
oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
2.6.1.36. oc create
Create a resource from a file or from stdin
Example usage
# Create a pod using the data in pod.json
oc create -f ./pod.json

# Create a pod based on the JSON passed into stdin
cat pod.json | oc create -f -

# Edit the data in registry.yaml in JSON then create the resource using the edited data
oc create -f registry.yaml --edit -o json
2.6.1.37. oc create build
Create a new build
Example usage
# Create a new build
oc create build myapp
2.6.1.38. oc create clusterresourcequota
Create a cluster resource quota
Example usage
# Create a cluster resource quota limited to 10 pods
oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10
2.6.1.39. oc create clusterrole
Create a cluster role
Example usage
# Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
oc create clusterrole pod-reader --verb=get,list,watch --resource=pods

# Create a cluster role named "pod-reader" with ResourceName specified
oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod

# Create a cluster role named "foo" with API Group specified
oc create clusterrole foo --verb=get,list,watch --resource=rs.apps

# Create a cluster role named "foo" with SubResource specified
oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status

# Create a cluster role name "foo" with NonResourceURL specified
oc create clusterrole "foo" --verb=get --non-resource-url=/logs/*

# Create a cluster role name "monitoring" with AggregationRule specified
oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true"
2.6.1.40. oc create clusterrolebinding
Create a cluster role binding for a particular cluster role
Example usage
# Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role
oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1
2.6.1.41. oc create configmap
Create a config map from a local file, directory or literal value
Example usage
# Create a new config map named my-config based on folder bar
oc create configmap my-config --from-file=path/to/bar

# Create a new config map named my-config with specified keys instead of file basenames on disk
oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt

# Create a new config map named my-config with key1=config1 and key2=config2
oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2

# Create a new config map named my-config from the key=value pairs in the file
oc create configmap my-config --from-file=path/to/bar

# Create a new config map named my-config from an env file
oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env
2.6.1.42. oc create cronjob
Create a cron job with the specified name
Example usage
# Create a cron job
oc create cronjob my-job --image=busybox --schedule="*/1 * * * *"

# Create a cron job with a command
oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date
2.6.1.43. oc create deployment
Create a deployment with the specified name
Example usage
# Create a deployment named my-dep that runs the busybox image
oc create deployment my-dep --image=busybox

# Create a deployment with a command
oc create deployment my-dep --image=busybox -- date

# Create a deployment named my-dep that runs the nginx image with 3 replicas
oc create deployment my-dep --image=nginx --replicas=3

# Create a deployment named my-dep that runs the busybox image and expose port 5701
oc create deployment my-dep --image=busybox --port=5701

# Create a deployment named my-dep that runs multiple containers
oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx
2.6.1.44. oc create deploymentconfig
Create a deployment config with default options that uses a given image
Example usage
Create an nginx deployment config named my-nginx
# Create an nginx deployment config named my-nginx
oc create deploymentconfig my-nginx --image=nginx
2.6.1.45. oc create identity
Manually create an identity (only needed if automatic creation is disabled)
Example usage
Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones"
# Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones"
oc create identity acme_ldap:adamjones
2.6.1.46. oc create imagestream
Create a new empty image stream
Example usage
Create a new image stream
# Create a new image stream
oc create imagestream mysql
2.6.1.47. oc create imagestreamtag
Create a new image stream tag
Example usage
Create a new image stream tag based on an image in a remote registry
# Create a new image stream tag based on an image in a remote registry
oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0
2.6.1.48. oc create ingress
Create an ingress with the specified name
Example usage
Create a single ingress called 'simple' that directs requests to foo.com/bar to svc
# Create a single ingress called 'simple' that directs requests to foo.com/bar to svc
# svc1:8080 with a TLS secret "my-cert"
oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"

# Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress"
oc create ingress catch-all --class=otheringress --rule="/path=svc:port"

# Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2
oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \
  --annotation ingress.annotation1=foo \
  --annotation ingress.annotation2=bla

# Create an ingress with the same host and multiple paths
oc create ingress multipath --class=default \
  --rule="foo.com/=svc:port" \
  --rule="foo.com/admin/=svcadmin:portadmin"

# Create an ingress with multiple hosts and the pathType as Prefix
oc create ingress ingress1 --class=default \
  --rule="foo.com/path*=svc:8080" \
  --rule="bar.com/admin*=svc2:http"

# Create an ingress with TLS enabled using the default ingress certificate and different path types
oc create ingress ingtls --class=default \
  --rule="foo.com/=svc:https,tls" \
  --rule="foo.com/path/subpath*=othersvc:8080"

# Create an ingress with TLS enabled using a specific secret and pathType as Prefix
oc create ingress ingsecret --class=default \
  --rule="foo.com/*=svc:8080,tls=secret1"

# Create an ingress with a default backend
oc create ingress ingdefault --class=default \
  --default-backend=defaultsvc:http \
  --rule="foo.com/*=svc:8080,tls=secret1"
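For reference, the first ingress example above generates roughly the following manifest; a hedged sketch of the generated object, not verbatim server output (a plain path produces `pathType: Exact`, while a trailing `*` in a rule produces `pathType: Prefix`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple
spec:
  tls:
  - hosts:
    - foo.com
    secretName: my-cert
  rules:
  - host: foo.com
    http:
      paths:
      - path: /bar
        pathType: Exact
        backend:
          service:
            name: svc1
            port:
              number: 8080
```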
2.6.1.49. oc create job
Create a job with the specified name
Example usage
Create a job
# Create a job
oc create job my-job --image=busybox

# Create a job with a command
oc create job my-job --image=busybox -- date

# Create a job from a cron job named "a-cronjob"
oc create job test-job --from=cronjob/a-cronjob
2.6.1.50. oc create namespace
Create a namespace with the specified name
Example usage
Create a new namespace named my-namespace
# Create a new namespace named my-namespace
oc create namespace my-namespace
2.6.1.51. oc create poddisruptionbudget
Create a pod disruption budget with the specified name
Example usage
Create a pod disruption budget named my-pdb that will select all pods with the app=rails label
# Create a pod disruption budget named my-pdb that will select all pods with the app=rails label
# and require at least one of them being available at any point in time
oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1

# Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label
# and require at least half of the pods selected to be available at any point in time
oc create pdb my-pdb --selector=app=nginx --min-available=50%
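The first command above generates roughly the following PodDisruptionBudget manifest; a hedged sketch, not verbatim server output:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 1          # voluntary evictions are blocked if they would drop below this
  selector:
    matchLabels:
      app: rails
```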
2.6.1.52. oc create priorityclass
Create a priority class with the specified name
Example usage
Create a priority class named high-priority
# Create a priority class named high-priority
oc create priorityclass high-priority --value=1000 --description="high priority"

# Create a priority class named default-priority that is considered as the global default priority
oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority"

# Create a priority class named high-priority that cannot preempt pods with lower priority
oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never"
2.6.1.53. oc create quota
Create a quota with the specified name
Example usage
Create a new resource quota named my-quota
# Create a new resource quota named my-quota
oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10

# Create a new resource quota named best-effort
oc create quota best-effort --hard=pods=100 --scopes=BestEffort
2.6.1.54. oc create role
Create a role with single rule
Example usage
Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
# Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods

# Create a role named "pod-reader" with ResourceName specified
oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod

# Create a role named "foo" with API Group specified
oc create role foo --verb=get,list,watch --resource=rs.apps

# Create a role named "foo" with SubResource specified
oc create role foo --verb=get,list,watch --resource=pods,pods/status
2.6.1.55. oc create rolebinding
Create a role binding for a particular role or cluster role
Example usage
Create a role binding for user1, user2, and group1 using the admin cluster role
# Create a role binding for user1, user2, and group1 using the admin cluster role
oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1

# Create a role binding for service account monitoring:sa-dev using the admin role
oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev
2.6.1.56. oc create route edge
Create a route that uses edge TLS termination
Example usage
Create an edge route named "my-route" that exposes the frontend service
# Create an edge route named "my-route" that exposes the frontend service
oc create route edge my-route --service=frontend

# Create an edge route that exposes the frontend service and specify a path
# If the route name is omitted, the service name will be used
oc create route edge --service=frontend --path /assets
2.6.1.57. oc create route passthrough
Create a route that uses passthrough TLS termination
Example usage
Create a passthrough route named "my-route" that exposes the frontend service
# Create a passthrough route named "my-route" that exposes the frontend service
oc create route passthrough my-route --service=frontend

# Create a passthrough route that exposes the frontend service and specify
# a host name. If the route name is omitted, the service name will be used
oc create route passthrough --service=frontend --hostname=www.example.com
2.6.1.58. oc create route reencrypt
Create a route that uses reencrypt TLS termination
Example usage
Create a route named "my-route" that exposes the frontend service
# Create a route named "my-route" that exposes the frontend service
oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert

# Create a reencrypt route that exposes the frontend service, letting the
# route name default to the service name and the destination CA certificate
# default to the service CA
oc create route reencrypt --service=frontend
2.6.1.59. oc create secret docker-registry
Create a secret for use with a Docker registry
Example usage
If you do not already have a .dockercfg file, create a dockercfg secret directly
# If you do not already have a .dockercfg file, create a dockercfg secret directly
oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL

# Create a new secret named my-secret from ~/.docker/config.json
oc create secret docker-registry my-secret --from-file=path/to/.docker/config.json
2.6.1.60. oc create secret generic
Create a secret from a local file, directory, or literal value
Example usage
Create a new secret named my-secret with keys for each file in folder bar
# Create a new secret named my-secret with keys for each file in folder bar
oc create secret generic my-secret --from-file=path/to/bar

# Create a new secret named my-secret with specified keys instead of names on disk
oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub

# Create a new secret named my-secret with key1=supersecret and key2=topsecret
oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret

# Create a new secret named my-secret using a combination of a file and a literal
oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret

# Create a new secret named my-secret from env files
oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env
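Note that secret values are stored base64-encoded in the resulting manifest; this is encoding, not encryption, as a quick round-trip (independent of any cluster) shows:

```shell
# base64 encodes a secret value but does not protect it
enc=$(printf 'supersecret' | base64)
echo "$enc"                      # the "data" field value in the Secret manifest
printf '%s' "$enc" | base64 -d   # anyone who can read the manifest can decode it
```

Anyone with read access to the Secret object can recover the plaintext this way, which is why access to secrets is typically restricted via RBAC.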
2.6.1.61. oc create secret tls
Create a TLS secret
Example usage
Create a new TLS secret named tls-secret with the given key pair
# Create a new TLS secret named tls-secret with the given key pair
oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key
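If you need a throwaway key pair for testing, one way is to generate a self-signed certificate with openssl; a sketch, assuming openssl is installed (the paths and subject are illustrative):

```shell
# Generate a short-lived self-signed certificate and key for testing only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=test.example.com" \
  -keyout /tmp/tls.key -out /tmp/tls.crt

# The resulting files can then be passed to:
#   oc create secret tls tls-secret --cert=/tmp/tls.crt --key=/tmp/tls.key
ls -l /tmp/tls.key /tmp/tls.crt
```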
2.6.1.62. oc create service clusterip
Create a ClusterIP service
Example usage
Create a new ClusterIP service named my-cs
# Create a new ClusterIP service named my-cs
oc create service clusterip my-cs --tcp=5678:8080

# Create a new ClusterIP service named my-cs (in headless mode)
oc create service clusterip my-cs --clusterip="None"
2.6.1.63. oc create service externalname
Create an ExternalName service
Example usage
Create a new ExternalName service named my-ns
# Create a new ExternalName service named my-ns
oc create service externalname my-ns --external-name bar.com
2.6.1.64. oc create service loadbalancer
Create a LoadBalancer service
Example usage
Create a new LoadBalancer service named my-lbs
# Create a new LoadBalancer service named my-lbs
oc create service loadbalancer my-lbs --tcp=5678:8080
2.6.1.65. oc create service nodeport
Create a NodePort service
Example usage
Create a new NodePort service named my-ns
# Create a new NodePort service named my-ns
oc create service nodeport my-ns --tcp=5678:8080
2.6.1.66. oc create serviceaccount
Create a service account with the specified name
Example usage
Create a new service account named my-service-account
# Create a new service account named my-service-account
oc create serviceaccount my-service-account
2.6.1.67. oc create token
Request a service account token
Example usage
Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace
# Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace
oc create token myapp

# Request a token for a service account in a custom namespace
oc create token myapp --namespace myns

# Request a token with a custom expiration
oc create token myapp --duration 10m

# Request a token with a custom audience
oc create token myapp --audience https://example.com

# Request a token bound to an instance of a Secret object
oc create token myapp --bound-object-kind Secret --bound-object-name mysecret

# Request a token bound to an instance of a Secret object with a specific UID
oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc
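The returned token is a JWT: three dot-separated segments, where the middle segment is base64url-encoded JSON carrying claims such as the expiry (`exp`) and subject (`sub`). To inspect a claim, decode that middle segment (adding `=` padding if its length is not a multiple of 4). A sketch using a fabricated payload, NOT a real token:

```shell
# Encode a fabricated claims payload the way a JWT segment is encoded
# (illustrative only; a real token's payload comes from the API server)
seg=$(printf '{"sub":"myapp"}' | base64)

# Decoding the middle segment of a real token works the same way
printf '%s' "$seg" | base64 -d
```

Real token payloads may use the `-`/`_` base64url alphabet and omit padding, so a robust decoder normalizes those before calling `base64 -d`.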
2.6.1.68. oc create user
Manually create a user (only needed if automatic creation is disabled)
Example usage
Create a user with the username "ajones" and the display name "Adam Jones"
# Create a user with the username "ajones" and the display name "Adam Jones"
oc create user ajones --full-name="Adam Jones"
2.6.1.69. oc create useridentitymapping
Manually map an identity to a user
Example usage
Map the identity "acme_ldap:adamjones" to the user "ajones"
# Map the identity "acme_ldap:adamjones" to the user "ajones"
oc create useridentitymapping acme_ldap:adamjones ajones
2.6.1.70. oc debug
Launch a new instance of a pod for debugging
Example usage
Start a shell session into a pod using the OpenShift tools image
# Start a shell session into a pod using the OpenShift tools image
oc debug

# Debug a currently running deployment by creating a new pod
oc debug deploy/test

# Debug a node as an administrator
oc debug node/master-1

# Debug a Windows node
# Note: the chosen image must match the Windows Server version (2019, 2022) of the node
oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022

# Launch a shell in a pod using the provided image stream tag
oc debug istag/mysql:latest -n openshift

# Test running a job as a non-root user
oc debug job/test --as-user=1000000

# Debug a specific failing container by running the env command in the 'second' container
oc debug daemonset/test -c second -- /bin/env

# See the pod that would be created to debug
oc debug mypod-9xbc -o yaml

# Debug a resource but launch the debug pod in another namespace
# Note: Not all resources can be debugged using --to-namespace without modification. For example,
# volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition
# to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace
oc debug mypod-9xbc --to-namespace testns
2.6.1.71. oc delete
Delete resources by file names, stdin, resources and names, or by resources and label selector
Example usage
Delete a pod using the type and name specified in pod.json
# Delete a pod using the type and name specified in pod.json
oc delete -f ./pod.json

# Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
oc delete -k dir

# Delete resources from all files that end with '.json'
oc delete -f '*.json'

# Delete a pod based on the type and name in the JSON passed into stdin
cat pod.json | oc delete -f -

# Delete pods and services with same names "baz" and "foo"
oc delete pod,service baz foo

# Delete pods and services with label name=myLabel
oc delete pods,services -l name=myLabel

# Delete a pod with minimal delay
oc delete pod foo --now

# Force delete a pod on a dead node
oc delete pod foo --force

# Delete all pods
oc delete pods --all

# Delete all pods only if the user confirms the deletion
oc delete pods --all --interactive
2.6.1.72. oc describe
Show details of a specific resource or group of resources
Example usage
Describe a node
# Describe a node
oc describe nodes kubernetes-node-emt8.c.myproject.internal

# Describe a pod
oc describe pods/nginx

# Describe a pod identified by type and name in "pod.json"
oc describe -f pod.json

# Describe all pods
oc describe pods

# Describe pods by label name=myLabel
oc describe pods -l name=myLabel

# Describe all pods managed by the 'frontend' replication controller
# (rc-created pods get the name of the rc as a prefix in the pod name)
oc describe pods frontend
2.6.1.73. oc diff
Diff the live version against a would-be applied version
Example usage
Diff resources included in pod.json
# Diff resources included in pod.json
oc diff -f pod.json

# Diff file read from stdin
cat service.yaml | oc diff -f -
2.6.1.74. oc edit
Edit a resource on the server
Example usage
Edit the service named 'registry'
# Edit the service named 'registry'
oc edit svc/registry

# Use an alternative editor
KUBE_EDITOR="nano" oc edit svc/registry

# Edit the job 'myjob' in JSON using the v1 API format
oc edit job.v1.batch/myjob -o json

# Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation
oc edit deployment/mydeployment -o yaml --save-config

# Edit the 'status' subresource for the 'mydeployment' deployment
oc edit deployment mydeployment --subresource='status'
2.6.1.75. oc events
List events
Example usage
List recent events in the default namespace
# List recent events in the default namespace
oc events

# List recent events in all namespaces
oc events --all-namespaces

# List recent events for the specified pod, then wait for more events and list them as they arrive
oc events --for pod/web-pod-13je7 --watch

# List recent events in YAML format
oc events -oyaml

# List only recent events of type 'Warning' or 'Normal'
oc events --types=Warning,Normal
2.6.1.76. oc exec
Execute a command in a container
Example usage
Get output from running the 'date' command from pod mypod, using the first container by default
# Get output from running the 'date' command from pod mypod, using the first container by default
oc exec mypod -- date

# Get output from running the 'date' command in ruby-container from pod mypod
oc exec mypod -c ruby-container -- date

# Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod
# and sends stdout/stderr from 'bash' back to the client
oc exec mypod -c ruby-container -i -t -- bash -il

# List contents of /usr from the first container of pod mypod and sort by modification time
# If the command you want to execute in the pod has any flags in common (e.g. -i),
# you must use two dashes (--) to separate your command's flags/arguments
# Also note, do not surround your command and its flags/arguments with quotes
# unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr")
oc exec mypod -i -t -- ls -t /usr

# Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default
oc exec deploy/mydeployment -- date

# Get output from running 'date' command from the first pod of the service myservice, using the first container by default
oc exec svc/myservice -- date
2.6.1.77. oc explain
Get documentation for a resource
Example usage
Get the documentation of the resource and its fields
# Get the documentation of the resource and its fields
oc explain pods

# Get all the fields in the resource
oc explain pods --recursive

# Get the explanation for deployment in supported api versions
oc explain deployments --api-version=apps/v1

# Get the documentation of a specific field of a resource
oc explain pods.spec.containers

# Get the documentation of resources in different format
oc explain deployment --output=plaintext-openapiv2
2.6.1.78. oc expose
Expose a replicated application as a service or route
Example usage
Create a route based on service nginx. The new route will reuse nginx's labels
# Create a route based on service nginx. The new route will reuse nginx's labels
oc expose service nginx

# Create a route and specify your own label and route name
oc expose service nginx -l name=myroute --name=fromdowntown

# Create a route and specify a host name
oc expose service nginx --hostname=www.example.com

# Create a route with a wildcard
oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain
# This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included

# Expose a deployment configuration as a service and use the specified port
oc expose dc ruby-hello-world --port=8080

# Expose a service as a route in the specified path
oc expose service nginx --path=/nginx
2.6.1.79. oc extract
Extract secrets or config maps to disk
Example usage
Extract the secret "test" to the current directory
# Extract the secret "test" to the current directory
oc extract secret/test

# Extract the config map "nginx" to the /tmp directory
oc extract configmap/nginx --to=/tmp

# Extract the config map "nginx" to STDOUT
oc extract configmap/nginx --to=-

# Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory
oc extract configmap/nginx --to=/tmp --keys=nginx.conf
2.6.1.80. oc get
Display one or many resources
Example usage
List all pods in ps output format
# List all pods in ps output format
oc get pods

# List all pods in ps output format with more information (such as node name)
oc get pods -o wide

# List a single replication controller with specified NAME in ps output format
oc get replicationcontroller web

# List deployments in JSON output format, in the "v1" version of the "apps" API group
oc get deployments.v1.apps -o json

# List a single pod in JSON output format
oc get -o json pod web-pod-13je7

# List a pod identified by type and name specified in "pod.yaml" in JSON output format
oc get -f pod.yaml -o json

# List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml
oc get -k dir/

# Return only the phase value of the specified pod
oc get -o template pod/web-pod-13je7 --template={{.status.phase}}

# List resource information in custom columns
oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image

# List all replication controllers and services together in ps output format
oc get rc,services

# List one or more resources by their type and names
oc get rc/web service/frontend pods/web-pod-13je7

# List the 'status' subresource for a single pod
oc get pod web-pod-13je7 --subresource status

# List all deployments in namespace 'backend'
oc get deployments.apps --namespace backend

# List all pods existing in all namespaces
oc get pods --all-namespaces
2.6.1.81. oc get-token
Experimental: Get token from external OIDC issuer as credentials exec plugin
Example usage
Starts an auth code flow to the issuer URL with the client ID and the given extra scopes
# Starts an auth code flow to the issuer URL with the client ID and the given extra scopes
oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile

# Starts an auth code flow to the issuer URL with a different callback address
oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343
2.6.1.82. oc idle
Idle scalable resources
Example usage
Idle the scalable controllers associated with the services listed in to-idle.txt
# Idle the scalable controllers associated with the services listed in to-idle.txt
oc idle --resource-names-file to-idle.txt
2.6.1.83. oc image append
Add layers to images and push them to a registry
Example usage
Remove the entrypoint on the mysql:latest image
# Remove the entrypoint on the mysql:latest image
oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}'

# Add a new layer to the image
oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz

# Add a new layer to the image and store the result on disk
# This results in $(pwd)/v2/mysql/blobs,manifests
oc image append --from mysql:latest --to file://mysql:local layer.tar.gz

# Add a new layer to the image and store the result on disk in a designated directory
# This will result in $(pwd)/mysql-local/v2/mysql/blobs,manifests
oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz

# Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists)
oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz

# Add a new layer to an image that was mirrored to the current directory on disk ($(pwd)/v2/image exists)
oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz

# Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch
# Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified
oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz

# Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified
oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz

# Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist
oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz
2.6.1.84. oc image extract
Copy files from an image to the file system
Example usage
Extract the busybox image into the current directory
# Extract the busybox image into the current directory
oc image extract docker.io/library/busybox:latest

# Extract the busybox image into a designated directory (must exist)
oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox

# Extract the busybox image into the current directory for linux/s390x platform
# Note: Wildcard filter is not supported with extract; pass a single os/arch to extract
oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x

# Extract a single file from the image into the current directory
oc image extract docker.io/library/centos:7 --path /bin/bash:.

# Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory
oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:.

# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist)
# This results in /tmp/yum.repos.d/*.repo on local system
oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d

# Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists)
# --confirm is required because the current directory is not empty
oc image extract file://busybox:local --confirm

# Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory
# --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists)
oc image extract file://busybox:local --dir busybox-mirror-dir --confirm

# Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist)
oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox

# Extract the last layer in the image
oc image extract docker.io/library/centos:7[-1]

# Extract the first three layers of the image
oc image extract docker.io/library/centos:7[:3]

# Extract the last three layers of the image
oc image extract docker.io/library/centos:7[-3:]
2.6.1.85. oc image info
Display information about an image
Example usage
Show information about an image
# Show information about an image
oc image info quay.io/openshift/cli:latest

# Show information about images matching a wildcard
oc image info quay.io/openshift/cli:4.*

# Show information about a file mirrored to disk under DIR
oc image info --dir=DIR file://library/busybox:latest

# Select which image from a multi-OS image to show
oc image info library/busybox:latest --filter-by-os=linux/arm64
2.6.1.86. oc image mirror
Mirror images from one repository to another
Example usage
Copy image to another tag
# Copy image to another tag
oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable

# Copy image to another registry
oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable

# Copy all tags starting with mysql to the destination repository
oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage

# Copy image to disk, creating a directory structure that can be served as a registry
oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest

# Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest)
oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest

# Copy image to S3 without setting a tag (pull via @<digest>)
oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image

# Copy image to multiple locations
oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \
  docker.io/myrepository/myimage:dev

# Copy multiple images
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
  myregistry.com/myimage:new=myregistry.com/other:target

# Copy manifest list of a multi-architecture image, even if only a single image is found
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
  --keep-manifest-list=true

# Copy specific os/arch manifest of a multi-architecture image
# Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images
# Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered manifests
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
  --filter-by-os=os/arch

# Copy all os/arch manifests of a multi-architecture image
# Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
  --keep-manifest-list=true

# Note the above command is equivalent to
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
  --filter-by-os=.*

# Copy specific os/arch manifest of a multi-architecture image
# Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images
# Note that the target registry may reject a manifest list if the platform specific images do not all exist
# You must use a registry with sparse registry support enabled
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
  --filter-by-os=linux/386 \
  --keep-manifest-list=true
2.6.1.87. oc import-image
Import images from a container image registry
Example usage
Import tag latest into a new image stream
# Import tag latest into a new image stream
oc import-image mystream --from=registry.io/repo/image:latest --confirm

# Update imported data for tag latest in an already existing image stream
oc import-image mystream

# Update imported data for tag stable in an already existing image stream
oc import-image mystream:stable

# Update imported data for all tags in an existing image stream
oc import-image mystream --all

# Update imported data for a tag that points to a manifest list to include the full manifest list
oc import-image mystream --import-mode=PreserveOriginal

# Import all tags into a new image stream
oc import-image mystream --from=registry.io/repo/image --all --confirm

# Import all tags into a new image stream using a custom timeout
oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm
2.6.1.88. oc kustomize
Build a kustomization target from a directory or URL
Example usage
Build the current working directory
# Build the current working directory
oc kustomize

# Build some shared configuration directory
oc kustomize /home/config/production

# Build from github
oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
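`oc kustomize` expects a `kustomization.yaml` file in the target directory. A minimal sketch of such a file (the resource file names, prefix, and label are illustrative):

```yaml
# kustomization.yaml - minimal example
resources:
- deployment.yaml
- service.yaml
namePrefix: prod-          # prepended to the name of every resource
commonLabels:
  environment: production  # added to every resource and selector
```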
2.6.1.89. oc label
Update the labels on a resource
Example usage
Update pod 'foo' with the label 'unhealthy' and the value 'true'
# Update pod 'foo' with the label 'unhealthy' and the value 'true'
oc label pods foo unhealthy=true

# Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value
oc label --overwrite pods foo status=unhealthy

# Update all pods in the namespace
oc label pods --all status=unhealthy

# Update a pod identified by the type and name in "pod.json"
oc label -f pod.json status=unhealthy

# Update pod 'foo' only if the resource is unchanged from version 1
oc label pods foo status=unhealthy --resource-version=1

# Update pod 'foo' by removing a label named 'bar' if it exists
# Does not require the --overwrite flag
oc label pods foo bar-
2.6.1.90. oc login
Log in to a server
Example usage
Log in interactively
# Log in interactively
oc login --username=myuser

# Log in to the given server with the given certificate authority file
oc login localhost:8443 --certificate-authority=/path/to/cert.crt

# Log in to the given server with the given credentials (will not prompt interactively)
oc login localhost:8443 --username=myuser --password=mypass

# Log in to the given server through a browser
oc login localhost:8443 --web --callback-port 8280

# Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080
oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080
2.6.1.91. oc logout
End the current server session
Example usage
Log out
# Log out
oc logout
2.6.1.92. oc logs
Print the logs for a container in a pod
Example usage
Start streaming the logs of the most recent build of the openldap build config
# Start streaming the logs of the most recent build of the openldap build config
oc logs -f bc/openldap

# Start streaming the logs of the latest deployment of the mysql deployment config
oc logs -f dc/mysql

# Get the logs of the first deployment for the mysql deployment config. Note that logs
# from older deployments may not exist either because the deployment was successful
# or due to deployment pruning or manual deletion of the deployment
oc logs --version=1 dc/mysql

# Return a snapshot of ruby-container logs from pod backend
oc logs backend -c ruby-container

# Start streaming of ruby-container logs from pod backend
oc logs -f pod/backend -c ruby-container
2.6.1.93. oc new-app
Create a new application
Example usage
List all local templates and image streams that can be used to create an app
# List all local templates and image streams that can be used to create an app
oc new-app --list

# Create an application based on the source code in the current git repository (with a public remote) and a container image
oc new-app . --image=registry/repo/langimage

# Create an application myapp with Docker based build strategy expecting binary input
oc new-app --strategy=docker --binary --name myapp

# Create a Ruby application based on the provided [image]~[source code] combination
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

# Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql
oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql

# Use a MySQL image in a private registry to create an app and override application artifacts' names
oc new-app --image=myregistry.com/mycompany/mysql --name=private

# Use an image with the full manifest list to create an app and override application artifacts' names
oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal

# Create an application from a remote repository using its beta4 branch
oc new-app https://github.com/openshift/ruby-hello-world#beta4

# Create an application based on a stored template, explicitly setting a parameter value
oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin

# Create an application from a remote repository and specify a context directory
oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build

# Create an application from a remote private repository and specify which existing secret to use
oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret

# Create an application based on a template file, explicitly setting a parameter value
oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin

# Search all templates, image streams, and container images for the ones that match "ruby"
oc new-app --search ruby

# Search for "ruby", but only in stored templates (--template, --image-stream and --image
# can be used to filter search results)
oc new-app --search --template=ruby

# Search for "ruby" in stored templates and print the output as YAML
oc new-app --search --template=ruby --output=yaml
2.6.1.94. oc new-build
Create a new build configuration
Example usage
Create a build config based on the source code in the current git repository (with a public remote) and a container image
# Create a build config based on the source code in the current git repository (with a public
# remote) and a container image
oc new-build . --image=repo/langimage

# Create a NodeJS build config based on the provided [image]~[source code] combination
oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git

# Create a build config from a remote repository using its beta2 branch
oc new-build https://github.com/openshift/ruby-hello-world#beta2

# Create a build config using a Dockerfile specified as an argument
oc new-build -D $'FROM centos:7\nRUN yum install -y httpd'

# Create a build config from a remote repository and add custom environment variables
oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development

# Create a build config from a remote private repository and specify which existing secret to use
oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret

# Create a build config using an image with the full manifest list to create an app and override application artifacts' names
oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal

# Create a build config from a remote repository and inject the npmrc into a build
oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc

# Create a build config from a remote repository and inject environment data into a build
oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config

# Create a build config that gets its input from a remote repository and another container image
oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp
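The `-D $'FROM centos:7\nRUN yum install -y httpd'` form relies on bash ANSI-C quoting: `$'...'` expands the literal `\n` into a real newline, so oc receives a two-line Dockerfile as a single argument. A quick local sketch of that expansion (no cluster or oc binary needed):

```shell
# ANSI-C quoting: $'...' turns the literal \n into a newline
dockerfile=$'FROM centos:7\nRUN yum install -y httpd'

# The single argument now contains two lines
line_count=$(printf '%s\n' "$dockerfile" | wc -l)
echo "$line_count"
```

Plain single quotes would pass the two characters `\` and `n` through unchanged, producing an invalid one-line Dockerfile.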
2.6.1.95. oc new-project
Request a new project
Example usage
Create a new project with minimal information
# Create a new project with minimal information
oc new-project web-team-dev

# Create a new project with a display name and description
oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team."
2.6.1.96. oc observe
Observe changes to resources and react to them (experimental)
Example usage
Observe changes to services
# Observe changes to services
oc observe services

# Observe changes to services, including the clusterIP and invoke a script for each
oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh

# Observe changes to services filtered by a label selector
oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh
2.6.1.97. oc patch
Update fields of a resource
Example usage
Partially update a node using a strategic merge patch, specifying the patch as JSON
# Partially update a node using a strategic merge patch, specifying the patch as JSON
oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

# Partially update a node using a strategic merge patch, specifying the patch as YAML
oc patch node k8s-node-1 -p $'spec:\n unschedulable: true'

# Partially update a node identified by the type and name specified in "node.json" using strategic merge patch
oc patch -f node.json -p '{"spec":{"unschedulable":true}}'

# Update a container's image; spec.containers[*].name is required because it's a merge key
oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

# Update a container's image using a JSON patch with positional arrays
oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

# Update a deployment's replicas through the 'scale' subresource using a merge patch
oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'
2.6.1.98. oc plugin
Provides utilities for interacting with plugins
Example usage
List all available plugins
# List all available plugins
oc plugin list

# List only binary names of available plugins without paths
oc plugin list --name-only
2.6.1.99. oc plugin list
List all visible plugin executables on a user’s PATH
Example usage
List all available plugins
# List all available plugins
oc plugin list

# List only binary names of available plugins without paths
oc plugin list --name-only
2.6.1.100. oc policy add-role-to-user
Add a role to users or service accounts for the current project
Example usage
Add the 'view' role to user1 for the current project
# Add the 'view' role to user1 for the current project
oc policy add-role-to-user view user1

# Add the 'edit' role to serviceaccount1 for the current project
oc policy add-role-to-user edit -z serviceaccount1
2.6.1.101. oc policy scc-review
Check which service account can create a pod
Example usage
Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml
# Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml
# Service Account specified in myresource.yaml file is ignored
oc policy scc-review -z sa1,sa2 -f my_resource.yaml

# Check whether service account system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml
oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml

# Check whether the service account specified in my_resource_with_sa.yaml can admit the pod
oc policy scc-review -f my_resource_with_sa.yaml

# Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml
oc policy scc-review -f myresource_with_no_sa.yaml
2.6.1.102. oc policy scc-subject-review
Check whether a user or a service account can create a pod
Example usage
Check whether user bob can create a pod specified in myresource.yaml
# Check whether user bob can create a pod specified in myresource.yaml
oc policy scc-subject-review -u bob -f myresource.yaml

# Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml
oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml

# Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod
oc policy scc-subject-review -f myresourcewithsa.yaml
2.6.1.103. oc port-forward
Forward one or more local ports to a pod
Example usage
Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
oc port-forward pod/mypod 5000 6000

# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
oc port-forward deployment/mydeployment 5000 6000

# Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service
oc port-forward service/myservice 8443:https

# Listen on port 8888 locally, forwarding to 5000 in the pod
oc port-forward pod/mypod 8888:5000

# Listen on port 8888 on all addresses, forwarding to 5000 in the pod
oc port-forward --address 0.0.0.0 pod/mypod 8888:5000

# Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000

# Listen on a random port locally, forwarding to 5000 in the pod
oc port-forward pod/mypod :5000
2.6.1.104. oc process
Process a template into list of resources
Example usage
Convert the template.json file into a resource list and pass to create
# Convert the template.json file into a resource list and pass to create
oc process -f template.json | oc create -f -

# Process a file locally instead of contacting the server
oc process -f template.json --local -o yaml

# Process template while passing a user-defined label
oc process -f template.json -l name=mytemplate

# Convert a stored template into a resource list
oc process foo

# Convert a stored template into a resource list by setting/overriding parameter values
oc process foo PARM1=VALUE1 PARM2=VALUE2

# Convert a template stored in a different namespace into a resource list
oc process openshift//foo

# Convert template.json into a resource list
cat template.json | oc process -f -
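For the `PARM1=VALUE1 PARM2=VALUE2` override to work, the stored template must declare matching `parameters` entries. A hedged sketch of the minimal shape (the parameter names mirror the examples above; the ConfigMap object and its values are hypothetical):

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: foo
parameters:
  - name: PARM1          # overridden by 'oc process foo PARM1=VALUE1'
    value: default1      # used when no override is given
objects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example
    data:
      setting: ${PARM1}  # replaced with the parameter value at process time
```

`oc process` substitutes each `${NAME}` reference in `objects` and emits the resulting resource list.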
2.6.1.105. oc project
Switch to another project
Example usage
Switch to the 'myapp' project
# Switch to the 'myapp' project
oc project myapp

# Display the project currently in use
oc project
2.6.1.106. oc projects
Display existing projects
Example usage
List all projects
# List all projects
oc projects
2.6.1.107. oc proxy
Run a proxy to the Kubernetes API server
Example usage
To proxy all of the Kubernetes API and nothing else
# To proxy all of the Kubernetes API and nothing else
oc proxy --api-prefix=/

# To proxy only part of the Kubernetes API and also some static files
# You can get pods info with 'curl localhost:8001/api/v1/pods'
oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/

# To proxy the entire Kubernetes API at a different root
# You can get pods info with 'curl localhost:8001/custom/api/v1/pods'
oc proxy --api-prefix=/custom/

# Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/
oc proxy --port=8011 --www=./local/www/

# Run a proxy to the Kubernetes API server on an arbitrary local port
# The chosen port for the server will be output to stdout
oc proxy --port=0

# Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api
# This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/
oc proxy --api-prefix=/k8s-api
2.6.1.108. oc registry login
Log in to the integrated registry
Example usage
Log in to the integrated registry
# Log in to the integrated registry
oc registry login

# Log in to a different registry using BASIC auth credentials
oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS
2.6.1.109. oc replace
Replace a resource by file name or stdin
Example usage
Replace a pod using the data in pod.json
# Replace a pod using the data in pod.json
oc replace -f ./pod.json

# Replace a pod based on the JSON passed into stdin
cat pod.json | oc replace -f -

# Update a single-container pod's image version (tag) to v4
oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | oc replace -f -

# Force replace, delete and then re-create the resource
oc replace --force -f ./pod.json
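The tag-bump example edits live YAML in a pipeline; the sed substitution is the only subtle part, and it can be checked locally without a cluster (the `myimage` name comes from the example above, the `:v3` input tag is a hypothetical starting value):

```shell
# Same capture-group substitution the oc replace example uses:
# keep 'image: myimage', replace everything after the colon with :v4
updated=$(printf 'image: myimage:v3\n' | sed 's/\(image: myimage\):.*$/\1:v4/')
echo "$updated"
```

Because the pattern anchors on the exact `image: myimage` prefix, other `image:` lines in the manifest are left untouched.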
2.6.1.110. oc rollback
Revert part of an application back to a previous deployment
Example usage
Perform a rollback to the last successfully completed deployment for a deployment config
# Perform a rollback to the last successfully completed deployment for a deployment config
oc rollback frontend

# See what a rollback to version 3 will look like, but do not perform the rollback
oc rollback frontend --to-version=3 --dry-run

# Perform a rollback to a specific deployment
oc rollback frontend-2

# Perform the rollback manually by piping the JSON of the new config back to oc
oc rollback frontend -o json | oc replace dc/frontend -f -

# Print the updated deployment configuration in JSON format instead of performing the rollback
oc rollback frontend -o json
2.6.1.111. oc rollout
Manage the rollout of a resource
Example usage
Roll back to the previous deployment
# Roll back to the previous deployment
oc rollout undo deployment/abc

# Check the rollout status of a daemonset
oc rollout status daemonset/foo

# Restart a deployment
oc rollout restart deployment/abc

# Restart deployments with the 'app=nginx' label
oc rollout restart deployment --selector=app=nginx
2.6.1.112. oc rollout cancel
Cancel the in-progress deployment
Example usage
Cancel the in-progress deployment based on 'nginx'
# Cancel the in-progress deployment based on 'nginx'
oc rollout cancel dc/nginx
2.6.1.113. oc rollout history
View rollout history
Example usage
View the rollout history of a deployment
# View the rollout history of a deployment
oc rollout history deployment/abc

# View the details of daemonset revision 3
oc rollout history daemonset/abc --revision=3
2.6.1.114. oc rollout latest
Start a new rollout for a deployment config with the latest state from its triggers
Example usage
Start a new rollout based on the latest images defined in the image change triggers
# Start a new rollout based on the latest images defined in the image change triggers
oc rollout latest dc/nginx

# Print the rolled out deployment config
oc rollout latest dc/nginx -o json
2.6.1.115. oc rollout pause
Mark the provided resource as paused
Example usage
Mark the nginx deployment as paused
# Mark the nginx deployment as paused
# Any current state of the deployment will continue its function; new updates
# to the deployment will not have an effect as long as the deployment is paused
oc rollout pause deployment/nginx
2.6.1.116. oc rollout restart
Restart a resource
Example usage
Restart all deployments in the test-namespace namespace
# Restart all deployments in the test-namespace namespace
oc rollout restart deployment -n test-namespace

# Restart a deployment
oc rollout restart deployment/nginx

# Restart a daemon set
oc rollout restart daemonset/abc

# Restart deployments with the app=nginx label
oc rollout restart deployment --selector=app=nginx
2.6.1.117. oc rollout resume
Resume a paused resource
Example usage
Resume an already paused deployment
# Resume an already paused deployment
oc rollout resume deployment/nginx
2.6.1.118. oc rollout retry
Retry the latest failed rollout
Example usage
Retry the latest failed deployment based on 'frontend'
# Retry the latest failed deployment based on 'frontend'
# The deployer pod and any hook pods are deleted for the latest failed deployment
oc rollout retry dc/frontend
2.6.1.119. oc rollout status
Show the status of the rollout
Example usage
Watch the rollout status of a deployment
# Watch the rollout status of a deployment
oc rollout status deployment/nginx
2.6.1.120. oc rollout undo
Undo a previous rollout
Example usage
Roll back to the previous deployment
# Roll back to the previous deployment
oc rollout undo deployment/abc

# Roll back to daemonset revision 3
oc rollout undo daemonset/abc --to-revision=3

# Roll back to the previous deployment with dry-run
oc rollout undo --dry-run=server deployment/abc
2.6.1.121. oc rsh
Start a shell session in a container
Example usage
Open a shell session on the first container in pod 'foo'
# Open a shell session on the first container in pod 'foo'
oc rsh foo

# Open a shell session on the first container in pod 'foo' and namespace 'bar'
# (Note that oc client specific arguments must come before the resource name and its arguments)
oc rsh -n bar foo

# Run the command 'cat /etc/resolv.conf' inside pod 'foo'
oc rsh foo cat /etc/resolv.conf

# See the configuration of your internal registry
oc rsh dc/docker-registry cat config.yml

# Open a shell session on the container named 'index' inside a pod of your job
oc rsh -c index job/scheduled
2.6.1.122. oc rsync
Copy files between a local file system and a pod
Example usage
Synchronize a local directory with a pod directory
# Synchronize a local directory with a pod directory
oc rsync ./local/dir/ POD:/remote/dir

# Synchronize a pod directory with a local directory
oc rsync POD:/remote/dir/ ./local/dir
2.6.1.123. oc run
Run a particular image on the cluster
Example usage
Start a nginx pod
# Start a nginx pod
oc run nginx --image=nginx

# Start a hazelcast pod and let the container expose port 5701
oc run hazelcast --image=hazelcast/hazelcast --port=5701

# Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container
oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default"

# Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container
oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod"

# Dry run; print the corresponding API objects without creating them
oc run nginx --image=nginx --dry-run=client

# Start a nginx pod, but overload the spec with a partial set of values parsed from JSON
oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }'

# Start a busybox pod and keep it in the foreground, don't restart it if it exits
oc run -i -t busybox --image=busybox --restart=Never

# Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command
oc run nginx --image=nginx -- <arg1> <arg2> ... <argN>

# Start the nginx pod using a different command and custom arguments
oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
2.6.1.124. oc scale
Set a new size for a deployment, replica set, or replication controller
Example usage
Scale a replica set named 'foo' to 3
# Scale a replica set named 'foo' to 3
oc scale --replicas=3 rs/foo

# Scale a resource identified by type and name specified in "foo.yaml" to 3
oc scale --replicas=3 -f foo.yaml

# If the deployment named mysql's current size is 2, scale mysql to 3
oc scale --current-replicas=2 --replicas=3 deployment/mysql

# Scale multiple replication controllers
oc scale --replicas=5 rc/example1 rc/example2 rc/example3

# Scale stateful set named 'web' to 3
oc scale --replicas=3 statefulset/web
2.6.1.125. oc secrets link
Link secrets to a service account
Example usage
Add an image pull secret to a service account to automatically use it for pulling pod images
# Add an image pull secret to a service account to automatically use it for pulling pod images
oc secrets link serviceaccount-name pull-secret --for=pull

# Add an image pull secret to a service account to automatically use it for both pulling and pushing build images
oc secrets link builder builder-image-secret --for=pull,mount
2.6.1.126. oc secrets unlink
Detach secrets from a service account
Example usage
Unlink a secret currently associated with a service account
# Unlink a secret currently associated with a service account
oc secrets unlink serviceaccount-name secret-name another-secret-name...
2.6.1.127. oc set build-hook
Update a build hook on a build config
Example usage
Clear post-commit hook on a build config
# Clear post-commit hook on a build config
oc set build-hook bc/mybuild --post-commit --remove

# Set the post-commit hook to execute a test suite using a new entrypoint
oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh

# Set the post-commit hook to execute a shell script
oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh"
2.6.1.128. oc set build-secret
Update a build secret on a build config
Example usage
Clear the push secret on a build config
# Clear the push secret on a build config
oc set build-secret --push --remove bc/mybuild

# Set the pull secret on a build config
oc set build-secret --pull bc/mybuild mysecret

# Set the push and pull secret on a build config
oc set build-secret --push --pull bc/mybuild mysecret

# Set the source secret on a set of build configs matching a selector
oc set build-secret --source -l app=myapp gitsecret
2.6.1.129. oc set data
Update the data within a config map or secret
Example usage
Set the 'password' key of a secret
# Set the 'password' key of a secret
oc set data secret/foo password=this_is_secret

# Remove the 'password' key from a secret
oc set data secret/foo password-

# Update the 'haproxy.conf' key of a config map from a file on disk
oc set data configmap/bar --from-file=../haproxy.conf

# Update a secret with the contents of a directory, one key per file
oc set data secret/foo --from-file=secret-dir
2.6.1.130. oc set deployment-hook
Update a deployment hook on a deployment config
Example usage
Clear pre and post hooks on a deployment config
# Clear pre and post hooks on a deployment config
oc set deployment-hook dc/myapp --remove --pre --post

# Set the pre deployment hook to execute a db migration command for an application
# using the data volume from the application
oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh

# Set a mid deployment hook along with additional environment variables
oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh
2.6.1.131. oc set env
Update environment variables on a pod template
Example usage
Update deployment config 'myapp' with a new environment variable
# Update deployment config 'myapp' with a new environment variable
oc set env dc/myapp STORAGE_DIR=/local

# List the environment variables defined on a build config 'sample-build'
oc set env bc/sample-build --list

# List the environment variables defined on all pods
oc set env pods --all --list

# Output modified build config in YAML
oc set env bc/sample-build STORAGE_DIR=/data -o yaml

# Update all containers in all replication controllers in the project to have ENV=prod
oc set env rc --all ENV=prod

# Import environment from a secret
oc set env --from=secret/mysecret dc/myapp

# Import environment from a config map with a prefix
oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp

# Remove the environment variable ENV from container 'c1' in all deployment configs
oc set env dc --all --containers="c1" ENV-

# Remove the environment variable ENV from a deployment config definition on disk and
# update the deployment config on the server
oc set env -f dc.json ENV-

# Set some of the local shell environment into a deployment config on the server
env | grep RAILS_ | oc set env -e - dc/myapp
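Importing local shell variables works by piping `KEY=value` lines into `oc set env -e -`. The grep-selection half of that pipeline can be tried on its own, without a cluster (the `RAILS_ENV` variable below is a hypothetical stand-in):

```shell
# Select only RAILS_* variables, in the KEY=value form 'oc set env -e -' expects
export RAILS_ENV=production
selected=$(env | grep '^RAILS_')
echo "$selected"
```

Each matching line becomes one environment variable on the target resource's pod template.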
2.6.1.132. oc set image
Update the image of a pod template
Example usage
Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'.
# Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'
oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1

# Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'
oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag

# Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'
oc set image deployments,rc nginx=nginx:1.9.1 --all

# Update image of all containers of daemonset abc to 'nginx:1.9.1'
oc set image daemonset abc *=nginx:1.9.1

# Print result (in YAML format) of updating nginx container image from local file, without hitting the server
oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
2.6.1.133. oc set image-lookup
Change how images are resolved when deploying applications
Example usage
Print all of the image streams and whether they resolve local names
# Print all of the image streams and whether they resolve local names
oc set image-lookup

# Use local name lookup on image stream mysql
oc set image-lookup mysql

# Force a deployment to use local name lookup
oc set image-lookup deploy/mysql

# Show the current status of the deployment lookup
oc set image-lookup deploy/mysql --list

# Disable local name lookup on image stream mysql
oc set image-lookup mysql --enabled=false

# Set local name lookup on all image streams
oc set image-lookup --all
2.6.1.134. oc set probe
Update a probe on a pod template
Example usage
Clear both readiness and liveness probes off all containers
# Clear both readiness and liveness probes off all containers
oc set probe dc/myapp --remove --readiness --liveness

# Set an exec action as a liveness probe to run 'echo ok'
oc set probe dc/myapp --liveness -- echo ok

# Set a readiness probe to try to open a TCP socket on 3306
oc set probe rc/mysql --readiness --open-tcp=3306

# Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP
oc set probe dc/webapp --startup --get-url=http://:8080/healthz

# Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP
oc set probe dc/webapp --readiness --get-url=http://:8080/healthz

# Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod
oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats

# Set only the initial-delay-seconds field on all deployments
oc set probe dc --all --readiness --initial-delay-seconds=30
2.6.1.135. oc set resources
Update resource requests/limits on objects with pod templates
Example usage
Set a deployment's nginx container CPU limits to "200m" and memory to "512Mi"
# Set a deployment's nginx container CPU limits to "200m" and memory to "512Mi"
oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi

# Set the resource request and limits for all containers in nginx
oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi

# Remove the resource requests for resources on containers in nginx
oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0

# Print the result (in YAML format) of updating nginx container limits locally, without hitting the server
oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml
2.6.1.136. oc set route-backends
Update the backends for a route
Example usage
Print the backends on the route 'web'
# Print the backends on the route 'web'
oc set route-backends web

# Set two backend services on route 'web' with 2/3rds of traffic going to 'a'
oc set route-backends web a=2 b=1

# Increase the traffic percentage going to b by 10% relative to a
oc set route-backends web --adjust b=+10%

# Set traffic percentage going to b to 10% of the traffic going to a
oc set route-backends web --adjust b=10%

# Set weight of b to 10
oc set route-backends web --adjust b=10

# Set the weight of all backends to zero
oc set route-backends web --zero
2.6.1.137. oc set selector
Set the selector on a resource
Example usage
Set the labels and selector before creating a deployment/service pair.
# Set the labels and selector before creating a deployment/service pair
oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f -
oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -
2.6.1.138. oc set serviceaccount
Update the service account of a resource
Example usage
Set deployment nginx-deployment's service account to serviceaccount1
# Set deployment nginx-deployment's service account to serviceaccount1
oc set serviceaccount deployment nginx-deployment serviceaccount1

# Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server
oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml
2.6.1.139. oc set subject
Update the user, group, or service account in a role binding or cluster role binding
Example usage
Update a cluster role binding for serviceaccount1
# Update a cluster role binding for serviceaccount1
oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1

# Update a role binding for user1, user2, and group1
oc set subject rolebinding admin --user=user1 --user=user2 --group=group1

# Print the result (in YAML format) of updating role binding subjects locally, without hitting the server
oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml
2.6.1.140. oc set triggers
Update the triggers on one or more objects
Example usage
Print the triggers on the deployment config 'myapp'
# Print the triggers on the deployment config 'myapp'
oc set triggers dc/myapp

# Set all triggers to manual
oc set triggers dc/myapp --manual

# Enable all automatic triggers
oc set triggers dc/myapp --auto

# Reset the GitHub webhook on a build to a new, generated secret
oc set triggers bc/webapp --from-github
oc set triggers bc/webapp --from-webhook

# Remove all triggers
oc set triggers bc/webapp --remove-all

# Stop triggering on config change
oc set triggers dc/myapp --from-config --remove

# Add an image trigger to a build config
oc set triggers bc/webapp --from-image=namespace1/image:latest

# Add an image trigger to a stateful set on the main container
oc set triggers statefulset/db --from-image=namespace1/image:latest -c main
2.6.1.141. oc set volumes
Update volumes on a pod template
Example usage
List volumes defined on all deployment configs in the current project
# List volumes defined on all deployment configs in the current project
oc set volume dc --all

# Add a new empty dir volume to deployment config (dc) 'myapp' mounted under
# /var/lib/myapp
oc set volume dc/myapp --add --mount-path=/var/lib/myapp

# Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1'
oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite

# Remove volume 'v1' from deployment config 'myapp'
oc set volume dc/myapp --remove --name=v1

# Create a new persistent volume claim that overwrites an existing volume 'v1'
oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite

# Change the mount point for volume 'v1' to /data
oc set volume dc/myapp --add --name=v1 -m /data --overwrite

# Modify the deployment config by removing volume mount "v1" from container "c1"
# (and by removing the volume "v1" if no other containers have volume mounts that reference it)
oc set volume dc/myapp --remove --name=v1 --containers=c1

# Add new volume based on a more complex volume source (AWS EBS, GCE PD,
# Ceph, Gluster, NFS, ISCSI, ...)
oc set volume dc/myapp --add -m /data --source=<json-string>
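The `--source=<json-string>` placeholder in the last example takes a Kubernetes volume-source object in JSON. As a hedged illustration (the NFS server name and export path below are hypothetical), an NFS source would be:

```json
{"nfs": {"server": "nfs.example.com", "path": "/exports/data"}}
```

Passed as a single-quoted argument, e.g. `--source='{"nfs": {"server": "nfs.example.com", "path": "/exports/data"}}'`, it is stored verbatim as the volume definition on the pod template.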
2.6.1.142. oc start-build
Start a new build
Example usage
# Starts build from build config "hello-world"
oc start-build hello-world

# Starts build from a previous build "hello-world-1"
oc start-build --from-build=hello-world-1

# Use the contents of a directory as build input
oc start-build hello-world --from-dir=src/

# Send the contents of a Git repository to the server from tag 'v2'
oc start-build hello-world --from-repo=../hello-world --commit=v2

# Start a new build for build config "hello-world" and watch the logs until the build
# completes or fails
oc start-build hello-world --follow

# Start a new build for build config "hello-world" and wait until the build completes. It
# exits with a non-zero return code if the build fails
oc start-build hello-world --wait
2.6.1.143. oc status
Show an overview of the current project
Example usage
# See an overview of the current project
oc status

# Export the overview of the current project in an svg file
oc status -o dot | dot -T svg -o project.svg

# See an overview of the current project including details for any identified issues
oc status --suggest
2.6.1.144. oc tag
Tag existing images into image streams
Example usage
# Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby' with tag 'tip'
oc tag openshift/ruby:2.0 yourproject/ruby:tip

# Tag a specific image
oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip

# Tag an external container image
oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip

# Tag an external container image and request pullthrough for it
oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local

# Tag an external container image and include the full manifest list
oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal

# Remove the specified spec tag from an image stream
oc tag openshift/origin-control-plane:latest -d
2.6.1.145. oc version
Print the client and server version information
Example usage
# Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context
oc version

# Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format
oc version --output json

# Print the OpenShift client version information for the current context
oc version --client
2.6.1.146. oc wait
Experimental: Wait for a specific condition on one or many resources
Example usage
# Wait for the pod "busybox1" to contain the status condition of type "Ready"
oc wait --for=condition=Ready pod/busybox1

# The default value of status condition is true; you can wait for other targets
# after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)
oc wait --for=condition=Ready=false pod/busybox1

# Wait for the pod "busybox1" to contain the status phase to be "Running"
oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1

# Wait for pod "busybox1" to be Ready
oc wait --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' pod/busybox1

# Wait for the service "loadbalancer" to have ingress
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer

# Wait for the secret "busybox1" to be created, with a timeout of 30s
oc create secret generic busybox1
oc wait --for=create secret/busybox1 --timeout=30s

# Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command
oc delete pod/busybox1
oc wait --for=delete pod/busybox1 --timeout=60s
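Conceptually, `oc wait` is a poll-until-true loop with a deadline. A plain-shell sketch of the same idea, for contexts where `oc wait` is not available; `wait_for` and its arguments are hypothetical names, not part of `oc`:

```shell
# Retry a probe command until it succeeds or the timeout (in seconds) elapses.
# Any command can serve as the probe, e.g.: wait_for 30 oc get secret/busybox1
wait_for() {
  timeout=$1; shift
  elapsed=0
  until "$@"; do
    [ "$elapsed" -ge "$timeout" ] && return 1   # deadline exceeded
    sleep 1
    elapsed=$((elapsed + 1))
  done
}
```

Unlike this one-second polling sketch, `oc wait` watches the API server for changes, so it reacts immediately rather than on the next poll tick.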
2.6.1.147. oc whoami
Return information about the current session
Example usage
# Display the currently authenticated user
oc whoami
2.7. OpenShift CLI administrator command reference
This reference provides descriptions and example commands for OpenShift CLI (oc) administrator commands. You must have cluster-admin or equivalent permissions to use these commands.
For developer commands, see the OpenShift CLI developer command reference.
Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command.
2.7.1. OpenShift CLI (oc) administrator commands
2.7.1.1. oc adm build-chain
Output the inputs and dependencies of your builds
Example usage
# Build the dependency tree for the 'latest' tag in <image-stream>
oc adm build-chain <image-stream>

# Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility
oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg

# Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace
oc adm build-chain <image-stream> -n test --all
2.7.1.2. oc adm catalog mirror
Mirror an operator-registry catalog
Example usage
# Mirror an operator-registry image and its contents to a registry
oc adm catalog mirror quay.io/my/image:latest myregistry.com

# Mirror an operator-registry image and its contents to a particular namespace in a registry
oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace

# Mirror to an airgapped registry by first mirroring to files
oc adm catalog mirror quay.io/my/image:latest file:///local/index
oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com

# Configure a cluster to use a mirrored registry
oc apply -f manifests/imageDigestMirrorSet.yaml

# Edit the mirroring mappings and mirror with "oc image mirror" manually
oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com
oc image mirror -f manifests/mapping.txt

# Delete all ImageDigestMirrorSets generated by oc adm catalog mirror
oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true
2.7.1.3. oc adm certificate approve
Approve a certificate signing request
Example usage
# Approve CSR 'csr-sqgzp'
oc adm certificate approve csr-sqgzp
2.7.1.4. oc adm certificate deny
Deny a certificate signing request
Example usage
# Deny CSR 'csr-sqgzp'
oc adm certificate deny csr-sqgzp
2.7.1.5. oc adm copy-to-node
Copy specified files to the node
Example usage
# Copy a new bootstrap kubeconfig file to node-0
oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0
2.7.1.6. oc adm cordon
Mark node as unschedulable
Example usage
# Mark node "foo" as unschedulable
oc adm cordon foo
2.7.1.7. oc adm create-bootstrap-project-template
Create a bootstrap project template
Example usage
# Output a bootstrap project template in YAML format to stdout
oc adm create-bootstrap-project-template -o yaml
2.7.1.8. oc adm create-error-template
Create an error page template
Example usage
# Output a template for the error page to stdout
oc adm create-error-template
2.7.1.9. oc adm create-login-template
Create a login template
Example usage
# Output a template for the login page to stdout
oc adm create-login-template
2.7.1.10. oc adm create-provider-selection-template
Create a provider selection template
Example usage
# Output a template for the provider selection page to stdout
oc adm create-provider-selection-template
2.7.1.11. oc adm drain
Drain node in preparation for maintenance
Example usage
# Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it
oc adm drain foo --force

# As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes
oc adm drain foo --grace-period=900
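In practice, drain is usually bracketed by cordon and uncordon in a maintenance workflow. A sketch combining the three commands; the node name and the `DRY_RUN` guard are illustrative additions (the script defaults to printing the commands rather than running them against a cluster):

```shell
#!/bin/sh
# Illustrative node-maintenance sequence. With DRY_RUN=1 (the default here),
# each command is printed instead of executed; set DRY_RUN=0 to run for real.
set -e
NODE="${NODE:-node-0}"       # placeholder node name
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run oc adm cordon "$NODE"                                        # stop new pods scheduling here
run oc adm drain "$NODE" --ignore-daemonsets --grace-period=900  # evict workloads, 15 min grace
# ...perform maintenance (reboot, OS update, etc.)...
run oc adm uncordon "$NODE"                                      # return the node to service
```

`--ignore-daemonsets` is commonly needed because daemon set pods are recreated on the node immediately and would otherwise block the drain.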
2.7.1.12. oc adm groups add-users
Add users to a group
Example usage
# Add user1 and user2 to my-group
oc adm groups add-users my-group user1 user2
2.7.1.13. oc adm groups new
Create a new group
Example usage
# Add a group with no users
oc adm groups new my-group

# Add a group with two users
oc adm groups new my-group user1 user2

# Add a group with one user and shorter output
oc adm groups new my-group user1 -o name
2.7.1.14. oc adm groups prune
Remove old OpenShift groups referencing missing records from an external provider
Example usage
# Prune all orphaned groups
oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Prune all orphaned groups except the ones from the denylist file
oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Prune all orphaned groups from a list of specific groups specified in an allowlist file
oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Prune all orphaned groups from a list of specific groups specified in a list
oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm
2.7.1.15. oc adm groups remove-users
Remove users from a group
Example usage
# Remove user1 and user2 from my-group
oc adm groups remove-users my-group user1 user2
2.7.1.16. oc adm groups sync
Sync OpenShift groups with records from an external provider
Example usage
# Sync all groups with an LDAP server
oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Sync all groups except the ones from the blacklist file with an LDAP server
oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Sync specific groups specified in an allowlist file with an LDAP server
oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm

# Sync all OpenShift groups that have been synced previously with an LDAP server
oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Sync specific OpenShift groups if they have been synced previously with an LDAP server
oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm
2.7.1.17. oc adm inspect
Collect debugging data for a given resource
Example usage
# Collect debugging data for the "openshift-apiserver" clusteroperator
oc adm inspect clusteroperator/openshift-apiserver

# Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators
oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver

# Collect debugging data for all clusteroperators
oc adm inspect clusteroperator

# Collect debugging data for all clusteroperators and clusterversions
oc adm inspect clusteroperators,clusterversions
2.7.1.18. oc adm migrate icsp
Update imagecontentsourcepolicy file(s) to imagedigestmirrorset file(s)
Example usage
# Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory
oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir
2.7.1.19. oc adm migrate template-instances
Update template instances to point to the latest group-version-kinds
Example usage
# Perform a dry-run of updating all objects
oc adm migrate template-instances

# To actually perform the update, the confirm flag must be appended
oc adm migrate template-instances --confirm
2.7.1.20. oc adm must-gather
Launch a new instance of a pod for gathering debug information
Example usage
# Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand>
oc adm must-gather

# Gather information with a specific local folder to copy to
oc adm must-gather --dest-dir=/local/directory

# Gather audit information
oc adm must-gather -- /usr/bin/gather_audit_logs

# Gather information using multiple plug-in images
oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather

# Gather information using a specific image stream plug-in
oc adm must-gather --image-stream=openshift/must-gather:latest

# Gather information using a specific image, command, and pod directory
oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh
2.7.1.21. oc adm new-project
Create a new project
Example usage
# Create a new project using a node selector
oc adm new-project myproject --node-selector='type=user-node,region=east'
2.7.1.22. oc adm node-image create
Create an ISO image for booting the nodes to be added to the target cluster
Example usage
# Create the ISO image and download it in the current folder
oc adm node-image create

# Use a different assets folder
oc adm node-image create --dir=/tmp/assets

# Specify a custom image name
oc adm node-image create -o=my-node.iso

# In place of an ISO, creates files that can be used for PXE boot
oc adm node-image create --pxe

# Create an ISO to add a single node without using the configuration file
oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb

# Create an ISO to add a single node with a root device hint and without
# using the configuration file
oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda
2.7.1.23. oc adm node-image monitor
Monitor new nodes being added to an OpenShift cluster
Example usage
# Monitor a single node being added to a cluster
oc adm node-image monitor --ip-addresses 192.168.111.83

# Monitor multiple nodes being added to a cluster by separating each
# IP address with a comma
oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84
2.7.1.24. oc adm node-logs
Display and filter node logs
Example usage
# Show kubelet logs from all control plane nodes
oc adm node-logs --role master -u kubelet

# See what logs are available in control plane nodes in /var/log
oc adm node-logs --role master --path=/

# Display cron log file from all control plane nodes
oc adm node-logs --role master --path=cron
2.7.1.25. oc adm ocp-certificates monitor-certificates
Watch platform certificates
Example usage
# Watch platform certificates
oc adm ocp-certificates monitor-certificates
2.7.1.26. oc adm ocp-certificates regenerate-leaf
Regenerate client and serving certificates of an OpenShift cluster
Example usage
# Regenerate a leaf certificate contained in a particular secret
oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key
2.7.1.27. oc adm ocp-certificates regenerate-machine-config-server-serving-cert
Regenerate the machine config operator certificates in an OpenShift cluster
Example usage
# Regenerate the MCO certs without modifying user-data secrets
oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false

# Update the user-data secrets to use new MCS certs
oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server
2.7.1.28. oc adm ocp-certificates regenerate-top-level
Regenerate the top level certificates in an OpenShift cluster
Example usage
# Regenerate the signing certificate contained in a particular secret
oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key
2.7.1.29. oc adm ocp-certificates remove-old-trust
Remove old CAs from ConfigMaps representing platform trust bundles in an OpenShift cluster
Example usage
# Remove a trust bundle contained in a particular config map
oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z

# Remove only CA certificates created before a certain date from all trust bundles
oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z
2.7.1.30. oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server
Update user-data secrets in an OpenShift cluster to use updated MCO certs
Example usage
# Regenerate the MCO certs without modifying user-data secrets
oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false

# Update the user-data secrets to use new MCS certs
oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server
2.7.1.31. oc adm policy add-cluster-role-to-group
Add a role to groups for all projects in the cluster
Example usage
# Add the 'cluster-admin' cluster role to the 'cluster-admins' group
oc adm policy add-cluster-role-to-group cluster-admin cluster-admins
2.7.1.32. oc adm policy add-cluster-role-to-user
Add a role to users for all projects in the cluster
Example usage
# Add the 'system:build-strategy-docker' cluster role to the 'devuser' user
oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser
2.7.1.33. oc adm policy add-role-to-user
Add a role to users or service accounts for the current project
Example usage
# Add the 'view' role to user1 for the current project
oc adm policy add-role-to-user view user1

# Add the 'edit' role to serviceaccount1 for the current project
oc adm policy add-role-to-user edit -z serviceaccount1
2.7.1.34. oc adm policy add-scc-to-group
Add a security context constraint to groups
Example usage
# Add the 'restricted' security context constraint to group1 and group2
oc adm policy add-scc-to-group restricted group1 group2
2.7.1.35. oc adm policy add-scc-to-user
Add a security context constraint to users or a service account
Example usage
# Add the 'restricted' security context constraint to user1 and user2
oc adm policy add-scc-to-user restricted user1 user2

# Add the 'privileged' security context constraint to serviceaccount1 in the current namespace
oc adm policy add-scc-to-user privileged -z serviceaccount1
2.7.1.36. oc adm policy remove-cluster-role-from-group
Remove a role from groups for all projects in the cluster
Example usage
# Remove the 'cluster-admin' cluster role from the 'cluster-admins' group
oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins
2.7.1.37. oc adm policy remove-cluster-role-from-user
Remove a role from users for all projects in the cluster
Example usage
# Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user
oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser
2.7.1.38. oc adm policy scc-review
Check which service account can create a pod
Example usage
# Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml
# Service Account specified in myresource.yaml file is ignored
oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml

# Check whether service account system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml
oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml

# Check whether the service account specified in my_resource_with_sa.yaml can admit the pod
oc adm policy scc-review -f my_resource_with_sa.yaml

# Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml
oc adm policy scc-review -f myresource_with_no_sa.yaml
2.7.1.39. oc adm policy scc-subject-review
Check whether a user or a service account can create a pod
Example usage
# Check whether user bob can create a pod specified in myresource.yaml
oc adm policy scc-subject-review -u bob -f myresource.yaml

# Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml
oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml

# Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod
oc adm policy scc-subject-review -f myresourcewithsa.yaml
2.7.1.40. oc adm prune builds
Remove old completed and failed builds
Example usage
# Dry run deleting older completed and failed builds and also including
# all builds whose associated build config no longer exists
oc adm prune builds --orphans

# To actually perform the prune operation, the confirm flag must be appended
oc adm prune builds --orphans --confirm
2.7.1.41. oc adm prune deployments
Remove old completed and failed deployment configs
Example usage
# Dry run deleting all but the last complete deployment for every deployment config
oc adm prune deployments --keep-complete=1

# To actually perform the prune operation, the confirm flag must be appended
oc adm prune deployments --keep-complete=1 --confirm
2.7.1.42. oc adm prune groups
Remove old OpenShift groups referencing missing records from an external provider
Example usage
# Prune all orphaned groups
oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Prune all orphaned groups except the ones from the denylist file
oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Prune all orphaned groups from a list of specific groups specified in an allowlist file
oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm

# Prune all orphaned groups from a list of specific groups specified in a list
oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm
2.7.1.43. oc adm prune images
Remove unreferenced images
Example usage
# See what the prune command would delete if only images and their referrers were more than an hour old
# and obsoleted by 3 newer revisions under the same tag were considered
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m

# To actually perform the prune operation, the confirm flag must be appended
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm

# See what the prune command would delete if we are interested in removing images
# exceeding currently set limit ranges ('openshift.io/Image')
oc adm prune images --prune-over-size-limit

# To actually perform the prune operation, the confirm flag must be appended
oc adm prune images --prune-over-size-limit --confirm

# Force the insecure HTTP protocol with the particular registry host name
oc adm prune images --registry-url=http://registry.example.org --confirm

# Force a secure connection with a custom certificate authority to the particular registry host name
oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm
2.7.1.44. oc adm prune renderedmachineconfigs
Prunes rendered MachineConfigs in an OpenShift cluster
Example usage
# See what the prune command would delete if run with no options
oc adm prune renderedmachineconfigs

# To actually perform the prune operation, the confirm flag must be appended
oc adm prune renderedmachineconfigs --confirm

# See what the prune command would delete if run on the worker MachineConfigPool
oc adm prune renderedmachineconfigs --pool-name=worker

# Prunes 10 oldest rendered MachineConfigs in the cluster
oc adm prune renderedmachineconfigs --count=10 --confirm

# Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool
oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm
2.7.1.45. oc adm prune renderedmachineconfigs list
Lists rendered MachineConfigs in an OpenShift cluster
Example usage
# List all rendered MachineConfigs for the worker MachineConfigPool in the cluster
oc adm prune renderedmachineconfigs list --pool-name=worker

# List all rendered MachineConfigs in use by the cluster's MachineConfigPools
oc adm prune renderedmachineconfigs list --in-use
2.7.1.46. oc adm reboot-machine-config-pool
Initiate reboot of the specified MachineConfigPool
Example usage
# Reboot all MachineConfigPools
oc adm reboot-machine-config-pool mcp/worker mcp/master

# Reboot all MachineConfigPools that inherit from worker. This includes all custom MachineConfigPools and infra.
oc adm reboot-machine-config-pool mcp/worker

# Reboot masters
oc adm reboot-machine-config-pool mcp/master
2.7.1.47. oc adm release extract
Extract the contents of an update payload to disk
Example usage
# Use git to check out the source code for the current cluster release to DIR
oc adm release extract --git=DIR

# Extract cloud credential requests for AWS
oc adm release extract --credentials-requests --cloud=aws

# Use git to check out the source code for the current cluster release to DIR from linux/s390x image
# Note: Wildcard filter is not supported; pass a single os/arch to extract
oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x
2.7.1.48. oc adm release info
Display information about a release
Example usage
# Show information about the cluster's current release
oc adm release info

# Show the source code that comprises a release
oc adm release info 4.11.2 --commit-urls

# Show the source code difference between two releases
oc adm release info 4.11.0 4.11.2 --commits

# Show where the images referenced by the release are located
oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs

# Show information about linux/s390x image
# Note: Wildcard filter is not supported; pass a single os/arch to extract
oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x
2.7.1.49. oc adm release mirror
Mirror a release to a different image registry location
Example usage
# Perform a dry run showing what would be mirrored, including the mirror objects
oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \
  --release-image-signature-to-dir /tmp/releases --dry-run

# Mirror a release into the current directory
oc adm release mirror 4.11.0 --to file://openshift/release \
  --release-image-signature-to-dir /tmp/releases

# Mirror a release to another directory in the default location
oc adm release mirror 4.11.0 --to-dir /tmp/releases

# Upload a release from the current directory to another server
oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \
  --release-image-signature-to-dir /tmp/releases

# Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster
oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \
  --to=registry.example.com/your/repository --apply-release-image-signature
2.7.1.50. oc adm release new
Create a new OpenShift release
Example usage
# Create a release from the latest origin images and push to a DockerHub repository
oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest

# Create a new release with updated metadata from a previous release
oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \
  --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest

# Create a new release and override a single image
oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \
  cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest

# Run a verification pass to ensure the release can be reproduced
oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11
2.7.1.51. oc adm restart-kubelet
Restart kubelet on the specified nodes
Example usage
# Restart all the nodes, 10% at a time
oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig

# Restart all the nodes, 20 nodes at a time
oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig

# Restart all the nodes, 15% at a time
oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig

# Restart all the masters at the same time
oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig
2.7.1.52. oc adm taint
Update the taints on one or more nodes
Example usage
# Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'
# If a taint with that key and effect already exists, its value is replaced as specified
oc adm taint nodes foo dedicated=special-user:NoSchedule

# Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists
oc adm taint nodes foo dedicated:NoSchedule-

# Remove from node 'foo' all the taints with key 'dedicated'
oc adm taint nodes foo dedicated-

# Add a taint with key 'dedicated' on nodes having label myLabel=X
oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule

# Add to node 'foo' a taint with key 'bar' and no value
oc adm taint nodes foo bar:NoSchedule
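The taint argument in the examples above follows the pattern key[=value]:effect, with a trailing `-` to remove the taint. A small parser sketch that just makes this syntax explicit; `parse_taint` is a hypothetical helper for illustration, not an `oc` feature:

```shell
# Split a taint spec of the form key[=value]:effect into its parts.
parse_taint() {
  spec=$1
  effect=${spec##*:}       # text after the last ':' is the effect
  kv=${spec%:*}            # key[=value] portion before the ':'
  key=${kv%%=*}
  value=${kv#*=}
  [ "$value" = "$kv" ] && value=""   # no '=' means the taint has no value
  echo "key=$key value=$value effect=$effect"
}
```

For `dedicated=special-user:NoSchedule` this yields key `dedicated`, value `special-user`, effect `NoSchedule`; for `bar:NoSchedule` the value is empty.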
2.7.1.53. oc adm top images
Show usage statistics for images
Example usage
# Show usage statistics for images
oc adm top images
2.7.1.54. oc adm top imagestreams
Show usage statistics for image streams
Example usage
# Show usage statistics for image streams
oc adm top imagestreams
2.7.1.55. oc adm top node
Display resource (CPU/memory) usage of nodes
Example usage
Show metrics for all nodes
# Show metrics for all nodes
oc adm top node

# Show metrics for a given node
oc adm top node NODE_NAME
2.7.1.56. oc adm top persistentvolumeclaims
Experimental: Show usage statistics for bound persistentvolumeclaims
Example usage
Show usage statistics for all the bound persistentvolumeclaims across the cluster
# Show usage statistics for all the bound persistentvolumeclaims across the cluster
oc adm top persistentvolumeclaims -A

# Show usage statistics for all the bound persistentvolumeclaims in a specific namespace
oc adm top persistentvolumeclaims -n default

# Show usage statistics for specific bound persistentvolumeclaims
oc adm top persistentvolumeclaims database-pvc app-pvc -n default
2.7.1.57. oc adm top pod
Display resource (CPU/memory) usage of pods
Example usage
Show metrics for all pods in the default namespace
# Show metrics for all pods in the default namespace
oc adm top pod

# Show metrics for all pods in the given namespace
oc adm top pod --namespace=NAMESPACE

# Show metrics for a given pod and its containers
oc adm top pod POD_NAME --containers

# Show metrics for the pods defined by label name=myLabel
oc adm top pod -l name=myLabel
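Because the output of these `top` subcommands is plain tabular text, it can be post-processed with standard shell tools. A minimal sketch, finding the highest-CPU pod; the sample output below is illustrative, not from a real cluster, and in practice you would pipe the actual `oc adm top pod` output (minus the header line) into the same `sort` stage:

```shell
# Illustrative sample of `oc adm top pod` output (hypothetical pod names and values)
sample_output='NAME        CPU(cores)   MEMORY(bytes)
api-7f9c    250m         512Mi
web-5d4b    120m         256Mi
db-0        900m         1024Mi'

# Drop the header line, sort numerically (descending) on the CPU column --
# `sort -n` reads the leading digits of values like "900m" -- and keep the top entry
printf '%s\n' "$sample_output" | tail -n +2 | sort -k2 -rn | head -n 1
```

The same pipeline works for `oc adm top node` output, since it uses the same column layout for CPU.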
2.7.1.58. oc adm uncordon
Mark node as schedulable
Example usage
Mark node "foo" as schedulable
# Mark node "foo" as schedulable oc adm uncordon foo
2.7.1.59. oc adm upgrade
Upgrade a cluster or adjust the upgrade channel
Example usage
View the update status and available cluster updates
# View the update status and available cluster updates
oc adm upgrade

# Update to the latest version
oc adm upgrade --to-latest=true
2.7.1.60. oc adm verify-image-signature
Verify the image identity contained in the image signature
Example usage
Verify the image signature and identity using the local GPG keychain
# Verify the image signature and identity using the local GPG keychain
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
  --expected-identity=registry.local:5000/foo/bar:v1

# Verify the image signature and identity using the local GPG keychain and save the status
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
  --expected-identity=registry.local:5000/foo/bar:v1 --save

# Verify the image signature and identity via exposed registry route
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
  --expected-identity=registry.local:5000/foo/bar:v1 \
  --registry-url=docker-registry.foo.com

# Remove all signature verifications from the image
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all
2.7.1.61. oc adm wait-for-node-reboot
Wait for nodes to reboot after running oc adm reboot-machine-config-pool
Example usage
Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master'
# Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master'
oc adm wait-for-node-reboot nodes --all

# Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master'
oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master

# Wait for masters to complete a specific reboot
oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4
2.7.1.62. oc adm wait-for-stable-cluster
Wait for the platform operators to become stable
Example usage
Wait for all cluster operators to become stable
# Wait for all cluster operators to become stable
oc adm wait-for-stable-cluster

# Consider operators to be stable if they report as such for 5 minutes straight
oc adm wait-for-stable-cluster --minimum-stable-period 5m