CLI 工具
了解如何使用 Red Hat OpenShift Service on AWS 的命令行工具
摘要
第 1 章 Red Hat OpenShift Service on AWS CLI 工具概述
用户在处理 Red Hat OpenShift Service on AWS 时执行一系列操作,如下所示:
- 管理集群
- 构建、部署和管理应用程序
- 管理部署过程
- 创建和维护 Operator 目录
Red Hat OpenShift Service on AWS 提供了一组命令行界面(CLI)工具,通过允许用户从终端执行各种管理和开发操作来简化这些任务。这些工具提供简单的命令来管理应用,并与系统的每个组件交互。
1.1. CLI 工具列表
Red Hat OpenShift Service on AWS 中提供了以下一组 CLI 工具:
- OpenShift CLI (oc)：这是 Red Hat OpenShift Service on AWS 用户最常用的 CLI 工具。它帮助集群管理员和开发人员使用终端在 Red Hat OpenShift Service on AWS 上执行端到端操作。与 Web 控制台不同，它允许用户使用命令脚本直接处理项目源代码。
- Knative CLI (kn)：Knative (kn) CLI 工具提供简单直观的终端命令，可用于与 OpenShift Serverless 组件（如 Knative Serving 和 Eventing）交互。
- Pipelines CLI (tkn)：OpenShift Pipelines 是 Red Hat OpenShift Service on AWS 中的持续集成和持续交付 (CI/CD) 解决方案，其内部使用 Tekton。tkn CLI 工具提供简单直观的命令，以便使用终端与 OpenShift Pipelines 进行交互。
- opm CLI：opm CLI 工具可帮助 Operator 开发人员和集群管理员从终端创建和维护 Operator 目录。
- ROSA CLI (rosa)：使用 rosa CLI 创建、更新、管理和删除 Red Hat OpenShift Service on AWS 集群和资源。
第 2 章 OpenShift CLI (oc)
2.1. OpenShift CLI 入门
2.1.1. 关于 OpenShift CLI
使用 OpenShift CLI (oc),您可以从终端创建应用程序并管理 Red Hat OpenShift Service on AWS 项目。OpenShift CLI 在以下情况下是理想的选择:
- 直接使用项目源代码。
- 编写 Red Hat OpenShift Service on AWS 操作脚本。
- 在带宽资源受限且无法使用 Web 控制台的情况下管理项目。
2.1.2. 安装 OpenShift CLI
您可以通过下载二进制文件或使用 RPM 来安装 OpenShift CLI(oc)。
2.1.2.1. 安装 OpenShift CLI
您可以安装 OpenShift CLI (oc)来使用命令行界面与 Red Hat OpenShift Service on AWS 进行交互。您可以在 Linux、Windows 或 macOS 上安装 oc。
如果安装了旧版本的 oc，则无法使用 Red Hat OpenShift Service on AWS 中的所有命令。请下载并安装新版本的 oc。
2.1.2.1.1. 在 Linux 上安装 OpenShift CLI
您可以按照以下流程在 Linux 上安装 OpenShift CLI(oc)二进制文件。
流程
- 进入到红帽客户门户网站上的 Download OpenShift Container Platform 页面。
- 从 Product Variant 列表中选择构架。
- 从 Version 列表中选择适当的版本。
- 点 OpenShift v4 Linux Clients 条目旁的 Download Now 来保存文件。
解包存档：

$ tar xvf <file>

将 oc 二进制文件放到 PATH 中的目录中。要查看您的 PATH，请执行以下命令：

$ echo $PATH
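上述"放入 PATH 目录"的步骤可以用下面的片段演示（目录 $HOME/.local/bin 仅为示例；当前目录中没有实际的 oc 二进制文件时，脚本会创建一个占位文件，仅用于演示 PATH 的配置方式）：

```shell
# 将解包出来的 oc 二进制文件放入 PATH 中的目录（目录仅为示例）
mkdir -p "$HOME/.local/bin"
[ -f oc ] || printf '#!/bin/sh\necho placeholder\n' > oc   # 无实际二进制时的占位，仅为演示
mv oc "$HOME/.local/bin/"
chmod +x "$HOME/.local/bin/oc"
export PATH="$HOME/.local/bin:$PATH"
command -v oc
```

在实际安装中，更常见的做法是 sudo mv oc /usr/local/bin/，使所有用户都能使用该命令。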
验证
安装 OpenShift CLI 后，可以使用 oc 命令：

$ oc <command>
2.1.2.1.2. 在 Windows 上安装 OpenShift CLI
您可以按照以下流程在 Windows 上安装 OpenShift CLI(oc)二进制文件。
流程
- 导航到红帽客户门户网站上的 Download OpenShift Container Platform 页面。
- 从 Version 列表中选择适当的版本。
- 点 OpenShift v4 Windows Client 条目旁的 Download Now 来保存文件。
- 使用 ZIP 程序解压存档。
将 oc 二进制文件移到 PATH 中的目录中。要查看您的 PATH，请打开命令提示符并执行以下命令：

C:\> path
验证
安装 OpenShift CLI 后，可以使用 oc 命令：

C:\> oc <command>
2.1.2.1.3. 在 macOS 上安装 OpenShift CLI
您可以按照以下流程在 macOS 上安装 OpenShift CLI(oc)二进制文件。
流程
- 导航到红帽客户门户网站上的 Download OpenShift Container Platform 页面。
- 从 版本 下拉列表中选择适当的版本。
- 点 OpenShift v4 macOS Clients 条目旁的 Download Now 来保存文件。
- 解包和解压存档。
将 oc 二进制文件移到 PATH 中的目录中。要查看您的 PATH，请打开终端并执行以下命令：

$ echo $PATH
验证
使用 oc 命令验证安装：

$ oc <command>
2.1.2.2. 使用 Web 控制台安装 OpenShift CLI
您可以安装 OpenShift CLI (oc),以通过 Web 控制台与 Red Hat OpenShift Service on AWS 进行交互。您可以在 Linux、Windows 或 macOS 上安装 oc。
如果安装了旧版本的 oc,则无法使用 Red Hat OpenShift Service on AWS 中的所有命令。下载并安装新版本的 oc。
2.1.2.2.1. 使用 Web 控制台在 Linux 上安装 OpenShift CLI
您可以按照以下流程在 Linux 上安装 OpenShift CLI(oc)二进制文件。
流程
从 Web 控制台,单击 ?。
单击 Command Line Tools。
- 为您的 Linux 平台选择适当的 oc 二进制文件，然后点 Download oc for Linux。
- 保存该文件。
解包存档：

$ tar xvf <file>

将 oc 二进制文件移到 PATH 中的目录中。要查看您的 PATH，请执行以下命令：

$ echo $PATH
安装 OpenShift CLI 后,可以使用 oc 命令:
$ oc <command>
2.1.2.2.2. 使用 Web 控制台在 Windows 上安装 OpenShift CLI
您可以按照以下流程在 Windows 上安装 OpenShift CLI(oc)二进制文件。
流程
从 Web 控制台,单击 ?。
单击 Command Line Tools。
- 为 Windows 平台选择 oc 二进制文件，然后单击 Download oc for Windows for x86_64。
- 保存该文件。
- 使用 ZIP 程序解压存档。
将 oc 二进制文件移到 PATH 中的目录中。要查看您的 PATH，请打开命令提示符并执行以下命令：

C:\> path
安装 OpenShift CLI 后,可以使用 oc 命令:
C:\> oc <command>
2.1.2.2.3. 使用 Web 控制台在 macOS 上安装 OpenShift CLI
您可以按照以下流程在 macOS 上安装 OpenShift CLI(oc)二进制文件。
流程
从 Web 控制台,单击 ?。
单击 Command Line Tools。
为 macOS 平台选择 oc 二进制文件，然后单击 Download oc for Mac for x86_64。

注意：对于 macOS arm64，点 Download oc for ARM 64。
- 保存该文件。
- 解包和解压存档。
将 oc 二进制文件移到 PATH 中的目录中。要查看您的 PATH，请打开终端并执行以下命令：

$ echo $PATH
安装 OpenShift CLI 后,可以使用 oc 命令:
$ oc <command>
2.1.2.3. 使用 RPM 安装 OpenShift CLI
对于 Red Hat Enterprise Linux (RHEL),如果您的红帽帐户上已有有效的 Red Hat OpenShift Service on AWS 订阅,您可以将 OpenShift CLI (oc)安装为 RPM。
Red Hat Enterprise Linux (RHEL) 9 不支持使用 RPM 软件包安装 oc。对于 RHEL 9，您需要通过下载二进制文件来安装 oc。
先决条件
- 必须具有 root 或 sudo 权限。
流程
使用 Red Hat Subscription Manager 注册：

# subscription-manager register

获取最新的订阅数据：

# subscription-manager refresh

列出可用的订阅：

# subscription-manager list --available --matches '*OpenShift*'

在上一命令的输出中，找到 Red Hat OpenShift Service on AWS 订阅的池 ID，并把订阅附加到注册的系统：

# subscription-manager attach --pool=<pool_id>

启用 Red Hat OpenShift Service on AWS 4 所需的存储库：

# subscription-manager repos --enable="rhocp-4-for-rhel-8-x86_64-rpms"

安装 openshift-clients 软件包：

# yum install openshift-clients
验证
- 使用 oc 命令验证安装：
$ oc <command>
2.1.2.4. 使用 Homebrew 安装 OpenShift CLI
对于 macOS,您可以使用 Homebrew 软件包管理器安装 OpenShift CLI(oc)。
先决条件
- 已安装 Homebrew (brew)。
流程
运行以下命令来安装 openshift-cli 软件包:
$ brew install openshift-cli
验证
- 使用 oc 命令验证安装：
$ oc <command>
2.1.3. 登录到 OpenShift CLI
您可以登录到 OpenShift CLI(oc)以访问和管理集群。
先决条件
- 您必须有权访问 Red Hat OpenShift Service on AWS 集群。
- 已安装 OpenShift CLI (oc)。
要访问只能通过 HTTP 代理服务器访问的集群,可以设置 HTTP_PROXY、HTTPS_PROXY 和 NO_PROXY 变量。oc CLI 会使用这些环境变量以便所有与集群的通信都通过 HTTP 代理进行。
只有在使用 HTTPS 传输时,才会发送身份验证标头。
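以代理环境为例，可以在运行 oc 之前按如下方式设置这些环境变量（代理地址 proxy.example.com:3128 和 NO_PROXY 列表均为假设值，请替换为实际环境中的配置）：

```shell
# 设置 HTTP 代理环境变量，使 oc 的所有集群通信都经过代理
# 代理地址和 NO_PROXY 列表均为假设的示例值
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# NO_PROXY 中列出的主机和域不经过代理
export NO_PROXY=localhost,127.0.0.1,.cluster.local
echo "$HTTPS_PROXY"
```

之后在同一 shell 会话中运行的 oc 命令都会通过该代理与集群通信。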
流程
输入 oc login 命令并传递用户名：

$ oc login -u user1

提示时，请输入所需信息：
输出示例
Server [https://localhost:8443]: https://openshift.example.com:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://openshift.example.com:6443 (openshift)
Username: user1
Password:

Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

Welcome! See 'oc help' to get started.
如果登录到 web 控制台，您可以生成包含令牌和服务器信息的 oc login 命令。您可以使用该命令登录到 OpenShift CLI (oc)，而无需交互式提示。要生成该命令，请从 web 控制台右上角的用户名下拉菜单中选择 Copy login command。
您现在可以创建项目或执行其他命令来管理集群。
2.1.4. 使用 Web 浏览器登录 OpenShift CLI
您可以使用 Web 浏览器访问和管理集群来登录 OpenShift CLI (oc)。这可以使用户避免将其访问令牌插入到命令行中。
通过 Web 浏览器登录 CLI,在 localhost 上使用 HTTP (而非 HTTPS)运行服务器;在多用户工作站中请谨慎使用。
先决条件
- 您必须有权访问 Red Hat OpenShift Service on AWS 集群。
- 已安装 OpenShift CLI (oc)。
- 已安装浏览器。
流程
输入 oc login 命令，使用 --web 标志：

$ oc login <cluster_url> --web

另外，您可以指定服务器 URL 和回调端口。例如，oc login <cluster_url> --web --callback-port 8280 localhost:8443。
Web 浏览器会自动打开。如果没有，请点命令输出中的链接。如果没有指定 Red Hat OpenShift Service on AWS 服务器，oc 会尝试打开当前 oc 配置文件中指定的集群的 Web 控制台。如果没有 oc 配置，oc 会以交互方式提示输入服务器 URL。

输出示例

Opening login URL in the default browser: https://openshift.example.com
Opening in existing browser session.

- 如果有多个身份提供程序可用，请从中选择您要使用的身份提供程序。
- 在对应的浏览器字段中输入您的用户名和密码。登录后，浏览器会显示 access token received successfully; please return to your terminal。
- 在 CLI 中检查登录确认信息。
输出示例
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>
Web 控制台默认为前面会话中使用的配置集。要在 Administrator 和 Developer 配置集间切换,请登出 Red Hat OpenShift Service on AWS Web 控制台并清除缓存。
您现在可以创建项目或执行其他命令来管理集群。
2.1.5. 使用 OpenShift CLI
参阅以下部分以了解如何使用 CLI 完成常见任务。
2.1.5.1. 创建一个项目
使用 oc new-project 命令创建新项目：
$ oc new-project my-project
输出示例
Now using project "my-project" on server "https://openshift.example.com:6443".
2.1.5.2. 创建一个新的应用程序
使用 oc new-app 命令创建新应用程序：
$ oc new-app https://github.com/sclorg/cakephp-ex
输出示例
--> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php"
...
Run 'oc status' to view your app.
2.1.5.3. 查看 pod
使用 oc get pods 命令查看当前项目的 pod：
当您在 pod 中运行 oc 且没有指定命名空间时,默认使用 pod 的命名空间。
$ oc get pods -o wide
输出示例
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none>
cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none>
cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>
2.1.5.4. 查看 pod 日志
使用 oc logs 命令查看特定 pod 的日志：
$ oc logs cakephp-ex-1-deploy
输出示例
--> Scaling cakephp-ex-1 to 1
--> Success
2.1.5.5. 查看当前项目
使用 oc project 命令查看当前项目：
$ oc project
输出示例
Using project "my-project" on server "https://openshift.example.com:6443".
2.1.5.6. 查看当前项目的状态
使用 oc status 命令查看有关当前项目的信息,如服务、部署和构建配置。
$ oc status
输出示例
In project my-project on server https://openshift.example.com:6443
svc/cakephp-ex - 172.30.236.80 ports 8080, 8443
dc/cakephp-ex deploys istag/cakephp-ex:latest <-
bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2
deployment #1 deployed 2 minutes ago - 1 pod
3 infos identified, use 'oc status --suggest' to see details.
2.1.5.7. 列出支持的 API 资源
使用 oc api-resources 命令查看服务器上支持的 API 资源列表：
$ oc api-resources
输出示例
NAME SHORTNAMES APIGROUP NAMESPACED KIND
bindings true Binding
componentstatuses cs false ComponentStatus
configmaps cm true ConfigMap
...
2.1.6. 获得帮助
您可以使用以下方法获得 CLI 命令和 Red Hat OpenShift Service on AWS 资源的帮助:
- 使用 oc help 获取所有可用 CLI 命令的列表和描述。

  示例：获取 CLI 的常规帮助信息

  $ oc help

  输出示例

  OpenShift Client

  This client helps you develop, build, deploy, and run your applications on any
  OpenShift or Kubernetes compatible platform. It also includes the administrative
  commands for managing a cluster under the 'adm' subcommand.

  Usage:
    oc [flags]

  Basic Commands:
    login           Log in to a server
    new-project     Request a new project
    new-app         Create a new application

  ...

- 使用 --help 标志获取有关特定 CLI 命令的帮助信息。

  示例：获取 oc create 命令的帮助信息

  $ oc create --help

  输出示例

  Create a resource by filename or stdin

  JSON and YAML formats are accepted.

  Usage:
    oc create -f FILENAME [flags]

  ...

- 使用 oc explain 命令查看特定资源的描述信息和字段信息。

  示例：查看 Pod 资源的文档

  $ oc explain pods

  输出示例

  KIND:     Pod
  VERSION:  v1

  DESCRIPTION:
      Pod is a collection of containers that can run on a host. This resource is
      created by clients and scheduled onto hosts.

  FIELDS:
     apiVersion   <string>
       APIVersion defines the versioned schema of this representation of an
       object. Servers should convert recognized schemas to the latest internal
       value, and may reject unrecognized values. More info:
       https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

  ...
2.1.7. 注销 OpenShift CLI
您可以注销 OpenShift CLI 以结束当前会话。
使用 oc logout 命令：

$ oc logout

输出示例
Logged "user1" out on "https://openshift.example.com"
这将从服务器中删除已保存的身份验证令牌,并将其从配置文件中删除。
2.2. 配置 OpenShift CLI
2.2.1. 启用 tab 自动完成功能
您可以为 Bash 或 Zsh shell 启用 tab 自动完成功能。
2.2.1.1. 为 Bash 启用 tab 自动完成
安装 OpenShift CLI (oc)后,您可以启用 tab 自动完成功能,以便在按 Tab 键时自动完成 oc 命令或建议选项。以下流程为 Bash shell 启用 tab 自动完成功能。
先决条件
- 已安装 OpenShift CLI (oc)。
- 已安装软件包 bash-completion。
流程
将 Bash 补全代码保存到一个文件中：

$ oc completion bash > oc_bash_completion

将文件复制到 /etc/bash_completion.d/：

$ sudo cp oc_bash_completion /etc/bash_completion.d/

您也可以将文件保存到一个本地目录，并从您的 .bashrc 文件中 source 这个文件。
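"保存到本地目录"的方案可以按如下方式实现（目录 $HOME/.local/share/oc 仅为示例；实际环境中补全文件由上文的 oc completion bash 命令生成，此处在文件不存在时创建一个占位文件，仅用于演示）：

```shell
# 把补全文件保存到本地目录，并在 ~/.bashrc 中 source 它（目录仅为示例）
mkdir -p "$HOME/.local/share/oc"
# 实际环境中请先运行：oc completion bash > oc_bash_completion
[ -f oc_bash_completion ] || echo '# placeholder' > oc_bash_completion
cp oc_bash_completion "$HOME/.local/share/oc/"
# 避免重复追加同一行
grep -qxF 'source "$HOME/.local/share/oc/oc_bash_completion"' "$HOME/.bashrc" 2>/dev/null || \
  echo 'source "$HOME/.local/share/oc/oc_bash_completion"' >> "$HOME/.bashrc"
```

新开的 Bash 终端会在读取 .bashrc 时加载该补全脚本。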
开新终端时 tab 自动完成功能将被启用。
2.2.1.2. 为 Zsh 启用 tab 自动完成功能
安装 OpenShift CLI (oc)后,您可以启用 tab 自动完成功能,以便在按 Tab 键时自动完成 oc 命令或建议选项。以下流程为 Zsh shell 启用 tab 自动完成功能。
先决条件
- 已安装 OpenShift CLI (oc)。
流程
要在 .zshrc 文件中为 oc 添加 tab 自动完成功能，请运行以下命令：

$ cat >>~/.zshrc<<EOF
autoload -Uz compinit
compinit
if [ $commands[oc] ]; then
  source <(oc completion zsh)
  compdef _oc oc
fi
EOF
开新终端时 tab 自动完成功能将被启用。
2.2.2. 通过 oc CLI 访问 kubeconfig
您可以使用 oc CLI 登录到 OpenShift 集群,并从命令行获取用于访问集群的 kubeconfig 文件。
先决条件
- 您可以访问 Red Hat OpenShift Service on AWS Web 控制台或 API 服务器端点。
流程
运行以下命令登录到您的 OpenShift 集群：

$ oc login <api-server-url> -u <username> -p <password>

- 指定完整的 API 服务器 URL。例如：https://api.my-cluster.example.com:6443。
- 指定一个有效的用户名。例如：kubeadmin。
- 为指定的用户提供密码。例如，在集群安装过程中生成的 kubeadmin 密码。
运行以下命令，将集群配置保存到本地文件：

$ oc config view --raw > kubeconfig

运行以下命令，将 KUBECONFIG 环境变量设置为指向导出的文件：

$ export KUBECONFIG=./kubeconfig

运行以下命令，使用 oc 与 OpenShift 集群进行交互：

$ oc get nodes
如果您计划在不同的会话或机器间重复使用导出的 kubeconfig 文件，请安全地存储该文件，且不要将它提交到源代码控制系统。
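作为一种最基本的保护措施，可以将导出的文件限制为仅当前用户可读写（下例中先用 touch 创建文件仅为演示；实际文件由上文的 oc config view --raw 命令生成）：

```shell
# 将 kubeconfig 文件权限限制为仅属主可读写 (rw-------)
touch kubeconfig          # 仅为演示；实际文件由 oc config view --raw 生成
chmod 600 kubeconfig
ls -l kubeconfig
```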
2.3. oc 和 kubectl 命令的使用方法
Kubernetes 命令行界面(CLI)kubectl 可以用来对 Kubernetes 集群运行命令。因为 Red Hat OpenShift Service on AWS 是一个经过认证的 Kubernetes 发行版本,所以您可以使用 Red Hat OpenShift Service on AWS 提供的受支持的 kubectl 二进制文件,或使用 oc 二进制文件来获得扩展的功能。
2.3.1. oc 二进制文件
oc 二进制文件提供与 kubectl 二进制文件相同的功能,但它扩展至原生支持 Red Hat OpenShift Service on AWS 功能,包括:
- 对 Red Hat OpenShift Service on AWS 资源的完全支持

  DeploymentConfig、BuildConfig、Route、ImageStream 和 ImageStreamTag 对象等资源特定于 Red Hat OpenShift Service on AWS 发行版本，并基于标准 Kubernetes 原语构建。

- 身份验证
- 附加命令

  例如，借助附加命令 oc new-app 可以更轻松地使用现有源代码或预构建镜像来启动新的应用程序。同样，附加命令 oc new-project 让您可以更轻松地启动一个项目并切换到该项目作为您的默认项目。
如果安装了旧版本的 oc 二进制文件,则无法使用 Red Hat OpenShift Service on AWS 中的所有命令。如果需要最新的功能,您必须下载并安装与 Red Hat OpenShift Service on AWS 服务器版本对应的 oc 二进制文件的最新版本。
非安全 API 的更改至少会跨越两个次发行版本（例如 4.1 到 4.2 到 4.3），以便旧的 oc 二进制文件有时间更新。使用新功能可能需要较新的 oc 二进制文件。4.3 服务器可能会带有 4.2 版本的 oc 二进制文件无法使用的功能，而 4.3 的 oc 二进制文件可能会带有 4.2 服务器不支持的功能。
|  | X.Y (oc 客户端) | X.Y+N [1] (oc 客户端) |
|---|---|---|
| X.Y (服务器) | 完全兼容 | oc 客户端可能会提供与要访问的服务器不兼容的选项和功能 |
| X.Y+N [1] (服务器) | oc 客户端可能无法访问服务器的功能 | 完全兼容 |

[1] 其中 N 是一个大于或等于 1 的数字。
2.3.2. kubectl 二进制文件
提供 kubectl 二进制文件的目的是为来自标准 Kubernetes 环境的新 Red Hat OpenShift Service on AWS 用户支持现有工作流和脚本,或希望使用 kubectl CLI 的用户。kubectl 的现有用户可以继续使用二进制文件与 Kubernetes 原语交互,而无需更改 Red Hat OpenShift Service on AWS 集群。
您可以按照安装 OpenShift CLI 的步骤安装受支持的 kubectl 二进制文件。如果您下载二进制文件，kubectl 二进制文件会包括在存档中；如果使用 RPM 安装 CLI，则会一并安装 kubectl。
如需更多信息,请参阅 kubectl 文档。
2.4. 管理 CLI 配置集
CLI 配置文件允许您配置不同的配置集（上下文），以便与 OpenShift CLI (oc) 搭配使用。上下文由与别名 (nickname) 关联的 Red Hat OpenShift Service on AWS 服务器信息组成。
2.4.1. 关于 CLI 配置集间的切换
通过上下文，您可以在使用 CLI 时在多个 Red Hat OpenShift Service on AWS 服务器或多个用户之间轻松切换。别名 (nickname) 提供对上下文、用户凭证和集群详情的简短引用，从而让 CLI 配置更易于管理。用户第一次使用 oc CLI 登录后，Red Hat OpenShift Service on AWS 会创建一个 ~/.kube/config 文件（如果不存在）。随着更多身份验证和连接详情被提供给 CLI（在 oc login 操作过程中自动提供，或通过手动配置 CLI 配置集提供），更新的信息会存储在配置文件中：
CLI 配置文件
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://openshift1.example.com:8443
name: openshift1.example.com:8443
- cluster:
insecure-skip-tls-verify: true
server: https://openshift2.example.com:8443
name: openshift2.example.com:8443
contexts:
- context:
cluster: openshift1.example.com:8443
namespace: alice-project
user: alice/openshift1.example.com:8443
name: alice-project/openshift1.example.com:8443/alice
- context:
cluster: openshift1.example.com:8443
namespace: joe-project
user: alice/openshift1.example.com:8443
name: joe-project/openshift1/alice
current-context: joe-project/openshift1.example.com:8443/alice
kind: Config
preferences: {}
users:
- name: alice/openshift1.example.com:8443
user:
token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k
1. clusters 部分定义 Red Hat OpenShift Service on AWS 集群的连接详情，包括其 master 服务器的地址。在本例中，一个集群的别名为 openshift1.example.com:8443，另一个别名是 openshift2.example.com:8443。
2. contexts 部分定义了两个上下文：一个别名是 alice-project/openshift1.example.com:8443/alice，使用 alice-project 项目、openshift1.example.com:8443 集群以及 alice 用户；另一个别名是 joe-project/openshift1.example.com:8443/alice，使用 joe-project 项目、openshift1.example.com:8443 集群以及 alice 用户。
3. current-context 参数显示 joe-project/openshift1.example.com:8443/alice 上下文当前正在使用中，允许 alice 用户在 openshift1.example.com:8443 集群上的 joe-project 项目中工作。
4. users 部分定义用户凭证。在本例中，用户别名 alice/openshift1.example.com:8443 使用访问令牌。
CLI 可以支持在运行时加载并合并在一起的多个配置文件，以及从命令行指定的覆盖选项。登录后，您可以使用 oc status 或 oc project 命令验证您当前的环境：
验证当前工作环境
$ oc status
输出示例
In project Joe's Project (joe-project)
service database (172.30.43.12:5434 -> 3306)
database deploys docker.io/openshift/mysql-55-centos7:latest
#1 deployed 25 minutes ago - 1 pod
service frontend (172.30.159.137:5432 -> 8080)
frontend deploys origin-ruby-sample:latest <-
builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest
#1 deployed 22 minutes ago - 2 pods
To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'.
You can use 'oc get all' to see lists of each of the types described in this example.
列出当前项目
$ oc project
输出示例
Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443".
您可以再次运行 oc login 命令,并在互动过程中提供所需的信息,使用用户凭证和集群详情的任何其他组合登录。基于提供的信息构建上下文(如果尚不存在)。如果您已经登录,并希望切换到当前用户已有权访问的另一个项目,请使用 oc project 命令并输入项目名称:
$ oc project alice-project
输出示例
Now using project "alice-project" on server "https://openshift1.example.com:8443".
在任何时候,您可以使用 oc config view 命令查看当前的 CLI 配置,如输出中所示。其他 CLI 配置命令也可用于更高级的用法。
如果您可以访问管理员凭证，但不再以默认系统用户 system:admin 身份登录，只要该凭证仍存在于 CLI 配置文件中，您就可以随时以这个用户身份重新登录。以下命令登录并切换到默认项目：
$ oc login -u system:admin -n default
2.4.2. 手动配置 CLI 配置集
本节介绍 CLI 配置的更多高级用法。在大多数情况下,您可以使用 oc login 和 oc project 命令登录并在上下文和项目间切换。
如果要手动配置 CLI 配置文件,您可以使用 oc config 命令,而不是直接修改这些文件。oc config 命令包括很多有用的子命令来实现这一目的:
| 子命令 | 使用方法 |
|---|---|
| set-cluster | 在 CLI 配置文件中设置集群条目。如果引用的集群别名已存在，则指定的信息将合并到其中。 |
| set-context | 在 CLI 配置文件中设置上下文条目。如果引用的上下文别名已存在，则指定的信息将合并到其中。 |
| use-context | 使用指定的上下文别名设置当前上下文。 |
| set | 在 CLI 配置文件中设置单个值。 |
| unset | 在 CLI 配置文件中取消设置单个值。 |
| view | 显示当前正在使用的合并 CLI 配置，或显示指定 CLI 配置文件的内容。 |
用法示例
- 以使用访问令牌的用户身份登录。此令牌由 alice 用户使用：
$ oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0
- 查看自动创建的集群条目:
$ oc config view
输出示例
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://openshift1.example.com
name: openshift1-example-com
contexts:
- context:
cluster: openshift1-example-com
namespace: default
user: alice/openshift1-example-com
name: default/openshift1-example-com/alice
current-context: default/openshift1-example-com/alice
kind: Config
preferences: {}
users:
- name: alice/openshift1.example.com
user:
token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0
- 更新当前上下文以便用户登录到所需的命名空间:
$ oc config set-context `oc config current-context` --namespace=<project_name>
- 检查当前上下文,确认是否实施了更改:
$ oc whoami -c
所有后续 CLI 操作都使用新的上下文，除非通过 CLI 选项覆盖，或直至上下文被切换为止。
2.4.3. 载入和合并规则
发出 CLI 操作时，CLI 配置的加载和合并顺序遵循以下规则：
使用以下层次结构和合并规则从工作站检索 CLI 配置文件:
- 如果设置了 --config 选项，则只加载该文件。该标志只能设置一次，且不会发生合并。
- 如果设置了 $KUBECONFIG 环境变量，则会使用它。该变量可以是一个路径列表，这些路径对应的文件会被合并在一起。修改某个值时，会在定义该节的文件中进行修改。创建某个值时，会在存在的第一个文件中创建它。如果链中不存在任何文件，则会创建列表中的最后一个文件。
- 否则，将使用 ~/.kube/config 文件，且不会发生合并。
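$KUBECONFIG 规则中的"路径列表"可以用下面的片段演示（文件名 config-dev 和 config-prod 仅为假设的示例）：

```shell
# KUBECONFIG 可以是以冒号分隔的路径列表；oc 会按顺序合并这些文件
# 两个文件名均为假设的示例
export KUBECONFIG="$HOME/.kube/config-dev:$HOME/.kube/config-prod"
echo "$KUBECONFIG"
```

按上述合并规则，修改某个值时更改会写入定义该节的那个文件；创建新值时则写入列表中第一个存在的文件。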
使用的上下文根据以下顺序中的第一个匹配项决定：

- --context 选项的值。
- CLI 配置文件中的 current-context 值。
- 此阶段允许空值。
确定要使用的用户和集群。此时，您可能有也可能没有上下文；用户和集群基于以下顺序中的第一个匹配项构建，该顺序会运行两次：一次用于用户，一次用于集群：

- --user 选项的值用于用户名，--cluster 选项的值用于集群名称。
- 如果存在 --context 选项，则使用该上下文中的值。
- 此阶段允许空值。
确定要使用的实际集群信息。此时，您可能有也可能没有集群信息。集群信息的每个部分根据以下顺序中的第一个匹配项构建：

- 以下任一命令行选项的值：
  - --server
  - --api-version
  - --certificate-authority
  - --insecure-skip-tls-verify
- 如果存在集群信息以及该属性的值，则使用它。
- 如果没有服务器位置，则会出现错误。
确定要使用的实际用户信息。用户信息使用与集群信息相同的规则构建，但每个用户只能使用一种身份验证技术；相互冲突的技术会导致操作失败。命令行选项优先于配置文件中的值。有效的命令行选项包括：

- --auth-path
- --client-certificate
- --client-key
- --token

对于仍缺失的任何信息，将使用默认值，并提示提供其他信息。
2.5. 使用插件扩展 OpenShift CLI
您可以针对默认的 oc 命令编写并安装插件,从而可以使用 OpenShift CLI 执行新的和更复杂的任务。
2.5.1. 编写 CLI 插件
您可以使用任何可以编写命令行命令的编程语言或脚本为 OpenShift CLI 编写插件。请注意,您无法使用插件来覆盖现有的 oc 命令。
流程
此过程创建一个简单的 Bash 插件，它的功能是在执行 oc foo 命令时将消息输出到终端。
创建一个名为 oc-foo 的文件。

在命名插件文件时，请记住以下几点：

- 该文件必须以 oc- 或 kubectl- 开头，才能被识别为插件。
- 文件名决定了调用该插件的命令。例如，可以通过 oc foo bar 命令调用文件名为 oc-foo-bar 的插件。如果希望命令中包含破折号，也可以使用下划线。例如，可以通过 oc foo-bar 命令调用文件名为 oc-foo_bar 的插件。
将以下内容添加到该文件中：

#!/bin/bash

# optional argument handling
if [[ "$1" == "version" ]]
then
    echo "1.0.0"
    exit 0
fi

# optional argument handling
if [[ "$1" == "config" ]]
then
    echo $KUBECONFIG
    exit 0
fi

echo "I am a plugin named kubectl-foo"
为 OpenShift CLI 安装此插件后,可以使用 oc foo 命令调用。
2.5.2. 安装和使用 CLI 插件
为 OpenShift CLI 编写自定义插件后，您必须在使用前安装插件。
先决条件
- 已安装 oc CLI 工具。
- 您必须具有以 oc- 或 kubectl- 开头的 CLI 插件文件。
流程
如有必要，将插件文件更新为可执行：

$ chmod +x <plugin_file>

将文件放在 PATH 中的任何位置，例如 /usr/local/bin/：

$ sudo mv <plugin_file> /usr/local/bin/.

运行 oc plugin list 以确保列出了插件：

$ oc plugin list

输出示例

The following compatible plugins are available:

/usr/local/bin/<plugin_file>

如果您的插件没有被列出，请验证文件是否以 oc- 或 kubectl- 开头、是否可执行，且位于 PATH 中。

调用插件引入的新命令或选项。例如，如果您从 Sample plug-in repository 构建并安装了 kubectl-ns 插件，则可以使用以下命令查看当前命名空间：

$ oc ns

请注意，调用插件的命令取决于插件文件名。例如，文件名为 oc-foo-bar 的插件会被 oc foo bar 命令调用。
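上述"创建插件文件并赋予可执行权限"的步骤可以用下面的最小示例演示（插件名 oc-hello 与输出内容均为假设；示例直接执行该文件以验证它可以运行，实际使用时需将其放入 PATH 并通过 oc hello 调用）：

```shell
# 创建一个最小的插件文件（名称 oc-hello 仅为示例）
cat > oc-hello <<'EOF'
#!/bin/bash
echo "hello from plugin"
EOF

# 赋予可执行权限并直接运行以验证
chmod +x oc-hello
./oc-hello
```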
2.6. OpenShift CLI 开发人员命令参考
本参考提供了 OpenShift CLI(oc)开发人员命令的描述和示例命令。
运行 oc help 来列出所有命令或运行 oc <command> --help 获取特定命令的附加详情。
2.6.1. OpenShift CLI(oc)开发人员命令
2.6.1.1. oc annotate
更新资源上的注解
用法示例
# Update pod 'foo' with the annotation 'description' and the value 'my frontend'
# If the same annotation is set multiple times, only the last value will be applied
oc annotate pods foo description='my frontend'
# Update a pod identified by type and name in "pod.json"
oc annotate -f pod.json description='my frontend'
# Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value
oc annotate --overwrite pods foo description='my frontend running nginx'
# Update all pods in the namespace
oc annotate pods --all description='my frontend running nginx'
# Update pod 'foo' only if the resource is unchanged from version 1
oc annotate pods foo description='my frontend running nginx' --resource-version=1
# Update pod 'foo' by removing an annotation named 'description' if it exists
# Does not require the --overwrite flag
oc annotate pods foo description-
2.6.1.2. oc api-resources
在服务器上显示支持的 API 资源
用法示例
# Print the supported API resources
oc api-resources
# Print the supported API resources with more information
oc api-resources -o wide
# Print the supported API resources sorted by a column
oc api-resources --sort-by=name
# Print the supported namespaced resources
oc api-resources --namespaced=true
# Print the supported non-namespaced resources
oc api-resources --namespaced=false
# Print the supported API resources with a specific APIGroup
oc api-resources --api-group=rbac.authorization.k8s.io
2.6.1.3. oc api-versions
以"group/version"的形式输出服务器上支持的 API 版本。
用法示例
# Print the supported API versions
oc api-versions
2.6.1.4. oc apply
通过文件名或 stdin 将配置应用到资源
用法示例
# Apply the configuration in pod.json to a pod
oc apply -f ./pod.json
# Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
oc apply -k dir/
# Apply the JSON passed into stdin to a pod
cat pod.json | oc apply -f -
# Apply the configuration from all files that end with '.json'
oc apply -f '*.json'
# Note: --prune is still in Alpha
# Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx
oc apply --prune -f manifest.yaml -l app=nginx
# Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file
oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap
2.6.1.5. oc apply edit-last-applied
编辑资源/对象的最新 last-applied-configuration 注解
用法示例
# Edit the last-applied-configuration annotations by type/name in YAML
oc apply edit-last-applied deployment/nginx
# Edit the last-applied-configuration annotations by file in JSON
oc apply edit-last-applied -f deploy.yaml -o json
2.6.1.6. oc apply set-last-applied
设置 live 对象上的 last-applied-configuration 注释,以匹配文件的内容。
用法示例
# Set the last-applied-configuration of a resource to match the contents of a file
oc apply set-last-applied -f deploy.yaml
# Execute set-last-applied against each configuration file in a directory
oc apply set-last-applied -f path/
# Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist
oc apply set-last-applied -f deploy.yaml --create-annotation=true
2.6.1.7. oc apply view-last-applied
查看资源/对象最新的最后应用配置注解
用法示例
# View the last-applied-configuration annotations by type/name in YAML
oc apply view-last-applied deployment/nginx
# View the last-applied-configuration annotations by file in JSON
oc apply view-last-applied -f deploy.yaml -o json
2.6.1.8. oc attach
附加到正在运行的容器
用法示例
# Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation
# for selecting the container to be attached or the first container in the pod will be chosen
oc attach mypod
# Get output from ruby-container from pod mypod
oc attach mypod -c ruby-container
# Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod
# and sends stdout/stderr from 'bash' back to the client
oc attach mypod -c ruby-container -i -t
# Get output from the first pod of a replica set named nginx
oc attach rs/nginx
2.6.1.9. oc auth can-i
检查是否允许操作
用法示例
# Check to see if I can create pods in any namespace
oc auth can-i create pods --all-namespaces
# Check to see if I can list deployments in my current namespace
oc auth can-i list deployments.apps
# Check to see if service account "foo" of namespace "dev" can list pods in the namespace "prod"
# You must be allowed to use impersonation for the global option "--as"
oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod
# Check to see if I can do everything in my current namespace ("*" means all)
oc auth can-i '*' '*'
# Check to see if I can get the job named "bar" in namespace "foo"
oc auth can-i list jobs.batch/bar -n foo
# Check to see if I can read pod logs
oc auth can-i get pods --subresource=log
# Check to see if I can access the URL /logs/
oc auth can-i get /logs/
# Check to see if I can approve certificates.k8s.io
oc auth can-i approve certificates.k8s.io
# List all allowed actions in namespace "foo"
oc auth can-i --list --namespace=foo
2.6.1.10. oc auth reconcile
协调 RBAC 角色、角色绑定、集群角色和集群角色绑定对象的规则
用法示例
# Reconcile RBAC resources from a file
oc auth reconcile -f my-rbac-rules.yaml
2.6.1.11. oc auth whoami
实验性：检查您自己的主体 (subject) 属性
用法示例
# Get your subject attributes
oc auth whoami
# Get your subject attributes in JSON format
oc auth whoami -o json
2.6.1.12. oc autoscale
自动缩放部署配置、部署、副本集、有状态集或复制控制器
用法示例
# Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used
oc autoscale deployment foo --min=2 --max=10
# Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80%
oc autoscale rc foo --max=5 --cpu-percent=80
2.6.1.13. oc cancel-build
取消正在运行、待处理或新的构建
用法示例
# Cancel the build with the given name
oc cancel-build ruby-build-2
# Cancel the named build and print the build logs
oc cancel-build ruby-build-2 --dump-logs
# Cancel the named build and create a new one with the same parameters
oc cancel-build ruby-build-2 --restart
# Cancel multiple builds
oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3
# Cancel all builds created from the 'ruby-build' build config that are in the 'new' state
oc cancel-build bc/ruby-build --state=new
2.6.1.14. oc cluster-info
显示集群信息
用法示例
# Print the address of the control plane and cluster services
oc cluster-info
2.6.1.15. oc cluster-info dump
转储用于调试和诊断的相关信息
用法示例
# Dump current cluster state to stdout
oc cluster-info dump
# Dump current cluster state to /path/to/cluster-state
oc cluster-info dump --output-directory=/path/to/cluster-state
# Dump all namespaces to stdout
oc cluster-info dump --all-namespaces
# Dump a set of namespaces to /path/to/cluster-state
oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state
2.6.1.16. oc completion
输出指定 shell 的 shell 完成代码 (bash、zsh、fish 或 powershell)
用法示例
# Installing bash completion on macOS using homebrew
## If running Bash 3.2 included with macOS
brew install bash-completion
## or, if running Bash 4.1+
brew install bash-completion@2
## If oc is installed via homebrew, this should start working immediately
## If you've installed via other means, you may need add the completion to your completion directory
oc completion bash > $(brew --prefix)/etc/bash_completion.d/oc
# Installing bash completion on Linux
## If bash-completion is not installed on Linux, install the 'bash-completion' package
## via your distribution's package manager.
## Load the oc completion code for bash into the current shell
source <(oc completion bash)
## Write bash completion code to a file and source it from .bash_profile
oc completion bash > ~/.kube/completion.bash.inc
printf "
# oc shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
# Load the oc completion code for zsh[1] into the current shell
source <(oc completion zsh)
# Set the oc completion code for zsh[1] to autoload on startup
oc completion zsh > "${fpath[1]}/_oc"
# Load the oc completion code for fish[2] into the current shell
oc completion fish | source
# To load completions for each session, execute once:
oc completion fish > ~/.config/fish/completions/oc.fish
# Load the oc completion code for powershell into the current shell
oc completion powershell | Out-String | Invoke-Expression
# Set oc completion code for powershell to run on startup
## Save completion code to a script and execute in the profile
oc completion powershell > $HOME\.kube\completion.ps1
Add-Content $PROFILE "$HOME\.kube\completion.ps1"
## Execute completion code in the profile
Add-Content $PROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) {
oc completion powershell | Out-String | Invoke-Expression
}"
## Add completion code directly to the $PROFILE script
oc completion powershell >> $PROFILE
2.6.1.17. oc config current-context
显示 current-context
用法示例
# Display the current-context
oc config current-context
2.6.1.18. oc config delete-cluster
从 kubeconfig 删除指定的集群
用法示例
# Delete the minikube cluster
oc config delete-cluster minikube
2.6.1.19. oc config delete-context
从 kubeconfig 删除指定的上下文
用法示例
# Delete the context for the minikube cluster
oc config delete-context minikube
2.6.1.20. oc config delete-user
从 kubeconfig 删除指定用户
用法示例
# Delete the minikube user
oc config delete-user minikube
2.6.1.21. oc config get-clusters
显示 kubeconfig 中定义的集群
用法示例
# List the clusters that oc knows about
oc config get-clusters
2.6.1.22. oc config get-contexts
描述一个或多个上下文
用法示例
# List all the contexts in your kubeconfig file
oc config get-contexts
# Describe one context in your kubeconfig file
oc config get-contexts my-context
2.6.1.23. oc config get-users
显示 kubeconfig 中定义的用户
用法示例
# List the users that oc knows about
oc config get-users
2.6.1.24. oc config new-admin-kubeconfig
生成新的 admin.kubeconfig，使服务器信任它并显示该文件
用法示例
# Generate a new admin kubeconfig
oc config new-admin-kubeconfig
2.6.1.25. oc config new-kubelet-bootstrap-kubeconfig
生成新的 kubelet /etc/kubernetes/kubeconfig，使服务器信任它并显示该文件
用法示例
# Generate a new kubelet bootstrap kubeconfig
oc config new-kubelet-bootstrap-kubeconfig
2.6.1.26. oc config refresh-ca-bundle
通过联系 API 服务器来更新 OpenShift CA 捆绑包
用法示例
# Refresh the CA bundle for the current context's cluster
oc config refresh-ca-bundle
# Refresh the CA bundle for the cluster named e2e in your kubeconfig
oc config refresh-ca-bundle e2e
# Print the CA bundle from the current OpenShift cluster's API server
oc config refresh-ca-bundle --dry-run
2.6.1.27. oc config rename-context
从 kubeconfig 文件中重命名上下文
用法示例
# Rename the context 'old-name' to 'new-name' in your kubeconfig file
oc config rename-context old-name new-name
2.6.1.28. oc config set
在 kubeconfig 文件中设置单个值
用法示例
# Set the server field on the my-cluster cluster to https://1.2.3.4
oc config set clusters.my-cluster.server https://1.2.3.4
# Set the certificate-authority-data field on the my-cluster cluster
oc config set clusters.my-cluster.certificate-authority-data $(echo "cert_data_here" | base64 -i -)
# Set the cluster field in the my-context context to my-cluster
oc config set contexts.my-context.cluster my-cluster
# Set the client-key-data field in the cluster-admin user using --set-raw-bytes option
oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true
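As the `certificate-authority-data` example above suggests, that field stores the certificate bytes base64-encoded as a single line. A minimal sketch of the encoding round-trip in plain shell (no cluster needed; `cert_data_here` is a stand-in for real PEM data, as in the example above):

```shell
# Encode a stand-in certificate payload the way kubeconfig stores it:
# raw bytes -> single-line base64.
encoded=$(printf '%s' "cert_data_here" | base64 | tr -d '\n')

# Decoding recovers the original bytes exactly.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```

Note that `echo` (as used in the original example) appends a trailing newline to the input before encoding; `printf '%s'` avoids that when you need a byte-exact round-trip.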
2.6.1.29. oc config set-cluster
Set a cluster entry in kubeconfig
Example usage
# Set only the server field on the e2e cluster entry without touching other values
oc config set-cluster e2e --server=https://1.2.3.4
# Embed certificate authority data for the e2e cluster entry
oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt
# Disable cert checking for the e2e cluster entry
oc config set-cluster e2e --insecure-skip-tls-verify=true
# Set the custom TLS server name to use for validation for the e2e cluster entry
oc config set-cluster e2e --tls-server-name=my-cluster-name
# Set the proxy URL for the e2e cluster entry
oc config set-cluster e2e --proxy-url=https://1.2.3.4
2.6.1.30. oc config set-context
Set a context entry in kubeconfig
Example usage
# Set the user field on the gce context entry without touching other values
oc config set-context gce --user=cluster-admin
2.6.1.31. oc config set-credentials
Set a user entry in kubeconfig
Example usage
# Set only the "client-key" field on the "cluster-admin"
# entry, without touching other values
oc config set-credentials cluster-admin --client-key=~/.kube/admin.key
# Set basic auth for the "cluster-admin" entry
oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
# Embed client certificate data in the "cluster-admin" entry
oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true
# Enable the Google Compute Platform auth provider for the "cluster-admin" entry
oc config set-credentials cluster-admin --auth-provider=gcp
# Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments
oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar
# Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry
oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-
# Enable new exec auth plugin for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1
# Enable new exec auth plugin for the "cluster-admin" entry with interactive mode
oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never
# Define new exec auth plugin arguments for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2
# Create or update exec auth plugin environment variables for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2
# Remove exec auth plugin environment variables for the "cluster-admin" entry
oc config set-credentials cluster-admin --exec-env=var-to-remove-
2.6.1.32. oc config unset
Unset an individual value in a kubeconfig file
Example usage
# Unset the current-context
oc config unset current-context
# Unset namespace in foo context
oc config unset contexts.foo.namespace
2.6.1.33. oc config use-context
Set the current-context in a kubeconfig file
Example usage
# Use the context for the minikube cluster
oc config use-context minikube
2.6.1.34. oc config view
Display merged kubeconfig settings or a specified kubeconfig file
Example usage
# Show merged kubeconfig settings
oc config view
# Show merged kubeconfig settings, raw certificate data, and exposed secrets
oc config view --raw
# Get the password for the e2e user
oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
2.6.1.35. oc cp
Copy files and directories to and from containers
Example usage
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'oc cp' will fail.
#
# For advanced use cases, such as symlinks, wildcard expansion or
# file mode preservation, consider using 'oc exec'.
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar
# Copy /tmp/foo from a remote pod to /tmp/bar locally
oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar
# Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace
oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
# Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
# Copy /tmp/foo from a remote pod to /tmp/bar locally
oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
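The `tar`-based fallback shown in the first two examples can be exercised entirely locally: the same archive stream that `oc exec` pipes across the pod boundary works between two directories on one machine. A sketch with temporary paths and illustrative contents:

```shell
# Create a source and a destination directory standing in for the local
# filesystem and the container filesystem.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/foo"

# Writer side archives the source tree to stdout; reader side unpacks it.
# With oc, the right-hand side of the pipe would be
# `oc exec -i ... -- tar xf - -C ...` instead of a local tar.
tar -C "$src" -cf - . | tar -C "$dst" -xf -

cat "$dst/foo"
```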
2.6.1.36. oc create
Create a resource from a file or from stdin
Example usage
# Create a pod using the data in pod.json
oc create -f ./pod.json
# Create a pod based on the JSON passed into stdin
cat pod.json | oc create -f -
# Edit the data in registry.yaml in JSON then create the resource using the edited data
oc create -f registry.yaml --edit -o json
2.6.1.37. oc create build
Create a new build
Example usage
# Create a new build
oc create build myapp
2.6.1.38. oc create clusterresourcequota
Create a cluster resource quota
Example usage
# Create a cluster resource quota limited to 10 pods
oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10
2.6.1.39. oc create clusterrole
Create a cluster role
Example usage
# Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
oc create clusterrole pod-reader --verb=get,list,watch --resource=pods
# Create a cluster role named "pod-reader" with ResourceName specified
oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
# Create a cluster role named "foo" with API Group specified
oc create clusterrole foo --verb=get,list,watch --resource=rs.apps
# Create a cluster role named "foo" with SubResource specified
oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status
# Create a cluster role named "foo" with NonResourceURL specified
oc create clusterrole "foo" --verb=get --non-resource-url=/logs/*
# Create a cluster role named "monitoring" with AggregationRule specified
oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true"
2.6.1.40. oc create clusterrolebinding
Create a cluster role binding for a particular cluster role
Example usage
# Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role
oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1
2.6.1.41. oc create configmap
Create a config map from a local file, directory, or literal value
Example usage
# Create a new config map named my-config based on folder bar
oc create configmap my-config --from-file=path/to/bar
# Create a new config map named my-config with specified keys instead of file basenames on disk
oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt
# Create a new config map named my-config with key1=config1 and key2=config2
oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2
# Create a new config map named my-config from the key=value pairs in the file
oc create configmap my-config --from-file=path/to/bar
# Create a new config map named my-config from an env file
oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env
2.6.1.42. oc create cronjob
Create a cron job with the specified name
Example usage
# Create a cron job
oc create cronjob my-job --image=busybox --schedule="*/1 * * * *"
# Create a cron job with a command
oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date
2.6.1.43. oc create deployment
Create a deployment with the specified name
Example usage
# Create a deployment named my-dep that runs the busybox image
oc create deployment my-dep --image=busybox
# Create a deployment with a command
oc create deployment my-dep --image=busybox -- date
# Create a deployment named my-dep that runs the nginx image with 3 replicas
oc create deployment my-dep --image=nginx --replicas=3
# Create a deployment named my-dep that runs the busybox image and expose port 5701
oc create deployment my-dep --image=busybox --port=5701
# Create a deployment named my-dep that runs multiple containers
oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx
2.6.1.44. oc create deploymentconfig
Create a deployment config with default options that uses a given image
Example usage
# Create an nginx deployment config named my-nginx
oc create deploymentconfig my-nginx --image=nginx
2.6.1.45. oc create identity
Manually create an identity (only needed if automatic creation is disabled)
Example usage
# Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones"
oc create identity acme_ldap:adamjones
2.6.1.46. oc create imagestream
Create a new empty image stream
Example usage
# Create a new image stream
oc create imagestream mysql
2.6.1.47. oc create imagestreamtag
Create a new image stream tag
Example usage
# Create a new image stream tag based on an image in a remote registry
oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0
2.6.1.48. oc create ingress
Create an ingress with the specified name
Example usage
# Create a single ingress called 'simple' that directs requests to foo.com/bar to svc
# svc1:8080 with a TLS secret "my-cert"
oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"
# Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress"
oc create ingress catch-all --class=otheringress --rule="/path=svc:port"
# Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2
oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \
--annotation ingress.annotation1=foo \
--annotation ingress.annotation2=bla
# Create an ingress with the same host and multiple paths
oc create ingress multipath --class=default \
--rule="foo.com/=svc:port" \
--rule="foo.com/admin/=svcadmin:portadmin"
# Create an ingress with multiple hosts and the pathType as Prefix
oc create ingress ingress1 --class=default \
--rule="foo.com/path*=svc:8080" \
--rule="bar.com/admin*=svc2:http"
# Create an ingress with TLS enabled using the default ingress certificate and different path types
oc create ingress ingtls --class=default \
--rule="foo.com/=svc:https,tls" \
--rule="foo.com/path/subpath*=othersvc:8080"
# Create an ingress with TLS enabled using a specific secret and pathType as Prefix
oc create ingress ingsecret --class=default \
--rule="foo.com/*=svc:8080,tls=secret1"
# Create an ingress with a default backend
oc create ingress ingdefault --class=default \
--default-backend=defaultsvc:http \
--rule="foo.com/*=svc:8080,tls=secret1"
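For orientation, the `--rule` shorthand expands into a standard `networking.k8s.io/v1` Ingress object. A rough sketch of what the first 'simple' example above produces (approximate; exact defaults may vary by version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple
spec:
  tls:
  - hosts:
    - foo.com
    secretName: my-cert
  rules:
  - host: foo.com
    http:
      paths:
      - path: /bar
        pathType: Exact     # paths ending in '*' get pathType Prefix instead
        backend:
          service:
            name: svc1
            port:
              number: 8080
```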
2.6.1.49. oc create job
Create a job with the specified name
Example usage
# Create a job
oc create job my-job --image=busybox
# Create a job with a command
oc create job my-job --image=busybox -- date
# Create a job from a cron job named "a-cronjob"
oc create job test-job --from=cronjob/a-cronjob
2.6.1.50. oc create namespace
Create a namespace with the specified name
Example usage
# Create a new namespace named my-namespace
oc create namespace my-namespace
2.6.1.51. oc create poddisruptionbudget
Create a pod disruption budget with the specified name
Example usage
# Create a pod disruption budget named my-pdb that will select all pods with the app=rails label
# and require at least one of them being available at any point in time
oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1
# Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label
# and require at least half of the pods selected to be available at any point in time
oc create pdb my-pdb --selector=app=nginx --min-available=50%
2.6.1.52. oc create priorityclass
Create a priority class with the specified name
Example usage
# Create a priority class named high-priority
oc create priorityclass high-priority --value=1000 --description="high priority"
# Create a priority class named default-priority that is considered the global default priority
oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority"
# Create a priority class named high-priority that cannot preempt pods with lower priority
oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never"
2.6.1.53. oc create quota
Create a quota with the specified name
Example usage
# Create a new resource quota named my-quota
oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10
# Create a new resource quota named best-effort
oc create quota best-effort --hard=pods=100 --scopes=BestEffort
2.6.1.54. oc create role
Create a role with a single rule
Example usage
# Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
# Create a role named "pod-reader" with ResourceName specified
oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
# Create a role named "foo" with API Group specified
oc create role foo --verb=get,list,watch --resource=rs.apps
# Create a role named "foo" with SubResource specified
oc create role foo --verb=get,list,watch --resource=pods,pods/status
2.6.1.55. oc create rolebinding
Create a role binding for a particular role or cluster role
Example usage
# Create a role binding for user1, user2, and group1 using the admin cluster role
oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1
# Create a role binding for service account monitoring:sa-dev using the admin role
oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev
2.6.1.56. oc create route edge
Create a route that uses edge TLS termination
Example usage
# Create an edge route named "my-route" that exposes the frontend service
oc create route edge my-route --service=frontend
# Create an edge route that exposes the frontend service and specify a path
# If the route name is omitted, the service name will be used
oc create route edge --service=frontend --path /assets
2.6.1.57. oc create route passthrough
Create a route that uses passthrough TLS termination
Example usage
# Create a passthrough route named "my-route" that exposes the frontend service
oc create route passthrough my-route --service=frontend
# Create a passthrough route that exposes the frontend service and specify
# a host name. If the route name is omitted, the service name will be used
oc create route passthrough --service=frontend --hostname=www.example.com
2.6.1.58. oc create route reencrypt
Create a route that uses reencrypt TLS termination
Example usage
# Create a route named "my-route" that exposes the frontend service
oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert
# Create a reencrypt route that exposes the frontend service, letting the
# route name default to the service name and the destination CA certificate
# default to the service CA
oc create route reencrypt --service=frontend
2.6.1.59. oc create secret docker-registry
Create a secret for use with a Docker registry
Example usage
# If you do not already have a .dockercfg file, create a dockercfg secret directly
oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
# Create a new secret named my-secret from ~/.docker/config.json
oc create secret docker-registry my-secret --from-file=path/to/.docker/config.json
2.6.1.60. oc create secret generic
Create a secret from a local file, directory, or literal value
Example usage
# Create a new secret named my-secret with keys for each file in folder bar
oc create secret generic my-secret --from-file=path/to/bar
# Create a new secret named my-secret with specified keys instead of names on disk
oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub
# Create a new secret named my-secret with key1=supersecret and key2=topsecret
oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret
# Create a new secret named my-secret using a combination of a file and a literal
oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret
# Create a new secret named my-secret from env files
oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env
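The `--from-env-file` flag reads one `KEY=value` pair per line; blank lines and lines starting with `#` are skipped. A minimal sketch of what `path/to/foo.env` might contain (names illustrative):

```text
# foo.env -- each line becomes one key in the secret
DB_HOST=db.example.com
DB_PASSWORD=topsecret
```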
2.6.1.61. oc create secret tls
Create a TLS secret
Example usage
# Create a new TLS secret named tls-secret with the given key pair
oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key
2.6.1.62. oc create service clusterip
Create a ClusterIP service
Example usage
# Create a new ClusterIP service named my-cs
oc create service clusterip my-cs --tcp=5678:8080
# Create a new ClusterIP service named my-cs (in headless mode)
oc create service clusterip my-cs --clusterip="None"
2.6.1.63. oc create service externalname
Create an ExternalName service
Example usage
# Create a new ExternalName service named my-ns
oc create service externalname my-ns --external-name bar.com
2.6.1.64. oc create service loadbalancer
Create a LoadBalancer service
Example usage
# Create a new LoadBalancer service named my-lbs
oc create service loadbalancer my-lbs --tcp=5678:8080
2.6.1.65. oc create service nodeport
Create a NodePort service
Example usage
# Create a new NodePort service named my-ns
oc create service nodeport my-ns --tcp=5678:8080
2.6.1.66. oc create serviceaccount
Create a service account with the specified name
Example usage
# Create a new service account named my-service-account
oc create serviceaccount my-service-account
2.6.1.67. oc create token
Request a service account token
Example usage
# Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace
oc create token myapp
# Request a token for a service account in a custom namespace
oc create token myapp --namespace myns
# Request a token with a custom expiration
oc create token myapp --duration 10m
# Request a token with a custom audience
oc create token myapp --audience https://example.com
# Request a token bound to an instance of a Secret object
oc create token myapp --bound-object-kind Secret --bound-object-name mysecret
# Request a token bound to an instance of a Secret object with a specific UID
oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc
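The token printed by `oc create token` is a JWT: three base64url segments joined by dots, with the claims (service account, audience, expiry) in the middle segment. A sketch of inspecting those claims with plain shell, using a hypothetical unsigned token built inline (a real token's claims segment decodes the same way):

```shell
# Decode the claims segment of a JWT. base64url drops '=' padding and swaps
# '+/' for '-_', so both have to be undone before `base64 -d` accepts it.
decode_jwt_claims() {
  local seg pad
  seg=$(printf '%s' "$1" | cut -d. -f2)
  pad=$(( (4 - ${#seg} % 4) % 4 ))
  printf '%s%s' "$seg" "$(printf '%.*s' "$pad" '===')" | tr '_-' '/+' | base64 -d
}

# Hypothetical header.claims.signature token, for illustration only.
claims_json='{"sub":"system:serviceaccount:myns:myapp"}'
token="$(printf '%s' '{"alg":"none"}' | base64 | tr -d '=\n').$(printf '%s' "$claims_json" | base64 | tr -d '=\n').sig"

decode_jwt_claims "$token"
```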
2.6.1.68. oc create user
Manually create a user (only needed if automatic creation is disabled)
Example usage
# Create a user with the username "ajones" and the display name "Adam Jones"
oc create user ajones --full-name="Adam Jones"
2.6.1.69. oc create useridentitymapping
Manually map an identity to a user
Example usage
# Map the identity "acme_ldap:adamjones" to the user "ajones"
oc create useridentitymapping acme_ldap:adamjones ajones
2.6.1.70. oc debug
Launch a new instance of a pod for debugging
Example usage
# Start a shell session into a pod using the OpenShift tools image
oc debug
# Debug a currently running deployment by creating a new pod
oc debug deploy/test
# Debug a node as an administrator
oc debug node/master-1
# Debug a Windows node
# Note: the chosen image must match the Windows Server version (2019, 2022) of the node
oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022
# Launch a shell in a pod using the provided image stream tag
oc debug istag/mysql:latest -n openshift
# Test running a job as a non-root user
oc debug job/test --as-user=1000000
# Debug a specific failing container by running the env command in the 'second' container
oc debug daemonset/test -c second -- /bin/env
# See the pod that would be created to debug
oc debug mypod-9xbc -o yaml
# Debug a resource but launch the debug pod in another namespace
# Note: Not all resources can be debugged using --to-namespace without modification. For example,
# volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition
# to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace
oc debug mypod-9xbc --to-namespace testns
2.6.1.71. oc delete
Delete resources by file names, stdin, resources and names, or by resources and label selector
Example usage
# Delete a pod using the type and name specified in pod.json
oc delete -f ./pod.json
# Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
oc delete -k dir
# Delete resources from all files that end with '.json'
oc delete -f '*.json'
# Delete a pod based on the type and name in the JSON passed into stdin
cat pod.json | oc delete -f -
# Delete pods and services with same names "baz" and "foo"
oc delete pod,service baz foo
# Delete pods and services with label name=myLabel
oc delete pods,services -l name=myLabel
# Delete a pod with minimal delay
oc delete pod foo --now
# Force delete a pod on a dead node
oc delete pod foo --force
# Delete all pods
oc delete pods --all
# Delete all pods only if the user confirms the deletion
oc delete pods --all --interactive
2.6.1.72. oc describe
Show details of a specific resource or group of resources
Example usage
# Describe a node
oc describe nodes kubernetes-node-emt8.c.myproject.internal
# Describe a pod
oc describe pods/nginx
# Describe a pod identified by type and name in "pod.json"
oc describe -f pod.json
# Describe all pods
oc describe pods
# Describe pods by label name=myLabel
oc describe pods -l name=myLabel
# Describe all pods managed by the 'frontend' replication controller
# (rc-created pods get the name of the rc as a prefix in the pod name)
oc describe pods frontend
2.6.1.73. oc diff
Diff the live version against a would-be applied version
Example usage
# Diff resources included in pod.json
oc diff -f pod.json
# Diff file read from stdin
cat service.yaml | oc diff -f -
2.6.1.74. oc edit
Edit a resource on the server
Example usage
# Edit the service named 'registry'
oc edit svc/registry
# Use an alternative editor
KUBE_EDITOR="nano" oc edit svc/registry
# Edit the job 'myjob' in JSON using the v1 API format
oc edit job.v1.batch/myjob -o json
# Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation
oc edit deployment/mydeployment -o yaml --save-config
# Edit the 'status' subresource for the 'mydeployment' deployment
oc edit deployment mydeployment --subresource='status'
2.6.1.75. oc events
List events
Example usage
# List recent events in the default namespace
oc events
# List recent events in all namespaces
oc events --all-namespaces
# List recent events for the specified pod, then wait for more events and list them as they arrive
oc events --for pod/web-pod-13je7 --watch
# List recent events in YAML format
oc events -oyaml
# List only recent events of type 'Warning' or 'Normal'
oc events --types=Warning,Normal
2.6.1.76. oc exec
Execute a command in a container
Example usage
# Get output from running the 'date' command from pod mypod, using the first container by default
oc exec mypod -- date
# Get output from running the 'date' command in ruby-container from pod mypod
oc exec mypod -c ruby-container -- date
# Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod
# and sends stdout/stderr from 'bash' back to the client
oc exec mypod -c ruby-container -i -t -- bash -il
# List contents of /usr from the first container of pod mypod and sort by modification time
# If the command you want to execute in the pod has any flags in common (e.g. -i),
# you must use two dashes (--) to separate your command's flags/arguments
# Also note, do not surround your command and its flags/arguments with quotes
# unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr")
oc exec mypod -i -t -- ls -t /usr
# Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default
oc exec deploy/mydeployment -- date
# Get output from running 'date' command from the first pod of the service myservice, using the first container by default
oc exec svc/myservice -- date
2.6.1.77. oc explain
Get documentation for a resource
Example usage
# Get the documentation of the resource and its fields
oc explain pods
# Get all the fields in the resource
oc explain pods --recursive
# Get the explanation for deployment in supported api versions
oc explain deployments --api-version=apps/v1
# Get the documentation of a specific field of a resource
oc explain pods.spec.containers
# Get the documentation of resources in different format
oc explain deployment --output=plaintext-openapiv2
2.6.1.78. oc expose
Expose a replicated application as a service or route
Example usage
# Create a route based on service nginx. The new route will reuse nginx's labels
oc expose service nginx
# Create a route and specify your own label and route name
oc expose service nginx -l name=myroute --name=fromdowntown
# Create a route and specify a host name
oc expose service nginx --hostname=www.example.com
# Create a route with a wildcard
oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain
# This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included
# Expose a deployment configuration as a service and use the specified port
oc expose dc ruby-hello-world --port=8080
# Expose a service as a route in the specified path
oc expose service nginx --path=/nginx
2.6.1.79. oc extract
Extract secrets or config maps to disk
Example usage
# Extract the secret "test" to the current directory
oc extract secret/test
# Extract the config map "nginx" to the /tmp directory
oc extract configmap/nginx --to=/tmp
# Extract the config map "nginx" to STDOUT
oc extract configmap/nginx --to=-
# Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory
oc extract configmap/nginx --to=/tmp --keys=nginx.conf
2.6.1.80. oc get
Display one or many resources
Example usage
# List all pods in ps output format
oc get pods
# List all pods in ps output format with more information (such as node name)
oc get pods -o wide
# List a single replication controller with specified NAME in ps output format
oc get replicationcontroller web
# List deployments in JSON output format, in the "v1" version of the "apps" API group
oc get deployments.v1.apps -o json
# List a single pod in JSON output format
oc get -o json pod web-pod-13je7
# List a pod identified by type and name specified in "pod.yaml" in JSON output format
oc get -f pod.yaml -o json
# List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml
oc get -k dir/
# Return only the phase value of the specified pod
oc get -o template pod/web-pod-13je7 --template={{.status.phase}}
# List resource information in custom columns
oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image
# List all replication controllers and services together in ps output format
oc get rc,services
# List one or more resources by their type and names
oc get rc/web service/frontend pods/web-pod-13je7
# List the 'status' subresource for a single pod
oc get pod web-pod-13je7 --subresource status
# List all deployments in namespace 'backend'
oc get deployments.apps --namespace backend
# List all pods existing in all namespaces
oc get pods --all-namespaces
2.6.1.81. oc get-token
Experimental: Get a token from an external OIDC issuer as a credentials exec plugin
Example usage
# Starts an auth code flow to the issuer URL with the client ID and the given extra scopes
oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile
# Starts an auth code flow to the issuer URL with a different callback address
oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343
2.6.1.82. oc idle
Idle scalable resources
Example usage
# Idle the scalable controllers associated with the services listed in to-idle.txt
oc idle --resource-names-file to-idle.txt
2.6.1.83. oc image append
Add layers to images and push them to a registry
Example usage
# Remove the entrypoint on the mysql:latest image
oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}'
# Add a new layer to the image
oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz
# Add a new layer to the image and store the result on disk
# This results in $(pwd)/v2/mysql/blobs,manifests
oc image append --from mysql:latest --to file://mysql:local layer.tar.gz
# Add a new layer to the image and store the result on disk in a designated directory
# This will result in $(pwd)/mysql-local/v2/mysql/blobs,manifests
oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz
# Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists)
oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz
# Add a new layer to an image that was mirrored to the current directory on disk ($(pwd)/v2/image exists)
oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz
# Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch
# Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified
oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz
# Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified
oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz
# Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist
oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz
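The `layer.tar.gz` argument in the examples above is simply a (gzipped) tar archive of the filesystem delta to add. A local sketch of producing one, with illustrative file names:

```shell
# Build a one-file filesystem delta and pack it as a gzipped tar layer.
layerdir=$(mktemp -d)
mkdir -p "$layerdir/etc"
echo "extra-config" > "$layerdir/etc/myapp.conf"
tar -C "$layerdir" -czf layer.tar.gz .

# The archive lists the delta's paths relative to the image root.
tar -tzf layer.tar.gz
```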
2.6.1.84. oc image extract
Copy files from an image to the file system
Example usage
# Extract the busybox image into the current directory
oc image extract docker.io/library/busybox:latest
# Extract the busybox image into a designated directory (must exist)
oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox
# Extract the busybox image into the current directory for linux/s390x platform
# Note: Wildcard filter is not supported with extract; pass a single os/arch to extract
oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x
# Extract a single file from the image into the current directory
oc image extract docker.io/library/centos:7 --path /bin/bash:.
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory
oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:.
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist)
# This results in /tmp/yum.repos.d/*.repo on local system
oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d
# Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists)
# --confirm is required because the current directory is not empty
oc image extract file://busybox:local --confirm
# Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory
# --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists)
oc image extract file://busybox:local --dir busybox-mirror-dir --confirm
# Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist)
oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox
# Extract the last layer in the image
oc image extract docker.io/library/centos:7[-1]
# Extract the first three layers of the image
oc image extract docker.io/library/centos:7[:3]
# Extract the last three layers of the image
oc image extract docker.io/library/centos:7[-3:]
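The `[-1]`, `[:3]`, and `[-3:]` layer selectors in the examples above behave like Python-style slices over the image's ordered layer list (base layer first). A minimal sketch with made-up layer names, just to show which layers each selector picks:

```python
# Ordered layers of a hypothetical image, base layer first.
layers = ["layer0", "layer1", "layer2", "layer3", "layer4"]

last = layers[-1:]        # [-1]  -> only the last (topmost) layer
first_three = layers[:3]  # [:3]  -> the first three layers
last_three = layers[-3:]  # [-3:] -> the last three layers

print(last)         # ['layer4']
print(first_three)  # ['layer0', 'layer1', 'layer2']
print(last_three)   # ['layer2', 'layer3', 'layer4']
```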
2.6.1.85. oc image info
Display information about an image
Example usage
# Show information about an image
oc image info quay.io/openshift/cli:latest
# Show information about images matching a wildcard
oc image info quay.io/openshift/cli:4.*
# Show information about a file mirrored to disk under DIR
oc image info --dir=DIR file://library/busybox:latest
# Select which image from a multi-OS image to show
oc image info library/busybox:latest --filter-by-os=linux/arm64
2.6.1.86. oc image mirror
Mirror images from one repository to another
Example usage
# Copy image to another tag
oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable
# Copy image to another registry
oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable
# Copy all tags starting with mysql to the destination repository
oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage
# Copy image to disk, creating a directory structure that can be served as a registry
oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest
# Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest)
oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest
# Copy image to S3 without setting a tag (pull via @<digest>)
oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image
# Copy image to multiple locations
oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \
docker.io/myrepository/myimage:dev
# Copy multiple images
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
myregistry.com/myimage:new=myregistry.com/other:target
# Copy manifest list of a multi-architecture image, even if only a single image is found
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
--keep-manifest-list=true
# Copy specific os/arch manifest of a multi-architecture image
# Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images
# Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered manifests
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
--filter-by-os=os/arch
# Copy all os/arch manifests of a multi-architecture image
# Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
--keep-manifest-list=true
# Note the above command is equivalent to
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
--filter-by-os=.*
# Copy specific os/arch manifest of a multi-architecture image
# Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images
# Note that the target registry may reject a manifest list if the platform specific images do not all exist
# You must use a registry with sparse registry support enabled
oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \
--filter-by-os=linux/386 \
--keep-manifest-list=true
2.6.1.87. oc import-image
Import images from a container image registry
Example usage
# Import tag latest into a new image stream
oc import-image mystream --from=registry.io/repo/image:latest --confirm
# Update imported data for tag latest in an already existing image stream
oc import-image mystream
# Update imported data for tag stable in an already existing image stream
oc import-image mystream:stable
# Update imported data for all tags in an existing image stream
oc import-image mystream --all
# Update imported data for a tag that points to a manifest list to include the full manifest list
oc import-image mystream --import-mode=PreserveOriginal
# Import all tags into a new image stream
oc import-image mystream --from=registry.io/repo/image --all --confirm
# Import all tags into a new image stream using a custom timeout
oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm
2.6.1.88. oc kustomize
Build a kustomization target from a directory or URL
Example usage
# Build the current working directory
oc kustomize
# Build some shared configuration directory
oc kustomize /home/config/production
# Build from github
oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
2.6.1.89. oc label
Update the labels on a resource
Example usage
# Update pod 'foo' with the label 'unhealthy' and the value 'true'
oc label pods foo unhealthy=true
# Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value
oc label --overwrite pods foo status=unhealthy
# Update all pods in the namespace
oc label pods --all status=unhealthy
# Update a pod identified by the type and name in "pod.json"
oc label -f pod.json status=unhealthy
# Update pod 'foo' only if the resource is unchanged from version 1
oc label pods foo status=unhealthy --resource-version=1
# Update pod 'foo' by removing a label named 'bar' if it exists
# Does not require the --overwrite flag
oc label pods foo bar-
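The argument forms above follow a simple convention: `key=value` adds or updates a label, and a trailing dash (`bar-`) removes the key. A rough sketch of that argument splitting, assuming this simplified grammar (the real CLI also validates label names and values):

```python
def parse_label_args(args):
    """Split oc-label-style arguments into labels to set and keys to remove.

    'key=value' sets a label; a trailing '-' (e.g. 'bar-') removes the key.
    Illustrative sketch only, not the actual oc implementation.
    """
    to_set, to_remove = {}, []
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            to_remove.append(arg[:-1])
        elif "=" in arg:
            key, value = arg.split("=", 1)
            to_set[key] = value
        else:
            raise ValueError(f"unrecognized label argument: {arg}")
    return to_set, to_remove

print(parse_label_args(["unhealthy=true", "bar-"]))
# ({'unhealthy': 'true'}, ['bar'])
```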
2.6.1.90. oc login
Log in to a server
Example usage
# Log in interactively
oc login --username=myuser
# Log in to the given server with the given certificate authority file
oc login localhost:8443 --certificate-authority=/path/to/cert.crt
# Log in to the given server with the given credentials (will not prompt interactively)
oc login localhost:8443 --username=myuser --password=mypass
# Log in to the given server through a browser
oc login localhost:8443 --web --callback-port 8280
# Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080
oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080
2.6.1.91. oc logout
End the current server session
Example usage
# Log out
oc logout
2.6.1.92. oc logs
Print the logs for a container in a pod
Example usage
# Start streaming the logs of the most recent build of the openldap build config
oc logs -f bc/openldap
# Start streaming the logs of the latest deployment of the mysql deployment config
oc logs -f dc/mysql
# Get the logs of the first deployment for the mysql deployment config. Note that logs
# from older deployments may not exist either because the deployment was successful
# or due to deployment pruning or manual deletion of the deployment
oc logs --version=1 dc/mysql
# Return a snapshot of ruby-container logs from pod backend
oc logs backend -c ruby-container
# Start streaming of ruby-container logs from pod backend
oc logs -f pod/backend -c ruby-container
2.6.1.93. oc new-app
Create a new application
Example usage
# List all local templates and image streams that can be used to create an app
oc new-app --list
# Create an application based on the source code in the current git repository (with a public remote) and a container image
oc new-app . --image=registry/repo/langimage
# Create an application myapp with Docker based build strategy expecting binary input
oc new-app --strategy=docker --binary --name myapp
# Create a Ruby application based on the provided [image]~[source code] combination
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
# Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql
oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql
# Use a MySQL image in a private registry to create an app and override application artifacts' names
oc new-app --image=myregistry.com/mycompany/mysql --name=private
# Use an image with the full manifest list to create an app and override application artifacts' names
oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal
# Create an application from a remote repository using its beta4 branch
oc new-app https://github.com/openshift/ruby-hello-world#beta4
# Create an application based on a stored template, explicitly setting a parameter value
oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin
# Create an application from a remote repository and specify a context directory
oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build
# Create an application from a remote private repository and specify which existing secret to use
oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret
# Create an application based on a template file, explicitly setting a parameter value
oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin
# Search all templates, image streams, and container images for the ones that match "ruby"
oc new-app --search ruby
# Search for "ruby", but only in stored templates (--template, --image-stream and --image
# can be used to filter search results)
oc new-app --search --template=ruby
# Search for "ruby" in stored templates and print the output as YAML
oc new-app --search --template=ruby --output=yaml
2.6.1.94. oc new-build
Create a new build configuration
Example usage
# Create a build config based on the source code in the current git repository (with a public
# remote) and a container image
oc new-build . --image=repo/langimage
# Create a NodeJS build config based on the provided [image]~[source code] combination
oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git
# Create a build config from a remote repository using its beta2 branch
oc new-build https://github.com/openshift/ruby-hello-world#beta2
# Create a build config using a Dockerfile specified as an argument
oc new-build -D $'FROM centos:7\nRUN yum install -y httpd'
# Create a build config from a remote repository and add custom environment variables
oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development
# Create a build config from a remote private repository and specify which existing secret to use
oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret
# Create a build config using an image with the full manifest list to create an app and override application artifacts' names
oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal
# Create a build config from a remote repository and inject the npmrc into a build
oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc
# Create a build config from a remote repository and inject environment data into a build
oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config
# Create a build config that gets its input from a remote repository and another container image
oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp
2.6.1.95. oc new-project
Request a new project
Example usage
# Create a new project with minimal information
oc new-project web-team-dev
# Create a new project with a display name and description
oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team."
2.6.1.96. oc observe
Observe changes to resources and react to them (experimental)
Example usage
# Observe changes to services
oc observe services
# Observe changes to services, including the clusterIP and invoke a script for each
oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh
# Observe changes to services filtered by a label selector
oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh
2.6.1.97. oc patch
Update fields of a resource
Example usage
# Partially update a node using a strategic merge patch, specifying the patch as JSON
oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
# Partially update a node using a strategic merge patch, specifying the patch as YAML
oc patch node k8s-node-1 -p $'spec:\n unschedulable: true'
# Partially update a node identified by the type and name specified in "node.json" using strategic merge patch
oc patch -f node.json -p '{"spec":{"unschedulable":true}}'
# Update a container's image; spec.containers[*].name is required because it's a merge key
oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
# Update a container's image using a JSON patch with positional arrays
oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
# Update a deployment's replicas through the 'scale' subresource using a merge patch
oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'
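The difference between the merge patches and the JSON patch in the examples above can be sketched in a few lines. This is a simplified illustration of merge-patch semantics (RFC 7386): dicts merge recursively, everything else is replaced wholesale; a strategic merge patch additionally uses merge keys such as `spec.containers[*].name` to merge list entries, which this sketch does not implement:

```python
import copy

def merge_patch(doc, patch):
    """Apply an RFC 7386-style merge patch: dicts merge recursively,
    everything else (including lists) is replaced wholesale."""
    if not isinstance(patch, dict):
        return copy.deepcopy(patch)
    result = dict(doc) if isinstance(doc, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

# Patching only spec.unschedulable leaves the other spec fields intact.
node = {"spec": {"unschedulable": False, "podCIDR": "10.0.0.0/24"}}
patched = merge_patch(node, {"spec": {"unschedulable": True}})
print(patched)  # podCIDR is preserved
```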
2.6.1.98. oc plugin
Provides utilities for interacting with plugins
Example usage
# List all available plugins
oc plugin list
# List only binary names of available plugins without paths
oc plugin list --name-only
2.6.1.99. oc plugin list
List all visible plugin executables on a user's PATH
Example usage
# List all available plugins
oc plugin list
# List only binary names of available plugins without paths
oc plugin list --name-only
2.6.1.100. oc policy add-role-to-user
Add a role to users or service accounts for the current project
Example usage
# Add the 'view' role to user1 for the current project
oc policy add-role-to-user view user1
# Add the 'edit' role to serviceaccount1 for the current project
oc policy add-role-to-user edit -z serviceaccount1
2.6.1.101. oc policy scc-review
Check which service account can create a pod
Example usage
# Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml
# Service Account specified in myresource.yaml file is ignored
oc policy scc-review -z sa1,sa2 -f my_resource.yaml
# Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml
oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml
# Check whether the service account specified in my_resource_with_sa.yaml can admit the pod
oc policy scc-review -f my_resource_with_sa.yaml
# Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml
oc policy scc-review -f myresource_with_no_sa.yaml
2.6.1.102. oc policy scc-subject-review
Check whether a user or a service account can create a pod
Example usage
# Check whether user bob can create a pod specified in myresource.yaml
oc policy scc-subject-review -u bob -f myresource.yaml
# Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml
oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml
# Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod
oc policy scc-subject-review -f myresourcewithsa.yaml
2.6.1.103. oc port-forward
Forward one or more local ports to a pod
Example usage
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
oc port-forward pod/mypod 5000 6000
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
oc port-forward deployment/mydeployment 5000 6000
# Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service
oc port-forward service/myservice 8443:https
# Listen on port 8888 locally, forwarding to 5000 in the pod
oc port-forward pod/mypod 8888:5000
# Listen on port 8888 on all addresses, forwarding to 5000 in the pod
oc port-forward --address 0.0.0.0 pod/mypod 8888:5000
# Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000
# Listen on a random port locally, forwarding to 5000 in the pod
oc port-forward pod/mypod :5000
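The port arguments above follow a `LOCAL:REMOTE` convention: `5000` forwards the same port on both ends, `8888:5000` maps local 8888 to pod port 5000, and `:5000` asks for a random local port. A sketch of that parsing for numeric ports only (the real CLI also resolves named service ports such as `https`):

```python
def parse_port_spec(spec):
    """Return (local, remote) for an oc-port-forward-style port argument.
    local=None means "pick a random free local port".
    Illustrative sketch; handles numeric ports only."""
    if ":" not in spec:
        port = int(spec)
        return port, port           # '5000'      -> (5000, 5000)
    local, remote = spec.split(":", 1)
    if local == "":
        return None, int(remote)    # ':5000'     -> (None, 5000)
    return int(local), int(remote)  # '8888:5000' -> (8888, 5000)

print(parse_port_spec("5000"), parse_port_spec("8888:5000"), parse_port_spec(":5000"))
```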
2.6.1.104. oc process
Process a template into a list of resources
Example usage
# Convert the template.json file into a resource list and pass to create
oc process -f template.json | oc create -f -
# Process a file locally instead of contacting the server
oc process -f template.json --local -o yaml
# Process template while passing a user-defined label
oc process -f template.json -l name=mytemplate
# Convert a stored template into a resource list
oc process foo
# Convert a stored template into a resource list by setting/overriding parameter values
oc process foo PARM1=VALUE1 PARM2=VALUE2
# Convert a template stored in different namespace into a resource list
oc process openshift//foo
# Convert template.json into a resource list
cat template.json | oc process -f -
2.6.1.105. oc project
Switch to another project
Example usage
# Switch to the 'myapp' project
oc project myapp
# Display the project currently in use
oc project
2.6.1.106. oc projects
Display existing projects
Example usage
# List all projects
oc projects
2.6.1.107. oc proxy
Run a proxy to the Kubernetes API server
Example usage
# To proxy all of the Kubernetes API and nothing else
oc proxy --api-prefix=/
# To proxy only part of the Kubernetes API and also some static files
# You can get pods info with 'curl localhost:8001/api/v1/pods'
oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/
# To proxy the entire Kubernetes API at a different root
# You can get pods info with 'curl localhost:8001/custom/api/v1/pods'
oc proxy --api-prefix=/custom/
# Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/
oc proxy --port=8011 --www=./local/www/
# Run a proxy to the Kubernetes API server on an arbitrary local port
# The chosen port for the server will be output to stdout
oc proxy --port=0
# Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api
# This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/
oc proxy --api-prefix=/k8s-api
2.6.1.108. oc registry login
Log in to the integrated registry
Example usage
# Log in to the integrated registry
oc registry login
# Log in to different registry using BASIC auth credentials
oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS
2.6.1.109. oc replace
Replace a resource by file name or stdin
Example usage
# Replace a pod using the data in pod.json
oc replace -f ./pod.json
# Replace a pod based on the JSON passed into stdin
cat pod.json | oc replace -f -
# Update a single-container pod's image version (tag) to v4
oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | oc replace -f -
# Force replace, delete and then re-create the resource
oc replace --force -f ./pod.json
2.6.1.110. oc rollback
Revert part of an application back to a previous deployment
Example usage
# Perform a rollback to the last successfully completed deployment for a deployment config
oc rollback frontend
# See what a rollback to version 3 will look like, but do not perform the rollback
oc rollback frontend --to-version=3 --dry-run
# Perform a rollback to a specific deployment
oc rollback frontend-2
# Perform the rollback manually by piping the JSON of the new config back to oc
oc rollback frontend -o json | oc replace dc/frontend -f -
# Print the updated deployment configuration in JSON format instead of performing the rollback
oc rollback frontend -o json
2.6.1.111. oc rollout
Manage the rollout of a resource
Example usage
# Roll back to the previous deployment
oc rollout undo deployment/abc
# Check the rollout status of a daemonset
oc rollout status daemonset/foo
# Restart a deployment
oc rollout restart deployment/abc
# Restart deployments with the 'app=nginx' label
oc rollout restart deployment --selector=app=nginx
2.6.1.112. oc rollout cancel
Cancel the in-progress deployment
Example usage
# Cancel the in-progress deployment based on 'nginx'
oc rollout cancel dc/nginx
2.6.1.113. oc rollout history
View rollout history
Example usage
# View the rollout history of a deployment
oc rollout history deployment/abc
# View the details of daemonset revision 3
oc rollout history daemonset/abc --revision=3
2.6.1.114. oc rollout latest
Start a new rollout for a deployment config with the latest state from its triggers
Example usage
# Start a new rollout based on the latest images defined in the image change triggers
oc rollout latest dc/nginx
# Print the rolled out deployment config
oc rollout latest dc/nginx -o json
2.6.1.115. oc rollout pause
Mark the provided resource as paused
Example usage
# Mark the nginx deployment as paused
# Any current state of the deployment will continue its function; new updates
# to the deployment will not have an effect as long as the deployment is paused
oc rollout pause deployment/nginx
2.6.1.116. oc rollout restart
Restart a resource
Example usage
# Restart all deployments in the test-namespace namespace
oc rollout restart deployment -n test-namespace
# Restart a deployment
oc rollout restart deployment/nginx
# Restart a daemon set
oc rollout restart daemonset/abc
# Restart deployments with the app=nginx label
oc rollout restart deployment --selector=app=nginx
2.6.1.117. oc rollout resume
Resume a paused resource
Example usage
# Resume an already paused deployment
oc rollout resume deployment/nginx
2.6.1.118. oc rollout retry
Retry the latest failed rollout
Example usage
# Retry the latest failed deployment based on 'frontend'
# The deployer pod and any hook pods are deleted for the latest failed deployment
oc rollout retry dc/frontend
2.6.1.119. oc rollout status
Show the status of the rollout
Example usage
# Watch the rollout status of a deployment
oc rollout status deployment/nginx
2.6.1.120. oc rollout undo
Undo a previous rollout
Example usage
# Roll back to the previous deployment
oc rollout undo deployment/abc
# Roll back to daemonset revision 3
oc rollout undo daemonset/abc --to-revision=3
# Roll back to the previous deployment with dry-run
oc rollout undo --dry-run=server deployment/abc
2.6.1.121. oc rsh
Start a shell session in a container
Example usage
# Open a shell session on the first container in pod 'foo'
oc rsh foo
# Open a shell session on the first container in pod 'foo' and namespace 'bar'
# (Note that oc client specific arguments must come before the resource name and its arguments)
oc rsh -n bar foo
# Run the command 'cat /etc/resolv.conf' inside pod 'foo'
oc rsh foo cat /etc/resolv.conf
# See the configuration of your internal registry
oc rsh dc/docker-registry cat config.yml
# Open a shell session on the container named 'index' inside a pod of your job
oc rsh -c index job/scheduled
2.6.1.122. oc rsync
Copy files between a local file system and a pod
Example usage
# Synchronize a local directory with a pod directory
oc rsync ./local/dir/ POD:/remote/dir
# Synchronize a pod directory with a local directory
oc rsync POD:/remote/dir/ ./local/dir
2.6.1.123. oc run
Run a particular image on the cluster
Example usage
# Start a nginx pod
oc run nginx --image=nginx
# Start a hazelcast pod and let the container expose port 5701
oc run hazelcast --image=hazelcast/hazelcast --port=5701
# Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container
oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default"
# Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container
oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod"
# Dry run; print the corresponding API objects without creating them
oc run nginx --image=nginx --dry-run=client
# Start a nginx pod, but overload the spec with a partial set of values parsed from JSON
oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }'
# Start a busybox pod and keep it in the foreground, don't restart it if it exits
oc run -i -t busybox --image=busybox --restart=Never
# Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command
oc run nginx --image=nginx -- <arg1> <arg2> ... <argN>
# Start the nginx pod using a different command and custom arguments
oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
2.6.1.124. oc scale
Set a new size for a deployment, replica set, or replication controller
Example usage
# Scale a replica set named 'foo' to 3
oc scale --replicas=3 rs/foo
# Scale a resource identified by type and name specified in "foo.yaml" to 3
oc scale --replicas=3 -f foo.yaml
# If the deployment named mysql's current size is 2, scale mysql to 3
oc scale --current-replicas=2 --replicas=3 deployment/mysql
# Scale multiple replication controllers
oc scale --replicas=5 rc/example1 rc/example2 rc/example3
# Scale stateful set named 'web' to 3
oc scale --replicas=3 statefulset/web
2.6.1.125. oc secrets link
Link secrets to a service account
Example usage
# Add an image pull secret to a service account to automatically use it for pulling pod images
oc secrets link serviceaccount-name pull-secret --for=pull
# Add an image pull secret to a service account to automatically use it for both pulling and pushing build images
oc secrets link builder builder-image-secret --for=pull,mount
2.6.1.126. oc secrets unlink
Detach secrets from a service account
Example usage
# Unlink a secret currently associated with a service account
oc secrets unlink serviceaccount-name secret-name another-secret-name ...
2.6.1.127. oc set build-hook
Update a build hook on a build config
Example usage
# Clear post-commit hook on a build config
oc set build-hook bc/mybuild --post-commit --remove
# Set the post-commit hook to execute a test suite using a new entrypoint
oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh
# Set the post-commit hook to execute a shell script
oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh"
2.6.1.128. oc set build-secret
Update a build secret on a build config
Example usage
# Clear the push secret on a build config
oc set build-secret --push --remove bc/mybuild
# Set the pull secret on a build config
oc set build-secret --pull bc/mybuild mysecret
# Set the push and pull secret on a build config
oc set build-secret --push --pull bc/mybuild mysecret
# Set the source secret on a set of build configs matching a selector
oc set build-secret --source -l app=myapp gitsecret
2.6.1.129. oc set data
Update the data within a config map or secret
Example usage
# Set the 'password' key of a secret
oc set data secret/foo password=this_is_secret
# Remove the 'password' key from a secret
oc set data secret/foo password-
# Update the 'haproxy.conf' key of a config map from a file on disk
oc set data configmap/bar --from-file=../haproxy.conf
# Update a secret with the contents of a directory, one key per file
oc set data secret/foo --from-file=secret-dir
2.6.1.130. oc set deployment-hook
Update a deployment hook on a deployment config
Example usage
# Clear pre and post hooks on a deployment config
oc set deployment-hook dc/myapp --remove --pre --post
# Set the pre deployment hook to execute a db migration command for an application
# using the data volume from the application
oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh
# Set a mid deployment hook along with additional environment variables
oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh
2.6.1.131. oc set env
Update environment variables on a pod template
Example usage
# Update deployment config 'myapp' with a new environment variable
oc set env dc/myapp STORAGE_DIR=/local
# List the environment variables defined on a build config 'sample-build'
oc set env bc/sample-build --list
# List the environment variables defined on all pods
oc set env pods --all --list
# Output modified build config in YAML
oc set env bc/sample-build STORAGE_DIR=/data -o yaml
# Update all containers in all replication controllers in the project to have ENV=prod
oc set env rc --all ENV=prod
# Import environment from a secret
oc set env --from=secret/mysecret dc/myapp
# Import environment from a config map with a prefix
oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp
# Remove the environment variable ENV from container 'c1' in all deployment configs
oc set env dc --all --containers="c1" ENV-
# Remove the environment variable ENV from a deployment config definition on disk and
# update the deployment config on the server
oc set env -f dc.json ENV-
# Set some of the local shell environment into a deployment config on the server
oc set env | grep RAILS_ | oc env -e - dc/myapp
2.6.1.132. oc set image
Update the image of a pod template
Example usage
# Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'.
oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1
# Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'.
oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag
# Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'
oc set image deployments,rc nginx=nginx:1.9.1 --all
# Update image of all containers of daemonset abc to 'nginx:1.9.1'
oc set image daemonset abc *=nginx:1.9.1
# Print result (in YAML format) of updating nginx container image from local file, without hitting the server
oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
2.6.1.133. oc set image-lookup
Change how images are resolved when deploying applications
Example usage
# Print all of the image streams and whether they resolve local names
oc set image-lookup
# Use local name lookup on image stream mysql
oc set image-lookup mysql
# Force a deployment to use local name lookup
oc set image-lookup deploy/mysql
# Show the current status of the deployment lookup
oc set image-lookup deploy/mysql --list
# Disable local name lookup on image stream mysql
oc set image-lookup mysql --enabled=false
# Set local name lookup on all image streams
oc set image-lookup --all
2.6.1.134. oc set probe
Update a probe on a pod template
Example usage
# Clear both readiness and liveness probes off all containers
oc set probe dc/myapp --remove --readiness --liveness
# Set an exec action as a liveness probe to run 'echo ok'
oc set probe dc/myapp --liveness -- echo ok
# Set a readiness probe to try to open a TCP socket on 3306
oc set probe rc/mysql --readiness --open-tcp=3306
# Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP
oc set probe dc/webapp --startup --get-url=http://:8080/healthz
# Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP
oc set probe dc/webapp --readiness --get-url=http://:8080/healthz
# Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod
oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats
# Set only the initial-delay-seconds field on all deployments
oc set probe dc --all --readiness --initial-delay-seconds=30
2.6.1.135. oc set resources
Update resource requests/limits on objects with pod templates
Example usage
# Set a deployment's nginx container CPU limit to "200m" and memory limit to "512Mi"
oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi
# Set the resource request and limits for all containers in nginx
oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi
# Remove the resource requests for resources on containers in nginx
oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0
# Print the result (in YAML format) of updating nginx container limits locally, without hitting the server
oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml
2.6.1.136. oc set route-backends
Update the backends for a route
Example usage
# Print the backends on the route 'web'
oc set route-backends web
# Set two backend services on route 'web' with 2/3rds of traffic going to 'a'
oc set route-backends web a=2 b=1
# Increase the traffic percentage going to b by 10% relative to a
oc set route-backends web --adjust b=+10%
# Set traffic percentage going to b to 10% of the traffic going to a
oc set route-backends web --adjust b=10%
# Set weight of b to 10
oc set route-backends web --adjust b=10
# Set the weight to all backends to zero
oc set route-backends web --zero
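Route backend weights are relative, not absolute percentages: with `a=2 b=1`, backend `a` receives 2/3 of the traffic. A quick check of that arithmetic:

```python
def traffic_share(weights):
    """Convert relative route-backend weights into traffic fractions.
    Illustrative sketch of the weight arithmetic, not oc internals."""
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in weights}  # --zero: no traffic
    return {name: w / total for name, w in weights.items()}

# 'oc set route-backends web a=2 b=1' sends two thirds of traffic to a.
shares = traffic_share({"a": 2, "b": 1})
print(shares)
```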
2.6.1.137. oc set selector
Set a selector on a resource
Example usage
# Set the labels and selector before creating a deployment/service pair.
oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f -
oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -
2.6.1.138. oc set serviceaccount
Update the service account of a resource
Example usage
# Set deployment nginx-deployment's service account to serviceaccount1
oc set serviceaccount deployment nginx-deployment serviceaccount1
# Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server
oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml
2.6.1.139. oc set subject
Update the user, group, or service account in a role binding or cluster role binding
Example usage
# Update a cluster role binding for serviceaccount1
oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1
# Update a role binding for user1, user2, and group1
oc set subject rolebinding admin --user=user1 --user=user2 --group=group1
# Print the result (in YAML format) of updating role binding subjects locally, without hitting the server
oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml
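The subjects managed by this command correspond to the `subjects` list of the RBAC binding. For example, after `oc set subject rolebinding admin --user=user1 --user=user2 --group=group1`, the role binding's subjects would look roughly like this (standard Kubernetes RBAC fields):

```yaml
# Subjects list on the role binding
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: user1
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: user2
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: group1
```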
2.6.1.140. oc set triggers
Update the triggers on one or more objects
Example usage
# Print the triggers on the deployment config 'myapp'
oc set triggers dc/myapp
# Set all triggers to manual
oc set triggers dc/myapp --manual
# Enable all automatic triggers
oc set triggers dc/myapp --auto
# Reset the GitHub webhook on a build to a new, generated secret
oc set triggers bc/webapp --from-github
oc set triggers bc/webapp --from-webhook
# Remove all triggers
oc set triggers bc/webapp --remove-all
# Stop triggering on config change
oc set triggers dc/myapp --from-config --remove
# Add an image trigger to a build config
oc set triggers bc/webapp --from-image=namespace1/image:latest
# Add an image trigger to a stateful set on the main container
oc set triggers statefulset/db --from-image=namespace1/image:latest -c main
2.6.1.141. oc set volumes
Update volumes on a pod template
Example usage
# List volumes defined on all deployment configs in the current project
oc set volume dc --all
# Add a new empty dir volume to deployment config (dc) 'myapp' mounted under
# /var/lib/myapp
oc set volume dc/myapp --add --mount-path=/var/lib/myapp
# Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1'
oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite
# Remove volume 'v1' from deployment config 'myapp'
oc set volume dc/myapp --remove --name=v1
# Create a new persistent volume claim that overwrites an existing volume 'v1'
oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite
# Change the mount point for volume 'v1' to /data
oc set volume dc/myapp --add --name=v1 -m /data --overwrite
# Modify the deployment config by removing volume mount "v1" from container "c1"
# (and by removing the volume "v1" if no other containers have volume mounts that reference it)
oc set volume dc/myapp --remove --name=v1 --containers=c1
# Add new volume based on a more complex volume source (AWS EBS, GCE PD,
# Ceph, Gluster, NFS, ISCSI, ...)
oc set volume dc/myapp --add -m /data --source=<json-string>
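Each `--add` invocation writes a matched pair of entries into the pod template: a `volumes` entry describing the source and a `volumeMounts` entry in the container. A sketch of the result of mounting PVC 'pvc1' as volume 'v1' at /data (the container name is illustrative; field layout is the standard Kubernetes API):

```yaml
# Pod template fragment after adding volume 'v1' backed by PVC 'pvc1'
spec:
  containers:
  - name: myapp
    volumeMounts:
    - name: v1
      mountPath: /data
  volumes:
  - name: v1
    persistentVolumeClaim:
      claimName: pvc1
```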
2.6.1.142. oc start-build
Start a new build
Example usage
# Starts build from build config "hello-world"
oc start-build hello-world
# Starts build from a previous build "hello-world-1"
oc start-build --from-build=hello-world-1
# Use the contents of a directory as build input
oc start-build hello-world --from-dir=src/
# Send the contents of a Git repository to the server from tag 'v2'
oc start-build hello-world --from-repo=../hello-world --commit=v2
# Start a new build for build config "hello-world" and watch the logs until the build
# completes or fails
oc start-build hello-world --follow
# Start a new build for build config "hello-world" and wait until the build completes. It
# exits with a non-zero return code if the build fails
oc start-build hello-world --wait
2.6.1.143. oc status
Show an overview of the current project
Example usage
# See an overview of the current project
oc status
# Export the overview of the current project in an svg file
oc status -o dot | dot -T svg -o project.svg
# See an overview of the current project including details for any identified issues
oc status --suggest
2.6.1.144. oc tag
Tag existing images into image streams
Example usage
# Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby' with tag 'tip'
oc tag openshift/ruby:2.0 yourproject/ruby:tip
# Tag a specific image
oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip
# Tag an external container image
oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip
# Tag an external container image and request pullthrough for it
oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local
# Tag an external container image and include the full manifest list
oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal
# Remove the specified spec tag from an image stream
oc tag openshift/origin-control-plane:latest -d
2.6.1.145. oc version
Print the client and server version information
Example usage
# Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context
oc version
# Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format
oc version --output json
# Print the OpenShift client version information for the current context
oc version --client
2.6.1.146. oc wait
Experimental: Wait for a specific condition on one or many resources
Example usage
# Wait for the pod "busybox1" to contain the status condition of type "Ready"
oc wait --for=condition=Ready pod/busybox1
# The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)
oc wait --for=condition=Ready=false pod/busybox1
# Wait for the pod "busybox1" to contain the status phase to be "Running"
oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1
# Wait for pod "busybox1" to be Ready
oc wait --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' pod/busybox1
# Wait for the service "loadbalancer" to have ingress
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer
# Wait for the secret "busybox1" to be created, with a timeout of 30s
oc create secret generic busybox1
oc wait --for=create secret/busybox1 --timeout=30s
# Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command
oc delete pod/busybox1
oc wait --for=delete pod/busybox1 --timeout=60s
2.6.1.147. oc whoami
Return information about the current session
Example usage
# Display the currently authenticated user
oc whoami
2.7. OpenShift CLI administrator command reference
This reference provides descriptions and example commands for OpenShift CLI (oc) administrator commands. You must have cluster-admin or equivalent permissions to use these commands.
For developer commands, see the OpenShift CLI developer command reference.
Run oc adm -h to list all administrator commands, or run oc <command> --help to request additional details for a specific command.
2.7.1. OpenShift CLI (oc) administrator commands
2.7.1.1. oc adm build-chain
Output the inputs and dependencies of your builds
Example usage
# Build the dependency tree for the 'latest' tag in <image-stream>
oc adm build-chain <image-stream>
# Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility
oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg
# Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace
oc adm build-chain <image-stream> -n test --all
2.7.1.2. oc adm catalog mirror
Mirror an operator-registry catalog
Example usage
# Mirror an operator-registry image and its contents to a registry
oc adm catalog mirror quay.io/my/image:latest myregistry.com
# Mirror an operator-registry image and its contents to a particular namespace in a registry
oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace
# Mirror to an airgapped registry by first mirroring to files
oc adm catalog mirror quay.io/my/image:latest file:///local/index
oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com
# Configure a cluster to use a mirrored registry
oc apply -f manifests/imageDigestMirrorSet.yaml
# Edit the mirroring mappings and mirror with "oc image mirror" manually
oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com
oc image mirror -f manifests/mapping.txt
# Delete all ImageDigestMirrorSets generated by oc adm catalog mirror
oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true
2.7.1.3. oc adm certificate approve
Approve a certificate signing request
Example usage
# Approve CSR 'csr-sqgzp'
oc adm certificate approve csr-sqgzp
2.7.1.4. oc adm certificate deny
Deny a certificate signing request
Example usage
# Deny CSR 'csr-sqgzp'
oc adm certificate deny csr-sqgzp
2.7.1.5. oc adm copy-to-node
Copy specified files to a node
Example usage
# Copy a new bootstrap kubeconfig file to node-0
oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0
2.7.1.6. oc adm cordon
Mark a node as unschedulable
Example usage
# Mark node "foo" as unschedulable
oc adm cordon foo
2.7.1.7. oc adm create-bootstrap-project-template
Create a bootstrap project template
Example usage
# Output a bootstrap project template in YAML format to stdout
oc adm create-bootstrap-project-template -o yaml
2.7.1.8. oc adm create-error-template
Create an error page template
Example usage
# Output a template for the error page to stdout
oc adm create-error-template
2.7.1.9. oc adm create-login-template
Create a login template
Example usage
# Output a template for the login page to stdout
oc adm create-login-template
2.7.1.10. oc adm create-provider-selection-template
Create a provider selection template
Example usage
# Output a template for the provider selection page to stdout
oc adm create-provider-selection-template
2.7.1.11. oc adm drain
Drain a node in preparation for maintenance
Example usage
# Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it
oc adm drain foo --force
# As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes
oc adm drain foo --grace-period=900
2.7.1.12. oc adm groups add-users
Add users to a group
Example usage
# Add user1 and user2 to my-group
oc adm groups add-users my-group user1 user2
2.7.1.13. oc adm groups new
Create a new group
Example usage
# Add a group with no users
oc adm groups new my-group
# Add a group with two users
oc adm groups new my-group user1 user2
# Add a group with one user and shorter output
oc adm groups new my-group user1 -o name
2.7.1.14. oc adm groups prune
Remove old OpenShift groups referencing missing records from an external provider
Example usage
# Prune all orphaned groups
oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Prune all orphaned groups except the ones from the denylist file
oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Prune all orphaned groups from a list of specific groups specified in an allowlist file
oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Prune all orphaned groups from a list of specific groups specified in a list
oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm
2.7.1.15. oc adm groups remove-users
Remove users from a group
Example usage
# Remove user1 and user2 from my-group
oc adm groups remove-users my-group user1 user2
2.7.1.16. oc adm groups sync
Sync OpenShift groups with records from an external provider
Example usage
# Sync all groups with an LDAP server
oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Sync all groups except the ones from the blacklist file with an LDAP server
oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Sync specific groups specified in an allowlist file with an LDAP server
oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm
# Sync all OpenShift groups that have been synced previously with an LDAP server
oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Sync specific OpenShift groups if they have been synced previously with an LDAP server
oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm
2.7.1.17. oc adm inspect
Collect debugging data for a given resource
Example usage
# Collect debugging data for the "openshift-apiserver" clusteroperator
oc adm inspect clusteroperator/openshift-apiserver
# Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators
oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver
# Collect debugging data for all clusteroperators
oc adm inspect clusteroperator
# Collect debugging data for all clusteroperators and clusterversions
oc adm inspect clusteroperators,clusterversions
2.7.1.18. oc adm migrate icsp
Update imagecontentsourcepolicy files to imagedigestmirrorset files
Example usage
# Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory
oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir
2.7.1.19. oc adm migrate template-instances
Update template instances to point to the latest group-version-kinds
Example usage
# Perform a dry-run of updating all objects
oc adm migrate template-instances
# To actually perform the update, the confirm flag must be appended
oc adm migrate template-instances --confirm
2.7.1.20. oc adm must-gather
Launch a new instance of a pod for gathering debug information
Example usage
# Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand>
oc adm must-gather
# Gather information with a specific local folder to copy to
oc adm must-gather --dest-dir=/local/directory
# Gather audit information
oc adm must-gather -- /usr/bin/gather_audit_logs
# Gather information using multiple plug-in images
oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather
# Gather information using a specific image stream plug-in
oc adm must-gather --image-stream=openshift/must-gather:latest
# Gather information using a specific image, command, and pod directory
oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh
2.7.1.21. oc adm new-project
Create a new project
Example usage
# Create a new project using a node selector
oc adm new-project myproject --node-selector='type=user-node,region=east'
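The node selector supplied to `oc adm new-project` is recorded on the resulting namespace as an annotation (annotation name per OpenShift convention; shown here as a sketch):

```yaml
# Namespace created by the command above
apiVersion: v1
kind: Namespace
metadata:
  name: myproject
  annotations:
    openshift.io/node-selector: type=user-node,region=east
```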
2.7.1.22. oc adm node-image create
Create an ISO image for booting the nodes to be added to the target cluster
Example usage
# Create the ISO image and download it in the current folder
oc adm node-image create
# Use a different assets folder
oc adm node-image create --dir=/tmp/assets
# Specify a custom image name
oc adm node-image create -o=my-node.iso
# In place of an ISO, creates files that can be used for PXE boot
oc adm node-image create --pxe
# Create an ISO to add a single node without using the configuration file
oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb
# Create an ISO to add a single node with a root device hint and without
# using the configuration file
oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda
2.7.1.23. oc adm node-image monitor
Monitor new nodes being added to an OpenShift cluster
Example usage
# Monitor a single node being added to a cluster
oc adm node-image monitor --ip-addresses 192.168.111.83
# Monitor multiple nodes being added to a cluster by separating each
# IP address with a comma
oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84
2.7.1.24. oc adm node-logs
Display and filter node logs
Example usage
# Show kubelet logs from all control plane nodes
oc adm node-logs --role master -u kubelet
# See what logs are available in control plane nodes in /var/log
oc adm node-logs --role master --path=/
# Display cron log file from all control plane nodes
oc adm node-logs --role master --path=cron
2.7.1.25. oc adm ocp-certificates monitor-certificates
Watch platform certificates
Example usage
# Watch platform certificates
oc adm ocp-certificates monitor-certificates
2.7.1.26. oc adm ocp-certificates regenerate-leaf
Regenerate client and serving certificates of an OpenShift cluster
Example usage
# Regenerate a leaf certificate contained in a particular secret
oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key
2.7.1.27. oc adm ocp-certificates regenerate-machine-config-server-serving-cert
Regenerate the machine config operator certificates in an OpenShift cluster
Example usage
# Regenerate the MCO certs without modifying user-data secrets
oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false
# Update the user-data secrets to use new MCS certs
oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server
2.7.1.28. oc adm ocp-certificates regenerate-top-level
Regenerate the top-level certificates in an OpenShift cluster
Example usage
# Regenerate the signing certificate contained in a particular secret
oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key
2.7.1.29. oc adm ocp-certificates remove-old-trust
Remove old CAs from ConfigMaps representing platform trust bundles in an OpenShift cluster
Example usage
# Remove a trust bundle contained in a particular config map
oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z
# Remove only CA certificates created before a certain date from all trust bundles
oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z
2.7.1.30. oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server
Update user-data secrets in an OpenShift cluster to use updated MCO certs
Example usage
# Regenerate the MCO certs without modifying user-data secrets
oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false
# Update the user-data secrets to use new MCS certs
oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server
2.7.1.31. oc adm policy add-cluster-role-to-group
Add a role to groups for all projects in the cluster
Example usage
# Add the 'cluster-admin' cluster role to the 'cluster-admins' group
oc adm policy add-cluster-role-to-group cluster-admin cluster-admins
2.7.1.32. oc adm policy add-cluster-role-to-user
Add a role to users for all projects in the cluster
Example usage
# Add the 'system:build-strategy-docker' cluster role to the 'devuser' user
oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser
2.7.1.33. oc adm policy add-role-to-user
Add a role to users or service accounts for the current project
Example usage
# Add the 'view' role to user1 for the current project
oc adm policy add-role-to-user view user1
# Add the 'edit' role to serviceaccount1 for the current project
oc adm policy add-role-to-user edit -z serviceaccount1
2.7.1.34. oc adm policy add-scc-to-group
Add a security context constraint to groups
Example usage
# Add the 'restricted' security context constraint to group1 and group2
oc adm policy add-scc-to-group restricted group1 group2
2.7.1.35. oc adm policy add-scc-to-user
Add a security context constraint to users or a service account
Example usage
# Add the 'restricted' security context constraint to user1 and user2
oc adm policy add-scc-to-user restricted user1 user2
# Add the 'privileged' security context constraint to serviceaccount1 in the current namespace
oc adm policy add-scc-to-user privileged -z serviceaccount1
2.7.1.36. oc adm policy remove-cluster-role-from-group
Remove a role from groups for all projects in the cluster
Example usage
# Remove the 'cluster-admin' cluster role from the 'cluster-admins' group
oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins
2.7.1.37. oc adm policy remove-cluster-role-from-user
Remove a role from users for all projects in the cluster
Example usage
# Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user
oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser
2.7.1.38. oc adm policy scc-review
Check which service account can create a pod
Example usage
# Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml
# The service account specified in the my_resource.yaml file is ignored
oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml
# Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml
oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml
# Check whether the service account specified in my_resource_with_sa.yaml can admit the pod
oc adm policy scc-review -f my_resource_with_sa.yaml
# Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml
oc adm policy scc-review -f myresource_with_no_sa.yaml
2.7.1.39. oc adm policy scc-subject-review
Check whether a user or a service account can create a pod
Example usage
# Check whether user bob can create a pod specified in myresource.yaml
oc adm policy scc-subject-review -u bob -f myresource.yaml
# Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml
oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml
# Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod
oc adm policy scc-subject-review -f myresourcewithsa.yaml
2.7.1.40. oc adm prune builds
Remove old completed and failed builds
Example usage
# Dry run deleting older completed and failed builds and also including
# all builds whose associated build config no longer exists
oc adm prune builds --orphans
# To actually perform the prune operation, the confirm flag must be appended
oc adm prune builds --orphans --confirm
2.7.1.41. oc adm prune deployments
Remove old completed and failed deployment configs
Example usage
# Dry run deleting all but the last complete deployment for every deployment config
oc adm prune deployments --keep-complete=1
# To actually perform the prune operation, the confirm flag must be appended
oc adm prune deployments --keep-complete=1 --confirm
2.7.1.42. oc adm prune groups
Remove old OpenShift groups referencing missing records from an external provider
Example usage
# Prune all orphaned groups
oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Prune all orphaned groups except the ones from the denylist file
oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Prune all orphaned groups from a list of specific groups specified in an allowlist file
oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm
# Prune all orphaned groups from a list of specific groups specified in a list
oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm
2.7.1.43. oc adm prune images
Remove unreferenced images
Example usage
# See what the prune command would delete if only images and their referrers were more than an hour old
# and obsoleted by 3 newer revisions under the same tag were considered
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m
# To actually perform the prune operation, the confirm flag must be appended
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm
# See what the prune command would delete if we are interested in removing images
# exceeding currently set limit ranges ('openshift.io/Image')
oc adm prune images --prune-over-size-limit
# To actually perform the prune operation, the confirm flag must be appended
oc adm prune images --prune-over-size-limit --confirm
# Force the insecure HTTP protocol with the particular registry host name
oc adm prune images --registry-url=http://registry.example.org --confirm
# Force a secure connection with a custom certificate authority to the particular registry host name
oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm
2.7.1.44. oc adm prune renderedmachineconfigs
Prune rendered MachineConfigs in an OpenShift cluster
Example usage
# See what the prune command would delete if run with no options
oc adm prune renderedmachineconfigs
# To actually perform the prune operation, the confirm flag must be appended
oc adm prune renderedmachineconfigs --confirm
# See what the prune command would delete if run on the worker MachineConfigPool
oc adm prune renderedmachineconfigs --pool-name=worker
# Prunes 10 oldest rendered MachineConfigs in the cluster
oc adm prune renderedmachineconfigs --count=10 --confirm
# Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool
oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm
2.7.1.45. oc adm prune renderedmachineconfigs list
List rendered MachineConfigs in an OpenShift cluster
Example usage
# List all rendered MachineConfigs for the worker MachineConfigPool in the cluster
oc adm prune renderedmachineconfigs list --pool-name=worker
# List all rendered MachineConfigs in use by the cluster's MachineConfigPools
oc adm prune renderedmachineconfigs list --in-use
2.7.1.46. oc adm reboot-machine-config-pool
Initiate reboot of the specified MachineConfigPool
Example usage
# Reboot all MachineConfigPools
oc adm reboot-machine-config-pool mcp/worker mcp/master
# Reboot all MachineConfigPools that inherit from worker. This includes all custom MachineConfigPools and infra.
oc adm reboot-machine-config-pool mcp/worker
# Reboot masters
oc adm reboot-machine-config-pool mcp/master
2.7.1.47. oc adm release extract
Extract the contents of an update payload to disk
Example usage
# Use git to check out the source code for the current cluster release to DIR
oc adm release extract --git=DIR
# Extract cloud credential requests for AWS
oc adm release extract --credentials-requests --cloud=aws
# Use git to check out the source code for the current cluster release to DIR from linux/s390x image
# Note: Wildcard filter is not supported; pass a single os/arch to extract
oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x
2.7.1.48. oc adm release info
Display information about a release
Example usage
# Show information about the cluster's current release
oc adm release info
# Show the source code that comprises a release
oc adm release info 4.11.2 --commit-urls
# Show the source code difference between two releases
oc adm release info 4.11.0 4.11.2 --commits
# Show where the images referenced by the release are located
oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs
# Show information about linux/s390x image
# Note: Wildcard filter is not supported; pass a single os/arch to extract
oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x
2.7.1.49. oc adm release mirror
Mirror a release to a different image registry location
Example usage
# Perform a dry run showing what would be mirrored, including the mirror objects
oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \
--release-image-signature-to-dir /tmp/releases --dry-run
# Mirror a release into the current directory
oc adm release mirror 4.11.0 --to file://openshift/release \
--release-image-signature-to-dir /tmp/releases
# Mirror a release to another directory in the default location
oc adm release mirror 4.11.0 --to-dir /tmp/releases
# Upload a release from the current directory to another server
oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \
--release-image-signature-to-dir /tmp/releases
# Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster
oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \
--to=registry.example.com/your/repository --apply-release-image-signature
2.7.1.50. oc adm release new
Create a new OpenShift release
Example usage
# Create a release from the latest origin images and push to a DockerHub repository
oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest
# Create a new release with updated metadata from a previous release
oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \
--previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest
# Create a new release and override a single image
oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \
cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest
# Run a verification pass to ensure the release can be reproduced
oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11
2.7.1.51. oc adm restart-kubelet
Restart the kubelet on the specified nodes
Example usage
# Restart all the nodes, 10% at a time
oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig
# Restart all the nodes, 20 nodes at a time
oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig
# Restart all the nodes, 15% at a time
oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig
# Restart all the masters at the same time
oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig
2.7.1.52. oc adm taint
Update the taints on one or more nodes
Example usage
# Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'
# If a taint with that key and effect already exists, its value is replaced as specified
oc adm taint nodes foo dedicated=special-user:NoSchedule
# Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists
oc adm taint nodes foo dedicated:NoSchedule-
# Remove from node 'foo' all the taints with key 'dedicated'
oc adm taint nodes foo dedicated-
# Add a taint with key 'dedicated' on nodes having label myLabel=X
oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule
# Add to node 'foo' a taint with key 'bar' and no value
oc adm taint nodes foo bar:NoSchedule
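Taints applied with `oc adm taint` land in the node's `spec.taints` list (standard Kubernetes node API). For the first example above, the node spec would contain roughly:

```yaml
# Node spec fragment after:
#   oc adm taint nodes foo dedicated=special-user:NoSchedule
spec:
  taints:
  - key: dedicated
    value: special-user
    effect: NoSchedule
```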
2.7.1.53. oc adm top images
Show usage statistics for images
Example usage
# Show usage statistics for images
oc adm top images
2.7.1.54. oc adm top imagestreams
Show usage statistics for image streams
Example usage
# Show usage statistics for image streams
oc adm top imagestreams
2.7.1.55. oc adm top node
Display resource (CPU/memory) usage of nodes
Example usage
# Show metrics for all nodes
oc adm top node
# Show metrics for a given node
oc adm top node NODE_NAME
2.7.1.56. oc adm top persistentvolumeclaims
Experimental: Show usage statistics for bound persistentvolumeclaims
Example usage
# Show usage statistics for all the bound persistentvolumeclaims across the cluster
oc adm top persistentvolumeclaims -A
# Show usage statistics for all the bound persistentvolumeclaims in a specific namespace
oc adm top persistentvolumeclaims -n default
# Show usage statistics for specific bound persistentvolumeclaims
oc adm top persistentvolumeclaims database-pvc app-pvc -n default
2.7.1.57. oc adm top pod
Display resource (CPU/memory) usage of pods
Example usage
# Show metrics for all pods in the default namespace
oc adm top pod
# Show metrics for all pods in the given namespace
oc adm top pod --namespace=NAMESPACE
# Show metrics for a given pod and its containers
oc adm top pod POD_NAME --containers
# Show metrics for the pods defined by label name=myLabel
oc adm top pod -l name=myLabel
2.7.1.58. oc adm uncordon
Mark a node as schedulable
Example usage
# Mark node "foo" as schedulable
oc adm uncordon foo
2.7.1.59. oc adm upgrade
Upgrade a cluster or adjust the upgrade channel
Example usage
# View the update status and available cluster updates
oc adm upgrade
# Update to the latest version
oc adm upgrade --to-latest=true
2.7.1.60. oc adm verify-image-signature
Verify the image identity contained in the image signature
Example usage
# Verify the image signature and identity using the local GPG keychain
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
--expected-identity=registry.local:5000/foo/bar:v1
# Verify the image signature and identity using the local GPG keychain and save the status
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
--expected-identity=registry.local:5000/foo/bar:v1 --save
# Verify the image signature and identity via exposed registry route
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \
--expected-identity=registry.local:5000/foo/bar:v1 \
--registry-url=docker-registry.foo.com
# Remove all signature verifications from the image
oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all
2.7.1.61. oc adm wait-for-node-reboot 复制链接链接已复制到粘贴板!
在运行 oc adm reboot-machine-config-pool 后等待节点重新引导
用法示例
# Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master'
oc adm wait-for-node-reboot nodes --all
# Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master'
oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master
# Wait for masters to complete a specific reboot
oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4
2.7.1.62. oc adm wait-for-stable-cluster 复制链接链接已复制到粘贴板!
等待平台 operator 变得稳定
用法示例
# Wait for all cluster operators to become stable
oc adm wait-for-stable-cluster
# Consider operators to be stable if they report as such for 5 minutes straight
oc adm wait-for-stable-cluster --minimum-stable-period 5m
第 3 章 odo 的重要更新 复制链接链接已复制到粘贴板!
红帽没有在 Red Hat OpenShift Service on AWS 文档站点上提供有关 odo 的信息。请参阅由红帽维护的文档,以及上游社区的与 odo 相关的文档。
对于由上游社区维护的内容,红帽根据合作社区支持(Cooperative Community Support)提供支持。
第 4 章 用于 OpenShift Serverless 的 Knative CLI 复制链接链接已复制到粘贴板!
Knative (kn) CLI 在 Red Hat OpenShift Service on AWS 上启用了与 Knative 组件的简单交互。
4.1. 主要特性 复制链接链接已复制到粘贴板!
Knative (kn) CLI 旨在使无服务器计算任务简单明确。Knative CLI 的主要功能包括:
- 从命令行部署无服务器应用程序。
- 管理 Knative Serving 的功能,如服务、修订和流量分割。
- 创建和管理 Knative Eventing 组件,如事件源和触发器。
- 创建 sink 绑定来连接现有的 Kubernetes 应用程序和 Knative 服务。
- 使用类似于 kubectl CLI 的灵活插件架构扩展 Knative CLI。
- 为 Knative 服务配置自动扩展(autoscaling)参数。
- 脚本化使用,如等待一个操作的结果,或部署自定义推出和回滚策略。
4.2. 安装 Knative CLI 复制链接链接已复制到粘贴板!
请参阅安装 Knative CLI。
第 5 章 Pipelines CLI (tkn) 复制链接链接已复制到粘贴板!
5.1. 安装 tkn 复制链接链接已复制到粘贴板!
通过 CLI 工具从终端管理 Red Hat OpenShift Pipelines。下面的部分论述了如何在不同的平台中安装 CLI 工具。
您也可以从 Red Hat OpenShift Service on AWS Web 控制台找到最新二进制文件的 URL,方法是单击右上角的 ? 图标,然后选择 Command Line Tools。
在 ARM 硬件上运行 Red Hat OpenShift Pipelines 只是一个技术预览功能。技术预览功能不受红帽产品服务等级协议(SLA)支持,且功能可能并不完整。红帽不推荐在生产环境中使用它们。这些技术预览功能可以使用户提早试用新的功能,并有机会在开发阶段提供反馈意见。
有关红帽技术预览功能支持范围的更多信息,请参阅技术预览功能支持范围。
归档和 RPM 都包含以下可执行文件:
- tkn
- tkn-pac
- opc
使用 opc CLI 工具运行 Red Hat OpenShift Pipelines 只是一个技术预览功能。技术预览功能不受红帽产品服务等级协议(SLA)支持,且功能可能并不完整。红帽不推荐在生产环境中使用它们。这些技术预览功能可以使用户提早试用新的功能,并有机会在开发阶段提供反馈意见。
有关红帽技术预览功能支持范围的更多信息,请参阅技术预览功能支持范围。
5.1.1. 在 Linux 上安装 Red Hat OpenShift Pipelines CLI 复制链接链接已复制到粘贴板!
对于 Linux 发行版,您可以将 CLI 下载为 tar.gz 存档。
流程
下载相关的 CLI 工具。
解包存档:
$ tar xvzf <file>
将 tkn 和 tkn-pac 文件的位置添加到 PATH 环境变量中。
要查看您的 PATH,请运行以下命令:
$ echo $PATH
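上述"加入 PATH"的步骤可以用下面的 shell 片段示意。这只是一个最小草图:其中的 ~/bin/tkn 是假设的占位脚本,仅用于演示流程,实际操作时应使用解压出的 tkn 二进制文件。

```shell
# 最小示意:用假设的占位脚本演示把可执行文件放入 PATH 的流程
# 实际操作时应使用解压出的真实 tkn 二进制文件
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho "Client version: 0.0.0"\n' > "$HOME/bin/tkn"
chmod +x "$HOME/bin/tkn"

# 将该目录加入 PATH(如需持久生效,可写入 ~/.bashrc)
export PATH="$HOME/bin:$PATH"

# 确认 shell 能找到可执行文件
command -v tkn
tkn version
```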
对于 Red Hat Enterprise Linux (RHEL) 版本 8,您可以使用 RPM 安装 Red Hat OpenShift Pipelines CLI。
先决条件
- 您的红帽帐户上已有有效的 Red Hat OpenShift Service on AWS 订阅。
- 您在本地系统中有 root 或者 sudo 权限。
流程
使用 Red Hat Subscription Manager 注册:
# subscription-manager register获取最新的订阅数据:
# subscription-manager refresh列出可用的订阅:
# subscription-manager list --available --matches '*pipelines*'在上一命令的输出中,找到 Red Hat OpenShift Service on AWS 订阅的池 ID,并把订阅附加到注册的系统:
# subscription-manager attach --pool=<pool_id>启用 Red Hat OpenShift Pipelines 所需的仓库:
Linux (x86_64, amd64)
# subscription-manager repos --enable="pipelines-1.18-for-rhel-8-x86_64-rpms"Linux on IBM Z® 和 IBM® LinuxONE (s390x)
# subscription-manager repos --enable="pipelines-1.18-for-rhel-8-s390x-rpms"Linux on IBM Power® (ppc64le)
# subscription-manager repos --enable="pipelines-1.18-for-rhel-8-ppc64le-rpms"Linux on ARM (aarch64, arm64)
# subscription-manager repos --enable="pipelines-1.18-for-rhel-8-aarch64-rpms"
安装
openshift-pipelines-client软件包:# yum install openshift-pipelines-client
安装 CLI 后,就可以使用 tkn 命令:
$ tkn version
5.1.3. 在 Windows 上安装 Red Hat OpenShift Pipelines CLI 复制链接链接已复制到粘贴板!
对于 Windows,您可以将 CLI 下载为 zip 存档。
流程
- 下载 CLI 工具。
- 使用 ZIP 程序解压存档。
- 将 tkn 和 tkn-pac 文件的位置添加到 PATH 环境变量中。
- 要查看您的 PATH,请运行以下命令:
C:\> path
5.1.4. 在 macOS 上安装 Red Hat OpenShift Pipelines CLI 复制链接链接已复制到粘贴板!
对于 macOS,您可以将 CLI 下载为 tar.gz 存档。
流程
下载相关的 CLI 工具。
- 解包并提取存档。
- 将 tkn 和 tkn-pac 文件的位置添加到 PATH 环境变量中。
- 要查看您的 PATH,请运行以下命令:
$ echo $PATH
5.2. 配置 OpenShift Pipelines tkn CLI 复制链接链接已复制到粘贴板!
配置 Red Hat OpenShift Pipelines tkn CLI 以启用 tab 自动完成功能。
5.2.1. 启用 tab 自动完成功能 复制链接链接已复制到粘贴板!
安装 tkn CLI 后,可以启用 tab 自动完成功能,以便在按 Tab 键时自动完成 tkn 命令或显示建议选项。
先决条件
- 已安装 tkn CLI。
- 本地系统中已安装 bash-completion。
流程
以下过程为 Bash 启用 tab 自动完成功能。
将 Bash 完成代码保存到一个文件中:
$ tkn completion bash > tkn_bash_completion将文件复制到
/etc/bash_completion.d/:$ sudo cp tkn_bash_completion /etc/bash_completion.d/您也可以将文件保存到一个本地目录,并从您的
.bashrc文件中 source 这个文件。
开新终端时 tab 自动完成功能将被启用。
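上述"保存到本地目录并从 .bashrc 中 source"的做法可以用以下草图示意。注意:这里写入的补全文件内容是假设的简化版本,真实文件应由 tkn completion bash 生成:

```shell
# 最小示意:写入一个假设的简化补全文件并在 bash 中 source 它
# 真实文件应由 `tkn completion bash` 生成,此处的 complete -W 仅作演示
cat > /tmp/tkn_bash_completion <<'EOF'
complete -W "pipeline pipelinerun task taskrun version" tkn
EOF

# 在 bash 中 source 该文件并确认补全规则已注册
bash -c 'source /tmp/tkn_bash_completion && complete -p tkn'
```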
5.3. OpenShift Pipelines tkn 参考 复制链接链接已复制到粘贴板!
本节列出了基本的 tkn CLI 命令。
5.3.1. 基本语法 复制链接链接已复制到粘贴板!
tkn [command or options] [arguments…]
5.3.2. 全局选项 复制链接链接已复制到粘贴板!
--help, -h
5.3.3. 工具命令 复制链接链接已复制到粘贴板!
5.3.3.1. tkn 复制链接链接已复制到粘贴板!
tkn CLI 的主命令。
示例: 显示所有选项
$ tkn
5.3.3.2. completion [shell] 复制链接链接已复制到粘贴板!
输出 shell 完成代码,必须经过评估方可提供互动完成。支持的 shell 是 bash 和 zsh。
示例:bash shell 完成代码
$ tkn completion bash
5.3.3.3. version 复制链接链接已复制到粘贴板!
输出 tkn CLI 的版本信息。
示例: 检查 tkn 版本
$ tkn version
5.3.4. Pipelines 管理命令 复制链接链接已复制到粘贴板!
5.3.4.1. pipeline 复制链接链接已复制到粘贴板!
管理管道。
示例: 显示帮助信息
$ tkn pipeline --help
5.3.4.2. pipeline delete 复制链接链接已复制到粘贴板!
删除管道。
示例:从命名空间中删除 mypipeline 管道
$ tkn pipeline delete mypipeline -n myspace
5.3.4.3. pipeline describe 复制链接链接已复制到粘贴板!
描述管道。
示例:描述 mypipeline 管道
$ tkn pipeline describe mypipeline
5.3.4.4. pipeline list 复制链接链接已复制到粘贴板!
显示管道列表。
示例:显示管道列表
$ tkn pipeline list
5.3.4.5. pipeline logs 复制链接链接已复制到粘贴板!
显示特定管道的日志。
示例:以流的形式显示 mypipeline 管道的实时日志
$ tkn pipeline logs -f mypipeline
5.3.4.6. pipeline start 复制链接链接已复制到粘贴板!
启动管道。
示例:启动 mypipeline 管道
$ tkn pipeline start mypipeline
5.3.5. pipeline run 命令 复制链接链接已复制到粘贴板!
5.3.5.1. pipelinerun 复制链接链接已复制到粘贴板!
管理管道运行。
示例: 显示帮助信息
$ tkn pipelinerun -h
5.3.5.2. pipelinerun cancel 复制链接链接已复制到粘贴板!
取消管道运行。
示例:取消命名空间中的 mypipelinerun 管道运行
$ tkn pipelinerun cancel mypipelinerun -n myspace
5.3.5.3. pipelinerun delete 复制链接链接已复制到粘贴板!
删除管道运行。
示例:从命名空间中删除 mypipelinerun1 和 mypipelinerun2 管道运行
$ tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace
示例:删除命名空间中的所有管道运行,只保留最近执行的五个管道运行
$ tkn pipelinerun delete -n myspace --keep 5
- 1
- 将 5 替换为您要保留的最近执行的管道运行数量。
示例:删除所有管道运行
$ tkn pipelinerun delete --all
从 Red Hat OpenShift Pipelines 1.6 开始,tkn pipelinerun delete --all 命令不会删除处于 running 状态的任何资源。
5.3.5.4. pipelinerun describe 复制链接链接已复制到粘贴板!
描述管道运行。
示例:描述命名空间中的 mypipelinerun 管道运行
$ tkn pipelinerun describe mypipelinerun -n myspace
5.3.5.5. pipelinerun list 复制链接链接已复制到粘贴板!
列出管道运行。
示例:列出命名空间中的管道运行
$ tkn pipelinerun list -n myspace
5.3.5.6. pipelinerun logs 复制链接链接已复制到粘贴板!
显示管道运行的日志。
示例:显示 mypipelinerun 管道运行的日志,其中包含命名空间中的所有任务和步骤
$ tkn pipelinerun logs mypipelinerun -a -n myspace
5.3.6. 任务管理命令 复制链接链接已复制到粘贴板!
5.3.6.1. task 复制链接链接已复制到粘贴板!
管理任务。
示例: 显示帮助信息
$ tkn task -h
5.3.6.2. task delete 复制链接链接已复制到粘贴板!
删除任务。
示例:从命名空间中删除 mytask1 和 mytask2 任务
$ tkn task delete mytask1 mytask2 -n myspace
5.3.6.3. task describe 复制链接链接已复制到粘贴板!
描述任务。
示例:描述命名空间中的 mytask 任务
$ tkn task describe mytask -n myspace
5.3.6.4. task list 复制链接链接已复制到粘贴板!
列出任务。
示例: 列出命名空间中的所有任务
$ tkn task list -n myspace
5.3.6.5. task logs 复制链接链接已复制到粘贴板!
显示任务日志。
示例:显示 mytask 任务的 mytaskrun 任务运行的日志
$ tkn task logs mytask mytaskrun -n myspace
5.3.6.6. task start 复制链接链接已复制到粘贴板!
启动一个任务。
示例: 在命名空间中启动 mytask 任务
$ tkn task start mytask -s <ServiceAccountName> -n myspace
5.3.7. task run 命令 复制链接链接已复制到粘贴板!
5.3.7.1. taskrun 复制链接链接已复制到粘贴板!
管理任务运行。
示例: 显示帮助信息
$ tkn taskrun -h
5.3.7.2. taskrun cancel 复制链接链接已复制到粘贴板!
取消任务运行。
示例:取消命名空间中的 mytaskrun 任务运行
$ tkn taskrun cancel mytaskrun -n myspace
5.3.7.3. taskrun delete 复制链接链接已复制到粘贴板!
删除一个 TaskRun。
示例:从命名空间中删除 mytaskrun1 和 mytaskrun2 任务运行
$ tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace
示例:删除命名空间中的所有任务运行,只保留最近执行的五个任务运行
$ tkn taskrun delete -n myspace --keep 5
- 1
- 将 5 替换为您要保留的最近执行的任务运行数量。
5.3.7.4. taskrun describe 复制链接链接已复制到粘贴板!
描述任务运行。
示例:描述命名空间中的 mytaskrun 任务运行
$ tkn taskrun describe mytaskrun -n myspace
5.3.7.5. taskrun list 复制链接链接已复制到粘贴板!
列出任务运行。
示例:列出命名空间中的所有任务运行
$ tkn taskrun list -n myspace
5.3.7.6. taskrun logs 复制链接链接已复制到粘贴板!
显示任务运行日志.
示例:显示命名空间中 mytaskrun 任务运行的实时日志
$ tkn taskrun logs -f mytaskrun -n myspace
5.3.8. 条件管理命令 复制链接链接已复制到粘贴板!
5.3.8.1. condition 复制链接链接已复制到粘贴板!
管理条件(Condition)。
示例: 显示帮助信息
$ tkn condition --help
5.3.8.2. condition delete 复制链接链接已复制到粘贴板!
删除一个条件。
示例:从命名空间中删除 mycondition1 Condition
$ tkn condition delete mycondition1 -n myspace
5.3.8.3. condition describe 复制链接链接已复制到粘贴板!
描述条件。
示例:在命名空间中描述 mycondition1 Condition
$ tkn condition describe mycondition1 -n myspace
5.3.8.4. condition list 复制链接链接已复制到粘贴板!
列出条件。
示例: 列出命名空间中的条件
$ tkn condition list -n myspace
5.3.9. Pipeline 资源管理命令 复制链接链接已复制到粘贴板!
5.3.9.1. resource 复制链接链接已复制到粘贴板!
管理管道资源。
示例: 显示帮助信息
$ tkn resource -h
5.3.9.2. resource create 复制链接链接已复制到粘贴板!
创建一个 Pipeline 资源。
示例: 在命名空间中创建一个 Pipeline 资源
$ tkn resource create -n myspace
这是一个交互式命令,它要求输入资源名称、资源类型以及基于资源类型的值。
5.3.9.3. resource delete 复制链接链接已复制到粘贴板!
删除 Pipeline 资源。
示例:从命名空间中删除 myresource Pipeline 资源
$ tkn resource delete myresource -n myspace
5.3.9.4. resource describe 复制链接链接已复制到粘贴板!
描述管道资源。
示例:描述 myresource Pipeline 资源
$ tkn resource describe myresource -n myspace
5.3.9.5. resource list 复制链接链接已复制到粘贴板!
列出管道资源。
示例: 列出命名空间中的所有管道资源
$ tkn resource list -n myspace
5.3.10. ClusterTask 管理命令 复制链接链接已复制到粘贴板!
在 Red Hat OpenShift Pipelines 1.10 中,tkn 命令行工具的 ClusterTask 功能已弃用,计划在以后的发行版本中删除。
5.3.10.1. clustertask 复制链接链接已复制到粘贴板!
管理 ClusterTasks。
示例: 显示帮助信息
$ tkn clustertask --help
5.3.10.2. clustertask delete 复制链接链接已复制到粘贴板!
删除集群中的 ClusterTask 资源。
示例: 删除 mytask1 和 mytask2 ClusterTasks
$ tkn clustertask delete mytask1 mytask2
5.3.10.3. clustertask describe 复制链接链接已复制到粘贴板!
描述 ClusterTask。
示例: 描述 mytask ClusterTask
$ tkn clustertask describe mytask1
5.3.10.4. clustertask list 复制链接链接已复制到粘贴板!
列出 ClusterTasks。
示例: 列出 ClusterTasks
$ tkn clustertask list
5.3.10.5. clustertask start 复制链接链接已复制到粘贴板!
启动 ClusterTasks。
示例: 启动 mytask ClusterTask
$ tkn clustertask start mytask
5.3.11. 触发器管理命令 复制链接链接已复制到粘贴板!
5.3.11.1. eventlistener 复制链接链接已复制到粘贴板!
管理 EventListeners。
示例: 显示帮助信息
$ tkn eventlistener -h
5.3.11.2. eventlistener delete 复制链接链接已复制到粘贴板!
删除一个 EventListener。
示例:删除命令空间中的 mylistener1 和 mylistener2 EventListeners
$ tkn eventlistener delete mylistener1 mylistener2 -n myspace
5.3.11.3. eventlistener describe 复制链接链接已复制到粘贴板!
描述 EventListener。
示例:描述命名空间中的 mylistener EventListener
$ tkn eventlistener describe mylistener -n myspace
5.3.11.4. eventlistener list 复制链接链接已复制到粘贴板!
列出 EventListeners。
示例: 列出命名空间中的所有 EventListeners
$ tkn eventlistener list -n myspace
5.3.11.5. eventlistener logs 复制链接链接已复制到粘贴板!
显示 EventListener 的日志。
示例: 在一个命名空间中显示 mylistener EventListener 的日志
$ tkn eventlistener logs mylistener -n myspace
5.3.11.6. triggerbinding 复制链接链接已复制到粘贴板!
管理 TriggerBindings。
示例: 显示 TriggerBindings 帮助信息
$ tkn triggerbinding -h
5.3.11.7. triggerbinding delete 复制链接链接已复制到粘贴板!
删除 TriggerBinding。
示例:删除一个命名空间中的 mybinding1 和 mybinding2 TriggerBindings
$ tkn triggerbinding delete mybinding1 mybinding2 -n myspace
5.3.11.8. triggerbinding describe 复制链接链接已复制到粘贴板!
描述 TriggerBinding。
示例:描述命名空间中的 mybinding TriggerBinding
$ tkn triggerbinding describe mybinding -n myspace
5.3.11.9. triggerbinding list 复制链接链接已复制到粘贴板!
列出 TriggerBindings。
示例: 列出命名空间中的所有 TriggerBindings
$ tkn triggerbinding list -n myspace
5.3.11.10. triggertemplate 复制链接链接已复制到粘贴板!
管理 TriggerTemplates。
示例: 显示 TriggerTemplate 帮助
$ tkn triggertemplate -h
5.3.11.11. triggertemplate delete 复制链接链接已复制到粘贴板!
删除 TriggerTemplate。
示例:删除命名空间中的 mytemplate1 和 mytemplate2 TriggerTemplates
$ tkn triggertemplate delete mytemplate1 mytemplate2 -n myspace
5.3.11.12. triggertemplate describe 复制链接链接已复制到粘贴板!
描述 TriggerTemplate。
示例: 描述命名空间中的 mytemplate TriggerTemplate
$ tkn triggertemplate describe mytemplate -n myspace
5.3.11.13. triggertemplate list 复制链接链接已复制到粘贴板!
列出 TriggerTemplates。
示例: 列出命名空间中的所有 TriggerTemplates
$ tkn triggertemplate list -n myspace
5.3.11.14. clustertriggerbinding 复制链接链接已复制到粘贴板!
管理 ClusterTriggerBindings。
示例: 显示 ClusterTriggerBindings 帮助信息
$ tkn clustertriggerbinding -h
5.3.11.15. clustertriggerbinding delete 复制链接链接已复制到粘贴板!
删除 ClusterTriggerBinding。
示例: 删除 myclusterbinding1 和 myclusterbinding2 ClusterTriggerBindings
$ tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2
5.3.11.16. clustertriggerbinding describe 复制链接链接已复制到粘贴板!
描述 ClusterTriggerBinding。
示例: 描述 myclusterbinding ClusterTriggerBinding
$ tkn clustertriggerbinding describe myclusterbinding
5.3.11.17. clustertriggerbinding list 复制链接链接已复制到粘贴板!
列出 ClusterTriggerBindings。
示例: 列出所有 ClusterTriggerBindings
$ tkn clustertriggerbinding list
5.3.12. hub 互动命令 复制链接链接已复制到粘贴板!
与 Tekton Hub 交互,以获取任务和管道等资源。
5.3.12.1. hub 复制链接链接已复制到粘贴板!
与 hub 交互。
示例: 显示帮助信息
$ tkn hub -h
示例:与 hub API 服务器交互
$ tkn hub --api-server https://api.hub.tekton.dev
对于每个示例,若要获取对应的子命令和标记,请运行 tkn hub <command> --help。
5.3.12.2. hub downgrade 复制链接链接已复制到粘贴板!
对一个安装的资源进行降级。
示例:将 mynamespace 命名空间中的 mytask 任务降级到它的较旧版本
$ tkn hub downgrade task mytask --to version -n mynamespace
5.3.12.3. hub get 复制链接链接已复制到粘贴板!
按名称、类型、目录和版本获取资源清单。
示例:从 tekton 目录中获取 myresource 管道或任务的特定版本的清单
$ tkn hub get [pipeline | task] myresource --from tekton --version version
5.3.12.4. hub info 复制链接链接已复制到粘贴板!
按名称、类型、目录和版本显示资源的信息。
示例:显示 tekton 目录中有关 mytask 任务的特定版本的信息
$ tkn hub info task mytask --from tekton --version version
5.3.12.5. hub install 复制链接链接已复制到粘贴板!
按类型、名称和版本从目录安装资源。
示例:从 mynamespace 命名空间中的 tekton 目录安装 mytask 任务的特定版本
$ tkn hub install task mytask --from tekton --version version -n mynamespace
5.3.12.6. hub reinstall 复制链接链接已复制到粘贴板!
按类型和名称重新安装资源。
示例:从 mynamespace 命名空间中的 tekton 目录重新安装 mytask 任务的特定版本
$ tkn hub reinstall task mytask --from tekton --version version -n mynamespace
5.3.12.7. hub search 复制链接链接已复制到粘贴板!
按名称、类型和标签组合搜索资源。
示例:搜索带有标签 cli的资源
$ tkn hub search --tags cli
5.3.12.8. hub upgrade 复制链接链接已复制到粘贴板!
升级已安装的资源。
示例:将 mynamespace 命名空间中安装的 mytask 任务升级到新版本
$ tkn hub upgrade task mytask --to version -n mynamespace
第 6 章 opm CLI 复制链接链接已复制到粘贴板!
6.1. 安装 opm CLI 复制链接链接已复制到粘贴板!
6.1.1. 关于 opm CLI 复制链接链接已复制到粘贴板!
opm CLI 工具由 Operator Framework 提供,用于 Operator 捆绑格式。您可以通过此工具从与软件存储库类似的 Operator 捆绑包列表中创建和维护 Operator 目录。其结果是一个容器镜像,它可以存储在容器的 registry 中,然后安装到集群中。
目录包含一个指向 Operator 清单内容的指针数据库,可以通过运行容器镜像时提供的内置 API 进行查询。在 Red Hat OpenShift Service on AWS 上,Operator Lifecycle Manager (OLM)可以引用由 CatalogSource 对象定义的目录源中的镜像,并定期轮询该镜像,以对集群上安装的 Operator 进行更新。
6.1.2. 安装 opm CLI 复制链接链接已复制到粘贴板!
您可以在您的 Linux、macOS 或者 Windows 工作站上安装 opm CLI 工具。
先决条件
对于 Linux,您必须提供以下软件包。RHEL 8 满足这些要求:
- podman 版本 1.9.3+(推荐版本 2.0+)
- glibc 版本 2.28+
流程
- 进入到 OpenShift 镜像站点并下载与您的操作系统匹配的 tarball 的最新版本。
解包存档。
对于 Linux 或者 macOS:
$ tar xvf <file>- 对于 Windows,使用 ZIP 程序解压存档。
将文件放在
PATH中的任何位置。对于 Linux 或者 macOS:
检查
PATH:$ echo $PATH移动文件。例如:
$ sudo mv ./opm /usr/local/bin/
对于 Windows:
检查
PATH:C:\> path移动文件:
C:\> move opm.exe <directory>
验证
安装
opmCLI 后,验证是否可用:$ opm version
6.2. opm CLI 参考 复制链接链接已复制到粘贴板!
opm 命令行界面 (CLI) 是用于创建和维护 Operator 目录的工具。
opm CLI 语法
$ opm <command> [<subcommand>] [<argument>] [<flags>]
opm CLI 不向前兼容。用于生成目录内容的 opm CLI 版本必须早于或等于用于在集群中提供内容的版本。
| 标记 | 描述 |
|---|---|
| --skip-tls-verify | 在拉取捆绑包或索引时跳过容器镜像 registry 的 TLS 证书验证。 |
| --use-http | 在拉取捆绑包时,将普通 HTTP 用于容器镜像 registry。 |
基于 SQLite 的目录格式(包括相关的 CLI 命令)是一个弃用的功能。弃用的功能仍然包含在 Red Hat OpenShift Service on AWS 中,并且仍然被支持。但是,弃用的功能可能会在以后的发行版本中被删除,且不建议在新的部署中使用。
有关 Red Hat OpenShift Service on AWS 中已弃用或删除的主要功能的最新列表,请参阅 Red Hat OpenShift Service on AWS 发行注记中已弃用和删除的功能 部分。
6.2.1. generate 复制链接链接已复制到粘贴板!
为声明性配置索引生成各种工件。
命令语法
$ opm generate <subcommand> [<flags>]
| 子命令 | 描述 |
|---|---|
| dockerfile | 为声明性配置索引生成 Dockerfile。 |

| 标记 | 描述 |
|---|---|
| -h, --help | 生成帮助信息。 |
6.2.1.1. dockerfile 复制链接链接已复制到粘贴板!
为声明性配置索引生成 Dockerfile。
此命令在与 <dcRootDir> 相同的目录中创建用于构建索引的 Dockerfile(名为 <dcDirName>.Dockerfile)。如果已存在同名的 Dockerfile,这个命令会失败。
当指定额外标签时,如果存在重复的键,则只有每个重复键的最后一个值会添加到生成的 Dockerfile 中。
命令语法
$ opm generate dockerfile <dcRootDir> [<flags>]
| 标记 | 描述 |
|---|---|
| -i, --binary-image | 要构建目录的镜像。默认值为 quay.io/operator-framework/opm:latest。 |
| -l, --label | 生成的 Dockerfile 中包含的额外标签。标签的格式为 key=value。 |
| -h, --help | Dockerfile 帮助。 |
要使用官方红帽镜像构建,请在 -i 标志中使用 registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4 值。
6.2.2. index 复制链接链接已复制到粘贴板!
从预先存在的 Operator 捆绑包中为 SQLite 数据库格式容器镜像生成 Operator 索引。
从 Red Hat OpenShift Service on AWS 4.11 开始,默认的红帽提供的 Operator 目录以基于文件的目录格式发布。Red Hat OpenShift Service on AWS 4.6 到 4.10 的默认红帽提供的 Operator 目录则以已弃用的 SQLite 数据库格式发布。
与 SQLite 数据库格式相关的 opm 子命令、标志和功能已被弃用,并将在以后的版本中删除。这些功能仍被支持,且必须用于使用已弃用的 SQLite 数据库格式的目录。
许多 opm 子命令和标志(如 opm index prune)只适用于 SQLite 数据库格式,无法用于基于文件的目录格式。
命令语法
$ opm index <subcommand> [<flags>]
| 子命令 | 描述 |
|---|---|
|
| 将 Operator 捆绑包添加到索引中。 |
|
| 修剪除指定软件包以外的所有索引。 |
|
| 修剪索引中没有与特定镜像关联的孤立(stranded)捆绑包。 |
|
| 从索引中删除整个 Operator。 |
6.2.2.1. add 复制链接链接已复制到粘贴板!
将 Operator 捆绑包添加到索引中。
命令语法
$ opm index add [<flags>]
| 标记 | 描述 |
|---|---|
| -i, --binary-image | on-image opm 命令的容器镜像。 |
| -u, --build-tool | 构建容器镜像的工具:podman(默认值)或 docker。 |
| -b, --bundles | 要添加的捆绑包的逗号分隔列表。 |
| -c, --container-tool | 与容器镜像交互的工具,如保存和构建:podman 或 docker。 |
| -f, --from-index | 要添加到的上一个索引。 |
| --generate | 如果启用,则仅创建 Dockerfile 并将其保存到本地磁盘。 |
| --mode | 图形更新模式,用来定义频道图形如何被更新:replaces(默认值)、semver 或 semver-skippatch。 |
| -d, --out-dockerfile | 可选:如果生成 Dockerfile,请指定一个文件名。 |
| --permissive | 允许 registry 加载错误。 |
| -p, --pull-tool | 拉取容器镜像的工具:none(默认值)、docker 或 podman。 |
| -t, --tag | 正在构建的容器镜像的自定义标签。 |
6.2.2.2. prune 复制链接链接已复制到粘贴板!
修剪除指定软件包以外的所有索引。
命令语法
$ opm index prune [<flags>]
| 标记 | 描述 |
|---|---|
| -i, --binary-image | on-image opm 命令的容器镜像。 |
| -c, --container-tool | 与容器镜像交互的工具,如保存和构建:podman 或 docker。 |
| -f, --from-index | 要修剪的索引。 |
| --generate | 如果启用,则仅创建 Dockerfile 并将其保存到本地磁盘。 |
| -d, --out-dockerfile | 可选:如果生成 Dockerfile,请指定一个文件名。 |
| -p, --packages | 要保留的软件包的逗号分隔列表。 |
| --permissive | 允许 registry 加载错误。 |
| -t, --tag | 正在构建的容器镜像的自定义标签。 |
6.2.2.3. prune-stranded 复制链接链接已复制到粘贴板!
修剪索引中没有与特定镜像关联的孤立(stranded)捆绑包。
命令语法
$ opm index prune-stranded [<flags>]
| 标记 | 描述 |
|---|---|
| -i, --binary-image | on-image opm 命令的容器镜像。 |
| -c, --container-tool | 与容器镜像交互的工具,如保存和构建:podman 或 docker。 |
| -f, --from-index | 要修剪的索引。 |
| --generate | 如果启用,则仅创建 Dockerfile 并将其保存到本地磁盘。 |
| -d, --out-dockerfile | 可选:如果生成 Dockerfile,请指定一个文件名。 |
| -p, --packages | 要保留的软件包的逗号分隔列表。 |
| --permissive | 允许 registry 加载错误。 |
| -t, --tag | 正在构建的容器镜像的自定义标签。 |
6.2.2.4. rm 复制链接链接已复制到粘贴板!
从索引中删除整个 Operator。
命令语法
$ opm index rm [<flags>]
| 标记 | 描述 |
|---|---|
|
|
on-image |
|
|
构建容器镜像的工具: |
|
|
与容器镜像交互的工具,如保存和构建: |
|
| 要从中删除的上一个索引。 |
|
| 如果启用,则仅创建 Dockerfile 并将其保存到本地磁盘。 |
|
| 要删除的用逗号分开的 Operator 列表。 |
|
| 可选:如果生成 Dockerfile,请指定一个文件名。 |
|
| 要保留的软件包用逗号隔开。 |
|
| 允许 registry 加载错误。 |
|
|
拉取容器镜像的工具: |
|
| 正在构建的容器镜像的自定义标签。 |
6.2.3. init 复制链接链接已复制到粘贴板!
生成 olm.package 声明性配置 blob。
命令语法
$ opm init <package_name> [<flags>]
| 标记 | 描述 |
|---|---|
| -c, --default-channel | 如果未指定频道,订阅默认使用的频道。 |
| -d, --description | Operator 的 README.md 或其他文档的路径。 |
| -i, --icon | 软件包图标的路径。 |
| -o, --output | 输出格式:json(默认值)或 yaml。 |
6.2.4. migrate 复制链接链接已复制到粘贴板!
将 SQLite 数据库格式索引镜像或数据库文件迁移到基于文件的目录。
基于 SQLite 的目录格式(包括相关的 CLI 命令)是一个弃用的功能。弃用的功能仍然包含在 Red Hat OpenShift Service on AWS 中,并且仍然被支持。但是,弃用的功能可能会在以后的发行版本中被删除,且不建议在新的部署中使用。
有关 Red Hat OpenShift Service on AWS 中已弃用或删除的主要功能的最新列表,请参阅 Red Hat OpenShift Service on AWS 发行注记中已弃用和删除的功能 部分。
命令语法
$ opm migrate <index_ref> <output_dir> [<flags>]
| 标记 | 描述 |
|---|---|
| -o, --output | 输出格式:json(默认值)或 yaml。 |
6.2.5. render 复制链接链接已复制到粘贴板!
从提供的索引镜像、捆绑包镜像和 SQLite 数据库文件生成声明性配置 blob。
命令语法
$ opm render <index_image | bundle_image | sqlite_file> [<flags>]
| 标记 | 描述 |
|---|---|
| -o, --output | 输出格式:json(默认值)或 yaml。 |
6.2.6. serve 复制链接链接已复制到粘贴板!
通过 GRPC 服务器提供声明配置。
声明性配置目录在启动时由 serve 命令加载。此命令启动后对声明性配置所做的更改不会反映在所提供的内容中。
命令语法
$ opm serve <source_path> [<flags>]
| 标记 | 描述 |
|---|---|
| --cache-dir | 如果设置了此标志,它会同步并保留服务器缓存目录。 |
| --cache-enforce-integrity | 如果缓存不存在或无效,则退出并显示错误。当设置了 --cache-dir 标志时,默认值为 true。 |
| --cache-only | 同步服务缓存,并在没有服务的情况下退出。 |
| --debug | 启用调试日志记录。 |
| -h, --help | 服务帮助。 |
| -p, --port | 服务的端口号。默认值为 50051。 |
| --pprof-addr | 启动性能分析端点的地址。格式为 Addr:Port。 |
| --termination-log | 容器终止日志文件的路径。默认值为 /dev/termination-log。 |
6.2.7. validate 复制链接链接已复制到粘贴板!
验证给定目录中声明性配置 JSON 文件。
命令语法
$ opm validate <directory> [<flags>]
第 7 章 ROSA CLI 复制链接链接已复制到粘贴板!
7.1. ROSA CLI 入门 复制链接链接已复制到粘贴板!
7.1.1. 关于 ROSA CLI 复制链接链接已复制到粘贴板!
使用 ROSA 命令行界面(CLI) (rosa)创建、更新、管理和删除 Red Hat OpenShift Service on AWS 集群和资源。
7.1.2. 设置 ROSA CLI 复制链接链接已复制到粘贴板!
使用以下步骤在安装主机上安装和配置 ROSA CLI (rosa)。
流程
安装和配置最新的 AWS CLI (
aws)。按照 AWS 命令行界面文档为您的操作系统安装和配置 AWS CLI。
在
.aws/credentials文件中指定aws_access_key_id、aws_secret_access_key和region。请参阅 AWS 文档中的 AWS 配置基础知识。注意您可以选择使用
AWS_DEFAULT_REGION环境变量设置默认 AWS 区域。查询 AWS API 以验证是否已安装并配置了 AWS CLI:
$ aws sts get-caller-identity --output text输出示例
<aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>
-
从 OpenShift Cluster Manager 上的 Downloads 页面下载您的操作系统的 ROSA CLI (
rosa)的最新版本。 从下载的存档中提取
rosa二进制文件。以下示例从 Linux tar 归档中提取二进制文件:$ tar xvf rosa-linux.tar.gz在您的路径中添加
rosa。在以下示例中,/usr/local/bin目录包含在用户的路径中:$ sudo mv rosa /usr/local/bin/rosa通过查询
rosa版本来验证 ROSA CLI 是否已正确安装:$ rosa version输出示例
1.2.15 Your ROSA CLI is up to date.可选:为 ROSA CLI 启用 tab 自动完成功能。启用 tab 自动完成功能后,您可以按
Tab键两次来自动完成子命令并接收命令建议:在 Linux 主机上为 Bash 启用持久性 tab 自动完成功能:
为 Bash 生成
rosatab 自动完成配置文件,并将其保存到/etc/bash_completion.d/目录中:# rosa completion bash > /etc/bash_completion.d/rosa- 打开一个新的终端来激活配置。
在 macOS 主机上为 Bash 启用持久性 tab 自动完成功能:
为 Bash 生成
rosatab 自动完成配置文件,并将其保存到/usr/local/etc/bash_completion.d/目录中:$ rosa completion bash > /usr/local/etc/bash_completion.d/rosa- 打开一个新的终端来激活配置。
为 Zsh 启用持久性标签页自动完成功能:
如果没有为您的 Zsh 环境启用 tab 自动完成功能,请运行以下命令启用它:
$ echo "autoload -U compinit; compinit" >> ~/.zshrc为 Zsh 生成
rosatab 自动完成配置文件,并将其保存到功能路径中的第一个目录中:$ rosa completion zsh > "${fpath[1]}/_rosa"- 打开一个新的终端来激活配置。
为 fish 启用持久的 tab 自动完成功能:
为 fish 生成
rosatab 自动完成配置文件,并将其保存到~/.config/fish/completions/目录中:$ rosa completion fish > ~/.config/fish/completions/rosa.fish- 打开一个新的终端来激活配置。
为 PowerShell 启用持久性标签页自动完成功能:
为 PowerShell 生成
rosatab 自动完成配置文件,并将它保存到名为rosa.ps1的文件中:PS> rosa completion powershell | Out-String | Invoke-Expression-
Source 来自您的 PowerShell 配置集中的
rosa.ps1文件。
注意有关配置
rosatab 自动完成的更多信息,请参阅 帮助菜单,运行rosa completion --help命令。
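在脚本化使用时,常常需要解析上述流程中 aws sts get-caller-identity --output text 的输出来提取 AWS 账户 ID。下面是一个假设的草图,用示例字符串代替真实的 aws 调用:

```shell
# 最小示意:解析 `aws sts get-caller-identity --output text` 形式的输出
# 这里用假设的示例字符串代替真实调用;输出的第一列即 AWS 账户 ID
sample_output='123456789012 arn:aws:iam::123456789012:user/alice AIDAEXAMPLEID'
account_id=$(printf '%s\n' "$sample_output" | awk '{print $1}')
echo "$account_id"
```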
7.1.3. 配置 ROSA CLI 复制链接链接已复制到粘贴板!
使用以下命令配置 ROSA 命令行界面(CLI) (rosa)。
7.1.3.1. login 复制链接链接已复制到粘贴板!
您可以使用几种方法使用 ROSA 命令行界面(CLI) (rosa)登录您的红帽帐户。下面详细介绍这些方法。
7.1.3.1.1. 使用红帽单点登录验证 ROSA CLI 复制链接链接已复制到粘贴板!
您可以使用红帽单点登录登录到 ROSA CLI (rosa)。红帽建议使用红帽单点登录(而不是离线身份验证令牌)向 rosa 命令行工具进行身份验证。
离线身份验证令牌长期存在,存储在您的操作系统上,且无法撤销。这些因素会增加整体安全风险以及未授权访问您的帐户的可能性。
相反,使用红帽单点登录方法进行身份验证时,会自动向您的 rosa 实例发送一个有效期为 10 小时的刷新令牌。这种唯一的临时授权代码可增强安全性,并降低未授权访问的风险。
使用红帽单点登录进行身份验证的方法不会破坏依赖于离线令牌的现有自动化。红帽建议将 服务帐户 用于自动化目的。如果您仍然需要将离线令牌用于自动化或其他目的,您可以从 OpenShift Cluster Manager API Token 页面下载 OpenShift Cluster Manager API 令牌。
使用以下方法之一验证:
- 如果您的系统有 Web 浏览器,请参阅"使用单点登录授权代码验证 ROSA CLI"部分,以使用红帽单点登录进行身份验证。
- 如果您在没有 Web 浏览器的容器、远程主机或其他环境中工作,请参阅"使用单点登录设备代码验证 ROSA CLI"部分。
- 要使用离线令牌验证 ROSA CLI,请参阅"使用离线令牌验证 ROSA CLI"部分。
ROSA CLI (rosa)版本 1.2.36 或更高版本支持单点登录授权。
7.1.3.1.2. 使用单点登录授权代码验证 ROSA CLI 复制链接链接已复制到粘贴板!
要使用 Red Hat 单点登录授权代码登录到 ROSA CLI (
rosa),请运行以下命令:语法
$ rosa login --use-auth-code运行此命令会将您重定向到 Red Hat Single login-on 登录。使用您的 Red Hat 登录或电子邮件登录。
表 7.1. 从父命令继承的可选参数

| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
要切换帐户,请从 https://sso.redhat.com 注销,并在尝试再次登录前在终端中运行 rosa logout 命令。
7.1.3.1.3. 使用单点登录设备代码验证 ROSA CLI 复制链接链接已复制到粘贴板!
如果您在没有 Web 浏览器的容器、远程主机或其他环境中工作,可以使用红帽单点登录设备代码进行安全身份验证。要做到这一点,您必须使用带有 Web 浏览器的第二个设备来批准登录。
ROSA CLI (rosa)版本 1.2.36 或更高版本支持单点登录授权。
要使用 Red Hat 单点登录设备代码登录到 ROSA CLI (
rosa),请运行以下命令:语法
$ rosa login --use-device-code运行此命令会将您重定向到红帽 SSO 登录,并提供登录代码。
表 7.2. 从父命令继承的可选参数

| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
要切换帐户,请从 https://sso.redhat.com 注销,并在尝试再次登录前在终端中运行 rosa logout 命令。
7.1.3.1.4. 使用离线令牌验证 ROSA CLI 复制链接链接已复制到粘贴板!
登录到您的红帽帐户,将凭据保存到 rosa 配置文件。
要将离线令牌用于自动化目的,您可以从 OpenShift Cluster Manager API Token 页面下载 OpenShift Cluster Manager API 令牌。要将服务帐户用于自动化目的,请参阅 Service Accounts 页面。
红帽建议将服务帐户用于自动化目的。
要使用红帽离线令牌登录到 ROSA CLI (
rosa),请运行以下命令:语法
$ rosa login [arguments]Expand 表 7.3. 参数 选项 定义 --client-id
OpenID 客户端标识符(字符串)。默认:
cloud-services--client-secret
OpenID 客户端 secret (字符串)。
--insecure
启用与服务器的不安全通信。这禁用 TLS 证书和主机名验证。
--scope
OpenID 范围(字符串)。如果使用这个选项,它将替换默认的范围。这可以重复多次以指定多个范围。默认:
openid--token
访问或刷新令牌(字符串)。
--token-url
OpenID 令牌 URL (字符串)。默认:
https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/tokenExpand 表 7.4. 从父命令继承的可选参数 选项 定义 --help
显示此命令的帮助信息。
--debug
启用调试模式。
--profile
指定来自您的凭证文件中的 AWS 配置集(字符串)。
7.1.3.2. logout 复制链接链接已复制到粘贴板!
从 rosa 注销。注销也会移除 rosa 配置文件。
语法
$ rosa logout [arguments]
| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
| --profile | 指定来自您的凭证文件中的 AWS 配置集(字符串)。 |
7.1.3.3. 验证权限 复制链接链接已复制到粘贴板!
验证创建 Red Hat OpenShift Service on AWS 集群所需的 AWS 权限是否已正确配置:
语法
$ rosa verify permissions [arguments]
此命令只验证没有使用 AWS 安全令牌服务 (STS) 的集群的权限。
| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
| --region |
在其中运行命令的 AWS 区域(字符串)。这个值会覆盖 |
| --profile | 指定来自您的凭证文件中的 AWS 配置集(字符串)。 |
例子
验证 AWS 权限是否已正确配置:
$ rosa verify permissions
验证 AWS 权限是否在特定区域中正确配置:
$ rosa verify permissions --region=us-west-2
7.1.3.4. 验证配额 复制链接链接已复制到粘贴板!
验证您的默认区域是否正确配置了 AWS 配额。
语法
$ rosa verify quota [arguments]
| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
| --region |
在其中运行命令的 AWS 区域(字符串)。这个值会覆盖 |
| --profile | 指定来自您的凭证文件中的 AWS 配置集(字符串)。 |
例子
验证默认区域是否正确配置了 AWS 配额:
$ rosa verify quota
验证 AWS 配额是否在特定区域中正确配置:
$ rosa verify quota --region=us-west-2
7.1.3.5. 下载 rosa 复制链接链接已复制到粘贴板!
下载 rosa CLI 的最新兼容版本。
下载 rosa 后,提取存档的内容并将其添加到您的路径中。
语法
$ rosa download rosa [arguments]
| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
7.1.3.6. 下载 oc 复制链接链接已复制到粘贴板!
下载 OpenShift Container Platform CLI (oc) 的最新版本。
下载 oc 后,您必须提取存档的内容并将其添加到您的路径中。
语法
$ rosa download oc [arguments]
| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
示例
下载 oc 客户端工具:
$ rosa download oc
7.1.3.7. 验证 oc 复制链接链接已复制到粘贴板!
验证 OpenShift Container Platform CLI (oc)是否已正确安装。
语法
$ rosa verify oc [arguments]
| 选项 | 定义 |
|---|---|
| --help | 显示此命令的帮助信息。 |
| --debug | 启用调试模式。 |
示例
验证 oc 客户端工具:
$ rosa verify oc
7.1.4. 更新 ROSA CLI 复制链接链接已复制到粘贴板!
更新至 ROSA CLI 的最新兼容版本(rosa)。
流程
确认新版本的 ROSA CLI (
rosa)可用:$ rosa version输出示例
1.2.12 There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/下载 ROSA CLI 的最新兼容版本:
$ rosa download rosa此命令将名为
rosa114.tar.gz的存档下载到当前目录中。文件的确切名称取决于您的操作系统和系统架构。提取存档内容:
$ tar -xzf rosa-linux.tar.gz通过将提取的文件移至您的路径中来安装 ROSA CLI 的新版本。在以下示例中,
/usr/local/bin目录包含在用户的路径中:$ sudo mv rosa /usr/local/bin/rosa
验证
验证是否安装了新版本的 ROSA CLI。
$ rosa version输出示例
1.2.15 Your ROSA CLI is up to date.
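上面的验证步骤也可以脚本化。下面是一个假设的草图,用示例输出字符串代替真实的 rosa version 调用,演示如何在自动化中检测是否有新版本可用:

```shell
# 最小示意:根据 `rosa version` 的输出判断是否需要更新
# 这里用假设的示例输出字符串代替真实调用
version_output="1.2.12
There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/"

if printf '%s\n' "$version_output" | grep -q "newer release"; then
  echo "ROSA CLI 需要更新"
else
  echo "ROSA CLI 已是最新版本"
fi
```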
7.2. ROSA CLI 命令参考 复制链接链接已复制到粘贴板!
本参考提供了 ROSA CLI (rosa)命令的描述和示例命令。
运行 rosa -h 以列出所有命令或运行 rosa <command> --help 获取特定命令的更多详情。
7.2.1. ROSA CLI 命令 复制链接链接已复制到粘贴板!
7.2.1.1. ROSA 创建 account-roles 复制链接链接已复制到粘贴板!
在创建集群前,创建集群范围的 IAM 角色。
用法示例
# Create default account roles for ROSA clusters using STS
rosa create account-roles
# Create account roles with a specific permissions boundary
rosa create account-roles --permissions-boundary arn:aws:iam::123456789012:policy/perm-boundary
7.2.1.2. ROSA 创建管理员 复制链接链接已复制到粘贴板!
创建 admin 用户以登录到集群
用法示例
# Create an admin user to login to the cluster
rosa create admin -c mycluster -p MasterKey123
7.2.1.3. ROSA 创建自动扩展 复制链接链接已复制到粘贴板!
为集群创建自动扩展
用法示例
# Interactively create an autoscaler to a cluster named "mycluster"
rosa create autoscaler --cluster=mycluster --interactive
# Create a cluster-autoscaler where it should skip nodes with local storage
rosa create autoscaler --cluster=mycluster --skip-nodes-with-local-storage
# Create a cluster-autoscaler with log verbosity of '3'
rosa create autoscaler --cluster=mycluster --log-verbosity 3
# Create a cluster-autoscaler with total CPU constraints
rosa create autoscaler --cluster=mycluster --min-cores 10 --max-cores 100
7.2.1.4. ROSA 创建 break-glass-credential 复制链接链接已复制到粘贴板!
为集群创建一个 break glass 凭据。
用法示例
# Interactively create a break glass credential to a cluster named "mycluster"
rosa create break-glass-credential --cluster=mycluster --interactive
7.2.1.5. ROSA 创建集群 复制链接链接已复制到粘贴板!
创建集群
用法示例
# Create a cluster named "mycluster"
rosa create cluster --cluster-name=mycluster
# Create a cluster in the us-east-2 region
rosa create cluster --cluster-name=mycluster --region=us-east-2
7.2.1.6. ROSA 创建决策 复制链接链接已复制到粘贴板!
为访问请求创建一个决定
用法示例
# Create a decision for an Access Request to approve it
rosa create decision --access-request <access_request_id> --decision Approved
7.2.1.7. ROSA 创建 dns-domain 复制链接链接已复制到粘贴板!
创建 DNS 域。
用法示例
# Create DNS Domain
rosa create dns-domain
7.2.1.8. ROSA 创建 external-auth-provider 复制链接链接已复制到粘贴板!
为集群创建外部身份验证供应商。
用法示例
# Interactively create an external authentication provider to a cluster named "mycluster"
rosa create external-auth-provider --cluster=mycluster --interactive
7.2.1.9. ROSA create iamserviceaccount 复制链接链接已复制到粘贴板!
为 Kubernetes 服务帐户创建 IAM 角色
用法示例
# Create an IAM role for a service account
rosa create iamserviceaccount --cluster my-cluster --name my-app --namespace default
7.2.1.10. ROSA 创建 idp 复制链接链接已复制到粘贴板!
为集群添加 IDP
用法示例
# Add a GitHub identity provider to a cluster named "mycluster"
rosa create idp --type=github --cluster=mycluster
# Add an identity provider following interactive prompts
rosa create idp --cluster=mycluster --interactive
7.2.1.11. ROSA 创建 image-mirror 复制链接链接已复制到粘贴板!
为集群创建镜像镜像
用法示例
# Create an image mirror for cluster "mycluster"
rosa create image-mirror --cluster=mycluster \
--source=registry.example.com/team \
--mirrors=mirror.corp.com/team,backup.corp.com/team
# Create with a specific type (digest is default and only supported type)
rosa create image-mirror --cluster=mycluster \
--type=digest --source=docker.io/library \
--mirrors=internal-registry.company.com/dockerhub
7.2.1.12. ROSA create kubeletconfig 复制链接链接已复制到粘贴板!
为集群创建自定义 kubeletconfig
用法示例
# Create a custom kubeletconfig with a pod-pids-limit of 5000
rosa create kubeletconfig --cluster=mycluster --pod-pids-limit=5000
7.2.1.13. ROSA 创建 machinepool 复制链接链接已复制到粘贴板!
在集群中添加机器池
用法示例
# Interactively add a machine pool to a cluster named "mycluster"
rosa create machinepool --cluster=mycluster --interactive
# Add a machine pool mp-1 with 3 replicas of m5.xlarge to a cluster
rosa create machinepool --cluster=mycluster --name=mp-1 --replicas=3 --instance-type=m5.xlarge
# Add a machine pool mp-1 with autoscaling enabled and 3 to 6 replicas of m5.xlarge to a cluster
rosa create machinepool --cluster=mycluster --name=mp-1 --enable-autoscaling \
--min-replicas=3 --max-replicas=6 --instance-type=m5.xlarge
# Add a machine pool with labels to a cluster
rosa create machinepool -c mycluster --name=mp-1 --replicas=2 --instance-type=r5.2xlarge --labels=foo=bar,bar=baz
# Add a machine pool with spot instances to a cluster
rosa create machinepool -c mycluster --name=mp-1 --replicas=2 --instance-type=r5.2xlarge --use-spot-instances \
--spot-max-price=0.5
# Add a machine pool to a cluster and set the node drain grace period
rosa create machinepool -c mycluster --name=mp-1 --node-drain-grace-period="90 minutes"
7.2.1.14. ROSA 创建网络 复制链接链接已复制到粘贴板!
创建 AWS CloudFormation 网络堆栈
用法示例
# Create a AWS cloudformation stack
rosa create network <template-name> --param Param1=Value1 --param Param2=Value2
# ROSA quick start HCP VPC example with one availability zone
rosa create network rosa-quickstart-default-vpc --param Region=us-west-2 --param Name=quickstart-stack --param AvailabilityZoneCount=1 --param VpcCidr=10.0.0.0/16
# ROSA quick start HCP VPC example with two explicit availability zones
rosa create network rosa-quickstart-default-vpc --param Region=us-west-2 --param Name=quickstart-stack --param AZ1=us-west-2b --param AZ2=us-west-2d --param VpcCidr=10.0.0.0/16
# To delete the AWS cloudformation stack
aws cloudformation delete-stack --stack-name <name> --region <region>
# TEMPLATE_NAME:
Specifies the name of the template to use. This should match the name of a directory
under the path specified by '--template-dir' or the 'OCM_TEMPLATE_DIR' environment variable.
The directory should contain a YAML file defining the custom template structure.
If no TEMPLATE_NAME is provided, or if no matching directory is found, the default
built-in template 'rosa-quickstart-default-vpc' will be used.
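上面帮助文本中描述的 --template-dir 目录布局可以用以下草图示意。其中的目录名 my-custom-vpc 和文件名 cloudformation.yaml 均为假设,仅用于演示"模板名对应目录、目录中包含 YAML 文件"的结构:

```shell
# 最小示意:构造 --template-dir 所期望的自定义模板目录布局
# 目录名与 YAML 文件名均为假设,仅作结构演示
mkdir -p /tmp/rosa-templates/my-custom-vpc
cat > /tmp/rosa-templates/my-custom-vpc/cloudformation.yaml <<'EOF'
# 假设的 CloudFormation 模板占位内容,仅用于演示目录结构
AWSTemplateFormatVersion: "2010-09-09"
Description: Placeholder template for illustration only
EOF

# 查看布局;随后可通过 --template-dir /tmp/rosa-templates 引用该模板
find /tmp/rosa-templates -type f
```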
7.2.1.15. ROSA create ocm-role 复制链接链接已复制到粘贴板!
创建 OCM 使用的角色
用法示例
# Create default ocm role for ROSA clusters using STS
rosa create ocm-role
# Create ocm role with a specific permissions boundary
rosa create ocm-role --permissions-boundary arn:aws:iam::123456789012:policy/perm-boundary
7.2.1.16. ROSA create oidc-config 复制链接链接已复制到粘贴板!
创建与 OIDC 协议兼容的 OIDC 配置。
用法示例
# Create OIDC config
rosa create oidc-config
7.2.1.17. ROSA create oidc-provider 复制链接链接已复制到粘贴板!
为 STS 集群创建 OIDC 供应商。
用法示例
# Create OIDC provider for cluster named "mycluster"
rosa create oidc-provider --cluster=mycluster
7.2.1.18. ROSA 创建 operator-roles 复制链接链接已复制到粘贴板!
为集群创建 operator IAM 角色。
用法示例
# Create default operator roles for cluster named "mycluster"
rosa create operator-roles --cluster=mycluster
# Create operator roles with a specific permissions boundary
rosa create operator-roles -c mycluster --permissions-boundary arn:aws:iam::123456789012:policy/perm-boundary
7.2.1.19. ROSA 创建 tuning-configs 复制链接链接已复制到粘贴板!
添加调优配置
用法示例
# Add a tuning config with name "tuned1" and spec from a file "file1" to a cluster named "mycluster"
rosa create tuning-config --name=tuned1 --spec-path=file1 --cluster=mycluster
7.2.1.20. ROSA create user-role
Create a user role to verify account association.
Example usage
# Create user roles
rosa create user-role
# Create user role with a specific permissions boundary
rosa create user-role --permissions-boundary arn:aws:iam::123456789012:policy/perm-boundary
7.2.1.21. ROSA delete account-roles
Delete account roles.
Example usage
# Delete account roles
rosa delete account-roles -p prefix
7.2.1.22. ROSA delete admin
Delete the admin user.
Example usage
# Delete the admin user
rosa delete admin --cluster=mycluster
7.2.1.23. ROSA delete autoscaler
Delete the autoscaler for a cluster.
Example usage
# Delete the autoscaler config for cluster named "mycluster"
rosa delete autoscaler --cluster=mycluster
7.2.1.24. ROSA delete cluster
Delete a cluster.
Example usage
# Delete a cluster named "mycluster"
rosa delete cluster --cluster=mycluster
7.2.1.25. ROSA delete dns-domain
Delete a DNS domain.
Example usage
# Delete a DNS domain with ID github-1
rosa delete dns-domain github-1
7.2.1.26. ROSA delete external-auth-provider
Delete an external authentication provider.
Example usage
# Delete an external authentication provider named exauth-1
rosa delete external-auth-provider exauth-1 --cluster=mycluster
7.2.1.27. ROSA delete iamserviceaccount
Delete the IAM role for a Kubernetes service account.
Example usage
# Delete IAM role for service account
rosa delete iamserviceaccount --cluster my-cluster \
--name my-app \
--namespace default
7.2.1.28. ROSA delete idp
Delete a cluster identity provider (IDP).
Example usage
# Delete an identity provider named github-1
rosa delete idp github-1 --cluster=mycluster
7.2.1.29. ROSA delete image-mirror
Delete an image mirror from a cluster.
Example usage
# Delete image mirror with ID "abc123" from cluster "mycluster"
rosa delete image-mirror --cluster=mycluster abc123
# Delete without confirmation prompt
rosa delete image-mirror --cluster=mycluster abc123 --yes
# Alternative: using the --id flag
rosa delete image-mirror --cluster=mycluster --id=abc123
7.2.1.30. ROSA delete ingress
Delete a cluster ingress.
Example usage
# Delete ingress with ID a1b2 from a cluster named 'mycluster'
rosa delete ingress --cluster=mycluster a1b2
# Delete secondary ingress using the sub-domain name
rosa delete ingress --cluster=mycluster apps2
7.2.1.31. ROSA delete kubeletconfig
Delete a kubeletconfig from a cluster.
Example usage
# Delete the KubeletConfig for ROSA Classic cluster 'foo'
rosa delete kubeletconfig --cluster foo
# Delete the KubeletConfig named 'bar' from cluster 'foo'
rosa delete kubeletconfig --cluster foo --name bar
7.2.1.32. ROSA delete machinepool
Delete a machine pool.
Example usage
# Delete machine pool with ID mp-1 from a cluster named 'mycluster'
rosa delete machinepool --cluster=mycluster mp-1
7.2.1.33. ROSA delete ocm-role
Delete the OCM role.
Example usage
# Delete OCM role
rosa delete ocm-role --role-arn arn:aws:iam::123456789012:role/xxx-OCM-Role-1223456778
7.2.1.34. ROSA delete oidc-config
Delete an OIDC configuration.
Example usage
# Delete OIDC config based on registered OIDC Config ID that has been supplied
rosa delete oidc-config --oidc-config-id <oidc_config_id>
7.2.1.35. ROSA delete oidc-provider
Delete an OIDC provider.
Example usage
# Delete OIDC provider for cluster named "mycluster"
rosa delete oidc-provider --cluster=mycluster
7.2.1.36. ROSA delete operator-roles
Delete Operator roles.
Example usage
# Delete Operator roles for cluster named "mycluster"
rosa delete operator-roles --cluster=mycluster
7.2.1.37. ROSA delete tuning-configs
Delete a tuning configuration.
Example usage
# Delete tuning config with name tuned1 from a cluster named 'mycluster'
rosa delete tuning-config --cluster=mycluster tuned1
7.2.1.38. ROSA delete user-role
Delete a user role.
Example usage
# Delete user role
rosa delete user-role --role-arn {prefix}-User-{username}-Role
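The {prefix}-User-{username}-Role placeholder above expands into a full role ARN. A minimal sketch of composing that ARN in the shell (the account ID, prefix, and username values here are hypothetical examples, not values from this document):

```shell
# Compose the user-role ARN from its parts.
# All three values below are illustrative placeholders; substitute your own.
ACCOUNT_ID="123456789012"
PREFIX="ManagedOpenShift"
USERNAME="myuser"
ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/${PREFIX}-User-${USERNAME}-Role"
echo "${ROLE_ARN}"
# The composed ARN would then be passed to the delete command, e.g.:
# rosa delete user-role --role-arn "${ROLE_ARN}"
```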
7.2.1.39. ROSA describe access-request
Show details of an access request.
Example usage
# Describe an Access Request with id <access_request_id>
rosa describe access-request --id <access_request_id>
7.2.1.40. ROSA describe addon
Show details of an add-on.
Example usage
# Describe an add-on named "codeready-workspaces"
rosa describe addon codeready-workspaces
7.2.1.41. ROSA describe addon-installation
Show details of an add-on installation.
Example usage
# Describe the 'bar' add-on installation on cluster 'foo'
rosa describe addon-installation --cluster foo --addon bar
7.2.1.42. ROSA describe admin
Show details of the cluster-admin user.
Example usage
# Describe cluster-admin user of a cluster named mycluster
rosa describe admin -c mycluster
7.2.1.43. ROSA describe autoscaler
Show details of the autoscaler for a cluster.
Example usage
# Describe the autoscaler for cluster 'foo'
rosa describe autoscaler --cluster foo
7.2.1.44. ROSA describe break-glass-credential
Show details of a break glass credential on a cluster.
Example usage
# Show details of a break glass credential with ID "12345" on a cluster named "mycluster"
rosa describe break-glass-credential 12345 --cluster=mycluster
7.2.1.45. ROSA describe cluster
Show details of a cluster.
Example usage
# Describe a cluster named "mycluster"
rosa describe cluster --cluster=mycluster
7.2.1.46. ROSA describe external-auth-provider
Show details of an external authentication provider on a cluster.
Example usage
# Show details of an external authentication provider named "exauth" on a cluster named "mycluster"
rosa describe external-auth-provider exauth --cluster=mycluster
7.2.1.47. ROSA describe iamserviceaccount
Describe the IAM role for a Kubernetes service account.
Example usage
# Describe IAM role for service account
rosa describe iamserviceaccount --cluster my-cluster \
--name my-app \
--namespace default
7.2.1.48. ROSA describe ingress
Show details of the specified ingress on a cluster.
Example usage
rosa describe ingress <ingress_id> -c mycluster
7.2.1.49. ROSA describe kubeletconfig
Show details of the kubeletconfig for a cluster.
Example usage
# Describe the custom kubeletconfig for ROSA Classic cluster 'foo'
rosa describe kubeletconfig --cluster foo
# Describe the custom kubeletconfig named 'bar' for cluster 'foo'
rosa describe kubeletconfig --cluster foo --name bar
7.2.1.50. ROSA describe machinepool
Show details of a machine pool on a cluster.
Example usage
# Show details of a machine pool named "mymachinepool" on a cluster named "mycluster"
rosa describe machinepool --cluster=mycluster --machinepool=mymachinepool
7.2.1.51. ROSA describe tuning-configs
Show details of a tuning configuration.
Example usage
# Describe the 'tuned1' tuned config on cluster 'foo'
rosa describe tuning-config --cluster foo tuned1
7.2.1.52. ROSA describe upgrade
Show details of an upgrade.
Example usage
# Describe an upgrade-policy
rosa describe upgrade
7.2.1.53. ROSA download openshift-client
Download the OpenShift client tools.
Example usage
# Download oc client tools
rosa download oc
7.2.1.54. ROSA download rosa-client
Download the ROSA client tools.
Example usage
# Download rosa client tools
rosa download rosa
7.2.1.55. ROSA edit addon
Edit the parameters of an add-on installation on a cluster.
Example usage
# Edit the parameters of the Red Hat OpenShift logging operator add-on installation
rosa edit addon --cluster=mycluster cluster-logging-operator
7.2.1.56. ROSA edit autoscaler
Edit the autoscaler for a cluster.
Example usage
# Interactively edit an autoscaler to a cluster named "mycluster"
rosa edit autoscaler --cluster=mycluster --interactive
# Edit a cluster-autoscaler to skip nodes with local storage
rosa edit autoscaler --cluster=mycluster --skip-nodes-with-local-storage
# Edit a cluster-autoscaler with log verbosity of '3'
rosa edit autoscaler --cluster=mycluster --log-verbosity 3
# Edit a cluster-autoscaler with total CPU constraints
rosa edit autoscaler --cluster=mycluster --min-cores 10 --max-cores 100
7.2.1.57. ROSA edit cluster
Edit a cluster.
Example usage
# Edit a cluster named "mycluster" to make it private
rosa edit cluster -c mycluster --private
# Edit a cluster named "mycluster" to enable User Workload Monitoring
rosa edit cluster -c mycluster --disable-workload-monitoring=false
# Edit all options interactively
rosa edit cluster -c mycluster --interactive
7.2.1.58. ROSA edit image-mirror
Edit an image mirror on a cluster.
Example usage
# Update mirrors for image mirror with ID "abc123" on cluster "mycluster"
rosa edit image-mirror --cluster=mycluster abc123 \
--mirrors=mirror.corp.com/team,backup.corp.com/team,new-mirror.corp.com/team
# Alternative: using the --id flag
rosa edit image-mirror --cluster=mycluster --id=abc123 \
--mirrors=mirror.corp.com/team,backup.corp.com/team,new-mirror.corp.com/team
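The --mirrors flag takes a single comma-separated value. When maintaining a longer mirror list in a script, it can be convenient to build that value from a shell array, as in this sketch (the registry hostnames are the illustrative ones from the examples above):

```shell
# Build the comma-separated --mirrors value from a bash array.
mirrors=(mirror.corp.com/team backup.corp.com/team new-mirror.corp.com/team)
# Joining with IFS=, in a subshell leaves the caller's IFS untouched.
joined=$(IFS=,; echo "${mirrors[*]}")
echo "$joined"
# The joined value would then be passed to the edit command, e.g.:
# rosa edit image-mirror --cluster=mycluster abc123 --mirrors="$joined"
```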
7.2.1.59. ROSA edit ingress
Edit a cluster ingress (load balancer).
Example usage
# Make additional ingress with ID 'a1b2' private on a cluster named 'mycluster'
rosa edit ingress --private --cluster=mycluster a1b2
# Update the router selectors for the additional ingress with ID 'a1b2'
rosa edit ingress --label-match=foo=bar --cluster=mycluster a1b2
# Update the default ingress using the sub-domain identifier
rosa edit ingress --private=false --cluster=mycluster apps
# Update the load balancer type of the apps2 ingress
rosa edit ingress --lb-type=nlb --cluster=mycluster apps2
7.2.1.60. ROSA edit kubeletconfig
Edit the kubeletconfig for a cluster.
Example usage
# Edit a KubeletConfig to have a pod-pids-limit of 10000
rosa edit kubeletconfig --cluster=mycluster --pod-pids-limit=10000
# Edit a KubeletConfig named 'bar' to have a pod-pids-limit of 10000
rosa edit kubeletconfig --cluster=mycluster --name=bar --pod-pids-limit=10000
7.2.1.61. ROSA edit machinepool
Edit a machine pool.
Example usage
# Set 4 replicas on machine pool 'mp1' on cluster 'mycluster'
rosa edit machinepool --replicas=4 --cluster=mycluster mp1
# Enable autoscaling and set 3-5 replicas on machine pool 'mp1' on cluster 'mycluster'
rosa edit machinepool --enable-autoscaling --min-replicas=3 --max-replicas=5 --cluster=mycluster mp1
# Set the node drain grace period to 1 hour on machine pool 'mp1' on cluster 'mycluster'
rosa edit machinepool --node-drain-grace-period="1 hour" --cluster=mycluster mp1
7.2.1.62. ROSA edit tuning-configs
Edit a tuning configuration.
Example usage
# Update the tuning config with name 'tuning-1' with the spec defined in file1
rosa edit tuning-config --cluster=mycluster tuning-1 --spec-path file1
7.2.1.63. ROSA grant user
Grant a user access to a cluster.
Example usage
# Add cluster-admin role to a user
rosa grant user cluster-admin --user=myusername --cluster=mycluster
# Grant dedicated-admins role to a user
rosa grant user dedicated-admin --user=myusername --cluster=mycluster
7.2.1.64. ROSA init
Apply templates to support Red Hat OpenShift Service on AWS.
Example usage
# Configure your AWS account to allow IAM (non-STS) ROSA clusters
rosa init
# Configure a new AWS account using pre-existing OCM credentials
rosa init --token=$OFFLINE_ACCESS_TOKEN
7.2.1.65. ROSA install addon
Install an add-on on a cluster.
Example usage
# Add the CodeReady Workspaces add-on installation to the cluster
rosa install addon --cluster=mycluster codeready-workspaces
7.2.1.66. ROSA link ocm-role
Link the OCM role to a specific OCM organization.
Example usage
# Link OCM role
rosa link ocm-role --role-arn arn:aws:iam::123456789012:role/ManagedOpenshift-OCM-Role
7.2.1.67. ROSA link user-role
Link a user role to a specific OCM account.
Example usage
# Link user roles
rosa link user-role --role-arn arn:aws:iam::{accountid}:role/{prefix}-User-{username}-Role
7.2.1.68. ROSA list access-request
List access requests.
Example usage
# List all Access Requests for cluster 'foo'
rosa list access-request --cluster foo
7.2.1.69. ROSA list account-roles
List account roles and policies.
Example usage
# List all account roles
rosa list account-roles
7.2.1.70. ROSA 列表附加组件 复制链接链接已复制到粘贴板!
列出附加组件安装
用法示例
# List all add-on installations on a cluster named "mycluster"
rosa list addons --cluster=mycluster
7.2.1.71. ROSA list break-glass-credentials
List break glass credentials.
Example usage
# List all break glass credentials for a cluster named 'mycluster'
rosa list break-glass-credentials -c mycluster
7.2.1.72. ROSA list clusters
List clusters.
Example usage
# List all clusters
rosa list clusters
7.2.1.73. ROSA list dns-domain
List DNS domains.
Example usage
# List all DNS Domains tied to your organization ID
rosa list dns-domain
7.2.1.74. ROSA list external-auth-providers
List external authentication providers.
Example usage
# List all external authentication providers for a cluster named 'mycluster'
rosa list external-auth-provider -c mycluster
7.2.1.75. ROSA list gates
List available OCP gates.
Example usage
# List all OCP gates for OCP version
rosa list gates --version 4.9
# List all STS gates for OCP version
rosa list gates --gate sts --version 4.9
# List all OCP gates for OCP version
rosa list gates --gate ocp --version 4.9
# List available gates for cluster upgrade version
rosa list gates -c <cluster_id> --version 4.9.15
7.2.1.76. ROSA list iamserviceaccounts
List IAM roles for Kubernetes service accounts.
Example usage
# List IAM roles for service accounts
rosa list iamserviceaccounts --cluster my-cluster
7.2.1.77. ROSA list idps
List cluster identity providers (IDPs).
Example usage
# List all identity providers on a cluster named "mycluster"
rosa list idps --cluster=mycluster
7.2.1.78. ROSA list image-mirrors
List image mirrors on a cluster.
Example usage
# List all image mirrors on a cluster named "mycluster"
rosa list image-mirrors --cluster=mycluster
7.2.1.79. ROSA list ingresses
List cluster ingresses.
Example usage
# List all routes on a cluster named "mycluster"
rosa list ingresses --cluster=mycluster
7.2.1.80. ROSA list instance-types
List instance types.
Example usage
# List all instance types
rosa list instance-types
7.2.1.81. ROSA list kubeletconfigs
List kubeletconfigs.
Example usage
# List the kubeletconfigs for cluster 'foo'
rosa list kubeletconfig --cluster foo
7.2.1.82. ROSA list machinepools
List machine pools on a cluster.
Example usage
# List all machine pools on a cluster named "mycluster"
rosa list machinepools --cluster=mycluster
# List machine pools showing all information
rosa list machinepools --cluster=mycluster --all
7.2.1.83. ROSA list ocm-roles
List OCM roles.
Example usage
# List all ocm roles
rosa list ocm-roles
7.2.1.84. ROSA list oidc-config
List OIDC configuration resources.
Example usage
# List all OIDC Configurations tied to your organization ID
rosa list oidc-config
7.2.1.85. ROSA list oidc-providers
List OIDC providers.
Example usage
# List all oidc providers
rosa list oidc-providers
7.2.1.86. ROSA list operator-roles
List Operator roles and policies.
Example usage
# List all operator roles
rosa list operator-roles
7.2.1.87. ROSA list regions
List available regions.
Example usage
# List all available regions
rosa list regions
7.2.1.88. ROSA list tuning-configs
List tuning configurations.
Example usage
# List all tuning configurations for a cluster named 'mycluster'
rosa list tuning-configs -c mycluster
7.2.1.89. ROSA list user-roles
List user roles.
Example usage
# List all user roles
rosa list user-roles
7.2.1.90. ROSA list users
List cluster users.
Example usage
# List all users on a cluster named "mycluster"
rosa list users --cluster=mycluster
7.2.1.91. ROSA list versions
List available versions.
Example usage
# List all OpenShift versions
rosa list versions
7.2.1.92. ROSA login
Log in to your Red Hat account.
Example usage
# Login to the OpenShift API with an existing token generated from https://console.redhat.com/openshift/token/rosa
rosa login --token=$OFFLINE_ACCESS_TOKEN
7.2.1.93. ROSA logs
Show install or uninstall logs for a cluster.
Example usage
# Show install logs for a cluster named 'mycluster'
rosa logs install --cluster=mycluster
# Show uninstall logs for a cluster named 'mycluster'
rosa logs uninstall --cluster=mycluster
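Because these commands write plain text to stdout, the logs can be post-processed with standard shell tools. A small sketch of counting error lines in captured output; the sample log lines below are fabricated for illustration, not real rosa output:

```shell
# Filter previously captured install logs with standard shell tools.
# In practice you would pipe real output, e.g.:
#   rosa logs install --cluster=mycluster --tail=500 | grep 'level=error'
sample_logs='time="..." level=info msg="Creating cluster"
time="..." level=error msg="Failed to provision machine"
time="..." level=info msg="Install complete"'
# Count how many lines report an error.
printf '%s\n' "$sample_logs" | grep -c 'level=error'
```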
7.2.1.94. ROSA logs install
Show cluster install logs.
Example usage
# Show last 100 install log lines for a cluster named "mycluster"
rosa logs install mycluster --tail=100
# Show install logs for a cluster using the --cluster flag
rosa logs install --cluster=mycluster
7.2.1.95. ROSA logs uninstall
Show cluster uninstall logs.
Example usage
# Show last 100 uninstall log lines for a cluster named "mycluster"
rosa logs uninstall mycluster --tail=100
# Show uninstall logs for a cluster using the --cluster flag
rosa logs uninstall --cluster=mycluster
7.2.1.96. ROSA register oidc-config
Register an unmanaged OIDC configuration with OpenShift Cluster Manager.
Example usage
# Register OIDC config
rosa register oidc-config
7.2.1.97. ROSA revoke break-glass-credentials
Revoke break glass credentials.
Example usage
# Revoke all break glass credentials
rosa revoke break-glass-credentials --cluster=mycluster
7.2.1.98. ROSA revoke user
Revoke a role from a user.
Example usage
# Revoke cluster-admin role from a user
rosa revoke user cluster-admins --user=myusername --cluster=mycluster
# Revoke dedicated-admin role from a user
rosa revoke user dedicated-admins --user=myusername --cluster=mycluster
7.2.1.99. ROSA uninstall addon
Uninstall an add-on from a cluster.
Example usage
# Remove the CodeReady Workspaces add-on installation from the cluster
rosa uninstall addon --cluster=mycluster codeready-workspaces
7.2.1.100. ROSA unlink ocm-role
Unlink the OCM role from a specific OCM organization.
Example usage
# Unlink ocm role
rosa unlink ocm-role --role-arn arn:aws:iam::123456789012:role/ManagedOpenshift-OCM-Role
7.2.1.101. ROSA unlink user-role
Unlink a user role from a specific OCM account.
Example usage
# Unlink user role
rosa unlink user-role --role-arn arn:aws:iam::{accountid}:role/{prefix}-User-{username}-Role
7.2.1.102. ROSA upgrade account-roles
Upgrade the account-wide IAM roles to the latest version.
Example usage
# Upgrade account roles for ROSA STS clusters
rosa upgrade account-roles
7.2.1.103. ROSA upgrade cluster
Upgrade a cluster.
Example usage
# Interactively schedule an upgrade on the cluster named "mycluster"
rosa upgrade cluster --cluster=mycluster --interactive
# Schedule a cluster upgrade within the hour
rosa upgrade cluster -c mycluster --version 4.12.20
# Check if any gates need to be acknowledged prior to attempting an upgrading
rosa upgrade cluster -c mycluster --version 4.12.20 --dry-run
7.2.1.104. ROSA upgrade machinepool
Upgrade a machine pool.
Example usage
# Interactively schedule an upgrade on the cluster named "mycluster" for a machinepool named "np1"
rosa upgrade machinepool np1 --cluster=mycluster --interactive
# Schedule a machinepool upgrade within the hour
rosa upgrade machinepool np1 -c mycluster --version 4.12.20
7.2.1.105. ROSA upgrade operator-roles
Upgrade the operator IAM roles for a cluster.
Example usage
# Upgrade cluster-specific operator IAM roles
rosa upgrade operator-roles
7.2.1.106. ROSA upgrade roles
Upgrade cluster-specific IAM roles to the latest version.
Example usage
# Upgrade cluster roles for ROSA STS clusters
rosa upgrade roles -c <cluster_key>
7.2.1.107. ROSA verify network
Verify that VPC subnets are configured correctly.
Example usage
# Verify two subnets
rosa verify network --subnet-ids subnet-03046a9b92b5014fb,subnet-03046a9c92b5014fb
7.2.1.108. ROSA verify openshift-client
Verify the OpenShift client tools.
Example usage
# Verify oc client tools
rosa verify oc
7.2.1.109. ROSA verify permissions
Verify that AWS permissions are configured correctly for non-STS cluster installations.
Example usage
# Verify AWS permissions are configured correctly
rosa verify permissions
# Verify AWS permissions in a different region
rosa verify permissions --region=us-west-2
7.2.1.110. ROSA verify quota
Verify that AWS quotas are configured correctly for cluster installation.
Example usage
# Verify AWS quotas are configured correctly
rosa verify quota
# Verify AWS quotas in a different region
rosa verify quota --region=us-west-2
7.2.1.111. ROSA verify rosa-client
Verify the ROSA client tools.
Example usage
# Verify rosa client tools
rosa verify rosa
7.2.1.112. ROSA whoami
Display user account information.
Example usage
# Displays user information
rosa whoami
7.3. Minimum permissions for ROSA CLI commands
You can create roles with permissions that adhere to the principle of least privilege, in which the users assigned the roles have no permissions beyond the scope of the specific actions they need to perform. These policies contain only the minimum permissions needed to perform specific actions by using the ROSA command line interface (CLI) (rosa).
Although the policies and commands presented in this topic work in tandem, you might have other restrictions within your AWS environment that make the policies for these commands insufficient for your specific needs. Red Hat provides these examples as a baseline, assuming no other AWS Identity and Access Management (IAM) restrictions are present.
For more information about configuring permissions, policies, and roles in the AWS console, see AWS Identity and Access Management in the AWS documentation.
7.3.1. Minimum permissions for common ROSA CLI commands
The following examples show the minimum permissions needed for the most common ROSA CLI commands used when building a Red Hat OpenShift Service on AWS cluster.
7.3.1.1. Creating a managed OpenID Connect (OIDC) provider
Run the following command with the specified permissions to create a managed OIDC provider by using auto mode.
Input
$ rosa create oidc-config --mode auto
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CreateOidcConfig",
"Effect": "Allow",
"Action": [
"iam:TagOpenIDConnectProvider",
"iam:CreateOpenIDConnectProvider"
],
"Resource": "*"
}
]
}
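To apply a minimal policy like the one above, you can save the JSON to a file and create it as a customer-managed IAM policy. A sketch of that step follows; the file name and policy name are hypothetical, and the aws CLI call is commented out because it requires AWS credentials:

```shell
# Save the minimal CreateOidcConfig policy shown above to a local file.
cat > create-oidc-config-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateOidcConfig",
      "Effect": "Allow",
      "Action": [
        "iam:TagOpenIDConnectProvider",
        "iam:CreateOpenIDConnectProvider"
      ],
      "Resource": "*"
    }
  ]
}
EOF
# With credentials configured, the policy could then be created with:
# aws iam create-policy --policy-name rosa-create-oidc-config \
#   --policy-document file://create-oidc-config-policy.json
# Quick local sanity check that the required action is present:
grep -c 'iam:CreateOpenIDConnectProvider' create-oidc-config-policy.json
```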
7.3.1.2. Creating an unmanaged OpenID Connect provider
Run the following command with the specified permissions to create an unmanaged OIDC provider by using auto mode.
Input
$ rosa create oidc-config --mode auto --managed=false
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:TagOpenIDConnectProvider",
"iam:ListRoleTags",
"iam:ListRoles",
"iam:CreateOpenIDConnectProvider",
"s3:CreateBucket",
"s3:PutObject",
"s3:PutBucketTagging",
"s3:PutBucketPolicy",
"s3:PutObjectTagging",
"s3:PutBucketPublicAccessBlock",
"secretsmanager:CreateSecret",
"secretsmanager:TagResource"
],
"Resource": "*"
}
]
}
7.3.1.3. Listing your account roles
Run the following command with the specified permissions to list your account roles.
Input
$ rosa list account-roles
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListAccountRoles",
"Effect": "Allow",
"Action": [
"iam:ListRoleTags",
"iam:ListRoles"
],
"Resource": "*"
}
]
}
7.3.1.4. Listing your Operator roles
Run the following command with the specified permissions to list your Operator roles.
Input
$ rosa list operator-roles
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListOperatorRoles",
"Effect": "Allow",
"Action": [
"iam:ListRoleTags",
"iam:ListAttachedRolePolicies",
"iam:ListRoles",
"iam:ListPolicyTags"
],
"Resource": "*"
}
]
}
7.3.1.5. Listing your OIDC providers
Run the following command with the specified permissions to list your OIDC providers.
Input
$ rosa list oidc-providers
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListOidcProviders",
"Effect": "Allow",
"Action": [
"iam:ListOpenIDConnectProviders",
"iam:ListOpenIDConnectProviderTags"
],
"Resource": "*"
}
]
}
7.3.1.6. Verifying your quota
Run the following command with the specified permissions to verify your quota.
Input
$ rosa verify quota
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VerifyQuota",
"Effect": "Allow",
"Action": [
"elasticloadbalancing:DescribeAccountLimits",
"servicequotas:ListServiceQuotas"
],
"Resource": "*"
}
]
}
7.3.1.7. Deleting your managed OIDC configuration
Run the following command with the specified permissions to delete your managed OIDC configuration by using auto mode.
Input
$ rosa delete oidc-config --mode auto
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DeleteOidcConfig",
"Effect": "Allow",
"Action": [
"iam:ListOpenIDConnectProviders",
"iam:DeleteOpenIDConnectProvider"
],
"Resource": "*"
}
]
}
7.3.1.8. Deleting your unmanaged OIDC configuration
Run the following command with the specified permissions to delete your unmanaged OIDC configuration by using auto mode.
Input
$ rosa delete oidc-config --mode auto
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"iam:ListOpenIDConnectProviders",
"iam:DeleteOpenIDConnectProvider",
"secretsmanager:DeleteSecret",
"s3:ListBucket",
"s3:DeleteObject",
"s3:DeleteBucket"
],
"Resource": "*"
}
]
}
7.3.1.9. Creating a cluster
Run the following command with the specified permissions to create a Red Hat OpenShift Service on AWS cluster.
Input
$ rosa create cluster --hosted-cp
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CreateCluster",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:ListRoleTags",
"iam:ListAttachedRolePolicies",
"iam:ListRoles",
"ec2:DescribeSubnets",
"ec2:DescribeRouteTables",
"ec2:DescribeAvailabilityZones"
],
"Resource": "*"
}
]
}
7.3.1.10. Creating your account roles and Operator roles
Run the following command with the specified permissions to create your account and Operator roles by using auto mode.
Input
$ rosa create account-roles --mode auto --hosted-cp
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CreateAccountRoles",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:UpdateAssumeRolePolicy",
"iam:ListRoleTags",
"iam:GetPolicy",
"iam:TagRole",
"iam:ListRoles",
"iam:CreateRole",
"iam:AttachRolePolicy",
"iam:ListPolicyTags"
],
"Resource": "*"
}
]
}
7.3.1.11. Deleting your account roles
Run the following command with the specified permissions to delete your account roles by using auto mode.
Input
$ rosa delete account-roles --mode auto
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DeleteAccountRoles",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:ListInstanceProfilesForRole",
"iam:DetachRolePolicy",
"iam:ListAttachedRolePolicies",
"iam:ListRoles",
"iam:DeleteRole",
"iam:ListRolePolicies"
],
"Resource": "*"
}
]
}
7.3.1.12. Deleting your Operator roles
Run the following command with the specified permissions to delete your Operator roles by using auto mode.
Input
$ rosa delete operator-roles --mode auto
Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DeleteOperatorRoles",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:DetachRolePolicy",
"iam:ListAttachedRolePolicies",
"iam:ListRoles",
"iam:DeleteRole"
],
"Resource": "*"
}
]
}
7.3.2. ROSA CLI commands that do not require permissions
The following ROSA CLI commands do not require permissions or policies to run. Instead, they require a configured access key and secret key, or an attached role.
| Command | Input |
|---|---|
| List clusters | rosa list clusters |
| List versions | rosa list versions |
| Describe cluster | rosa describe cluster -c <cluster_name> |
| Create admin | rosa create admin -c <cluster_name> |
| List users | rosa list users -c <cluster_name> |
| List upgrades | rosa list upgrades -c <cluster_name> |
| List OIDC configurations | rosa list oidc-config |
| List identity providers | rosa list idps -c <cluster_name> |
| List ingresses | rosa list ingresses -c <cluster_name> |
7.4. Managing billing accounts for Red Hat OpenShift Service on AWS clusters
After you deploy a cluster, you can use the ROSA CLI (rosa) to link the cluster to the desired AWS billing account.
This is useful if you accidentally linked the wrong AWS billing account during cluster deployment, or if you simply want to update the billing account.
You can also choose to update the billing account through OpenShift Cluster Manager. For more information, see Updating billing accounts for Red Hat OpenShift Service on AWS clusters.
7.4.1. Updating the billing account for a Red Hat OpenShift Service on AWS cluster
Prerequisites
- You must have more than one AWS billing account.
- The AWS billing account that you want to link the cluster to must already be linked to the Red Hat organization where the cluster was deployed.
Procedure
Run the following command in a terminal window:
Syntax
$ rosa edit cluster -c <cluster_ID>
Replace <cluster_ID> with the ID of the cluster whose AWS billing account you want to update.
Note: To find the ID of an active cluster, run the rosa list clusters command in a terminal window.
-
Skip to the
Billing Account parameter in interactive mode.
Select the desired AWS billing account from the list of available options, then press Enter.
Your cluster's AWS billing account is now updated.
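When scripting this procedure, the cluster ID can be extracted from the tabular output of a listing command with standard tools. A sketch follows; the sample output and cluster ID below are fabricated for illustration:

```shell
# Pull a cluster ID out of 'rosa list clusters'-style tabular output.
# The sample text here is made up; in practice you would capture real output:
#   sample=$(rosa list clusters)
sample='ID                NAME       STATE
abc123xyz456      mycluster  ready'
# Select the ID column of the row whose NAME column matches.
cluster_id=$(printf '%s\n' "$sample" | awk '$2 == "mycluster" {print $1}')
echo "$cluster_id"
# The ID would then be used to start the interactive edit, e.g.:
# rosa edit cluster -c "$cluster_id"
```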
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.