Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations
With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster.
1.1. Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
1.2. Using an Argo CD instance to manage cluster-scoped resources
Do not elevate the permissions of Argo CD instances to be cluster-scoped unless you have a distinct use case that requires it. Only users with cluster-admin privileges should manage the instances you elevate. Anyone with access to the namespace of a cluster-scoped instance can elevate their privileges on the cluster and become a cluster administrator themselves.
To manage cluster-scoped resources, update the existing Subscription object for the Red Hat OpenShift GitOps Operator and add the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section.
Procedure
- In the Administrator perspective of the web console, navigate to Operators → Installed Operators → Red Hat OpenShift GitOps → Subscription.
- Click the Actions list and then click Edit Subscription.
- On the openshift-gitops-operator Subscription details page, under the YAML tab, edit the Subscription YAML file by adding the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-gitops-operator
    namespace: openshift-gitops-operator
  # ...
  spec:
    config:
      env:
      - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
        value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances>
  # ...

- Click Save and Reload.
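Alternatively, if you prefer the CLI, you can apply the same change with an oc patch command. The following is a minimal sketch only; my-argocd-namespace is a placeholder for your instance namespace, and note that a merge patch replaces the entire spec.config.env list:

  $ oc patch subscriptions.operators.coreos.com openshift-gitops-operator \
      -n openshift-gitops-operator --type merge \
      -p '{"spec":{"config":{"env":[{"name":"ARGOCD_CLUSTER_CONFIG_NAMESPACES","value":"openshift-gitops, my-argocd-namespace"}]}}}'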
To verify that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources, perform the following steps:
- Navigate to User Management → Roles and from the Filter list select Cluster-wide Roles.
- Search for argocd-application-controller by using the Search by name field. The Roles page displays the created cluster role.

  Tip: Alternatively, in the OpenShift CLI, run the following command:

  $ oc auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller

  The output yes verifies that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources. Otherwise, check your configuration and take the necessary corrective steps.
1.3. Default permissions of an Argo CD instance
By default, an Argo CD instance has the following permissions:
- The Argo CD instance has admin privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the foo namespace has admin privileges to manage resources only for that namespace.
- Argo CD has the following cluster-scoped permissions, because Argo CD requires cluster-wide read privileges on resources to function appropriately:

  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - '*'
    resources:
      - '*'
  - verbs:
      - get
      - list
    nonResourceURLs:
      - '*'

You can edit the cluster roles used by the argocd-server and argocd-application-controller components where Argo CD is running, such that the write privileges are limited to only the namespaces and resources that you wish Argo CD to manage:

  $ oc edit clusterrole argocd-server
  $ oc edit clusterrole argocd-application-controller
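For example, a narrowed rule set might keep cluster-wide read access while granting write access only to selected resource types. The following is a minimal sketch only; the apps/deployments rule is a placeholder for whatever resources you actually want Argo CD to manage:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: argocd-application-controller
  rules:
  # Keep cluster-wide read access so Argo CD can monitor resources.
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['get', 'list', 'watch']
  # Grant write access only to the resource types Argo CD should manage (placeholder rule).
  - apiGroups: ['apps']
    resources: ['deployments']
    verbs: ['create', 'update', 'patch', 'delete']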
1.4. Running the Argo CD instance at the cluster-level
The default Argo CD instance and the accompanying controllers, installed by the Red Hat OpenShift GitOps Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle.
Procedure
- Label the existing nodes:

  $ oc label node <node-name> node-role.kubernetes.io/infra=""

- Optional: If required, you can also apply taints to isolate the workloads on infrastructure nodes and prevent other workloads from scheduling on these nodes:

  $ oc adm taint nodes -l node-role.kubernetes.io/infra \
      infra=reserved:NoSchedule infra=reserved:NoExecute

- Add the runOnInfra toggle in the GitopsService custom resource:

  apiVersion: pipelines.openshift.io/v1alpha1
  kind: GitopsService
  metadata:
    name: cluster
  spec:
    runOnInfra: true

- Optional: If taints have been added to the nodes, then add tolerations to the GitopsService custom resource. Example:

  apiVersion: pipelines.openshift.io/v1alpha1
  kind: GitopsService
  metadata:
    name: cluster
  spec:
    runOnInfra: true
    tolerations:
    - effect: NoSchedule
      key: infra
      value: reserved
    - effect: NoExecute
      key: infra
      value: reserved

- Verify that the workloads in the openshift-gitops namespace are now scheduled on the infrastructure nodes by viewing Pods → Pod details for any pod in the console UI.
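Alternatively, you can check the scheduling from the CLI; the NODE column in the output shows the node that each pod is running on:

  $ oc get pods -n openshift-gitops -o wide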
Note: Any nodeSelectors and tolerations manually added to the default Argo CD custom resource are overwritten by the toggle and the tolerations in the GitopsService custom resource.
Additional resources
- To learn more about taints and tolerations, see Controlling pod placement using node taints.
- For more information on infrastructure machine sets, see Creating infrastructure machine sets.
1.5. Creating an application by using the Argo CD dashboard
Argo CD provides a dashboard which allows you to create applications.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes under a menu in the web console, and it defines a spring-petclinic namespace on the cluster.
Prerequisites
- You have logged in to the OpenShift Container Platform cluster as an administrator.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
Procedure
- In the Argo CD dashboard, click NEW APP to add a new Argo CD application.
- For this workflow, create a cluster-configs application with the following configurations:
  - Application Name: cluster-configs
  - Project: default
  - Sync Policy: Manual
  - Repository URL: https://github.com/redhat-developer/openshift-gitops-getting-started
  - Revision: HEAD
  - Path: cluster
  - Destination: https://kubernetes.default.svc
  - Namespace: spring-petclinic
  - Directory Recurse: checked
- Click CREATE to create your application.
- Open the Administrator perspective of the web console and expand Administration → Namespaces.
- Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace.
1.6. Creating an application by using the oc tool
You can create Argo CD applications in your terminal by using the oc tool.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to an Argo CD instance.
Procedure
- Download the sample application:

  $ git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git

- Create the application:

  $ oc create -f openshift-gitops-getting-started/argo/app.yaml

- Run the oc get command to review the created application:

  $ oc get application -n openshift-gitops

- Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it:

  $ oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops
1.7. Creating an application in the default mode by using the GitOps CLI
You can create applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
- You have logged in to the Argo CD instance.
Procedure
- Get the admin account password for the Argo CD server:

  $ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)

- Get the Argo CD server URL:

  $ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')

- Log in to the Argo CD server by using the admin account password and enclosing it in single quotes:

  Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.

  $ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}

  Example

  $ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing

- Verify that you are able to run argocd commands in the default mode by listing all applications:

  $ argocd app list

  If the configuration is correct, then existing applications are listed with the following header:

  Sample output

  NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET

- Create an application in the default mode:

  $ argocd app create app-cluster-configs \
      --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
      --path cluster \
      --revision main \
      --dest-server https://kubernetes.default.svc \
      --dest-namespace spring-petclinic \
      --directory-recurse \
      --sync-policy none \
      --sync-option Prune=true \
      --sync-option CreateNamespace=true

- Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:

  $ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"

- List the available applications to confirm that the application is created successfully:

  $ argocd app list

  Even though the app-cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, and it remains in the OutOfSync status.
1.8. Creating an application in core mode by using the GitOps CLI
You can create applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
- Log in to the OpenShift Container Platform cluster by using the oc CLI tool:

  $ oc login -u <username> -p <password> <server_url>

  Example

  $ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443

- Check whether the context is set correctly in the kubeconfig file:

  $ oc config current-context

- Set the default namespace of the current context to openshift-gitops:

  $ oc config set-context --current --namespace openshift-gitops

- Set the following environment variable to override the Argo CD component names:

  $ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server

- Verify that you are able to run argocd commands in core mode by listing all applications:

  $ argocd app list --core

  If the configuration is correct, then existing applications are listed with the following header:

  Sample output

  NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET

- Create an application in core mode:

  $ argocd app create app-cluster-configs --core \
      --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \
      --path cluster \
      --revision main \
      --dest-server https://kubernetes.default.svc \
      --dest-namespace spring-petclinic \
      --directory-recurse \
      --sync-policy none \
      --sync-option Prune=true \
      --sync-option CreateNamespace=true

- Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance:

  $ oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops"

- List the available applications to confirm that the application is created successfully:

  $ argocd app list --core

  Even though the app-cluster-configs Argo CD application has the Healthy status, it is not automatically synced due to its none sync policy, and it remains in the OutOfSync status.
1.9. Synchronizing your application with your Git repository
You can synchronize your application with your Git repository by modifying the synchronization policy for Argo CD. The policy modification automatically applies the changes in your cluster configurations from your Git repository to the cluster.
Procedure
- In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync. Because the application was configured with a manual sync policy, Argo CD does not sync it automatically.
- Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE. Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync. You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster.
- Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced. Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster.
- Navigate to the OpenShift Container Platform web console and verify that a link to the Red Hat Developer Blog - Kubernetes is now present in the console.
- Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster.

Your cluster configurations have been successfully synchronized to the cluster.
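As a CLI alternative to these console checks, a minimal sketch (the second command assumes that the synced configuration created a ConsoleLink resource for the blog link):

  $ oc get namespace spring-petclinic
  $ oc get consolelink    # lists ConsoleLink resources, assuming the synced configuration created one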
1.10. Synchronizing an application in the default mode by using the GitOps CLI
You can synchronize applications in the default mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have logged in to the Argo CD instance.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
- Get the admin account password for the Argo CD server:

  $ ADMIN_PASSWD=$(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)

- Get the Argo CD server URL:

  $ SERVER_URL=$(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')

- Log in to the Argo CD server by using the admin account password and enclosing it in single quotes:

  Important: Enclosing the password in single quotes ensures that special characters, such as $, are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password.

  $ argocd login --username admin --password ${ADMIN_PASSWD} ${SERVER_URL}

  Example

  $ argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing

- Because the application is configured with the none sync policy, you must manually trigger the sync operation:

  $ argocd app sync openshift-gitops/app-cluster-configs

- List the application to confirm that the application has the Healthy and Synced statuses:

  $ argocd app list
1.11. Synchronizing an application in core mode by using the GitOps CLI
You can synchronize applications in core mode by using the GitOps argocd CLI.
This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform cluster configurations and the spring-petclinic namespace on the cluster.
Prerequisites
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure
- Log in to the OpenShift Container Platform cluster by using the oc CLI tool:

  $ oc login -u <username> -p <password> <server_url>

  Example

  $ oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443

- Check whether the context is set correctly in the kubeconfig file:

  $ oc config current-context

- Set the default namespace of the current context to openshift-gitops:

  $ oc config set-context --current --namespace openshift-gitops

- Set the following environment variable to override the Argo CD component names:

  $ export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server

- Because the application is configured with the none sync policy, you must manually trigger the sync operation:

  $ argocd app sync --core openshift-gitops/app-cluster-configs

- List the application to confirm that the application has the Healthy and Synced statuses:

  $ argocd app list --core
1.12. In-built permissions for cluster configuration
By default, the Argo CD instance has permissions to manage specific cluster-scoped resources such as cluster Operators, optional OLM Operators and user management.
- Argo CD does not have cluster-admin permissions.
- You can extend the permissions bound to any Argo CD instances managed by the GitOps Operator. However, you must not modify the permission resources, such as roles or cluster roles created by the GitOps Operator, because the Operator might reconcile them back to their initial state. Instead, create dedicated role and cluster role objects and bind them to the appropriate service account that the application controller uses.
Permissions for the Argo CD instance:

| Resources | Descriptions |
|---|---|
| Resource Groups | Configure the user or administrator |
| operators.coreos.com | Optional Operators managed by OLM |
| user.openshift.io, rbac.authorization.k8s.io | Groups, Users and their permissions |
| config.openshift.io | Control plane Operators managed by CVO used to configure cluster-wide build configuration, registry configuration and scheduler policies |
| storage.k8s.io | Storage |
| console.openshift.io | Console customization |
1.13. Adding permissions for cluster configuration
You can grant permissions for an Argo CD instance to manage cluster configuration. Create a cluster role with additional permissions and then create a new cluster role binding to associate the cluster role with a service account.
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin privileges and are logged in to the web console.
- You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
Procedure
- In the web console, select User Management → Roles → Create Role. Use the following ClusterRole YAML template to add rules to specify the additional permissions:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: secrets-cluster-role
  rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["*"]

- Click Create to add the cluster role.
- To create the cluster role binding, select User Management → Role Bindings → Create Binding.
- Select All Projects from the Project list.
- Click Create binding.
- Select Binding type as Cluster-wide role binding (ClusterRoleBinding).
- Enter a unique value for the RoleBinding name.
- Select the newly created cluster role or an existing cluster role from the drop-down list.
- Select the Subject as ServiceAccount and provide the Subject namespace and name:
  - Subject namespace: openshift-gitops
  - Subject name: openshift-gitops-argocd-application-controller

  Note: The value of Subject name depends on the GitOps control plane components for which you create the cluster roles and cluster role bindings.

- Click Create. The YAML file for the ClusterRoleBinding object is as follows:

  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cluster-role-binding
  subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: secrets-cluster-role
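If you prefer the CLI to the web console for this step, you can create an equivalent binding with a single command. This is a minimal sketch using the role and service account names from the example above:

  $ oc create clusterrolebinding cluster-role-binding \
      --clusterrole=secrets-cluster-role \
      --serviceaccount=openshift-gitops:openshift-gitops-argocd-application-controller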
1.14. Installing OLM Operators using Red Hat OpenShift GitOps
Red Hat OpenShift GitOps with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators.
Consider a case where, as a cluster administrator, you have to install an OLM Operator such as Tekton. You could use the OpenShift Container Platform web console to manually install the Tekton Operator, or use the OpenShift CLI to manually create a Tekton subscription and Tekton Operator group on your cluster.
Red Hat OpenShift GitOps places your Kubernetes resources in your Git repository. As a cluster administrator, you can use Red Hat OpenShift GitOps to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository, Red Hat OpenShift GitOps automatically takes this subscription from your Git repository and installs the Tekton Operator on your cluster.
1.14.1. Installing cluster-scoped Operators
Operator Lifecycle Manager (OLM) uses a default global-operators Operator group in the openshift-operators namespace for cluster-scoped Operators. Hence, you do not have to manage the OperatorGroup resource in your GitOps repository. However, for namespace-scoped Operators, you must manage the OperatorGroup resource in that namespace.
To install cluster-scoped Operators, create and place the Subscription resource of the required Operator in your Git repository.
Example: Grafana Operator subscription

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: grafana
  spec:
    channel: v4
    installPlanApproval: Automatic
    name: grafana-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace
1.14.2. Installing namespace-scoped Operators
To install namespace-scoped Operators, create and place the Subscription and OperatorGroup resources of the required Operator in your Git repository.
Example: Ansible Automation Platform Resource Operator

  # ...
  apiVersion: v1
  kind: Namespace
  metadata:
    labels:
      openshift.io/cluster-monitoring: "true"
    name: ansible-automation-platform
  # ...
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: ansible-automation-platform-operator
    namespace: ansible-automation-platform
  spec:
    targetNamespaces:
      - ansible-automation-platform
  # ...
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: ansible-automation-platform
    namespace: ansible-automation-platform
  spec:
    channel: patch-me
    installPlanApproval: Automatic
    name: ansible-automation-platform-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace
  # ...
When deploying multiple Operators using Red Hat OpenShift GitOps, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transitions to a failure state with the TooManyOperatorGroups reason. After the number of Operator groups in the corresponding namespace reaches one, all the previous failure state CSVs transition to the pending state. You must manually approve the pending install plan to complete the Operator installation.
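If an install plan remains in the pending state as described above, or if installPlanApproval is set to Manual, you can approve it from the CLI. The following is a minimal sketch; the namespace and install plan name are placeholders:

  $ oc get installplan -n <namespace>
  $ oc patch installplan <install_plan_name> -n <namespace> \
      --type merge -p '{"spec":{"approved":true}}'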