For example:
# cns-deploy -v -n storage-project -g --admin-key secret --user-key mysecret topology.json
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.
The client machine that will run this script must have:
* Administrative access to an existing Kubernetes or OpenShift cluster
* Access to a python interpreter 'python'
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
* 111 - rpcbind (for glusterblock)
* 2222 - sshd (if running GlusterFS in a pod)
* 3260 - iSCSI targets (for glusterblock)
* 24010 - glusterblockd
* 24007 - GlusterFS Management
* 24008 - GlusterFS RDMA
* 49152 to 49251 - Each brick for every volume on the host requires its own
port. For every new brick, one new port will be used starting at 49152. We
recommend a default range of 49152-49251 on each host, though you can adjust
this to fit your needs.
The following kernel modules must be loaded:
* dm_snapshot
* dm_mirror
* dm_thin_pool
* dm_multipath
* target_core_user
For systems with SELinux, the following settings need to be considered:
* virt_sandbox_use_fusefs should be enabled on each node to allow writing to
remote GlusterFS volumes
In addition, for an OpenShift deployment you must:
* Have 'cluster_admin' role on the administrative account doing the deployment
* Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
* Have a router deployed that is configured to allow apps to access services
running in the cluster
Do you wish to proceed with deployment?
[Y]es, [N]o? [Default: Y]: Y
Using OpenShift CLI.
Using namespace "storage-project".
Checking for pre-existing resources...
GlusterFS pods ... not found.
deploy-heketi pod ... not found.
heketi pod ... not found.
glusterblock-provisioner pod ... not found.
gluster-s3 pod ... not found.
Creating initial resources ... template "deploy-heketi" created
serviceaccount "heketi-service-account" created
template "heketi" created
template "glusterfs" created
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
OK
node "ip-172-18-5-29.ec2.internal" labeled
node "ip-172-18-8-205.ec2.internal" labeled
node "ip-172-18-6-100.ec2.internal" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 30cd12e60f860fce21e7e7457d07db36
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node ip-172-18-5-29.ec2.internal ... ID: 4077242c76e5f477a27c5c47247cb348
Adding device /dev/xvdc ... OK
Creating node ip-172-18-8-205.ec2.internal ... ID: dda0e7d568d7b2f76a7e7491cfc26dd3
Adding device /dev/xvdc ... OK
Creating node ip-172-18-6-100.ec2.internal ... ID: 30a1795ca515c85dca32b09be7a68733
Adding device /dev/xvdc ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "heketi-storage-endpoints" labeled
deploymentconfig "deploy-heketi" deleted
route "deploy-heketi" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
pod "deploy-heketi-1-frjpt" deleted
secret "heketi-storage-secret" deleted
template "deploy-heketi" deleted
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run
administrative commands you can install 'heketi-cli' and use it as follows:
# heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list
You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:
# /bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
For dynamic provisioning, create a StorageClass similar to this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
Ready to create and provide GlusterFS volumes.
clusterrole "glusterblock-provisioner-runner" created
serviceaccount "glusterblock-provisioner" created
clusterrolebinding "glusterblock-provisioner" created
deploymentconfig "glusterblock-provisioner-dc" created
Waiting for glusterblock-provisioner pod to start ... OK
Ready to create and provide Gluster block volumes.
Deployment complete!
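The deployment output above ends by suggesting a StorageClass for dynamic provisioning. As a minimal sketch of how to act on that suggestion (the claim name below is hypothetical, and the StorageClass body simply reuses the snippet printed by cns-deploy), you could create the class and request a test volume as follows. If heketi was deployed with an admin key, the class typically also needs the restauthenabled, restuser, secretNamespace, and secretName parameters, which are omitted here:
# oc create -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
EOF
# oc -n storage-project create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-test-claim   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs-storage
  resources:
    requests:
      storage: 1Gi
EOF
# oc -n storage-project get pvc glusterfs-test-claim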
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy.
# cns-deploy --help
To deploy S3-compatible object storage along with the Heketi and Red Hat Gluster Storage pods, execute the following command:
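A command of the following shape starts that deployment; it is a sketch that mirrors the flags used in the full example later in this section, and the topology path, log file, account, user, and password values are placeholders to replace with your own:
# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --admin-key secret --user-key mysecret --yes --log-file=/var/log/cns-deploy/cns-deploy.log --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose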
To verify that Heketi has been loaded with the topology, execute the following command:
# heketi-cli topology info
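When running heketi-cli outside the heketi pod, point it at the heketi route and pass the admin credentials, reusing the server URL and authentication flags shown in the deployment output above (the hostname is the one from the example and will differ in your environment):
# heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' topology info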
Note
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Managing_Clusters
For example:
# cns-deploy -v -n storage-project -g --admin-key secret -s /root/.ssh/id_rsa --user-key mysecret topology.json
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.
The client machine that will run this script must have:
* Administrative access to an existing Kubernetes or OpenShift cluster
* Access to a python interpreter 'python'
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
* 2222 - sshd (if running GlusterFS in a pod)
* 24007 - GlusterFS Management
* 24008 - GlusterFS RDMA
* 49152 to 49251 - Each brick for every volume on the host requires its own
port. For every new brick, one new port will be used starting at 49152. We
recommend a default range of 49152-49251 on each host, though you can adjust
this to fit your needs.
The following kernel modules must be loaded:
* dm_snapshot
* dm_mirror
* dm_thin_pool
For systems with SELinux, the following settings need to be considered:
* virt_sandbox_use_fusefs should be enabled on each node to allow writing to
remote GlusterFS volumes
In addition, for an OpenShift deployment you must:
* Have 'cluster_admin' role on the administrative account doing the deployment
* Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
* Have a router deployed that is configured to allow apps to access services
running in the cluster
Do you wish to proceed with deployment?
[Y]es, [N]o? [Default: Y]: y
Using OpenShift CLI.
Using namespace "storage-project".
Checking for pre-existing resources...
GlusterFS pods ... not found.
deploy-heketi pod ... not found.
heketi pod ... not found.
Creating initial resources ... template "deploy-heketi" created
serviceaccount "heketi-service-account" created
template "heketi" created
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 60bf06636eb4eb81d4e9be4b04cfce92
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node dhcp47-104.lab.eng.blr.redhat.com ... ID: eadc66f9d03563bcfc3db3fe636c34be
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node dhcp47-83.lab.eng.blr.redhat.com ... ID: 178684b0a0425f51b8f1a032982ffe4d
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node dhcp46-152.lab.eng.blr.redhat.com ... ID: 08cd7034ef7ac66499dc040d93cf4a93
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "heketi-storage-endpoints" labeled
deploymentconfig "deploy-heketi" deleted
route "deploy-heketi" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
pod "deploy-heketi-1-30c06" deleted
secret "heketi-storage-secret" deleted
template "deploy-heketi" deleted
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run
administrative commands you can install 'heketi-cli' and use it as follows:
# heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list
You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:
# /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
For dynamic provisioning, create a StorageClass similar to this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
Deployment complete!
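As an optional sanity check after the deployment finishes, you can list the pods in the storage namespace to confirm that the glusterfs and heketi pods are Running; this is ordinary oc usage rather than a step required by cns-deploy:
# oc get pods -n storage-project -o wide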
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy.
# cns-deploy --help
To deploy S3-compatible object storage along with the Heketi and Red Hat Gluster Storage pods, execute the following command:
The object-sc and object-capacity parameters are optional. object-sc specifies a pre-existing StorageClass that is used to create the Red Hat Gluster Storage volumes backing the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume that will store the object data (a sketch showing these optional flags follows the example output below).
For example:
# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --admin-key secret --user-key mysecret --yes --log-file=/var/log/cns-deploy/444-cns-deploy.log --object-account testvolume --object-user adminuser --object-password itsmine --verbose
Using OpenShift CLI.
Checking status of namespace matching 'storage-project':
storage-project Active 56m
Using namespace "storage-project".
Checking for pre-existing resources...
GlusterFS pods ...
Checking status of pods matching '--selector=glusterfs=pod':
No resources found.
Timed out waiting for pods matching '--selector=glusterfs=pod'.
not found.
deploy-heketi pod ...
Checking status of pods matching '--selector=deploy-heketi=pod':
No resources found.
Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
not found.
heketi pod ...
Checking status of pods matching '--selector=heketi=pod':
No resources found.
Timed out waiting for pods matching '--selector=heketi=pod'.
not found.
glusterblock-provisioner pod ...
Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
No resources found.
Timed out waiting for pods matching '--selector=glusterfs=block-provisioner-pod'.
not found.
gluster-s3 pod ...
Checking status of pods matching '--selector=glusterfs=s3-pod':
No resources found.
Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
not found.
Creating initial resources ... /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/deploy-heketi-template.yaml 2>&1
template "deploy-heketi" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-service-account.yaml 2>&1
serviceaccount "heketi-service-account" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-template.yaml 2>&1
template "heketi" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/glusterfs-template.yaml 2>&1
template "glusterfs" created
/usr/bin/oc -n storage-project policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account 2>&1
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
/usr/bin/oc -n storage-project adm policy add-scc-to-user privileged -z heketi-service-account
OK
Marking 'dhcp46-122.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp46-122.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp46-122.lab.eng.blr.redhat.com" labeled
Marking 'dhcp46-9.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp46-9.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp46-9.lab.eng.blr.redhat.com" labeled
Marking 'dhcp46-134.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp46-134.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp46-134.lab.eng.blr.redhat.com" labeled
Deploying GlusterFS pods.
/usr/bin/oc -n storage-project process -p NODE_LABEL=glusterfs glusterfs | /usr/bin/oc -n storage-project create -f - 2>&1
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ...
Checking status of pods matching '--selector=glusterfs=pod':
glusterfs-6fj2v 1/1 Running 0 52s
glusterfs-ck40f 1/1 Running 0 52s
glusterfs-kbtz4 1/1 Running 0 52s
OK
/usr/bin/oc -n storage-project create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=/opt/topology.json
secret "heketi-config-secret" created
/usr/bin/oc -n storage-project label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret "heketi-config-secret" labeled
/usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= deploy-heketi | /usr/bin/oc -n storage-project create -f - 2>&1
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ...
Checking status of pods matching '--selector=deploy-heketi=pod':
deploy-heketi-1-hf9rn 1/1 Running 0 2m
OK
Determining heketi service URL ... OK
/usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1
Creating cluster ... ID: 252509038eb8568162ec5920c12bc243
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node dhcp46-122.lab.eng.blr.redhat.com ... ID: 73ad287ae1ef231f8a0db46422367c9a
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node dhcp46-9.lab.eng.blr.redhat.com ... ID: 0da1b20daaad2d5c57dbfc4f6ab78001
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node dhcp46-134.lab.eng.blr.redhat.com ... ID: 4b3b62fc0efd298dedbcdacf0b498e65
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
heketi topology loaded.
/usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --image rhgs3/rhgs-volmanager-rhel7:3.3.0-17 2>&1
Saving /tmp/heketi-storage.json
/usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- cat /tmp/heketi-storage.json | /usr/bin/oc -n storage-project create -f - 2>&1
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
Checking status of pods matching '--selector=job-name=heketi-storage-copy-job':
heketi-storage-copy-job-87v6n 0/1 Completed 0 7s
/usr/bin/oc -n storage-project label --overwrite svc heketi-storage-endpoints glusterfs=heketi-storage-endpoints heketi=storage-endpoints
service "heketi-storage-endpoints" labeled
/usr/bin/oc -n storage-project delete all,service,jobs,deployment,secret --selector="deploy-heketi" 2>&1
deploymentconfig "deploy-heketi" deleted
route "deploy-heketi" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
pod "deploy-heketi-1-hf9rn" deleted
secret "heketi-storage-secret" deleted
/usr/bin/oc -n storage-project delete dc,route,template --selector="deploy-heketi" 2>&1
template "deploy-heketi" deleted
/usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= heketi | /usr/bin/oc -n storage-project create -f - 2>&1
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Waiting for heketi pod to start ...
Checking status of pods matching '--selector=heketi=pod':
heketi-1-zzblp 1/1 Running 0 31s
OK
Determining heketi service URL ... OK
heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run
administrative commands you can install 'heketi-cli' and use it as follows:
# heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list
You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:
# /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
For dynamic provisioning, create a StorageClass similar to this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
Ready to create and provide GlusterFS volumes.
sed -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
clusterrole "glusterblock-provisioner-runner" created
serviceaccount "glusterblock-provisioner" created
clusterrolebinding "glusterblock-provisioner" created
deploymentconfig "glusterblock-provisioner-dc" created
Waiting for glusterblock-provisioner pod to start ...
Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
glusterblock-provisioner-dc-1-xm6bv 1/1 Running 0 6s
OK
Ready to create and provide Gluster block volumes.
/usr/bin/oc -n storage-project create secret generic heketi-storage-project-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs
secret "heketi-storage-project-admin-secret" created
/usr/bin/oc -n storage-project label --overwrite secret heketi-storage-project-admin-secret glusterfs=s3-heketi-storage-project-admin-secret gluster-s3=heketi-storage-project-admin-secret
secret "heketi-storage-project-admin-secret" labeled
sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/' -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
storageclass "glusterfs-for-s3" created
sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${VOLUME_CAPACITY}/2Gi/' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
persistentvolumeclaim "gluster-s3-claim" created
persistentvolumeclaim "gluster-s3-meta-claim" created
Checking status of persistentvolumeclaims matching '--selector=glusterfs in (s3-pvc, s3-meta-pvc)':
gluster-s3-claim Bound pvc-35b6c1f0-9c65-11e7-9c8c-005056b3ded1 2Gi RWX glusterfs-for-s3 18s
gluster-s3-meta-claim Bound pvc-35b86e7a-9c65-11e7-9c8c-005056b3ded1 1Gi RWX glusterfs-for-s3 18s
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/gluster-s3-template.yaml 2>&1
template "gluster-s3" created
/usr/bin/oc -n storage-project process -p S3_ACCOUNT=testvolume -p S3_USER=adminuser -p S3_PASSWORD=itsmine gluster-s3 | /usr/bin/oc -n storage-project create -f - 2>&1
service "gluster-s3-service" created
route "gluster-s3-route" created
deploymentconfig "gluster-s3-dc" created
Waiting for gluster-s3 pod to start ...
Checking status of pods matching '--selector=glusterfs=s3-pod':
gluster-s3-dc-1-x3x4q 1/1 Running 0 6s
OK
Ready to create and provide Gluster object volumes.
Deployment complete!
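If you want to back the object store with a pre-existing StorageClass and cap its total size, the optional object-sc and object-capacity parameters described above can be added to the same command. The sketch below assumes the flag spellings --object-sc and --object-capacity (confirm them with cns-deploy --help) and uses an illustrative class name and size:
# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --admin-key secret --user-key mysecret --yes --object-account testvolume --object-user adminuser --object-password itsmine --object-sc glusterfs-for-s3 --object-capacity 20Gi --verbose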
Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and lets you run more bricks than before with the same memory consumption. Execute the following commands on one of the Red Hat Gluster Storage nodes on each cluster to enable brick multiplexing.
Execute the following command to enable brick multiplexing:
# gluster vol set all cluster.brick-multiplex on
For example:
# gluster vol set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
volume set: success
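To confirm that the option took effect, you can query the cluster-wide value; this assumes the standard gluster volume get syntax, with all selecting the global options:
# gluster volume get all cluster.brick-multiplex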
Restart the heketidbstorage volume:
# gluster vol stop heketidbstorage
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: heketidbstorage: success
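Since this step is a restart, start the volume again after the stop completes; the start subcommand mirrors the stop syntax shown above:
# gluster vol start heketidbstorage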
To verify that Heketi has been loaded with the topology, execute the following command:
# heketi-cli topology info
Note
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Managing_Clusters