This project is mirrored from https://gitee.com/wangmingco/rook.git.
- 09 May, 2022 1 commit
-
-
小 白蛋 authored
-
- 01 Dec, 2020 10 commits
-
-
Travis Nielsen authored
ceph: Update examples and base image to v15.2.7
-
Travis Nielsen authored
docs: Clear the pending release notes for 1.6
-
Blaine Gardner authored
ceph: add log collector
-
Travis Nielsen authored
With the release of the latest octopus v15.2.7, we update the base of the operator image and set the examples to use the same release to pick up the security and other bug fixes. Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
-
Travis Nielsen authored
core: fix label merge
-
Travis Nielsen authored
The pending release notes were for v1.5, now cleared for adding v1.6 features. Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
-
Travis Nielsen authored
ceph: cleanup should ignore ceph daemon pods that are not scheduled on any node.
-
Santosh Pillai authored
Before cleaning up the cluster, we wait for all the daemon pods to be cleaned up. This fails when a daemon pod is in the Pending state and has no NodeName. This PR ignores daemon pods that are not scheduled on any node. Signed-off-by: Santosh Pillai <sapillai@redhat.com>
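A rough illustration of the idea (package and function names are hypothetical, not Rook's actual cleanup code): a pod that was never assigned to a node has an empty spec.nodeName, so cleanup can skip waiting on it.

```
// Sketch only: filter out daemon pods that the scheduler never assigned
// to a node (Spec.NodeName is empty), so cleanup does not wait on them.
// Names are illustrative, not Rook's actual code.
package cleanup

import corev1 "k8s.io/api/core/v1"

func scheduledPodsOnly(pods []corev1.Pod) []corev1.Pod {
	var scheduled []corev1.Pod
	for _, pod := range pods {
		if pod.Spec.NodeName == "" {
			// Pending pod with no node assignment; ignore it.
			continue
		}
		scheduled = append(scheduled, pod)
	}
	return scheduled
}
```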
-
Sébastien Han authored
We can now collect logs directly into a sidecar container. A new CRD spec has been added:

```
spec:
  logCollector:
    enabled: true
    periodicity: 24h
```

Every 24h we will rotate log files for each Ceph daemon. Signed-off-by: Sébastien Han <seb@redhat.com>
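A hedged sketch of what such a sidecar could look like; the helper name, rotation command, and volume mount are illustrative assumptions, not Rook's implementation — only the logCollector spec above comes from the commit.

```
// Hypothetical sketch, not Rook's actual code: a sidecar container that
// rotates a Ceph daemon's log files on the periodicity taken from
// spec.logCollector (e.g. 24h).
package logcollector

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

func logRotateSidecar(image, daemonID string, periodicity time.Duration) corev1.Container {
	// Sleep for the configured period, then force a rotation, forever.
	cmd := fmt.Sprintf("while true; do sleep %.0f; logrotate --force /etc/logrotate.d/%s; done",
		periodicity.Seconds(), daemonID)
	return corev1.Container{
		Name:    "log-collector",
		Image:   image,
		Command: []string{"/bin/bash", "-c", cmd},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "rook-ceph-log", MountPath: "/var/log/ceph"},
		},
	}
}
```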
-
Alexander Trost authored
This makes the label merging function return a new Labels object with the merged content instead of modifying the existing one; mutating in place caused previously merged labels to be returned for a different set of input labels. Signed-off-by: Alexander Trost <galexrt@googlemail.com>
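A minimal sketch of the non-mutating merge described here; the type and method names are illustrative, not Rook's exact API.

```
// Sketch of the fix's idea: merge into a fresh map so the receiver is
// never mutated and stale labels cannot leak into later calls.
package k8sutil

type Labels map[string]string

// Merge returns a new Labels containing l's entries overlaid with
// with's entries, leaving both inputs unchanged.
func (l Labels) Merge(with Labels) Labels {
	merged := Labels{}
	for k, v := range l {
		merged[k] = v
	}
	for k, v := range with {
		merged[k] = v
	}
	return merged
}
```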
-
- 30 Nov, 2020 2 commits
-
-
Blaine Gardner authored
docs: update Blaine's email in OWNERS.md
-
Blaine Gardner authored
Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
-
- 27 Nov, 2020 3 commits
-
-
Satoru Takeuchi authored
test: Fixed a wrong test suite name
-
Hiroyuki Kaneko authored
The test suite name "TestRookClusterInstallation_SmokeTest" does not exist in the integration test suites. Replace the wrong name with "TestARookClusterInstallation_SmokeTest". Signed-off-by: Hiroyuki Kaneko <hkaneko@redhat.com>
-
Satoru Takeuchi authored
ceph: fix RBAC for cron crash pruner
-
- 26 Nov, 2020 6 commits
-
-
Sébastien Han authored
Minikube script fixes
-
Shachar Sharon authored
Minikube deprecates the '--vm-driver' option. When a user starts a new minikube run with '--driver=xxx', the test will fail and fall back to the wrong default value of 'virtualbox'. Tested with minikube=v1.15.1, driver=kvm2 on Fedora 32. Signed-off-by: Shachar Sharon <ssharon@redhat.com>
-
Shachar Sharon authored
Users may prefer to use docker alternatives by setting the DOCKERCMD variable, which is handled by 'common.sh'. Use it in 'copy_image_to_cluster' instead of plain 'docker'. Tested with DOCKERCMD=podman (podman-2.1.1). Signed-off-by: Shachar Sharon <ssharon@redhat.com>
-
Sébastien Han authored
The rook-ceph-system service account was lacking the delete permission and the operator was complaining. Closes: https://github.com/rook/rook/issues/6708 Signed-off-by: Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
We don't need to print this every 60s in the operator log. Signed-off-by: Sébastien Han <seb@redhat.com>
-
Satoru Takeuchi authored
ceph: fix pod labels set on csi components
-
- 25 Nov, 2020 6 commits
-
-
Satoru Takeuchi authored
ceph: fix metadata device passed by-id
-
Sébastien Han authored
ceph: ability to abort orchestration
-
Sébastien Han authored
We can now prioritize orchestrations on certain events. Today only two events will cancel an ongoing orchestration (if any):

* request for cluster deletion
* request for cluster upgrade

If one of the two is caught by the watcher, we cancel the ongoing orchestration. For that we implemented a simple approach based on checkpoints: we check for a cancellation request at certain points of the orchestration, mainly before each of the mon/mgr/OSD orchestration loops. This solution is not perfect, but we are waiting for controller-runtime to release its 0.7 version, which will embed context support. With that we will be able to cancel reconciles more precisely and rapidly. Operator log example:

```
2020-11-24 13:54:59.499719 I | op-mon: parsing mon endpoints: a=10.109.126.120:6789
2020-11-25 12:59:12.986264 I | ceph-cluster-controller: done reconciling ceph cluster in namespace "rook-ceph"
2020-11-25 13:07:33.776947 I | ceph-cluster-controller: CR has changed for "rook-ceph". diff=
  v1.ClusterSpec{
      CephVersion: v1.CephVersionSpec{
          Image:            "ceph/ceph:v15.2.5",
-         AllowUnsupported: true,
+         AllowUnsupported: false,
      },
      DriveGroups: nil,
      Storage:     {UseAllNodes: true, Selection: {UseAllDevices: &true}},
      ... // 20 identical fields
  }
2020-11-25 13:07:33.777039 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2020-11-25 13:07:33.785088 I | op-mon: parsing mon endpoints: a=10.107.242.49:6789,b=10.109.71.30:6789,c=10.98.93.224:6789
2020-11-25 13:07:33.788626 I | ceph-cluster-controller: detecting the ceph image version for image ceph/ceph:v15.2.5...
2020-11-25 13:07:35.280789 I | ceph-cluster-controller: detected ceph image version: "15.2.5-0 octopus"
2020-11-25 13:07:35.280806 I | ceph-cluster-controller: validating ceph version from provided image
2020-11-25 13:07:35.285888 I | op-mon: parsing mon endpoints: a=10.107.242.49:6789,b=10.109.71.30:6789,c=10.98.93.224:6789
2020-11-25 13:07:35.287828 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-11-25 13:07:35.288082 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-11-25 13:07:35.621625 I | ceph-cluster-controller: cluster "rook-ceph": version "15.2.5-0 octopus" detected for image "ceph/ceph:v15.2.5"
2020-11-25 13:07:35.642688 I | op-mon: start running mons
2020-11-25 13:07:35.646323 I | op-mon: parsing mon endpoints: a=10.107.242.49:6789,b=10.109.71.30:6789,c=10.98.93.224:6789
2020-11-25 13:07:35.654070 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.107.242.49:6789","10.109.71.30:6789","10.98.93.224:6789"]}] data:a=10.107.242.49:6789,b=10.109.71.30:6789,c=10.98.93.224:6789 mapping:{"node":{"a":{"Name":"minikube","Hostname":"minikube","Address":"192.168.39.3"},"b":{"Name":"minikube","Hostname":"minikube","Address":"192.168.39.3"},"c":{"Name":"minikube","Hostname":"minikube","Address":"192.168.39.3"}}} maxMonId:2]
2020-11-25 13:07:35.868253 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-11-25 13:07:35.868573 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-11-25 13:07:37.074353 I | op-mon: targeting the mon count 3
2020-11-25 13:07:38.153435 I | op-mon: checking for basic quorum with existing mons
2020-11-25 13:07:38.178029 I | op-mon: mon "a" endpoint is [v2:10.107.242.49:3300,v1:10.107.242.49:6789]
2020-11-25 13:07:38.670191 I | op-mon: mon "b" endpoint is [v2:10.109.71.30:3300,v1:10.109.71.30:6789]
2020-11-25 13:07:39.477820 I | op-mon: mon "c" endpoint is [v2:10.98.93.224:3300,v1:10.98.93.224:6789]
2020-11-25 13:07:39.874094 I | op-mon: saved mon endpoints to config map map[csi-cluster-config-json:[{"clusterID":"rook-ceph","monitors":["10.107.242.49:6789","10.109.71.30:6789","10.98.93.224:6789"]}] data:a=10.107.242.49:6789,b=10.109.71.30:6789,c=10.98.93.224:6789 mapping:{"node":{"a":{"Name":"minikube","Hostname":"minikube","Address":"192.168.39.3"},"b":{"Name":"minikube","Hostname":"minikube","Address":"192.168.39.3"},"c":{"Name":"minikube","Hostname":"minikube","Address":"192.168.39.3"}}} maxMonId:2]
2020-11-25 13:07:40.467999 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-11-25 13:07:40.469733 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-11-25 13:07:41.071710 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2020-11-25 13:07:41.078903 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2020-11-25 13:07:41.125233 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2020-11-25 13:07:41.327778 I | op-k8sutil: updating deployment "rook-ceph-mon-a" after verifying it is safe to stop
2020-11-25 13:07:41.327895 I | op-mon: checking if we can stop the deployment rook-ceph-mon-a
2020-11-25 13:07:44.045644 I | op-k8sutil: finished waiting for updated deployment "rook-ceph-mon-a"
2020-11-25 13:07:44.045706 I | op-mon: checking if we can continue the deployment rook-ceph-mon-a
2020-11-25 13:07:44.045740 I | op-mon: waiting for mon quorum with [a b c]
2020-11-25 13:07:44.109159 I | op-mon: mons running: [a b c]
2020-11-25 13:07:44.474596 I | op-mon: Monitors in quorum: [a b c]
2020-11-25 13:07:44.478565 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2020-11-25 13:07:44.493374 I | op-k8sutil: updating deployment "rook-ceph-mon-b" after verifying it is safe to stop
2020-11-25 13:07:44.493403 I | op-mon: checking if we can stop the deployment rook-ceph-mon-b
2020-11-25 13:07:47.135524 I | op-k8sutil: finished waiting for updated deployment "rook-ceph-mon-b"
2020-11-25 13:07:47.135542 I | op-mon: checking if we can continue the deployment rook-ceph-mon-b
2020-11-25 13:07:47.135551 I | op-mon: waiting for mon quorum with [a b c]
2020-11-25 13:07:47.148820 I | op-mon: mons running: [a b c]
2020-11-25 13:07:47.445946 I | op-mon: Monitors in quorum: [a b c]
2020-11-25 13:07:47.448991 I | op-mon: deployment for mon rook-ceph-mon-c already exists. updating if needed
2020-11-25 13:07:47.462041 I | op-k8sutil: updating deployment "rook-ceph-mon-c" after verifying it is safe to stop
2020-11-25 13:07:47.462060 I | op-mon: checking if we can stop the deployment rook-ceph-mon-c
2020-11-25 13:07:48.853118 I | ceph-cluster-controller: CR has changed for "rook-ceph". diff=
  v1.ClusterSpec{
      CephVersion: v1.CephVersionSpec{
-         Image:            "ceph/ceph:v15.2.5",
+         Image:            "ceph/ceph:v15.2.6",
          AllowUnsupported: false,
      },
      DriveGroups: nil,
      Storage:     {UseAllNodes: true, Selection: {UseAllDevices: &true}},
      ... // 20 identical fields
  }
2020-11-25 13:07:48.853140 I | ceph-cluster-controller: upgrade requested, cancelling any ongoing orchestration
2020-11-25 13:07:50.119584 I | op-k8sutil: finished waiting for updated deployment "rook-ceph-mon-c"
2020-11-25 13:07:50.119606 I | op-mon: checking if we can continue the deployment rook-ceph-mon-c
2020-11-25 13:07:50.119619 I | op-mon: waiting for mon quorum with [a b c]
2020-11-25 13:07:50.130860 I | op-mon: mons running: [a b c]
2020-11-25 13:07:50.431341 I | op-mon: Monitors in quorum: [a b c]
2020-11-25 13:07:50.431361 I | op-mon: mons created: 3
2020-11-25 13:07:50.734156 I | op-mon: waiting for mon quorum with [a b c]
2020-11-25 13:07:50.745763 I | op-mon: mons running: [a b c]
2020-11-25 13:07:51.045108 I | op-mon: Monitors in quorum: [a b c]
2020-11-25 13:07:51.054497 E | ceph-cluster-controller: failed to reconcile. failed to reconcile cluster "rook-ceph": failed to configure local ceph cluster: failed to create cluster: CANCELLING CURRENT ORCHESTATION
2020-11-25 13:07:52.055208 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2020-11-25 13:07:52.070690 I | op-mon: parsing mon endpoints: a=10.107.242.49:6789,b=10.109.71.30:6789,c=10.98.93.224:6789
2020-11-25 13:07:52.088979 I | ceph-cluster-controller: detecting the ceph image version for image ceph/ceph:v15.2.6...
2020-11-25 13:07:53.904811 I | ceph-cluster-controller: detected ceph image version: "15.2.6-0 octopus"
2020-11-25 13:07:53.904862 I | ceph-cluster-controller: validating ceph version from provided image
```

Closes: https://github.com/rook/rook/issues/6587 Signed-off-by: Sébastien Han <seb@redhat.com>
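A minimal sketch of the checkpoint approach described above, using an atomic flag rather than Rook's actual types; all names here are illustrative.

```
// Illustrative sketch, not Rook's code: the event watcher sets a flag
// on deletion/upgrade events, and the orchestration checks it before
// each major phase (mons, mgr, OSDs).
package orchestrator

import (
	"errors"
	"sync/atomic"
)

var errCancelled = errors.New("cancelling current orchestration")

type Orchestration struct {
	cancelRequested int32 // set to 1 by the event watcher
}

// RequestCancel is called by the watcher on deletion/upgrade events.
func (o *Orchestration) RequestCancel() { atomic.StoreInt32(&o.cancelRequested, 1) }

// checkpoint returns an error if a cancellation was requested.
func (o *Orchestration) checkpoint() error {
	if atomic.LoadInt32(&o.cancelRequested) == 1 {
		return errCancelled
	}
	return nil
}

// Run executes each phase, checking for cancellation in between.
func (o *Orchestration) Run(startMons, startMgr, startOSDs func() error) error {
	for _, phase := range []func() error{startMons, startMgr, startOSDs} {
		if err := o.checkpoint(); err != nil {
			return err
		}
		if err := phase(); err != nil {
			return err
		}
	}
	return nil
}
```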
-
Sébastien Han authored
The code was assuming that devices were passed by the user as "/dev/sda"; this is bad! We all know people should be using persistent paths like /dev/disk/by-id, so we must support them. Closes: https://github.com/rook/rook/issues/6685 Signed-off-by: Sébastien Han <seb@redhat.com>
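A short sketch of the resolution idea (hypothetical helper, not Rook's exact code): persistent /dev/disk/by-id paths are symlinks, so they can be resolved back to the kernel device node and handled uniformly.

```
// Sketch only: accept either "/dev/sda" or a persistent path such as
// "/dev/disk/by-id/..." and return the real device node.
package device

import (
	"fmt"
	"path/filepath"
	"strings"
)

func resolveDevice(path string) (string, error) {
	if !strings.HasPrefix(path, "/dev/disk/") {
		return path, nil // already a plain device node like /dev/sda
	}
	// by-id/by-path entries are symlinks to the kernel device node.
	resolved, err := filepath.EvalSymlinks(path)
	if err != nil {
		return "", fmt.Errorf("failed to resolve device path %q: %w", path, err)
	}
	return resolved, nil
}
```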
-
Alexander Trost authored
Signed-off-by: Alexander Trost <galexrt@googlemail.com>
-
Sébastien Han authored
ceph: periodically prune crash entries older than user-provided days
-
- 24 Nov, 2020 6 commits
-
-
Renan Campos authored
Rook's crashcollector pod posts entries to the Ceph cluster when a crash occurs. Over time the cluster may hold crash entries needlessly. To clean up old crash entries, this PR adds a field to the CephCluster CR for the user to specify the number of days a crash entry should be kept. Providing a value for the field keepXDays creates a cronjob that runs every day at midnight, calling "ceph crash prune <keepXDays>". Closes: https://github.com/rook/rook/issues/6332 Signed-off-by: Renan Campos <rcampos@redhat.com>
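A sketch of the kind of CronJob this could generate, written against the batch/v1beta1 API that was current at the time; the object names and structure are illustrative assumptions, not Rook's generated manifest.

```
// Illustrative sketch: a CronJob that runs "ceph crash prune <days>"
// every day at midnight, as the commit describes.
package crash

import (
	"strconv"

	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func pruneCronJob(namespace, cephImage string, keepDays int) *batchv1beta1.CronJob {
	return &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "rook-ceph-crash-prune", // hypothetical name
			Namespace: namespace,
		},
		Spec: batchv1beta1.CronJobSpec{
			Schedule: "0 0 * * *", // every day at midnight
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "crash-prune",
								Image:   cephImage,
								Command: []string{"ceph", "crash", "prune", strconv.Itoa(keepDays)},
							}},
						},
					},
				},
			},
		},
	}
}
```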
-
Sébastien Han authored
Since we want to pass a context to it, let's extract the logic into its own predicate. Signed-off-by: Sébastien Han <seb@redhat.com>
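A minimal sketch of such a predicate against controller-runtime 0.6's event API (the version this series bumps to); the filtering logic shown is illustrative, not Rook's.

```
// Illustrative sketch: a dedicated controller-runtime predicate that a
// context can be threaded through.
package controller

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

func clusterPredicate(ctx context.Context) predicate.Funcs {
	return predicate.Funcs{
		UpdateFunc: func(e event.UpdateEvent) bool {
			select {
			case <-ctx.Done():
				return false // operator is shutting down; skip reconcile
			default:
			}
			// Example filter: reconcile only when the generation changed.
			return e.MetaOld.GetGeneration() != e.MetaNew.GetGeneration()
		},
	}
}
```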
-
Sébastien Han authored
Since the predicate for the CephCluster object will soon move into its own predicate, we need to export isDoNotReconcile so that it can be consumed by the "cluster" package. Signed-off-by: Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
Since we moved to the controller-runtime, events are processed one by one and so are reconciles. This means we won't have multiple orchestrations happening at the same time, so this code is removed. Also removing one unused variable. Signed-off-by: Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
bot: fix codespell issue
-
Sébastien Han authored
Make the bot happy. Signed-off-by: Sébastien Han <seb@redhat.com>
-
- 19 Nov, 2020 6 commits
-
-
Sébastien Han authored
Bump Controller Runtime version to 0.6
-
Travis Nielsen authored
ceph: update cephcsi to latest v3.1.2 release
-
Travis Nielsen authored
ceph: update Jenkins to skip Ceph tests
-
Sébastien Han authored
bot: auto-merge pull request under conditions and do not run ci when no code changes
-
Madhu Rajanna authored
Updating cephcsi to v3.1.2, which is the latest bugfix release. Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
-
subhamkrai authored
We now run our Ceph test suites using GitHub Actions, so we can skip the Jenkins tests for the same. Signed-off-by: subhamkrai <srai@redhat.com>
-