This project is mirrored from https://gitee.com/wangmingco/rook.git.
- 11 Sep, 2020 (1 commit)

Travis Nielsen authored
The name of the disk in the CI changed from xvdc to nvme0n1.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
- 28 May, 2020 (1 commit)

Sébastien Han authored
Now we can clean up not only monitor data, logs, and crashes, but also the
disks. As part of the cleanupPolicy CR spec, we have a new setting called
sanitizeDisks which holds more details:
  * confirmation: the confirmation message to sanitize disks; use
    "yes-really-sanitize-disks" to confirm.
This will **only** wipe the metadata, so it is a fast cleanup that allows you
to re-install later but won't remove all the data from the drive.
Signed-off-by: Sébastien Han <seb@redhat.com> (cherry picked from commit c799e110)
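Based only on the fields named in the commit message, the new cleanupPolicy section might look roughly like the sketch below; the exact nesting and the surrounding CephCluster fields are assumptions, not confirmed by the commit.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cleanupPolicy:
    # The sanitizeDisks/confirmation fields are taken from the commit
    # message; the exact nesting may differ in the released CRD.
    sanitizeDisks:
      # Confirmation message required before any disks are sanitized.
      confirmation: "yes-really-sanitize-disks"
```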
- 07 Apr, 2020 (1 commit)

Satoru Takeuchi authored
The Ceph integration test tries to wipe all LVM volumes, including the ones
that are not related to this test.
Signed-off-by: Satoru Takeuchi <satoru.takeuchi@gmail.com> (cherry picked from commit 7e4f3279)
- 06 Apr, 2020 (1 commit)

Travis Nielsen authored
With the release of Octopus, we need to run the integration tests on Octopus
in order to validate it as a supported platform.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com> (cherry picked from commit 360e5552)
- 03 Apr, 2020 (1 commit)

Santosh Pillai authored
In the case of multiple clusters, we don't want to delete all the mon
directories under the dataDirHostPath during cluster cleanup. This PR deletes
the mon directory only if the monitor secret key matches.
Signed-off-by: Santosh Pillai <sapillai@redhat.com> (cherry picked from commit e1a76bfb)
- 30 Mar, 2020 (1 commit)

Santosh Pillai authored
In order to ensure proper cleanup of all the rook-ceph data when the cluster
is deleted, we need to clean up the dataDirHostPath (/var/lib/rook).
Signed-off-by: Santosh Pillai <sapillai@redhat.com>
- 27 Mar, 2020 (2 commits)

Travis Nielsen authored
The integration tests are failing to run when the disks are not properly
cleaned up and the bluestore label is still found. Now we run sgdisk to more
completely zap the disk.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
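A disk zap along the lines this commit describes might look like the sketch below; the wipefs and dd follow-up steps, and the device path, are assumptions based on common practice, not taken from the commit.

```shell
# Hedged sketch of a disk-zap helper like the one the commit describes.
# sgdisk --zap-all destroys both the GPT and MBR partition structures;
# the wipefs and dd steps are assumptions added here to clear any
# remaining filesystem or bluestore signatures at the start of the device.
zap_disk() {
  disk="$1"
  sgdisk --zap-all "$disk"
  wipefs --all --force "$disk"
  dd if=/dev/zero of="$disk" bs=1M count=100 oflag=direct
}

# Example (requires root and a real, unmounted block device):
# zap_disk /dev/nvme0n1
```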
Travis Nielsen authored
Run the tests on the latest Nautilus with the tag ceph/ceph:v14 so we are
constantly testing on the latest release.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
- 05 Mar, 2020 (4 commits)

Travis Nielsen authored
The operator logs are necessary for troubleshooting why the pools may still
exist at the end of the integration tests.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>

Travis Nielsen authored
Cluster purging is no longer called and can therefore be removed.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>

Sébastien Han authored
If pools are found, let's print them.
Signed-off-by: Sébastien Han <seb@redhat.com>

Travis Nielsen authored
The CI is currently failing in the OSD-on-PV scenario due to a change in
v14.2.8. Until that is resolved, we need to unblock the tests by running
against v14.2.7.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
- 04 Mar, 2020 (1 commit)

Travis Nielsen authored
With a finalizer on the pools, the pools were not always being purged during
the integration tests. Now the multicluster suite will ensure its pool is
purged.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
- 28 Feb, 2020 (1 commit)

Satoru Takeuchi authored
Signed-off-by: Satoru Takeuchi <satoru.takeuchi@gmail.com>
- 18 Feb, 2020 (1 commit)

Travis Nielsen authored
The job to wipe devices fails the integration tests periodically. Now the job
will be retried once if it fails to wipe the disks.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
- 17 Feb, 2020 (1 commit)

Travis Nielsen authored
The integration tests have mostly been running on the flex driver, with only
a newer test on the CSI driver. With the CSI driver being the preferred
driver going forward, the integration tests will now all run with the CSI
driver, with the exception of a test suite dedicated to the flex driver. A
number of other test improvements are also made for code readability, test
stability, and removing unused options.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
- 04 Feb, 2020 (2 commits)

Travis Nielsen authored
The integration test was disabled due to removing support for
pre-ceph-volume OSDs as well as OSDs on directories. Now we re-enable the
tests with the following approach:
- The base install is Rook v1.1 with the latest Mimic release that had
  ceph-volume support
- The upgrade goes from Rook v1.1 to Rook v1.2, then to Rook master
- The final upgrade step is from Mimic to Nautilus
For efficiency the skipUpgradeChecks flag is added. Logs are also collected
between each upgrade step to improve troubleshooting.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>

Ashish Ranjan authored
This commit enables testing OSDs over PVCs by modifying the multi-cluster
test to use PVCs for provisioning OSDs when the `manual` storageClass is
present in the cluster.
Signed-off-by: Ashish Ranjan <aranjan@redhat.com>
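A cluster spec provisioning OSDs on PVCs from the `manual` storage class might look roughly like the sketch below; the storageClassDeviceSets field names follow Rook's CephCluster CRD, and the count and size are illustrative assumptions.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3                # illustrative number of OSDs
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              storageClassName: manual   # the storage class the test checks for
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 10Gi         # illustrative size
```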
- 30 Jan, 2020 (1 commit)

Nizamudeen authored
Fixed conditions getting reset after an operator restart. Made the changes
required to implement conditions on the Rook Ceph cluster. Conditions
eliminate the current status.State and incorporate a type which provides a
much more descriptive view of the current status of the cluster.
Signed-off-by: Nizamudeen <nia@redhat.com>
- 29 Jan, 2020 (1 commit)

Sébastien Han authored
Add coverage in the CI for the external cluster feature.
Closes: https://github.com/rook/rook/issues/3691
Signed-off-by: Sébastien Han <seb@redhat.com>
- 24 Jan, 2020 (1 commit)

Sébastien Han authored
Somehow the job was stuck on 'fdisk -l'. The logic to wipe the block device
changed and we don't need that command anymore; we wipe the disk in the PV
loop.
Signed-off-by: Sébastien Han <seb@redhat.com>
- 15 Jan, 2020 (1 commit)

Sébastien Han authored
14.2.6 is released, let's use it!
Signed-off-by: Sébastien Han <seb@redhat.com>
- 10 Dec, 2019 (2 commits)

Sébastien Han authored
With https://github.com/ceph/ceph-container/pull/1529, we now build vX.X.X
versions, so we don't need the timestamp. We will also automatically pick up
the latest changes from the OS/packages more frequently.
Closes: https://github.com/rook/rook/issues/4414
Signed-off-by: Sébastien Han <seb@redhat.com>

Travis Nielsen authored
The new CephClient CRD needed to be cleaned up between the test suite runs.
Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
- 13 Nov, 2019 (2 commits)

Blaine Gardner authored
Before starting an integration test, run a job on all nodes to wipe the
disks so that the integration tests will actually test Rook using the disks
for storage.
Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>

Blaine Gardner authored
During upgrade tests, Rook should verify that it can still run legacy OSDs.
This includes verifying that directory-based OSDs, filestore disk OSDs, and
bluestore disk OSDs installed without ceph-volume (i.e., before Mimic
v13.2.2) can still be run after upgrade. This necessitates running the
upgrade test twice: once with filestore and once with bluestore.
Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
- 14 Oct, 2019 (1 commit)

travisn authored
The tests must ensure that the cluster is cleaned up so that other test
suites are not affected by incomplete cleanup of a previous test suite.
Removing the cluster finalizer is critical to this cleanup.
Signed-off-by: travisn <tnielsen@redhat.com>
- 17 Sep, 2019 (1 commit)

Sébastien Han authored
v14.2.4 just got released, so let's use it.
Signed-off-by: Sébastien Han <seb@redhat.com>
- 09 Sep, 2019 (1 commit)

travisn authored
The boolean helm settings were only being applied if their value was true.
If the desired value was false, the setting would be skipped in the chart
instead of being added with the value of false.
Signed-off-by: travisn <tnielsen@redhat.com>
- 04 Sep, 2019 (1 commit)

Sébastien Han authored
Let's use the latest minor release of Ceph, 14.2.3.
Signed-off-by: Sébastien Han <seb@redhat.com>
- 26 Aug, 2019 (1 commit)

Sébastien Han authored
The latest image has disabled ephemeral repositories, so it is now possible
for images older than 15 days to install packages without errors from
non-existing repositories.
Closes: https://github.com/rook/rook/issues/3662
Signed-off-by: Sébastien Han <seb@redhat.com>
- 23 Aug, 2019 (1 commit)

jiffin authored
Signed-off-by: jiffin <thottanjiffin@gmail.com>
Signed-off-by: Jon Cope <jcope@redhat.com>
Signed-off-by: jeffvance <jeff.h.vance@gmail.com>
- 13 Aug, 2019 (2 commits)

travisn authored
Signed-off-by: travisn <tnielsen@redhat.com>

travisn authored
With the 1.1 release we should not be creating bluestore OSDs with Rook's
legacy partitioning scheme. All new bluestore OSDs should be created with
ceph-volume. Therefore, the minimum version allowed is a version of Mimic
supporting ceph-volume. This commit also adds consistent version checking
between the Add and Update methods for the operator handling the CR events.
Signed-off-by: travisn <tnielsen@redhat.com>
- 02 Aug, 2019 (1 commit)

travisn authored
Now we collect logs for all pods and all their init and main containers
during the integration tests. We will no longer miss logs from the
integration tests as long as the pods are available when they are collected
at the end of the test. The pod descriptions are also written to a log file
instead of being included inline with the test output.
Signed-off-by: travisn <tnielsen@redhat.com> (cherry picked from commit 6e0bc338f90b3f36a89cf37b542ff4ff873b70bd)
- 22 Jul, 2019 (1 commit)

Sébastien Han authored
14.2.2 is out, and with it numerous fixes, so let's use it.
Signed-off-by: Sébastien Han <seb@redhat.com>
- 18 Jul, 2019 (1 commit)

Blaine Gardner authored
Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
- 16 Jul, 2019 (1 commit)

Sébastien Han authored
Logs will be lost if we collect them after pods are uninstalled, so we need
to collect them before purging.
Signed-off-by: Sébastien Han <seb@redhat.com>
- 05 Jul, 2019 (2 commits)

travisn authored
The CI instances are not always being properly cleaned up between runs. This
is an attempt to get the tests to ensure a clean install before proceeding
with the test.
Signed-off-by: travisn <tnielsen@redhat.com>

Poornima G authored
Currently in the cluster deploy, the ceph image points to
ceph/ceph:v14.2.1-20190430. This changes it to use
ceph/daemon-base:latest-nautilus-devel, as it is the latest Nautilus. Ceph
Nautilus has some changes in the cephfs mgr module which are required by the
ceph-csi canary image, without which the create/purge of RWX PVCs will fail.
These changes are not yet part of any released Ceph version, i.e.
v14.2.1-20190430. Even in the short term, we expect some changes to land in
Nautilus after 14.2.2, so waiting for the 14.2.2 release and updating to it
will not solve the issue. Therefore we use latest-nautilus-devel, which will
always have the changes required by ceph-csi. This is only a temporary change
in master, until 14.2.3 is released.
Signed-off-by: Poornima G <pgurusid@redhat.com>