This project is mirrored from https://gitee.com/wangmingco/rook.git.
- 18 Nov, 2020 14 commits
-
Arun Kumar Mohan authored
Fetched latest lib-bucket-provisioner changes as well. Signed-off-by:
Arun Kumar Mohan <amohan@redhat.com>
-
Arun Kumar Mohan authored
Signed-off-by:
Arun Kumar Mohan <amohan@redhat.com>
-
Arun Kumar Mohan authored
Updating the dependencies' versions to match the newer Operator SDK version v1.x. Signed-off-by:
Arun Kumar Mohan <amohan@redhat.com>
-
Sébastien Han authored
ci: fix intermittent device failure
-
Sébastien Han authored
ceph: add snapshot scheduling for mirrored pools
-
Sébastien Han authored
When we are done creating the partitions, it's good to give the kernel some time to reprobe the device and for udev to finish syncing up. Signed-off-by:
Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
Permissions on the disk might have changed due to the partitions being created, so the CI user is not able to read the device correctly. Closes: https://github.com/rook/rook/issues/6580 Signed-off-by:
Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
ci: run integration test on github actions
-
Sébastien Han authored
ceph: Restore mon clusterIP if the service is missing
-
Sébastien Han authored
ceph: update cleanupPolicy design doc
-
Sébastien Han authored
Now we can schedule snapshots on pools from the CephBlockPool CR when the pool is mirrored. It can be enabled like this:
```
mirroring:
  enabled: true
  mode: pool
  snapshotSchedules:
    - interval: 24h # daily snapshots
      startTime: 14:00:00-05:00
```
Multiple schedules are supported since snapshotSchedules is a list. Signed-off-by:
Sébastien Han <seb@redhat.com>
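For context, a minimal sketch of a full CephBlockPool CR using this feature (the pool name and the second schedule entry are illustrative additions, not from this commit):
```
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirrored-pool # illustrative name
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: pool
    snapshotSchedules:
      - interval: 24h # daily snapshots
        startTime: 14:00:00-05:00
      - interval: 1h # hypothetical second entry, showing that the list accepts multiple schedules
```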
-
Travis Nielsen authored
In a disaster recovery scenario, the mon service may have been accidentally deleted, while the expected mon endpoint is still found in the mon endpoints configmap. In this case, we create the mon service with the same endpoint as before. Signed-off-by:
Travis Nielsen <tnielsen@redhat.com>
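For illustration, a sketch of the endpoints ConfigMap consulted during the restore, assuming the usual rook-ceph-mon-endpoints layout (the mon ID and address are made up):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-mon-endpoints # the configmap holding the expected mon endpoints
  namespace: rook-ceph
data:
  data: a=10.107.242.49:6789 # mon ID -> expected clusterIP endpoint (illustrative address)
```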
-
subhamkrai authored
To run the integration tests, a few changes are needed in the manifests, such as using `deviceFilter` and other related changes. Signed-off-by:
subhamkrai <srai@redhat.com>
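A sketch of the kind of manifest change meant here, assuming the usual CephCluster storage selection fields (the filter regex is illustrative):
```
spec:
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sd[b-c]" # illustrative: only consider sdb and sdc for OSDs
```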
-
subhamkrai authored
As we are planning to move toward GitHub Actions for the integration tests, this commit runs the Ceph suites using a GitHub Action. Signed-off-by:
subhamkrai <srai@redhat.com>
-
- 17 Nov, 2020 9 commits
-
Travis Nielsen authored
docs: Clarify helm warning that could delete cluster
-
Travis Nielsen authored
In the helm chart, the CRDs are installed if crds.enabled is set to true. If false, the helm chart will not install them. If it is changed to false during an upgrade, the CRDs will be removed and the cluster will be destroyed. There is no way to prevent this while still being flexible about CRD management, so we make the warnings as clear as possible. Signed-off-by:
Travis Nielsen <tnielsen@redhat.com>
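A minimal sketch of the setting in question, as it would appear in a values.yaml override for the chart:
```
# values.yaml (rook-ceph helm chart)
crds:
  enabled: true # WARNING: flipping this to false on a running cluster removes the CRDs and destroys the cluster
```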
-
Travis Nielsen authored
ceph: support ceph cluster and CSI on multus in different namespace
-
Travis Nielsen authored
ceph: add external script to the container image
-
Travis Nielsen authored
ceph: update ceph quick start doc to use new crds.yaml file
-
Santosh Pillai authored
The CRDs were factored out of the common.yaml file and the helm chart. This PR updates the quick start guide to use the crds.yaml file. Signed-off-by:
Santosh Pillai <sapillai@redhat.com>
-
rohan47 authored
Added support for NADs (NetworkAttachmentDefinitions) from different namespaces. They can be referenced as <namespace>/<name-of-nad>, e.g. default/public-nw. Updated the multus doc to explain the same. Signed-off-by:
rohan47 <rohgupta@redhat.com>
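A sketch of how such a cross-namespace reference might appear in a CephCluster spec, assuming Rook's multus network selectors:
```
spec:
  network:
    provider: multus
    selectors:
      public: default/public-nw # <namespace>/<name-of-nad>
```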
-
Sébastien Han authored
We now ship the Rook container image with our external cluster scripts. Closes: https://github.com/rook/rook/issues/6644 Signed-off-by:
Sébastien Han <seb@redhat.com>
-
Santosh Pillai authored
The cleanup policy design doc is not up to date with respect to the latest implementation. This PR updates the design doc. Signed-off-by:
Santosh Pillai <sapillai@redhat.com>
-
- 16 Nov, 2020 3 commits
-
rohan47 authored
Support a Ceph cluster and CSI on multus deployed in different namespaces. Previously, CSI looked for the multus config only from the cluster deployed in the namespace in which rook-ceph-operator/CSI was deployed. Now it looks for multus configuration from Ceph clusters in all namespaces. Signed-off-by:
rohan47 <rohgupta@redhat.com>
-
Sébastien Han authored
ceph: update snapshotterVersion to v3.0.0
-
subhamkrai authored
In the integration tests we still use snapshotterVersion v2.1.0, but it's expected to use v3.0.0. Signed-off-by:
subhamkrai <srai@redhat.com>
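For reference, the snapshotter sidecar image is configurable through the operator settings; a sketch, assuming the ROOK_CSI_SNAPSHOTTER_IMAGE setting and the upstream image path of that era:
```
# rook-ceph-operator-config ConfigMap (excerpt)
data:
  ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.0"
```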
-
- 13 Nov, 2020 2 commits
-
Travis Nielsen authored
ceph: allow deprecated fs preservePoolsOnDelete
-
Blaine Gardner authored
Allow filesystems with preservePoolsOnDelete to be used without errors. A deprecation warning will still appear in the operator logs, but existing cephfilesystem resources with preservePoolsOnDelete will still be updated, and Rook will assume the new, preferred preserveFilesystemOnDelete is set. Signed-off-by:
Blaine Gardner <blaine.gardner@redhat.com>
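A sketch of the preferred field on a CephFilesystem CR (the filesystem name is illustrative; both field names are from the commit message):
```
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs # illustrative
  namespace: rook-ceph
spec:
  preserveFilesystemOnDelete: true # preferred over the deprecated preservePoolsOnDelete
```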
-
- 12 Nov, 2020 12 commits
-
Satoru Takeuchi authored
docs: Minor updates to the 1.5 Ceph upgrade guide
-
Travis Nielsen authored
The CRs must be created in a separate kubectl create command from the creation of the CRDs, otherwise the create command will fail. Signed-off-by:
Travis Nielsen <tnielsen@redhat.com>
-
Travis Nielsen authored
Some minor updates to the Ceph upgrade guide for the v1.5 release after final testing. Signed-off-by:
Travis Nielsen <tnielsen@redhat.com>
-
Travis Nielsen authored
docs: update ceph upgrade to use common.yaml
-
Blaine Gardner authored
Use common.yaml to apply changes to resources instead of keeping an upgrade-*-apply.yaml file. Update the upgrade docs to reflect this also. Signed-off-by:
Blaine Gardner <blaine.gardner@redhat.com>
-
Travis Nielsen authored
ceph: allow custom labels to be added to the discover daemonset
-
Travis Nielsen authored
ceph: fix nfs daemons not updating
-
Blaine Gardner authored
docs: revise Ceph upgrade docs for v1.5 release
-
Blaine Gardner authored
Update docs and manifests as needed for v1.5 release. Signed-off-by:
Blaine Gardner <blaine.gardner@redhat.com>
-
Blaine Gardner authored
The NFS reconciler was not updating NFS daemons that already existed. The controller would fail to update daemons if the number of active daemons did not increase or decrease. Make sure existing NFS daemons are updated to new versions on all scale-up and scale-down events. Resolves #6611 Signed-off-by:
Blaine Gardner <blaine.gardner@redhat.com>
-
Alexander Trost authored
Signed-off-by:
Alexander Trost <galexrt@googlemail.com>
-
Travis Nielsen authored
ceph: allow pod labels to be added to CSI components
-