Unverified commit 594746ad authored by Travis Nielsen, committed by GitHub

Merge pull request #3802 from SUSE/release-1.1-beta1

Release 1.1 beta1
Showing with 462 additions and 254 deletions
......@@ -420,16 +420,37 @@ ceph osd pool set rbd pg_num 512
## Custom ceph.conf Settings
**WARNING:** The advised method for controlling Ceph configuration is to manually use the Ceph CLI
or the Ceph dashboard because this offers the most flexibility. If configs must be set as part of
Rook, the `configOverrides` section of the CephCluster resource is recommended. Read the
[CephCluster CRD docs](ceph-cluster-crd.md#ceph-config-overrides) first before considering this
advanced method.
or the Ceph dashboard because this offers the most flexibility. It is highly recommended that this
only be used when absolutely necessary and that the `config` be reset to an empty string if/when the
configurations are no longer necessary. Configurations in the config file will make the Ceph cluster
less configurable from the CLI and dashboard and may make future tuning or debugging difficult.
Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be
set, and setting configs via dashboard requires at least one mgr to be available. Ceph may also have
a small number of very advanced settings that aren't able to be modified easily via CLI or
dashboard. In order to set configurations before monitors are available or to set problematic
configuration settings, the `rook-config-override` ConfigMap exists, and the `config` field can be
set with the contents of a `ceph.conf` file. The contents will be propagated to all mon, mgr, OSD,
MDS, and RGW daemons as an `/etc/ceph/ceph.conf` file.
**WARNING:** Rook performs no validation on the config, so the validity of the settings is the
user's responsibility.
With Rook, the full swath of
[Ceph settings](http://docs.ceph.com/docs/master/rados/configuration/) is available
to use on your storage cluster. When we supply Rook with a ceph.conf file, those
settings will be propagated to all Mon, OSD, MDS, and RGW daemons to use.
Each daemon will need to be restarted where you want the settings applied:
- mons: ensure all three mons are online and healthy before restarting each mon pod, one at a time.
- mgrs: the pods are stateless and can be restarted as needed, but note that this will disrupt the
Ceph dashboard during restart.
- OSDs: restart the pods by deleting them, one at a time, and running `ceph -s`
between each restart to ensure the cluster goes back to "active/clean" state (a sketch of this follows the list).
- RGW: the pods are stateless and can be restarted as needed.
- MDS: the pods are stateless and can be restarted as needed.
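A minimal sketch of that OSD loop, assuming the default `rook-ceph` namespace, the standard Rook pod labels, and the toolbox pod from [the toolbox docs](ceph-toolbox.md); adjust names and labels to your deployment:
```bash
# Delete one OSD pod; its deployment recreates it with the updated config.
kubectl -n rook-ceph delete pod -l app=rook-ceph-osd,ceph-osd-id=0

# From the toolbox, confirm the cluster returns to "active+clean" before
# repeating for the next OSD.
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod \
  -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}') -- ceph -s
```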
After the pod restart, the new settings should be in effect. Note that if the ConfigMap in the Ceph
cluster's namespace is created before the cluster is created, the daemons will pick up the settings
at first launch.
### Example
In this example we will set the default pool `size` to two, and tell OSD
daemons not to change the weight of OSDs on startup.
......@@ -455,9 +476,8 @@ data:
config: ""
```
To apply your desired configuration, you will need to update this ConfigMap.
The next time the daemon pod(s) start, the settings will be merged with the default
settings created by Rook.
To apply your desired configuration, you will need to update this ConfigMap. The next time the
daemon pod(s) start, they will use the updated configs.
```bash
kubectl -n rook-ceph edit configmap rook-config-override
......@@ -478,22 +498,6 @@ data:
osd pool default size = 2
```
Each daemon will need to be restarted where you want the settings applied:
- Mons: ensure all three mons are online and healthy before restarting each mon pod, one at a time
- OSDs: restart the pods by deleting them, one at a time, and running `ceph -s`
between each restart to ensure the cluster goes back to "active/clean" state.
- RGW: the pods are stateless and can be restarted as needed
- MDS: the pods are stateless and can be restarted as needed
After the pod restart, your new settings should be in effect. Note that if you create
the ConfigMap in the `rook-ceph` namespace before the cluster is even created,
the daemons will pick up the settings at first launch.
The only validation of the settings done by Rook is whether the settings can be merged
using the ini file format with the default settings created by Rook. Beyond that,
the validity of the settings is your responsibility.
## OSD CRUSH Settings
A useful view of the [CRUSH Map](http://docs.ceph.com/docs/master/rados/operations/crush-map/)
......
......@@ -168,6 +168,9 @@ parameters:
fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
# Optional, if you want to add dynamic resize for PVC. Works for Kubernetes 1.14+
# For now only ext3, ext4, xfs resize support provided, like in Kubernetes itself.
allowVolumeExpansion: true
```
Create the pool and storage class.
......@@ -228,6 +231,8 @@ parameters:
clusterNamespace: rook-ceph
# Specify the filesystem type of the volume. If not specified, it will use `ext4`.
fstype: xfs
# Works for Kubernetes 1.14+
allowVolumeExpansion: true
```
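With `allowVolumeExpansion` enabled, an existing PVC can be grown by raising its requested size. A minimal sketch, assuming a hypothetical claim named `ceph-block-pvc`, Kubernetes 1.14+, and an ext3/ext4/xfs filesystem:
```bash
# Hypothetical PVC name; the volume plugin and filesystem must support resize.
kubectl patch pvc ceph-block-pvc --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```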
......
......@@ -110,6 +110,8 @@ If this value is empty, each pod will get an ephemeral directory to store their
- `hostNetwork`: uses network of the hosts instead of using the SDN below the containers.
- `mon`: contains mon related options [mon settings](#mon-settings)
For more details on the mons and when to choose a number other than `3`, see the [mon health design doc](https://github.com/rook/rook/blob/master/design/mon-health.md).
- `mgr`: manager top level section
- `modules`: is the list of Ceph manager modules to enable
- `rbdMirroring`: The settings for rbd mirror daemon(s). Configuring which pools or images to be mirrored must be completed in the rook toolbox by running the
[rbd mirror](http://docs.ceph.com/docs/mimic/rbd/rbd-mirroring/) command.
- `workers`: The number of rbd daemons to perform the rbd mirroring between clusters.
......@@ -130,6 +132,8 @@ For more details on the mons and when to choose a number other than `3`, see the
- `disruptionManagement`: The section for configuring management of daemon disruptions
- `managePodBudgets`: if `true`, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph-managed-disruptionbudgets.md). The operator will block eviction of OSDs by default and unblock them safely when drains are detected.
- `osdMaintenanceTimeout`: is a duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
- `manageMachineDisruptionBudgets`: if `true`, the operator will create and manage MachineDisruptionBudgets to ensure OSDs are only fenced when the cluster is healthy. Only available on OpenShift.
- `machineDisruptionBudgetNamespace`: the namespace in which to watch the MachineDisruptionBudgets.
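Taken together, the `disruptionManagement` section might look like the snippet below; the values mirror the sample manifest later in this diff and are illustrative, not recommendations:
```yaml
disruptionManagement:
  managePodBudgets: false
  osdMaintenanceTimeout: 30
  manageMachineDisruptionBudgets: false
  machineDisruptionBudgetNamespace: openshift-machine-api
```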
### Mon Settings
......@@ -151,6 +155,20 @@ To change the defaults that the operator uses to determine the mon health and wh
- `ROOK_MON_HEALTHCHECK_INTERVAL`: The frequency with which to check if mons are in quorum (default is 45 seconds)
- `ROOK_MON_OUT_TIMEOUT`: The interval to wait before marking a mon as "out" and starting a new mon to replace it in the quorum (default is 600 seconds)
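These are environment variables on the Rook operator deployment. A sketch of overriding them, with the values assumed to use the `45s`/`600s` duration format matching the defaults above:
```yaml
# In the rook-ceph-operator Deployment's container spec (format assumed):
env:
- name: ROOK_MON_HEALTHCHECK_INTERVAL
  value: "45s"
- name: ROOK_MON_OUT_TIMEOUT
  value: "600s"
```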
### Mgr Settings
You can use the cluster CR to enable or disable any manager module. This can be configured like so:
```yaml
mgr:
modules:
- name: <name of the module>
enabled: true
```
Some modules will have special configuration to ensure the module is fully functional after being enabled. Specifically:
- `pg_autoscaler`: Rook will configure all new pools with PG autoscaling by setting: `osd_pool_default_pg_autoscale_mode = on`
### Node Settings
In addition to the cluster level settings specified above, each individual node can also specify configuration to override the cluster level settings and defaults.
If a node does not specify any configuration then it will inherit the cluster level settings.
......@@ -275,7 +293,7 @@ Here are the current minimum amounts of memory in MB to apply so that Rook will
- `mon`: 1024MB
- `mgr`: 512MB
- `osd`: 4096MB
- `osd`: 2048MB
- `mds`: 4096MB
- `rbdmirror`: 512MB
......@@ -289,62 +307,6 @@ For more information on resource requests/limits see the official Kubernetes doc
- `cpu`: Limit for CPU (example: one CPU core `1`, 50% of one CPU core `500m`).
- `memory`: Limit for Memory (example: one gigabyte of memory `1Gi`, half a gigabyte of memory `512Mi`).
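For example, a `resources` section constraining the mgr daemon might look like the following; the values are illustrative only and should respect the minimums listed above:
```yaml
resources:
  mgr:
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "1"
      memory: "1Gi"
```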
### Ceph config overrides
Users are advised to use Ceph's CLI or dashboard to configure Ceph; however, some users may want to
specify some settings they want set in the `CephCluster` manifest. This is possible using the
`configOverrides` section. Config overrides follow the same syntax as [Ceph's documentation for
setting config values](https://docs.ceph.com/docs/master/rados/configuration/ceph-conf/#commands),
and configuration is stored in Ceph's centralized configuration database.
Each item in `configOverrides` has `who`, `option`, and `value` properties. `who` can be `global` to
affect all Ceph daemons, a daemon type, an individual Ceph daemon, or a glob matching multiple
daemons. It may also use a
[mask](https://docs.ceph.com/docs/master/rados/configuration/ceph-conf/#sections-and-masks).
`option` is the configuration option to be overridden, and `value` is the value to set on the
configuration option. All properties are strings.
Rook will only set configuration overrides once when an orchestration is run. For example, on node
addition or removal, when disks change on a node, or when the operator starts (or restarts). This
means that users can change Ceph's configuration from the CLI or dashboard as desired without Rook
constantly fighting to control the setting. This is particularly valuable in situations where values
need to be changed to debug errors or test changes to tuning parameters.
**WARNING:** Remember that Rook will set these overrides any time the operator restarts, so user
changes via Ceph's CLI or dashboard to values set here will not persist forever.
Some examples:
```yaml
configOverrides:
- who: global
option: log_file
value: /var/log/custom-log-dir
- who: mon
option: mon_compact_on_start
value: "true"
- who: mgr.a
option: mgr_stats_period
value: "10"
- who: osd.*
option: osd_memory_target
value: "2147483648"
- who: osd/rack:foo
option: debug_ms
value: "20"
```
#### Advanced config
Setting configs in the Ceph mons' centralized database this way requires that at least one mon be
available for the configs to be set. Ceph may also have a small number of very advanced settings
that aren't able to be modified easily in the configuration database. In order to set configurations
before monitors are available or to set problematic configuration settings, the
`rook-config-override` ConfigMap exists, and the `config` field can be set with the contents of a
`ceph.conf` file. It is highly recommended that this only be used when absolutely necessary and that
the `config` be reset to an empty string if/when the configurations are no longer necessary, as this
will make the Ceph cluster less configurable from the CLI and dashboard and may make future tuning
or debugging difficult. Read more about this in the
[advanced configuration docs](ceph-advanced-configuration.md#custom-cephconf-settings).
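For reference, a minimal sketch of that override ConfigMap; the namespace and the setting shown are examples only:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    osd pool default size = 2
```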
## Samples
Here are several samples for configuring Ceph clusters. Each of the samples must also include the namespace and corresponding access granted for management by the Ceph operator. See the [common cluster resources](#common-cluster-resources) below.
......
---
title: Configuration
weight: 3700
indent: true
---
# Configuration
For almost any Ceph cluster, the user will want to, and may need to, change some Ceph
configurations. These changes are often warranted in order to tune performance to meet SLAs or
to update default data resiliency settings.
**WARNING:** Modify Ceph settings carefully, and review the
[Ceph configuration documentation](https://docs.ceph.com/docs/master/rados/configuration/) before
making any changes. Changing the settings could result in unhealthy daemons or even data loss if
used incorrectly.
## Required configurations
Rook and Ceph both strive to make configuration as easy as possible, but there are some
configuration options which users are well advised to consider for any production cluster.
### Default PG and PGP counts
`osd_pool_default_pg_num` and `osd_pool_default_pgp_num` set the default PG and PGP counts for
new pools. The number of PGs and PGPs can be configured on a per-pool basis, but it is highly
advised to set default values that are appropriate for your Ceph cluster. Appropriate values depend
on the number of OSDs the user expects to have backing each pool. The Ceph
backing each pool. The Ceph
[OSD and Pool config docs](https://docs.ceph.com/docs/master/rados/operations/placement-groups/#a-preselection-of-pg-num)
provide detailed information about how to tune these parameters.
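For example, from the Rook toolbox the defaults can be stored in Ceph's configuration database with commands along these lines; the counts shown are placeholders, not recommendations:
```bash
ceph config set global osd_pool_default_pg_num 128
ceph config set global osd_pool_default_pgp_num 128
```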
An easier option exists for Rook-Ceph clusters running Ceph Nautilus (v14.2.x) or newer. Nautilus
[introduced the PG auto-scaler mgr module](https://ceph.com/rados/new-in-nautilus-pg-merging-and-autotuning/)
capable of automatically managing PG and PGP values for pools. This module is not enabled by default
but can be enabled by the following setting in the [CephCluster CR](ceph-cluster-crd.md#mgr-settings):
```yaml
mgr:
modules:
- name: pg_autoscaler
enabled: true
```
With that setting, the autoscaler will be enabled for all new pools. If you do not desire to have
the autoscaler enabled for all pools, you will need to use the Rook toolbox to enable the module
and [enable the autoscaling](https://docs.ceph.com/docs/master/rados/operations/placement-groups/)
on individual pools.
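From the toolbox, that per-pool approach might look like the following; the pool name is hypothetical:
```bash
# Enable the module once, then opt individual pools in to autoscaling.
ceph mgr module enable pg_autoscaler
ceph osd pool set replicapool pg_autoscale_mode on
```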
## Specifying configuration options
### Toolbox + Ceph CLI
The most recommended way of configuring Ceph is to set Ceph's configuration directly. The first
method for doing so is to use Ceph's CLI from the Rook-Ceph toolbox pod. Using the toolbox pod is
detailed [here](ceph-toolbox.md). From the toolbox, the user can change Ceph configurations, enable
manager modules, create users and pools, and much more.
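Once inside the toolbox pod, configuration is plain Ceph CLI. A few illustrative commands; the module and pool names are examples only:
```bash
# Inspect the centralized configuration database and change a setting.
ceph config dump
ceph config set global osd_pool_default_size 3
# Enable a manager module and create a pool.
ceph mgr module enable prometheus
ceph osd pool create example-pool 32
```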
### Ceph Dashboard
The Ceph Dashboard, examined in more detail [here](ceph-dashboard.md), is another way of setting
some of Ceph's configuration directly. Configuration by the Ceph dashboard is recommended with the
same priority as configuration via the Ceph CLI (above).
### Advanced configuration via ceph.conf override ConfigMap
Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be
set, and setting configs via dashboard requires at least one mgr to be available. Ceph may also have
a small number of very advanced settings that aren't able to be modified easily via CLI or
dashboard. The **least** recommended method for configuring Ceph is intended as a last-resort
fallback in situations like these. This is covered in detail
[here](ceph-advanced-configuration.md#custom-cephconf-settings).
......@@ -36,7 +36,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
image: rook/ceph:v1.1.0-beta.0
image: rook/ceph:v1.1.0-beta.1
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
......
......@@ -17,7 +17,7 @@ metadata:
name: rook-edgefs
namespace: rook-edgefs
spec:
edgefsImageName: edgefs/edgefs:1.2.31
edgefsImageName: edgefs/edgefs:1.2.64
serviceAccount: rook-edgefs-cluster
dataDirHostPath: /data
storage:
......@@ -40,7 +40,7 @@ metadata:
name: rook-edgefs
namespace: rook-edgefs
spec:
edgefsImageName: edgefs/edgefs:1.2.31
edgefsImageName: edgefs/edgefs:1.2.64
serviceAccount: rook-edgefs-cluster
dataDirHostPath: /data
storage:
......@@ -63,7 +63,7 @@ Settings can be specified at the global level to apply to the cluster as a whole
### Cluster metadata
- `name`: The name that will be used internally for the EdgeFS cluster. Most commonly the name is the same as the namespace since multiple clusters are not supported in the same namespace.
- `namespace`: The Kubernetes namespace that will be created for the Rook cluster. The services, pods, and other resources created by the operator will be added to this namespace. The common scenario is to create a single Rook cluster. If multiple clusters are created, they must not have conflicting devices or host paths.
- `edgefsImageName`: EdgeFS image to use. If not specified then `edgefs/edgefs:latest` is used. We recommend specifying a particular image version for production use, for example `edgefs/edgefs:1.2.31`.
- `edgefsImageName`: EdgeFS image to use. If not specified then `edgefs/edgefs:latest` is used. We recommend specifying a particular image version for production use, for example `edgefs/edgefs:1.2.64`.
### Cluster Settings
- `dataDirHostPath`: The path on the host ([hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) where config and data should be stored for each of the services. If the directory does not exist, it will be created. Because this directory persists on the host, it will remain after pods are deleted. If `storage` settings are not provided, then the provisioned hostPath will also be used as a storage device for Target pods (automatic provisioning via `rtlfs`).
......@@ -92,7 +92,7 @@ If this value is empty, each pod will get an ephemeral directory to store their
- [storage configuration settings](#storage-configuration-settings)
- `skipHostPrepare`: By default, all nodes selected for EdgeFS deployment will be automatically configured via preparation jobs. If this option is set to `true`, node configuration will be skipped.
- `trlogProcessingInterval`: Controls how many seconds the cluster aggregates object modifications before they are processed by the accounting, bucket update, ISGW Link, and notification components. Has to be defined in seconds and must evenly divide 60, i.e. 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30. Default is 10. Recommended range is 2 - 20. This is a cluster-wide setting and cannot be easily changed after the cluster is created. Any new node added has to reflect exactly the same setting.
- `trlogKeepDays`: Controls how many days the cluster keeps transaction log interval batches with version manifest references. If you plan to have the cluster disconnected from ISGW downlinks for a longer period of time, consider increasing this value. Default is 7. This is a cluster-wide setting and cannot be easily changed after the cluster is created.
- `trlogKeepDays`: Controls how many days the cluster keeps transaction log interval batches with version manifest references. If you plan to have the cluster disconnected from ISGW downlinks for a longer period of time, consider increasing this value. Default is 3. This is a cluster-wide setting and cannot be easily changed after the cluster is created.
- `maxContainerCapacity`: Overrides the default total disk capacity per target container. Default is "132Ti".
- `useHostLocalTime`: Force usage of the host's /etc/localtime inside EdgeFS containers. Default is `false`.
#### Node Updates
......@@ -130,7 +130,7 @@ Below are the settings available, both at the cluster and individual node level,
The following storage selection settings are specific to EdgeFS and do not apply to other backends. All variables are key-value pairs represented as strings. While EdgeFS supports multiple backends, it is not recommended to mix them within the same cluster. In case of `devices` (physical or emulated raw disks), EdgeFS will automatically use the `rtrd` backend. In all other cases `rtlfs` (local file system) will be used.
**IMPORTANT** Keys are case-sensitive and values must be provided as strings.
- `useMetadataOffload`: Dynamically detect appropriate SSD/NVMe device to use for the metadata on each node. Performance can be improved by using a low latency device as the metadata device, while other spinning platter (HDD) devices on a node are used to store data. Typical and recommended proportion is in range of 1:1 - 1:6. Default is false. Applicable only to rtrd.
- `useMetadataMask`: Defines which parts of the metadata need to be stored on offloaded devices. Default is 0x7d, offload all except second level manifests. For maximum performance, when you have enough SSD/NVMe capacity provisioned, set it to 0xff, i.e. all metadata. Applicable only to rtrd.
- `useMetadataMask`: Defines which parts of the metadata need to be stored on offloaded devices. Default is 0xff, offload all metadata. To save SSD/NVMe capacity, set it to 0x7d to offload all except second level manifests. Applicable only to rtrd.
- `useBCache`: When `useMetadataOffload` is true, enable use of BCache. Default is false. Applicable only to rtrd and when host has "bcache" kernel module preloaded.
- `useBCacheWB`: When `useMetadataOffload` and `useBCache` are true, this option can enable use of the BCache write-back cache. By default, BCache is only used as a read cache in front of the HDD. Applicable only to rtrd.
- `useAllSSD`: When set to true, only SSD/NVMe non-rotational devices will be used. Default is false; if `useMetadataOffload` is not defined, then only rotational devices (HDDs) will be picked up during the node provisioning phase.
......@@ -139,6 +139,7 @@ The following storage selection settings are specific to EdgeFS and do not apply
- `mdReserved`: For the hybrid (SSD/HDD) use case, adjusting mdReserved can be necessary when combined with BCache read/write caches. Allowed range is 10-99% of the automatically calculated slice.
- `rtVerifyChid`: Verify transferred or read payload. Payload can be data or metadata chunk of flexible size between 4K and 8MB. EdgeFS uses SHA-3 variant to cryptographically sign each chunk and uses it for self validation, self healing and FlexHash addressing. In case of low CPU systems verification after networking transfer prior to write can be disabled by setting this parameter to 0. In case of high CPU systems, verification after read but before networking transfer can be enabled by setting this parameter to 2. Default is 1, i.e. verify after networking transfer only. Setting it to 0 may improve CPU utilization at the cost of reduced availability. However, for objects with 3 or more replicas, availability isn't going to be visibly affected.
- `lmdbPageSize`: Defines the default LMDB page size in bytes. Default is 16384. For capacity (all HDD) or hybrid (HDD/SSD) systems, consider increasing this value to 32768 to achieve higher throughput performance. For all-SSD and small database workloads, consider decreasing this to 8192 to achieve lower latency and higher IOPS. Please be advised that smaller values MAY cause fragmentation. Acceptable values are 4096, 8192, 16384 and 32768.
- `lmdbMdPageSize`: Defines the SSD metadata offload LMDB page size in bytes. Default is 8192. For a large number of small objects or files, consider decreasing this to 4096 to achieve better SSD capacity utilization. Acceptable values are 4096, 8192, 16384 and 32768.
- `sync`: Defines default behavior of write operations at device or directory level. Acceptable values are 0, 1 (default), 2, 3.
- `0`: No syncing will happen. Highest performance possible and good for HPC scratch types of deployments. This option will still tolerate pod crashes or software bugs. It will not survive server power loss and may cause node / device level inconsistency.
- `1`: Default method. Will guarantee node / device consistency in case of power loss with reduced durability.
......@@ -202,7 +203,7 @@ metadata:
name: rook-edgefs
namespace: rook-edgefs
spec:
edgefsImageName: edgefs/edgefs:1.2.31
edgefsImageName: edgefs/edgefs:1.2.64
dataDirHostPath: /var/lib/rook
serviceAccount: rook-edgefs-cluster
# cluster level storage configuration and selection
......@@ -226,7 +227,7 @@ metadata:
name: rook-edgefs
namespace: rook-edgefs
spec:
edgefsImageName: edgefs/edgefs:1.2.31
edgefsImageName: edgefs/edgefs:1.2.64
dataDirHostPath: /var/lib/rook
serviceAccount: rook-edgefs-cluster
# cluster level storage configuration and selection
......@@ -260,7 +261,7 @@ metadata:
name: rook-edgefs
namespace: rook-edgefs
spec:
edgefsImageName: edgefs/edgefs:1.2.31
edgefsImageName: edgefs/edgefs:1.2.64
dataDirHostPath: /var/lib/rook
serviceAccount: rook-edgefs-cluster
placement:
......@@ -295,7 +296,7 @@ metadata:
name: rook-edgefs
namespace: rook-edgefs
spec:
edgefsImageName: edgefs/edgefs:1.2.31
edgefsImageName: edgefs/edgefs:1.2.64
dataDirHostPath: /var/lib/rook
serviceAccount: rook-edgefs-cluster
# cluster level resource requests/limits configuration
......@@ -323,7 +324,7 @@ metadata:
name: rook-edgefs
namespace: rook-edgefs
spec:
edgefsImageName: edgefs/edgefs:1.2.31
edgefsImageName: edgefs/edgefs:1.2.64
dataDirHostPath: /var/lib/rook
serviceAccount: rook-edgefs-cluster
network:
......
......@@ -68,7 +68,7 @@ export CLUSTER_NAME="rook-edgefs"
The majority of the upgrade will be handled by the Rook operator. Begin the upgrade by changing the
EdgeFS image field in the cluster CRD (`spec:edgefsImageName`).
```sh
NEW_EDGEFS_IMAGE='edgefs/edgefs:1.2.50'
NEW_EDGEFS_IMAGE='edgefs/edgefs:1.2.64'
kubectl -n $CLUSTER_NAME patch Cluster $CLUSTER_NAME --type=merge \
-p "{\"spec\": {\"edgefsImageName\": \"$NEW_EDGEFS_IMAGE\"}}"
```
......
......@@ -474,6 +474,27 @@
revision = "279bed98673dd5bef374d3b6e4b09e2af76183bf"
version = "v1.0.0-rc1"
[[projects]]
branch = "openshift-4.2-cluster-api-0.1.0"
digest = "1:1707d29b96c606896605af52a284985abbe29980baeffeea486ad5aba4797638"
name = "github.com/openshift/cluster-api"
packages = [
"pkg/apis/machine/common",
"pkg/apis/machine/v1beta1",
]
pruneopts = "UT"
revision = "072f7d777dc81aac0dd222686e1516dd9e7db38b"
[[projects]]
digest = "1:231797c5447a21f16af4127737f8546a1cc77014ed65dc409a37245133b285f5"
name = "github.com/openshift/machine-api-operator"
packages = [
"pkg/apis/healthchecking",
"pkg/apis/healthchecking/v1alpha1",
]
pruneopts = "UT"
revision = "a0949226d20ea454cf08252a182a8e32054027c3"
[[projects]]
digest = "1:e5d0bd87abc2781d14e274807a470acd180f0499f8bf5bb18606e9ec22ad9de9"
name = "github.com/pborman/uuid"
......@@ -1392,7 +1413,8 @@
revision = "21c4ce38f2a793ec01e925ddc31216500183b773"
[[projects]]
digest = "1:b7f9d0b6c5e0e999f781f7b2277feab6a6a04c4c8159e04484a3ebb85aa68807"
branch = "release-0.2"
digest = "1:9f0ef29c01f23bbd2913084e7d9c692115c6c480ef6d1d5807773f077a1c57d3"
name = "sigs.k8s.io/controller-runtime"
packages = [
"pkg/cache",
......@@ -1416,6 +1438,7 @@
"pkg/reconcile",
"pkg/recorder",
"pkg/runtime/inject",
"pkg/scheme",
"pkg/source",
"pkg/source/internal",
"pkg/webhook",
......@@ -1425,7 +1448,6 @@
]
pruneopts = "UT"
revision = "e1159d6655b260c4812fd0792cd1344ecc96a57e"
version = "v0.2.0"
[[projects]]
branch = "release-1.14"
......@@ -1485,6 +1507,8 @@
"github.com/kube-object-storage/lib-bucket-provisioner/pkg/provisioner",
"github.com/kube-object-storage/lib-bucket-provisioner/pkg/provisioner/api",
"github.com/kube-object-storage/lib-bucket-provisioner/pkg/provisioner/api/errors",
"github.com/openshift/cluster-api/pkg/apis/machine/v1beta1",
"github.com/openshift/machine-api-operator/pkg/apis/healthchecking/v1alpha1",
"github.com/pkg/errors",
"github.com/rook/operator-kit",
"github.com/spf13/cobra",
......
......@@ -130,3 +130,7 @@ ignored = [
[[constraint]]
branch = "master"
name = "github.com/kube-object-storage/lib-bucket-provisioner"
[[override]]
name = "github.com/openshift/machine-api-operator"
revision = "a0949226d20ea454cf08252a182a8e32054027c3"
......@@ -51,9 +51,11 @@ an example usage
- Added a new property in `storageClassDeviceSets` named `portable`:
- If `true`, the OSDs will be allowed to move between nodes during failover. This requires a storage class that supports portability (e.g. `aws-ebs`, but not the local storage provisioner).
- If `false`, the OSDs will be assigned to a node permanently. Rook will configure Ceph's CRUSH map to support the portability.
- The Ceph cluster custom resource now contains a `configOverrides` section where users can specify
configuration changes to Ceph which Rook should apply.
- Rook can now manage MachineDisruptionBudgets for the OSDs (only available on OpenShift). MachineDisruptionBudgets for OSDs are dynamically managed as documented in the `disruptionManagement` section of the [CephCluster CR](Documentation/ceph-cluster-crd.md#cluster-settings). This can be enabled with the `manageMachineDisruptionBudgets` flag in the cluster CR.
- Rook can now manage PodDisruptionBudgets for the following Daemons: OSD, Mon, RGW, MDS. OSD budgets are dynamically managed as documented in the [design](https://github.com/rook/rook/blob/master/design/ceph-managed-disruptionbudgets.md). This can be enabled with the `managePodBudgets` flag in the cluster CR. When this is enabled, drains on OSDs will be blocked by default and dynamically unblocked in a safe manner one failureDomain at a time. When a failure domain is draining, it will be marked as no out for a longer time than the default DOWN/OUT interval.
- The cluster CRD now has a new `mgr` config section for enabling Ceph manager modules
- Flexvolume plugin now supports dynamic PVC expansion.
- The Rook-enforced minimum memory for OSD pods has been reduced from 4096M to 2048M
### YugabyteDB
......
......@@ -36,12 +36,15 @@ $(HELM):
@rm -fr $(TOOLS_HOST_DIR)/tmp
@$(HELM) init -c
# TODO: after helm 3.0 is released, just pass --set image.tag=foo
# to the helm lint and helm package steps
define helm.chart
$(HELM_OUTPUT_DIR)/$(1)-$(VERSION).tgz: $(HELM) $(HELM_OUTPUT_DIR) $(shell find $(HELM_CHARTS_DIR)/$(1) -type f)
@echo === helm package $(1)
@$(SED_CMD) 's|%%VERSION%%|$(VERSION)|g' $(HELM_CHARTS_DIR)/$(1)/values.yaml
@$(HELM) lint --strict $(abspath $(HELM_CHARTS_DIR)/$(1))
@$(HELM) package --version $(VERSION) -d $(HELM_OUTPUT_DIR) $(abspath $(HELM_CHARTS_DIR)/$(1))
@cp -r $(HELM_CHARTS_DIR)/$(1) $(OUTPUT_DIR)
@$(SED_CMD) 's|%%VERSION%%|$(VERSION)|g' $(OUTPUT_DIR)/$(1)/values.yaml
@$(HELM) lint --strict $(abspath $(OUTPUT_DIR)/$(1))
@$(HELM) package --version $(VERSION) -d $(HELM_OUTPUT_DIR) $(abspath $(OUTPUT_DIR)/$(1))
$(HELM_INDEX): $(HELM_OUTPUT_DIR)/$(1)-$(VERSION).tgz
endef
$(foreach p,$(HELM_CHARTS),$(eval $(call helm.chart,$(p))))
......
values.yaml-e
......@@ -133,6 +133,38 @@ rules:
- "*"
verbs:
- "*"
- apiGroups:
- policy
- apps
resources:
#this is for the clusterdisruption controller
- poddisruptionbudgets
#this is for both clusterdisruption and nodedrain controllers
- deployments
verbs:
- "*"
- apiGroups:
- healthchecking.openshift.io
resources:
- machinedisruptionbudgets
verbs:
- get
- list
- watch
- create
- update
- delete
- apiGroups:
- machine.openshift.io
resources:
- machines
verbs:
- get
- list
- watch
- create
- update
- delete
---
# Aspects of ceph-mgr that require cluster-wide access
kind: ClusterRole
......@@ -204,8 +236,52 @@ rules:
- get
- list
- watch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: rook-ceph-object-bucket
labels:
operator: rook
storage-backend: ceph
rbac.ceph.rook.io/aggregate-to-rook-ceph-mgr-cluster: "true"
rules:
- apiGroups:
- ""
verbs:
- "*"
resources:
- secrets
- configmaps
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- get
- list
- watch
- apiGroups:
- "objectbucket.io"
verbs:
- "*"
resources:
- "*"
{{- if ((.Values.agent) and .Values.agent.mountSecurityMode) and ne .Values.agent.mountSecurityMode "Any" }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: rook-ceph-osd
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
......
......@@ -31,6 +31,33 @@ subjects:
name: rook-ceph-mgr
namespace: {{ .Release.Namespace }}
---
# Allow the ceph osd to access cluster-wide resources necessary for determining their topology location
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: rook-ceph-osd
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: rook-ceph-osd
subjects:
- kind: ServiceAccount
name: rook-ceph-osd
namespace: {{ .Release.Namespace }}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: rook-ceph-object-bucket
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: rook-ceph-object-bucket
subjects:
- kind: ServiceAccount
name: rook-ceph-system
namespace: rook-ceph
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
......
......@@ -47,6 +47,15 @@ spec:
maximum: 9
minimum: 0
type: integer
mgr:
properties:
modules:
items:
properties:
name:
type: string
enabled:
type: boolean
network:
properties:
hostNetwork:
......@@ -197,9 +206,9 @@ spec:
codingChunks:
type: integer
additionalPrinterColumns:
- name: MdsCount
- name: ActiveMDS
type: string
description: Number of MDSs
description: Number of desired active MDS daemons
JSONPath: .spec.metadataServer.activeCount
- name: Age
type: date
......
......@@ -21,7 +21,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-after-reboot-check
image: rook/ceph:v1.1.0-beta.0
image: rook/ceph:v1.1.0-beta.1
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
......
......@@ -21,7 +21,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-before-reboot-check
image: rook/ceph:v1.1.0-beta.0
image: rook/ceph:v1.1.0-beta.1
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
......
......@@ -186,7 +186,7 @@ subjects:
serviceAccountName: rook-cassandra-operator
containers:
- name: rook-cassandra-operator
image: rook/cassandra:v1.1.0-beta.0
image: rook/cassandra:v1.1.0-beta.1
imagePullPolicy: "Always"
args: ["cassandra", "operator"]
env:
......
......@@ -73,3 +73,8 @@ spec:
volumeMode: Block
accessModes:
- ReadWriteOnce
disruptionManagement:
managePodBudgets: false
osdMaintenanceTimeout: 30
manageMachineDisruptionBudgets: false
machineDisruptionBudgetNamespace: openshift-machine-api