Unverified Commit 9a3dc52a authored by travisn

ceph: minimum version allowed is mimic 13.2.4


With the 1.1 release we should not be creating bluestore OSDs
with rook's legacy partitioning scheme. All new bluestore OSDs
should be created with ceph-volume. Therefore, the minimum
version allowed is a version of mimic supporting ceph-volume.

This commit also adds consistent version checking between
the Add and Update methods for the operator handling the
CR events.
Signed-off-by: travisn <tnielsen@redhat.com>
parent 258a6655
Showing with 147 additions and 151 deletions
......@@ -88,12 +88,12 @@ Settings can be specified at the global level to apply to the cluster as a whole
### Cluster Settings
- `cephVersion`: The version information for launching the ceph daemons.
- `image`: The image used for running the ceph daemons. For example, `ceph/ceph:v12.2.9-20181026` or `ceph/ceph:v13.2.2-20181023`.
- `image`: The image used for running the ceph daemons. For example, `ceph/ceph:v13.2.6-20190604` or `ceph/ceph:v14.2.1-20190430`.
For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/).
To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version.
Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v13` will be updated each time a new mimic build is released.
Using the `v13` or similar tag is not recommended in production because it may lead to inconsistent versions of the image running across different nodes in the cluster.
- `allowUnsupported`: If `true`, allow an unsupported major version of the Ceph release. Currently only `luminous` and `mimic` are supported, so `nautilus` would require this to be set to `true`. Should be set to `false` in production.
Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v14` will be updated each time a new nautilus build is released.
Using the `v14` or similar tag is not recommended in production because it may lead to inconsistent versions of the image running across different nodes in the cluster.
- `allowUnsupported`: If `true`, allow an unsupported major version of the Ceph release. Currently `mimic` and `nautilus` are supported, so `octopus` would require this to be set to `true`. Should be set to `false` in production.
- `dataDirHostPath`: The path on the host ([hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) where config and data should be stored for each of the services. If the directory does not exist, it will be created. Because this directory persists on the host, it will remain after pods are deleted.
- On **Minikube** environments, use `/data/rook`. Minikube boots into a tmpfs but it provides some [directories](https://github.com/kubernetes/minikube/blob/master/docs/persistent_volumes.md) where files can be persisted across reboots. Using one of these directories will ensure that Rook's data and configuration files are persisted and that enough storage space is available.
- **WARNING**: For test scenarios, if you delete a cluster and start a new cluster on the same hosts, the path used by `dataDirHostPath` must be deleted. Otherwise, stale keys and other config will remain from the previous cluster and the new mons will fail to start.
......
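For orientation, here is a minimal Go sketch of how these documented settings map onto the operator's spec types. The `CephVersionSpec` fields (`Image`, `AllowUnsupported`) are confirmed by the type change later in this diff; the `DataDirHostPath` field name is an assumption about the `ClusterSpec` struct, not something shown in this commit.

```go
package main

import (
	"fmt"

	cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"
)

func main() {
	// Mirror of the documented cluster settings: pin a specific image tag and
	// keep allowUnsupported false so only supported releases are accepted.
	spec := cephv1.ClusterSpec{
		CephVersion: cephv1.CephVersionSpec{
			Image:            "ceph/ceph:v14.2.1-20190430",
			AllowUnsupported: false,
		},
		// Assumed to be the Go field backing the dataDirHostPath setting above.
		DataDirHostPath: "/var/lib/rook",
	}
	fmt.Printf("cephVersion image=%s allowUnsupported=%t\n",
		spec.CephVersion.Image, spec.CephVersion.AllowUnsupported)
}
```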
......@@ -22,12 +22,10 @@ This is the default setting in the example manifests.
enabled: true
```
The Rook operator will enable the ceph-mgr dashboard module. A K8s service will be created to expose that port inside the cluster. The ports enabled by Rook will depend
on the version of Ceph that is running:
- Luminous: Port 7000 on http
- Mimic and newer: Port 8443 on https
The Rook operator will enable the ceph-mgr dashboard module. A K8s service will be created to expose that port inside the cluster. Rook will
enable port 8443 for https access.
This example shows that port 8443 was configured for Mimic or newer.
This example shows that port 8443 was configured.
```bash
kubectl -n rook-ceph get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
......@@ -71,8 +69,7 @@ The following dashboard configuration settings are supported:
* `ssl` The dashboard may be served without SSL (useful for when you deploy the
dashboard behind a proxy already served using SSL) by setting the `ssl` option
to be false. Note that the ssl setting will be ignored in Luminous as well as
Mimic 13.2.2 or older where it is not supported
to be false.
## Viewing the Dashboard External to the Cluster
......@@ -87,7 +84,6 @@ NodePort, LoadBalancer, or ExternalIPs.
The simplest way to expose the service in minikube or similar environment is using the NodePort to open a port on the
VM that can be accessed by the host. To create a service with the NodePort, save this yaml as `dashboard-external-https.yaml`.
(For Luminous you will need to set the `port` and `targetPort` to 7000 and connect via `http`.)
```yaml
apiVersion: v1
......
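Below is a hedged Go rendering of such a NodePort service, analogous to the `dashboard-external-https.yaml` manifest referenced above (whose YAML is elided in this diff). The object name, labels, and selector are assumptions rather than the shipped manifest; port 8443 matches the documentation.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical NodePort service exposing the dashboard's https port on the node.
	svc := v1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "rook-ceph-mgr-dashboard-external-https", Namespace: "rook-ceph"},
		Spec: v1.ServiceSpec{
			Type:     v1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "rook-ceph-mgr"}, // assumed selector
			Ports: []v1.ServicePort{{
				Name:       "dashboard",
				Port:       8443,
				TargetPort: intstr.FromInt(8443),
				Protocol:   v1.ProtocolTCP,
			}},
		},
	}
	out, err := yaml.Marshal(&svc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```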
......@@ -52,10 +52,10 @@ Now we are ready to setup [block](https://ceph.com/ceph-storage/block-storage/),
### Block Devices
Ceph can provide raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in kubernetes pods. The storage class is defined with a [pool](http://docs.ceph.com/docs/nautilus/rados/operations/pools/) which defines the level of data redundancy in Ceph:
Ceph can provide raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in kubernetes pods. The storage class is defined with [a pool](http://docs.ceph.com/docs/master/rados/operations/pools/) which defines the level of data redundancy in Ceph:
- `storageclass.yaml`: This example illustrates replication of 3 for production scenarios and requires at least three nodes. Your data is replicated on three different kubernetes worker nodes and intermittent or long-lasting single node failures will not result in data unavailability or loss.
- `storageclass-ec.yaml`: Configures erasure coding for data durability rather than replication. [Ceph's erasure coding](http://docs.ceph.com/docs/nautilus/rados/operations/erasure-code/) is more efficient than replication so you can get high reliability without the 3x replication cost of the preceding example (but at the cost of higher computational encoding and decoding costs on the worker nodes). Erasure coding requires at least three nodes. See the [Erasure coding](ceph-pool-crd.md#erasure-coded) documentation for more details. **Note: Erasure coding is only available with the flex driver. Support from the CSI driver is coming soon.**
- `storageclass-ec.yaml`: Configures erasure coding for data durability rather than replication. [Ceph's erasure coding](http://docs.ceph.com/docs/master/rados/operations/erasure-code/) is more efficient than replication so you can get high reliability without the 3x replication cost of the preceding example (but at the cost of higher computational encoding and decoding costs on the worker nodes). Erasure coding requires at least three nodes. See the [Erasure coding](ceph-pool-crd.md#erasure-coded) documentation for more details. **Note: Erasure coding is only available with the flex driver. Support from the CSI driver is coming soon.**
- `storageclass-test.yaml`: Replication of 1 for test scenarios and it requires only a single node. Do not use this for applications that store valuable data or have high-availability storage requirements, since a single node failure can result in data loss.
The storage classes are found in different sub-directories depending on the driver:
......
......@@ -54,7 +54,7 @@ High performance applications typically will not use erasure coding due to the p
When creating an erasure-coded pool, it is highly recommended to create the pool when you have **bluestore OSDs** in your cluster
(see the [OSD configuration settings](ceph-cluster-crd.md#osd-configuration-settings)). Filestore OSDs have
[limitations](http://docs.ceph.com/docs/luminous/rados/operations/erasure-code/#erasure-coding-with-overwrites) that are unsafe and lower performance.
[limitations](http://docs.ceph.com/docs/master/rados/operations/erasure-code/#erasure-coding-with-overwrites) that are unsafe and lower performance.
## Pool Settings
......
......@@ -274,8 +274,9 @@ Verify the Ceph cluster's health using the [health verification section](#health
# Ceph Version Upgrades
Rook 1.0 is the last Rook release which will support Ceph's Luminous (v12.x.x) version. Users are
advised to upgrade to Mimic (v13.x.x) or Nautilus (v14.x.x) now.
Rook 1.0 was the last Rook release to support Ceph's Luminous (v12.x.x) version. Users are
required to upgrade to Mimic (v13.2.4 or newer) or Nautilus (v14.2.x) now. Rook 1.1 will only run with
Mimic or newer, though running with Octopus requires the `allowUnsupported: true` flag.
**IMPORTANT: This section only applies to clusters running Rook 1.0 or newer**
......
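A small sketch of the version gate this section describes, using the operator's `cephver` types and mirroring the new `TestMinVersion` test further down in this diff:

```go
package main

import (
	"fmt"

	cephver "github.com/rook/rook/pkg/operator/ceph/version"
)

func main() {
	// Luminous 12.2.10 and Mimic 13.2.3 fall below the new floor;
	// Mimic 13.2.4 is the first release the operator accepts.
	for _, v := range []cephver.CephVersion{
		{Major: 12, Minor: 2, Extra: 10},
		{Major: 13, Minor: 2, Extra: 4},
		{Major: 14, Minor: 2, Extra: 1},
	} {
		fmt.Printf("%s meets the minimum (%s): %t\n",
			&v, &cephver.Minimum, v.IsAtLeast(cephver.Minimum))
	}
}
```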
......@@ -15,6 +15,7 @@ an example usage
### Ceph
- The minimum version supported by Rook is now Ceph Mimic v13.2.4.
- The Ceph CSI driver is enabled by default and preferred over the flex driver
- The flex driver can be disabled in operator.yaml by setting ROOK_ENABLE_FLEX_DRIVER=false
- The CSI drivers can be disabled by setting ROOK_CSI_ENABLE_CEPHFS=false and ROOK_CSI_ENABLE_RBD=false
......@@ -46,6 +47,8 @@ an example usage
## Breaking Changes
### Ceph
- The minimum version supported by Rook is Ceph Mimic v13.2.4. Before upgrading to v1.1 it is required to update the version of Ceph to at least this version.
- The CSI driver is enabled by default. Documentation has been changed significantly for block and filesystem to use the CSI driver instead of flex.
While the flex driver is still supported, it is anticipated to be deprecated soon.
......
......@@ -23,9 +23,6 @@ spec:
type: boolean
image:
type: string
name:
pattern: ^(luminous|mimic|nautilus)$
type: string
dashboard:
properties:
enabled:
......
......@@ -17,11 +17,12 @@ metadata:
spec:
cephVersion:
# The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
# v12 is luminous, v13 is mimic, and v14 is nautilus.
# v13 is mimic, v14 is nautilus, and v15 is octopus.
# RECOMMENDATION: In production, use a specific version tag instead of the general v14 flag, which pulls the latest release and could result in different
# versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
image: ceph/ceph:v14.2.2-20190722
# Whether to allow unsupported versions of Ceph. Currently luminous, mimic and nautilus are supported, with the recommendation to upgrade to nautilus.
# Whether to allow unsupported versions of Ceph. Currently mimic and nautilus are supported, with the recommendation to upgrade to nautilus.
# Octopus is the version allowed when this is set to true.
# Do not set to true in production.
allowUnsupported: false
# The path on the host where configuration files will be persisted. Must be specified.
......
......@@ -42,9 +42,6 @@ spec:
type: boolean
image:
type: string
name:
pattern: ^(luminous|mimic|nautilus)$
type: string
dashboard:
properties:
enabled:
......
......@@ -84,7 +84,7 @@ type ClusterSpec struct {
// VersionSpec represents the settings for the Ceph version that Rook is orchestrating.
type CephVersionSpec struct {
// Image is the container image used to launch the ceph daemons, such as ceph/ceph:v12.2.7 or ceph/ceph:v13.2.1
// Image is the container image used to launch the ceph daemons, such as ceph/ceph:v13.2.6 or ceph/ceph:v14.2.1
Image string `json:"image,omitempty"`
// Whether to allow unsupported versions (do not set to true in production)
......
/*
Copyright 2018 The Rook Authors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
const (
DefaultLuminousImage = "ceph/ceph:v12.2.9-20181026"
)
......@@ -171,8 +171,8 @@ func IsMultiFSEnabled() bool {
// SetNumMDSRanks sets the number of mds ranks (max_mds) for a Ceph filesystem.
func SetNumMDSRanks(context *clusterd.Context, cephVersion cephver.CephVersion, clusterName, fsName string, activeMDSCount int32) error {
// Noted sections 1 and 2 are necessary for reducing max_mds in Luminous.
// See more: [1] http://docs.ceph.com/docs/luminous/cephfs/upgrading/
// Noted sections 1 and 2 are necessary for reducing max_mds.
// See more: [1] http://docs.ceph.com/docs/nautilus/cephfs/upgrading/
// [2] https://tracker.ceph.com/issues/23172
// * Noted section 1 - See note at top of function
......@@ -195,7 +195,7 @@ func SetNumMDSRanks(context *clusterd.Context, cephVersion cephver.CephVersion,
// Now check the error to see if we can even determine whether we should reduce or not
if errAtStart != nil {
return fmt.Errorf(`failed to get filesystem %s info needed to ensure mds rank can be changed correctly,
if Ceph version is Luminous (12.y.z) and num active mdses (max_mds) was lowered, USER should deactivate extra active mdses manually: %v`,
if num active mdses (max_mds) was lowered, USER should deactivate extra active mdses manually: %+v`,
fsName, errAtStart)
}
if int(activeMDSCount) > fsAtStart.MDSMap.MaxMDS {
......
......@@ -38,23 +38,14 @@ func MgrDisableModule(context *clusterd.Context, clusterName, name string) error
func MgrSetConfig(context *clusterd.Context, clusterName, mgrName string, cephVersion cephver.CephVersion, key, val string, force bool) (bool, error) {
var getArgs, setArgs []string
mgrID := fmt.Sprintf("mgr.%s", mgrName)
if cephVersion.IsLuminous() {
getArgs = append(getArgs, "config-key", "get", key)
if val == "" {
setArgs = append(setArgs, "config-key", "del", key)
} else {
setArgs = append(setArgs, "config-key", "set", key, val)
}
getArgs = append(getArgs, "config", "get", mgrID, key)
if val == "" {
setArgs = append(setArgs, "config", "rm", mgrID, key)
} else {
getArgs = append(getArgs, "config", "get", mgrID, key)
if val == "" {
setArgs = append(setArgs, "config", "rm", mgrID, key)
} else {
setArgs = append(setArgs, "config", "set", mgrID, key, val)
}
if force && cephVersion.IsAtLeastNautilus() {
setArgs = append(setArgs, "--force")
}
setArgs = append(setArgs, "config", "set", mgrID, key, val)
}
if force && cephVersion.IsAtLeastNautilus() {
setArgs = append(setArgs, "--force")
}
// Retrieve previous value to monitor changes
......
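Because the removed and added lines are interleaved above, here is a sketch of how the argument construction reads after this change, reassembled from the added lines only. `buildMgrConfigArgs` is a hypothetical wrapper name used for illustration; the real logic stays inside `MgrSetConfig`.

```go
package main

import (
	"fmt"

	cephver "github.com/rook/rook/pkg/operator/ceph/version"
)

// buildMgrConfigArgs reflects MgrSetConfig after this change: the Luminous
// config-key branch is gone, the mgr-scoped `config` commands are always used,
// and --force is appended only on Nautilus or newer.
func buildMgrConfigArgs(mgrName, key, val string, force bool, cephVersion cephver.CephVersion) (getArgs, setArgs []string) {
	mgrID := fmt.Sprintf("mgr.%s", mgrName)
	getArgs = append(getArgs, "config", "get", mgrID, key)
	if val == "" {
		setArgs = append(setArgs, "config", "rm", mgrID, key)
	} else {
		setArgs = append(setArgs, "config", "set", mgrID, key, val)
	}
	if force && cephVersion.IsAtLeastNautilus() {
		setArgs = append(setArgs, "--force")
	}
	return getArgs, setArgs
}

func main() {
	get, set := buildMgrConfigArgs("a", "mgr/dashboard/server_addr", "", false,
		cephver.CephVersion{Major: 14, Minor: 2, Extra: 1})
	fmt.Println(get, set)
}
```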
......@@ -84,13 +84,15 @@ func newCluster(c *cephv1.CephCluster, context *clusterd.Context, csiMutex *sync
}
}
// detectCephVersion loads the ceph version from the image and checks that it meets the version requirements to
// run in the cluster
func (c *cluster) detectCephVersion(rookImage, cephImage string, timeout time.Duration) (*cephver.CephVersion, error) {
logger.Infof("detecting the ceph image version for image %s...", cephImage)
versionReporter, err := cmdreporter.New(
c.context.Clientset, &c.ownerRef,
detectVersionName, detectVersionName, c.Namespace,
[]string{"ceph"}, []string{"--version"},
rookImage, cephImage,
)
rookImage, cephImage)
if err != nil {
return nil, fmt.Errorf("failed to set up ceph version job. %+v", err)
}
......@@ -112,11 +114,27 @@ func (c *cluster) detectCephVersion(rookImage, cephImage string, timeout time.Du
if err != nil {
return nil, fmt.Errorf("failed to extract ceph version. %+v", err)
}
logger.Infof("Detected ceph image version: %s", version)
return version, nil
}
func (c *cluster) validateCephVersion(version *cephver.CephVersion) error {
if !version.IsAtLeast(cephver.Minimum) {
return fmt.Errorf("the version does not meet the minimum version: %s", cephver.Minimum.String())
}
if !version.Supported() {
logger.Warningf("unsupported ceph version detected: %s.", version)
if c.Spec.CephVersion.AllowUnsupported {
return nil
}
return fmt.Errorf("allowUnsupported must be set to true to run with this version: %v", version)
}
return nil
}
func (c *cluster) createInstance(rookImage string, cephVersion cephver.CephVersion) error {
var err error
c.setOrchestrationNeeded()
......
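For clarity, a small standalone sketch of the decision `validateCephVersion` makes above, using only the calls that function itself makes (`IsAtLeast`, `Supported`, `cephver.Minimum`); the concrete versions mirror the new tests added below.

```go
package main

import (
	"fmt"

	cephver "github.com/rook/rook/pkg/operator/ceph/version"
)

// decide mirrors the order of checks in validateCephVersion: first the hard
// minimum, then the supported-release check that allowUnsupported can bypass.
func decide(v cephver.CephVersion, allowUnsupported bool) string {
	if !v.IsAtLeast(cephver.Minimum) {
		return "rejected: below minimum " + cephver.Minimum.String()
	}
	if !v.Supported() && !allowUnsupported {
		return "rejected: unsupported release and allowUnsupported=false"
	}
	return "accepted"
}

func main() {
	fmt.Println(decide(cephver.CephVersion{Major: 13, Minor: 2, Extra: 3}, false)) // below 13.2.4
	fmt.Println(decide(cephver.CephVersion{Major: 14, Minor: 2, Extra: 1}, false)) // nautilus, supported
	fmt.Println(decide(cephver.CephVersion{Major: 15}, true))                      // octopus, needs the flag
}
```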
......@@ -14,15 +14,18 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
// Package cluster to manage a Ceph cluster.
// Package cluster to manage Kubernetes storage.
package cluster
import (
"encoding/json"
"testing"
cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"
"github.com/rook/rook/pkg/clusterd"
"github.com/rook/rook/pkg/daemon/ceph/client"
cephver "github.com/rook/rook/pkg/operator/ceph/version"
testop "github.com/rook/rook/pkg/operator/test"
"github.com/stretchr/testify/assert"
)
......@@ -108,3 +111,46 @@ func TestDiffImageSpecAndClusterRunningVersion(t *testing.T) {
assert.NoError(t, err)
assert.False(t, m)
}
func TestMinVersion(t *testing.T) {
c := testSpec()
c.Spec.CephVersion.AllowUnsupported = true
// All versions less than 13.2.4 are invalid
v := &cephver.CephVersion{Major: 12, Minor: 2, Extra: 10}
assert.Error(t, c.validateCephVersion(v))
v = &cephver.CephVersion{Major: 13, Minor: 2, Extra: 3}
assert.Error(t, c.validateCephVersion(v))
// All versions at least 13.2.4 are valid
v = &cephver.CephVersion{Major: 13, Minor: 2, Extra: 4}
assert.NoError(t, c.validateCephVersion(v))
v = &cephver.CephVersion{Major: 14}
assert.NoError(t, c.validateCephVersion(v))
v = &cephver.CephVersion{Major: 15}
assert.NoError(t, c.validateCephVersion(v))
}
func TestSupportedVersion(t *testing.T) {
c := testSpec()
// Supported versions are valid
v := &cephver.CephVersion{Major: 14, Minor: 2, Extra: 0}
assert.NoError(t, c.validateCephVersion(v))
// Unsupported versions are not valid
v = &cephver.CephVersion{Major: 15, Minor: 2, Extra: 0}
assert.Error(t, c.validateCephVersion(v))
// Unsupported versions are now valid
c.Spec.CephVersion.AllowUnsupported = true
assert.NoError(t, c.validateCephVersion(v))
}
func testSpec() cluster {
clientset := testop.New(1)
context := &clusterd.Context{
Clientset: clientset,
}
return cluster{Spec: &cephv1.ClusterSpec{}, context: context}
}
......@@ -259,22 +259,19 @@ func (c *ClusterController) onAdd(obj interface{}) {
}
// Start the Rook cluster components. Retry several times in case of failure.
validOrchestration := true
failedMessage := ""
state := cephv1.ClusterStateError
err = wait.Poll(clusterCreateInterval, clusterCreateTimeout, func() (bool, error) {
cephVersion, err := cluster.detectCephVersion(c.rookImage, cluster.Spec.CephVersion.Image, detectCephVersionTimeout)
cephVersion, canRetry, err := c.detectAndValidateCephVersion(cluster, cluster.Spec.CephVersion.Image)
if err != nil {
logger.Errorf("unknown ceph major version. %+v", err)
return false, nil
}
if !cluster.Spec.CephVersion.AllowUnsupported {
if !cephVersion.Supported() {
err = fmt.Errorf("unsupported ceph version detected: %s. allowUnsupported must be set to true to run with this version", cephVersion)
logger.Errorf("%+v", err)
validOrchestration = false
// it may seem strange to log error and exit true but we don't want to retry if the version is not supported
failedMessage = fmt.Sprintf("failed the ceph version check. %+v", err)
logger.Errorf(failedMessage)
if !canRetry {
// it may seem strange to exit true but we don't want to retry if the version is not supported
return true, nil
}
return false, nil
}
// This tries to determine if the operator was restarted and we loss the state
......@@ -327,15 +324,15 @@ func (c *ClusterController) onAdd(obj interface{}) {
// cluster is created, update the cluster CRD status now
c.updateClusterStatus(clusterObj.Namespace, clusterObj.Name, cephv1.ClusterStateCreated, "")
state = cephv1.ClusterStateCreated
failedMessage = ""
return true, nil
})
if err != nil || !validOrchestration {
message := fmt.Sprintf("giving up creating cluster in namespace %s after %s", cluster.Namespace, clusterCreateTimeout)
if !validOrchestration {
message = fmt.Sprintf("giving up creating cluster in namespace %s", cluster.Namespace)
}
c.updateClusterStatus(clusterObj.Namespace, clusterObj.Name, cephv1.ClusterStateError, message)
c.updateClusterStatus(clusterObj.Namespace, clusterObj.Name, state, failedMessage)
if state == cephv1.ClusterStateError {
// the cluster could not be initialized
return
}
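A toy sketch of the `wait.Poll` retry semantics the `onAdd` handler relies on above: the condition function returns `(false, nil)` to retry a transient failure and `(true, nil)` to stop immediately, which is how the new `canRetry` result from `detectAndValidateCephVersion` short-circuits the loop when the version check cannot succeed. The condition body here is illustrative only.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	err := wait.Poll(100*time.Millisecond, 2*time.Second, func() (bool, error) {
		attempts++
		canRetry := attempts < 3 // stand-in for the canRetry flag from detectAndValidateCephVersion
		if canRetry {
			fmt.Println("transient failure, retrying")
			return false, nil // keep polling until the timeout
		}
		fmt.Println("non-retryable failure, giving up without further polling")
		return true, nil // stop polling even though the work did not succeed
	})
	fmt.Println("poll finished after", attempts, "attempts, err =", err)
}
```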
......@@ -480,40 +477,30 @@ func (c *ClusterController) onUpdate(oldObj, newObj interface{}) {
logger.Infof("update event for cluster %s is supported, orchestrating update now", newClust.Namespace)
// At this point, clusterInfo might not be initialized
// If we have deployed a new operator and failed on allowUnsupported
// there is no way we can continue, even we set allowUnsupported to true clusterInfo is gone
// So we have to re-populate it
detectVersion := false
if !cluster.Info.IsInitialized() {
logger.Infof("cluster information are not initialized, populating them.")
detectVersion = true
cluster.Info, _, _, err = mon.LoadClusterInfo(c.context, cluster.Namespace)
if err != nil {
logger.Errorf("failed to load clusterInfo %+v", err)
}
}
// if the image changed, we need to detect the new image version
versionChanged := false
if oldClust.Spec.CephVersion.Image != newClust.Spec.CephVersion.Image {
logger.Infof("the ceph version changed. detecting the new image version...")
version, err := cluster.detectCephVersion(c.rookImage, newClust.Spec.CephVersion.Image, detectCephVersionTimeout)
versionChanged = true
logger.Infof("the ceph version changed")
version, _, err := c.detectAndValidateCephVersion(cluster, newClust.Spec.CephVersion.Image)
if err != nil {
logger.Errorf("unknown ceph major version. %+v", err)
return
}
cluster.Info.CephVersion = *version
} else {
// At this point, clusterInfo might not be initialized
// If we have deployed a new operator and failed on allowUnsupported
// there is no way we can continue, even we set allowUnsupported to true clusterInfo is gone
// So we have to re-populate it
if !cluster.Info.IsInitialized() {
logger.Infof("cluster information are not initialized, populating them.")
cluster.Info, _, _, err = mon.LoadClusterInfo(c.context, cluster.Namespace)
if err != nil {
logger.Errorf("failed to load clusterInfo %+v", err)
}
// Re-setting cluster version too since LoadClusterInfo does not load it
version, err := cluster.detectCephVersion(c.rookImage, newClust.Spec.CephVersion.Image, detectCephVersionTimeout)
if err != nil {
logger.Errorf("unknown ceph major version. %+v", err)
return
}
cluster.Info.CephVersion = *version
logger.Infof("ceph version is still %s on image %s", &cluster.Info.CephVersion, cluster.Spec.CephVersion.Image)
}
}
logger.Debugf("old cluster: %+v", oldClust.Spec)
......@@ -572,6 +559,17 @@ func (c *ClusterController) onUpdate(oldObj, newObj interface{}) {
}
}
func (c *ClusterController) detectAndValidateCephVersion(cluster *cluster, image string) (*cephver.CephVersion, bool, error) {
version, err := cluster.detectCephVersion(c.rookImage, image, detectCephVersionTimeout)
if err != nil {
return nil, true, err
}
if err := cluster.validateCephVersion(version); err != nil {
return nil, false, err
}
return version, false, nil
}
func (c *ClusterController) handleUpdate(crdName string, cluster *cluster) (bool, error) {
c.updateClusterStatus(cluster.Namespace, crdName, cephv1.ClusterStateUpdating, "")
......@@ -796,7 +794,7 @@ func (c *ClusterController) updateClusterStatus(namespace, name string, state ce
// get the most recent cluster CRD object
cluster, err := c.context.RookClientset.CephV1().CephClusters(namespace).Get(name, metav1.GetOptions{})
if err != nil {
logger.Errorf("failed to get cluster from namespace %s prior to updating its status: %+v", namespace, err)
logger.Errorf("failed to get cluster from namespace %s prior to updating its status to %s. %+v", namespace, state, err)
}
// update the status on the retrieved cluster object
......
......@@ -45,10 +45,7 @@ type mgrConfig struct {
func (c *Cluster) dashboardPort() int {
if c.dashboard.Port == 0 {
// select default ports
if c.clusterInfo.CephVersion.IsLuminous() {
return dashboardPortHTTP
}
// select default port
return dashboardPortHTTPS
}
// crd validates port >= 0
......
......@@ -92,7 +92,7 @@ func (c *Cluster) configureDashboard(m *mgrConfig) error {
return nil
}
// Ceph docs about the dashboard module: http://docs.ceph.com/docs/luminous/mgr/dashboard/
// Ceph docs about the dashboard module: http://docs.ceph.com/docs/nautilus/mgr/dashboard/
func (c *Cluster) toggleDashboardModule(m *mgrConfig) error {
if c.dashboard.Enabled {
if err := client.MgrEnableModule(c.context, c.Namespace, dashboardModuleName, true); err != nil {
......@@ -150,11 +150,6 @@ func (c *Cluster) configureDashboardModule(m *mgrConfig) error {
}
func (c *Cluster) initializeSecureDashboard() error {
if c.clusterInfo.CephVersion.IsLuminous() {
logger.Infof("skipping cert and user configuration on luminous")
return nil
}
// we need to wait a short period after enabling the module before we can call the `ceph dashboard` commands.
time.Sleep(dashboardInitWaitTime)
......
......@@ -57,17 +57,9 @@ func TestOrchestratorModules(t *testing.T) {
c := &Cluster{clusterInfo: clusterInfo, context: context}
// the modules are skipped on luminous
c.clusterInfo.CephVersion = cephver.Luminous
err := c.configureOrchestratorModules()
assert.Nil(t, err)
assert.False(t, orchestratorModuleEnabled)
assert.False(t, rookModuleEnabled)
assert.False(t, rookBackendSet)
// the modules are skipped on mimic
c.clusterInfo.CephVersion = cephver.Mimic
err = c.configureOrchestratorModules()
err := c.configureOrchestratorModules()
assert.Nil(t, err)
assert.False(t, orchestratorModuleEnabled)
assert.False(t, rookModuleEnabled)
......
......@@ -75,12 +75,6 @@ func (c *Cluster) makeDeployment(mgrConfig *mgrConfig) *apps.Deployment {
}
c.annotations.ApplyToObjectMeta(&podSpec.ObjectMeta)
c.placement.ApplyToPodSpec(&podSpec.Spec)
if c.clusterInfo.CephVersion.IsLuminous() {
// prepend the keyring-copy workaround for luminous clusters
podSpec.Spec.InitContainers = append(
[]v1.Container{c.makeCopyKeyringInitContainer(mgrConfig)},
podSpec.Spec.InitContainers...)
}
replicas := int32(1)
if len(c.annotations) == 0 {
......@@ -116,12 +110,6 @@ func (c *Cluster) makeDeployment(mgrConfig *mgrConfig) *apps.Deployment {
func (c *Cluster) needHttpBindFix() bool {
needed := true
// if luminous and >= 12.2.12
if c.clusterInfo.CephVersion.IsLuminous() &&
c.clusterInfo.CephVersion.IsAtLeast(cephver.CephVersion{Major: 12, Minor: 2, Extra: 12}) {
needed = false
}
// if mimic and >= 13.2.6
if c.clusterInfo.CephVersion.IsMimic() &&
c.clusterInfo.CephVersion.IsAtLeast(cephver.CephVersion{Major: 13, Minor: 2, Extra: 6}) {
......@@ -150,7 +138,7 @@ func (c *Cluster) clearHttpBindFix(mgrConfig *mgrConfig) {
// there are two forms of the configuration key that might exist which
// depends not on the current version, but on the version that may be
// the version being upgraded from.
for _, ver := range []cephver.CephVersion{cephver.Luminous, cephver.Mimic} {
for _, ver := range []cephver.CephVersion{cephver.Mimic} {
changed, err := client.MgrSetConfig(c.context, c.Namespace, mgrConfig.DaemonID, ver,
fmt.Sprintf("mgr/%s/server_addr", module), "", false)
logger.Infof("clearing http bind fix mod=%s ver=%s changed=%t err=%+v", module, &ver, changed, err)
......@@ -195,11 +183,7 @@ func (c *Cluster) makeSetServerAddrInitContainer(mgrConfig *mgrConfig, mgrModule
// N: config set mgr.a mgr/<mod>/server_addr $(ROOK_CEPH_<MOD>_SERVER_ADDR) --force
podIPEnvVar := "ROOK_POD_IP"
cfgSetArgs := []string{"config", "set"}
if c.clusterInfo.CephVersion.IsLuminous() {
cfgSetArgs[0] = "config-key"
} else {
cfgSetArgs = append(cfgSetArgs, fmt.Sprintf("mgr.%s", mgrConfig.DaemonID))
}
cfgSetArgs = append(cfgSetArgs, fmt.Sprintf("mgr.%s", mgrConfig.DaemonID))
cfgPath := fmt.Sprintf("mgr/%s/%s/server_addr", mgrModule, mgrConfig.DaemonID)
cfgSetArgs = append(cfgSetArgs, cfgPath, opspec.ContainerEnvVarReference(podIPEnvVar))
if c.clusterInfo.CephVersion.IsAtLeastNautilus() {
......