  1. 05 Jul, 2019 1 commit
  2. 03 Jul, 2019 1 commit
  3. 01 Jul, 2019 1 commit
  4. 24 Jun, 2019 1 commit
    • ceph: stop enforcing crush tunable · d3845e7f
      Sébastien Han authored
      This requirement is long gone; it dates back to an issue reported in 2017.
      Clients should have been updated since then, so there is no reason to pin
      the CRUSH tunables to such an old client profile. We should instead let
      Ceph pick its own tunables; they can always be changed later. Also, this
      was overly restrictive: it was applied on every orchestration and thus
      would override any other configuration done by an administrator. Even
      worse, it would trigger data movement back and forth...
      
      Closes: https://github.com/rook/rook/issues/3138
      
      Signed-off-by: Sébastien Han <seb@redhat.com>
      d3845e7f
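      Since the operator no longer enforces a tunables profile, an administrator
      who does want to change it can do so by hand. A minimal sketch, assuming
      the commands are run from the Rook toolbox pod (the profile choice is only
      an example):
      
      ```
      ceph osd crush show-tunables      # inspect the profile currently in effect
      ceph osd crush tunables optimal   # opt into a newer profile; may trigger data movement
      ```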
  5. 17 Jun, 2019 1 commit
  6. 14 Jun, 2019 1 commit
    • ceph: refactor rgw bootstrap · 93b24486
      Sébastien Han authored
      This commit does multiple things:
      
      * remove support for AllNodes, where we would deploy one rgw per node on
      all the nodes.
      * implement a transition path in the code so that existing deployments
      have their daemonsets removed and replaced by deployments.
      * when using "instances", each rgw deployed gets its own keyring, which
      lets Ceph report the exact number of rgw daemons running, see:
      
      ```
      [root@rook-ceph-operator-775cf575c5-bh4sr /]# ceph -s
        cluster:
          id:     611fcf39-0669-4864-9a12-debb35c0397a
          health: HEALTH_OK
      
        services:
          mon: 3 daemons, quorum a,b,c (age 12h)
          mgr: a(active, since 12h)
          osd: 3 osds: 3 up (since 12h), 3 in (since 12h)
          rgw: 3 daemons active (my.store.a, my.store.b, my.store.c)
      
        data:
          pools:   6 pools, 600 pgs
          objects: 235 objects, 3.8 KiB
          usage:   3.0 GiB used, 84 GiB / 87 GiB avail
          pgs:     600 active+clean
      ```
      
      Closes: https://github.com/rook/rook/issues/2474, https://github.com/rook/rook/issues/2957 and https://github.com/rook/rook/issues/3245
      
      Signed-off-by: Sébastien Han <seb@redhat.com>
      93b24486
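      With one keyring per rgw instance, each daemon now has its own entry in
      the cluster's auth database. A quick check, assuming the Rook toolbox (the
      key name pattern is an assumption):
      
      ```
      # every rgw started via "instances" should show up with its own client.rgw.* key
      ceph auth ls | grep 'client.rgw'
      ```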
  7. 13 Jun, 2019 1 commit
  8. 07 Jun, 2019 1 commit
  9. 05 Jun, 2019 1 commit
  10. 15 May, 2019 1 commit
    • added device class pool property · 7d5afa29
      Santosh Pillai authored
      
      - Updated code to use deviceClass property when a pool is created for both "replicated" and "erasure code".
      - Updated "ceph crush rule create-..." command to use "create-replicated" instead of "create-simple"
      - Updated unit tests
      - Updated (ceph-pool-crd.md) documentation to reflect the changes.
      - Updated pending release notes.
      Signed-off-by: Santosh Pillai <sapillai@redhat.com>
      7d5afa29
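      A sketch of how the new property might be used, assuming a CephBlockPool
      named "ssd-pool" and OSDs that report an "ssd" device class; the crush
      command shown is roughly what the operator now issues instead of
      "create-simple":
      
      ```
      # hypothetical pool definition using the new deviceClass property
      cat <<EOF | kubectl apply -f -
      apiVersion: ceph.rook.io/v1
      kind: CephBlockPool
      metadata:
        name: ssd-pool
        namespace: rook-ceph
      spec:
        failureDomain: host
        replicated:
          size: 3
        deviceClass: ssd
      EOF
      
      # roughly the rule the operator creates for such a pool
      ceph osd crush rule create-replicated ssd-pool default host ssd
      ```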
  11. 14 May, 2019 1 commit
  12. 03 May, 2019 1 commit
  13. 30 Apr, 2019 1 commit
    • write flex settings to config file instead of env vars · 7cdfcc39
      travisn authored
      
      The flex settings cannot be passed to the driver through environment
      variables. There is no other context available for retrieving the flex
      settings, so the flex driver looks for a config file in the same directory.
      These settings must be valid, or else the driver will fall back to the
      default settings.
      Signed-off-by: travisn <tnielsen@redhat.com>
      7cdfcc39
  14. 28 Apr, 2019 1 commit
    • build: promote builds only to master and release · 81f55a12
      travisn authored
      
      The alpha, beta, and stable channels do not match the rook release process.
      Each storage provider defines their own stability based on the CRDs
      rather than defining it with the release process. Rook really
      only has two release streams: master and the official releases.
      Master is published with each merge, while releases go through
      a signoff process to ensure quality. Thus, the alpha, beta,
      and stable channels are removed and replaced with a
      single release channel.
      Signed-off-by: travisn <tnielsen@redhat.com>
      81f55a12
  15. 24 Apr, 2019 2 commits
  16. 23 Apr, 2019 2 commits
  17. 22 Apr, 2019 1 commit
  18. 18 Apr, 2019 1 commit
  19. 17 Apr, 2019 1 commit
  20. 16 Apr, 2019 1 commit
  21. 15 Apr, 2019 1 commit
    • ceph: allow logging control · b8874d7f
      Sébastien Han authored
      We now run all the daemons with a new option, available since Nautilus
      14.2.1, that tells a daemon not to log to a file. However, we can re-enable
      logging for a particular daemon by setting the 'log_to_file' flag via the
      centralized config store, like this:
      
      ceph config set mon.a log_to_file true
      
      This is useful when a daemon keeps crashing and we want to collect its log
      files on the system.
      
      Fixes: https://github.com/rook/rook/issues/2881
      
      Signed-off-by: Sébastien Han <seb@redhat.com>
      b8874d7f
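      For completeness, a sketch of checking and then dropping the override again
      once logs have been collected (an assumed follow-up workflow, not part of
      the change itself):
      
      ```
      ceph config get mon.a log_to_file   # confirm the current value
      ceph config rm mon.a log_to_file    # remove the override, back to the default
      ```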
  22. 11 Apr, 2019 1 commit
    • ceph: remove osds only when certain · a8008298
      Blaine Gardner authored
      
      Make the Ceph operator more cautious about when it decides to remove
      nodes acting as osd hosts from the Rook-Ceph cluster.
      
      When `useAllNodes` is set to `true`, we assume that the user wants the
      most hands-off experience. Node removals are allowed when a node is
      deleted from Kubernetes and when a node has its taints/affinities
      modified by the user (but, as much as possible, not by automatic k8s
      modifications).
      
      When `useAllNodes` is set to `false`, the only time a node is removed is
      when it is removed from the Ceph cluster definition.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      a8008298
  23. 10 Apr, 2019 1 commit
  24. 07 Apr, 2019 1 commit
  25. 04 Apr, 2019 1 commit
  26. 19 Mar, 2019 1 commit
    • Increase the number of mons when nodes are added · 7548e76b
      travisn authored
      
      The desired number of mons can change depending on the number of nodes in a cluster.
      For example, three mons is the minimum for a production cluster with at
      least three nodes. If there are five or more nodes, the number of mons could increase
      to five in order to raise the failure tolerance to two nodes.
      
      This is accomplished by a new setting in the cluster CRD, preferredCount. If the number
      of hosts exceeds preferredCount, more mons are added to the quorum. If the number
      of hosts drops below the preferred count, the operator reduces the quorum size to
      the smaller desired count.
      Signed-off-by: travisn <tnielsen@redhat.com>
      7548e76b
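      A sketch of raising the preferred mon count on an existing cluster; the
      cluster name and the exact field placement under the mon section are
      assumptions:
      
      ```
      # hypothetical: ask for 5 mons once enough nodes are available
      kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
        -p '{"spec":{"mon":{"count":3,"preferredCount":5}}}'
      ```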
  27. 14 Mar, 2019 1 commit
  28. 13 Mar, 2019 1 commit
  29. 07 Mar, 2019 1 commit
    • rgw: configure entirely in operator · 086fa823
      Blaine Gardner authored
      
      Configure the Ceph rgw daemon completely from the operator a la the
      recent changes to the Ceph mon, mgr, and mds operators.
      
      Create the rgw deployment or daemonset first, and then create the
      keyring secret for the object store with its owner reference as the
      corresponding deployment or daemonset. When the replication controller
      is deleted, the secret is also deleted.
      
      The RGW's mime.types file is now stored in a configmap with a different
      file created for each object store. This is primarily just a means to
      get the mime.types file into the rgw pod, but the added benefit is that
      the administrator can modify the configmap, which could reduce
      susceptibility to file type execution vulnerabilities (worst case).
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      086fa823
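      A sketch of how an administrator might locate and tweak the per-store
      mime.types configmap; the configmap name pattern is a guess based on the
      object store name:
      
      ```
      kubectl -n rook-ceph get configmaps | grep mime
      kubectl -n rook-ceph edit configmap rook-ceph-rgw-my-store-mime-types
      ```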
  30. 28 Feb, 2019 1 commit
    • ceph mds: configure completely from operator · 4eb4fb7a
      Blaine Gardner authored
      
      Configure the Ceph mds daemon completely from the operator a la the
      recent changes to the Ceph mon and mgr operators.
      
      Create the mds deployments first, and then create the keyring secrets
      for them with their owner reference set to the corresponding deployment.
      This means the secrets do not need to be micromanaged: when a deployment
      is deleted, its secret is also deleted. This has not been necessary for
      the mons or the mgr, since the mons share a keyring whose lifespan is
      that of the cluster, as does the mgr, which currently supports only a
      single instance.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      4eb4fb7a
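      A sketch of verifying the owner reference wiring, with a hypothetical
      filesystem named "myfs" (the secret name pattern is an assumption):
      
      ```
      # the keyring secret should point at its mds deployment as the owner
      kubectl -n rook-ceph get secret rook-ceph-mds-myfs-a-keyring \
        -o jsonpath='{.metadata.ownerReferences[*].kind} {.metadata.ownerReferences[*].name}{"\n"}'
      ```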
  31. 14 Feb, 2019 1 commit
  32. 13 Feb, 2019 1 commit
  33. 11 Feb, 2019 1 commit
    • ceph mgr: configure completely from operator · 03ea5f0a
      Blaine Gardner authored
      
      Configure the Ceph mgr daemon completely from the operator a la the
      recent changes to the Ceph mon operator.
      
      The pod spec for the mgr changed quite a bit, so instead of updating
      unit tests of questionable value, some additional unit test tools
      applicable to any Ceph daemon have been added and used for the mgr. The
      mgr's unit tests should now be more useful and less of an obstacle for
      developers, and the new tools can be reused by other daemons later.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      03ea5f0a
  34. 05 Feb, 2019 1 commit
    • ceph mons: set up entirely in operator · ec416cdd
      Blaine Gardner authored
      
      Make the Rook config-init unnecessary for mons, and remove that init
      container. Perform all mon configuration steps in the operator, and set
      up the mon pods and k8s environment such that only Ceph containers are
      needed for running mons.
      
      This should help streamline changes to the mons, as there will be no
      need to change the `daemon/mon` code or `cmd/rook/ceph` code with mon
      changes in the future.
      
      This work starts to lay the groundwork for supporting the
      `design/ceph-config-updates.md` design.
      
      Notable new bits:
      
      Create a keyring secret store helper for storing daemon keyrings, and
      use it to store the mon keyring. Mon pods mount the keyring from a
      k8s secret-backed volume.
      
      Create a configmap store for the Ceph config file which can be mounted
      into pods/containers directly at /etc/ceph/ceph.conf. Also store
      individual mon_host and mon_initial_members values which can be mapped
      into pods as environment variables and used in Ceph command-line flags,
      enabling the mon pods to have the most up-to-date information about the
      mon cluster when restarting, without the need for operator intervention.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      ec416cdd
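      A sketch of inspecting the stored values the commit describes; the
      configmap name and key names are assumptions based on the description:
      
      ```
      # mon_host / mon_initial_members kept alongside the generated config
      kubectl -n rook-ceph get configmap rook-ceph-config \
        -o jsonpath='{.data.mon_host}{"\n"}{.data.mon_initial_members}{"\n"}'
      ```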
  35. 04 Feb, 2019 1 commit
  36. 01 Feb, 2019 1 commit
    • ceph: no longer create fallback osd · 78d47191
      Blaine Gardner authored
      
      Rook should no longer create a fallback OSD in dataDirHostPath when no
      devices are present on a node. Any preexisting fallback osd is kept as
      long as no dirs or disks have been specified, preserving the legacy
      behavior for clusters running with these osds in place. The legacy
      deletion behavior is also kept: the fallback osd is removed as soon as
      any dir or device other than the dataDirHostPath dir is specified.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      78d47191
  37. 24 Jan, 2019 1 commit
  38. 15 Jan, 2019 1 commit