  1. 23 Apr, 2019 1 commit
    • ceph: upgrade all daemons during ceph upgrade · 9dcddb5b
      travisn authored
      
      The CephCluster CR contains settings that are needed by other
      CRs to configure the Ceph daemons. When the CephCluster CR
      is updated, the updates will now be passed on to each of the
      CR controllers to ensure the daemons are updated properly
      without requiring an operator restart.
      
      When calling the controllers from another controller, we ensure
      that only a single goroutine handles CRs at any given time,
      preventing contention across multiple CRs of the same type
      (see the sketch after this entry).
      Signed-off-by: travisn <tnielsen@redhat.com>
      9dcddb5b
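      A minimal sketch of the single-goroutine serialization described above, assuming a hypothetical handler list; the names are illustrative, not Rook's actual code:

      package controller

      import "sync"

      // handlerLock serializes CR handling so only one goroutine processes
      // CRs of a given type at a time (hypothetical name).
      var handlerLock sync.Mutex

      // onClusterUpdate fans a CephCluster update out to the other CR
      // controllers, one at a time, to avoid contention.
      func onClusterUpdate(controllers []func() error) error {
              handlerLock.Lock()
              defer handlerLock.Unlock()
              for _, handle := range controllers {
                      if err := handle(); err != nil {
                              return err
                      }
              }
              return nil
      }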
  2. 22 Apr, 2019 1 commit
  3. 18 Apr, 2019 1 commit
  4. 17 Apr, 2019 1 commit
  5. 16 Apr, 2019 1 commit
  6. 15 Apr, 2019 1 commit
    • ceph: allow logging control · b8874d7f
      Sébastien Han authored
      We now run all the daemons with a new option, available since Nautilus
      14.2.1, which allows us to tell a daemon not to log to a file. However,
      we can still activate logging by setting the configuration flag
      'log_to_file' via the centralized config store, like this for a
      particular daemon:
      
      ceph config set mon.a log_to_file true
      
      This is useful when a daemon keeps crashing and we want to collect log
      files on the system.
      
      Fixes: https://github.com/rook/rook/issues/2881
      
      Signed-off-by: Sébastien Han <seb@redhat.com>
      b8874d7f
  7. 11 Apr, 2019 1 commit
    • ceph: remove osds only when certain · a8008298
      Blaine Gardner authored
      
      Make the Ceph operator more cautious about when it decides to remove
      nodes acting as osd hosts from the Rook-Ceph cluster.
      
      When `useAllNodes` is set to `true`, we assume that the user wants the
      most hands-off experience. Node removals are allowed when a node is
      deleted from Kubernetes and when a node has its taints/affinities
      modified by the user (but, as far as possible, not by automatic k8s
      modifications).
      
      When `useAllNodes` is set to `false`, the only time a node is removed
      is when it is removed from the Ceph cluster definition (see the sketch
      after this entry).
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      a8008298
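      The rules above can be distilled into a hedged sketch; the function and its parameters are illustrative, not Rook's actual API:

      package osd

      // shouldRemoveNode decides whether an osd host may be removed,
      // per the policy described in this commit (hypothetical helper).
      func shouldRemoveNode(useAllNodes, deletedFromK8s, taintedByUser, inClusterDefinition bool) bool {
              if useAllNodes {
                      // Hands-off mode: allow removal when the node is deleted from
                      // Kubernetes or the user changed its taints/affinities.
                      return deletedFromK8s || taintedByUser
              }
              // Explicit mode: only removal from the Ceph cluster definition counts.
              return !inClusterDefinition
      }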
  8. 10 Apr, 2019 1 commit
  9. 07 Apr, 2019 1 commit
  10. 04 Apr, 2019 1 commit
  11. 19 Mar, 2019 1 commit
    • Increase the number of mons when nodes are added · 7548e76b
      travisn authored
      
      The desired number of mons could change depending on the number of
      nodes in a cluster. For example, three mons is the minimum for a
      production cluster with at least three nodes. If there are five or
      more nodes, the number of mons could increase to five in order to
      raise the failure tolerance to two nodes.
      
      This is accomplished by a new setting in the cluster CRD,
      preferredCount. If the number of hosts exceeds preferredCount, more
      mons are added to the quorum. If the number of hosts drops below the
      preferred count, the operator reduces the quorum size to the smaller
      desired count (see the sketch after this entry).
      Signed-off-by: travisn <tnielsen@redhat.com>
      7548e76b
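      Illustratively, the quorum-sizing rule might look like the following sketch; preferredCount comes from the CRD, while the function name and the minimum count are assumptions for this example:

      package mon

      // desiredMonCount is a hypothetical sketch of the scaling rule above.
      func desiredMonCount(hostCount, minCount, preferredCount int) int {
              if preferredCount > minCount && hostCount >= preferredCount {
                      return preferredCount // e.g. grow from 3 to 5 mons with 5+ nodes
              }
              return minCount // shrink back when hosts drop below the preferred count
      }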
  12. 14 Mar, 2019 1 commit
  13. 13 Mar, 2019 1 commit
  14. 07 Mar, 2019 1 commit
    • rgw: configure entirely in operator · 086fa823
      Blaine Gardner authored
      
      Configure the Ceph rgw daemon completely from the operator a la the
      recent changes to the Ceph mon, mgr, and mds operators.
      
      Create the rgw deployment or daemonset first, and then create the
      keyring secret for the object store with the corresponding deployment
      or daemonset as its owner reference. When that controller is deleted,
      the secret is deleted with it (see the sketch after this entry).
      
      The RGW's mime.types file is now stored in a configmap with a different
      file created for each object store. This is primarily just a means to
      get the mime.types file into the rgw pod, but the added benefit is that
      the administrator can modify the configmap, which could reduce
      susceptibility to file type execution vulnerabilities (worst case).
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      086fa823
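      A minimal sketch of the owner-reference pattern described above, using the standard Kubernetes API types; the secret naming is illustrative, not Rook's exact convention:

      package rgw

      import (
              appsv1 "k8s.io/api/apps/v1"
              corev1 "k8s.io/api/core/v1"
              metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      )

      // keyringSecret builds a keyring secret owned by the rgw deployment,
      // so deleting the deployment garbage-collects the secret as well.
      func keyringSecret(d *appsv1.Deployment, keyring string) *corev1.Secret {
              return &corev1.Secret{
                      ObjectMeta: metav1.ObjectMeta{
                              Name:      d.Name + "-keyring", // illustrative name
                              Namespace: d.Namespace,
                              OwnerReferences: []metav1.OwnerReference{{
                                      APIVersion: "apps/v1",
                                      Kind:       "Deployment",
                                      Name:       d.Name,
                                      UID:        d.UID,
                              }},
                      },
                      StringData: map[string]string{"keyring": keyring},
              }
      }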
  15. 28 Feb, 2019 1 commit
    • ceph mds: configure completely from operator · 4eb4fb7a
      Blaine Gardner authored
      
      Configure the Ceph mds daemon completely from the operator a la the
      recent changes to the Ceph mon and mgr operators.
      
      Create the mds deployments first, and then create the keyring secrets
      for them with the corresponding deployment as their owner reference.
      This means the secrets do not need to be micromanaged: when a
      deployment is deleted, its secret is also deleted. This has not been
      necessary for the mons or the mgr, since the mons share a keyring
      whose lifespan is that of the cluster, as does the mgr, which
      currently has single-mgr support only.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      4eb4fb7a
  16. 14 Feb, 2019 1 commit
  17. 13 Feb, 2019 1 commit
  18. 11 Feb, 2019 1 commit
    • ceph mgr: configure completely from operator · 03ea5f0a
      Blaine Gardner authored
      
      Configure the Ceph mgr daemon completely from the operator a la the
      recent changes to the Ceph mon operator.
      
      The pod spec for the mgr changed quite a bit, and instead of updating
      unit tests of questionable use, some additional unit test tools
      applicable to any Ceph daemon have been added and used with the mgr.
      The mgr's unit tests should now be more useful and get in the way of
      devs less, and the tools can be reused by other daemons later.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      03ea5f0a
  19. 05 Feb, 2019 1 commit
    • ceph mons: set up entirely in operator · ec416cdd
      Blaine Gardner authored
      
      Make the Rook config-init unnecessary for mons, and remove that init
      container. Perform all mon configuration steps in the operator, and set
      up the mon pods and k8s environment such that only Ceph containers are
      needed for running mons.
      
      This should help streamline changes to the mons, as there will be no
      need to change the `daemon/mon` code or `cmd/rook/ceph` code with mon
      changes in the future.
      
      This work starts to lay the groundwork for supporting the
      `design/ceph-config-updates.md` design.
      
      Notable new bits:
      
      Create a keyring secret store helper for storing daemon keyrings, and
      use it to store the mon keyring. Mon pods mount the keyring from a
      k8s secret-backed volume.
      
      Create a configmap store for the Ceph config file which can be mounted
      into pods/containers directly at /etc/ceph/ceph.conf. Also store
      individual mon_host and mon_initial_members values which can be mapped
      into pods as environment variables and used in Ceph command-line
      flags, enabling the mon pods to have the most up-to-date information
      about the mon cluster when restarting, without the need for operator
      intervention (see the sketch after this entry).
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      ec416cdd
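      A sketch of mapping the stored mon_host value into a pod as an environment variable; the configmap name, key, and variable name are assumptions, not necessarily Rook's exact identifiers:

      package mon

      import corev1 "k8s.io/api/core/v1"

      // monHostEnvVar exposes the mon_host value from the config configmap
      // to a container, so restarting mons pick up the current quorum
      // addresses without operator intervention (names are illustrative).
      func monHostEnvVar(configMapName string) corev1.EnvVar {
              return corev1.EnvVar{
                      Name: "ROOK_CEPH_MON_HOST",
                      ValueFrom: &corev1.EnvVarSource{
                              ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                                      LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                                      Key:                  "mon_host",
                              },
                      },
              }
      }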
  20. 04 Feb, 2019 1 commit
  21. 01 Feb, 2019 1 commit
    • ceph: no longer create fallback osd · 78d47191
      Blaine Gardner authored
      
      Rook should no longer create a default OSD in dataDirHostPath when no
      devices are present on a node. Any preexisting fallback osd is kept as
      long as no dirs or disks have been specified, preserving the legacy
      behavior for clusters running with these osds in place. The legacy
      deletion behavior is also kept: the osd is removed as soon as any dir
      or device other than the dataDirHostPath dir is specified.
      Signed-off-by: Blaine Gardner <blaine.gardner@suse.com>
      78d47191
  22. 24 Jan, 2019 1 commit
  23. 15 Jan, 2019 1 commit
  24. 10 Jan, 2019 1 commit
  25. 09 Jan, 2019 3 commits
  26. 08 Jan, 2019 1 commit
  27. 20 Dec, 2018 1 commit
  28. 16 Dec, 2018 1 commit
  29. 08 Dec, 2018 1 commit
  30. 07 Dec, 2018 2 commits
  31. 05 Dec, 2018 2 commits
  32. 04 Dec, 2018 1 commit
  33. 30 Nov, 2018 1 commit
  34. 28 Nov, 2018 1 commit
  35. 21 Nov, 2018 1 commit
  36. 08 Nov, 2018 1 commit