- 09 Sep, 2019 4 commits
-
-
Denise Schannon authored
-
michelia feng authored
Problem: The etcd unhealthy alert sometimes fires even though etcd is actually healthy; the alert can be triggered by transient network or disk problems.
Solution: Increase the alert group wait to 10 minutes, as suggested in the issue.
Issue: https://github.com/rancher/rancher/issues/19474
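A rough sketch of where that knob lives, assuming the upstream Alertmanager config structs; the receiver name and wiring here are illustrative, not Rancher's actual alerting code:

```go
package alerting

import (
	"github.com/prometheus/alertmanager/config"
	"github.com/prometheus/common/model"
)

// tenMinuteGroupWaitRoute builds a route whose group_wait is long enough for
// transient network/disk blips to clear before the grouped alert fires.
func tenMinuteGroupWaitRoute(receiver string) (*config.Route, error) {
	wait, err := model.ParseDuration("10m")
	if err != nil {
		return nil, err
	}
	return &config.Route{
		Receiver:  receiver, // e.g. the Rancher notifier receiver (assumption)
		GroupWait: &wait,
	}, nil
}
```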
-
rmweir authored
-
Darren Shepherd authored
-
- 07 Sep, 2019 1 commit
-
-
Dan Ramich authored
Add logic to stream
-
- 06 Sep, 2019 9 commits
-
-
rmweir authored
-
Dan Ramich authored
Update websockets
-
Dax McDonald authored
-
Dax McDonald authored
-
rajashree authored
Adding a worker node to an RKE cluster does not trigger `rke up`, but the provisioner controller still calls the update and sets the taints capability afterwards. This commit checks whether the update actually led to `rke up` and sets the taints capability based on that.
-
rajashree authored
k8s 1.16 has moved deployments to the apps/v1 API group, where DeploymentRollback and RollbackTo are deprecated. This commit handles deployment rollback separately for apps/v1 by updating the podSpec of the deployment with the podSpec of the target replicaSet. This is the recommended approach from the k8s client-go issues: https://github.com/kubernetes/client-go/issues/398#issuecomment-382546959
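A minimal sketch of that approach, using the context-less client-go API of that era; the function name and the lookup via the revision annotation are illustrative, not Rancher's exact implementation:

```go
package rollback

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollbackDeployment finds the ReplicaSet recorded for the target revision
// and copies its pod template spec back into the Deployment, then updates it.
func rollbackDeployment(client kubernetes.Interface, namespace, name, revision string) error {
	deploy, err := client.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// A real controller would restrict this list to ReplicaSets owned by the
	// Deployment (e.g. via its label selector).
	rsList, err := client.AppsV1().ReplicaSets(namespace).List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, rs := range rsList.Items {
		// The Deployment controller stamps each ReplicaSet with the revision
		// it belongs to in this annotation.
		if rs.Annotations["deployment.kubernetes.io/revision"] == revision {
			deploy.Spec.Template.Spec = rs.Spec.Template.Spec
			_, err = client.AppsV1().Deployments(namespace).Update(deploy)
			return err
		}
	}
	return fmt.Errorf("no ReplicaSet found for revision %q", revision)
}
```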
-
Guangbo Chen authored
-
michelia feng authored
Problem: After the monitoring charts were upgraded, this expression's value changed from no data points to 0.
Solution: Update the threshold for this expression.
Issue: https://github.com/rancher/rancher/issues/22680
-
Prachi Damle authored
-
- 05 Sep, 2019 8 commits
-
-
Dan Ramich authored
Bump telemetry to version 0.5.7
-
Caleb Bron authored
-
Alena Prokharchyk authored
-
Caleb Bron authored
-
Murali Paluru authored
-
Dan Ramich authored
Remove monitoring and logging catalog id setting
-
Dan Ramich authored
Cleanup legacy globalrolebindings finalizers pointing to deleted clusters
-
michelia feng authored
Problem: Alert deployment depends on the monitoring catalog setting, so the setting has to be updated on every release; this is inconvenient and error-prone.
Solution: Remove the monitoring catalog from the setting and remove the legacy code.
Issue: https://github.com/rancher/rancher/issues/22615
-
- 04 Sep, 2019 10 commits
-
-
Dax McDonald authored
-
Denise Schannon authored
-
Sebastiaan van Steenis authored
Problem: service-sidekick was being checked on every plan execution, Docker containers were started while already running, and as a result log-linker ran for every container that was started while it was already running.
Root cause: service-sidekick was not excluded from plan execution (it should be handled like share-mnt, which is excluded), and there was no check for containers that are already running.
Solution: exclude service-sidekick from any check beyond its presence, verify that a container is running before starting it and running the log linker, and exclude service-sidekick from running log-linker.
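A minimal sketch of the running-state guard, assuming the Docker Go client; the function name is illustrative:

```go
package docker

import (
	"context"

	"github.com/docker/docker/client"
)

// isRunning inspects a container so the caller can skip the start (and the
// log-linker run) when the container is already up.
func isRunning(ctx context.Context, cli *client.Client, containerID string) (bool, error) {
	info, err := cli.ContainerInspect(ctx, containerID)
	if err != nil {
		return false, err
	}
	return info.State != nil && info.State.Running, nil
}
```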
-
Guangbo Chen authored
-
Denise Schannon authored
-
gitlawr authored
Make resolved notifications recognizable.
-
gitlawr authored
Problem: When multiple recipients are configured for a pipeline notification, one of them gets notified multiple times.
Solution: Avoid capturing the loop iterator variable in goroutines.
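This is the classic Go loop-variable capture pitfall (pre-Go 1.22 semantics). A self-contained illustration with hypothetical recipient values, not the actual pipeline code:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	recipients := []string{"alice@example.com", "bob@example.com"}
	var wg sync.WaitGroup
	for _, r := range recipients {
		r := r // re-bind: without this, every goroutine may observe the last value of r
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("notify", r) // stand-in for the notification call
		}()
	}
	wg.Wait()
}
```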
-
orangedeng authored
-
orangedeng authored
**Problem:** Setting node pool taints and node template taints doesn't work. **Solution:** Add the `--register-with-taints` parameter to the kubelet worker process arguments, so the taints are applied once the kubelet is up and running. If `--register-with-taints` is already set via the kubelet service extra_args, the taints from the node template, node pool, and kubelet are merged.
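A rough sketch of how taints might be rendered into that kubelet flag; the function name is an assumption, and the merge rules described in the commit are simplified away:

```go
package main

import (
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
)

// buildRegisterWithTaints renders taints into the kubelet flag format:
// comma-separated key=value:Effect entries.
func buildRegisterWithTaints(taints []v1.Taint) string {
	parts := make([]string, 0, len(taints))
	for _, t := range taints {
		parts = append(parts, fmt.Sprintf("%s=%s:%s", t.Key, t.Value, t.Effect))
	}
	return "--register-with-taints=" + strings.Join(parts, ",")
}

func main() {
	taints := []v1.Taint{{Key: "node-role", Value: "worker", Effect: v1.TaintEffectNoSchedule}}
	fmt.Println(buildRegisterWithTaints(taints))
	// --register-with-taints=node-role=worker:NoSchedule
}
```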
-
rajashree authored
-
- 03 Sep, 2019 7 commits
-
-
Dax McDonald authored
-
Darren Shepherd authored
-
Darren Shepherd authored
-
rajashree authored
The global ns rbac controller calls DeepEqual on the entire Role and RoleBinding objects; this can cause continuous updates, since some fields such as CreationTimestamp always differ. This commit calls DeepEqual only on the Subjects of a RoleBinding and on the Rules of a Role.
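A minimal sketch of the narrowed comparison; the function names are illustrative:

```go
package rbac

import (
	"reflect"

	rbacv1 "k8s.io/api/rbac/v1"
)

// Compare only the fields the controller manages, so server-populated
// metadata (CreationTimestamp, ResourceVersion, UID, ...) can no longer
// trigger a spurious update on every sync.
func roleBindingNeedsUpdate(existing, desired *rbacv1.RoleBinding) bool {
	return !reflect.DeepEqual(existing.Subjects, desired.Subjects)
}

func roleNeedsUpdate(existing, desired *rbacv1.Role) bool {
	return !reflect.DeepEqual(existing.Rules, desired.Rules)
}
```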
-
Dax McDonald authored
Previously, this only removed finalizers from deleted grbs with a one-hour delay. This meant that when a user had their admin access removed, it would take one hour for the change to propagate if their list of finalizers included a finalizer pointing to a non-existent cluster. Now we preemptively remove "bad" finalizers from grbs that point to non-existent clusters, so they will not need to be cleaned on change. This only occurs once, and all grbs created after this fix will carry the "field.cattle.io/grbUpgrade" annotation to avoid running through this sync handler.
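A rough sketch of the preemptive cleanup, assuming a hypothetical "grb-sync_&lt;clusterName&gt;" finalizer suffix and a clusterExists lookup, neither of which is confirmed by the commit:

```go
package rbac

import "strings"

// cleanClusterFinalizers drops finalizers that reference clusters that no
// longer exist, so deleting an admin binding takes effect immediately.
func cleanClusterFinalizers(finalizers []string, clusterExists func(name string) bool) []string {
	kept := make([]string, 0, len(finalizers))
	for _, f := range finalizers {
		// Hypothetical convention: the cluster name follows the last "_".
		if i := strings.LastIndex(f, "_"); i >= 0 && !clusterExists(f[i+1:]) {
			continue // finalizer points at a non-existent cluster: drop it
		}
		kept = append(kept, f)
	}
	return kept
}
```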
-
rajashree authored
The global ns rbac controller calls DeepEqual on the entire Role and RoleBinding objects; this can cause continuous updates, since some fields such as CreationTimestamp always differ. This commit calls DeepEqual only on the Subjects of a RoleBinding and on the Rules of a Role.
-
kinarashah authored
- sync only if interval != 0 and the hash changes
- git clone instead of using raw GitHub content
- updated log messages
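A minimal sketch of the first guard; the helper name and the hashing of cloned content are assumptions:

```go
package catalog

import (
	"crypto/sha256"
	"encoding/hex"
	"time"
)

// shouldSync skips the refresh when syncing is disabled (interval == 0) or
// the cloned content hashes to the same value as last time. It returns the
// new hash so the caller can persist it after a successful sync.
func shouldSync(interval time.Duration, lastHash string, content []byte) (bool, string) {
	if interval == 0 {
		return false, lastHash
	}
	sum := sha256.Sum256(content)
	hash := hex.EncodeToString(sum[:])
	return hash != lastHash, hash
}
```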
-
- 02 Sep, 2019 1 commit
-
-
orangedeng authored
**Problem:** An HPA with CPU rules for a newly created workload gets `transitioning=error`, because the HPA controller can't get CPU metrics for the new workload yet. **Solution:** Ignore the transitioning error for the first 60s after the HPA has been created.
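A minimal sketch of the grace-period check; the function name is illustrative:

```go
package hpa

import (
	"time"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
)

// inGracePeriod reports whether a metrics error on this HPA should still be
// ignored: the metrics pipeline usually needs some time before it can serve
// CPU metrics for a brand-new workload. The 60s window matches the commit.
func inGracePeriod(hpa *autoscalingv1.HorizontalPodAutoscaler) bool {
	return time.Since(hpa.CreationTimestamp.Time) < 60*time.Second
}
```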
-