Commit 9c5c209f authored by Alan Peng

Added prometheus operator and updated components version

Showing with 349 additions and 268 deletions
......@@ -46,7 +46,7 @@ firewall-cmd --complete-reload
(2) Install the docker-compose command
```
sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
```
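After downloading, the binary still needs the executable bit before docker-compose can run; a short follow-up sketch (assuming the download path used above):
```
chmod +x /usr/local/bin/docker-compose
docker-compose version
```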
```
......@@ -75,7 +75,7 @@ curl -L https://raw.githubusercontent.com/wise2c-devops/breeze/v1.13.1/docker-co
docker-compose up -d
```
If everything is normal, port 88 on the deploy machine will be accessible.
If everything is normal (note that the deploy-playbook container is a volume container, so it is normal for it to be in the Exited state), port 88 on the deploy machine will be accessible.
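To double-check this state, you can list the compose services and probe port 88 from the deploy machine itself; a small sketch:
```
docker-compose ps     # deploy-playbook showing an Exit state is expected
curl -I http://127.0.0.1:88
```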
2. On the deploy machine, set up passwordless SSH login to every other server in the cluster with the following commands:
......@@ -88,6 +88,10 @@ docker-compose up -d
ssh-copy-id 192.168.9.12
ssh-copy-id 192.168.9.13
ssh-copy-id 192.168.9.20
ssh-copy-id 192.168.9.21
...
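If the deploy machine does not have a key pair yet, generate one first and then push it to each node; a sketch using the example addresses above:
```
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # skip if a key already exists
for ip in 192.168.9.12 192.168.9.13 192.168.9.20 192.168.9.21; do
  ssh-copy-id "$ip"
done
```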
......@@ -105,7 +109,7 @@ docker-compose up -d
Repeat this step until all the node servers required by the cluster have been added:
(k8s master servers, k8s minion node servers, registry servers, etc.):
(k8s master servers, k8s worker node servers, harbor servers, etc.):
![Alt](./manual/BreezeScreenShots004.png)
......@@ -129,22 +133,30 @@ docker-compose up -d
![Alt](./manual/BreezeScreenShots014.png)
Click the "Add Component" button to configure each component and assign servers to it:
Click "Next" and then the "Add Component" button to configure each component and assign servers to it:
(docker role, registry role, etcd role, loadbalance role, kubernetes role)
(docker role, harbor role, loadbalance role, etcd role, kubernetes role, prometheus role)
![Alt](./manual/BreezeScreenShots015.png)
If you want the Breeze deployment program to use the hostname entered in the UI instead of the server's current hostname, check the "format host name" checkbox:
![Alt](./manual/BreezeScreenShots016.png)
![Alt](./manual/BreezeScreenShots017.png)
In the image registry settings, the registry entry point is the URL that clients use to access the registry; it can be either an IP address or the corresponding domain name:
In the image registry settings, the harbor entry point is the URL that clients use to access the registry; it can be either an IP address or the corresponding domain name:
![Alt](./manual/BreezeScreenShots018.png)
![Alt](./manual/BreezeScreenShots019.png)
Next, configure the high-availability components (haproxy + keepalived):
"vip for k8s master" is the highly available virtual floating IP address shared by the three k8s master servers. For the network interface, enter the interface name as it appears in the actual operating system, and make sure all three nodes use the same interface name. For router id and virtual router id, make sure different k8s clusters use different values.
![Alt](./manual/haproxy-keepalived-001.png)
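To avoid typos in these fields, it helps to confirm the real NIC name beforehand and, after deployment, to confirm that the VIP has landed on exactly one master; a sketch with an assumed interface name ens192:
```
ip -o link show                           # confirm the actual NIC name, e.g. ens192
ip addr show ens192 | grep 192.168.9.30   # after deployment the VIP should appear on one master only
```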
Etcd can be deployed either on the K8S master nodes or on three separate hosts:
![Alt](./manual/BreezeScreenShots020.png)
......@@ -153,19 +165,11 @@ Etcd can be deployed either on the K8S master nodes or on three separate hosts:
![Alt](./manual/BreezeScreenShots022.png)
Next, configure the high-availability components (haproxy + keepalived):
"vip for k8s master" is the highly available virtual floating IP address shared by the three k8s master servers. For the network interface, enter the interface name as it appears in the actual operating system, and make sure all three nodes use the same interface name. For router id and virtual router id, make sure different k8s clusters use different values.
![Alt](./manual/haproxy-keepalived-001.png)
![Alt](./manual/haproxy-keepalived-002.png)
"kubernetes entry point" is a high-availability setting: if your production environment has a hardware or software load balancer pointing at all of the k8s master nodes listed here, enter the load balancer's unified entry address in this field.
Instead of expensive dedicated hardware such as F5, the combination of HAProxy and Keepalived accomplishes the same thing easily, and Breeze ships with a deployment module for this combination.
For example, 192.168.9.30:6444 in the figure below is the highly available unified entry point of the k8s cluster; the k8s minion nodes will use this address to access the API Server. Note that if you use Breeze's built-in high-availability components (haproxy + keepalived), fill in the actual virtual IP with the default port 6444.
For example, 192.168.9.30:6444 in the figure below is the highly available unified entry point of the k8s cluster; the k8s worker nodes will use this address to access the API Server. Note that if you use Breeze's built-in high-availability components (haproxy + keepalived), fill in the actual virtual IP with the default port 6444.
![Alt](./manual/BreezeScreenShots023.png)
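Once the cluster is up, the entry point can be sanity-checked by querying the API server through it; a sketch using the example VIP above:
```
curl -k https://192.168.9.30:6444/healthz          # should answer "ok" via haproxy
grep 'server:' /etc/kubernetes/kubelet.conf        # worker nodes should point at the same address
```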
......@@ -183,9 +187,9 @@ kubernetes entry point is a high-availability setting: if your production environment has
![Alt](./manual/BreezeScreenShots032.png)
The example above is an environment with 3 etcd servers, 3 k8s masters, 3 k8s minion nodes, and 1 image registry; the scale can be increased or decreased as needed.
The example above is an environment with 3 etcd servers, 3 k8s masters, 3 k8s worker nodes, and 1 image registry; the scale can be increased or decreased as needed.
The Kubernetes Dashboard is exposed via NodePort 30300, so the Dashboard page can be reached at https://node-ip:30300.
The Kubernetes Dashboard is exposed via NodePort 30300, so you can log in to the Dashboard page with Firefox at https://<any-server-IP>:30300. Note that other browsers such as Chrome will refuse the request because they do not accept the self-signed certificate.
The new Dashboard version introduces an authentication mode; the access token for admin-user can be obtained with the following command:
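The command itself is truncated in this diff; a commonly used form, assuming the standard admin-user ServiceAccount in the kube-system namespace, is roughly:
```
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```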
......
......@@ -13,6 +13,12 @@
Neither iptables nor firewalld is the actual firewall; both are merely management tools (services) used to define firewall policy. The iptables service hands the configured policy to the kernel-level netfilter packet filter, while the firewalld service hands it to the kernel-level nftables packet filtering framework. For RHEL/CentOS 7, our recommendation is to remove the iptables service (not the iptables command) and enable the firewalld service, then "disable" the firewall with the command firewall-cmd --set-default-zone=trusted. Only then will docker and kubernetes run without problems.
Docker ultimately calls the iptables command anyway; it does not care whether the underlying system runs the iptables service or the firewalld service, since the rules end up being executed by either the netfilter or the nftables module. Our deployment program already applies this setup to your hosts during the docker installation step: the firewall service stays active, but the policy is trusted, which is the recommended approach. If your production environment does not allow such a permissive firewall, you can still use firewall-cmd manually to enforce strict, specific ACL entries.
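For reference, the resulting state can be verified with firewall-cmd, and stricter rules can still be layered on top later; a small sketch:
```
firewall-cmd --state              # firewalld itself stays running
firewall-cmd --get-default-zone   # should report "trusted" after the docker step
```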
10. Before the deployment work starts, run the following command on every server to be deployed:
```
hostnamectl set-hostname <hostname>
```
to make sure the environment is compliant.
10. Before the deployment work starts, run the following command on every server to be deployed:
```
hostnamectl set-hostname <hostname>
......@@ -50,12 +56,3 @@ sed -i "s/.*server:.*/ server: https:\/\/{{ endpoint }}/g" /etc/kubernetes/kubel
```
Here, endpoint is the "Kubernetes entry point" entered for the Kubernetes component on the Breeze web page.
(6) After installation completes, run kubectl get nodes to confirm that the new node has been added.
13. The CoreDNS service container fails to start; explanations of this failure are given here:
https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters
https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#coredns-pods-have-crashloopbackoff-or-error-state
https://www.jianshu.com/p/08526d0ba398
The most common cause is that /etc/resolv.conf contains a wrong DNS server, or no DNS is configured at all and the system falls back to the default 127.0.0.1. In that case use nmtui to enter a correct DNS server in the configuration UI (for example 114.114.114.114), save it, and restart the network service. Alternatively, remove the loop plugin as described in the jianshu article; or the machine environment itself may be wrong, in which case you need the correct docker version and SELinux disabled.
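A rough sequence for the most common case (a wrong or missing upstream DNS on the node) might look like this; the DNS address is just the example from the text above:
```
cat /etc/resolv.conf                                     # look for 127.0.0.1 or a wrong nameserver
nmtui                                                    # set a proper DNS server such as 114.114.114.114 and save
systemctl restart network                                # apply the change
kubectl -n kube-system delete pod -l k8s-app=kube-dns    # let the CoreDNS pods be recreated
```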
......@@ -2,7 +2,7 @@ version: '2'
services:
deploy:
container_name: deploy-main
image: wise2c/pagoda:v1.0
image: wise2c/pagoda:v1.1
restart: always
entrypoint: sh
command:
......@@ -23,12 +23,12 @@ services:
network_mode: "service:deploy"
playbook:
container_name: deploy-playbook
image: wise2c/playbook:v1.13
image: wise2c/playbook:v1.13.1
volumes:
- playbook:/workspace
yum-repo:
container_name: deploy-yumrepo
image: wise2c/yum-repo:v1.13
image: wise2c/yum-repo:v1.13.1.2
ports:
- 2009:2009
restart: always
......
- name: set hostname
hostname:
name: '{{ hostname }}'
when: format_hostname
- name: get seed ip
shell:
echo $SSH_CONNECTION | cut -d " " -f 1
register: ip
- name: add seed to /etc/hosts
blockinfile:
path: /etc/hosts
block: '{{ ip.stdout }} {{ wise2c_seed_host }}'
marker: '# {mark} WISE2C DEPLOY MANAGED BLOCK {{ wise2c_seed_host }}'
- name: add to /etc/hosts
blockinfile:
path: /etc/hosts
block: '{{ item.key }} {{ item.value.hostname }}'
marker: "# {mark} WISE2C DEPLOY MANAGED BLOCK {{ item.key }}"
with_dict: "{{ hostvars }}"
- name: disabled selinux
selinux:
state: disabled
- name: start firewalld
systemd:
name: firewalld
enabled: true
state: started
- name: config firewalld
shell: |
firewall-cmd --set-default-zone=trusted
firewall-cmd --complete-reload
- name: distribute wise2c repo
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/wise2c.repo.j2', dest: '/etc/yum.repos.d/wise2c.repo' }
- name: distribute ipvs bootload file
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/ipvs.conf.j2', dest: '/etc/modules-load.d/ipvs.conf' }
- name: install docker
yum:
disablerepo: '*'
enablerepo: wise2c
update_cache: true
state: present
name: '{{ item }}'
with_items:
- rsync
- jq
- docker-ce
- python-docker-py
- docker-compose
- chrony
- name: distribute chrony server config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/chrony-server.conf.j2', dest: '/etc/chrony.conf' }
when: inventory_hostname == ansible_play_batch[0]
- name: distribute chrony client config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/chrony-client.conf.j2', dest: '/etc/chrony.conf' }
when: inventory_hostname != ansible_play_batch[0]
- name: start chrony
systemd:
name: chronyd
daemon_reload: true
enabled: true
state: started
- name: clear docker config
copy:
content: ''
dest: '{{ item }}'
with_items:
- /etc/sysconfig/docker
- /etc/sysconfig/docker-storage
- /etc/sysconfig/docker-storage-setup
- /etc/sysconfig/docker-network
- name: init docker to create folder /etc/docker
systemd:
name: docker
daemon_reload: true
enabled: true
state: restarted
- name: distribute docker config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/daemon.json.j2', dest: '/etc/docker/daemon.json' }
- name: reload & restart docker
systemd:
name: docker
daemon_reload: true
enabled: true
state: restarted
- name: set sysctl
sysctl:
name: '{{ item }}'
value: 1
state: present
reload: true
with_items:
- net.ipv4.ip_forward
- net.bridge.bridge-nf-call-iptables
- net.bridge.bridge-nf-call-ip6tables
......@@ -4,12 +4,8 @@
any_errors_fatal: true
vars:
path: /var/tmp/wise2c/docker
tasks:
- name: set hostname
hostname:
name: '{{ hostname }}'
when: format_hostname
tasks:
- name: get seed ip
shell:
echo $SSH_CONNECTION | cut -d " " -f 1
......@@ -27,118 +23,11 @@
block: '{{ item.key }} {{ item.value.hostname }}'
marker: "# {mark} WISE2C DEPLOY MANAGED BLOCK {{ item.key }}"
with_dict: "{{ hostvars }}"
- name: disabled selinux
selinux:
state: disabled
- name: start firewalld
systemd:
name: firewalld
enabled: true
state: started
- name: config firewalld
shell: |
firewall-cmd --set-default-zone=trusted
firewall-cmd --complete-reload
- name: distribute wise2c repo
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/wise2c.repo.j2', dest: '/etc/yum.repos.d/wise2c.repo' }
- name: distribute ipvs bootload file
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/ipvs.conf.j2', dest: '/etc/modules-load.d/ipvs.conf' }
- name: install docker
yum:
disablerepo: '*'
enablerepo: wise2c
update_cache: true
state: present
name: '{{ item }}'
with_items:
- rsync
- jq
- docker-ce
- python-docker-py
- docker-compose
- chrony
- name: distribute chrony server config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/chrony-server.conf.j2', dest: '/etc/chrony.conf' }
when: inventory_hostname == ansible_play_batch[0]
- name: distribute chrony client config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/chrony-client.conf.j2', dest: '/etc/chrony.conf' }
when: inventory_hostname != ansible_play_batch[0]
- name: start chrony
systemd:
name: chronyd
daemon_reload: true
enabled: true
state: started
- name: check docker
script: scripts/check_docker.sh {{ harbor }}
register: check_output
- block:
- name: clear docker config
copy:
content: ''
dest: '{{ item }}'
with_items:
- /etc/sysconfig/docker
- /etc/sysconfig/docker-storage
- /etc/sysconfig/docker-storage-setup
- /etc/sysconfig/docker-network
- name: init docker to create folder /etc/docker
systemd:
name: docker
daemon_reload: true
enabled: true
state: restarted
- name: distribute docker config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/daemon.json.j2', dest: '/etc/docker/daemon.json' }
- name: reload & restart docker
systemd:
name: docker
daemon_reload: true
enabled: true
state: restarted
- name: set sysctl
sysctl:
name: '{{ item }}'
value: 1
state: present
reload: true
with_items:
- net.ipv4.ip_forward
- net.bridge.bridge-nf-call-iptables
- net.bridge.bridge-nf-call-ip6tables
- name: setup docker on all nodes
include_tasks: docker.ansible
when: check_output.stdout != 'true'
......@@ -2,16 +2,12 @@
hosts: hosts
user: root
tasks:
# - name: install docker
# yum:
# disablerepo: '*'
# enablerepo: wise2c
# state: absent
# name: '{{ item }}'
# with_items:
# - rsync
# - jq
# - chrony
# - docker
# - python-docker-py
# - docker-compose
\ No newline at end of file
- name: remove docker
yum:
disablerepo: '*'
enablerepo: wise2c
state: absent
name: '{{ item }}'
with_items:
- docker-ce
- docker-compose
......@@ -9,7 +9,7 @@ version=`cat ${path}/components-version.txt |grep "Harbor" |awk '{print $3}'`
echo "" >> ${path}/yat/harbor.yml.gotmpl
echo "version: v${version}" >> ${path}/yat/harbor.yml.gotmpl
curl -L https://storage.googleapis.com/harbor-releases/release-${version}/harbor-offline-installer-v${version}.tgz \
curl -L https://storage.googleapis.com/harbor-releases/release-${version%.*}.0/harbor-offline-installer-v${version}.tgz \
-o ${path}/file/harbor-offline-installer-v${version}.tgz
curl -sSL https://raw.githubusercontent.com/vmware/harbor/v${version}/make/harbor.cfg \
......
......@@ -2,12 +2,17 @@
hosts: harbor
user: root
tasks:
# - name: stop & rm harbor
# docker_service:
# project_src: '{{ cpath }}/harbor'
# state: absent
# remove_volumes: true
- name: stop & rm harbor
docker_service:
project_src: '{{ cpath }}/harbor'
state: absent
remove_volumes: true
shell: docker-compose stop && docker-compose rm -f
args:
chdir: '{{ cpath }}/harbor'
- name: clean harbor directory
file:
path: '{{ item }}'
......@@ -19,4 +24,6 @@
- /data/config
- /data/job_logs
- /data/psc
- /data/secretkey
- /data/redis
- '{{ cpath }}'
......@@ -90,7 +90,7 @@ subjects:
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
......@@ -109,7 +109,7 @@ spec:
spec:
containers:
- name: kubernetes-dashboard
image: {{ registry_endpoint }}/{{ registry_project }}/kubernetes-dashboard-amd64:v1.10.0
image: {{ registry_endpoint }}/{{ registry_project }}/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
......
- name: make k8s master dir
file:
path: '{{ item }}'
state: directory
mode: 0755
with_items:
- /etc/kubernetes/pki
- '{{ path }}'
- $HOME/.kube
- name: remove swapfile from /etc/fstab
mount:
name: swap
fstype: swap
state: absent
- name: disable swap
command: swapoff -a
- block:
- name: install kubernetes components
yum:
disablerepo: '*'
enablerepo: wise2c
update_cache: true
state: present
name: '{{ item }}'
with_items:
- kubernetes-cni-0.6.0
- kubectl-{{ kubernetes_version[1:] }}
- kubelet-{{ kubernetes_version[1:] }}
- kubeadm-{{ kubernetes_version[1:] }}
- name: unarchive cfssl tool
unarchive:
src: file/cfssl-tools.tar.gz
dest: /usr/local/bin
- name: copy prometheus k8s nodes fix scripts
copy:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
mode: 0755
with_items:
- { src: 'file/prometheus-fix-master-nodes.sh', dest: '/var/tmp/wise2c/kubernetes/' }
- { src: 'file/prometheus-fix-worker-nodes.sh', dest: '/var/tmp/wise2c/kubernetes/' }
- name: distribute kubelet config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/kubelet.conf.j2', dest: '/etc/systemd/system/kubelet.service.d/wise2c-kubelet.conf' }
- name: reload & enable kubelet
systemd:
name: kubelet
daemon_reload: true
enabled: true
- name: set sysctl
sysctl:
name: '{{ item }}'
value: 1
state: present
reload: true
with_items:
- net.bridge.bridge-nf-call-iptables
- net.bridge.bridge-nf-call-ip6tables
#!/bin/bash
# Check if there are no api server cert files under /etc/kubernetes/pki
set -e
if [ -f "/etc/kubernetes/pki/apiserver.crt" ] || [ -f "/etc/kubernetes/pki/apiserver.key" ] ; then
echo "/etc/kubernetes/pki/apiserver.crt or /etc/kubernetes/pki/apiserver.key already exists!"
echo "Please execute command kubeadm reset -f if you want to reinstall the cluster."
exit 1
fi
set +e
# Get host IP address and hostname
WISE2C_IP_LABEL=$(cat /etc/hosts |grep -A 1 'BEGIN WISE2C DEPLOY MANAGED BLOCK' |grep -v '#' |grep -v '^\-\-' |wc |awk '{print $1}')
......@@ -18,6 +30,8 @@ fi
HOST_VIP=`cat /var/tmp/wise2c/kubernetes/kubeadm.conf | grep -A 1 SAN | tail -1 | awk '{print $2}'`
set -e
# K8S apiserver certificate
cd /var/tmp/wise2c/kubernetes
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -hostname=127.0.0.1,10.96.0.1,$HOST_IP,$HOST_VIP,$HOST_NAME,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
......
#!/bin/bash
set -e
# Check if there are no cert files under /etc/kubernetes/pki
if [ "`ls -A /etc/kubernetes/pki/`" != "" ]; then
exit 1
fi
# K8S CA
cd /var/tmp/wise2c/kubernetes/
......
......@@ -45,7 +45,7 @@ curl -sSL https://github.com/wise2c-devops/breeze/raw/v1.13/kubernetes-playbook/
| sed -e "s,quay.io/coreos,{{ registry_endpoint }}/{{ registry_project }},g" > ${path}/template/kube-flannel.yml.j2
dashboard_repo=${kubernetes_repo}
dashboard_version="v1.10.0"
dashboard_version="v1.10.1"
echo "dashboard_repo: ${dashboard_repo}" >> ${path}/yat/all.yml.gotmpl
echo "dashboard_version: ${dashboard_version}" >> ${path}/yat/all.yml.gotmpl
......
- name: install kubernetes package
hosts: all
- name: set up kubernetes master nodes
hosts: master
user: root
vars:
path: /var/tmp/wise2c/kubernetes
tasks:
- name: make k8s master dir
file:
path: '{{ item }}'
state: directory
mode: 0755
with_items:
- /etc/kubernetes/pki
- '{{ path }}'
- $HOME/.kube
- name: check kubernetes
script: scripts/check_kubelet.sh
tasks:
- name: check kubernetes services
script: scripts/check_kubelet_kubeproxy.sh
register: check_output
- name: remove swapfile from /etc/fstab
mount:
name: swap
fstype: swap
state: absent
- name: disable swap
command: swapoff -a
- block:
- name: install kubernetes components
yum:
disablerepo: '*'
enablerepo: wise2c
update_cache: true
state: present
name: '{{ item }}'
with_items:
- kubernetes-cni-0.6.0
- kubectl-{{ kubernetes_version[1:] }}
- kubelet-{{ kubernetes_version[1:] }}
- kubeadm-{{ kubernetes_version[1:] }}
- name: unarchive cfssl tool
unarchive:
src: file/cfssl-tools.tar.gz
dest: /usr/local/bin
- name: copy prometheus k8s nodes fix scripts
copy:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
mode: 0755
with_items:
- { src: 'file/prometheus-fix-master-nodes.sh', dest: '/var/tmp/wise2c/kubernetes/' }
- { src: 'file/prometheus-fix-worker-nodes.sh', dest: '/var/tmp/wise2c/kubernetes/' }
- name: distribute kubelet config
template:
src: '{{ item.src }}'
dest: '{{ item.dest }}'
with_items:
- { src: 'template/kubelet.conf.j2', dest: '/etc/systemd/system/kubelet.service.d/wise2c-kubelet.conf' }
- name: setup master nodes
include_tasks: master-node.ansible
when: (not (check_output.stdout == True)) and (not (add_worker_node_only == True))
- name: reload & enable kubelet
systemd:
name: kubelet
daemon_reload: true
enabled: true
- name: set sysctl
sysctl:
name: '{{ item }}'
value: 1
state: present
reload: true
with_items:
- net.bridge.bridge-nf-call-iptables
- net.bridge.bridge-nf-call-ip6tables
- name: setup kubernetes worker nodes
hosts: node
user: root
vars:
path: /var/tmp/wise2c/kubernetes
- name: setup master
include_tasks: master.ansible
when: role == 'master'
tasks:
- name: check kubernetes services
script: scripts/check_kubelet_kubeproxy.sh
register: check_output
- name: setup node
include_tasks: node.ansible
when: role == 'node'
when: check_output.stdout != 'true'
- name: setup worker nodes
include_tasks: worker-node.ansible
when: not (check_output.stdout == True)
- name: init setup on master nodes
include_tasks: both.ansible
- name: copy k8s images
copy:
src: '{{ item.src }}'
......
[
{
"variable": "AddWorkerNodesOnly",
"label": "Just add new worker nodes, do not reinstall this cluster",
"description": "Existing master nodes will not be updated. Please install docker for new worker nodes at first.",
"type": "bool",
"default": "false",
"required": true
},
{
"variable": "master",
"label": "kubernetes master hosts",
"description": "hosts to setup kubernetes master",
"label": "kubernetes master nodes",
"description": "hosts to setup kubernetes master nodes",
"type": "host",
"required": false
},
{
"variable": "node",
"label": "kubenetes node hosts",
"description": "hosts to setup kubernetes node",
"variable": "worker",
"label": "kubenetes worker nodes",
"description": "hosts to setup kubernetes worker nodes",
"type": "host",
"required": false
},
......
- name: clean k8s master
- name: reset kubernetes cluster
hosts: all
user: root
tasks:
......@@ -6,7 +6,15 @@
shell: |
kubeadm reset -f
- name: install kubernetes components
- name: iptables reset
shell: |
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
- name: ipvs reset
shell: |
ipvsadm --clear
- name: remove kubernetes components
yum:
state: absent
disablerepo: '*'
......@@ -18,7 +26,7 @@
- kubelet-{{ kubernetes_version[1:] }}
- kubeadm-{{ kubernetes_version[1:] }}
- name: clean link
- name: clean flannel link
shell: |
ip link delete cni0
ip link delete flannel.1
......
#! /bin/bash
# while read -r line
# do
# if [[ "${line}" =~ "server: https://$1:$2" ]]
# then printf ${line}
# fi
# done < /etc/kubernetes/kubelet.conf
code=`curl -sL -o /dev/null -w %{response_code} http://127.0.0.1:10255/stats`
if [ "${code}" == "200" ]; then
printf true
else
printf false
fi
\ No newline at end of file
#! /bin/bash
kubelet_code_stats=`curl -sLk -o /dev/null -w %{response_code} https://127.0.0.1:10250/stats`
kubelet_code_errortest=`curl -sLk -o /dev/null -w %{response_code} https://127.0.0.1:10250/errortest`
kubeproxy_code_healthz=`curl -sLk -o /dev/null -w %{response_code} http://127.0.0.1:10256/healthz`
kubeproxy_code_errortest=`curl -sLk -o /dev/null -w %{response_code} http://127.0.0.1:10256/errortest`
if ( [ "$kubelet_code_stats" == "200" ] || [ "$kubelet_code_stats" == "401" ] ) && [ "$kubelet_code_errortest" == "404" ]; then
kubelet_health=true
else
kubelet_health=false
fi
if ( [ "$kubeproxy_code_healthz" == "200" ] || [ "$kubeproxy_code_healthz" == "503" ] ) && [ "$kubeproxy_code_errortest" == "404" ]; then
kubeproxy_health=true
else
kubeproxy_health=false
fi
if [ "${kubelet_health}" == true ] && [ "${kubeproxy_health}" == true ]; then
printf true
else
printf false
fi
......@@ -105,7 +105,7 @@ spec:
spec:
containers:
- name: kubernetes-dashboard
image: {{ registry_endpoint }}/{{ registry_project }}/kubernetes-dashboard-amd64:v1.10.0
image: {{ registry_endpoint }}/{{ registry_project }}/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
......