Commit 6b57f61b authored by Mengxin Liu, committed by oilbeater

release v0.8.0

parent 381270d3
# CHANGELOG
## v0.8.0 -- 2019/10/08
### Gateway
* Support active-backup mode for centralized gateway high availability
### Diagnose Tools
* Kubectl plugin to trace/tcpdump/diagnose pod network traffic
* Pinger to test cluster network quality and expose metrics to Prometheus
### IPAM
* Join subnet IPs can now be displayed by `kubectl get ip`
### Security
* Enable port security to prevent MAC and IP spoofing
* Allow node-to-pod traffic for private subnets
### Misc
* Support hostport
* Update OVN/OVS to 2.11.3
* Update Go to 1.13
## v0.7.0 -- 2019/08/21
### IPAM
......
......@@ -19,8 +19,9 @@ Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It off
- **Namespaced Gateways**: Every Namespace can have a dedicated Gateway for Egress traffic.
- **Direct External Connectivity**: Pod IPs can be exposed to the external network directly.
- **Traffic Mirror**: Duplicated container network traffic for monitoring and diagnosing.
- **IPv6 support**: Kube-OVN support ipv6-only mode pod network.
- **Kubectl Plugin**: Handy tools to diagnose container network.
- **IPv6 Support**: Kube-OVN supports IPv6-only pod networks.
- **Troubleshooting Tools**: Handy tools to diagnose, trace, monitor, and dump container network traffic to help troubleshoot complicated network issues.
- **Prometheus Integration**: Exposing network quality metrics like pod/node/service/dns connectivity/latency in Prometheus format.
## Planned Future Work
- Hardware Offloading and DPDK Support
......@@ -48,7 +49,8 @@ If you want to install Kubernetes from scratch, you can try [kubespray](https://
- [Traffic Mirror](docs/mirror.md)
- [Webhook](docs/webhook.md)
- [IPv6](docs/ipv6.md)
- [Kubectl Plugin](docs/kubectl-plugin.md)
- [Tracing/Diagnose/Dump Traffic with Kubectl Plugin](docs/kubectl-plugin.md)
- [Prometheus Integration](docs/pinger.md)
## Kube-OVN vs. Other CNI Implementations
......
v0.8.0-pre
\ No newline at end of file
v0.8.0
......@@ -16,7 +16,8 @@ showHelp(){
tcpdump(){
namespacedPod="$1"; shift
namespace=$(echo "$namespacedPod" | cut -d "/" -f2)
namespace=$(echo "$namespacedPod" | cut -d "/" -f1)
podName=$(echo "$namespacedPod" | cut -d "/" -f2)
if [ "$podName" = "$namespacedPod" ]; then
nodeName=$(kubectl get pod "$podName" -o jsonpath={.spec.nodeName})
mac=$(kubectl get pod "$podName" -o jsonpath={.metadata.annotations.ovn\\.kubernetes\\.io/mac_address})
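The fix above splits the `namespace/pod` argument into its two fields. When the argument contains no `/`, `cut` returns the whole string for both fields, so the `podName = namespacedPod` check detects that no namespace was given. A minimal illustration of the parsing:
```bash
namespacedPod="default/ds1-l6n7p"
namespace=$(echo "$namespacedPod" | cut -d "/" -f1)  # -> default
podName=$(echo "$namespacedPod" | cut -d "/" -f2)    # -> ds1-l6n7p

# Without a "/", both fields fall back to the whole input, so
# podName equals namespacedPod and the plugin queries the current namespace.
```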
......@@ -81,7 +82,7 @@ trace(){
exit 1
fi
gwMac=$(kubectl exec -it ovn-central-8ddc7dd8-fm2c4 -n $KUBE_OVN_NS -- ovn-nbctl --data=bare --no-heading --columns=mac find logical_router_port name=ovn-cluster-"$ls" | tr -d '\r')
gwMac=$(kubectl exec -it $CENTRAL_POD -n $KUBE_OVN_NS -- ovn-nbctl --data=bare --no-heading --columns=mac find logical_router_port name=ovn-cluster-"$ls" | tr -d '\r')
if [ -z "$gwMac" ]; then
echo "get gw mac failed"
......
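`$CENTRAL_POD` replaces the pod name that was previously hard-coded. One plausible way to resolve it, assuming the ovn-central pods carry the label `app=ovn-central` (an assumption; the plugin may define it differently):
```bash
# Hypothetical lookup of the ovn-central pod by label.
KUBE_OVN_NS=kube-ovn
CENTRAL_POD=$(kubectl get pod -n "$KUBE_OVN_NS" -l app=ovn-central \
    -o jsonpath='{.items[0].metadata.name}')
```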
......@@ -21,19 +21,19 @@ Kube-OVN includes two parts:
`kubectl label node <Node on which to deploy OVN DB> kube-ovn/role=master`
2. Install Kube-OVN related CRDs
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.7.0/yamls/crd.yaml`
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.8.0/yamls/crd.yaml`
3. Install native OVS and OVN components:
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.7.0/yamls/ovn.yaml`
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.8.0/yamls/ovn.yaml`
4. Install the Kube-OVN Controller and CNI plugins:
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.7.0/yamls/kube-ovn.yaml`
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.8.0/yamls/kube-ovn.yaml`
That's all! You can now create some pods and test connectivity.
For a highly available OVN DB, see [high availability](high-available.md)
If you want to enable IPv6 on default subnet and node subnet, please apply https://raw.githubusercontent.com/alauda/kube-ovn/v0.7.0/yamls/kube-ovn-ipv6.yaml on Step 3.
If you want to enable IPv6 on the default subnet and node subnet, please apply https://raw.githubusercontent.com/alauda/kube-ovn/v0.8.0/yamls/kube-ovn-ipv6.yaml in Step 3.
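As a quick sanity check after the steps above (a sketch, assuming the default `kube-ovn` namespace used by the yamls):
```bash
# All Kube-OVN components should reach the Running state
kubectl -n kube-ovn get pods -o wide
# The default and join subnets should appear once the controller is up
kubectl get subnet
```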
## More Configuration
......@@ -79,7 +79,7 @@ If you want to enable IPv6 on default subnet and node subnet, please apply https
1. Remove Kubernetes resources:
```bash
wget https://raw.githubusercontent.com/alauda/kube-ovn/v0.7.0/dist/images/cleanup.sh
wget https://raw.githubusercontent.com/alauda/kube-ovn/v0.8.0/dist/images/cleanup.sh
bash cleanup.sh
```
......@@ -91,4 +91,4 @@ If you want to enable IPv6 on default subnet and node subnet, please apply https
rm -rf /etc/openvswitch
rm -rf /etc/cni/net.d/00-kube-ovn.conflist
```
3. Reboot the Node to remove ipset/iptables rules and nics.
\ No newline at end of file
3. Reboot the Node to remove ipset/iptables rules and nics.
......@@ -2,4 +2,4 @@
Though Kube-OVN does support subnets of both protocols coexisting in a cluster, the Kubernetes control plane currently supports only one protocol. You will therefore lose some abilities, such as probes and service discovery, if you use a protocol other than the one used by the Kubernetes control plane. We recommend using a single IP protocol, the same as the Kubernetes control plane.
To enable IPv6 support you need to modify the installation yaml to specify the default subnet and node subnet cidrBlock and gateway with a ipv6 format. You can apply this [v6 version yaml](https://raw.githubusercontent.com/alauda/kube-ovn/v0.7.0/yamls/kube-ovn-ipv6.yaml) at [installation step 3](install.md#to-install) for a quick start.
\ No newline at end of file
To enable IPv6 support, you need to modify the installation yaml to specify the default subnet and node subnet cidrBlock and gateway in IPv6 format. You can apply this [v6 version yaml](https://raw.githubusercontent.com/alauda/kube-ovn/v0.8.0/yamls/kube-ovn-ipv6.yaml) at [installation step 3](install.md#to-install) for a quick start.
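For reference, a sketch of what the IPv6 values look like on a Subnet resource; the addresses below are hypothetical, and the actual defaults come from the v6 yaml above:
```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: ovn-default
spec:
  protocol: IPv6              # assumption: protocol switched to IPv6
  cidrBlock: fd00:10:16::/64  # hypothetical IPv6 CIDR
  gateway: fd00:10:16::1      # hypothetical gateway
```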
......@@ -40,3 +40,136 @@ Available Subcommands:
trace {namespace/podname} {target ip address} {icmp|tcp|udp} [target tcp or udp port]
diagnose {all|node} [nodename] diagnose connectivity of all nodes or a specific node
```
1. Show ovn-sb overview
```bash
[root@node2 ~]# kubectl ko sbctl show
Chassis "36f129a9-276f-4d96-964b-7d3703001b81"
hostname: "node1.cluster.local"
Encap geneve
ip: "10.0.129.96"
options: {csum="true"}
Port_Binding "tiller-deploy-849b7c6496-5l9r6.kube-system"
Port_Binding "kube-ovn-pinger-5mq4g.kube-ovn"
Port_Binding "nginx-6b4b85b77b-rk9tq.acl"
Port_Binding "node-node1"
Port_Binding "piquant-magpie-nginx-ingress-default-backend-84776f949b-jthhh.kube-system"
Port_Binding "ds1-l6n7p.default"
Chassis "9ced77f4-dae4-4e0b-b3fe-15dd82104e67"
hostname: "node2.cluster.local"
Encap geneve
ip: "10.0.128.15"
options: {csum="true"}
Port_Binding "ds1-wqpdz.default"
Port_Binding "node-node2"
Port_Binding "kube-ovn-pinger-8xhhv.kube-ovn"
Chassis "dc922a96-97d4-418d-a45f-8989d2b6dc91"
hostname: "node3.cluster.local"
Encap geneve
ip: "10.0.128.35"
options: {csum="true"}
Port_Binding "ds1-dflpx.default"
Port_Binding "coredns-585c7897d4-59xkc.kube-system"
Port_Binding "node-node3"
Port_Binding "kube-ovn-pinger-gc8l6.kube-ovn"
Port_Binding "coredns-585c7897d4-7dglw.kube-system"
```
2. Dump pod ICMP traffic
```bash
[root@node2 ~]# kubectl ko tcpdump default/ds1-l6n7p icmp
+ kubectl exec -it kube-ovn-cni-wlg4s -n kube-ovn -- tcpdump -nn -i d7176fe7b4e0_h icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on d7176fe7b4e0_h, link-type EN10MB (Ethernet), capture size 262144 bytes
06:52:36.619688 IP 100.64.0.3 > 10.16.0.4: ICMP echo request, id 2, seq 1, length 64
06:52:36.619746 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 1, length 64
06:52:37.619588 IP 100.64.0.3 > 10.16.0.4: ICMP echo request, id 2, seq 2, length 64
06:52:37.619630 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 2, length 64
06:52:38.619933 IP 100.64.0.3 > 10.16.0.4: ICMP echo request, id 2, seq 3, length 64
06:52:38.619973 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 3, length 64
```
3. Show ovn flow from a pod to a destination
```bash
[root@node2 ~]# kubectl ko trace default/ds1-l6n7p 8.8.8.8 icmp
+ kubectl exec ovn-central-5bc494cb5-np9hm -n kube-ovn -- ovn-trace --ct=new ovn-default 'inport == "ds1-l6n7p.default" && ip.ttl == 64 && icmp && eth.src == 0a:00:00:10:00:05 && ip4.src == 10.16.0.4 && eth.dst == 00:00:00:B8:CA:43 && ip4.dst == 8.8.8.8'
# icmp,reg14=0xf,vlan_tci=0x0000,dl_src=0a:00:00:10:00:05,dl_dst=00:00:00:b8:ca:43,nw_src=10.16.0.4,nw_dst=8.8.8.8,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=0,icmp_code=0
ingress(dp="ovn-default", inport="ds1-l6n7p.default")
-----------------------------------------------------
0. ls_in_port_sec_l2 (ovn-northd.c:4143): inport == "ds1-l6n7p.default" && eth.src == {0a:00:00:10:00:05}, priority 50, uuid 39453393
next;
1. ls_in_port_sec_ip (ovn-northd.c:2898): inport == "ds1-l6n7p.default" && eth.src == 0a:00:00:10:00:05 && ip4.src == {10.16.0.4}, priority 90, uuid 81bcd485
next;
3. ls_in_pre_acl (ovn-northd.c:3269): ip, priority 100, uuid 7b4f4971
reg0[0] = 1;
next;
5. ls_in_pre_stateful (ovn-northd.c:3396): reg0[0] == 1, priority 100, uuid 36cdd577
ct_next;
ct_next(ct_state=new|trk)
-------------------------
6. ls_in_acl (ovn-northd.c:3759): ip && (!ct.est || (ct.est && ct_label.blocked == 1)), priority 1, uuid 7608af5b
reg0[1] = 1;
next;
10. ls_in_stateful (ovn-northd.c:3995): reg0[1] == 1, priority 100, uuid 2aba1b90
ct_commit(ct_label=0/0x1);
next;
16. ls_in_l2_lkup (ovn-northd.c:4470): eth.dst == 00:00:00:b8:ca:43, priority 50, uuid 5c9c3c9f
outport = "ovn-default-ovn-cluster";
output;
....Skip More....
```
4. Diagnose network connectivity
```bash
[root@node2 ~]# kubectl ko diagnose all
### start to diagnose node node1
I1008 07:04:40.475604 26434 ping.go:139] ovs-vswitchd and ovsdb are up
I1008 07:04:40.570824 26434 ping.go:151] ovn_controller is up
I1008 07:04:40.570859 26434 ping.go:35] start to check node connectivity
I1008 07:04:44.586096 26434 ping.go:57] ping node: node1 10.0.129.96, count: 5, loss rate 0.00%, average rtt 0.23ms
I1008 07:04:44.592764 26434 ping.go:57] ping node: node3 10.0.128.35, count: 5, loss rate 0.00%, average rtt 0.63ms
I1008 07:04:44.592791 26434 ping.go:57] ping node: node2 10.0.128.15, count: 5, loss rate 0.00%, average rtt 0.54ms
I1008 07:04:44.592889 26434 ping.go:74] start to check pod connectivity
I1008 07:04:48.669057 26434 ping.go:101] ping pod: kube-ovn-pinger-5mq4g 10.16.0.12, count: 5, loss rate 0.00, average rtt 0.18ms
I1008 07:04:48.769217 26434 ping.go:101] ping pod: kube-ovn-pinger-8xhhv 10.16.0.10, count: 5, loss rate 0.00, average rtt 0.64ms
I1008 07:04:48.769219 26434 ping.go:101] ping pod: kube-ovn-pinger-gc8l6 10.16.0.13, count: 5, loss rate 0.00, average rtt 0.73ms
I1008 07:04:48.769325 26434 ping.go:119] start to check dns connectivity
I1008 07:04:48.777062 26434 ping.go:129] resolve dns kubernetes.default.svc.cluster.local to [10.96.0.1] in 7.71ms
### finish diagnose node node1
### start to diagnose node node2
I1008 07:04:49.231462 16925 ping.go:139] ovs-vswitchd and ovsdb are up
I1008 07:04:49.241636 16925 ping.go:151] ovn_controller is up
I1008 07:04:49.241694 16925 ping.go:35] start to check node connectivity
I1008 07:04:53.254327 16925 ping.go:57] ping node: node2 10.0.128.15, count: 5, loss rate 0.00%, average rtt 0.16ms
I1008 07:04:53.354411 16925 ping.go:57] ping node: node1 10.0.129.96, count: 5, loss rate 0.00%, average rtt 15.65ms
I1008 07:04:53.354464 16925 ping.go:57] ping node: node3 10.0.128.35, count: 5, loss rate 0.00%, average rtt 15.71ms
I1008 07:04:53.354492 16925 ping.go:74] start to check pod connectivity
I1008 07:04:57.382791 16925 ping.go:101] ping pod: kube-ovn-pinger-8xhhv 10.16.0.10, count: 5, loss rate 0.00, average rtt 0.16ms
I1008 07:04:57.483725 16925 ping.go:101] ping pod: kube-ovn-pinger-5mq4g 10.16.0.12, count: 5, loss rate 0.00, average rtt 1.74ms
I1008 07:04:57.483750 16925 ping.go:101] ping pod: kube-ovn-pinger-gc8l6 10.16.0.13, count: 5, loss rate 0.00, average rtt 1.81ms
I1008 07:04:57.483813 16925 ping.go:119] start to check dns connectivity
I1008 07:04:57.490402 16925 ping.go:129] resolve dns kubernetes.default.svc.cluster.local to [10.96.0.1] in 6.56ms
### finish diagnose node node2
### start to diagnose node node3
I1008 07:04:58.094738 21692 ping.go:139] ovs-vswitchd and ovsdb are up
I1008 07:04:58.176064 21692 ping.go:151] ovn_controller is up
I1008 07:04:58.176096 21692 ping.go:35] start to check node connectivity
I1008 07:05:02.193091 21692 ping.go:57] ping node: node3 10.0.128.35, count: 5, loss rate 0.00%, average rtt 0.21ms
I1008 07:05:02.293256 21692 ping.go:57] ping node: node2 10.0.128.15, count: 5, loss rate 0.00%, average rtt 0.58ms
I1008 07:05:02.293256 21692 ping.go:57] ping node: node1 10.0.129.96, count: 5, loss rate 0.00%, average rtt 0.68ms
I1008 07:05:02.293368 21692 ping.go:74] start to check pod connectivity
I1008 07:05:06.314977 21692 ping.go:101] ping pod: kube-ovn-pinger-gc8l6 10.16.0.13, count: 5, loss rate 0.00, average rtt 0.37ms
I1008 07:05:06.415222 21692 ping.go:101] ping pod: kube-ovn-pinger-5mq4g 10.16.0.12, count: 5, loss rate 0.00, average rtt 0.82ms
I1008 07:05:06.415317 21692 ping.go:101] ping pod: kube-ovn-pinger-8xhhv 10.16.0.10, count: 5, loss rate 0.00, average rtt 0.64ms
I1008 07:05:06.415354 21692 ping.go:119] start to check dns connectivity
I1008 07:05:06.420595 21692 ping.go:129] resolve dns kubernetes.default.svc.cluster.local to [10.96.0.1] in 5.21ms
### finish diagnose node node3
```
Pinger makes network requests between pods/nodes/services/dns to test the connectivity in the cluster and exposes metrics in Prometheus format.
## Prometheus Integration
Pinger exposes metrics at `:8080/metrics`. It shows the following metrics:
```bash
pinger_ovs_up
pinger_ovs_down
pinger_ovn_controller_up
pinger_ovn_controller_down
pinger_dns_healthy
pinger_dns_unhealthy
pinger_dns_latency_ms
pinger_pod_ping_latency_ms
pinger_pod_ping_lost_total
pinger_node_ping_latency_ms
pinger_node_ping_lost_total
```
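To collect these metrics, a minimal Prometheus scrape sketch; it assumes the pinger pods carry the label `app=kube-ovn-pinger` in the `kube-ovn` namespace, so adjust it to your deployment:
```yaml
scrape_configs:
  - job_name: kube-ovn-pinger
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [kube-ovn]
    relabel_configs:
      # Keep only the pinger pods (label assumed from the pinger DaemonSet)
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: kube-ovn-pinger
      # Point the scrape address at the :8080 metrics port mentioned above
      - source_labels: [__address__]
        action: replace
        regex: ([^:]+)(?::\d+)?
        replacement: $1:8080
        target_label: __address__
```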
......@@ -38,7 +38,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-controller
image: "index.alauda.cn/alaudak8s/kube-ovn-controller:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-controller:v0.8.0"
imagePullPolicy: Always
command:
- /kube-ovn/start-controller.sh
......@@ -109,7 +109,7 @@ spec:
hostPID: true
initContainers:
- name: install-cni
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0"
imagePullPolicy: Always
command: ["/kube-ovn/install-cni.sh"]
volumeMounts:
......@@ -119,7 +119,7 @@ spec:
name: cni-bin
containers:
- name: cni-server
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0"
command: ["sh", "/kube-ovn/start-cniserver.sh"]
args:
- --enable-mirror=false
......@@ -202,7 +202,7 @@ spec:
hostPID: true
containers:
- name: pinger
image: "index.alauda.cn/alaudak8s/kube-ovn-pinger:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-pinger:v0.8.0"
imagePullPolicy: Always
securityContext:
runAsUser: 0
......
......@@ -38,7 +38,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-controller
image: "index.alauda.cn/alaudak8s/kube-ovn-controller:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-controller:v0.8.0"
imagePullPolicy: Always
command:
- /kube-ovn/start-controller.sh
......@@ -112,7 +112,7 @@ spec:
hostPID: true
initContainers:
- name: install-cni
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0"
imagePullPolicy: Always
command: ["/kube-ovn/install-cni.sh"]
volumeMounts:
......@@ -122,7 +122,7 @@ spec:
name: cni-bin
containers:
- name: cni-server
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.8.0"
imagePullPolicy: Always
command:
- sh
......@@ -206,7 +206,7 @@ spec:
hostPID: true
containers:
- name: pinger
image: "index.alauda.cn/alaudak8s/kube-ovn-pinger:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-pinger:v0.8.0"
imagePullPolicy: Always
securityContext:
runAsUser: 0
......
......@@ -158,7 +158,7 @@ spec:
hostNetwork: true
containers:
- name: ovn-central
image: "index.alauda.cn/alaudak8s/kube-ovn-db:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-db:v0.8.0"
imagePullPolicy: Always
env:
- name: POD_IP
......@@ -245,7 +245,7 @@ spec:
hostPID: true
containers:
- name: openvswitch
image: "index.alauda.cn/alaudak8s/kube-ovn-node:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-node:v0.8.0"
imagePullPolicy: Always
securityContext:
runAsUser: 0
......
......@@ -38,7 +38,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-webhook
image: "index.alauda.cn/alaudak8s/kube-ovn-webhook:v0.8.0-pre"
image: "index.alauda.cn/alaudak8s/kube-ovn-webhook:v0.8.0"
imagePullPolicy: Always
command:
- /kube-ovn/start-webhook.sh
......