Commit 463d6253 authored by MengxinLiu, committed by oilbeater

docs: add crd/ipv6 docs and bump version 0.6.0

parent 52d40dfc
# CHANGELOG
## v0.6.0 -- 2019/07/22
### Features
* Support traffic mirror
* Use webhook to check ip conflict
* Beta IPv6 support
* Use subnet CRD to replace namespace annotation
* Use go mod to manage dependency
### Bug fixes
* Remove RBAC dependency on cluster-admin
* Use kubernetes nodename to replace hostname
## v0.5.0 -- 2019/06/06
### Features
* Support NetworkPolicy by OVN ACL
@@ -17,6 +17,7 @@ Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It off
- **Namespaced Gateways**: Every Namespace can have a dedicated Gateway for Egress traffic.
- **Direct External Connectivity**: Pod IP can be exposed to external network directly.
- **Traffic Mirror**: Duplicated container network traffic for monitoring and diagnosing.
- **IPv6 support**: Kube-OVN supports IPv6-only Pod networks.
## Planned Future Work
- Hardware Offloading and DPDK Support
@@ -35,12 +36,13 @@ Kube-OVN is easy to install with all necessary components/dependencies included.
## Documents
- [Namespaced Subnets](docs/subnet.md)
- [Subnet Isolation](docs/subnet.md#isolation)
- [Static IP](docs/static-ip.md)
- [Dynamic QoS](docs/qos.md)
- [Gateway and Direct connect](docs/subnet.md#gateway)
- [Traffic Mirror](docs/mirror.md)
- [Webhook](docs/webhook.md)
- [IPv6](docs/ipv6.md)
## Contact
Mail: mengxin#alauda.io
# Gateways
A Gateway is used to enable external network connectivity for Pods within the OVN Virtual Network.
Kube-OVN supports two kinds of Gateways: the distributed Gateway and the centralized Gateway. Users can also expose Pod IPs directly to the external network.
For a distributed Gateway, outgoing traffic from Pods within the OVN network to external destinations goes through the Node where the Pod is hosted.
For a centralized Gateway, outgoing traffic from Pods within the OVN network to external destinations goes through the Gateway Node of the Namespace.
Use the following annotations on the Namespace to configure the Gateway:
- `ovn.kubernetes.io/gateway_type`: `distributed` or `centralized`; default is `distributed`.
- `ovn.kubernetes.io/gateway_node`: when `ovn.kubernetes.io/gateway_type` is `centralized`, use this annotation to specify which node acts as the Namespace gateway.
- `ovn.kubernetes.io/gateway_nat`: `true` or `false`; whether the Pod IP needs to be masqueraded when going through the Gateway. When `false`, the Pod IP is exposed to the external network directly. Default: `true`.
## Example
Add the following annotations when creating the Namespace:
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: testns
annotations:
ovn.kubernetes.io/gateway_type: centralized
ovn.kubernetes.io/gateway_node: node1
ovn.kubernetes.io/gateway_nat: "true"
```
Create some Pods:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: app1
namespace: testns
labels:
app: app1
spec:
selector:
matchLabels:
name: app1
template:
metadata:
labels:
name: app1
spec:
containers:
- name: toolbox
image: halfcrazy/toolbox
```
Open two terminals, one on the master (the Pod name suffix will differ in your cluster):
`kubectl -n testns exec -it app1-xxxx -- ping 114.114.114.114`
And one on node1:
`tcpdump -n -i eth0 icmp and host 114.114.114.114`
With `gateway_nat: "true"`, the captured ICMP packets should carry node1's IP as their source address, confirming that Egress traffic is routed through the centralized Gateway and masqueraded.
@@ -12,22 +12,24 @@ Kube-OVN includes two parts:
*NOTE* Ubuntu 16.04 users should build the related ovs-2.11.1 kernel module to replace the kernel built-in module
## To Install
1. Add the following label to the Node which will host the OVN DB and the OVN Control Plane:
`kubectl label node <Node on which to deploy OVN DB> kube-ovn/role=master`
2. Install native OVS and OVN components:
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.6.0/yamls/ovn.yaml`
3. Install the Kube-OVN Controller and CNI plugins:
`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/v0.6.0/yamls/kube-ovn.yaml`
That's all! You can now create some pods and test connectivity.
For a highly available OVN DB, see [high availability](high-available.md)
If you want to enable IPv6 on the default subnet and the node subnet, apply https://raw.githubusercontent.com/alauda/kube-ovn/v0.6.0/yamls/kube-ovn-ipv6.yaml instead at Step 3.
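To confirm the installation succeeded, you can watch the components come up. A minimal check, assuming the component names used in the yamls above (`kube-ovn-controller` Deployment and `kube-ovn-cni` DaemonSet in the `kube-ovn` namespace):

```bash
# Wait for the controller rollout to finish
kubectl -n kube-ovn rollout status deployment/kube-ovn-controller
# All kube-ovn pods should report Running before creating workloads
kubectl -n kube-ovn get pods -o wide
```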
## More Configuration
### Controller Configuration
@@ -72,7 +74,7 @@ For high-available ovn db, see [high available](high-available.md)
1. Remove Kubernetes resources:
```bash
wget https://raw.githubusercontent.com/alauda/kube-ovn/v0.6.0/dist/images/cleanup.sh
bash cleanup.sh
```
# IPv6
Though Kube-OVN supports IPv4 and IPv6 subnets coexisting in a cluster, the Kubernetes control plane currently supports only one protocol. You will lose some abilities, such as probes and service discovery, if a subnet uses a protocol different from the Kubernetes control plane's. We recommend using the same IP protocol as the Kubernetes control plane.
To enable IPv6 support, modify the installation yaml to specify the `cidrBlock` and `gateway` of the default subnet and the node subnet in IPv6 format. For a quick start, you can apply this [v6 version yaml](https://raw.githubusercontent.com/alauda/kube-ovn/v0.6.0/yamls/kube-ovn-ipv6.yaml) at [installation step 3](install.md#to-install).
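The change amounts to a few controller flags. In the v6 yaml, the kube-ovn-controller args carry IPv6 CIDRs (these are the documentation-prefix values shipped in that yaml):

```yaml
command:
- /kube-ovn/start-controller.sh
args:
- --default-cidr=2001:db8:0000:0000::/64
- --default-gateway=2001:db8:0000:0000::1
- --node-switch-cidr=2001:db8:0000:0001::/64
- --node-switch-gateway=2001:db8:0000:0001::1
```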
# Subnet Isolation
Kube-OVN supports network isolation and access control at the Subnet level.
Use the following annotations to specify the isolation policy:
- `ovn.kubernetes.io/private`: Boolean; controls whether to deny traffic from IP addresses outside of this Subnet. Default: `false`.
- `ovn.kubernetes.io/allow`: CIDRs separated by commas; controls which addresses can access this Subnet when `private=true`.
Example:
```yaml
apiVersion: v1
kind: Namespace
metadata:
annotations:
ovn.kubernetes.io/cidr: 10.17.0.0/16
ovn.kubernetes.io/gateway: 10.17.0.1
ovn.kubernetes.io/logical_switch: ovn-subnet
ovn.kubernetes.io/exclude_ips: 10.17.0.0..10.17.0.10
ovn.kubernetes.io/private: "true"
ovn.kubernetes.io/allow: 10.17.0.0/16,10.18.0.0/16
name: ovn-subnet
```
# Subnets
From v0.6.0, Kube-OVN uses a Subnet CRD to manage subnets. If you still use a version prior to v0.6.0, please upgrade to this version to use the new Subnet resource.
## Example
```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet-gateway
spec:
protocol: IPv4
default: false
namespaces:
- ns1
- ns2
cidrBlock: 100.64.0.0/16
gateway: 100.64.0.1
excludeIps:
- 100.64.0.1
private: true
allowSubnets:
- 10.16.0.0/16
- 10.18.0.0/16
gatewayType: centralized
gatewayNode: node1
natOutgoing: true
```
## Basic Configuration
- `protocol`: The IP protocol, `IPv4` or `IPv6`. *Note*: Though Kube-OVN supports subnets of both protocols coexisting in a cluster, the Kubernetes control plane currently supports only one protocol, so you will lose some abilities, such as probes and service discovery, if a subnet uses a protocol different from the Kubernetes control plane's.
- `default`: If set to `true`, all Namespaces not bound to any Subnet will use this Subnet to allocate Pod IPs and share its other network configuration. *Note*: Kube-OVN creates a default Subnet and sets this field to `true`; there can be only one default Subnet in a cluster.
- `namespaces`: List of Namespaces bound to this Subnet. To bind a Namespace to this Subnet, add its name to this field.
- `cidrBlock`: The CIDR of this Subnet.
- `gateway`: The Gateway address of this Subnet.
- `excludeIps`: List of IP addresses that should not be allocated to Pods.
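Binding an existing Namespace is done by editing the Subnet object itself. For example, adding a hypothetical Namespace `ns3` to the example Subnet above would change its spec to:

```yaml
spec:
  namespaces:
  - ns1
  - ns2
  - ns3
```

Kube-OVN will then allocate IPs from this Subnet for new Pods in `ns3`.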
## Isolation
Besides the standard NetworkPolicy, Kube-OVN also supports network isolation and access control at the Subnet level to simplify the use of access control.

*Note*: NetworkPolicy takes a higher priority than Subnet isolation rules.
- `private`: Boolean; controls whether to deny traffic from IP addresses outside of this Subnet. Default: `false`.
- `allowSubnets`: List of CIDRs that can access this Subnet when `private=true`.
## Gateway
A Gateway is used to enable external network connectivity for Pods within the OVN Virtual Network.

Kube-OVN supports two kinds of Gateways: the distributed Gateway and the centralized Gateway. Users can also expose Pod IPs directly to the external network.

For a distributed Gateway, outgoing traffic from Pods within the OVN network to external destinations goes through the Node where the Pod is hosted.

For a centralized Gateway, outgoing traffic from Pods within the OVN network to external destinations goes through the Gateway Node of the Namespace.

- `gatewayType`: `distributed` or `centralized`; default is `distributed`.
- `gatewayNode`: when `gatewayType` is `centralized`, use this field to specify which node acts as the Namespace gateway.
- `natOutgoing`: `true` or `false`; whether the Pod IP needs to be masqueraded when going through the Gateway. When `false`, the Pod IP is exposed to the external network directly. Default: `false`.
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: kube-ovn-controller
namespace: kube-ovn
annotations:
kubernetes.io/description: |
kube-ovn controller
spec:
replicas: 1
selector:
matchLabels:
app: kube-ovn-controller
strategy:
rollingUpdate:
maxSurge: 0%
maxUnavailable: 100%
type: RollingUpdate
template:
metadata:
labels:
app: kube-ovn-controller
component: network
type: infra
spec:
tolerations:
- operator: Exists
effect: NoSchedule
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: kube-ovn-controller
topologyKey: kubernetes.io/hostname
serviceAccountName: ovn
hostNetwork: true
containers:
- name: kube-ovn-controller
image: "index.alauda.cn/alaudak8s/kube-ovn-controller:v0.6.0"
imagePullPolicy: Always
command:
- /kube-ovn/start-controller.sh
args:
- --default-cidr=2001:db8:0000:0000::/64
- --default-gateway=2001:db8:0000:0000::1
- --node-switch-cidr=2001:db8:0000:0001::/64
- --node-switch-gateway=2001:db8:0000:0001::1
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: KUBE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
readinessProbe:
exec:
command:
- nc
- -z
- -w3
- 127.0.0.1
- "10660"
periodSeconds: 3
livenessProbe:
exec:
command:
- nc
- -z
- -w3
- 127.0.0.1
- "10660"
initialDelaySeconds: 30
periodSeconds: 7
failureThreshold: 5
nodeSelector:
beta.kubernetes.io/os: "linux"
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: kube-ovn-cni
namespace: kube-ovn
annotations:
kubernetes.io/description: |
This daemon set launches the kube-ovn cni daemon.
spec:
selector:
matchLabels:
app: kube-ovn-cni
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: kube-ovn-cni
component: network
type: infra
spec:
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: ovn
hostNetwork: true
hostPID: true
initContainers:
- name: install-cni
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.6.0"
imagePullPolicy: Always
command: ["/kube-ovn/install-cni.sh"]
volumeMounts:
- mountPath: /etc/cni/net.d
name: cni-conf
- mountPath: /opt/cni/bin
name: cni-bin
containers:
- name: cni-server
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.6.0"
command: ["sh", "/kube-ovn/start-cniserver.sh"]
args:
- --enable-mirror=false
- --mtu=1420
imagePullPolicy: Always
securityContext:
runAsUser: 0
privileged: true
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- mountPath: /run/openvswitch
name: host-run-ovs
readinessProbe:
exec:
command:
- nc
- -z
- -w3
- 127.0.0.1
- "10665"
periodSeconds: 3
livenessProbe:
exec:
command:
- nc
- -z
- -w3
- 127.0.0.1
- "10665"
initialDelaySeconds: 30
periodSeconds: 7
failureThreshold: 5
nodeSelector:
beta.kubernetes.io/os: "linux"
volumes:
- name: host-run-ovs
hostPath:
path: /run/openvswitch
- name: cni-conf
hostPath:
path: /etc/cni/net.d
- name: cni-bin
hostPath:
path: /opt/cni/bin
@@ -38,7 +38,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-controller
image: "index.alauda.cn/alaudak8s/kube-ovn-controller:v0.6.0"
imagePullPolicy: Always
command:
- /kube-ovn/start-controller.sh
@@ -112,7 +112,7 @@ spec:
hostPID: true
initContainers:
- name: install-cni
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.6.0"
imagePullPolicy: Always
command: ["/kube-ovn/install-cni.sh"]
volumeMounts:
@@ -122,7 +122,7 @@ spec:
name: cni-bin
containers:
- name: cni-server
image: "index.alauda.cn/alaudak8s/kube-ovn-cni:v0.6.0"
imagePullPolicy: Always
command:
- sh
@@ -154,7 +154,7 @@ spec:
hostNetwork: true
containers:
- name: ovn-central
image: "index.alauda.cn/alaudak8s/kube-ovn-db:v0.6.0"
imagePullPolicy: Always
env:
- name: POD_IP
@@ -241,7 +241,7 @@ spec:
hostPID: true
containers:
- name: openvswitch
image: "index.alauda.cn/alaudak8s/kube-ovn-node:v0.6.0"
imagePullPolicy: Always
securityContext:
runAsUser: 0
@@ -38,7 +38,7 @@ spec:
hostNetwork: true
containers:
- name: kube-ovn-webhook
image: "index.alauda.cn/alaudak8s/kube-ovn-webhook:v0.6.0"
imagePullPolicy: Always
command:
- /kube-ovn/start-webhook.sh