Commit 9b2436e9 authored by xiaohua.pan

init 3.11

parent 4c1cd2db
Showing with 276 additions and 25 deletions
---
CHANGEREPO: true
-HOSTNAME: os39.test.it.example.com
+HOSTNAME: os311.test.it.example.com
Change_Base_Registry: false
Harbor_Url: harbor.apps.it.example.com
......
#!/bin/bash
# Require SELinux to be enforcing before installing
selinux=$(getenforce)
if [ "$selinux" != Enforcing ]
then
echo "Please set SELinux to Enforcing"
exit 10
fi
export CHANGEREPO=true
# Swap in the local repos once; the back/ directory marks a previous run
if [ "$CHANGEREPO" == true -a ! -d /etc/yum.repos.d/back ]
then
......@@ -6,11 +14,13 @@ then
cp files/all.repo /etc/yum.repos.d/
yum clean all
fi
current_path=$(pwd)
-yum localinstall tools/ansible-2.4.6.0-1.el7.ans.noarch.rpm -y
+yum localinstall tools/ansible-2.6.5-1.el7.ans.noarch.rpm -y
ansible-playbook playbook.yml
cd "$current_path/openshift-ansible-playbook"
ansible-playbook playbooks/prerequisites.yml
ansible-playbook playbooks/deploy_cluster.yml
# Create an initial admin user and grant it cluster-admin
htpasswd -b /etc/origin/master/htpasswd admin admin
oc adm policy add-cluster-role-to-user cluster-admin admin
......@@ -15,7 +15,7 @@ gpgcheck=0
[openshift]
name=Openshift
-baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/paas/$basearch/openshift-origin39/
+baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/paas/$basearch/openshift-origin311/
gpgcheck=0
[epel]
......
File mode changed from 100755 to 100644
File mode changed from 100755 to 100644
......@@ -13,5 +13,4 @@ The table below outlines the defaults per `openshift_deployment_type`:
| **openshift_service_type** (also used for package names) | origin | atomic-openshift |
| **openshift.common.config_base** | /etc/origin | /etc/origin |
| **openshift_data_dir** | /var/lib/origin | /var/lib/origin |
| **openshift.master.registry_url oreg_url_node** | openshift/origin-${component}:${version} | openshift3/ose-${component}:${version} |
| **Image Streams** | centos | rhel |
......@@ -30,6 +30,10 @@ a set of tasks. Best practice suggests using absolute paths to the hook file to
openshift_master_upgrade_pre_hook=/usr/share/custom/pre_master.yml
openshift_master_upgrade_hook=/usr/share/custom/master.yml
openshift_master_upgrade_post_hook=/usr/share/custom/post_master.yml
openshift_node_upgrade_pre_hook=/usr/share/custom/pre_node.yml
openshift_node_upgrade_hook=/usr/share/custom/node.yml
openshift_node_upgrade_post_hook=/usr/share/custom/post_node.yml
# <snip>
```
......@@ -68,3 +72,19 @@ The file may **not** be a playbook.
- Runs **after** each master is upgraded and has had its service/system restart.
- This hook runs against **each master** in serial.
- If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
### openshift_node_upgrade_pre_hook
- Runs **before** each node is upgraded.
- This hook runs against **each node** in serial.
- If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
### openshift_node_upgrade_hook
- Runs **after** each node is upgraded but **before** it's marked schedulable again.
- This hook runs against **each node** in serial.
- If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
### openshift_node_upgrade_post_hook
- Runs **after** each node is upgraded; it's the last node upgrade action.
- This hook runs against **each node** in serial.
- If a task needs to run against a different host, said task will need to use [``delegate_to`` or ``local_action``](http://docs.ansible.com/ansible/playbooks_delegation.html#delegation).
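As an illustration, a node pre-upgrade hook is a plain task file, not a playbook; the tasks below are hypothetical:

```yaml
---
# /usr/share/custom/pre_node.yml (hypothetical example of a hook file)
- name: Announce the node about to be upgraded
  debug:
    msg: "About to upgrade {{ inventory_hostname }}"

# A task that must run elsewhere uses delegate_to, as described above
- name: Run a local pre-upgrade check against this node (illustrative script)
  command: /usr/local/bin/pre-upgrade-check.sh {{ inventory_hostname }}
  delegate_to: localhost
```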
File mode changed from 100755 to 100644
# approval == this is a good idea /approve
approvers:
- michaelgugino
- mtnbikenc
- sdodson
- vrutkovs
# review == this code is good /lgtm
reviewers:
- michaelgugino
- mtnbikenc
- sdodson
- vrutkovs
......@@ -61,7 +61,7 @@ Install base dependencies:
Requirements:
-- Ansible >= 2.4.3.0
+- Ansible >= 2.6.5 (Ansible 2.7 is not yet supported and is known to fail)
- Jinja >= 2.7
- pyOpenSSL
- python-lxml
......@@ -94,11 +94,67 @@ cd openshift-ansible
sudo ansible-playbook -i inventory/hosts.localhost playbooks/prerequisites.yml
sudo ansible-playbook -i inventory/hosts.localhost playbooks/deploy_cluster.yml
```
## Node Group Definition and Mapping
In 3.10 and newer, all members of the [nodes] inventory group must be assigned an
`openshift_node_group_name`. This value is used to select the configmap that
configures each node. By default three configmaps are created, one for each node
group defined in `openshift_node_groups`; they are named `node-config-master`,
`node-config-infra`, and `node-config-compute`. It's important to note that the
configmap is also the authoritative definition of node labels; the old
`openshift_node_labels` value is effectively ignored.

There are also two configmaps that label nodes into multiple roles:
`node-config-master-infra` and `node-config-all-in-one`. These are not
recommended for production clusters, but you may use them to deploy
non-production clusters.
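For example, in an INI-based inventory each member of [nodes] might be mapped to a group like this (hostnames are illustrative):

```
[nodes]
master1.example.com openshift_node_group_name='node-config-master'
infra1.example.com openshift_node_group_name='node-config-infra'
node1.example.com openshift_node_group_name='node-config-compute'
```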
The default set of node groups is defined in
[roles/openshift_facts/defaults/main.yml] like so
```
openshift_node_groups:
- name: node-config-master
labels:
- 'node-role.kubernetes.io/master=true'
edits: []
- name: node-config-infra
labels:
- 'node-role.kubernetes.io/infra=true'
edits: []
- name: node-config-compute
labels:
- 'node-role.kubernetes.io/compute=true'
edits: []
- name: node-config-master-infra
labels:
- 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true'
edits: []
- name: node-config-all-in-one
labels:
- 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true,node-role.kubernetes.io/compute=true'
edits: []
```
When configuring this in the INI-based inventory, this must be translated into a
Python dictionary. Here's an example of a group named `node-config-all-in-one`,
suitable for an all-in-one installation with `kubeletArguments.pods-per-core`
set to 20:
```
openshift_node_groups=[{'name': 'node-config-all-in-one', 'labels': ['node-role.kubernetes.io/master=true', 'node-role.kubernetes.io/infra=true', 'node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'kubeletArguments.pods-per-core','value': ['20']}]}]
```
For upgrades, the upgrade process will block until the required configmaps are
present in the openshift-node namespace. Define `openshift_node_groups` as
explained above, or accept the defaults and run the
`playbooks/openshift-master/openshift_node_group.yml` playbook to have them
created for you automatically.
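For example (the inventory path is illustrative):

```
ansible-playbook -i /path/to/inventory playbooks/openshift-master/openshift_node_group.yml
```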
## Complete Production Installation Documentation:
-- [OpenShift Enterprise](https://docs.openshift.com/enterprise/latest/install_config/install/advanced_install.html)
-- [OpenShift Origin](https://docs.openshift.org/latest/install_config/install/advanced_install.html)
+- [OpenShift Container Platform](https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html)
+- [OpenShift Origin](https://docs.okd.io/latest/install/index.html)
## Containerized OpenShift Ansible
......
......@@ -44,17 +44,17 @@ beginning of the installation process ensuring that these settings are applied
before attempting to pull any of the following images.
Origin
-openshift/origin
-openshift/node (node + openshift-sdn + openvswitch rpm for client tools)
-openshift/openvswitch (centos7 + openvswitch rpm, runs ovsdb ovsctl processes)
-registry.access.redhat.com/rhel7/etcd
-OpenShift Enterprise
-openshift3/ose
-openshift3/node
-openshift3/openvswitch
-registry.access.redhat.com/rhel7/etcd
-* note openshift3/* images come from registry.access.redhat.com and
+docker.io/openshift/origin
+docker.io/openshift/node (node + openshift-sdn + openvswitch rpm for client tools)
+docker.io/openshift/openvswitch (centos7 + openvswitch rpm, runs ovsdb ovsctl processes)
+registry.redhat.io/rhel7/etcd
+OpenShift Container Platform
+registry.redhat.io/openshift3/ose
+registry.redhat.io/openshift3/node
+registry.redhat.io/openshift3/openvswitch
+registry.redhat.io/rhel7/etcd
+* note openshift3/* images come from registry.redhat.io and
rely on the --additional-repository flag being set appropriately.
### Starting and Stopping Containers
......
......@@ -2,7 +2,7 @@
The [Dockerfile](images/installer/Dockerfile) in this repository can be used to build a containerized `openshift-ansible`. The resulting image can run any of the provided playbooks. See [BUILD.md](BUILD.md) for image build instructions.
-The image is designed to **run as a non-root user**. The container's UID is mapped to the username `default` at runtime. Therefore, the container's environment reflects that user's settings, and the configuration should match that. For example `$HOME` is `/opt/app-root/src`, so ssh keys are expected to be under `/opt/app-root/src/.ssh`. If you ran a container as `root` you would have to adjust the container's configuration accordingly, e.g. by placing ssh keys under `/root/.ssh` instead. Nevertheless, the expectation is that containers will be run as non-root; for example, this container image can be run inside OpenShift under the default `restricted` [security context constraint](https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints).
+The image is designed to **run as a non-root user**. The container's UID is mapped to the username `default` at runtime. Therefore, the container's environment reflects that user's settings, and the configuration should match that. For example `$HOME` is `/opt/app-root/src`, so ssh keys are expected to be under `/opt/app-root/src/.ssh`. If you ran a container as `root` you would have to adjust the container's configuration accordingly, e.g. by placing ssh keys under `/root/.ssh` instead. Nevertheless, the expectation is that containers will be run as non-root; for example, this container image can be run inside OpenShift under the default `restricted` [security context constraint](https://docs.okd.io/latest/architecture/additional_concepts/authorization.html#security-context-constraints).
**Note**: at this time there are known issues that prevent running this image for installation/upgrade purposes (i.e. running one of the config/upgrade playbooks) from within one of the hosts that is also an installation target: if the playbook you want to run attempts to manage the docker daemon and restart it (as install/upgrade playbooks do), it will kill the container itself during its operation.
......@@ -30,7 +30,7 @@ Here is an example of how to run a containerized `openshift-ansible` playbook th
-e INVENTORY_FILE=/tmp/inventory \
-e PLAYBOOK_FILE=playbooks/openshift-checks/certificate_expiry/default.yaml \
-e OPTS="-v" -t \
-openshift/origin-ansible
+docker.io/openshift/origin-ansible
You might want to adjust some of the options in the example to match your environment and/or preferences. For example: you might want to create a separate directory on the host where you'll copy the ssh key and inventory files prior to invocation to avoid unwanted SELinux re-labeling of the original files or paths (see below).
......@@ -61,7 +61,7 @@ If the inventory file needs additional files then it can use the path `/var/lib/
Run the ansible system container:
```sh
-atomic install --system --set INVENTORY_FILE=$(pwd)/inventory.origin openshift/origin-ansible
+atomic install --system --set INVENTORY_FILE=$(pwd)/inventory.origin docker.io/openshift/origin-ansible
systemctl start origin-ansible
```
......
File mode changed from 100755 to 100644
File mode changed from 100755 to 100644
......@@ -7,7 +7,7 @@
[defaults]
# Set the log_path
-#log_path = /tmp/ansible.log
+log_path = ~/openshift-ansible.log
# Additional default options for OpenShift Ansible
forks = 20
......
File mode changed from 100755 to 100644
......@@ -406,7 +406,7 @@ For consistency, role names SHOULD follow the above naming pattern. It is import
Many times the `technology` portion of the pattern will line up with a package name. It is advised that whenever possible, the package name should be used.
.Examples:
-* The role to configure a master is called `openshift_master`
+* The role to configure a master is called `openshift_control_plane`
* The role to configure OpenShift specific yum repositories is called `openshift_repos`
=== Filters
......@@ -490,6 +490,8 @@ The Ansible `package` module calls the associated package manager for the underl
---
# tasks.yml
- name: Install etcd (for etcdctl)
-  package: name=etcd state=latest
+  package:
+    name: etcd
+    state: latest
  register: install_result
----
File mode changed from 100755 to 100644
# OpenShift-Ansible Components
>**TL;DR: Look at playbooks/openshift-web-console as an example**
## General Guidelines
Components in OpenShift-Ansible consist of two main parts:
* Entry point playbook(s)
* Ansible role
* OWNERS files in both the playbooks and roles associated with the component
When writing playbooks and roles, follow these basic guidelines to ensure
success and maintainability.
### Idempotency
Definition:
>_an idempotent operation is one that has no additional effect if it is called
more than once with the same input parameters_
Ansible playbooks and roles should be written such that when the playbook is run
again with the same configuration, no tasks report `changed` and no material
changes are made to hosts in the inventory. Playbooks should not merely be
re-runnable; they should be idempotent.
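For instance, a declarative module converges to the desired state and reports `changed` only when it actually alters the host, while an unconditional shell command does not (the file path and setting here are illustrative):

```yaml
# Idempotent: adds the line only if it is missing.
- name: Ensure forks is set in ansible.cfg
  lineinfile:
    path: /etc/ansible/ansible.cfg
    line: forks = 20

# Not idempotent: appends (and reports changed) on every run.
- name: Append forks setting
  shell: echo 'forks = 20' >> /etc/ansible/ansible.cfg
```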
### Other advice for success
* Try not to leave artifacts like files or directories
* Avoid using `failed_when:` wherever possible
* Always `name:` your tasks
* Document complex logic or code in tasks
* Set role defaults in `defaults/main.yml`
* Avoid the use of `set_fact:`
## Building Component Playbooks
Component playbooks are divided between the root of the component directory and
the `private` directory. This allows other parts of openshift-ansible to import
component playbooks without also running the common initialization playbooks
unnecessarily.
Entry point playbooks are located in the `playbooks` directory and follow this
structure:
```
playbooks/openshift-component_name
├── config.yml Entry point playbook
├── private
│   ├── config.yml Included by the Cluster Installer
│   └── roles -> ../../roles Don't forget to create this symlink
├── OWNERS Assign 2-3 approvers and reviewers
└── README.md Tell us what this component does
```
### Entry point config playbook
The primary component entry point playbook will at a minimum run the common
initialization playbooks and then import the private playbook.
```yaml
# playbooks/openshift-component_name/config.yml
---
- import_playbook: ../init/main.yml
- import_playbook: private/config.yml
```
### Private config playbook
The private component playbook will run the component role against the intended
host groups and provide any required variables. This playbook is also called
during cluster installs and upgrades. Think of this as the shareable portion of
the component playbooks.
```yaml
# playbooks/openshift-component_name/private/config.yml
---
- name: OpenShift Component_Name Installation
hosts: oo_first_master
tasks:
- import_role:
name: openshift_component_name
```
NOTE: The private playbook may also include wrapper plays for the Installer
Checkpoint plugin which will be discussed later.
## Building Component Roles
Component roles contain all of the necessary files and logic to install and
configure the component. The install portion of the role should also support
performing upgrades on the component.
Ansible roles are located in the `roles` directory and follow this structure:
```
roles/openshift_component_name
├── defaults
│   └── main.yml Defaults for variables used in the role
│ which can be overridden by the user
├── files
│   ├── component-config.yml
│   ├── component-rbac-template.yml
│   └── component-template.yml
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── OWNERS Assign 2-3 approvers and reviewers
├── README.md
├── tasks
│   └── main.yml Default playbook used when calling the role
├── templates
└── vars
└── main.yml Internal roles variables
```
### Component Installation
Where possible, Ansible modules should be used to perform idempotent operations
with the OpenShift API. Avoid using the `command` or `shell` modules with the
`oc` cli unless the required operation is not available through either the
`lib_openshift` modules or Ansible core modules.
The following is a basic flow of Ansible tasks for installation.
- Create the project (oc_project)
- Create a temp directory for processing files
- Copy the client config to temp
- Copy templates to temp
- Read existing config map
- Copy existing config map to temp
- Generate/update config map
- Reconcile component RBAC (oc_process)
- Apply component template (oc_process)
- Poll healthz and wait for it to come up
- Log status of deployment
- Clean up temp
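A condensed, hypothetical sketch of this flow using the `lib_openshift` modules named above; module parameters are illustrative and should be checked against each module's documentation:

```yaml
- name: Create the project
  oc_project:
    state: present
    name: openshift-component-name

- name: Create a temp directory for processing files
  tempfile:
    state: directory
  register: mktemp

- name: Copy the component template to temp
  copy:
    src: component-template.yml
    dest: "{{ mktemp.path }}/component-template.yml"

- name: Apply the component template
  oc_process:
    namespace: openshift-component-name
    template_name: component-template
    create: True

- name: Clean up temp
  file:
    path: "{{ mktemp.path }}"
    state: absent
```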
### Component Removal
- Remove the project (oc_project)
## Enabling the Installer Checkpoint callback
- Add the wrapper plays to the entry point playbook
- Update the installer_checkpoint callback plugin
Details can be found in the installer_checkpoint role.
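For reference, the wrapper plays in existing entry point playbooks follow roughly this shape: a play before the private playbook records the phase as 'In Progress' via `set_stats`, and one after marks it 'Complete'. The phase name and title below are placeholders:

```yaml
- name: Component_Name Install Checkpoint Start
  hosts: all
  gather_facts: false
  tasks:
  - name: Set Component_Name install 'In Progress'
    run_once: true
    set_stats:
      data:
        installer_phase_component_name:
          title: "Component_Name Install"
          playbook: "playbooks/openshift-component_name/config.yml"
          status: "In Progress"
          start: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"

- import_playbook: private/config.yml

- name: Component_Name Install Checkpoint End
  hosts: all
  gather_facts: false
  tasks:
  - name: Set Component_Name install 'Complete'
    run_once: true
    set_stats:
      data:
        installer_phase_component_name:
          status: "Complete"
          end: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
```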
File mode changed from 100755 to 100644