Multisite is a feature of Ceph that allows object stores to replicate their data over multiple Ceph clusters.
Multisite also allows object stores to be independent and isolated from other object stores in a cluster.
### Ceph Multisite data model
...
...
When a ceph-object-store is created without the `zone` section, a realm, zone group, and zone are created for it.
Since it is the only ceph-object-store in the realm, the data in the ceph-object-store remain independent and isolated from others on the same cluster.
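As a minimal sketch of this case (the resource shape and all names are illustrative, based on the usual Rook CRD conventions, not a final spec), a ceph-object-store created without a `zone` section might look like:

```yaml
# A ceph-object-store with no `zone` section: a realm, zone group,
# and zone are created implicitly for this store alone, so its data
# stays isolated from other object stores on the cluster.
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store          # illustrative name
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
```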
When a ceph-object-store is created with the `zone` section, Ceph Multisite will be configured.
The ceph-object-store will join a zone, zone group, and realm with a name different than its own.
This allows the ceph-object-store to replicate its data over multiple Ceph clusters.
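By contrast, a hedged sketch of a ceph-object-store that joins an existing zone (again, field names other than the `zone` section named in this design are assumptions):

```yaml
# A ceph-object-store joining a pre-existing zone via the `zone`
# section, so its data can replicate to other clusters in the realm.
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: replicated-store  # illustrative name
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
  zone:
    name: zone-a          # illustrative; the existing zone this store joins
```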
### Overview of Ceph Multisite Steps
To enable Ceph's multisite, the following steps need to happen.
...
...
### Future Design Roadmap
At the moment the multisite resources only handle Day 1 initial configuration.
Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster.
...
...
To be clear, when a ceph-object-{realm, zone group, zone} resource is deleted, the corresponding realm, zone group, or zone is not deleted from the Ceph cluster.
Future iterations of this design will address these Day 2 operations and others, such as:
- Initializing and modifying Storage Classes
- Deletion of the CR reflecting deletion of the realm, zone group, & zone
- The status of the ceph-object-{realm, zone group, zone} reflecting the status of the realm, zone group, and zone
The NFS CRD spec will specify the following:
- client (The host or network to which the export is being shared)
- client options (The options to be used for the client) e.g., read and write permission, root squash etc.
The parameters to configure the NFS CRD are demonstrated in the example below, which is followed by a table that explains the parameters:
A simple example of sharing a volume (could be hostPath, cephFS, cephRBD, googlePD, EBS, etc.) using NFS, without client specification and per-export configuration, whose NFS-Ganesha export entry looks like:
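A hedged sketch of such an export entry (all paths, ids, and the FSAL backend are illustrative; with no per-client `CLIENT` blocks, the export-level options apply to every client):

```
EXPORT {
    Export_Id = 1;          # unique id for this export, illustrative
    Path = "/export";       # server-side path of the shared volume, illustrative
    Pseudo = "/export";     # NFSv4 pseudo-filesystem path, illustrative
    Protocols = 4;
    Transports = TCP;
    Access_Type = RW;       # read-write access for all clients
    Squash = None;          # no root squashing

    FSAL {
        Name = VFS;         # illustrative backend for a generic volume
    }
}
```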