Unverified Commit 8bf75bd4 authored by Nolan Brubaker, committed by GitHub

Add change log and docs site for v1.4.0-beta.1 (#2523)


* Update site for v1.4.0-beta.1
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add v1.4.0-beta.1 changelogs
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update upgrade link to v1.4
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Correct docs links in changelog
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
parent e400be9c
Showing with 1671 additions and 0 deletions
@@ -2,6 +2,7 @@
* [CHANGELOG-1.3.md][13]
## Development release:
* [CHANGELOG-1.4.md][14]
* [Unreleased Changes][0]
## Older releases:
@@ -19,6 +20,7 @@
* [CHANGELOG-0.3.md][1]
[14]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.4.md
[13]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.3.md
[12]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.2.md
[11]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.1.md
## v1.4.0-beta.1
### 2020-05-08
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.4.0-beta.1
### Container Image
`velero/velero:v1.4.0-beta.1`
### Documentation
https://velero.io/docs/v1.4-pre/
### Upgrading
https://velero.io/docs/v1.4-pre/upgrade-to-1.4/
### Highlights
* Added beta-level CSI support
* Added custom CA certificate support
* Backup progress reporting
### All Changes
* Add details of CSI volumesnapshotcontents associated with a backup to `velero backup describe` when the `EnableCSI` feature flag is given on the velero client. (#2448, @nrb)
* report backup progress (number of items backed up so far out of an estimated total number of items) during backup in the logs and as status fields on the Backup custom resource (#2440, @skriss)
* allow feature flags to be passed from install CLI (#2503, @ashish)
* sync backups' CSI API objects into the cluster as part of the backup sync controller (#2496, @ashish)
* bug fix: in error location logging hook, if the item logged under the `error` key doesn't implement the `error` interface, don't return an error since this is a valid scenario (#2487, @skriss)
* bug fix: in CRD restore plugin, don't use runtime.DefaultUnstructuredConverter.FromUnstructured(...) to avoid conversion issues when float64 fields contain int values (#2484, @skriss)
* during backup deletion, also delete CSI volumesnapshotcontents that were created as part of the backup but whose associated volumesnapshot object no longer exists (#2480, @ashish)
* If plugins don't support the `--features` flag, don't pass it to them. Also, update the standard plugin server to ignore unknown flags. (#2479, @skriss)
* At backup time, if a CustomResourceDefinition appears to have been created via the v1beta1 endpoint, retrieve it from the v1beta1 endpoint instead of simply changing the APIVersion. (#2478, @nrb)
* update container base images from ubuntu:bionic to ubuntu:focal (#2471, @skriss)
* bug fix: when a resource includes/excludes list contains unresolvable items, don't remove them from the list, so that the list doesn't inadvertently end up matching *all* resources. (#2462, @skriss)
* Azure: add support for getting storage account key for restic directly from an environment variable (#2455, @jaygridley)
* Support to skip VSL validation for the backup having SnapshotVolumes set to false or created with `--snapshot-volumes=false` (#2450, @mynktl)
* during backup deletion also delete CSI volumesnapshots that were created as a part of the backup (#2411, @ashish)
* clarify the wording for restore describe for included namespaces: instead of showing it as "*", explicitly mention that all the namespaces from the backup object are included. (#1918, @raghavendrabhat)
* When the EnableCSI feature flag is provided, upload CSI VolumeSnapshots and VolumeSnapshotContents to object storage as gzipped JSON. (#2323, @nrb)
* bug fix: populate namespace in logs for backup errors (#2438, @skriss)
* bug fix: save PodVolumeBackup manifests to object storage even if the volume was empty, so that on restore, the PV is dynamically reprovisioned if applicable (#2390, @skriss)
* adding annotations on backup CRD for k8s major, minor and git versions (#2346, @brito)
* bump Kubernetes module dependencies to v0.17.4 to get fix for https://github.com/kubernetes/kubernetes/issues/86149 (#2407, @skriss)
* Adding new restoreItemAction for PVC to update the selected-node annotation (#2377, @mynktl)
* Added a --cacert flag to the install command to provide the CA bundle to use when verifying TLS connections to object storage (#2368, @mansam)
* Added a `--cacert` flag to the velero client describe, download, and logs commands to allow passing a path to a certificate to use when verifying TLS connections to object storage. Also added a corresponding client config option called `cacert` which takes a path to a certificate bundle to use as a default when `--cacert` is not specified. (#2364, @mansam)
* support setting a custom CA certificate on a BSL to use when verifying TLS connections (#2353, @mansam)
* add CSI snapshot API types into default restore priorities (#2318, @ashish)
* refactoring: wait for all informer caches to sync before running controllers (#2299, @skriss)
* refactor restore code to lazily resolve resources via discovery and eliminate second restore loop for instances of restored CRDs (#2248, @skriss)
* upgrade to go 1.14 and migrate from `dep` to go modules (#2214, @skriss)
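For example, a minimal sketch of enabling the beta CSI support listed above (assuming your cluster already has a CSI driver and the volume snapshot CRDs installed):

```bash
# Enable the feature on the client so that commands such as
# "velero backup describe" show CSI volumesnapshotcontent details.
velero client config set features=EnableCSI

# The same flag can be passed to the server at install time, alongside your
# usual provider/bucket flags:
# velero install --features=EnableCSI ...
```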
@@ -55,6 +55,12 @@ defaults:
version: master
gh: https://github.com/vmware-tanzu/velero/tree/master
layout: "docs"
- scope:
path: docs/v1.4-pre
values:
version: v1.4-pre
gh: https://github.com/vmware-tanzu/velero/tree/v1.4.0-beta.1
layout: "docs"
- scope:
path: docs/v1.3.2
values:
@@ -173,6 +179,7 @@ versioning: true
latest: v1.3.2
versions:
- master
- v1.4-pre
- v1.3.2
- v1.3.1
- v1.3.0
@@ -3,6 +3,7 @@
# that the navigation for older versions still work.
master: master-toc
v1.4-pre: v1-4-pre-toc
v1.3.2: v1-3-2-toc
v1.3.1: v1-3-1-toc
v1.3.0: v1-3-0-toc
toc:
- title: Introduction
subfolderitems:
- page: About Velero
url: /index.html
- page: How Velero works
url: /how-velero-works
- page: About locations
url: /locations
- title: Install
subfolderitems:
- page: Basic Install
url: /basic-install
- page: Customize Installation
url: /customize-installation
- page: Upgrade to 1.4
url: /upgrade-to-1.4
- page: Supported providers
url: /supported-providers
- page: Evaluation install
url: /contributions/minio
- page: Restic integration
url: /restic
- page: Examples
url: /examples
- page: Uninstalling
url: /uninstalling
- title: Use
subfolderitems:
- page: Disaster recovery
url: /disaster-case
- page: Cluster migration
url: /migration-case
- page: Backup reference
url: /backup-reference
- page: Restore reference
url: /restore-reference
- page: Run in any namespace
url: /namespace
- page: Extend with hooks
url: /hooks
- page: CSI Support (beta)
url: /csi
- page: Verifying Self-signed Certificates
url: /self-signed-certificates
- title: Plugins
subfolderitems:
- page: Overview
url: /overview-plugins
- page: Custom plugins
url: /custom-plugins
- title: Troubleshoot
subfolderitems:
- page: Troubleshooting
url: /troubleshooting
- page: Troubleshoot an install or setup
url: /debugging-install
- page: Troubleshoot a restore
url: /debugging-restores
- page: Troubleshoot Restic
url: /restic#troubleshooting
- title: Contribute
subfolderitems:
- page: Start Contributing
url: /start-contributing
- page: Development
url: /development
- page: Build from source
url: /build-from-source
- page: Run locally
url: /run-locally
- page: Code standards
url: /code-standards
- page: Website guidelines
url: /website-guidelines
- title: More information
subfolderitems:
- page: Backup file format
url: /output-file-format
- page: API types
url: /api-types
- page: FAQ
url: /faq
- page: ZenHub
url: /zenhub
- page: Support Process
url: /support-process
![100]
[![Build Status][1]][2]
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.
Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
## Documentation
This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
## Troubleshooting
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
## Contributing
If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our [Start contributing](https://velero.io/docs/v1.4.0-beta.1/start-contributing/) documentation for guidance on how to set up Velero for development.
## Changelog
See [the list of releases][6] to find out about feature changes.
[1]: https://travis-ci.org/vmware-tanzu/velero.svg?branch=master
[2]: https://travis-ci.org/vmware-tanzu/velero
[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero
[30]: troubleshooting.md
[100]: img/velero.png
# Table of Contents
## API types
Here we list the API types that have some functionality you can configure only via JSON/YAML, rather than the `velero` CLI (for example, hooks):
* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]
[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md
# Backup API Type
## Use
The `Backup` API type is used as a request for the Velero server to perform a backup. Once created, the
Velero Server immediately starts the backup process.
## API GroupVersion
Backup belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Backup` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
# Backup name. May be any valid Kubernetes object name. Required.
name: a
# Backup namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the backup. Required.
spec:
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for this backup.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before this backup is eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook currently supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Currently only "exec" hooks are supported.
post:
# Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
# The version of this Backup. The only version currently supported is 1.
version: 1
# The date and time when the Backup is eligible for garbage collection.
expiration: null
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Date/time when the backup started being processed.
startTimestamp: 2019-04-29T15:58:43Z
# Date/time when the backup finished being processed.
completionTimestamp: 2019-04-29T15:58:56Z
# Number of volume snapshots that Velero tried to create for this backup.
volumeSnapshotsAttempted: 2
# Number of volume snapshots that Velero successfully created for this backup.
volumeSnapshotsCompleted: 1
# Number of warnings that were logged by the backup.
warnings: 2
# Number of errors that were logged by the backup.
errors: 0
```
# Velero Backup Storage Locations
## Backup Storage Location
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`, however the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
A sample YAML `BackupStorageLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: velero
spec:
backupSyncPeriod: 2m0s
provider: aws
objectStorage:
bucket: myBucket
config:
region: us-west-2
profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation][0] for the appropriate value to use. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `objectStorage/caCert` | String | Optional Field | A base64 encoded CA bundle to be used when verifying TLS connections |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation][0] for details. |
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
[0]: ../supported-providers.md
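As a sketch of how a couple of these parameters could be adjusted on an existing location (assuming the `default` `BackupStorageLocation` created by `velero install`), the spec fields can be patched directly:

```bash
# Make the default location read-only and shorten its sync period.
kubectl -n velero patch backupstoragelocation default --type merge \
  -p '{"spec":{"accessMode":"ReadOnly","backupSyncPeriod":"1m0s"}}'
```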
# Restore API Type
## Use
The `Restore` API type is used as a request for the Velero server to perform a Restore. Once created, the
Velero Server immediately starts the Restore process.
## API GroupVersion
Restore belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Restore` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Restore
# Standard Kubernetes metadata. Required.
metadata:
# Restore name. May be any valid Kubernetes object name. Required.
name: a-very-special-backup-0000111122223333
# Restore namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the restore. Required.
spec:
# BackupName is the unique name of the Velero backup to restore from.
backupName: a-very-special-backup
# Array of namespaces to include in the restore. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the restore. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the restore. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the restore. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are
  # restored are those associated with namespace-scoped resources included in the restore. For example,
  # if a PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
  # cluster-scoped) would also be restored.
includeClusterResources: null
# Individual objects must match this label selector to be included in the restore. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# NamespaceMapping is a map of source namespace names to
# target namespace names to restore into. Any source namespaces not
# included in the map will be restored into namespaces of the same name.
namespaceMapping:
namespace-backup-from: namespace-to-restore-to
# RestorePVs specifies whether to restore all included PVs
# from snapshot (via the cloudprovider).
restorePVs: true
# ScheduleName is the unique name of the Velero schedule
# to restore from. If specified, and BackupName is empty, Velero will
# restore from the most recent successful backup created from this schedule.
scheduleName: my-scheduled-backup-name
# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
status:
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Number of warnings that were logged by the restore.
warnings: 2
# Errors is a count of all error messages that were generated
# during execution of the restore. The actual errors are stored in object
# storage.
errors: 0
# FailureReason is an error that caused the entire restore
# to fail.
failureReason:
```
# Schedule API Type
## Use
The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup for a given cron notation. Once created, the
Velero Server will start the backup process. It will then wait for the next valid point of the given cron expression and execute the backup
process on a repeating basis.
## API GroupVersion
Schedule belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Schedule` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
# Schedule name. May be any valid Kubernetes object name. Required.
name: a
# Schedule namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the scheduled backup. Required.
spec:
# Schedule is a Cron expression defining when to run the Backup
schedule: 0 7 * * *
# Template is the spec that should be used for each backup triggered by this schedule.
template:
# Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the scheduled backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the scheduled backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for backups under this schedule.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook currently supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Currently only "exec" hooks are supported.
post:
# Same content as pre above.
status:
# The current phase of the latest scheduled backup. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# Date/time of the last backup for a given schedule
lastBackup:
# An array of any validation errors encountered.
validationErrors:
```
# Velero Volume Snapshot Location
## Volume Snapshot Location
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple `VolumeSnapshotLocations` per provider, although you can select only one location per provider at backup time.
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
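For example, assuming two locations named `aws-us-west-2` and `gcp-us-central1` have been created for the `aws` and `gcp` providers, a backup could select one location per provider at creation time:

```bash
velero backup create full-cluster-backup \
    --volume-snapshot-locations aws-us-west-2,gcp-us-central1
```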
A sample YAML `VolumeSnapshotLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
name: aws-default
namespace: velero
spec:
provider: aws
config:
region: us-west-2
profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation][0] for the appropriate value to use. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation][0] for details. |
[0]: ../supported-providers.md
# Backup Reference
## Exclude Specific Items from Backup
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
```bash
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
```
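For example, to exclude a hypothetical ConfigMap named `scratch-data` in the `default` namespace from backups:

```bash
kubectl label -n default configmap/scratch-data velero.io/exclude-from-backup=true
```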
# Basic Install
- [Basic Install](#basic-install)
- [Prerequisites](#prerequisites)
- [Install the CLI](#install-the-cli)
- [Option 1: macOS - Homebrew](#option-1-macos---homebrew)
- [Option 2: GitHub release](#option-2-github-release)
- [Install and configure the server components](#install-and-configure-the-server-components)
- [Command line Autocompletion](#command-line-autocompletion)
Use this doc to get a basic installation of Velero.
Refer to [this document](customize-installation.md) to customize your installation.
## Prerequisites
- Access to a Kubernetes cluster, v1.10 or later, with DNS and container networking enabled.
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
There are supported storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the [on-premises documentation][2].
## Install the CLI
### Option 1: macOS - Homebrew
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Option 2: GitHub release
1. Download the [latest release][1]'s tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
```
1. Move the extracted `velero` binary to somewhere in your `$PATH` (e.g. `/usr/local/bin` for most users).
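For example, assuming the linux-amd64 tarball for this release was extracted in the current directory (adjust the platform and destination for your system):

```bash
sudo mv velero-v1.4.0-beta.1-linux-amd64/velero /usr/local/bin/velero
```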
## Install and configure the server components
There are two supported methods for installing the Velero server components:
- the `velero install` CLI command
- the [Helm chart](https://vmware-tanzu.github.io/helm-charts/)
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider. To find installation instructions for your chosen storage provider, follow the documentation link for your provider at our [supported storage providers][0] page.
_Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to [add your volume snapshot provider][4]._
## Command line Autocompletion
Please refer to [this part of the documentation][5].
[0]: supported-providers.md
[1]: https://github.com/vmware-tanzu/velero/releases/latest
[2]: on-premises.md
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations
# Build from source
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)
## Get the source
### Option 1) Get latest (recommended)
```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/vmware-tanzu/velero
```
Where `$HOME/go` is your [import path][4] for Go.
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.
### Option 2) Release archive
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`.
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
When you build with `make`, the binaries are placed under `_output/bin/$GOOS/$GOARCH`. For example, you will find the darwin binary at `_output/bin/darwin/amd64/velero` and the linux binary at `_output/bin/linux/amd64/velero`. `make` also embeds version and git commit information so that `velero version` displays proper output.
Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to override which image gets deployed, use the `--image` flag (see below for instructions on how to build images).
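For example, a sketch of pointing the installation at a custom image (the image name below is hypothetical; substitute your usual provider, bucket, and credential flags as described in the install documentation):

```bash
velero install --image myimagerepo/velero:my-dev-tag \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero
```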
### Build the binary
To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:
```bash
go build ./cmd/velero
```
```bash
make local
```
### Cross compiling
To build the velero binary targeting linux/amd64 within a build container on your local machine, run:
```bash
make build
```
For any specific platform, run `make build-<GOOS>-<GOARCH>`.
For example, to build for the Mac, run `make build-darwin-amd64`.
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
* linux-amd64
* linux-arm
* linux-arm64
* linux-ppc64le
* darwin-amd64
* windows-amd64
## Making images and updating Velero
If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:
```bash
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
```
To build a Velero container image, first set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero-amd64:master` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `velero`.
Optionally, set the `$VERSION` environment variable to change the image tag. Then, run:
```bash
make container
```
For any specific platform, run `ARCH=<GOOS>-<GOARCH> make container`
For example, to build an image for the Power (ppc64le), run:
```bash
ARCH=linux-ppc64le make container
```
_Note: By default, ARCH is set to linux-amd64_
To push your image to the registry, for example to push the `gcr.io/my-registry/velero-amd64:master` image, run:
```bash
make push
```
For any specific platform, run `ARCH=<GOOS>-<GOARCH> make push`
For example, to push image for the Power (ppc64le), run:
```bash
ARCH=linux-ppc64le make push
```
_Note: By default, ARCH is set to linux-amd64_
To create and push your manifest to the registry, for example to create and push the `gcr.io/my-registry/velero:master` manifest, run:
```bash
make manifest
```
For any specific platform, run `MANIFEST_PLATFORMS=<GOARCH> make manifest`
For example, to create and push manifest only for amd64, run:
```bash
MANIFEST_PLATFORMS=amd64 make manifest
```
_Note: By default, MANIFEST_PLATFORMS is set to amd64, ppc64le_
To run the entire workflow, run:
`REGISTRY=<$REGISTRY> VERSION=<$VERSION> ARCH=<GOOS>-<GOARCH> MANIFEST_PLATFORMS=<GOARCH> make container push manifest`
For example, to run the workflow only for amd64:
```bash
REGISTRY=myrepo VERSION=foo MANIFEST_PLATFORMS=amd64 make container push manifest
```
_Note: By default, ARCH is set to linux-amd64_
For example, to run the workflow only for ppc64le:
```bash
REGISTRY=myrepo VERSION=foo ARCH=linux-ppc64le MANIFEST_PLATFORMS=ppc64le make container push manifest
```
For example, to run the workflow for all supported platforms:
```bash
REGISTRY=myrepo VERSION=foo make all-containers all-push all-manifests
```
Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod:
```bash
kubectl -n velero delete pods -l deploy=velero
```
[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[22]: https://github.com/vmware-tanzu/velero/releases
# Code Standards
## Adding a changelog
Authors are expected to include a changelog file with their pull requests. The changelog file
should be a new file created in the `changelogs/unreleased` folder. The file should follow the
naming convention of `<PR number>-<username>` and the contents of the file should be the text for your
changelog entry. For example:

    velero/changelogs/unreleased   <- folder
        000-username               <- file

Add that file to the PR.
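A concrete sketch of the workflow (the PR number and username here are hypothetical):

```bash
# Hypothetical example: PR #0000 opened by GitHub user "username".
echo "Fix restore of cluster-scoped resources" > changelogs/unreleased/0000-username
git add changelogs/unreleased/0000-username
```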
## Code
- Log messages are capitalized.
- Error messages are kept lower-cased.
- Wrap/add a stack only to errors that are being directly returned from non-velero code, such as an API call to the Kubernetes server.
```go
errors.WithStack(err)
```
- Prefer to use the utilities in the Kubernetes package [`sets`](https://godoc.org/github.com/kubernetes/apimachinery/pkg/util/sets).
```
k8s.io/apimachinery/pkg/util/sets
```
## Imports
For imports, we use the following convention:
<group><version><api | client | informer | ...>
Example:
import (
corev1api "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
corev1listers "k8s.io/client-go/listers/core/v1"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
)
## Mocks
We use the [mockery](https://github.com/vektra/mockery) package to generate mocks for our interfaces.
Example: if you want to change this mock: https://github.com/vmware-tanzu/velero/blob/v1.4.0-beta.1/pkg/restic/mocks/restorer.go
Run:
```bash
go get github.com/vektra/mockery/.../
cd pkg/restic
mockery -name=Restorer
```
You might need to run `make update` to update the imports.
## Kubernetes Labels
When generating label values, be sure to pass them through the `label.GetValidName()` helper function.
This will help ensure that the values are the proper length and format to be stored and queried.
In general, UIDs are safe to persist as label values.
This function is not relevant to annotation values, which do not have restrictions.
## DCO Sign off
All authors to the project retain copyright to their work. However, to ensure
that they are only submitting work that they have rights to, we are requiring
everyone to acknowledge this by signing their work.
Any copyright notices in this repo should specify the authors as "the Velero contributors".
To sign your work, just add a line like this at the end of your commit message:
```
Signed-off-by: Joe Beda <joe@heptio.com>
```
This can easily be done with the `--signoff` option to `git commit`.
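For example (the commit message below is just a placeholder):

```bash
git commit --signoff -m "Fix typo in restore documentation"
```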
By doing this you state that you can certify the following (from https://developercertificate.org/):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
# Use IBM Cloud Object Storage as Velero's storage destination.
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or even on any other Kubernetes cluster, and still use IBM Cloud Object Storage as a destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
* Download an official release of Velero
* Create your COS instance
* Create an S3 bucket
* Define a service that can store data in the bucket
* Configure and start the Velero server
## Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create COS instance
If you don’t have a COS instance, you can create a new one, according to the detailed instructions in [Creating a new resource instance][1].
## Create an S3 bucket
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
## Define a service that can store data in the bucket.
The process of creating service credentials is described in [Service credentials][3].
Several comments:
1. The Velero service will write its backup into the bucket, so it requires the “Writer” access role.
2. Velero uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. We will use them in the next step.
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
```
where the access key ID and secret access key are the values obtained in the previous step.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider aws \
--bucket <YOUR_BUCKET> \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>
```
Velero does not currently have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/restic pods.
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
```bash
kubectl -n velero delete volumesnapshotlocation.velero.io default
```
For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation.
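For example, a sketch of generating the manifests with the same flags used above (rather than applying them immediately), so they can be reviewed or customized before installation:

```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT> \
    --dry-run -o yaml > velero-install.yaml
```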
## Installing the nginx example (optional)
If you run the nginx example, in the file `examples/nginx-app/with-pv.yaml`, uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace the placeholder with your `StorageClass` name.
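To see which `StorageClass` names are available in your cluster before editing the file, you can run:

```bash
kubectl get storageclass
```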
[0]: namespace.md
[1]: https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#creating-a-new-resource-instance
[2]: https://console.bluemix.net/docs/services/cloud-object-storage/getting-started.html#create-buckets
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[15]: customize-installation.md#customize-resource-requests-and-limits
## Quick start evaluation install with Minio
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
### Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See [Restic Integration][17].
* A DNS server on the cluster
* `kubectl` installed
### Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
#### macOS Installation
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Set up server
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][1] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
1. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
1. Start the server and the local storage service. In the Velero directory, run:
```
kubectl apply -f examples/minio/00-minio-deployment.yaml
```
```
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`).
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
1. Deploy the example nginx application:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Check to see that both the Velero and nginx deployments are successfully created:
```
kubectl get deployments -l component=velero --namespace=velero
kubectl get deployments --namespace=nginx-example
```
### Back up
1. Create a backup for any object that matches the `app=nginx` label selector:
```
velero backup create nginx-backup --selector app=nginx
```
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
```
velero backup create nginx-backup --selector 'backup notin (ignore)'
```
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
```
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
```
Alternatively, you can use some non-standard shorthand cron expressions:
```
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```
See the [cron package's documentation][30] for more usage examples.
1. Simulate a disaster:
```
kubectl delete namespace nginx-example
```
1. To check that the nginx deployment and service are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
You should get no results.
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
### Restore
1. Run:
```
velero restore create --from-backup nginx-backup
```
1. Run:
```
velero restore get
```
After the restore finishes, the output looks like the following:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, you can look at them in detail:
```
velero restore describe <RESTORE_NAME>
```
For more information, see [the debugging information][18].
### Clean up
If you want to delete any backups you created, including data in object storage and persistent
volume snapshots, you can run:
```
velero backup delete BACKUP_NAME
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Velero will allow you to
delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run:
```
velero backup get BACKUP_NAME
```
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
kubectl delete -f examples/nginx-app/base.yaml
```
## Expose Minio outside your cluster with a Service
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
- Change the Minio Service type from `ClusterIP` to `NodePort`.
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
### Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
1. Get the Minio URL:
- if you're running Minikube:
```shell
minikube service minio --namespace=velero --url
```
- in any other environment:
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
```shell
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
```
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
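If you prefer not to edit the YAML by hand, a minimal sketch of the same change using `kubectl patch` (the address and port below are placeholders for the values gathered in the previous steps):

```bash
# Replace 192.0.2.10 with a reachable node address and 30900 with the NodePort
# value retrieved above.
kubectl -n velero patch backupstoragelocation default --type merge \
  -p '{"spec":{"config":{"publicUrl":"http://192.0.2.10:30900"}}}'
```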
## Expose Minio outside your cluster with Kubernetes in Docker (KinD)
Kubernetes in Docker currently does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
In a terminal, run the following:
```shell
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $MINIO_POD -n velero 9000:9000
```
Then, in another terminal:
```shell
kubectl edit backupstoragelocation default -n velero
```
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
### Work with Ingress
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
In this case:
1. Keep the Service type as `ClusterIP`.
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
[1]: #expose-minio-with-service-of-type-nodeport
[3]: ../customize-installation.md
[17]: ../restic.md
[18]: ../debugging-restores.md
[26]: https://github.com/vmware-tanzu/velero/releases
[30]: https://godoc.org/github.com/robfig/cron
# Use Oracle Cloud as a Backup Storage Provider for Velero
## Introduction
[Velero](https://velero.io/) is a tool used to backup and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
1. [Download Velero](#download-velero)
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
4. [Install Velero](#install-velero)
5. [Clean Up](#clean-up)
6. [Examples](#examples)
7. [Additional Reading](#additional-reading)
## Download Velero
1. Download the [latest release](https://github.com/vmware-tanzu/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
```
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
```
*We strongly recommend that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the master branch of the Velero repository is under active development and is not guaranteed to be stable!*
2. Untar the release into your `/usr/local/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH` (steps 1-3 are combined into a single sketch after this list)
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
```
$ velero
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.
If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.
Usage:
velero [command]
```
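Steps 1-3 above can be combined into a short sketch for a Linux client (this assumes the v1.0.0 tarball shown above and that you want the binary under `/usr/local/bin/velero`):
```
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
tar -xzvf velero-v1.0.0-linux-amd64.tar.gz -C /usr/local/bin
mv /usr/local/bin/velero-v1.0.0-linux-amd64 /usr/local/bin/velero
export PATH=/usr/local/bin/velero:$PATH
velero    # should print the usage text shown above
```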
## Create A Customer Secret Key
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
2. Create a Velero credentials file with your Customer Secret Key:
```
$ vi credentials-velero
[default]
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
```
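If you prefer the CLI to the console, a Customer Secret Key (step 1 above) can also be created with the OCI CLI. This is only a sketch: it assumes the OCI CLI is installed and configured, and `<user-OCID>` is a placeholder for the OCID of your user:
```
oci iam customer-secret-key create \
  --user-id <user-OCID> \
  --display-name velero
```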
## Create An Oracle Object Storage Bucket
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
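The bucket can also be created from the OCI CLI rather than the console (a sketch; it assumes the OCI CLI is configured and `<tenancy-OCID>` is a placeholder for the OCID of your root compartment):
```
oci os bucket create \
  --compartment-id <tenancy-OCID> \
  --name velero
```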
## Install Velero
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
```
velero install \
--provider [provider name] \
--bucket [bucket name] \
--prefix [tenancy name] \
--use-volume-snapshots=false \
--secret-file [secret file location] \
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
```
- `--provider` Because we are using the S3-compatible API, we will use `aws` as our provider.
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
- `--use-volume-snapshots=false` Velero does not currently have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
- `--secret-file` The path to your `credentials-velero` file.
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
For example:
```
velero install \
--provider aws \
--bucket velero \
--prefix oracle-cloudnative \
--use-volume-snapshots=false \
--secret-file /Users/mboxell/bin/velero/credentials-velero \
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
```
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
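For example, a quick way to find the Velero pod and inspect its logs (assuming the default `velero` namespace):
```
kubectl get pods --namespace velero
kubectl logs deployment/velero --namespace velero
```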
## Clean Up
To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment, and delete the CRDs by running:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
This will remove all resources created by `velero install`.
## Examples
After creating the Velero server in your cluster, try this example:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service.
```
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
```
You can see the created resources by running `kubectl get all --namespace=nginx-example`:
```
$ kubectl get all --namespace=nginx-example
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2 2 2 2 21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
```
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
```
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
```
At this point you can navigate to the appropriate bucket, which we called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero, or list them through the S3-compatible API as shown below.
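For instance, a sketch that lists the backup objects with the AWS CLI (this assumes the AWS CLI is installed and configured with the Customer Secret Key from `credentials-velero`, and reuses the example tenancy, prefix, and region from the install step):
```
aws s3 ls s3://velero/oracle-cloudnative/backups/ \
  --endpoint-url https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
```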
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
```
$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
```
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
This should return: `No resources found.`
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
```
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20190604102710" submitted successfully.
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
```
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
5. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
```
$ velero restore get
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, or if the `STATUS` column displays `FAILED`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`.
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
```
$ kubectl delete -f examples/nginx-app/base.yaml
namespace "nginx-example" deleted
deployment.apps "nginx-deployment" deleted
service "my-nginx" deleted
```
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
```
$ velero backup delete nginx-backup
Are you sure you want to continue (Y/N)? Y
Request to delete backup "nginx-backup" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run `velero backup get BACKUP_NAME` or, more generally, `velero backup get`:
```
$ velero backup get nginx-backup
An error occurred: backups.velero.io "nginx-backup" not found
```
```
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
```
## Additional Reading
* [Official Velero Documentation](https://velero.io/docs/v1.4.0-beta.1/)
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)
# Container Storage Interface Snapshot Support in Velero
_This feature is under development. Documentation may not be up-to-date and features may not work as expected._
# Prerequisites
The following are the prerequisites for using Velero to take Container Storage Interface (CSI) snapshots:
1. The cluster is Kubernetes version 1.17 or greater.
1. The cluster is running a CSI driver capable of supporting volume snapshots at the [v1beta1 API level](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/).
1. The Velero server is running with the `--features EnableCSI` feature flag to enable CSI logic in Velero's core. See [Enabling Features][1] for more information (a sketch of an install command that enables this is shown after this list).
1. The Velero [CSI plugin](https://github.com/vmware-tanzu/velero-plugin-for-csi/) is installed to integrate with the CSI volume snapshot APIs.
1. When restoring CSI volumesnapshots across clusters, the name of the CSI driver in the destination cluster must be the same as that on the source cluster to ensure cross-cluster portability of CSI volumesnapshots.
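To illustrate prerequisites 3 and 4, the following sketch enables the feature flag on both the server and the client at install time. The plugin image tags and the placeholder provider settings are assumptions; check the plugin repositories for the current releases:
```shell
# Server side: enable the CSI feature flag and add the CSI plugin at install time.
# The object-store plugin for your provider is still required; both image tags
# here are assumptions, so check the plugin repositories for current releases.
velero install \
    --features=EnableCSI \
    --plugins=velero/velero-plugin-for-aws:v1.0.0,velero/velero-plugin-for-csi:v0.1.0 \
    --provider aws \
    --bucket <bucket> \
    --secret-file <credentials-file>

# Client side: enable the feature flag for the velero CLI.
velero client config set features=EnableCSI
```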
# Implementation Choices
This section documents some of the choices made during implementation of the Velero [CSI plugin](https://github.com/vmware-tanzu/velero-plugin-for-csi/):
1. Volumesnapshots created by the plugin will be retained only for the lifetime of the backup, even if the `DeletionPolicy` on the volumesnapshotclass is set to `Retain`. To accomplish this, during deletion of the backup and prior to deleting the volumesnapshot, the volumesnapshotcontent object will be patched to set its `DeletionPolicy` to `Delete` (see the illustrative patch after this list). Deleting the volumesnapshot object then results in a cascade delete of the volumesnapshotcontent and the snapshot in the storage provider.
1. Volumesnapshotcontent objects created during a Velero backup that are dangling, that is, not bound to a volumesnapshot object, will also be discovered through labels and deleted when the backup is deleted.
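For reference only, the patch the plugin applies is roughly equivalent to the following (an illustration, not a command you need to run; `<volumesnapshotcontent-name>` is a placeholder):
```shell
# Switch the content object's deletion policy so that deleting the
# volumesnapshot cascades to the snapshot in the storage provider.
kubectl patch volumesnapshotcontent <volumesnapshotcontent-name> \
  --type merge \
  -p '{"spec":{"deletionPolicy":"Delete"}}'
```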
# Known Limitations
1. To back up CSI-backed PVCs, the Velero [CSI plugin](https://github.com/vmware-tanzu/velero-plugin-for-csi/) will choose the first VolumeSnapshotClass in the cluster that has the same driver name. _[Issue #17](https://github.com/vmware-tanzu/velero-plugin-for-csi/issues/17)_
# Roadmap
Velero's support level for CSI volume snapshotting will follow upstream Kubernetes support for it, and will reach general availability sometime
after volume snapshotting is GA in upstream Kubernetes. Beta support is expected to launch in Velero v1.4.
[1]: customize-installation.md#enable-server-side-features