Unverified commit 558e4b90 authored by Steve Kriss, committed by GitHub

v1.2.0-beta.1 release (#1995)


* generate v1.2.0-beta.1 docs site
Signed-off-by: Steve Kriss <krisss@vmware.com>

* v1.2.0-beta.1 changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>

* add PR 1994 changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>

* fix image tag
Co-Authored-By: Adnan Abdulhussein <adnan@prydoni.us>
Signed-off-by: Steve Kriss <krisss@vmware.com>
parent 0c1fc819
Showing with 1659 additions and 0 deletions
@@ -2,6 +2,7 @@
* [CHANGELOG-1.1.md][11]
## Development release:
* [v1.2.0-beta.1][12]
* [Unreleased Changes][0]
## Older releases:
@@ -17,6 +18,7 @@
* [CHANGELOG-0.3.md][1]
[12]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.2.md
[11]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.1.md
[10]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.0.md
[9]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.11.md
## v1.2.0-beta.1
#### 2019-10-24
### Download
- https://github.com/vmware-tanzu/velero/releases/tag/v1.2.0-beta.1
### Container Image
`velero/velero:v1.2.0-beta.1`
### Documentation
https://velero.io/docs/v1.2.0-beta.1/
### Upgrading
If you're upgrading from a previous version of Velero, there are several changes you'll need to be aware of:
- Container images are now published to Docker Hub. To upgrade your server, use the new image `velero/velero:v1.2.0-beta.1`.
- The AWS, Microsoft Azure, and GCP provider plugins that were previously part of the Velero binary have been extracted to their own standalone repositories/plugin images. If you are using one of these three providers, you will need to explicitly add the appropriate plugin to your Velero install:
- [AWS] `velero plugin add velero/velero-plugin-for-aws:v1.0.0-beta.1`
- [Azure] `velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0-beta.1`
- [GCP] `velero plugin add velero/velero-plugin-for-gcp:v1.0.0-beta.1`
### Highlights
- The AWS, Microsoft Azure, and GCP provider plugins that were previously part of the Velero binary have been extracted to their own standalone repositories/plugin images. They now function like any other provider plugin.
- Container images are now published to Docker Hub: `velero/velero:v1.2.0-beta.1`.
- Several improvements have been made to the restic integration:
- Backup and restore progress is now updated on the `PodVolumeBackup` and `PodVolumeRestore` custom resources and viewable via `velero backup/restore describe` while operations are in progress.
- Read-write-many PVCs are now only backed up once.
- Backups of PVCs remain incremental across pod reschedules.
- A structural schema has been added to the Velero CRDs that are created by `velero install` to enable validation of API fields.
- During restores that use the `--namespace-mappings` flag to clone a namespace within a cluster, PVs will now be cloned as needed.
### All Changes
* Allow backup storage locations to specify backup sync period or toggle off sync (#1936, @betta1)
* Remove cloud provider code (#1985, @carlisia)
* Restore action for cluster/namespace role bindings (#1974, @alexander)
* Add `--no-default-backup-location` flag to `velero install` (#1931, @Frank51)
* If includeClusterResources is nil/auto, pull in necessary CRDs in backupResource (#1831, @sseago)
* Azure: add support for Azure China/German clouds (#1938, @andyzhangx)
* Add a new required `--plugins` flag for `velero install` command. `--plugins` takes a list of container images to add as initcontainers. (#1930, @nrb)
* restic: only backup read-write-many PVCs at most once, even if they're annotated for backup from multiple pods. (#1896, @skriss)
* Azure: add support for cross-subscription backups (#1895, @boxcee)
* adds `insecureSkipTLSVerify` server config for AWS storage and `--insecure-skip-tls-verify` flag on client for self-signed certs (#1793, @s12chung)
* Add check to update resource field during backupItem (#1904, @spiffcs)
* Add `LD_LIBRARY_PATH` (=/plugins) to the env variables of velero deployment. (#1893, @lintongj)
* backup sync controller: stop using `metadata/revision` file, do a full diff of bucket contents vs. cluster contents each sync interval (#1892, @skriss)
* bug fix: during restore, check item's original namespace, not the remapped one, for inclusion/exclusion (#1909, @skriss)
* adds structural schema to Velero CRDs created on Velero install, enabling validation of Velero API fields (#1898, @prydonius)
* GCP: add support for specifying a Cloud KMS key name to use for encrypting backups in a storage location. (#1879, @skriss)
* AWS: add support for SSE-S3 AES256 encryption via `serverSideEncryption` config field in BackupStorageLocation (#1869, @skriss)
* change default `restic prune` interval to 7 days, add `velero server/install` flags for specifying an alternate default value. (#1864, @skriss)
* velero install: if `--use-restic` and `--wait` are specified, wait up to a minute for restic daemonset to be ready (#1859, @skriss)
* report restore progress in PodVolumeRestores and expose progress in the velero restore describe --details command (#1854, @prydonius)
* Jekyll Site updates - modifies documentation to use a wider layout; adds better markdown table formatting (#1848, @ccbayer)
* fix excluding additional items with the velero.io/exclude-from-backup=true label (#1843, @prydonius)
* report backup progress in PodVolumeBackups and expose progress in the velero backup describe --details command. Also upgrades restic to v0.9.5 (#1821, @prydonius)
* Add `--features` argument to all velero commands to provide feature flags that can control enablement of pre-release features. (#1798, @nrb)
* when backing up PVCs with restic, specify `--parent` flag to prevent full volume rescans after pod reschedules (#1807, @skriss)
* remove 'restic check' calls from before/after 'restic prune' since they're redundant (#1794, @skriss)
* fix error formatting due interpreting % as printf formatted strings (#1781, @s12chung)
* when using `velero restore create --namespace-mappings ...` to create a second copy of a namespace in a cluster, create copies of the PVs used (#1779, @skriss)
* adds --from-schedule flag to the `velero create backup` command to create a Backup from an existing Schedule (#1734, @prydonius)
* add `--allow-partially-failed` flag to `velero restore create` for use with `--from-schedule` to allow partially-failed backups to be restored (#1994, @skriss)
@@ -55,6 +55,12 @@ defaults:
version: master
gh: https://github.com/vmware-tanzu/velero/tree/master
layout: "docs"
- scope:
path: docs/v1.2.0-beta.1
values:
version: v1.2.0-beta.1
gh: https://github.com/vmware-tanzu/velero/tree/v1.2.0-beta.1
layout: "docs"
- scope:
path: docs/v1.1.0
values:
@@ -148,6 +154,7 @@ versioning: true
latest: v1.1.0
versions:
- master
- v1.2.0-beta.1
- v1.1.0
- v1.0.0
- v0.11.0
@@ -3,6 +3,7 @@
# that the navigation for older versions still work.
master: master-toc
v1.2.0-beta.1: v1-2-0-beta-1-toc
v1.1.0: v1-1-0-toc
v1.0.0: v1-0-0-toc
v0.11.0: v011-toc
toc:
- title: Introduction
subfolderitems:
- page: About Velero
url: /index.html
- page: How Velero works
url: /how-velero-works
- page: About locations
url: /locations
- title: Install
subfolderitems:
- page: Overview
url: /install-overview
- page: Upgrade to 1.1
url: /upgrade-to-1.1
- page: Supported providers
url: /supported-providers
- page: Evaluation install
url: /contributions/minio
- page: Restic integration
url: /restic
- page: Examples
url: /examples
- page: Uninstalling
url: /uninstalling
- title: Use
subfolderitems:
- page: Disaster recovery
url: /disaster-case
- page: Cluster migration
url: /migration-case
- page: Backup reference
url: /backup-reference
- page: Restore reference
url: /restore-reference
- page: Run in any namespace
url: /namespace
- page: Extend with hooks
url: /hooks
- title: Plugins
subfolderitems:
- page: Overview
url: /overview-plugins
- page: Custom plugins
url: /custom-plugins
- title: Troubleshoot
subfolderitems:
- page: Troubleshooting
url: /troubleshooting
- page: Troubleshoot an install or setup
url: /debugging-install
- page: Troubleshoot a restore
url: /debugging-restores
- page: Troubleshoot Restic
url: /restic#troubleshooting
- title: Contribute
subfolderitems:
- page: Start Contributing
url: /start-contributing
- page: Development
url: /development
- page: Build from source
url: /build-from-source
- page: Run locally
url: /run-locally
- title: More information
subfolderitems:
- page: Backup file format
url: /output-file-format
- page: API types
url: /api-types
- page: FAQ
url: /faq
- page: ZenHub
url: /zenhub
![100]
[![Build Status][1]][2]
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.
Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
## Documentation
[The documentation][29] provides a getting started guide, plus information about building from source, architecture, extending Velero, and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
## Troubleshooting
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
## Contributing
If you are ready to jump in and test, add code, or help with documentation, follow the instructions on our [Start contributing](https://velero.io/docs/v1.2.0-beta.1/start-contributing/) documentation for guidance on how to setup Velero for development.
## Changelog
See [the list of releases][6] to find out about feature changes.
[1]: https://travis-ci.org/vmware-tanzu/velero.svg?branch=master
[2]: https://travis-ci.org/vmware-tanzu/velero
[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero
[28]: install-overview.md
[29]: https://velero.io/docs/v1.2.0-beta.1/
[30]: troubleshooting.md
[100]: img/velero.png
# Table of Contents
## API types
Here we list the API types that have some functionality you can only configure via JSON/YAML, rather than the `velero` CLI (for example, hooks):
* [Backup][1]
* [Schedule][2]
* [BackupStorageLocation][3]
* [VolumeSnapshotLocation][4]
[1]: backup.md
[2]: schedule.md
[3]: backupstoragelocation.md
[4]: volumesnapshotlocation.md
# Backup API Type
## Use
The `Backup` API type is used as a request for the Velero server to perform a backup. Once created, the
Velero server immediately starts the backup process.
## API GroupVersion
Backup belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Backup` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
# Backup name. May be any valid Kubernetes object name. Required.
name: a
# Backup namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the backup. Required.
spec:
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for this backup.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before this backup is eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook currently supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Currently only "exec" hooks are supported.
post:
# Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
# The version of this Backup. The only version currently supported is 1.
version: 1
# The date and time when the Backup is eligible for garbage collection.
expiration: null
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Date/time when the backup started being processed.
startTimestamp: 2019-04-29T15:58:43Z
# Date/time when the backup finished being processed.
completionTimestamp: 2019-04-29T15:58:56Z
# Number of volume snapshots that Velero tried to create for this backup.
volumeSnapshotsAttempted: 2
# Number of volume snapshots that Velero successfully created for this backup.
volumeSnapshotsCompleted: 1
# Number of warnings that were logged by the backup.
warnings: 2
# Number of errors that were logged by the backup.
errors: 0
```
# Velero Backup Storage Locations
## Backup Storage Location
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`, however the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
A sample YAML `BackupStorageLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: velero
spec:
backupSyncPeriod: 2m0s
provider: aws
objectStorage:
bucket: myBucket
config:
region: us-west-2
profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the backups. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for backup storage. |
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
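As an illustration of how these parameters combine, here is a hedged sketch of a read-only location with syncing disabled; the name and bucket are made up for illustration:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: secondary          # hypothetical name
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: my-dr-bucket   # hypothetical bucket
    prefix: velero         # optional directory inside the bucket
  accessMode: ReadOnly     # only read/restore from this location
  backupSyncPeriod: 0s     # disable periodic backup sync
```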
#### AWS
**(Or other S3-compatible storage)**
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Queried from the AWS S3 API if not provided. |
| `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. |
| `s3Url` | string | Required field for non-AWS-hosted storage | *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Velero can already generate it from `region` and `bucket`. This field is primarily for local storage services like Minio. |
| `publicUrl` | string | Empty | *Example*: https://minio.mycluster.com<br><br>If specified, use this instead of `s3Url` when generating download URLs (e.g., for logs). This field is primarily for local storage services like Minio.|
| `serverSideEncryption` | string | Empty | The name of the server-side encryption algorithm to use for uploading objects, e.g. `AES256`. If using SSE-KMS and `kmsKeyId` is specified, this field will automatically be set to `aws:kms` so does not need to be specified by the user. |
| `kmsKeyId` | string | Empty | *Example*: "502b409c-4da1-419f-a16e-eif453b3i49f" or "alias/`<KMS-Key-Alias-Name>`"<br><br>Specify an [AWS KMS key][10] id or alias to enable encryption of the backups stored in S3. Only works with AWS S3 and may require explicitly granting key usage rights.|
| `signatureVersion` | string | `"4"` | Version of the signature algorithm used to create signed URLs that the `velero` CLI uses to download backups or fetch logs. Possible versions are "1" and "4". Usually the default version 4 is correct, but some S3-compatible providers, like Quobyte, only support version 1. |
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
| `insecureSkipTLSVerify` | bool | `false` | Set this to `true` if you do not want to verify the TLS certificate when connecting to the object store, such as when using self-signed certs with Minio. This is susceptible to man-in-the-middle attacks and is not recommended for production. |
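Putting several of these keys together, a hedged sketch of a `config` block for an S3-compatible service such as Minio (the URLs and region are placeholders):

```yaml
config:
  region: minio                           # placeholder region for local storage
  s3ForcePathStyle: "true"                # path-style addressing for Minio-like services
  s3Url: http://minio:9000                # in-cluster service URL (placeholder)
  publicUrl: https://minio.mycluster.com  # external URL for download links (placeholder)
  insecureSkipTLSVerify: "true"           # only for self-signed certs; not for production
```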
#### Azure
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `resourceGroup` | string | Required Field | Name of the resource group containing the storage account for this backup storage location. |
| `storageAccount` | string | Required Field | Name of the storage account for this backup storage location. |
| `subscriptionId` | string | Optional Field | ID of the subscription for this backup storage location. |
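Combining these fields, a minimal Azure `config` sketch might look like the following (the names and ID are hypothetical placeholders):

```yaml
config:
  resourceGroup: my-velero-rg       # hypothetical resource group
  storageAccount: myvelerostorage   # hypothetical storage account
  subscriptionId: 00000000-0000-0000-0000-000000000000  # optional; placeholder ID
```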
#### GCP
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `kmsKeyName` | string | Empty | Name of the Cloud KMS key to use to encrypt backups stored in this location, in the form `projects/P/locations/L/keyRings/R/cryptoKeys/K`. See [customer-managed Cloud KMS keys](https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys) for details. |
| `serviceAccount` | string | Empty | Name of the GCP service account to use for this backup storage location. Specify the service account here if you want to use workload identity instead of providing the key file. |
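As an illustration, a GCP `config` using a customer-managed KMS key might be sketched as follows; the project, key ring, key, and service account names are placeholders:

```yaml
config:
  kmsKeyName: projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
  serviceAccount: velero@my-project.iam.gserviceaccount.com  # optional, for workload identity
```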
[0]: #aws
[1]: #gcp
[2]: #azure
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
[10]: http://docs.aws.amazon.com/kms/latest/developerguide/overview.html
# Schedule API Type
## Use
The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup on a given cron schedule. Once created, the
Velero server waits for the next time the cron expression matches, then runs the backup process, and repeats it on that schedule.
## API GroupVersion
Schedule belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Schedule` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
# Schedule name. May be any valid Kubernetes object name. Required.
name: a
# Schedule namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the scheduled backup. Required.
spec:
# Schedule is a Cron expression defining when to run the Backup
schedule: 0 7 * * *
# Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the scheduled backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the scheduled backup. Resources may be shortcuts (e.g. 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the scheduled backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for backups under this schedule.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook currently supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Currently only "exec" hooks are supported.
post:
# Same content as pre above.
status:
# The current phase of the latest scheduled backup. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# Date/time of the last backup for a given schedule
lastBackup:
# An array of any validation errors encountered.
validationErrors:
```
# Velero Volume Snapshot Location
## Volume Snapshot Location
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple `VolumeSnapshotLocation` objects per provider, although you can only select one location per provider at backup time.
Each `VolumeSnapshotLocation` describes a provider and a location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
A sample YAML `VolumeSnapshotLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
name: aws-default
namespace: velero
spec:
provider: aws
config:
region: us-west-2
profile: "default"
```
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the volume. |
| `config` | map[string]string<br><br>(See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for volume snapshots. |
#### AWS
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Required. |
| `profile` | string | "default" | AWS profile within the credential file to use for given store |
#### Azure
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `apiTimeout` | metav1.Duration | 2m0s | How long to wait for an Azure API request to complete before timeout. |
| `resourceGroup` | string | Optional | The name of the resource group where volume snapshots should be stored, if different from the cluster's resource group. |
| `subscriptionId` | string | Optional | The ID of the subscription where volume snapshots should be stored, if different from the cluster's subscription. Requires `resourceGroup` to be set. |
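A sketch of an Azure `config` that stores snapshots outside the cluster's resource group, using the fields above (the names and ID are placeholders):

```yaml
config:
  apiTimeout: 5m0s                  # wait longer than the 2m0s default
  resourceGroup: my-snapshots-rg    # hypothetical resource group
  subscriptionId: 00000000-0000-0000-0000-000000000000  # placeholder; requires resourceGroup
```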
#### GCP
##### config
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `snapshotLocation` | string | Empty | *Example*: "us-central1"<br><br>See [GCP documentation][4] for the full list.<br><br>If not specified, the snapshots are stored in the [default location][5]. |
| `project` | string | Empty | The project ID where snapshots should be stored, if different from the project that your IAM account is in. Optional. |
[0]: #aws
[1]: #gcp
[2]: #azure
[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
[4]: https://cloud.google.com/storage/docs/locations#available_locations
[5]: https://cloud.google.com/compute/docs/disks/create-snapshots#default_location
# Backup Reference
## Exclude Specific Items from Backup
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
```bash
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
```
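The same label can equivalently be set in the object's manifest; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config          # hypothetical object
  namespace: my-namespace
  labels:
    # Velero skips objects carrying this label, even if they match the
    # backup's resource/namespace/label selectors.
    velero.io/exclude-from-backup: "true"
```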
# Build from source
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)
## Get the source
### Option 1) Get latest (recommended)
```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/vmware-tanzu/velero
```
Where `$HOME/go` is your [import path][4] for Go.
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your `$PATH`.
### Option 2) Release archive
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`.
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
When building by using `make`, it will place the binaries under `_output/bin/$GOOS/$GOARCH`. For example, you will find the binary for darwin here: `_output/bin/darwin/amd64/velero`, and the binary for linux here: `_output/bin/linux/amd64/velero`. `make` will also splice version and git commit information in so that `velero version` displays proper output.
Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to overwrite what image gets deployed, use the `image` flag (see below for instructions on how to build images).
### Build the binary
To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:
```bash
go build ./cmd/velero
```
```bash
make local
```
### Cross compiling
To build the velero binary targeting linux/amd64 within a build container on your local machine, run:
```bash
make build
```
For any specific platform, run `make build-<GOOS>-<GOARCH>`.
For example, to build for macOS, run `make build-darwin-amd64`.
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
* linux-amd64
* linux-arm
* linux-arm64
* darwin-amd64
* windows-amd64
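The mapping from each of these platforms to its `make` target and output path can be sketched as follows (a shell illustration of the naming convention described above, not part of the Makefile itself):

```shell
# Each platform string splits into GOOS and GOARCH, giving the make target
# and the location of the resulting binary.
for platform in linux-amd64 linux-arm linux-arm64 darwin-amd64 windows-amd64; do
  GOOS=${platform%-*}     # everything before the last dash
  GOARCH=${platform#*-}   # everything after the first dash
  echo "make build-${GOOS}-${GOARCH}  ->  _output/bin/${GOOS}/${GOARCH}/velero"
done
```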
## Making images and updating Velero
If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:
```bash
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
```
To build a Velero container image, first set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:master` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `gcr.io/heptio-images`.
Optionally, set the `$VERSION` environment variable to change the image tag. Then, run:
```bash
make container
```
To push your image to the registry, run:
```bash
make push
```
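Putting the pieces together, the image reference that `make container` builds and `make push` publishes follows the pattern below (an illustration of the naming described above; the values shown are examples):

```shell
# With REGISTRY and VERSION set as described above, the resulting image
# reference is ${REGISTRY}/velero:${VERSION}.
REGISTRY=gcr.io/my-registry   # defaults to gcr.io/heptio-images if unset
VERSION=master                # example tag
echo "${REGISTRY}/velero:${VERSION}"
```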
Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod:
```bash
kubectl -n velero delete pods -l deploy=velero
```
[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[22]: https://github.com/vmware-tanzu/velero/releases
# Use IBM Cloud Object Storage as Velero's storage destination
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and use IBM Cloud Object Storage (COS) as the destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
* Download an official release of Velero
* Create your COS instance
* Create an S3 bucket
* Define a service that can store data in the bucket
* Configure and start the Velero server
## Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create COS instance
If you don’t have a COS instance, you can create a new one, according to the detailed instructions in [Creating a new resource instance][1].
## Create an S3 bucket
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
## Define a service that can store data in the bucket
The process of creating service credentials is described in [Service credentials][3].
A few notes:
1. The Velero service will write its backups into the bucket, so it requires the "Writer" access role.
2. Velero uses the AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. We will use them in the next step.
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
```
where the access key ID and secret access key are the values obtained in the previous step.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
--provider aws \
--bucket <YOUR_BUCKET> \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>
```
Velero does not currently have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/restic pods.
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
```bash
kubectl -n velero delete volumesnapshotlocation.velero.io default
```
For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation.
## Installing the nginx example (optional)
If you want to run the nginx example with persistent volumes, edit `examples/nginx-app/with-pv.yaml`:
uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace the placeholder with your `StorageClass` name.
[0]: namespace.md
[1]: https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#creating-a-new-resource-instance
[2]: https://console.bluemix.net/docs/services/cloud-object-storage/getting-started.html#create-buckets
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
[6]: api-types/backupstoragelocation.md#aws
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[15]: install-overview.md#velero-resource-requirements
## Quick start evaluation install with Minio
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
### Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See [Restic Integration][17].
* A DNS server on the cluster
* `kubectl` installed
### Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the master branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
#### macOS Installation
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Set up server
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][31] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
1. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
1. Start the server and the local storage service. In the Velero directory, run:
```
kubectl apply -f examples/minio/00-minio-deployment.yaml
```
```
velero install \
--provider aws \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`).
Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready.
1. Deploy the example nginx application:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Check to see that both the Velero and nginx deployments are successfully created:
```
kubectl get deployments -l component=velero --namespace=velero
kubectl get deployments --namespace=nginx-example
```
### Back up
1. Create a backup for any object that matches the `app=nginx` label selector:
```
velero backup create nginx-backup --selector app=nginx
```
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
```
velero backup create nginx-backup --selector 'backup notin (ignore)'
```
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
```
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
```
Alternatively, you can use some non-standard shorthand cron expressions:
```
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```
See the [cron package's documentation][30] for more usage examples.
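Backups created by a schedule are named after it, which is useful later when choosing a backup to restore from. A small sketch of the naming (the timestamp value here is illustrative):

```shell
# Scheduled backups are named <SCHEDULE_NAME>-<TIMESTAMP>.
SCHEDULE=nginx-daily
TIMESTAMP=20191024010000   # example: the schedule fired at 01:00 UTC
echo "${SCHEDULE}-${TIMESTAMP}"
```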
1. Simulate a disaster:
```
kubectl delete namespace nginx-example
```
1. To check that the nginx deployment and service are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
You should get no results.
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
### Restore
1. Run:
```
velero restore create --from-backup nginx-backup
```
1. Run:
```
velero restore get
```
After the restore finishes, the output looks like the following:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, you can look at them in detail:
```
velero restore describe <RESTORE_NAME>
```
For more information, see [the debugging information][18].
### Clean up
If you want to delete any backups you created, including data in object storage and persistent
volume snapshots, you can run:
```
velero backup delete BACKUP_NAME
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Velero will allow you to
delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run:
```
velero backup get BACKUP_NAME
```
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
kubectl delete -f examples/nginx-app/base.yaml
```
## Expose Minio outside your cluster with a Service
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
- Change the Minio Service type from `ClusterIP` to `NodePort`.
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
### Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
1. Get the Minio URL:
- if you're running Minikube:
```shell
minikube service minio --namespace=velero --url
```
- in any other environment:
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
```shell
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
```
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
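Putting the address and port together, the `publicUrl` value takes the following shape (a sketch with placeholder values; substitute the IP and NodePort you obtained above):

```shell
# Placeholder values -- use your node's reachable address and the NodePort
# reported by kubectl.
NODE_IP=192.0.2.10
NODE_PORT=30900
PUBLIC_URL="http://${NODE_IP}:${NODE_PORT}"
echo "$PUBLIC_URL"
```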
## Expose Minio outside your cluster with Kubernetes in Docker (KinD)
Kubernetes in Docker currently does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
In a terminal, run the following:
```shell
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $MINIO_POD -n velero 9000:9000
```
Then, in another terminal:
```shell
kubectl edit backupstoragelocation default -n velero
```
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
### Work with Ingress
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
In this case:
1. Keep the Service type as `ClusterIP`.
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
[1]: #expose-minio-with-service-of-type-nodeport
[3]: ../install-overview.md
[17]: ../restic.md
[18]: ../debugging-restores.md
[26]: https://github.com/vmware-tanzu/velero/releases
[30]: https://godoc.org/github.com/robfig/cron
# Use Oracle Cloud as a Backup Storage Provider for Velero
## Introduction
[Velero](https://velero.io/) is a tool used to backup and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
1. [Download Velero](#download-velero)
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
4. [Install Velero](#install-velero)
5. [Clean Up](#clean-up)
6. [Examples](#examples)
7. [Additional Reading](#additional-reading)
## Download Velero
1. Download the [latest release](https://github.com/vmware-tanzu/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
```
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
```
*We strongly recommend that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the master branch of the Velero repository is under active development and is not guaranteed to be stable!*
2. Untar the release in your `/usr/local/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH`
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
```
$ velero
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.
If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.
Usage:
velero [command]
```
## Create A Customer Secret Key
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
2. Create a Velero credentials file with your Customer Secret Key:
```
$ vi credentials-velero
[default]
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
```
## Create An Oracle Object Storage Bucket
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
## Install Velero
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
```
velero install \
--provider [provider name] \
--bucket [bucket name] \
--prefix [tenancy name] \
--use-volume-snapshots=false \
--secret-file [secret file location] \
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
```
- `--provider` Because we are using the S3-compatible API, we will use `aws` as our provider.
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
- `--use-volume-snapshots=false` Velero does not currently have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
- `--secret-file` The path to your `credentials-velero` file.
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
For example:
```
velero install \
--provider aws \
--bucket velero \
--prefix oracle-cloudnative \
--use-volume-snapshots=false \
--secret-file /Users/mboxell/bin/velero/credentials-velero \
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
```
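The `s3Url` used above can be assembled from the tenancy and region names like so (a sketch; the values are the ones used in this guide):

```shell
# Oracle's S3-compatible endpoint follows this pattern.
TENANCY=oracle-cloudnative
REGION=us-phoenix-1
S3_URL="https://${TENANCY}.compat.objectstorage.${REGION}.oraclecloud.com"
echo "$S3_URL"
```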
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
## Clean Up
To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment and delete the CRDs, run:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
This will remove all resources created by `velero install`.
## Examples
After creating the Velero server in your cluster, try this example:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service.
```
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
```
You can see the created resources by running `kubectl get all`
```
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2 2 2 2 21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
```
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
```
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
```
At this point you can navigate to the appropriate bucket, which we called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero.
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
```
$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
```
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
This should return: `No resources found.`
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
```
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20190604102710" submitted successfully.
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
```
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
5. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
```
$ velero restore get
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, for instance if the `STATUS` column displays `Failed` instead of `Completed`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`.
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
```
$ kubectl delete -f examples/nginx-app/base.yaml
namespace "nginx-example" deleted
deployment.apps "nginx-deployment" deleted
service "my-nginx" deleted
```
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
```
$ velero backup delete nginx-backup
Are you sure you want to continue (Y/N)? Y
Request to delete backup "nginx-backup" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`:
```
$ velero backup get nginx-backup
An error occurred: backups.velero.io "nginx-backup" not found
```
```
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
```
## Additional Reading
* [Official Velero Documentation](https://velero.io/docs/v1.2.0-beta.1/)
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)
# Plugins
Velero has a plugin architecture that allows users to add their own custom functionality to Velero backups & restores without having to modify/recompile the core Velero binary. To add custom functionality, users simply create their own binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code to expose the plugin implementations to Velero. This binary is added to a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access.
Multiple plugins, of any type, can be implemented in this binary.
A fully-functional [sample plugin repository][1] is provided to serve as a convenient starting point for plugin authors.
## Plugin Naming
When naming your plugin, keep in mind that the name needs to conform to these rules:
- have two parts separated by '/'
- none of the above parts can be empty
- the prefix is a valid DNS subdomain name
- a plugin with the same name must not already exist
### Some examples:
```
- example.io/azure
- 1.2.3.4/5678
- example-with-dash.io/azure
```
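The two-part rule can be checked mechanically; here is a small shell sketch of how a name like those above splits into its prefix and plugin-name parts:

```shell
# Split a plugin name into its DNS-subdomain prefix and its name part.
PLUGIN=example.io/azure
PREFIX=${PLUGIN%%/*}   # part before the first slash
NAME=${PLUGIN#*/}      # part after the first slash
echo "prefix=${PREFIX} name=${NAME}"
```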
You will need to give your plugin(s) a name when registering them by calling the appropriate `RegisterX` function: <https://github.com/vmware-tanzu/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60>
## Plugin Kinds
Velero currently supports the following kinds of plugins:
- **Object Store** - persists and retrieves backups, backup logs and restore logs
- **Volume Snapshotter** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
- **Backup Item Action** - executes arbitrary logic for individual items prior to storing them in a backup file
- **Restore Item Action** - executes arbitrary logic for individual items prior to restoring them into a cluster
## Plugin Logging
Velero provides a [logger][2] that can be used by plugins to log structured information to the main Velero server log or
per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same
flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`,
plugins will also emit debug-level logs. See the [sample repository][1] for an example of how to use the logger within your plugin.
## Plugin Configuration
Velero uses a ConfigMap-based convention for providing configuration to plugins. If your plugin needs to be configured at runtime,
define a ConfigMap like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# any name can be used; Velero uses the labels (below)
# to identify it rather than the name
name: my-plugin-config
# must be in the namespace where the velero deployment
# is running
namespace: velero
labels:
# this value-less label identifies the ConfigMap as
# config for a plugin (i.e. the built-in change storageclass
# restore item action plugin)
velero.io/plugin-config: ""
# add a label whose key corresponds to the fully-qualified
# plugin name (e.g. mydomain.io/my-plugin-name), and whose
# value is the plugin type (BackupItemAction, RestoreItemAction,
# ObjectStore, or VolumeSnapshotter)
<fully-qualified-plugin-name>: <plugin-type>
data:
# add your configuration data here as key-value pairs
```
Then, in your plugin's implementation, you can read this ConfigMap to fetch the necessary configuration. See the [restic restore action][3]
for an example of this -- in particular, the `getPluginConfig(...)` function.
## Feature Flags
Velero will pass any known features flags as a comma-separated list of strings to the `--features` argument.
Once parsed into a `[]string`, the features can then be registered using the `NewFeatureFlagSet` function and queried with `features.Enabled(<featureName>)`.
## Environment Variables
Velero adds `LD_LIBRARY_PATH` to the list of environment variables as a convenience for plugins that require C libraries/extensions at runtime.
[1]: https://github.com/vmware-tanzu/velero-plugin-example
[2]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/plugin/logger.go
[3]: https://github.com/vmware-tanzu/velero/blob/v1.2.0-beta.1/pkg/restore/restic_restore_action.go
# Debugging Installation Issues
## General
### `invalid configuration: no configuration has been provided`
This typically means that no `kubeconfig` file can be found for the Velero client to use. Velero looks for a kubeconfig in the
following locations:
* the path specified by the `--kubeconfig` flag, if any
* the path specified by the `$KUBECONFIG` environment variable, if any
* `~/.kube/config`
### Backups or restores stuck in `New` phase
This means that the Velero controllers are not processing the backups/restores, which usually happens because the Velero server is not running. Check the pod description and logs for errors:
```
kubectl -n velero describe pods
kubectl -n velero logs deployment/velero
```
## AWS
### `NoCredentialProviders: no valid providers in chain`
#### Using credentials
This means that the secret containing the AWS IAM user credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `credentials-velero` file is formatted properly and has the correct values:
```
[default]
aws_access_key_id=<your AWS access key ID>
aws_secret_access_key=<your AWS secret access key>
```
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
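As a quick sanity check, a credentials file in the shape shown above can be validated with a small helper. This function is hypothetical, not part of Velero:

```go
package main

import (
	"fmt"
	"strings"
)

// validateCredentials is a hypothetical checker for the minimal
// credentials-velero shape: a [default] profile containing both
// aws_access_key_id and aws_secret_access_key.
func validateCredentials(contents string) []string {
	var problems []string
	seen := map[string]bool{}
	for _, line := range strings.Split(contents, "\n") {
		line = strings.TrimSpace(line)
		if k, _, ok := strings.Cut(line, "="); ok {
			seen[strings.TrimSpace(k)] = true
		} else if line == "[default]" {
			seen["[default]"] = true
		}
	}
	for _, want := range []string{"[default]", "aws_access_key_id", "aws_secret_access_key"} {
		if !seen[want] {
			problems = append(problems, "missing "+want)
		}
	}
	return problems
}

func main() {
	good := "[default]\naws_access_key_id=example-id\naws_secret_access_key=example-secret"
	fmt.Println(validateCredentials(good)) // []
	fmt.Println(validateCredentials("[default]\naws_access_key_id=example-id"))
	// [missing aws_secret_access_key]
}
```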
#### Using kube2iam
This means that Velero can't read the content of the S3 bucket. Ensure the following:
* There is a Trust Policy document allowing the role used by kube2iam to assume Velero's role, as stated in the AWS config documentation.
* The new Velero role has all the permissions listed in the documentation regarding S3.
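For illustration, a Trust Policy document of the shape described above might look like the following; the account ID and role name are placeholders, so consult the AWS provider documentation for the authoritative form:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/kube2iam-node-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```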
## Azure
### `Failed to refresh the Token` or `adal: Refresh request failed`
This means that the secret containing the Azure service principal credentials for Velero has not been created/mounted
properly into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has all of the expected keys and each one has the correct value (see [setup instructions][0])
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
## GCE/GKE
### `open credentials/cloud: no such file or directory`
This means that the secret containing the GCE service account credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
[0]: azure-config.md#create-service-principal
# Debugging Restores
* [Example][0]
* [Structure][1]
## Example
When Velero finishes a Restore, its status changes to "Completed" regardless of whether issues occurred during the process. The numbers of warnings and errors are indicated in the output columns of `velero restore get`:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
backup-test-20170726180512 backup-test Completed 155 76 2017-07-26 11:41:14 -0400 EDT <none>
backup-test-20170726180513 backup-test Completed 121 14 2017-07-26 11:48:24 -0400 EDT <none>
backup-test-2-20170726180514 backup-test-2 Completed 0 0 2017-07-26 13:31:21 -0400 EDT <none>
backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none>
```
To examine the warnings and errors in more detail, you can use `velero restore describe`:
```bash
velero restore describe backup-test-20170726180512
```
The output looks like this:
```
Name: backup-test-20170726180512
Namespace: velero
Labels: <none>
Annotations: <none>
Backup: backup-test
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: serviceaccounts
Excluded: nodes, events, events.events.k8s.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Phase: Completed
Validation errors: <none>
Warnings:
Velero: <none>
Cluster: <none>
Namespaces:
velero: serviceaccounts "velero" already exists
serviceaccounts "default" already exists
kube-public: serviceaccounts "default" already exists
kube-system: serviceaccounts "attachdetach-controller" already exists
serviceaccounts "certificate-controller" already exists
serviceaccounts "cronjob-controller" already exists
serviceaccounts "daemon-set-controller" already exists
serviceaccounts "default" already exists
serviceaccounts "deployment-controller" already exists
serviceaccounts "disruption-controller" already exists
serviceaccounts "endpoint-controller" already exists
serviceaccounts "generic-garbage-collector" already exists
serviceaccounts "horizontal-pod-autoscaler" already exists
serviceaccounts "job-controller" already exists
serviceaccounts "kube-dns" already exists
serviceaccounts "namespace-controller" already exists
serviceaccounts "node-controller" already exists
serviceaccounts "persistent-volume-binder" already exists
serviceaccounts "pod-garbage-collector" already exists
serviceaccounts "replicaset-controller" already exists
serviceaccounts "replication-controller" already exists
serviceaccounts "resourcequota-controller" already exists
serviceaccounts "service-account-controller" already exists
serviceaccounts "service-controller" already exists
serviceaccounts "statefulset-controller" already exists
serviceaccounts "ttl-controller" already exists
default: serviceaccounts "default" already exists
Errors:
Velero: <none>
Cluster: <none>
Namespaces: <none>
```
## Structure
Errors appear for incomplete or partial restores. Warnings appear for non-blocking issues: the
restore looks "normal" and all resources referenced in the backup exist in some form, although some
of them may have been pre-existing.
Both errors and warnings are structured in the same way:
* `Velero`: A list of system-related issues encountered by the Velero server (e.g. couldn't read directory).
* `Cluster`: A list of issues related to the restore of cluster-scoped resources.
* `Namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources.
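This three-part grouping can be sketched as a Go type. This is a simplified illustration; Velero's actual type may differ:

```go
package main

import "fmt"

// Result is a simplified sketch of how restore warnings/errors are
// grouped: server-level issues, cluster-scoped resource issues, and
// per-namespace issues.
type Result struct {
	Velero     []string            // system-related issues from the Velero server
	Cluster    []string            // issues restoring cluster-scoped resources
	Namespaces map[string][]string // issues keyed by namespace
}

// Count returns the total number of issues, matching the WARNINGS/ERRORS
// totals shown by `velero restore get`.
func (r Result) Count() int {
	n := len(r.Velero) + len(r.Cluster)
	for _, issues := range r.Namespaces {
		n += len(issues)
	}
	return n
}

func main() {
	warnings := Result{
		Namespaces: map[string][]string{
			"velero":      {`serviceaccounts "velero" already exists`},
			"kube-public": {`serviceaccounts "default" already exists`},
		},
	}
	fmt.Println(warnings.Count()) // 2
}
```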
[0]: #example
[1]: #structure
# Development
## Update generated files
Run `make update` to regenerate files if you make the following changes:
* Add/edit/remove command line flags and/or their help text
* Add/edit/remove commands or subcommands
* Add new API types
* Add/edit/remove plugin protobuf message or service definitions
The following files are automatically generated from the source code:
* The clientset
* Listers
* Shared informers
* Documentation
* Protobuf/gRPC types
You can run `make verify` to ensure that all generated files (clientset, listers, shared informers, docs) are up to date.
## Test
To run unit tests, use `make test`.
## Vendor dependencies
If you need to add or update the vendored dependencies, see [Vendoring dependencies][11].
[11]: vendoring-dependencies.md