csi: update leader's ACL in volumewatcher
Tim Gross authored
The volumewatcher that runs on the leader needs to make RPC calls
rather than writing to raft (as we do in the deploymentwatcher)
because the unpublish workflow needs to make RPC calls to the
clients. This requires that the volumewatcher has access to the
leader's ACL token.

But when leadership transitions, the new leader creates a new leader
ACL token. This ACL token needs to be passed into the volumewatcher
when we enable it, otherwise the volumewatcher can find itself with a
stale token.
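The fix described above can be sketched as follows. This is an illustrative Go sketch, not Nomad's actual types or signatures: `Watcher`, `SetEnabled`, and `leaderAcl` are hypothetical names showing the pattern of re-passing the leader's ACL token on every leadership transition so the watcher never holds a token minted by a previous leader.

```go
package main

import (
	"fmt"
	"sync"
)

// Watcher is a minimal sketch of a leader-side watcher that must make
// RPC calls (rather than raft writes) and therefore needs the leader's
// ACL token. Names are illustrative, not Nomad's actual API.
type Watcher struct {
	mu        sync.Mutex
	enabled   bool
	leaderAcl string // token used for client RPCs in the unpublish workflow
}

// SetEnabled is called on each leadership transition. Passing the new
// leader's ACL token here is what prevents the watcher from being left
// with a stale token from the previous leader.
func (w *Watcher) SetEnabled(enabled bool, leaderAcl string) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.enabled = enabled
	w.leaderAcl = leaderAcl
}

func main() {
	w := &Watcher{}
	// First leader enables the watcher with its token.
	w.SetEnabled(true, "token-from-old-leader")
	// Leadership transitions: the new leader creates a new ACL token
	// and must pass it in when it re-enables the watcher.
	w.SetEnabled(true, "token-from-new-leader")
	fmt.Println(w.leaderAcl)
}
```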
3fc98350

Nomad


Nomad is a simple and flexible workload orchestrator to deploy and manage containers (Docker, Podman), non-containerized applications (executable, Java), and virtual machines (QEMU) across on-premises and cloud environments at scale.

Nomad is supported on Linux, Windows, and macOS. A commercial version of Nomad, Nomad Enterprise, is also available.

Nomad provides several key features:

  • Deploy Containers and Legacy Applications: Nomad’s flexibility as an orchestrator enables an organization to run containers, legacy, and batch applications together on the same infrastructure. Through pluggable task drivers, Nomad brings core orchestration benefits to legacy applications without requiring them to be containerized.

  • Simple & Reliable: Nomad runs as a single binary and is entirely self-contained - combining resource management and scheduling into a single system. Nomad does not require any external services for storage or coordination. Nomad automatically handles application, node, and driver failures. Nomad is distributed and resilient, using leader election and state replication to provide high availability in the event of failures.

  • Device Plugins & GPU Support: Nomad offers built-in support for GPU workloads such as machine learning (ML) and artificial intelligence (AI). Nomad uses device plugins to automatically detect and utilize resources from hardware devices such as GPUs, FPGAs, and TPUs.

  • Federation for Multi-Region, Multi-Cloud: Nomad was designed to support infrastructure at a global scale. Nomad supports federation out-of-the-box and can deploy applications across multiple regions and clouds.

  • Proven Scalability: Nomad is optimistically concurrent, which increases throughput and reduces latency for workloads. Nomad has been proven to scale to clusters of 10K+ nodes in real-world production environments.

  • HashiCorp Ecosystem: Nomad integrates seamlessly with Terraform, Consul, and Vault for provisioning, service discovery, and secrets management.

Quick Start

Testing

See Learn: Getting Started for instructions on setting up a local Nomad cluster for non-production use.
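As a quick illustration of what running a workload looks like, here is a minimal job specification in Nomad's HCL format. After starting a local dev agent with `nomad agent -dev`, a file like this could be submitted with `nomad job run`. The job name, image, and resource values are illustrative:

```hcl
# example.nomad - a minimal illustrative job specification
job "example" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:7"
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```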

Optionally, find Terraform manifests for bringing up a development Nomad cluster on a public cloud in the terraform directory.

Production

See Learn: Nomad Reference Architecture for recommended practices and a reference architecture for production deployments.

Documentation

Comprehensive documentation is available on the Nomad website: https://www.nomadproject.io/docs

Guides are available on HashiCorp Learn.

Contributing

See the contributing directory for more developer documentation.