This project is mirrored from https://gitee.com/mirrors/nomad.git.
- 10 Jan, 2022 1 commit
-
-
Derek Strickland authored
This PR exposes the following existing `consul-template` configuration options to Nomad jobspec authors in the `{job.group.task.template}` stanza:
- `wait`

It also exposes the following `consul-template` configuration to Nomad operators in the `{client.template}` stanza:
- `max_stale`
- `block_query_wait`
- `consul_retry`
- `vault_retry`
- `wait`

Finally, it adds the following new Nomad-specific configuration to the `{client.template}` stanza, allowing operators to set bounds on what jobspec authors can configure:
- `wait_bounds`

Co-authored-by: Tim Gross <tgross@hashicorp.com>
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
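As a rough illustration of what `wait_bounds` enables, here is a minimal Go sketch of the clamping behavior it implies, assuming operator bounds simply override the jobspec author's requested window; `WaitConfig` and `clampWait` are illustrative names, not Nomad's implementation:

```go
package main

import (
	"fmt"
	"time"
)

// WaitConfig mirrors consul-template's wait setting: a min/max quiescence
// window the template runner waits for before re-rendering.
type WaitConfig struct {
	Min time.Duration
	Max time.Duration
}

// clampWait illustrates the idea behind wait_bounds: operator-set bounds
// win over whatever window the jobspec author requested.
func clampWait(requested, bounds WaitConfig) WaitConfig {
	out := requested
	if out.Min < bounds.Min {
		out.Min = bounds.Min
	}
	if bounds.Max > 0 && out.Max > bounds.Max {
		out.Max = bounds.Max
	}
	if out.Max < out.Min {
		out.Max = out.Min
	}
	return out
}

func main() {
	jobspec := WaitConfig{Min: 1 * time.Second, Max: 10 * time.Minute}
	bounds := WaitConfig{Min: 5 * time.Second, Max: 4 * time.Minute}
	fmt.Println(clampWait(jobspec, bounds)) // {5s 4m0s}
}
```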
-
- 07 Jan, 2022 3 commits
-
-
Tim Gross authored
Client endpoints such as `alloc exec` are enforced on the client node if the API client or CLI has "line of sight" to that node. This is already in the Learn guide, but having it in the ACL configuration docs would be helpful.
-
Tim Gross authored
Small refactoring of the allocrunner hook for CSI to make it more testable, and a unit test that covers most of its logic.
-
Tim Gross authored
* Fixed name of `nomad.scheduler.allocs.reschedule` metric
* Added new metrics to metrics reference documentation
* Expanded definitions of "waiting" metrics
* Changelog entry for #10236 and #10237
-
- 06 Jan, 2022 5 commits
-
-
Luiz Aoqui authored
-
Joel May authored
-
Joel May authored
-
Charlie Voiselle authored
## Development Environment Changes
* Added stringer to build deps

## New HTTP APIs
* Added scheduler worker config API
* Added scheduler worker info API

## New Internals
* (Scheduler) Worker API refactor: Start(), Stop(), Pause(), Resume()
* Updated shutdown to use context
* Added mutexes for contended server data:
  - `workerLock` for the `workers` slice
  - `workerConfigLock` for the `Server.Config.NumSchedulers` and `Server.Config.EnabledSchedulers` values

## Other
* Added docs for the scheduler worker API
* Added changelog message

Co-authored-by: Derek Strickland <1111455+DerekStrickland@users.noreply.github.com>
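A hedged sketch of the pausable-worker lifecycle named above (Start, Stop, Pause, Resume), using a context for shutdown and a mutex plus condition variable for the pause state; the internals are illustrative, not Nomad's actual scheduler worker:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Worker sketches the lifecycle the commit lists. A context handles
// shutdown, and a mutex guards the pause flag, standing in for the
// contended-data locks the commit describes.
type Worker struct {
	mu     sync.Mutex
	cond   *sync.Cond
	paused bool
	cancel context.CancelFunc
}

func NewWorker() *Worker {
	w := &Worker{}
	w.cond = sync.NewCond(&w.mu)
	return w
}

// Start launches the work loop; it runs until Stop cancels the context.
func (w *Worker) Start(parent context.Context, work func()) {
	ctx, cancel := context.WithCancel(parent)
	w.cancel = cancel
	go func() {
		for {
			w.mu.Lock()
			for w.paused && ctx.Err() == nil {
				w.cond.Wait() // blocks while paused
			}
			w.mu.Unlock()
			if ctx.Err() != nil {
				return
			}
			work()
		}
	}()
}

func (w *Worker) Pause() {
	w.mu.Lock()
	w.paused = true
	w.mu.Unlock()
}

func (w *Worker) Resume() {
	w.mu.Lock()
	w.paused = false
	w.mu.Unlock()
	w.cond.Broadcast()
}

// Stop must wake a paused loop so it can observe the canceled context.
func (w *Worker) Stop() {
	w.mu.Lock()
	w.cancel()
	w.mu.Unlock()
	w.cond.Broadcast()
}

func main() {
	w := NewWorker()
	w.Start(context.Background(), func() { time.Sleep(time.Millisecond) })
	w.Pause()
	w.Resume()
	w.Stop()
	time.Sleep(10 * time.Millisecond) // give the loop a moment to exit
	fmt.Println("worker stopped")
}
```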
-
Michael Schurter authored
Fix Node.Copy()
-
- 05 Jan, 2022 2 commits
-
-
Jai authored
Refactor: Breadcrumbs Service
-
Tim Gross authored
When `volumewatcher.Watcher` starts on the leader, it starts a watch on every volume and triggers a reap of unused claims on any change to that volume. But if a reap is in-flight during a leadership transition, it will fail and the event that triggered it will be dropped. Perform one reap of unused claims at the start of the watcher so that leadership transitions don't drop this event.
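The shape of the fix, as a minimal sketch: reconcile once at startup before entering the event loop, so an event dropped during the transition is still handled. `runWatcher` is an illustrative stand-in, not the actual volumewatcher code:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runWatcher reaps once at startup to cover any event dropped before the
// watcher started, then keeps reaping as new events arrive.
func runWatcher(ctx context.Context, events <-chan struct{}, reap func()) {
	reap() // initial pass: don't depend on a change event we may have missed
	for {
		select {
		case <-ctx.Done():
			return
		case <-events:
			reap()
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	events := make(chan struct{}, 1)
	events <- struct{}{}
	runWatcher(ctx, events, func() { fmt.Println("reap unused claims") })
}
```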
-
- 04 Jan, 2022 2 commits
-
-
Arkadiusz authored
Perform one more read after receiving a cancel signal when streaming a file from the allocation API.
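A minimal sketch of that pattern, assuming a plain `io.Reader`/`io.Writer` pair rather than Nomad's actual streaming endpoint: after cancellation, drain one final read so bytes buffered just before the cancel still reach the caller:

```go
package main

import (
	"context"
	"io"
	"os"
	"strings"
	"time"
)

// copyUntilCanceled streams r to w until the context is canceled, then
// performs one last read so already-buffered data is not lost.
func copyUntilCanceled(ctx context.Context, r io.Reader, w io.Writer) error {
	buf := make([]byte, 4096)
	for {
		select {
		case <-ctx.Done():
			// one final read to drain anything already buffered
			if n, _ := r.Read(buf); n > 0 {
				w.Write(buf[:n])
			}
			return ctx.Err()
		default:
		}
		n, err := r.Read(buf)
		if n > 0 {
			if _, werr := w.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	time.Sleep(2 * time.Millisecond) // let the context expire first
	copyUntilCanceled(ctx, strings.NewReader("tail of the log\n"), os.Stdout)
}
```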
-
James Rasell authored
docs: add 1.2.0 HCLv2 strict parsing upgrade note.
-
- 03 Jan, 2022 5 commits
-
-
Tim Gross authored
-
James Rasell authored
-
Tim Gross authored
-
Kevin Schoonover authored
-
Tim Gross authored
Add `per_page` and `next_token` handling to `Deployment.List` RPC, and allow the use of a wildcard namespace for namespace filtering.
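For example, a client could page through deployments across all namespaces like this, assuming the Go `api` package surfaces these options on `QueryOptions` the way it does for other paginated endpoints:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}
	opts := &api.QueryOptions{
		Namespace: "*", // wildcard: list deployments across all namespaces
		PerPage:   50,  // page size
	}
	for {
		deployments, meta, err := client.Deployments().List(opts)
		if err != nil {
			panic(err)
		}
		for _, d := range deployments {
			fmt.Println(d.ID, d.Status)
		}
		if meta.NextToken == "" {
			break // no more pages
		}
		opts.NextToken = meta.NextToken // resume where the last page ended
	}
}
```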
-
- 25 Dec, 2021 1 commit
-
-
Jeff Escalante authored
-
- 24 Dec, 2021 1 commit
-
-
Michael Schurter authored
-
- 23 Dec, 2021 8 commits
-
-
Noel Quiles authored
* Update @hashicorp/react-subnav
* Update <Subnav /> & <ProductDownloadsPage />
-
Michael Schurter authored
-
Michael Schurter authored
-
Michael Schurter authored
-
Jai Bhagat authored
-
Jai Bhagat authored
-
Tim Gross authored
The task runner prestart hooks take a `joincontext` so they have the option to exit early if either of two contexts is canceled: from killing the task or from client shutdown. Some tasks exit without being shut down by the server, so neither of the joined contexts ever gets canceled and we leak the `joincontext` (48 bytes) and its internal goroutine. This primarily impacts batch jobs and any task that fails or completes early, such as non-sidecar prestart lifecycle tasks. Cancel the `joincontext` after the prestart call exits to fix the leak.
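A sketch of the leak and the fix, with `joinContexts` and `runPrestart` as illustrative stand-ins for the real `joincontext` library and task runner code:

```go
package main

import (
	"context"
	"fmt"
)

// joinContexts returns a context canceled when either parent is canceled.
// Its internal goroutine only exits once one of the three contexts is done,
// which is exactly how the leak happens when neither parent ever cancels.
func joinContexts(a, b context.Context) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		defer cancel()
		select {
		case <-a.Done():
		case <-b.Done():
		case <-ctx.Done(): // an explicit cancel lets the goroutine exit
		}
	}()
	return ctx, cancel
}

// runPrestart shows the shape of the fix: cancel the joined context as soon
// as the prestart call returns, releasing its goroutine.
func runPrestart(taskCtx, shutdownCtx context.Context, hook func(context.Context)) {
	joined, cancel := joinContexts(taskCtx, shutdownCtx)
	defer cancel() // the fix: always release the joined context
	hook(joined)
}

func main() {
	runPrestart(context.Background(), context.Background(),
		func(ctx context.Context) { fmt.Println("prestart hook ran") })
}
```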
-
Tim Gross authored
The `go-getter` library was updated to 1.5.9 in #11481 to pick up a bug fix for automatically unpacking uncompressed tar archives. But that version introduced a regression in git `ref` param behavior, which was fixed in 1.5.10.
-
- 22 Dec, 2021 3 commits
-
-
Luiz Aoqui authored
-
Tim Gross authored
Adds a package `scheduler/benchmarks` with some examples of profiling and benchmarking the scheduler, along with helpers for loading real-world data for profiling. This tooling comes out of work done for #11712. These test benchmarks have not been added to CI because these particular profiles are mostly examples and the runs will add an excessive amount of time to CI runs for code that rarely changes in a way that has any chance of impacting performance.
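A self-contained example of the kind of benchmark this enables; the scoring function here is a toy stand-in, not Nomad's scheduler or the new package's helpers. A CPU profile can be captured with `go test -bench=. -benchmem -cpuprofile cpu.out`:

```go
package benchmarks

import (
	"math/rand"
	"testing"
)

// scoreNodes is a toy stand-in for scheduler scoring work so this
// benchmark is runnable on its own.
func scoreNodes(free []float64, need float64) int {
	best, bestScore := -1, -1.0
	for i, f := range free {
		if f < need {
			continue // infeasible node
		}
		if s := need / f; s > bestScore { // tighter fit scores higher
			best, bestScore = i, s
		}
	}
	return best
}

func BenchmarkScoreNodes(b *testing.B) {
	rng := rand.New(rand.NewSource(1))
	free := make([]float64, 5000) // simulate a large cluster
	for i := range free {
		free[i] = rng.Float64() * 100
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		scoreNodes(free, 8.0)
	}
}
```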
-
Alex Carpenter authored
fix: redirects website `/home` to `/`
-
- 21 Dec, 2021 9 commits
-
-
Luiz Aoqui authored
-
Shishir authored
-
Tim Gross authored
When the scheduler picks a node for each evaluation, the `LimitIterator` provides at most 2 eligible nodes for the `MaxScoreIterator` to choose from. This keeps scheduling fast while producing acceptable results because the results are binpacked. Jobs with a `spread` block (or node affinity) remove this limit in order to produce correct spread scoring. This means that every allocation within a job with a `spread` block is evaluated against _all_ eligible nodes.

Operators of large clusters have reported that jobs with `spread` blocks that are eligible on a large number of nodes can take longer than the nack timeout (60s) to evaluate. Typical evaluations are processed in milliseconds.

In practice, it's not necessary to evaluate every eligible node for every allocation on large clusters, because the `RandomIterator` at the base of the scheduler stack produces enough variation in each pass that the likelihood of an uneven spread is negligible. Note that feasibility is checked before the limit, so this only impacts the number of _eligible_ nodes available for scoring, not the total number of nodes.

This changeset sets the iterator limit for "large" `spread` block and node affinity jobs to be equal to the number of desired allocations. This brings an example problematic job evaluation down from ~3min to ~10s. The included tests ensure that we have acceptable spread results across a variety of large cluster topologies.
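The core of the change, sketched as a pure function with illustrative names rather than the actual iterator code:

```go
package main

import "fmt"

// scoreLimit sketches the changeset's logic: jobs with spread or node
// affinity get a scoring limit equal to their desired allocation count,
// rather than scoring every eligible node in the cluster.
func scoreLimit(hasSpreadOrAffinity bool, desiredAllocs, defaultLimit int) int {
	if !hasSpreadOrAffinity {
		return defaultLimit // normally 2 nodes feed the max-score iterator
	}
	if desiredAllocs > defaultLimit {
		return desiredAllocs
	}
	return defaultLimit
}

func main() {
	fmt.Println(scoreLimit(false, 300, 2)) // 2: plain binpacking path
	fmt.Println(scoreLimit(true, 300, 2))  // 300: spread job on a large cluster
}
```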
-
Jai Bhagat authored
-
Jai Bhagat authored
-
Jai Bhagat authored
-
Jai Bhagat authored
-
Jai Bhagat authored
-
Jai Bhagat authored
-