This project is mirrored from https://gitee.com/mirrors/nomad.git.
Pull mirroring failed.
Repository mirroring has been paused due to too many failed attempts. It can be resumed by a project maintainer.
- 07 Apr, 2020 1 commit
  - Nick Ethier authored
- 06 Apr, 2020 21 commits
  - Nick Ethier authored: tr/service_hook: prevent Update from running before Poststart finish
  - Tim Gross authored
  - Michael Lange authored: UI: Change CSI to Storage and mark it as beta
  - Nick Ethier authored
  - Buck Doyle authored: This closes #7456. It hides the terminal when the job is dead and displays an error when trying to open an exec session for a task that isn’t running. There’s a skipped test for the latter behaviour that I’ll have to come back for.
  - Buck Doyle authored: This closes #7454. It makes use of the existing watchable tools to allow the exec popup sidebar to be live-updating. It also adds alphabetic sorting of task groups and tasks.
  - Drew Bailey authored: add note about query params for filtering
  - Drew Bailey authored
  - Drew Bailey authored: Fixes bug that prevented group shutdown_delay updates
  - Michael Lange authored: UI: CSI Acceptance Tests
  - Drew Bailey authored
  - Drew Bailey authored
  - Tim Gross authored
  - Drew Bailey authored
  - Drew Bailey authored
  - Drew Bailey authored: Group shutdown delay updates were not properly handled in the Update hook. This commit also ensures that plan output is displayed.
  - Tim Gross authored
  - Tim Gross authored: The `Job.Deregister` call will block on the client CSI controller RPCs while the alloc still exists on the Nomad client node, so we need to make the volume claim reaping async from `Job.Deregister`. This allows `nomad job stop` to return immediately. To make this work, this changeset changes the volume GC so that GC jobs are on a by-volume basis rather than a by-job basis; we won't have to query the (possibly deleted) job at the time of volume GC. We smuggle the volume ID and whether it's a purge into the GC eval ID the same way we smuggled the job ID previously.
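A minimal Go sketch of the ID-smuggling scheme described above. The `csi-volume-claim-gc` prefix, the `:` separator, and both function names are assumptions for illustration, not Nomad's actual encoding:

```go
package main

import (
	"fmt"
	"strings"
)

// volumeGCEvalID encodes a volume ID and a purge flag into a GC eval ID,
// so the GC worker can recover both without querying the (possibly
// deleted) job. The format here is hypothetical.
func volumeGCEvalID(volID string, purge bool) string {
	return fmt.Sprintf("csi-volume-claim-gc:%s:%v", volID, purge)
}

// parseVolumeGCEvalID recovers the volume ID and purge flag from an
// eval ID produced by volumeGCEvalID; ok is false for other eval IDs.
func parseVolumeGCEvalID(evalID string) (volID string, purge bool, ok bool) {
	parts := strings.Split(evalID, ":")
	if len(parts) != 3 || parts[0] != "csi-volume-claim-gc" {
		return "", false, false
	}
	return parts[1], parts[2] == "true", true
}

func main() {
	id := volumeGCEvalID("vol-1234", true)
	vol, purge, ok := parseVolumeGCEvalID(id)
	fmt.Println(id, vol, purge, ok)
}
```

The round trip is the point: the server encodes at deregister time and the core scheduler decodes at GC time, with no shared state beyond the eval ID itself.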
  - Tim Gross authored: The CSI plugins use the external volume ID for all operations, but the client CSI RPCs use the Nomad volume ID (human-friendly) for the mount paths. Pass the external ID as an arg in the RPC call so that the unpublish workflows have it without calling back to the server to find it. The controller CSI plugins need the CSI node ID (in other words, the storage provider's view of the node ID, like the EC2 instance ID), not the Nomad node ID, to determine how to detach the external volume.
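The commit above is about which identifier travels where. A hedged sketch of the argument shape it implies; the struct, field, and function names are illustrative, not Nomad's actual RPC types:

```go
package main

import "fmt"

// DetachVolumeArgs is a hypothetical stand-in for the unpublish RPC
// arguments: the client needs the Nomad volume ID for mount paths,
// while the controller plugin needs the provider's external volume ID
// and CSI node ID (e.g. an EBS volume ID and an EC2 instance ID).
type DetachVolumeArgs struct {
	VolumeID        string // Nomad's human-friendly volume ID (mount paths)
	ExternalID      string // storage provider's volume ID (detach calls)
	ClientCSINodeID string // provider's view of the node, not the Nomad node ID
}

// detachTarget shows which identifiers the controller plugin actually
// consumes: the external volume and node IDs, never the Nomad ones.
func detachTarget(args DetachVolumeArgs) string {
	return fmt.Sprintf("detach %s from %s", args.ExternalID, args.ClientCSINodeID)
}

func main() {
	args := DetachVolumeArgs{
		VolumeID:        "database",
		ExternalID:      "vol-0abc123",
		ClientCSINodeID: "i-0def456",
	}
	fmt.Println(detachTarget(args))
}
```

Carrying `ExternalID` in the args is what lets the unpublish workflow avoid a second round trip to the server just to resolve the external ID.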
  - Tim Gross authored
  - Mahmood Ali authored: Authenticate alloc/exec websocket requests
- 05 Apr, 2020 3 commits
  - Michael Lange authored
  - Michael Lange authored
  - Michael Lange authored
- 04 Apr, 2020 9 commits
  - Michael Lange authored: It was designed to be used this way, but allocationFor has never worked as intended
  - Michael Lange authored
  - Michael Lange authored
  - Michael Lange authored
  - Michael Lange authored
  - Michael Lange authored
  - Michael Lange authored
  - Michael Lange authored
  - Michael Lange authored
- 03 Apr, 2020 6 commits
  - Michael Lange authored: UI Configurable Page Sizes
  - Lang Martin authored:
    * nomad/state/state_store: error message copy/paste error
    * nomad/structs/structs: add a VolumeEval to the JobDeregisterResponse
    * nomad/job_endpoint: synchronously volumeClaimReap on job Deregister
    * nomad/core_sched: make volumeClaimReap available without a CoreSched
    * nomad/job_endpoint: Deregister returns early if the job is missing
    * nomad/job_endpoint_test: job Deregistration is idempotent
    * nomad/core_sched: conditionally ignore alloc status in volumeClaimReap
    * nomad/job_endpoint: volumeClaimReap all allocations, even running
    * nomad/core_sched_test: extra argument to collectClaimsToGCImpl
    * nomad/job_endpoint: job deregistration is not idempotent
  - Mahmood Ali authored: tests: deflake TestAutopilot_RollingUpdate
  - Mahmood Ali authored:
    I hypothesize that the flakiness in the rolling update is due to shutting down the s3 server before s4 is properly added as a voter. The chain of the flakiness is as follows:
    1. Bootstrap with s1, s2, s3
    2. Add s4
    3. Wait for servers to register with 3 voting peers
       * But we already have 3 voters (s1, s2, and s3)
       * s4 is added as a non-voter in Raft v3 and must wait until autopilot promotes it
    4. Test proceeds without s4 being a voter
    5. s3 shuts down
    6. Cluster changes stall due to leader election and too many pending configuration changes (e.g. removing s3 from raft, promoting s4)

    Here, I have the test wait until s4 is marked as a voter before shutting down s3, so we don't have too many configuration changes at once. In https://circleci.com/gh/hashicorp/nomad/57092, I noticed the following events:

    ```
    TestAutopilot_RollingUpdate: autopilot_test.go:204: adding server s4
    TestAutopilot_RollingUpdate: testlog.go:34: 2020-04-03T20:08:19.789Z [INFO] nomad/serf.g...
    ```
  - Tim Gross authored: When `nomad job inspect` encodes the response, if the decoded JSON from the API doesn't exactly match the API struct, the field value will be omitted even if it has a value. We only want the JSON struct tag to be `omitempty`.
  - Tim Gross authored: This changeset:
    * adds eval status to the error messages emitted when we have placement failure in tests. The implementation here isn't quite perfect, but it's a lot better than "condition not met".
    * enforces the ordering of teardown of the CSI test
    * doesn't pass the purge flag to one of the two CSI tests, so that we exercise both code paths.