  1. 10 Nov, 2022 1 commit
    • template: protect use of template manager with a lock (#15192) · 00c8cd37
      Seth Hoenig authored
      This PR protects access to `templateHook.templateManager` with its lock. So
      far we have not been able to reproduce the panic - but it seems either Poststart
      is running without a Prestart being run first (should be impossible), or the
      Update hook is running concurrently with Poststart, nil-ing out the templateManager
      in a race with Poststart.
      
      Fixes #15189
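      A rough sketch of the shape of the fix, with the field and hook bodies simplified from the real `templateHook`:

      ```go
      package template

      import "sync"

      // templateManager stands in for the real template manager type.
      type templateManager struct{}

      type templateHook struct {
          managerLock     sync.Mutex
          templateManager *templateManager
      }

      // Update can race with Poststart, so both take the lock before
      // touching templateManager.
      func (h *templateHook) Update() {
          h.managerLock.Lock()
          defer h.managerLock.Unlock()
          // e.g. the update path tears down and rebuilds the manager
          h.templateManager = nil
      }

      func (h *templateHook) Poststart() error {
          h.managerLock.Lock()
          defer h.managerLock.Unlock()
          if h.templateManager == nil {
              // without the lock and this check, we'd hit the panic from #15189
              return nil
          }
          // ... use h.templateManager safely under the lock ...
          return nil
      }
      ```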
  2. 08 Nov, 2022 2 commits
    • make: add target cl for creating changelog entries (#15186) · 72d58fcf
      Seth Hoenig authored
      
      * make: add target cl for creating changelog entries
      
      This PR adds `tools/cl-entry` and the `make cl` Makefile target for
      conveniently creating correctly formatted Changelog entries.
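      As a rough illustration (the real `tools/cl-entry` may differ), such a helper just writes a go-changelog style entry to `.changelog/<PR>.txt`; the note type and argument handling here are assumptions:

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
          "strings"
      )

      func main() {
          if len(os.Args) != 3 {
              fmt.Fprintln(os.Stderr, "usage: cl-entry <pr-number> <note text>")
              os.Exit(1)
          }
          pr, note := os.Args[1], os.Args[2]

          // go-changelog entries are fenced release-note blocks in .changelog/<PR>.txt
          fence := strings.Repeat("`", 3)
          body := fmt.Sprintf("%srelease-note:improvement\n%s\n%s\n", fence, note, fence)
          path := filepath.Join(".changelog", pr+".txt")
          if err := os.WriteFile(path, []byte(body), 0o644); err != nil {
              fmt.Fprintln(os.Stderr, err)
              os.Exit(1)
          }
          fmt.Println("wrote", path)
      }
      ```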
      
      * Update tools/cl-entry/main.go
      Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
      
      * Update tools/cl-entry/main.go
      Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
      Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
    • api: remove `mapstructure` tags from `Port` struct (#12916) · 7e8306e4
      Derek Strickland authored
      
      This PR solves a defect in the deserialization of `api.Port` structs when returning structs from the EventStream.
      
      Previously, the `api.Port` struct's fields were decorated with both `mapstructure` and `hcl` tags to support the `network.port` stanza's use of the keyword `static` when posting a static port value. This works fine when posting a job, and when retrieving any struct that has an embedded `api.Port` instance, as long as the value is deserialized using JSON decoding. The EventStream, however, uses `mapstructure` to decode event payloads in the `api` package. `mapstructure` expects an underlying field named `static`, which does not exist. The result was that the `Port.Value` field would always be set to 0.
      
      Upon further inspection, a few things became apparent:
      
      * The struct already has `hcl` tags that support the indirection during job submission.
      * Serialization/deserialization with both the `json` and `hcl` packages produces the desired result.
      * The use of the `mapstructure` tags provided no value, as the `Port` struct contains only fields with primitive types.
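      As a sketch, the struct ends up looking roughly like this (field set and tag values assumed from the description above):

      ```go
      package api

      // Port after the change: the mapstructure tags (e.g. `mapstructure:"static"`
      // on Value) are removed, and the hcl tags alone handle the job-spec
      // indirection where `static = <port>` sets Value.
      type Port struct {
          Label       string `hcl:",label"`
          Value       int    `hcl:"static,optional"`
          To          int    `hcl:"to,optional"`
          HostNetwork string `hcl:"host_network,optional"`
      }
      ```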
      This PR:
      
      * Removes the `mapstructure` tags from the `api.Port` structs.
      * Updates the job parsing logic to use `hcl` instead of `mapstructure` when decoding `Port` instances.
      Closes #11044
      Co-authored-by: DerekStrickland <dstrickland@hashicorp.com>
      Co-authored-by: Piotr Kazmierczak <470696+pkazmierczak@users.noreply.github.com>
  3. 07 Nov, 2022 8 commits
  4. 06 Nov, 2022 1 commit
  5. 04 Nov, 2022 4 commits
    • Update alloc after reconnect and enforce client heartbeat order (#15068) · 7828c02a
      Luiz Aoqui authored
      * scheduler: allow updates after alloc reconnects
      
      When an allocation reconnects to a cluster the scheduler needs to run
      special logic to handle the reconnection, check if a replacement was
      created, and stop one of them.
      
      If the allocation kept running while the node was disconnected, it will
      be reconnected with `ClientStatus: running` and the node will have
      `Status: ready`. This combination is the same as the normal steady state
      of an allocation, where everything is running as expected.
      
      In order to differentiate between the two states (an allocation that is
      reconnecting and one that is just running) the scheduler needs an extra
      piece of state.
      
      The current implementation uses the presence of a
      `TaskClientReconnected` task event to detect when the allocation has
      reconnected and thus must go through the reconnection process. But this
      event remains even after the allocation is reconnected, causing all
      future evals to consider the allocation as still reconnecting.
      
      This commit changes the reconnect logic to use an `AllocState` to
      register when the allocation was reconnected. This provides the
      following benefits:
      
        - Only a limited number of task states are kept, and they are used for
          many other events. It's possible that, upon reconnecting, several
          actions are triggered that could cause the `TaskClientReconnected`
          event to be dropped.
        - Task events are set by clients and so their timestamps are subject
          to time skew from servers. This prevents using time to determine if
          an allocation reconnected after a disconnect event.
        - Disconnect events are already stored as `AllocState` and so storing
          reconnects there as well makes it the only source of information
          required.
      
      With the new approach, the reconnection logic is only triggered if the
      last `AllocState` is a disconnect event, meaning that the allocation has
      not been reconnected yet. After the reconnection is handled, the new
      `ClientStatus` is stored in `AllocState`, allowing future evals to skip
      the reconnection logic.
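      A simplified sketch of that check (type names and status values approximate; the real implementation lives on the allocation struct):

      ```go
      package structs

      // AllocState records a client-status transition for an allocation.
      type AllocState struct {
          Field string // e.g. "ClientStatus"
          Value string // e.g. "unknown", "running"
      }

      type Allocation struct {
          ClientStatus string
          AllocStates  []*AllocState
      }

      // needsToReconnect walks the AllocState events newest-first: if the most
      // recent client-status transition is "unknown" (a disconnect), the alloc
      // has not gone through the reconnect logic yet.
      func (a *Allocation) needsToReconnect() bool {
          for i := len(a.AllocStates) - 1; i >= 0; i-- {
              s := a.AllocStates[i]
              if s.Field != "ClientStatus" {
                  continue
              }
              return s.Value == "unknown"
          }
          return false
      }
      ```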
      
      * scheduler: prevent spurious placement on reconnect
      
      When a client reconnects it makes two independent RPC calls:
      
        - `Node.UpdateStatus` to heartbeat and set its status as `ready`.
        - `Node.UpdateAlloc` to update the status of its allocations.
      
      These two calls can happen in any order, and if the allocations are
      updated before a heartbeat, the state looks the same as a node
      being disconnected: the node status will still be `disconnected` while
      the allocation `ClientStatus` is set to `running`.
      
      The current implementation did not handle this order of events properly,
      and the scheduler would create an unnecessary placement, since it
      considered the allocation to be disconnecting. This extra allocation
      would then be quickly stopped by the heartbeat eval.
      
      This commit adds a new code path to handle this order of events. If the
      node is `disconnected` and the allocation `ClientStatus` is `running`
      the scheduler will check if the allocation is actually reconnecting
      using its `AllocState` events.
      
      * rpc: only allow alloc updates from `ready` nodes
      
      Clients interact with servers using three main RPC methods:
      
        - `Node.GetAllocs` reads allocation data from the server and writes it
          to the client.
        - `Node.UpdateAlloc` reads allocation data from the client and writes
          it to the server.
        - `Node.UpdateStatus` writes the client status to the server and is
          used as the heartbeat mechanism.
      
      These three methods are called periodically by the clients, and they are
      made independently of each other, meaning that there can't be any
      assumptions about their ordering.
      
      This can generate scenarios that are hard to reason about and to code
      for. For example, when a client misses too many heartbeats it will be
      considered `down` or `disconnected` and the allocations it was running
      are set to `lost` or `unknown`.
      
      When connectivity to the rest of the cluster is restored, the natural
      mental model is to think that the client will heartbeat first and then
      update its allocation statuses on the servers.
      
      But since there's no inherent order in these calls, the reverse is just as
      possible: the client updates the alloc status and then heartbeats. This
      results in a state where allocs are, for example, `running` while the
      client is still `disconnected`.
      
      This commit adds a new verification to the `Node.UpdateAlloc` method to
      reject updates from nodes that are not `ready`, forcing clients to
      heartbeat first. Since this check is done server-side there is no need
      to coordinate operations client-side: they can continue sending these
      requests independently, and the alloc update will succeed after the
      heartbeat is done.
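      The guard reduces to something like this sketch (names and status string assumed):

      ```go
      package nomad

      import "fmt"

      const NodeStatusReady = "ready"

      // Node is a stand-in for the state-store node object.
      type Node struct{ ID, Status string }

      // checkNodeReady sketches the new Node.UpdateAlloc verification: updates
      // from a node that hasn't heartbeated to "ready" are rejected, forcing
      // Node.UpdateStatus to land before Node.UpdateAlloc.
      func checkNodeReady(node *Node) error {
          if node == nil {
              return fmt.Errorf("node not found")
          }
          if node.Status != NodeStatusReady {
              return fmt.Errorf("node %s is %s, not %s; update rejected",
                  node.ID, node.Status, NodeStatusReady)
          }
          return nil
      }
      ```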
      
      * changelog: add entry for #15068
      
      * code review
      
      * client: skip terminal allocations on reconnect
      
      When the client reconnects with the server it synchronizes the state of
      its allocations by sending data using the `Node.UpdateAlloc` RPC and
      fetching data using the `Node.GetClientAllocs` RPC.
      
      If the data fetch happens before the data write, `unknown` allocations
      will still be in this state and will trigger the
      `allocRunner.Reconnect` flow.
      
      But when the server `DesiredStatus` for the allocation is `stop`, the
      client should not reconnect the allocation.
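      In sketch form (field values assumed from the description):

      ```go
      package client

      // allocUpdate carries the server's view of an allocation.
      type allocUpdate struct{ DesiredStatus string }

      // shouldReconnect filters the reconnect flow: if the server already wants
      // the allocation stopped or evicted, skip allocRunner.Reconnect for it.
      func shouldReconnect(a *allocUpdate) bool {
          switch a.DesiredStatus {
          case "stop", "evict":
              return false
          default:
              return true
          }
      }
      ```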
      
      * apply more code review changes
      
      * scheduler: persist changes to reconnected allocs
      
      Reconnected allocs have a new AllocState entry that must be persisted by
      the plan applier.
      
      * rpc: read node ID from allocs in UpdateAlloc
      
      The AllocUpdateRequest struct is used in three disjoint use cases:
      
      1. Stripped allocs from a client's `Node.UpdateAlloc` RPC, using the
         `Allocs` and `WriteRequest` fields
      2. Raft log messages, using the `Allocs`, `Evals`, and `WriteRequest` fields
      3. Plan updates, using the `AllocsStopped`, `AllocsUpdated`, and `Job` fields
      
      Adding a new field that would only be used in one of these cases (case 1)
      made things more confusing and error-prone. While in theory an
      `AllocUpdateRequest` could send allocations from different nodes, in
      practice this never actually happens, since only clients call this method,
      and only with their own allocations.
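      So instead of a new request field, the node ID can be read off the payload itself, roughly:

      ```go
      package nomad

      import "errors"

      // Allocation is trimmed to the one field this sketch needs.
      type Allocation struct{ NodeID string }

      // nodeIDFromAllocs derives the node ID from the allocs in the request.
      // In practice clients only send their own allocations, so every alloc
      // carries the same NodeID and the first one suffices.
      func nodeIDFromAllocs(allocs []*Allocation) (string, error) {
          if len(allocs) == 0 {
              return "", errors.New("no allocations in update request")
          }
          return allocs[0].NodeID, nil
      }
      ```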
      
      * scheduler: remove logic to handle exceptional case
      
      This condition could only be hit if, somehow, the allocation status was
      set to "running" while the client was "unknown". This was addressed by
      enforcing an order in "Node.UpdateStatus" and "Node.UpdateAlloc" RPC
      calls, so this scenario is not expected to happen.
      
      Adding unnecessary code to the scheduler makes it harder to read and
      reason about.
      
      * more code review
      
      * remove another unused test
    • client: retry RPC call when no server is available (#15140) · f33bb5ec
      Luiz Aoqui authored
      When a Nomad service starts, it tries to establish a connection with
      servers, but it also runs alloc runners to manage whatever allocations
      it needs to run.
      
      The alloc runner will invoke several hooks to perform actions, with some
      of them requiring access to the Nomad servers, such as Native Service
      Discovery Registration.
      
      If the alloc runner starts before a connection is established, the alloc
      runner will fail, causing the allocation to be shut down. This is
      particularly problematic for disconnected allocations that are
      reconnecting, as they may fail as soon as the client reconnects.
      
      This commit changes the RPC request logic to retry it, using the
      existing retry mechanism, if there are no servers available.
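      The retry shape is roughly the following sketch; the error value and backoff are placeholders for the client's existing mechanism:

      ```go
      package client

      import (
          "errors"
          "time"
      )

      var errNoServers = errors.New("no servers available")

      // rpcRetry keeps retrying a call that failed only because no server is
      // known yet, instead of surfacing the error to alloc runner hooks.
      func rpcRetry(call func() error, deadline time.Time) error {
          for {
              err := call()
              if err == nil || !errors.Is(err, errNoServers) {
                  return err // success, or a real error worth returning
              }
              if time.Now().After(deadline) {
                  return err // give up eventually
              }
              time.Sleep(time.Second) // stand-in for the existing backoff
          }
      }
      ```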
    • template: error on missing key (#15141) · 52a254ba
      Charlie Voiselle authored
      * Support `error_on_missing_key` for templates
      * Update docs for template stanza
    • e2e: explicitly wait on task status in chroot download exec test (#15145) · 3c17552d
      Seth Hoenig authored
      Also add some debug log lines for this test, because it doesn't make sense
      for the allocation to be complete while a task in the allocation has not
      started, which is what the test failures are implying.
  6. 03 Nov, 2022 5 commits
  7. 02 Nov, 2022 2 commits
  8. 01 Nov, 2022 4 commits
    • build: update to go1.19.3 (#15099) · 152f8af9
      Seth Hoenig authored
    • volumewatcher: prevent panic on nil volume (#15101) · ffbae782
      Tim Gross authored
      If a GC claim is written and then the volume is deleted before the `volumewatcher`
      enters its run loop, we panic on the nil-pointer access. Simply doing a
      nil-check at the top of the loop reveals a race condition around shutting down
      the loop just as a new update is coming in.
      
      Have the parent `volumeswatcher` send an initial update on the channel before
      returning, so that we're still holding the lock. Update the watcher's `Stop`
      method to set the running state, which lets us avoid having a second context and
      makes stopping synchronous. This reduces the cases we have to handle in the run
      loop.
      
      Updated the tests now that we'll safely return from the goroutine and stop the
      runner in a larger set of cases. Ran the tests with the `-race` detection flag
      and fixed up any problems found here as well.
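      The run-loop shape after the fix, heavily simplified:

      ```go
      package volumewatcher

      import "sync"

      // CSIVolume stands in for the real volume struct.
      type CSIVolume struct{}

      type volumeWatcher struct {
          mu      sync.Mutex
          running bool
      }

      // Stop flips the running state under the lock, making shutdown
      // synchronous without needing a second context.
      func (w *volumeWatcher) Stop() {
          w.mu.Lock()
          defer w.mu.Unlock()
          w.running = false
      }

      // watch is the run loop: a nil volume (deleted between the GC claim
      // being written and the loop starting) now returns cleanly instead of
      // panicking on a nil-pointer access.
      func (w *volumeWatcher) watch(updateCh <-chan *CSIVolume) {
          defer w.Stop()
          for vol := range updateCh {
              if vol == nil {
                  return
              }
              // ... process the volume's claims ...
          }
      }
      ```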
    • variables: limit rekey eval to half the nack timeout (#15102) · 18cb9c76
      Tim Gross authored
      In order to limit how much the rekey job can monopolize a scheduler worker, we
      limit how long it can run to 1min before stopping work and emitting a new
      eval. But this exactly matches the default nack timeout, so it'll fail the eval
      rather than getting a chance to emit a new one.
      
      Set the timeout for the rekey eval to half the configured nack timeout.
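      In other words (names assumed):

      ```go
      package nomad

      import (
          "context"
          "time"
      )

      // rekeyWindow bounds the rekey eval's work at half the configured nack
      // timeout, leaving room to emit a follow-up eval before the scheduler
      // worker would be nacked (a fixed 1min exactly matched the default).
      func rekeyWindow(nackTimeout time.Duration) (context.Context, context.CancelFunc) {
          return context.WithTimeout(context.Background(), nackTimeout/2)
      }
      ```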
    • keyring: safely handle missing keys and restore GC (#15092) · 6b2da83f
      Tim Gross authored
      When replication of a single key fails, the replication loop breaks early and
      therefore keys that fall later in the sorting order will never get
      replicated. This is particularly a problem for clusters impacted by the bug that
      caused #14981 and that were later upgraded; the keys that were never replicated
      can now never be replicated, and so we need to handle them safely.
      
      Included in the replication fix:
      * Refactor the replication loop so that each key is replicated in a
        function call that returns an error, to make the workflow clearer and
        reduce nesting. Log the error and continue.
      * Improve stability of keyring replication tests. We no longer block
        leadership on initializing the keyring, so there's a race condition in
        the keyring tests where we can test for the existence of the root key
        before the keyring has been initialized. Change this to an "eventually"
        test.
      
      But these fixes aren't enough to fix #14981, because affected clusters
      will still see an error once a second complaining about the missing key,
      so we also need to fix keyring GC so the keys can be removed from the
      state store. Now we store the key ID used to sign a workload identity in
      the Allocation, and we index the Allocation table on that so we can track
      whether any live Allocation was signed with a particular key ID.
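      The loop refactor described above reduces to this sketch:

      ```go
      package nomad

      import "log"

      // RootKeyMeta is trimmed to the one field this sketch needs.
      type RootKeyMeta struct{ KeyID string }

      // replicateKey stands in for the per-key fetch-and-store work.
      func replicateKey(meta *RootKeyMeta) error { return nil }

      // replicateAll replicates each key in its own call; a failure is logged
      // and skipped rather than breaking the loop, so keys later in the sort
      // order still get replicated.
      func replicateAll(keys []*RootKeyMeta) {
          for _, meta := range keys {
              if err := replicateKey(meta); err != nil {
                  log.Printf("failed to replicate key %s: %v", meta.KeyID, err)
                  continue
              }
          }
      }
      ```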
  9. 31 Oct, 2022 6 commits
  10. 28 Oct, 2022 1 commit
    • refactor eval delete safety check (#15070) · 8c19a126
      Tim Gross authored
      The `Eval.Delete` endpoint has a helper that takes a list of jobs and allocs and
      determines whether the eval associated with those is safe to delete (based on
      their state). Filtering improvements to the `Eval.Delete` endpoint are going to
      need this check to run in the state store itself for consistency.
      
      Refactor to push this check down into the state store to keep the eventual diff
      for that work reasonable.
  11. 27 Oct, 2022 6 commits