  1. 17 Nov, 2022 2 commits
    • keyring: update handle to state inside replication loop (#15227) · f54a50bb
      Tim Gross authored
      * keyring: update handle to state inside replication loop
      
      When keyring replication starts, we take a handle to the state store. But
      whenever a snapshot is restored, this handle is invalidated and no longer points
      to a state store that is receiving new keys. This leaks a bunch of memory too!
      
      In addition to operator-initiated restores, when fresh servers are added to
      existing clusters with large-enough state, keyring replication can start
      quickly enough that it's running before the snapshot from the existing
      cluster has been restored.
      
      Fix this by updating the handle to the state store on each pass.
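
      A minimal Go sketch of that pattern (type and method names here are
      illustrative, not Nomad's actual keyring code): rather than capturing the
      state store handle once before the loop, re-fetch it on every pass so a
      snapshot restore that swaps the underlying store is picked up.

```go
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

type stateStore struct{ keys []string }

// server holds a swappable pointer to the current state store; a snapshot
// restore replaces the store wholesale.
type server struct{ store atomic.Pointer[stateStore] }

func (s *server) State() *stateStore { return s.store.Load() }

func (s *server) replicateKeys(stopCh <-chan struct{}) {
    ticker := time.NewTicker(10 * time.Millisecond)
    defer ticker.Stop()
    for {
        select {
        case <-stopCh:
            return
        case <-ticker.C:
            store := s.State() // refresh the handle on each pass
            fmt.Println("replicating keys:", store.keys)
        }
    }
}

func main() {
    srv := &server{}
    srv.store.Store(&stateStore{keys: []string{"old-key"}})

    stopCh := make(chan struct{})
    go srv.replicateKeys(stopCh)

    time.Sleep(25 * time.Millisecond)
    // Simulate a snapshot restore swapping the state store; the loop sees the
    // new store because it re-fetches the handle instead of holding a stale one.
    srv.store.Store(&stateStore{keys: []string{"old-key", "new-key"}})
    time.Sleep(25 * time.Millisecond)
    close(stopCh)
}
```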
      f54a50bb
    • fix create snapshot request docs (#15242) · 322c6b3d
      Ayrat Badykov authored
      322c6b3d
  2. 16 Nov, 2022 3 commits
    • eval broker: shed all but one blocked eval per job after ack (#14621) · 1c4307b8
      Tim Gross authored
      When an evaluation is acknowledged by a scheduler, the resulting plan is
      guaranteed to cover up to the `waitIndex` set by the worker based on the most
      recent evaluation for that job in the state store. At that point, we no longer
      need to retain blocked evaluations in the broker that are older than that index.
      
      Move all but the highest priority / highest `ModifyIndex` blocked eval per job
      into a canceled set. When the `Eval.Ack` RPC returns from the eval broker, it
      signals a reap of a batch of cancelable evals to write to raft. This paces
      cancelations by how frequently the schedulers acknowledge evals, which should
      reduce the risk of cancelations overwhelming raft relative to scheduler
      progress. To avoid straggling batches when the cluster is quiet, we also
      include a periodic sweep through the cancelable list.
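
      A hedged sketch of the shedding and batch-cancel idea (not the actual broker
      code; it simplifies the "highest priority / highest ModifyIndex" rule to
      `ModifyIndex` only): keep the newest blocked eval per job, park the rest in a
      cancelable set, and reap that set in bounded batches.

```go
package main

import "fmt"

type eval struct {
    ID          string
    JobID       string
    ModifyIndex uint64
}

type broker struct {
    blocked    map[string][]eval // per-job blocked evals
    cancelable []eval            // shed evals waiting to be batch-canceled via raft
}

// shedBlocked keeps the highest-ModifyIndex blocked eval for jobID and moves
// the rest into the cancelable set.
func (b *broker) shedBlocked(jobID string) {
    evals := b.blocked[jobID]
    if len(evals) < 2 {
        return
    }
    newest := 0
    for i, e := range evals {
        if e.ModifyIndex > evals[newest].ModifyIndex {
            newest = i
        }
    }
    for i, e := range evals {
        if i != newest {
            b.cancelable = append(b.cancelable, e)
        }
    }
    b.blocked[jobID] = []eval{evals[newest]}
}

// reapCancelable returns up to batchSize evals to write to raft as canceled,
// pacing cancelations with scheduler acks (the periodic sweep is not shown).
func (b *broker) reapCancelable(batchSize int) []eval {
    n := batchSize
    if n > len(b.cancelable) {
        n = len(b.cancelable)
    }
    batch := b.cancelable[:n]
    b.cancelable = b.cancelable[n:]
    return batch
}

func main() {
    b := &broker{blocked: map[string][]eval{
        "web": {{ID: "e1", JobID: "web", ModifyIndex: 10}, {ID: "e2", JobID: "web", ModifyIndex: 12}, {ID: "e3", JobID: "web", ModifyIndex: 11}},
    }}
    b.shedBlocked("web")
    fmt.Println("kept:", b.blocked["web"], "canceled batch:", b.reapCancelable(25))
}
```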
      1c4307b8
    • e2e: swap bionic image for jammy (#15220) · 0e3606af
      Seth Hoenig authored
      0e3606af
    • test: ensure leader is still valid in reelection test (#15267) · 460f19b6
      Tim Gross authored
      The `TestLeader_Reelection` test waits for a leader to be elected and then makes
      some other assertions. But it implicitly assumes that there's no failure of
      leadership before shutting down the leader, which can lead to a panic in the
      tests. Assert there's still a leader before the shutdown.
      460f19b6
  3. 15 Nov, 2022 4 commits
  4. 14 Nov, 2022 3 commits
    • eval delete: move batching of deletes into RPC handler and state (#15117) · 65b3d01a
      Tim Gross authored
      During unusual outage recovery scenarios on large clusters, a backlog of
      millions of evaluations can appear. In these cases, the `eval delete` command can
      put excessive load on the cluster by listing large sets of evals to extract the
      IDs and then sending large batches of IDs. Although the command's batch size
      was carefully tuned, we still need to JSON-deserialize, re-serialize to
      MessagePack, send the log entries through raft, and get the FSM applied.
      
      To improve performance of this recovery case, move the batching process into the
      RPC handler and the state store. The design here is a little weird, so let's
      look at the failed options first:
      
      * A naive solution here would be to just send the filter as the raft request and
        let the FSM apply delete the whole set in a single operation. Benchmarking with
        1M evals on a 3 node cluster demonstrated this can block the FSM apply for
        several minutes, which puts the cluster at risk if there's a leadership
        failover (the barrier write can't be made while this apply is in-flight).
      
      * A less naive but still bad solution would be to have the RPC handler filter
        and paginate, and then hand a list of IDs to the existing raft log
        entry. Benchmarks showed this blocked the FSM apply for 20-30s at a time and
        took roughly an hour to complete.
      
      Instead, we're filtering and paginating in the RPC handler to find a page token,
      and then passing both the filter and page token in the raft log. The FSM apply
      recreates the paginator using the filter and page token to get roughly the same
      page of evaluations, which it then deletes. The pagination process is fairly
      cheap (only about 5% of the total FSM apply time), so counter-intuitively this
      rework ends up being much faster. A benchmark of 1M evaluations showed this
      blocked the FSM apply for 20-30ms at a time (typical for normal operations) and
      completes in less than 4 minutes.
      
      Note that, as with the existing design, this delete is not consistent: a new
      evaluation inserted "behind" the cursor of the pagination will fail to be
      deleted.
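
      A simplified, hedged sketch of the scheme (not the real RPC/FSM code): the
      raft entry carries the filter plus a page token rather than a list of IDs,
      and the apply side re-runs the pagination to delete one bounded page at a
      time, keeping each FSM apply short.

```go
package main

import (
    "fmt"
    "sort"
    "strings"
)

// deleteRequest is what this sketch would send through raft: a filter and the
// page token the RPC handler computed, instead of millions of eval IDs.
type deleteRequest struct {
    Filter    string // e.g. match evals whose ID has this prefix
    PageToken string // resume deleting from this ID
    PageSize  int
}

// applyDeletePage deletes one page of matching evals from the toy "state" and
// returns the token for the next page ("" when done).
func applyDeletePage(state map[string]bool, req deleteRequest) string {
    ids := make([]string, 0, len(state))
    for id := range state {
        if strings.HasPrefix(id, req.Filter) && id >= req.PageToken {
            ids = append(ids, id)
        }
    }
    sort.Strings(ids)
    for i, id := range ids {
        if i == req.PageSize {
            return id // next page token
        }
        delete(state, id)
    }
    return ""
}

func main() {
    state := map[string]bool{"eval-a1": true, "eval-a2": true, "eval-a3": true, "eval-b1": true}
    token := ""
    for {
        token = applyDeletePage(state, deleteRequest{Filter: "eval-a", PageToken: token, PageSize: 2})
        if token == "" {
            break
        }
    }
    fmt.Println("remaining:", state) // only the non-matching eval is left
}
```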
      65b3d01a
    • Fix wrong reference to `vault` (#15228) · 1217a96e
      Douglas Jose authored
      1217a96e
    • Kyle Root · 263ed6f9
  5. 11 Nov, 2022 3 commits
    • [bug] Return a spec on reconnect (#15214) · 9ad90290
      Charlie Voiselle authored
      client: fixed a bug where non-`docker` tasks with network isolation would leak network namespaces and iptables rules if the client was restarted while they were running
      9ad90290
    • client: avoid unconsumed channel in timer construction (#15215) · 5f3f5215
      Seth Hoenig authored
      * client: avoid unconsumed channel in timer construction
      
      This PR fixes a bug introduced in #11983 where a Timer initialized with 0
      duration causes an immediate tick, even if Reset is called before reading the
      channel. The fix is to avoid doing that, instead creating a Timer with a non-zero
      initial wait time, and then immediately calling Stop.
      
      * pr: remove redundant stop
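
      A minimal sketch of the construction pattern described above (the helper
      name is illustrative, not Nomad's actual function): create the timer with a
      non-zero duration and stop it immediately, so no spurious tick is queued on
      the channel before the first Reset.

```go
package main

import (
    "fmt"
    "time"
)

// newStoppedTimer returns a timer that will not fire until Reset is called.
// The non-zero initial duration avoids the immediate tick that a 0-duration
// timer would otherwise queue on its channel.
func newStoppedTimer() *time.Timer {
    t := time.NewTimer(time.Hour) // any non-zero duration works
    t.Stop()
    return t
}

func main() {
    t := newStoppedTimer()
    t.Reset(50 * time.Millisecond)
    <-t.C
    fmt.Println("fired after reset, with no spurious tick")
}
```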
      5f3f5215
    • exec: allow running commands from host volume (#14851) · 11a5f790
      Tim Gross authored
      The exec driver and other drivers derived from the shared executor check the
      path of the command before handing off to libcontainer to ensure that the
      command doesn't escape the sandbox. But we don't check any host volume mounts,
      which should be safe to use as a source for executables if we're letting the
      user mount them to the container in the first place.
      
      Check the mount config to verify the executable lives in the mount's host path,
      but then return an absolute path within the mount's task path so that we can hand
      that off to libcontainer to run.
      
      Includes a good bit of refactoring here because the anchoring of the final task
      path has different code paths for inside the task dir vs inside a mount. But
      I've fleshed out the test coverage of this a good bit to ensure we haven't
      created any regressions in the process.
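
      A hedged sketch of the mapping described above (names and types are
      illustrative, not the shared executor's actual code): verify the executable
      resolves inside the mount's host path, then return the equivalent absolute
      path under the mount's task path to hand to libcontainer.

```go
package main

import (
    "fmt"
    "path/filepath"
    "strings"
)

type MountConfig struct {
    HostPath string // directory on the host that is mounted into the task
    TaskPath string // where that directory appears inside the task
}

// resolveCommand rejects commands that escape the mount's host directory and
// returns the command's path as seen from inside the task.
func resolveCommand(cmd string, m MountConfig) (string, error) {
    hostAbs, err := filepath.Abs(filepath.Join(m.HostPath, cmd))
    if err != nil {
        return "", err
    }
    rel, err := filepath.Rel(m.HostPath, hostAbs)
    if err != nil || strings.HasPrefix(rel, "..") {
        return "", fmt.Errorf("command %q escapes mount %q", cmd, m.HostPath)
    }
    return filepath.Join(m.TaskPath, rel), nil
}

func main() {
    m := MountConfig{HostPath: "/opt/tools", TaskPath: "/local/tools"}
    fmt.Println(resolveCommand("bin/run.sh", m))       // allowed, rewritten to the task path
    fmt.Println(resolveCommand("../../bin/sh", m))     // rejected: escapes the mount
}
```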
      11a5f790
  6. 10 Nov, 2022 5 commits
    • docs: clarify how to access task meta values in templates (#15212) · 106dce9c
      Seth Hoenig authored
      This PR updates template and meta docs pages to give examples of accessing
      meta values in templates. To do so one must use the environment variable form
      of the meta key name, which isn't obvious and wasn't yet documented.
      106dce9c
    • Luiz Aoqui · a2fed26f
    • ci: re-enable tests on main (#15204) · e20af3cf
      Luiz Aoqui authored
      Now that the tests are grouped more tightly we don't use as many runners
      as before, so we can re-enable these without clogging the queue.
      e20af3cf
    • acl: sso auth method schema and store functions (#15191) · 02253e6f
      Piotr Kazmierczak authored
      This PR implements the ACLAuthMethod type, the acl_auth_methods table schema, and CRUD state store methods. It also updates the nomadSnapshot.Persist and nomadSnapshot.Restore methods so they work with the new table, and adds two new Raft messages: ACLAuthMethodsUpsertRequestType and ACLAuthMethodsDeleteRequestType.
      
      This PR is part of the SSO work captured under ticket #13120.
      02253e6f
    • template: protect use of template manager with a lock (#15192) · 00c8cd37
      Seth Hoenig authored
      This PR protects access to `templateHook.templateManager` with its lock. So
      far we have not been able to reproduce the panic - but it seems either Poststart
      is running without a Prestart being run first (should be impossible), or the
      Update hook is running concurrently with Poststart, nil-ing out the templateManager
      in a race with Poststart.
      
      Fixes #15189
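
      A minimal sketch of the locking pattern described above (hook and field
      names are simplified stand-ins, not the actual taskrunner code): every hook
      that touches templateManager takes the hook's mutex first, so Update cannot
      nil the manager out while Poststart is reading it.

```go
package main

import (
    "fmt"
    "sync"
)

type templateManager struct{}

type templateHook struct {
    mu              sync.Mutex
    templateManager *templateManager
}

func (h *templateHook) Poststart() error {
    h.mu.Lock()
    defer h.mu.Unlock()
    if h.templateManager == nil {
        return fmt.Errorf("template manager not initialized")
    }
    // ... use h.templateManager safely under the lock ...
    return nil
}

func (h *templateHook) Update() {
    h.mu.Lock()
    defer h.mu.Unlock()
    h.templateManager = nil // rebuild elided; nil-ing is safe under the lock
}

func main() {
    h := &templateHook{templateManager: &templateManager{}}
    go h.Update()
    fmt.Println(h.Poststart()) // no data race: both paths hold the mutex
}
```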
      00c8cd37
  7. 08 Nov, 2022 2 commits
    • make: add target cl for create changelog entry (#15186) · 72d58fcf
      Seth Hoenig authored
      
      * make: add target cl for create changelog entry
      
      This PR adds `tools/cl-entry` and the `make cl` Makefile target for
      conveniently creating correctly formatted Changelog entries.
      
      * Update tools/cl-entry/main.go
      Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
      
      * Update tools/cl-entry/main.go
      Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
      Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
      72d58fcf
    • api: remove `mapstructure` tags from `Port` struct (#12916) · 7e8306e4
      Derek Strickland authored
      
      This PR solves a defect in the deserialization of api.Port structs when returning structs from the EventStream.
      
      Previously, the api.Port struct's fields were decorated with both mapstructure and hcl tags to support the network.port stanza's use of the keyword static when posting a static port value. This works fine when posting a job and when retrieving any struct that has an embedded api.Port instance as long as the value is deserialized using JSON decoding. The EventStream, however, uses mapstructure to decode event payloads in the api package. mapstructure expects an underlying field named static which does not exist. The result was that the Port.Value field would always be set to 0.
      
      Upon further inspection, a few things became apparent:
      
        - The struct already has hcl tags that support the indirection during job submission.
        - Serialization/deserialization with both the json and hcl packages produce the desired result.
        - The use of the mapstructure tags provided no value, as the Port struct contains only fields with primitive types.
      
      This PR:
      
        - Removes the mapstructure tags from the api.Port structs.
        - Updates the job parsing logic to use hcl instead of mapstructure when decoding Port instances.
      
      Closes #11044
      Co-authored-by: DerekStrickland <dstrickland@hashicorp.com>
      Co-authored-by: Piotr Kazmierczak <470696+pkazmierczak@users.noreply.github.com>
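
      A hedged sketch of the struct shape discussed above (field set abbreviated
      and tags illustrative, not necessarily the exact api.Port definition): the
      hcl tags carry the `static` indirection used in job files, and with the
      mapstructure tags removed, payload decoding resolves Value by field name
      instead of looking for a nonexistent underlying `static` field.

```go
package main

import (
    "encoding/json"
    "fmt"
)

type Port struct {
    Label string `hcl:",label"`
    Value int    `hcl:"static,optional"` // previously also tagged `mapstructure:"static"`
    To    int    `hcl:"to,optional"`
}

func main() {
    // Field-name decoding (JSON here, and event payload decoding once the
    // mapstructure tag is gone) fills Value correctly instead of leaving it 0.
    var p Port
    _ = json.Unmarshal([]byte(`{"Label":"http","Value":8080,"To":8080}`), &p)
    fmt.Printf("%+v\n", p)
}
```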
      7e8306e4
  8. 07 Nov, 2022 8 commits
  9. 06 Nov, 2022 1 commit
  10. 04 Nov, 2022 4 commits
    • Update alloc after reconnect and enforce client heartbeat order (#15068) · 7828c02a
      Luiz Aoqui authored
      * scheduler: allow updates after alloc reconnects
      
      When an allocation reconnects to a cluster the scheduler needs to run
      special logic to handle the reconnection, check if a replacement was
      created, and stop one of them.
      
      If the allocation kept running while the node was disconnected, it will
      be reconnected with `ClientStatus: running` and the node will have
      `Status: ready`. This combination is the same as the normal steady state
      of an allocation, where everything is running as expected.
      
      In order to differentiate between the two states (an allocation that is
      reconnecting and one that is just running) the scheduler needs an extra
      piece of state.
      
      The current implementation uses the presence of a
      `TaskClientReconnected` task event to detect when the allocation has
      reconnected and thus must go through the reconnection process. But this
      event remains even after the allocation is reconnected, causing all
      future evals to consider the allocation as still reconnecting.
      
      This commit changes the reconnect logic to use an `AllocState` to
      register when the allocation was reconnected. This provides the
      following benefits:
      
        - Only a limited number of task states are kept, and they are used for
          many other events. It's possible that, upon reconnecting, several
          actions are triggered that could cause the `TaskClientReconnected`
          event to be dropped.
        - Task events are set by clients and so their timestamps are subject
          to time skew from servers. This prevents using time to determine if
          an allocation reconnected after a disconnect event.
        - Disconnect events are already stored as `AllocState` and so storing
          reconnects there as well makes it the only source of information
          required.
      
      With the new logic, the reconnection logic is only triggered if the
      last `AllocState` is a disconnect event, meaning that the allocation has
      not been reconnected yet. After the reconnection is handled, the new
      `ClientStatus` is stored in `AllocState`, allowing future evals to skip
      the reconnection logic.
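
      A hedged sketch of that check (types simplified from Nomad's actual
      AllocState representation): the scheduler only runs reconnect handling when
      the allocation's most recent AllocState entry records a disconnect.

```go
package main

import (
    "fmt"
    "time"
)

type AllocState struct {
    ClientStatus string // e.g. "unknown" while disconnected, "running" after reconnect
    Time         time.Time
}

type Allocation struct {
    AllocStates []*AllocState
}

// needsReconnect reports whether the last AllocState entry is a disconnect
// event, meaning the reconnect has not been handled yet.
func needsReconnect(alloc *Allocation) bool {
    n := len(alloc.AllocStates)
    if n == 0 {
        return false
    }
    return alloc.AllocStates[n-1].ClientStatus == "unknown"
}

func main() {
    alloc := &Allocation{AllocStates: []*AllocState{
        {ClientStatus: "unknown", Time: time.Now().Add(-time.Minute)},
    }}
    fmt.Println("reconnect needed:", needsReconnect(alloc)) // true

    // After the scheduler handles the reconnect it records the new ClientStatus,
    // so future evals skip the reconnect logic.
    alloc.AllocStates = append(alloc.AllocStates, &AllocState{ClientStatus: "running", Time: time.Now()})
    fmt.Println("reconnect needed:", needsReconnect(alloc)) // false
}
```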
      
      * scheduler: prevent spurious placement on reconnect
      
      When a client reconnects it makes two independent RPC calls:
      
        - `Node.UpdateStatus` to heartbeat and set its status as `ready`.
        - `Node.UpdateAlloc` to update the status of its allocations.
      
      These two calls can happen in any order, and if the allocations are updated
      before a heartbeat, the state looks the same as a node being disconnected:
      the node status will still be `disconnected` while the allocation
      `ClientStatus` is set to `running`.
      
      The current implementation did not handle this order of events properly,
      and the scheduler would create an unnecessary placement since it
      considered the allocation was being disconnected. This extra allocation
      would then be quickly stopped by the heartbeat eval.
      
      This commit adds a new code path to handle this order of events. If the
      node is `disconnected` and the allocation `ClientStatus` is `running`
      the scheduler will check if the allocation is actually reconnecting
      using its `AllocState` events.
      
      * rpc: only allow alloc updates from `ready` nodes
      
      Clients interact with servers using three main RPC methods:
      
        - `Node.GetAllocs` reads allocation data from the server and writes it
          to the client.
        - `Node.UpdateAlloc` reads allocations from the client and writes
          them to the server.
        - `Node.UpdateStatus` writes the client status to the server and is
          used as the heartbeat mechanism.
      
      These three methods are called periodically by the clients, independently
      from each other, so no assumptions can be made about their ordering.
      
      This can generate scenarios that are hard to reason about and to code
      for. For example, when a client misses too many heartbeats it will be
      considered `down` or `disconnected` and the allocations it was running
      are set to `lost` or `unknown`.
      
      When connectivity to the rest of the cluster is restored, the natural
      mental model is to think that the client will heartbeat first and then
      update its allocations status into the servers.
      
      But since there's no inherent order in these calls the reverse is just as
      possible: the client updates the alloc status and then heartbeats. This
      results in a state where allocs are, for example, `running` while the
      client is still `disconnected`.
      
      This commit adds a new verification to the `Node.UpdateAlloc` method to
      reject updates from nodes that are not `ready`, forcing clients to
      heartbeat first. Since this check is done server-side there is no need
      to coordinate operations client-side: they can continue sending these
      requests independently and alloc update will succeed after the heartbeat
      is done.
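
      A minimal sketch of that server-side check (not the actual RPC handler):
      alloc updates are rejected until the node has heartbeated back to `ready`,
      which enforces the heartbeat-then-update order without client coordination.

```go
package main

import (
    "errors"
    "fmt"
)

type node struct{ Status string }

// updateAllocs stands in for a Node.UpdateAlloc-style handler: it refuses
// updates from nodes that have not heartbeated back to "ready".
func updateAllocs(n *node, allocIDs []string) error {
    if n.Status != "ready" {
        return errors.New("node is not ready: cannot update allocs")
    }
    // ... apply the allocation updates ...
    return nil
}

func main() {
    n := &node{Status: "disconnected"}
    fmt.Println(updateAllocs(n, []string{"alloc-1"})) // rejected; the client retries later

    n.Status = "ready" // heartbeat processed first
    fmt.Println(updateAllocs(n, []string{"alloc-1"})) // succeeds
}
```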
      
      * changelog: add entry for #15068
      
      * code review
      
      * client: skip terminal allocations on reconnect
      
      When the client reconnects with the server it synchronizes the state of
      its allocations by sending data using the `Node.UpdateAlloc` RPC and
      fetching data using the `Node.GetClientAllocs` RPC.
      
      If the data fetch happens before the data write, `unknown` allocations
      will still be in this state and will trigger the
      `allocRunner.Reconnect` flow.
      
      But when the server `DesiredStatus` for the allocation is `stop` the
      client should not reconnect the allocation.
      
      * apply more code review changes
      
      * scheduler: persist changes to reconnected allocs
      
      Reconnected allocs have a new AllocState entry that must be persisted by
      the plan applier.
      
      * rpc: read node ID from allocs in UpdateAlloc
      
      The AllocUpdateRequest struct is used in three disjoint use cases:
      
      1. Stripped allocs from clients Node.UpdateAlloc RPC using the Allocs,
         and WriteRequest fields
      2. Raft log message using the Allocs, Evals, and WriteRequest fields
      3. Plan updates using the AllocsStopped, AllocsUpdated, and Job fields
      
      Adding a new field that would only be used in one of these cases (1) made
      things more confusing and error-prone. While in theory an
      AllocUpdateRequest could send allocations from different nodes, in
      practice this never actually happens since only clients call this method
      with their own allocations.
      
      * scheduler: remove logic to handle exceptional case
      
      This condition could only be hit if, somehow, the allocation status was
      set to "running" while the client was "unknown". This was addressed by
      enforcing an order in "Node.UpdateStatus" and "Node.UpdateAlloc" RPC
      calls, so this scenario is not expected to happen.
      
      Adding unnecessary code to the scheduler makes it harder to read and
      reason about.
      
      * more code review
      
      * remove another unused test
      7828c02a
    • client: retry RPC call when no server is available (#15140) · f33bb5ec
      Luiz Aoqui authored
      When a Nomad service starts it tries to establish a connection with
      servers, but it also runs alloc runners to manage whatever allocations
      it needs to run.
      
      The alloc runner will invoke several hooks to perform actions, with some
      of them requiring access to the Nomad servers, such as Native Service
      Discovery Registration.
      
      If the alloc runner starts before a connection is established the alloc
      runner will fail, causing the allocation to be shut down. This is
      particularly problematic for disconnected allocations that are
      reconnecting, as they may fail as soon as the client reconnects.
      
      This commit changes the RPC request logic to retry it, using the
      existing retry mechanism, if there are no servers available.
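
      A hedged sketch of that retry behavior (error value and helper names are
      illustrative, not the client's actual RPC code): if the call fails because
      no servers are known yet, wait and retry instead of failing the hook.

```go
package main

import (
    "errors"
    "fmt"
    "time"
)

var errNoServers = errors.New("no servers available")

// rpcWithRetry retries the call while it keeps failing with errNoServers,
// giving the client time to establish its server connection.
func rpcWithRetry(call func() error, attempts int, wait time.Duration) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = call(); err == nil || !errors.Is(err, errNoServers) {
            return err
        }
        time.Sleep(wait) // server connection not established yet; try again
    }
    return err
}

func main() {
    tries := 0
    err := rpcWithRetry(func() error {
        tries++
        if tries < 3 {
            return errNoServers // connection to servers not up yet
        }
        return nil
    }, 5, 10*time.Millisecond)
    fmt.Println("tries:", tries, "err:", err)
}
```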
      f33bb5ec
    • template: error on missing key (#15141) · 52a254ba
      Charlie Voiselle authored
      * Support error_on_missing_value for templates
      * Update docs for template stanza
      52a254ba
    • e2e: explicitly wait on task status in chroot download exec test (#15145) · 3c17552d
      Seth Hoenig authored
      Also add some debug log lines for this test, because it doesn't make sense
      for the allocation to be complete while a task in the allocation has not
      started yet, which is what the test failures imply.
      3c17552d
  11. 03 Nov, 2022 5 commits