  1. 14 Dec, 2021 5 commits
    • getmodules: Re-allow git:: source with ref=COMMIT_ID · c4d46e7c
      Martin Atkins authored
      Earlier versions of this code allowed "ref" to take any value that would
      be accepted by "git checkout" as a valid target of a symbolic ref. We
      inadvertently accepted a breaking change to upstream go-getter that removed
      that capability as part of introducing a shallow clone optimization, because
      a shallow clone requires selecting a single branch.
      
      To restore the previous capabilities while retaining the "depth" argument,
      here we accept a compromise where "ref" has the stronger requirement of
      being a valid named ref in the remote repository if and only if "depth"
      is set to a value greater than zero. If depth isn't set or is less than
      one, we will do the old behavior of just cloning all of the refs in the
      remote repository in full and then switching to refer to the selected
      branch, tag, or naked commit ID as a separate step.
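      As a rough sketch of that split in behavior (this is not the actual
      getmodules code; the function name and the direct use of the git CLI here
      are illustrative assumptions):

      ```go
      package getmodules

      import (
          "os/exec"
          "strconv"
      )

      // cloneModuleSource illustrates the compromise described above. With
      // depth > 0 we must ask git for a shallow clone of a single named ref;
      // otherwise we clone all refs and switch to the requested branch, tag,
      // or naked commit ID as a separate step.
      func cloneModuleSource(url, dst, ref string, depth int) error {
          if depth > 0 {
              // Shallow clone: "ref" must be a branch or tag name advertised
              // by the remote, because --branch cannot accept a commit ID.
              args := []string{"clone", "--depth", strconv.Itoa(depth)}
              if ref != "" {
                  args = append(args, "--branch", ref)
              }
              args = append(args, "--", url, dst)
              return exec.Command("git", args...).Run()
          }

          // Full clone of every ref, then check out the selected ref as a
          // separate step, which also works for a naked commit ID.
          if err := exec.Command("git", "clone", "--", url, dst).Run(); err != nil {
              return err
          }
          if ref == "" {
              return nil
          }
          cmd := exec.Command("git", "checkout", ref)
          cmd.Dir = dst
          return cmd.Run()
      }
      ```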
      
      This includes a heuristic to generate an additional error message hint if
      we get an error from "git clone" and it looks like the user might've been
      trying to use "depth" and "ref=COMMIT" together. We can't recognize that
      error accurately because it's only reported as human-oriented git command
      output, but this heuristic should hopefully minimize situations where we
      show it inappropriately.
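      A hedged sketch of what such a heuristic could look like; the regular
      expression, the helper name, and the wording of the hint are assumptions
      rather than the exact code in this commit:

      ```go
      package getmodules

      import (
          "fmt"
          "regexp"
      )

      // looksLikeCommitID is a rough heuristic: a string of 7 to 40 hex digits
      // is plausibly an abbreviated or full commit ID rather than a ref name.
      var looksLikeCommitID = regexp.MustCompile(`^[0-9a-fA-F]{7,40}$`)

      // withCloneHint wraps a failed "git clone" error with an extra hint when
      // it looks like the user tried to combine "depth" with "ref=COMMIT".
      func withCloneHint(cloneErr error, ref string, depth int) error {
          if cloneErr == nil || depth < 1 || !looksLikeCommitID.MatchString(ref) {
              return cloneErr
          }
          return fmt.Errorf(
              "%w (the \"depth\" argument requires \"ref\" to be a branch or tag name in the remote repository, not a commit ID)",
              cloneErr,
          )
      }
      ```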
      
      For now this change is made directly in the Terraform repository, so that
      we can expedite the fix for an already-reported regression. After this is
      released I intend to also submit a similar set of changes to upstream
      go-getter, at which point we can revert Terraform to using the upstream
      getter.GitGetter instead of our own local fork.
    • getmodules: Inline our own fork of getter.GitGetter · b0ff17ef
      Martin Atkins authored
      This is a pragmatic temporary solution to allow us to more quickly resolve
      an upstream regression in go-getter locally within Terraform, so that the
      work to upstream it for other callers can happen asynchronously and with
      less time pressure.
      
      This commit doesn't yet include any changes to address the bug, and
      instead aims to be functionally equivalent to getter.GitGetter. A
      subsequent commit will then address the regression, so that the diff of
      that commit will be easier to apply later to the upstream to get the same
      effect there.
    • Merge pull request #30142 from hashicorp/chrisarcand/remote-backend-no-workspaces-regression · 8b8fe277
      Chris Arcand authored
      command/meta_backend: Allow the remote backend to have no workspaces [again]
    • command/meta_backend: Allow the remote backend to have no workspaces [again] · 98978b38
      Chris Arcand authored
      A regression introduced in d72a413e
      
      The comment explains the details, but the TL;DR is: the remote backend
      actually *depended* on being able to write its backend state even though
      an 'error' occurred (no workspaces).
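      A purely illustrative sketch of that pattern follows; every name in it
      (remoteBackend, PersistBackendState, errNoWorkspaces, selectWorkspace) is
      hypothetical and does not reflect Terraform's actual command/meta_backend
      API:

      ```go
      package command

      import "errors"

      // errNoWorkspaces stands in for the "no workspaces" condition mentioned
      // above; the real error value lives elsewhere.
      var errNoWorkspaces = errors.New("remote backend has no workspaces")

      type remoteBackend interface {
          Workspaces() ([]string, error)
          PersistBackendState() error
      }

      // selectWorkspace shows the shape of the fix: even when listing
      // workspaces reports the "no workspaces" condition, the backend state
      // must still be written, because later steps depend on it existing.
      func selectWorkspace(b remoteBackend) error {
          _, err := b.Workspaces()
          if err != nil && !errors.Is(err, errNoWorkspaces) {
              return err
          }

          // Persist backend state regardless of the "no workspaces" error.
          if persistErr := b.PersistBackendState(); persistErr != nil {
              return persistErr
          }

          return err
      }
      ```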
    • Merge pull request #30151 from hashicorp/f-non-existing-module-instance-crash · aa59eb42
      Alisdair McDiarmid authored
      core: Fix crash with orphaned module instance
  2. 13 Dec, 2021 2 commits
    • command/format: Limitation of plans.ResourceInstanceDeleteBecauseNoModule · 096cddb4
      Martin Atkins authored
      This is an explicit technical debt note that our plan renderer isn't able
      to give a fully-specific hint in this particular case of deletion reason.
      
      This reason code means that at least one of the module instance keys in
      the resource's module path doesn't match an instance declared in the
      configuration, but the plan data structure doesn't retain enough
      information to determine which step in the path is the first one to refer
      to a missing instance, and so we always report the whole module path.
      
      This could be confusing if, for example, we report that
      module.foo[0].module.bar is not in the configuration when the real cause
      is that module.foo no longer uses "count"; it would be better to say
      "module.foo[0] is not in the configuration" instead.
      
      Ideally we would handle all of the different situations that
      ResourceInstanceDeleteBecauseWrongRepetition's rendering does, so that we
      could go further and explain exactly _why_ that module instance isn't
      declared anymore.
      
      We can do neither of those things today because only the Terraform Core
      "expander" component knows that information, and we've discarded that
      by the time we get to rendering a plan. To fix this one day would require
      preserving in the plan the information about which module instances are
      declared, as a sidecar data structure separate from the record of which
      resource instances we're taking actions on, and then using that to identify
      which step in addr.Module is the first to select an invalid instance.
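      As a hedged sketch of that hypothetical sidecar approach (nothing like
      this exists in the plan format today, and module instance addresses are
      simplified to plain strings):

      ```go
      package format

      import "strings"

      // firstMissingModuleStep walks the resource's module path, given as a
      // sequence of instance steps such as ["module.foo[0]", "module.bar"],
      // and returns the first prefix that is absent from the sidecar set of
      // declared module instance addresses, or "" if every prefix is declared.
      func firstMissingModuleStep(moduleSteps []string, declared map[string]bool) string {
          for i := range moduleSteps {
              prefix := strings.Join(moduleSteps[:i+1], ".")
              if !declared[prefix] {
                  return prefix
              }
          }
          return ""
      }
      ```

      For the example above, if module.foo no longer uses "count" then
      "module.foo[0]" is absent from the declared set, so this would return
      "module.foo[0]" and the renderer could name exactly that step.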
    • instances: Non-existing module instance has no resource instances · ec6fe93f
      Martin Atkins authored
      
      Previously we were treating it as a programming error to ask for the
      instances of a resource inside an instance of a module that is declared
      but whose declaration doesn't include the given instance key.
      
      However, that's actually a valid situation which can arise if, for
      example, the user has changed the repetition/expansion mode for an
      existing module call, so that all of the resource instance addresses it
      previously contained are now "orphaned".
      
      To represent that, we'll instead say that an invalid instance key of a
      declared module behaves as if it contains no resource instances at all,
      regardless of the configurations of any resources nested inside. This
      then gives the result needed to successfully detect all of the former
      resource instances as "orphaned" and plan to destroy them.
      
      However, this then introduces a new case for
      NodePlannableResourceInstanceOrphan.deleteActionReason to deal with: the
      resource configuration still exists (because configuration isn't aware of
      individual module/resource instances) but the module instance does not.
      This also allows us to resolve, at least partially, a previously missing
      piece of feedback: explaining to the user why the resource instances are
      planned for deletion in that case. We can now be explicit that it's
      because the module instance has been removed, which internally we call
      plans.ResourceInstanceDeleteBecauseNoModule.
      Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
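      A simplified sketch of that behavior, under the assumption (not taken
      from the real instances package) that a module call's expansion is
      modeled as a map from declared instance key to the resource instances
      inside it:

      ```go
      package instances

      // InstanceKey is a simplified stand-in for Terraform's module instance
      // keys (integer indexes for count, strings for for_each, or none).
      type InstanceKey string

      // moduleExpansion is a hypothetical model of one module call's
      // expansion, mapping each *declared* instance key to the resource
      // instances that exist inside that module instance.
      type moduleExpansion struct {
          declared map[InstanceKey][]string
      }

      // ResourceInstances reflects the behavior change described above: an
      // instance key that the module call doesn't declare is no longer a
      // programming error; it simply behaves as if it contains no resource
      // instances, so everything previously recorded there is detected as
      // orphaned and planned for destruction.
      func (m *moduleExpansion) ResourceInstances(key InstanceKey) []string {
          insts, ok := m.declared[key]
          if !ok {
              return nil // undeclared instance: empty, rather than a panic
          }
          return insts
      }
      ```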
  3. 09 Dec, 2021 6 commits
  4. 08 Dec, 2021 25 commits
  5. 07 Dec, 2021 2 commits