This project is mirrored from https://:*****@github.com/hashicorp/terraform.git.
- 05 Jan, 2022 10 commits
-
-
Martin Atkins authored
Normally when we cross-compile we do so without CGo, because we don't have suitable C headers available for systems other than the host. However, building for macOS on macOS is special because there are sufficient headers available on darwin_amd64 to build for both darwin_amd64 _and_ darwin_arm64. Also, we _must_ use CGo on macOS because the system resolver is only available via darwin's libc, and so building without CGo produces executables that don't resolve hostnames correctly.

This is a conditional in bash to avoid having to duplicate the entire step. Perhaps later we'll find a more general version of this which can avoid the special case, but this is sufficient for the moment.
-
Martin Atkins authored
In our build workflow we'll treat Linux distribution packaging (currently .deb and .rpm packages) as a separate job, instead of embedding it into the "build" job, so that this step can happen concurrently with the other derived actions like the docker image build, and the e2etest runs.
-
Martin Atkins authored
This workflow only generates artifacts and doesn't need to modify anything about the repository.
-
Martin Atkins authored
We can use an extra matrix dimension to select which execution environment we'll use for each GOOS/GOARCH pair, and thus avoid duplicating the job definition for darwin just to set runs-on: macos-latest for it. This is not really an intended use of a matrix dimension because it's directly related to the existing "goos" one, rather than being an independent third dimension, but it doesn't matter in practice because we're using the "include" option to specify exact combinations, and thus we're not relying on the built-in functionality to generate all possible matrix combinations.
-
Martin Atkins authored
This should eventually grow to be a step that actually verifies the validity of the docs source prior to publishing the artifact that a downstream publishing pipeline can consume, but for the moment it's really just a placeholder since we have no such validation step and no downstream pipeline consuming this artifact. The general idea here is that the artifacts from this workflow should be sufficient for all downstream release steps to occur without any direct access to the Terraform CLI repository, and so this is intended to eventually meet that ideal but as of this commit the website docs publishing step _does_ still depend on direct access to this repository.
-
Martin Atkins authored
This uses the decoupled build and run strategy to run the e2etests so that we can arrange to run the tests against the real release packages produced elsewhere in this workflow, rather than ones generated just in time by the test harness.

The modifications to make-archive.sh here make it more consistent with its originally-intended purpose of producing a harness for testing "real" release executables. Our earlier compromise of making it include its own terraform executable came from a desire to use that script as part of manual cross-platform testing when we weren't yet set up to support automation of those tests as we're doing here. That does mean, however, that the terraform-e2etest package content must be combined with content from a terraform release package in order to produce a valid context for running the tests.

We use a single job to cross-compile the test harness for all of the supported platforms, because that build is relatively fast and so not worth the overhead of a matrix build, but we then use a matrix build to actually run the tests so that we can run them in a worker matching the target platform. We currently have access only to amd64 (x64) runners in GitHub Actions, so for the moment this process is limited to the subset of our supported platforms which use that architecture.
-
Martin Atkins authored
For the moment this is just an experimental additional sidecar package build process, separate from the one we really use for releases, so that we can get some experience building in the GitHub Actions environment before hopefully eventually switching to using the artifacts from this process as the packages we'll release through the official release channels.

It will react to any push to one of our release branches or to a release tag by building official-release-like .zip, .deb, and .rpm packages, along with Docker images, based on the content of the corresponding commit.

For the moment this doesn't actually produce _shippable_ packages because, in particular, it doesn't know how to update our version/version.go file to hard-code the correct version number. Once Go 1.18 is released and we've upgraded to it, we'll switch to using debug.ReadBuildInfo to determine our version number at runtime and so no longer need to directly update a source file for each release, but that functionality isn't yet available in our current Go 1.17 release.
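The commit anticipates switching to debug.ReadBuildInfo once Go 1.18 is available. As a rough illustration only (not Terraform's eventual implementation), a program can read its own module version at runtime like this:

```go
// Minimal sketch: reading module build information at runtime via
// runtime/debug.ReadBuildInfo instead of hard-coding a version in source.
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	if info, ok := debug.ReadBuildInfo(); ok {
		// info.Main.Version is "(devel)" for local builds and a semantic
		// version such as "v1.2.3" when the binary is built from a tagged
		// module version.
		fmt.Println("version:", info.Main.Version)
	} else {
		fmt.Println("build info unavailable (binary not built with module support)")
	}
}
```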
-
Alisdair McDiarmid authored
command/show: Disable plan state lineage checks
-
kmoe authored
dags: fix BasicEdge pointer issue
-
Katy Moe authored
When creating a Set of BasicEdges, the Hashcode function is used to determine map keys for the underlying set data structure. The string hex representation of the two vertices' pointers is unsafe to use as a map key, since these addresses may change between the time they are added to the set and the time the set is operated on. Instead we modify the Hashcode function to maintain the references to the underlying vertices so they cannot be garbage collected during the lifetime of the Set.
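A hedged sketch of the idea described above (not necessarily the exact Terraform change): hash an edge by holding references to its two vertices rather than by formatting their pointer addresses into a string.

```go
package dag

import "fmt"

type Vertex interface{}

type basicEdge struct {
	S, T Vertex
}

// Unsafe variant: the hex addresses may no longer identify the vertices once
// the key has been computed, since nothing in the key keeps them referenced.
func (e *basicEdge) hashcodeUnsafe() interface{} {
	return fmt.Sprintf("%p-%p", e.S, e.T)
}

// Safer variant: the returned value holds the vertex references themselves, so
// they stay reachable for as long as the set holds the key and the edge's
// identity cannot change underneath the set.
func (e *basicEdge) Hashcode() interface{} {
	return [2]interface{}{e.S, e.T}
}
```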
-
- 04 Jan, 2022 10 commits
-
-
Dylan Staley authored
Store website nav files (main)
-
James Bardin authored
dag: minor cleanup
-
James Bardin authored
cleanup some move graph handling
-
Alisdair McDiarmid authored
lang/funcs: Redact sensitive values from function errors
-
James Bardin authored
TransitiveReduction does not rely on having a single root, and only requires that the graph be free of cycles. DepthFirstWalk and ReverseDepthFirstWalk do not do a topological sort, so if order matters, TransitiveReduction must be run first.
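A self-contained toy (plain maps, not the terraform/dag package) illustrating the note above: the graph is assumed to be acyclic, and the reduction runs before any order-sensitive walk, because the walk itself does no topological sorting.

```go
package main

import "fmt"

type graph map[string][]string // vertex -> direct successors

// transitiveReduction drops an edge a->c whenever c is still reachable from a
// through some other successor, preserving reachability with fewer edges.
func transitiveReduction(g graph) {
	for a, outs := range g {
		var keep []string
		for _, c := range outs {
			if !reachableViaOther(g, a, c) {
				keep = append(keep, c)
			}
		}
		g[a] = keep
	}
}

func reachableViaOther(g graph, a, c string) bool {
	for _, b := range g[a] {
		if b != c && reaches(g, b, c, map[string]bool{}) {
			return true
		}
	}
	return false
}

func reaches(g graph, from, to string, seen map[string]bool) bool {
	if from == to {
		return true
	}
	if seen[from] {
		return false
	}
	seen[from] = true
	for _, next := range g[from] {
		if reaches(g, next, to, seen) {
			return true
		}
	}
	return false
}

func main() {
	g := graph{"a": {"b", "c"}, "b": {"c"}, "c": nil}
	transitiveReduction(g) // reduce first; only then walk the graph
	fmt.Println(g)         // map[a:[b] b:[c] c:[]] - the redundant a->c edge is gone
}
```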
-
James Bardin authored
These two functions were left during a refactor to ensure the old behavior of a sorted walk was still accessible in some manner. The package has since been removed from any public API, and the sorted versions are no longer called, so we can remove them.
-
James Bardin authored
Create a separate `validateMoveStatementGraph` function so that `ValidateMoves` and `ApplyMoves` both check the same conditions. Since we're not using the builtin `graph.Validate` method, because we may have multiple roots and want better cycle diagnostics, we need to add checks for self references too. While multiple roots are an error enforced by `Validate` for the concurrent walk, they are OK when using `TransitiveReduction` and `ReverseDepthFirstWalk`, so we can skip that check. Apply moves must first use `TransitiveReduction` to reduce the graph, otherwise nodes may be skipped if they are passed over by a transitive edge.
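A hedged, self-contained sketch (toy types and a plain DFS, not Terraform's graph package) of the shared validation idea described above: one function, usable by both the validate and apply paths, that rejects self-references and cycles without requiring a single root.

```go
package main

import "fmt"

type edge struct{ from, to string }

func validateMoveGraph(edges []edge) error {
	adj := map[string][]string{}
	for _, e := range edges {
		if e.from == e.to {
			// graph.Validate would normally catch this; since we skip it to
			// allow multiple roots, check self-references explicitly.
			return fmt.Errorf("move statement %q refers to itself", e.from)
		}
		adj[e.from] = append(adj[e.from], e.to)
	}

	// Simple cycle detection via DFS with a recursion stack.
	const (
		white = 0 // unvisited
		grey  = 1 // on the current DFS path
		black = 2 // finished
	)
	color := map[string]int{}
	var visit func(n string) error
	visit = func(n string) error {
		color[n] = grey
		for _, m := range adj[n] {
			switch color[m] {
			case grey:
				return fmt.Errorf("cycle in move statements involving %q", m)
			case white:
				if err := visit(m); err != nil {
					return err
				}
			}
		}
		color[n] = black
		return nil
	}
	for n := range adj {
		if color[n] == white {
			if err := visit(n); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	err := validateMoveGraph([]edge{{"module.a", "module.b"}, {"module.b", "module.a"}})
	fmt.Println(err) // a cycle involving module.a and module.b is reported
}
```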
-
James Bardin authored
Changing only the index on a nested module will cause all nested moves to create cycles, since their full addresses will match both the From and To addresses. When building the dependency graph, check if the parent is only changing the index of the containing module, and prevent the backwards edge for the move.
-
James Bardin authored
Implied moves in nested modules were being skipped
-
James Bardin authored
Add a method for checking if the From and To addresses in a move statement are only changing the indexes of modules relative to the statement module. This is needed because a move statement nested within the module can match both the From and To addresses, causing cycles in the order of move operations.
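A hedged toy sketch (simplified module addresses, not Terraform's internal/addrs types) of the kind of check described: a move changes only module indexes when From and To name the same chain of module calls and differ, at most, in their instance keys. In the graph-building step described above, that condition is what allows the backwards edge to be skipped.

```go
package main

import "fmt"

type moduleStep struct {
	name string // module call name, e.g. "a" in module.a
	key  string // instance key, e.g. "0" in module.a[0]; empty when not indexed
}

// onlyChangesIndexes reports whether from and to traverse the same module
// calls and differ at most in their instance keys.
func onlyChangesIndexes(from, to []moduleStep) bool {
	if len(from) != len(to) {
		return false
	}
	for i := range from {
		if from[i].name != to[i].name {
			return false
		}
	}
	return true
}

func main() {
	from := []moduleStep{{name: "a", key: "0"}}
	to := []moduleStep{{name: "a", key: "1"}}
	fmt.Println(onlyChangesIndexes(from, to)) // true: module.a[0] -> module.a[1]

	to = []moduleStep{{name: "b", key: "1"}}
	fmt.Println(onlyChangesIndexes(from, to)) // false: the call name changes too
}
```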
-
- 03 Jan, 2022 1 commit
-
-
Martin Atkins authored
There was an unintended regression in go-getter v1.5.9's GitGetter which caused us to temporarily fork that particular getter into Terraform to expedite a fix. However, upstream v1.5.10 now includes a functionally-equivalent fix, so we can heal that fork by upgrading.

We'd also neglected to update the Module Sources docs when upgrading to go-getter v1.5.9 originally, so we were missing documentation about the new "depth" argument to enable shallow cloning, which I've added retroactively here along with documenting its restriction of only supporting named refs.

This new go-getter release also introduces a new credentials-passing method for the Google Cloud Storage getter, and so we must incorporate that into the Terraform-level documentation about module sources.
-
- 22 Dec, 2021 5 commits
-
-
James Bardin authored
Handle move blocks within a module which is changing the index
-
James Bardin authored
Changing only the index on a nested module will cause all nested moves to create cycles, since their full addresses will match both the From and To addresses. When building the dependency graph, check if the parent is only changing the index of the containing module, and prevent the backwards edge for the move.
-
Martin Atkins authored
This paragraph is trying to say that try only works for dynamic errors and not for errors that are _not_ based on dynamic decision-making in expressions. I'm not sure if this typo was always here or if it was mistakenly "corrected" at some point, but either way the word "probably" changes the meaning of this sentence entirely, making it seem like Terraform is hedging the likelihood of a problem rather than checking exactly for one.
-
Barrett Clark authored
Cloud: Add parallelism back into the tests
-
Alisdair McDiarmid authored
refactoring: Move nested modules
-
- 21 Dec, 2021 6 commits
-
-
Dylan Staley authored
-
James Bardin authored
skip provider resolution when there are errors
-
James Bardin authored
Implied moves in nested modules were being skipped
-
James Bardin authored
Add a method for checking if the From and To addresses in a move statement are only changing the indexes of modules relative to the statement module. This is needed because a move statement nested within the module can match both the From and To addresses, causing cycles in the order of move operations.
-
Alisdair McDiarmid authored
When applying module `moved` statements by iterating through modules in state, we previously required an exact match between the `moved` statement's `from` field and the module address. This permitted moving resources directly inside a module, but did not recur into module calls within those moved modules. This commit moves that exact match requirement so that it only applies to `moved` statements targeting resources. In turn this allows nested modules to be moved.
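A hedged toy sketch (flat string slices instead of Terraform's address types) of the matching rule described: resource moves still require an exact module match, while module moves also apply to modules nested inside the moved module.

```go
package main

import "fmt"

// sameModule reports whether two module paths are identical.
func sameModule(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// matches reports whether a module path found in state is affected by a moved
// statement whose "from" address is fromAddr.
func matches(fromAddr, stateAddr []string, movingResource bool) bool {
	if movingResource {
		// moves that target resources keep the exact-match requirement
		return sameModule(fromAddr, stateAddr)
	}
	// moves that target modules also apply to anything nested inside them
	return len(stateAddr) >= len(fromAddr) && sameModule(fromAddr, stateAddr[:len(fromAddr)])
}

func main() {
	from := []string{"module.a"}
	nested := []string{"module.a", "module.b"}
	fmt.Println(matches(from, nested, false)) // true: the nested module moves too
	fmt.Println(matches(from, nested, true))  // false: resources need an exact module match
}
```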
-
Laura Pacilio authored
Update taint command page to make alternative clearer
-
- 20 Dec, 2021 4 commits
-
-
Barrett Clark authored
As the cloud e2e tests evolved, some common patterns became apparent. This standardizes and consolidates the patterns into a common test runner that takes the table tests and runs them in parallel. Some tests also needed to be converted to use table tests.
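A hedged sketch (hypothetical test cases, not the actual cloud e2e suite) of the pattern described: a shared runner accepts a table of tests and runs each entry as a parallel subtest.

```go
package e2e

import "testing"

type testCase struct {
	name string
	run  func(t *testing.T)
}

// runTestCases runs every entry of the table as a parallel subtest.
func runTestCases(t *testing.T, cases []testCase) {
	t.Helper()
	for _, tc := range cases {
		tc := tc // capture the range variable for the parallel closure (Go 1.17 era)
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // cases in the table run concurrently
			tc.run(t)
		})
	}
}

func TestCloudOperations(t *testing.T) {
	runTestCases(t, []testCase{
		{name: "plan", run: func(t *testing.T) { /* ... */ }},
		{name: "apply", run: func(t *testing.T) { /* ... */ }},
	})
}
```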
-
Laura Pacilio authored
-
Laura Pacilio authored
-
Dylan Staley authored
Update make website workflow
-
- 17 Dec, 2021 4 commits
-
-
Martin Atkins authored
Previously we would only ever add new lock entries or update existing ones. However, it's possible that over time a module may _cease_ using a particular provider, at which point we ought to remove it from the lock file so that operations won't fail when seeing that the provider cache directory is inconsistent with the lock file.

Now the provider installer (EnsureProviderVersions) will remove any lock file entries that relate to providers not included in the given requirements, which therefore makes the resulting lock file properly match the set of packages the installer wrote into the cache.

This does potentially mean that someone could inadvertently defeat the lock by removing a provider dependency, running "terraform init", then undoing that removal, and finally running "terraform init" again. However, that seems relatively unlikely compared to the likelihood of removing a provider and keeping it removed, and in the event it _did_ happen the changes to the lock entry for that provider would be visible in the diff of the provider lock file as usual, and so could be noticed in code review just as for any other change to dependencies.
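A hedged, self-contained sketch (plain maps rather than Terraform's depsfile types) of the new behavior: after installation, lock entries for providers that are no longer required get dropped, so the lock file matches what the installer actually wrote to the cache.

```go
package main

import "fmt"

// pruneLockEntries removes lock entries for providers absent from the
// current set of requirements.
func pruneLockEntries(locks map[string]string, required map[string]bool) {
	for provider := range locks {
		if !required[provider] {
			// the configuration no longer uses this provider, so drop its
			// entry to keep the lock file consistent with the provider cache
			delete(locks, provider)
		}
	}
}

func main() {
	locks := map[string]string{
		"registry.terraform.io/hashicorp/aws":    "4.0.0",
		"registry.terraform.io/hashicorp/random": "3.1.0",
	}
	required := map[string]bool{"registry.terraform.io/hashicorp/aws": true}

	pruneLockEntries(locks, required)
	fmt.Println(locks) // only the aws entry remains
}
```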
-
Alisdair McDiarmid authored
When showing a saved plan, we do not need to check the state lineage against current state, because the plan cannot be applied. This is relevant when plan and apply specify a `-state` argument to choose a non-default state file. In this case, the stored prior state in the plan will not match the default state file, so a lineage check will always error.
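A minimal sketch (a hypothetical helper, not the real command/show code) of the rule described: compare lineage only when the plan could still be applied, and skip the check when rendering a saved plan.

```go
package main

import (
	"errors"
	"fmt"
)

func checkLineage(showingSavedPlan bool, planLineage, currentLineage string) error {
	if showingSavedPlan {
		// a saved plan rendered by "terraform show" cannot be applied from
		// here, so a mismatch with the current -state file is not an error
		return nil
	}
	if planLineage != currentLineage {
		return errors.New("plan was created from a different state lineage")
	}
	return nil
}

func main() {
	fmt.Println(checkLineage(true, "aaa", "bbb"))  // <nil>: lineage check skipped
	fmt.Println(checkLineage(false, "aaa", "bbb")) // error: lineages differ
}
```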
-
James Bardin authored
Apply graph failure handling
-
James Bardin authored
Apply should not return a nil state to be persisted.
-