Commit 13474bca authored by Mahmood Ali

update links to use new canonical location

parent 11d25fdb
Showing with 46 additions and 46 deletions
@@ -384,7 +384,7 @@ The `Task` object supports the following keys:
Consul for service discovery. A `Service` object represents a routable and
discoverable service on the network. Nomad automatically registers when a task
is started and de-registers it when the task transitions to the dead state.
-[Click here](/guides/operations/consul-integration/index.html#service-discovery) to learn more about
+[Click here](/guides/integrations/consul-integration/index.html#service-discovery) to learn more about
services. Below are the fields in the `Service` object:
- `Name`: An explicit name for the Service. Nomad will replace `${JOB}`,
......
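For orientation, a minimal sketch of how such a service might be declared in a job specification so that Nomad registers it with Consul; the name, port label, and check values are illustrative assumptions, not content from this commit:

```hcl
service {
  name = "web-app"        # illustrative service name registered in Consul
  port = "http"           # port label defined in the task's network block
  tags = ["frontend"]

  check {
    type     = "http"
    path     = "/health"  # hypothetical health endpoint
    interval = "10s"
    timeout  = "2s"
  }
}
```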
@@ -9,7 +9,7 @@ description: |-
# Sentinel Policies HTTP API
The `/sentinel/policies` and `/sentinel/policy/` endpoints are used to manage Sentinel policies.
-For more details about Sentinel policies, please see the [Sentinel Policy Guide](/guides/security/sentinel-policy.html).
+For more details about Sentinel policies, please see the [Sentinel Policy Guide](/guides/governance-and-policy/sentinel/sentinel-policy.html).
Sentinel endpoints are only available when ACLs are enabled. For more details about ACLs, please see the [ACL Guide](/guides/security/acl.html).
......
@@ -14,7 +14,7 @@ or server functionality, including exposing interfaces for client consumption
and running jobs.
Due to the power and flexibility of this command, the Nomad agent is documented
-in its own section. See the [Nomad Agent](/guides/operations/agent/index.html)
+in its own section. See the [Nomad Agent](/guides/install/production/nomad-agent.html)
guide and the [Configuration](/docs/configuration/index.html) documentation section for
more information on how to use this command and the options it has.
......
@@ -141,7 +141,7 @@ isolation is not supported as of now.
[lxc-create]: https://linuxcontainers.org/lxc/manpages/man1/lxc-create.1.html
[lxc-driver]: https://releases.hashicorp.com/nomad-driver-lxc
-[lxc-guide]: /guides/external/lxc.html
+[lxc-guide]: /guides/operating-a-job/external/lxc.html
[lxc_man]: https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAM
[plugin]: /docs/configuration/plugin.html
[plugin_dir]: /docs/configuration/index.html#plugin_dir
......
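For context on the `plugin` and `plugin_dir` settings referenced above, a minimal agent-configuration sketch for loading the externally distributed LXC driver; the directory path is an assumption:

```hcl
# Agent configuration sketch: load the externally distributed LXC driver.
plugin_dir = "/opt/nomad/plugins"  # assumed location of the downloaded plugin binary

plugin "nomad-driver-lxc" {
  config {
    enabled = true
  }
}
```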
@@ -11,7 +11,7 @@ description: |-
# Nomad Enterprise Namespaces
In [Nomad Enterprise](https://www.hashicorp.com/go/nomad-enterprise), a shared
-cluster can be partitioned into [namespaces](/guides/security/namespaces.html) which allows
+cluster can be partitioned into [namespaces](/guides/governance-and-policy/namespaces.html) which allows
jobs and their associated objects to be isolated from each other and other users
of the cluster.
@@ -19,8 +19,8 @@ Namespaces enhance the usability of a shared cluster by isolating teams from the
jobs of others, provide fine grain access control to jobs when coupled with
[ACLs](/guides/security/acl.html), and can prevent bad actors from negatively impacting
the whole cluster when used in conjunction with
-[resource quotas](/guides/security/quotas.html). See the
-[Namespaces Guide](/guides/security/namespaces.html) for a thorough overview.
+[resource quotas](/guides/governance-and-policy/quotas.html). See the
+[Namespaces Guide](/guides/governance-and-policy/namespaces.html) for a thorough overview.
Click [here](https://www.hashicorp.com/go/nomad-enterprise) to set up a demo or
request a trial of Nomad Enterprise.
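To illustrate the partitioning described above, a job is placed into a namespace from its specification; a minimal sketch with a hypothetical namespace and image name:

```hcl
job "payments-api" {
  namespace   = "web-qa"   # hypothetical namespace, created beforehand by an operator
  datacenters = ["dc1"]

  group "api" {
    task "server" {
      driver = "docker"
      config {
        image = "payments-api:1.0"  # illustrative image
      }
    }
  }
}
```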
@@ -11,13 +11,13 @@ description: |-
# Nomad Enterprise Resource Quotas
In [Nomad Enterprise](https://www.hashicorp.com/go/nomad-enterprise), operators can
-define [quota specifications](/guides/security/quotas.html) and apply them to namespaces.
+define [quota specifications](/guides/governance-and-policy/quotas.html) and apply them to namespaces.
When a quota is attached to a namespace, the jobs within the namespace may not
consume more resources than the quota specification allows.
This allows operators to partition a shared cluster and ensure that no single
actor can consume the whole resources of the cluster. See the
-[Resource Quotas Guide](/guides/security/quotas.html) for more details.
+[Resource Quotas Guide](/guides/governance-and-policy/quotas.html) for more details.
Click [here](https://www.hashicorp.com/go/nomad-enterprise) to set up a demo or
request a trial of Nomad Enterprise.
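As a sketch of what such a quota specification might look like before it is attached to a namespace; the name and limit values are assumptions, not content from this commit:

```hcl
# Hypothetical quota specification; values are illustrative.
name        = "default-quota"
description = "Limit resource usage for the attached namespace"

# Limits are defined per region.
limit {
  region = "global"
  region_limit {
    cpu    = 2500  # MHz available across all jobs in the namespace
    memory = 2000  # MB of memory across all jobs in the namespace
  }
}
```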
@@ -9,7 +9,7 @@ description: |-
# Nomad Enterprise Sentinel Policy Enforcement
In [Nomad Enterprise](https://www.hashicorp.com/go/nomad-enterprise), operators can
-create [Sentinel policies](/guides/security/sentinel-policy.html) for fine-grained policy
+create [Sentinel policies](/guides/governance-and-policy/sentinel/sentinel-policy.html) for fine-grained policy
enforcement. Sentinel policies build on top of the ACL system and allow operators to define
policies such as disallowing jobs to be submitted to production on
Fridays. These extremely rich policies are defined as code. For example, to
@@ -30,7 +30,7 @@ all_drivers_docker = rule {
}
```
-See the [Sentinel Policies Guide](/guides/security/sentinel-policy.html) for additional details and examples.
+See the [Sentinel Policies Guide](/guides/governance-and-policy/sentinel/sentinel-policy.html) for additional details and examples.
Click [here](https://www.hashicorp.com/go/nomad-enterprise) to set up a demo or
-request a trial of Nomad Enterprise.
\ No newline at end of file
+request a trial of Nomad Enterprise.
@@ -622,7 +622,7 @@ system of a task for that driver.</small>
[check_restart_stanza]: /docs/job-specification/check_restart.html "check_restart stanza"
[consul_grpc]: https://www.consul.io/api/agent/check.html#grpc
-[service-discovery]: /guides/operations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
+[service-discovery]: /guides/integrations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
[interpolation]: /docs/runtime/interpolation.html "Nomad Runtime Interpolation"
[network]: /docs/job-specification/network.html "Nomad network Job Specification"
[qemu]: /docs/drivers/qemu.html "Nomad qemu Driver"
......
@@ -205,7 +205,7 @@ task "server" {
[java]: /docs/drivers/java.html "Nomad Java Driver"
[Docker]: /docs/drivers/docker.html "Nomad Docker Driver"
[rkt]: /docs/drivers/rkt.html "Nomad rkt Driver"
-[service_discovery]: /guides/operations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
+[service_discovery]: /guides/integrations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
[template]: /docs/job-specification/template.html "Nomad template Job Specification"
[user_drivers]: /docs/configuration/client.html#_quot_user_checked_drivers_quot_
[user_blacklist]: /docs/configuration/client.html#_quot_user_blacklist_quot_
......
@@ -19,7 +19,7 @@ application:
The Spark integration will use a generic job template by default. The template
includes groups and tasks for the driver, executors and (optionally) the
-[shuffle service](/guides/spark/dynamic.html). The job itself and the tasks that
+[shuffle service](/guides/analytical-workloads/spark/dynamic.html). The job itself and the tasks that
are created have the `spark.nomad.role` meta value defined accordingly:
```hcl
@@ -57,7 +57,7 @@ job "structure" {
```
The default template can be customized indirectly by explicitly [setting
-configuration properties](/guides/spark/configuration.html).
+configuration properties](/guides/analytical-workloads/spark/configuration.html).
## Using a Custom Job Template
@@ -122,5 +122,5 @@ The order of precedence for customized settings is as follows:
## Next Steps
-Learn how to [allocate resources](/guides/spark/resource.html) for your Spark
+Learn how to [allocate resources](/guides/analytical-workloads/spark/resource.html) for your Spark
applications.
@@ -25,4 +25,4 @@ allocation in Nomad, the resources allocated to the executor tasks are not
## Next Steps
-Learn how to [integrate Spark with HDFS](/guides/spark/hdfs.html).
+Learn how to [integrate Spark with HDFS](/guides/analytical-workloads/spark/hdfs.html).
@@ -135,5 +135,5 @@ Availability](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-h
## Next Steps
-Learn how to [monitor the output](/guides/spark/monitoring.html) of your
+Learn how to [monitor the output](/guides/analytical-workloads/spark/monitoring.html) of your
Spark applications.
@@ -10,7 +10,7 @@ description: |-
By default, `spark-submit` in `cluster` mode will submit your application
to the Nomad cluster and return immediately. You can use the
-[spark.nomad.cluster.monitorUntil](/guides/spark/configuration.html#spark-nomad-cluster-monitoruntil) configuration property to have
+[spark.nomad.cluster.monitorUntil](/guides/analytical-workloads/spark/configuration.html#spark-nomad-cluster-monitoruntil) configuration property to have
`spark-submit` monitor the job continuously. Note that, with this flag set,
killing `spark-submit` will *not* stop the spark application, since it will be
running independently in the Nomad cluster.
@@ -31,7 +31,7 @@ cause the driver process to continue to run. You can force termination
It is possible to reconstruct the web UI of a completed application using
Spark’s [history server](https://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact).
The history server requires the event log to have been written to an accessible
-location like [HDFS](/guides/spark/hdfs.html) or Amazon S3.
+location like [HDFS](/guides/analytical-workloads/spark/hdfs.html) or Amazon S3.
Sample history server job file:
@@ -85,7 +85,7 @@ job "spark-history-server" {
The job file above can also be found [here](https://github.com/hashicorp/nomad/blob/master/terraform/examples/spark/spark-history-server-hdfs.nomad).
-To run the history server, first [deploy HDFS](/guides/spark/hdfs.html) and then
+To run the history server, first [deploy HDFS](/guides/analytical-workloads/spark/hdfs.html) and then
create a directory in HDFS to store events:
```shell
@@ -164,4 +164,4 @@ job "template" {
## Next Steps
-Review the Nomad/Spark [configuration properties](/guides/spark/configuration.html).
+Review the Nomad/Spark [configuration properties](/guides/analytical-workloads/spark/configuration.html).
@@ -19,7 +19,7 @@ can be used to quickly provision a Spark-enabled Nomad environment in
AWS. The embedded [Spark example](https://github.com/hashicorp/nomad/tree/master/terraform/examples/spark)
provides for a quickstart experience that can be used in conjunction with
this guide. When you have a cluster up and running, you can proceed to
-[Submitting applications](/guides/spark/submit.html).
+[Submitting applications](/guides/analytical-workloads/spark/submit.html).
## Manually Provision a Cluster
@@ -90,20 +90,20 @@ $ spark-submit \
### Using a Docker Image
An alternative to installing the JRE on every client node is to set the
-[spark.nomad.dockerImage](/guides/spark/configuration.html#spark-nomad-dockerimage)
+[spark.nomad.dockerImage](/guides/analytical-workloads/spark/configuration.html#spark-nomad-dockerimage)
configuration property to the URL of a Docker image that has the Java runtime
installed. If set, Nomad will use the `docker` driver to run Spark executors in
a container created from the image. The
-[spark.nomad.dockerAuth](/guides/spark/configuration.html#spark-nomad-dockerauth)
+[spark.nomad.dockerAuth](/guides/analytical-workloads/spark/configuration.html#spark-nomad-dockerauth)
configuration property can be set to a JSON object to provide Docker repository
authentication configuration.
When using a Docker image, both the Spark distribution and the application
itself can be included (in which case local URLs can be used for `spark-submit`).
-Here, we include [spark.nomad.dockerImage](/guides/spark/configuration.html#spark-nomad-dockerimage)
+Here, we include [spark.nomad.dockerImage](/guides/analytical-workloads/spark/configuration.html#spark-nomad-dockerimage)
and use local paths for
-[spark.nomad.sparkDistribution](/guides/spark/configuration.html#spark-nomad-sparkdistribution)
+[spark.nomad.sparkDistribution](/guides/analytical-workloads/spark/configuration.html#spark-nomad-sparkdistribution)
and the application JAR file:
```shell
@@ -119,4 +119,4 @@ $ spark-submit \
## Next Steps
-Learn how to [submit applications](/guides/spark/submit.html).
+Learn how to [submit applications](/guides/analytical-workloads/spark/submit.html).
@@ -39,7 +39,7 @@ Resource-related configuration properties are covered below.
The standard Spark memory properties will be propagated to Nomad to control
task resource allocation: `spark.driver.memory` (set by `--driver-memory`) and
`spark.executor.memory` (set by `--executor-memory`). You can additionally specify
-[spark.nomad.shuffle.memory](/guides/spark/configuration.html#spark-nomad-shuffle-memory)
+[spark.nomad.shuffle.memory](/guides/analytical-workloads/spark/configuration.html#spark-nomad-shuffle-memory)
to control how much memory Nomad allocates to shuffle service tasks.
## CPU
@@ -48,11 +48,11 @@ Spark sizes its thread pools and allocates tasks based on the number of CPU
cores available. Nomad manages CPU allocation in terms of processing speed
rather than number of cores. When running Spark on Nomad, you can control how
much CPU share Nomad will allocate to tasks using the
-[spark.nomad.driver.cpu](/guides/spark/configuration.html#spark-nomad-driver-cpu)
+[spark.nomad.driver.cpu](/guides/analytical-workloads/spark/configuration.html#spark-nomad-driver-cpu)
(set by `--driver-cpu`),
-[spark.nomad.executor.cpu](/guides/spark/configuration.html#spark-nomad-executor-cpu)
+[spark.nomad.executor.cpu](/guides/analytical-workloads/spark/configuration.html#spark-nomad-executor-cpu)
(set by `--executor-cpu`) and
-[spark.nomad.shuffle.cpu](/guides/spark/configuration.html#spark-nomad-shuffle-cpu)
+[spark.nomad.shuffle.cpu](/guides/analytical-workloads/spark/configuration.html#spark-nomad-shuffle-cpu)
properties. When running on Nomad, executors will be configured to use one core
by default, meaning they will only pull a single 1-core task at a time. You can
set the `spark.executor.cores` property (set by `--executor-cores`) to allow
@@ -64,9 +64,9 @@ Nomad does not restrict the network bandwidth of running tasks, but it does
allocate a non-zero number of Mbit/s to each task and uses this when bin packing
task groups onto Nomad clients. Spark defaults to requesting the minimum of 1
Mbit/s per task, but you can change this with the
-[spark.nomad.driver.networkMBits](/guides/spark/configuration.html#spark-nomad-driver-networkmbits),
-[spark.nomad.executor.networkMBits](/guides/spark/configuration.html#spark-nomad-executor-networkmbits), and
-[spark.nomad.shuffle.networkMBits](/guides/spark/configuration.html#spark-nomad-shuffle-networkmbits)
+[spark.nomad.driver.networkMBits](/guides/analytical-workloads/spark/configuration.html#spark-nomad-driver-networkmbits),
+[spark.nomad.executor.networkMBits](/guides/analytical-workloads/spark/configuration.html#spark-nomad-executor-networkmbits), and
+[spark.nomad.shuffle.networkMBits](/guides/analytical-workloads/spark/configuration.html#spark-nomad-shuffle-networkmbits)
properties.
## Log rotation
@@ -74,9 +74,9 @@ properties.
Nomad performs log rotation on the `stdout` and `stderr` of its tasks. You can
configure the number and size of log files it will keep for driver and
executor task groups using
-[spark.nomad.driver.logMaxFiles](/guides/spark/configuration.html#spark-nomad-driver-logmaxfiles)
-and [spark.nomad.executor.logMaxFiles](/guides/spark/configuration.html#spark-nomad-executor-logmaxfiles).
+[spark.nomad.driver.logMaxFiles](/guides/analytical-workloads/spark/configuration.html#spark-nomad-driver-logmaxfiles)
+and [spark.nomad.executor.logMaxFiles](/guides/analytical-workloads/spark/configuration.html#spark-nomad-executor-logmaxfiles).
## Next Steps
-Learn how to [dynamically allocate Spark executors](/guides/spark/dynamic.html).
+Learn how to [dynamically allocate Spark executors](/guides/analytical-workloads/spark/dynamic.html).
@@ -18,4 +18,4 @@ optionally the application driver itself, run as Nomad tasks in a Nomad job.
## Next Steps
The links in the sidebar contain detailed information about specific aspects of
-the integration, beginning with [Getting Started](/guides/spark/pre.html).
+the integration, beginning with [Getting Started](/guides/analytical-workloads/spark/pre.html).
@@ -79,4 +79,4 @@ $ spark-submit --class org.apache.spark.examples.SparkPi \
## Next Steps
-Learn how to [customize applications](/guides/spark/customizing.html).
+Learn how to [customize applications](/guides/analytical-workloads/spark/customizing.html).
@@ -19,4 +19,4 @@ description: |-
## Next Steps
-[Next step](/guides/spark/name.html)
+[Next step](/guides/analytical-workloads/spark/name.html)
@@ -27,7 +27,7 @@ When combined with ACLs, the isolation of namespaces can be enforced, only
allowing designated users access to read or modify the jobs and associated
objects in a namespace.
-When [resource quotas](/guides/security/quotas.html) are applied to a namespace they
+When [resource quotas](/guides/governance-and-policy/quotas.html) are applied to a namespace they
provide a means to limit resource consumption by the jobs in the namespace. This
can prevent a single actor from consuming excessive cluster resources and
negatively impacting other teams and applications sharing the cluster.
@@ -39,8 +39,8 @@ jobs, allocations, deployments, and evaluations.
Nomad does not namespace objects that are shared across multiple namespaces.
This includes nodes, [ACL policies](/guides/security/acl.html), [Sentinel
-policies](/guides/security/sentinel-policy.html), and [quota
-specifications](/guides/security/quotas.html).
+policies](/guides/governance-and-policy/sentinel/sentinel-policy.html), and [quota
+specifications](/guides/governance-and-policy/quotas.html).
## Working with Namespaces
......
@@ -21,7 +21,7 @@ This is not present in the open source version of Nomad.
When many teams or users are sharing Nomad clusters, there is the concern that a
single user could use more than their fair share of resources. Resource quotas
provide a mechanism for cluster administrators to restrict the resources that a
-[namespace](/guides/security/namespaces.html) has access to.
+[namespace](/guides/governance-and-policy/namespaces.html) has access to.
## Quotas Objects
......