By default, `spark-submit` in `cluster` mode will submit your application
to the Nomad cluster and return immediately. You can use the
[spark.nomad.cluster.monitorUntil](/guides/analytical-workloads/spark/configuration.html#spark-nomad-cluster-monitoruntil) configuration property to have
`spark-submit` monitor the job continuously. Note that, with this property set,
killing `spark-submit` will *not* stop the Spark application, since it will be
running independently in the Nomad cluster.
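
For instance, an invocation along these lines would keep `spark-submit` attached until the application finishes. This is a sketch: the main class and JAR are placeholders, and `complete` is one of the values described in the configuration reference.

```shell
# Hypothetical application class and JAR; substitute your own.
$ spark-submit \
    --class com.example.MyApp \
    --master nomad \
    --deploy-mode cluster \
    --conf spark.nomad.cluster.monitorUntil=complete \
    my-app.jar
```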
...
...
cause the driver process to continue to run. You can force termination
It is possible to reconstruct the web UI of a completed application using
Spark's history server.
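
For the history server to have anything to reconstruct, the application must write its event log to a location the server can later read. A minimal sketch using Spark's standard event-logging properties, assuming a reachable HDFS path (the `hdfs://hdfs.example.com/spark-events` directory is a placeholder):

```shell
# Hypothetical event log directory; the history server must be able to read it.
$ spark-submit \
    --master nomad \
    --deploy-mode cluster \
    --conf spark.eventLog.enabled=true \
    --conf spark.eventLog.dir=hdfs://hdfs.example.com/spark-events \
    my-app.jar
```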