Jake Champlin authored
We recently ran into an issue on a small percentage of nomad-clients where the nomad-client was running successfully, but due to a race condition, could not correctly bind to the docker socket. This caused all of our nomad jobs to be allocated to a single nomad-client instead of being spread evenly across our clients. The only way to discover this was to run `nomad node-status <node>` and count each job allocation per node. This can lead to a fairly long debugging process if there are several nomad-clients. Including the number of allocations for each node in the `node-status` command would save a large amount of debug time.

```
jake@biscuits [12:08:41] [~]
-> % nomad node-status
ID        Datacenter  Name      Class   Drain  Status  Allocations
2b0aabc5  dc1         biscuits  <none>  false  ready   0
```

```
jake@biscuits [12:08:55] [~]
-> % nomad node-status
ID        Datacenter  Name      Class   Drain  Status  Allocations
2b0aabc5  dc1         biscuits  <none>  false  ready   1
```
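For context, the allocation count shown above can also be derived from the Nomad HTTP API. The following is a minimal sketch (not the actual CLI change in this commit) that assumes the public Nomad Go API client at `github.com/hashicorp/nomad/api` and an agent reachable at the default address; it lists the registered nodes and counts the allocations placed on each one, which is the same information the new Allocations column surfaces.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	// Connect to the local Nomad agent (DefaultConfig honors NOMAD_ADDR).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// List all registered client nodes.
	nodes, _, err := client.Nodes().List(nil)
	if err != nil {
		log.Fatal(err)
	}

	for _, node := range nodes {
		// Fetch the allocations placed on this node and count them.
		allocs, _, err := client.Nodes().Allocations(node.ID, nil)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s  %s  %d allocations\n", node.ID[:8], node.Name, len(allocs))
	}
}
```

Counting per-node allocations this way across many clients is exactly the manual step the new column removes.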