- command-line interface to job_manager

Synopsis

add [-c | --cache] [-s | --server] <job_description>
modify [-c | --cache] [-s | --server] [-i | --index] [-p | --pattern] <job_description>
delete [-c | --cache] [-s | --server] [-i | --index] [-p | --pattern]
list [-c | --cache] [-s | --server] [-p | --pattern] [-t | --terse]
merge [-c | --cache] <[[user@]remote_host:]remote_cache> [remote_hostname]
update [-c | --cache]
daemon [-c | --cache]

Description

Provides a command-line interface to the job_manager python module, which enables a collection of jobs (principally long-running high-performance computing calculations) to be easily monitored and managed.

Jobs can be added, modified and deleted, and subsets of the jobs can be viewed. Jobs are categorised according to the computer on which they run. The local computer, on which the script is run, is treated specially and is called localhost.

Data is saved to a cache file between runs and different cache files can be merged together, including caches on remote servers directly over ssh.


add
Add a job running on the specified server with job details given by the job description.

modify
Modify the selected job(s) according to the job description fields supplied. Note that if neither a pattern nor an index is provided then no job is selected to be modified.

delete
Delete the specified job(s). Note that if neither a pattern nor an index is provided then no job is selected to be deleted.

list
List jobs which match the supplied search criteria. The complete list of jobs is printed if no options are specified. Only fields of the job description which are not null are printed.

merge
Merge jobs from the remote_cache file into the current cache. The remote_hostname nickname must be specified if the remote cache is actually a local file. If remote_hostname is not given and the remote_cache is on a remote machine, then the hostname in the address is used as the remote_hostname parameter.

update
Check all jobs on the localhost server and update the status of queueing or running jobs if they have started running or finished. The job status is checked by searching for the job_id using ps, qstat (for PBS-based queueing systems) and llq (for LoadLeveler queueing systems).

daemon
Run the update command once a minute. Designed to be run in the background as a daemon-type process.
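As a rough illustration of how the update check works for interactive jobs, the Python sketch below tests whether a process id is still alive using ps, and wraps an update function in a once-a-minute loop as the daemon command describes. All names here are hypothetical illustrations, not the job_manager internals, and the qstat/llq cases for queueing systems are omitted.

```python
import os
import subprocess
import time

def pid_is_running(pid):
    """Return True if a process with this pid exists, using ps.

    Sketch of the interactive-job case only; the real tool also
    queries qstat (PBS) and llq (LoadLeveler) for queued jobs.
    """
    result = subprocess.run(
        ["ps", "-p", str(pid)],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    # ps exits with status 0 only if the process was found.
    return result.returncode == 0

def daemon_loop(update, interval=60):
    """Run the supplied update function once a minute, as the
    daemon command does (hypothetical helper name)."""
    while True:
        update()
        time.sleep(interval)

# The current process is certainly running.
print(pid_is_running(os.getpid()))
```

A job whose pid is no longer found would have its status moved from running to finished by the update step.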

Job description

The job_description consists of a list of key-value pairs. A new pair is started by a new key, so each value can contain spaces. The keys must terminate with a colon (‘:’) and have a space between the end of the key and the first word in the value. See below for examples.
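The parsing rule above (a key terminates with a colon; the value runs until the next key, so values may contain spaces) can be sketched in Python. This is an illustration of the stated grammar, not the module's actual parser:

```python
def parse_job_description(tokens):
    """Parse whitespace-split tokens into a key-value dict.

    A token ending in ':' starts a new key; all following tokens up
    to the next key are joined to form the (possibly multi-word) value.
    Hypothetical helper, for illustration only.
    """
    description = {}
    key = None
    for token in tokens:
        if token.endswith(":"):
            key = token[:-1]
            description[key] = ""
        elif key is not None:
            # Values may contain spaces: append to the current value.
            description[key] = (description[key] + " " + token).strip()
    return description

# Example: the multi-word comment is gathered into a single value.
parse_job_description("job_id: 1234 comment: a test calculation".split())
```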

Available elements of the job description are:

ID of the job. This ought to be unique and, in order to work with the update and daemon commands, should identify the job by either being the pid of the job (for jobs running interactively) or the ID of the job in the queueing system. This value is most conveniently obtained from the environment or a queueing system environment variable.
Path to the directory in which the job is running.
Name of the program being executed.
Filename of the input file.
Filename of the output file.
Status of the job. Available values are: unknown, held, queueing, running, finished and analysed. Default: unknown.
File name of the submit script used. Only relevant for jobs run on clusters with queueing systems.
Comment and notes on the job.

Unless specified above, all elements default to being a null value.

A job must have a job_id, path and program specified. Other attributes are optional. Only the attributes to be set or modified need to be specified with the add and modify commands.


Options

-c, --cache Specify the location of the cache file containing data from previous runs. The default is $HOME/.cache/jm/jm.cache. The directory structure for the cache file will be created if necessary.
-s, --server Specify the server of the job. The default is the localhost server except for the list command, where the default is all servers. Can be specified multiple times, in which case the command is applied to each server in turn. However, this rarely makes sense for the add command.
-i, --index Select a job by its index on the specified server(s). Can be specified multiple times in order to select multiple jobs.
-p, --pattern Select a job by a given regular expression on the specified server(s). The regular expression is tested against all fields in the job description for each job and a job is selected if any of the fields match the regular expression.
-t, --terse Print only the hostname, index, job id and status of each job.
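The --pattern selection rule can be illustrated with a short Python sketch: a job is selected if the regular expression matches any of its non-null fields. The helper name and the dict representation of jobs are assumptions for illustration, not the actual implementation.

```python
import re

def select_jobs(jobs, pattern):
    """Return the jobs whose description has any non-null field
    matching the regular expression (hypothetical helper)."""
    regex = re.compile(pattern)
    return [
        job for job in jobs
        if any(value is not None and regex.search(str(value))
               for value in job.values())
    ]

jobs = [
    {"job_id": "1234", "path": "/scratch/water", "status": "running"},
    {"job_id": "5678", "path": "/scratch/benzene", "status": "finished"},
]

# Matches against the path field of the first job only.
select_jobs(jobs, "water")
```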


Examples

Create a job from inside a script. $$ is the current process id in bash.

$ add job_id: $$ path: $PWD status: running 

List all jobs.

$ list

Modify part of the job description.

$ modify --index 0 comment: a test calculation

Automatically update the status of running jobs.

$ update

Run a daemon process to automatically update the status of running jobs once a minute using a non-default cache file.

$ daemon --cache /path/to/cache

Merge jobs from a remote server into the local job cache:

$ merge user@remote_server_fqdn:/path/to/remote_cache remote_server_name


The remote file is transferred by scp and requires password-free access to the remote server (e.g. by using ssh keys and ssh-agent). If this is not possible, copy the remote cache to the local machine and then merge using the local copy.
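One plausible reading of the merge semantics, sketched in Python: each cache maps a server name to its list of jobs, and jobs recorded as localhost in the remote cache belong to the remote machine, so they are filed under the supplied remote_hostname nickname. The cache representation and helper name are assumptions for illustration; the actual on-disk format is not documented here.

```python
def merge_caches(local, remote, remote_hostname):
    """Merge a remote cache (dict of server name -> job list) into a
    copy of the local cache, relabelling the remote machine's own
    localhost jobs with its nickname (hypothetical sketch)."""
    merged = {server: list(jobs) for server, jobs in local.items()}
    for server, jobs in remote.items():
        # "localhost" on the remote side means the remote machine itself.
        target = remote_hostname if server == "localhost" else server
        merged.setdefault(target, []).extend(jobs)
    return merged

local = {"localhost": [{"job_id": "1"}]}
remote = {"localhost": [{"job_id": "2"}]}
merge_caches(local, remote, "cluster")
```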

List a subset of jobs.

$ list --server remote_server
$ list --server localhost

Delete a job on the remote server.

$ delete --server remote --index 0


License

The script and the job_manager python module are distributed under the Modified BSD License. Please see the source files for more information.


Author

Contact James Spencer regarding bug reports, suggestions for improvements or code contributions.