PROCESS_WORKER_POOL.PY(1) User Commands PROCESS_WORKER_POOL.PY(1)

NAME

process_worker_pool.py - create a Parsl process worker pool

DESCRIPTION

usage: process_worker_pool.py [-h] [-d] [-a ADDRESSES] --cert_dir CERT_DIR
                              [-l LOGDIR] [-u UID] [-b BLOCK_ID]
                              [-c CORES_PER_WORKER] [-m MEM_PER_WORKER]
                              -t TASK_PORT
                              [--max_workers_per_node MAX_WORKERS_PER_NODE]
                              [-p PREFETCH_CAPACITY] [--hb_period HB_PERIOD]
                              [--hb_threshold HB_THRESHOLD]
                              [--drain_period DRAIN_PERIOD]
                              [--address_probe_timeout ADDRESS_PROBE_TIMEOUT]
                              [--poll POLL] -r RESULT_PORT
                              --cpu-affinity CPU_AFFINITY
                              [--available-accelerators [AVAILABLE_ACCELERATORS ...]]
                              [--enable_mpi_mode]
                              [--mpi-launcher {srun,aprun,mpiexec}]

options:

-h
       show this help message and exit
-d
       Enable logging at DEBUG level
-a ADDRESSES
       Comma-separated list of addresses at which the interchange can be reached
--cert_dir CERT_DIR
       Path to the certificate directory
-l LOGDIR
       Process worker pool log directory
-u UID
       Unique identifier string for the Manager
-b BLOCK_ID
       Block identifier for the Manager
-c CORES_PER_WORKER
       Number of cores assigned to each worker process. Default: 1.0
-m MEM_PER_WORKER
       GB of memory assigned to each worker process. Default: 0 (no assignment)
-t TASK_PORT
       REQUIRED: Task port for receiving tasks from the interchange
--max_workers_per_node MAX_WORKERS_PER_NODE
       Caps the maximum number of workers that can be launched. Default: infinity
-p PREFETCH_CAPACITY
       Number of tasks that can be prefetched to the manager. Default: 0
--hb_period HB_PERIOD
       Heartbeat period in seconds. Uses the manager default unless set
--hb_threshold HB_THRESHOLD
       Heartbeat threshold in seconds. Uses the manager default unless set
--drain_period DRAIN_PERIOD
       Drain this pool after the specified number of seconds. By default, the pool does not drain.
--address_probe_timeout ADDRESS_PROBE_TIMEOUT
       Timeout for probing a viable address to the interchange. Default: 30s
--poll POLL
       Poll period, in milliseconds
-r RESULT_PORT
       REQUIRED: Result port for posting results to the interchange
--cpu-affinity CPU_AFFINITY
       Whether/how workers should control CPU affinity
--available-accelerators [AVAILABLE_ACCELERATORS ...]
       Names of available accelerators; if not given, zero accelerators are assumed to be available
--enable_mpi_mode
       Enable MPI mode
--mpi-launcher {srun,aprun,mpiexec}
       MPI launcher to use; only applies when --enable_mpi_mode is set
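EXAMPLES

In normal operation this command line is generated by Parsl's HighThroughputExecutor rather than typed by hand, but it can be invoked directly for testing. The following is a minimal illustrative sketch: the addresses, port numbers, certificate directory path, and the cpu-affinity value are placeholder assumptions, not documented defaults, and must match the values the interchange is actually using.

       # Connect a worker pool to an interchange reachable at the given
       # addresses, using placeholder task/result ports and cert directory.
       process_worker_pool.py -d \
           -a 10.0.0.5,127.0.0.1 \
           --cert_dir /path/to/certs \
           -t 54759 \
           -r 54760 \
           -c 1.0 \
           --max_workers_per_node 8 \
           --cpu-affinity none

The task port (-t) is where the pool receives tasks from the interchange and the result port (-r) is where it posts results back, so both must agree with the interchange's configuration.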
May 2024 process_worker_pool.py 2024.05.06+ds