PROCESS_WORKER_POOL.PY(1) User Commands PROCESS_WORKER_POOL.PY(1)

NAME
       process_worker_pool.py - create a parsl process worker pool

SYNOPSIS
       process_worker_pool.py [-h] [-d] [-a ADDRESSES] --cert_dir CERT_DIR
              [-l LOGDIR] [-u UID] [-b BLOCK_ID] [-c CORES_PER_WORKER]
              [-m MEM_PER_WORKER] -t TASK_PORT [--max_workers MAX_WORKERS]
              [-p PREFETCH_CAPACITY] [--hb_period HB_PERIOD]
              [--hb_threshold HB_THRESHOLD]
              [--address_probe_timeout ADDRESS_PROBE_TIMEOUT] [--poll POLL]
              -r RESULT_PORT --cpu-affinity CPU_AFFINITY
              [--available-accelerators [AVAILABLE_ACCELERATORS ...]]
              [--enable_mpi_mode] [--mpi-launcher {srun,aprun,mpiexec}]

OPTIONS
       -h     show this help message and exit

       -d     Enable logging at DEBUG level

       -a ADDRESSES
              Comma-separated list of addresses at which the interchange can
              be reached

       --cert_dir CERT_DIR
              Path to the certificate directory

       -l LOGDIR
              Process worker pool log directory

       -u UID
              Unique identifier string for the Manager

       -b BLOCK_ID
              Block identifier for the Manager

       -c CORES_PER_WORKER
              Number of cores assigned to each worker process. Default: 1.0

       -m MEM_PER_WORKER
              GB of memory assigned to each worker process. Default: 0 (no
              assignment)

       -t TASK_PORT
              REQUIRED: Task port for receiving tasks from the interchange

       --max_workers MAX_WORKERS
              Caps the maximum number of workers that can be launched.
              Default: infinity

       -p PREFETCH_CAPACITY
              Number of tasks that can be prefetched to the manager.
              Default: 0

       --hb_period HB_PERIOD
              Heartbeat period in seconds. Uses the manager default unless
              set

       --hb_threshold HB_THRESHOLD
              Heartbeat threshold in seconds. Uses the manager default unless
              set

       --address_probe_timeout ADDRESS_PROBE_TIMEOUT
              Timeout for probing a viable address to the interchange.
              Default: 30s

       --poll POLL
              Poll period in milliseconds

       -r RESULT_PORT
              REQUIRED: Result port for posting results to the interchange

       --cpu-affinity CPU_AFFINITY
              Whether and how workers should control CPU affinity

       --available-accelerators [AVAILABLE_ACCELERATORS ...]
              Names of available accelerators

       --enable_mpi_mode
              Enable MPI mode

       --mpi-launcher {srun,aprun,mpiexec}
              MPI launcher to use; only meaningful when --enable_mpi_mode is
              set
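EXAMPLES
       A sketch of a manual invocation against a running interchange. The
       address, certificate directory, block identifier, and port numbers
       below are placeholders and must match the interchange that manages
       this pool; in normal use the executor constructs this command line
       itself, so a hand-run pool is mainly useful for debugging.

```shell
# Hypothetical invocation; 10.0.0.5, /var/parsl/certs, and the port
# numbers are placeholders -- substitute the values from your own
# interchange configuration before running.
process_worker_pool.py \
    -a 10.0.0.5 \
    --cert_dir /var/parsl/certs \
    -t 54000 \
    -r 54001 \
    -b 0 \
    -c 1.0 \
    --max_workers 4 \
    --poll 10
```

       Note that -t, -r, and --cpu-affinity handling are coupled to the
       interchange: the pool exits if it cannot reach the given task and
       result ports within the address probe timeout.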
February 2024 process_worker_pool.py 2024.02.26+ds