vllm.v1.executor.uniproc_executor ¶
ExecutorWithExternalLauncher ¶
Bases: UniProcExecutor
An executor that uses external launchers to launch engines. It is designed specifically for torchrun-compatible launchers, for offline inference with tensor parallelism.
See https://github.com/vllm-project/vllm/issues/11400 for the motivation and examples/offline_inference/torchrun_example.py for a usage example.
The key idea: although this is tensor-parallel inference, each executor creates only one worker. Users launch multiple engines with a torchrun-compatible launcher, and all of these engines cooperate to process the same prompts. Because scheduling is deterministic, all engines generate the same outputs, so they never need to synchronize state with one another. A sketch of this mode follows below.
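A minimal sketch, adapted from the referenced examples/offline_inference/torchrun_example.py (the model name and parallel size are illustrative). Every rank runs the same script and builds the same LLM with the external-launcher backend, and the script is started with a torchrun-compatible launcher, e.g. torchrun --nproc-per-node=2 script.py:

from vllm import LLM, SamplingParams

# All ranks receive the same prompts; with deterministic (greedy) sampling,
# every engine produces identical outputs without cross-engine
# synchronization.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0)

llm = LLM(
    model="facebook/opt-125m",  # illustrative model
    tensor_parallel_size=2,  # must match the launcher's world size
    distributed_executor_backend="external_launcher",
)

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)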
_distributed_args ¶
_init_executor ¶
Initialize the worker and load the model.
determine_available_memory ¶
UniProcExecutor ¶
Bases: Executor
_distributed_args ¶
Return (distributed_init_method, rank, local_rank).
_init_executor ¶
Initialize the worker and load the model.
check_health ¶
collective_rpc ¶
collective_rpc(
    method: str | Callable,
    timeout: float | None = None,
    args: tuple = (),
    kwargs: dict | None = None,
    non_block: bool = False,
) -> list[Any]
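As an illustration, a hedged sketch of the single-worker RPC path: under UniProcExecutor the call executes in-process on the lone worker and returns its result in a one-element list. Passing a Callable is assumed to invoke it with the worker object as its first argument; the attribute inspected below is purely illustrative:

from vllm import LLM

llm = LLM(model="facebook/opt-125m", enforce_eager=True)  # illustrative model

# The string form calls a worker method by name; the Callable form runs an
# arbitrary function on the worker (in-process here, so nothing is
# serialized). Both return a list with one entry per worker.
results = llm.collective_rpc(lambda worker: type(worker).__name__)
assert len(results) == 1  # UniProcExecutor has exactly one worker
print(results)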
reinitialize_distributed ¶
reinitialize_distributed(
    reconfig_request: ReconfigureDistributedRequest,
) -> None