vllm.model_executor.models.bee ¶
BeeDummyInputsBuilder ¶
Bases: LlavaDummyInputsBuilder[BeeProcessingInfo]
Source code in vllm/model_executor/models/bee.py
get_dummy_mm_data ¶
get_dummy_mm_data(
    seq_len: int,
    mm_counts: Mapping[str, int],
    mm_options: Mapping[str, BaseDummyOptions] | None = None,
) -> MultiModalDataDict
Source code in vllm/model_executor/models/bee.py
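get_dummy_mm_data returns placeholder multimodal inputs that vLLM uses to profile worst-case memory usage before serving. The sketch below is not the Bee implementation; it only illustrates the shape of the returned data, assuming dummy PIL images at a hypothetical worst-case resolution of 1344×1344.

```python
from collections.abc import Mapping

from PIL import Image

# Hedged sketch: build an "image" entry like a MultiModalDataDict would carry.
# The 1344x1344 resolution is an assumed worst case, not taken from bee.py.
def build_dummy_images(mm_counts: Mapping[str, int]) -> dict[str, list[Image.Image]]:
    num_images = mm_counts.get("image", 0)
    return {
        "image": [Image.new("RGB", (1344, 1344)) for _ in range(num_images)]
    }

# Example: profiling data for a request with two images.
dummy = build_dummy_images({"image": 2})
assert len(dummy["image"]) == 2
```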
BeeForConditionalGeneration ¶
Bases: LlavaOnevisionForConditionalGeneration
Source code in vllm/model_executor/models/bee.py
hf_to_vllm_mapper class-attribute instance-attribute ¶
hf_to_vllm_mapper = WeightsMapper(
    orig_to_new_prefix={
        "model.language_model.": "language_model.model.",
        "model.vision_tower.": "vision_tower.",
        "model.multi_modal_projector.": "multi_modal_projector.",
        "model.image_newline": "image_newline",
        "lm_head.": "language_model.lm_head.",
    }
)
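The mapper translates Hugging Face checkpoint weight names into vLLM's module layout. The snippet below only re-expresses the orig_to_new_prefix table above as a plain prefix rewrite to show its effect on a few keys; WeightsMapper itself handles additional cases.

```python
# The prefix table from hf_to_vllm_mapper above, applied as a plain rewrite.
ORIG_TO_NEW_PREFIX = {
    "model.language_model.": "language_model.model.",
    "model.vision_tower.": "vision_tower.",
    "model.multi_modal_projector.": "multi_modal_projector.",
    "model.image_newline": "image_newline",
    "lm_head.": "language_model.lm_head.",
}

def map_weight_name(hf_name: str) -> str:
    for old, new in ORIG_TO_NEW_PREFIX.items():
        if hf_name.startswith(old):
            return new + hf_name[len(old):]
    return hf_name

# An HF decoder weight lands under vLLM's language_model submodule.
assert (
    map_weight_name("model.language_model.layers.0.self_attn.q_proj.weight")
    == "language_model.model.layers.0.self_attn.q_proj.weight"
)
assert map_weight_name("lm_head.weight") == "language_model.lm_head.weight"
```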
__init__ ¶
__init__(*, vllm_config: VllmConfig, prefix: str = "") -> None
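The model is constructed from a VllmConfig by vLLM's model loader rather than instantiated directly; users reach it through the standard engine entry points. A minimal usage sketch, with a placeholder checkpoint name that you would replace with the actual Bee checkpoint:

```python
from vllm import LLM, SamplingParams

# "org/bee-checkpoint" is a placeholder, not a real model ID.
llm = LLM(model="org/bee-checkpoint")
outputs = llm.generate(["Describe the image."], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```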
BeeMultiModalProjector ¶
Bases: Module
Source code in vllm/model_executor/models/bee.py
__init__ ¶
Source code in vllm/model_executor/models/bee.py
forward ¶
Source code in vllm/model_executor/models/bee.py
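The projector's signature is collapsed here. In LLaVA-style models the multimodal projector is typically a small MLP that maps vision-tower features into the language model's hidden size; the sketch below shows only that common pattern, with layer sizes and activation chosen as assumptions rather than taken from bee.py.

```python
import torch
from torch import nn

class ToyMultiModalProjector(nn.Module):
    """Illustrative LLaVA-style projector: vision hidden size -> LM hidden size."""

    def __init__(self, vision_hidden_size: int, text_hidden_size: int) -> None:
        super().__init__()
        self.linear_1 = nn.Linear(vision_hidden_size, text_hidden_size)
        self.act = nn.GELU()
        self.linear_2 = nn.Linear(text_hidden_size, text_hidden_size)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # (num_patches, vision_hidden) -> (num_patches, text_hidden)
        return self.linear_2(self.act(self.linear_1(image_features)))

# Example: project 576 patch embeddings from a 1024-dim vision tower
# into a 4096-dim language-model embedding space.
proj = ToyMultiModalProjector(1024, 4096)
print(proj(torch.randn(576, 1024)).shape)  # torch.Size([576, 4096])
```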
BeeProcessingInfo ¶
Bases: LlavaNextProcessingInfo
Source code in vllm/model_executor/models/bee.py
_get_num_unpadded_features ¶
_get_num_unpadded_features(
    *,
    original_height: int,
    original_width: int,
    npatches: int,
    num_patch_height: int,
    num_patch_width: int,
) -> tuple[int, int]
Override to use correct max_num_patches from vision_aspect_ratio.
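For orientation, the sketch below follows the LLaVA-OneVision-style unpadding computation that this override refines; it is not the Bee source. In particular, parsing max_num_patches from a vision_aspect_ratio string such as "anyres_max_9" is an assumption about what the docstring refers to.

```python
import math

# Hedged sketch of LLaVA-OneVision-style unpadded feature counting, with
# max_num_patches assumed to come from a string like "anyres_max_9".
def num_unpadded_features(
    *,
    original_height: int,
    original_width: int,
    npatches: int,
    num_patch_height: int,
    num_patch_width: int,
    vision_aspect_ratio: str = "anyres_max_9",  # assumed default
) -> tuple[int, int]:
    current_height = npatches * num_patch_height
    current_width = npatches * num_patch_width

    # Undo the padding added when the image was fitted onto the patch grid.
    if original_width / original_height > current_width / current_height:
        new_height = int(round(original_height * (current_width / original_width), 7))
        padding = (current_height - new_height) // 2
        current_height -= 2 * padding
    else:
        new_width = int(round(original_width * (current_height / original_height), 7))
        padding = (current_width - new_width) // 2
        current_width -= 2 * padding

    unpadded_features = current_height * current_width
    newline_features = current_height

    # Cap the token count using max_num_patches parsed from vision_aspect_ratio.
    max_num_patches = int(vision_aspect_ratio.removeprefix("anyres_max_"))
    ratio = math.sqrt(current_height * current_width / (max_num_patches * npatches**2))
    if ratio > 1.1:
        unpadded_features = int(current_height // ratio) * int(current_width // ratio)
        newline_features = int(current_height // ratio)

    return unpadded_features, newline_features

# Example: a 1000x667 image tiled onto a 2x3 grid, 27 patches per tile side.
print(num_unpadded_features(
    original_height=667, original_width=1000,
    npatches=27, num_patch_height=2, num_patch_width=3,
))
```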