Model Description
QTSplus-3B-FT is a Qwen2.5‑VL–based multimodal LLM fine‑tuned with the Query‑Aware Token Selector (QTSplus), a lightweight visual token selection module that acts as an information gate between the vision encoder and the LLM.
- Query‑aware selection: scores vision tokens via cross‑attention against the input text query (see the sketch after this list).
- Adaptive retention: predicts an instance‑specific budget and keeps only the most relevant tokens.
- Temporal reasoning: a small re‑encoder preserves temporal order with absolute time cues.
- Efficient long‑video understanding: up to 89% vision token compression and 28% end‑to‑end latency reduction on long videos (see paper for details).
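To make the mechanism concrete, here is a minimal, illustrative PyTorch sketch of query‑aware selection. It is not the released implementation, and all names in it (QueryAwareTokenSelector, budget_head, etc.) are hypothetical: vision tokens are scored by cross‑attention against the text query, an instance‑specific retention ratio is predicted and clamped to [rho_min, rho_max], and only the top‑scoring tokens are kept in their original temporal order.

import torch
import torch.nn as nn

class QueryAwareTokenSelector(nn.Module):
    """Illustrative sketch only; the real QTSplus module differs in detail."""

    def __init__(self, d_model: int, rho_min: float = 0.05, rho_max: float = 0.5, n_max: int = 25600):
        super().__init__()
        self.score_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.budget_head = nn.Linear(d_model, 1)  # predicts the instance-specific retention ratio
        self.rho_min, self.rho_max, self.n_max = rho_min, rho_max, n_max

    def forward(self, vision_tokens: torch.Tensor, query_tokens: torch.Tensor):
        # vision_tokens: (B, Nv, D) from the vision encoder; query_tokens: (B, Nq, D) from the text query.
        # Cross-attention from the query to the vision tokens; averaged attention weights act as relevance scores.
        _, attn = self.score_attn(query_tokens, vision_tokens, vision_tokens, need_weights=True)
        scores = attn.mean(dim=1)                                   # (B, Nv)
        # Instance-specific budget, clamped to [rho_min, rho_max] and capped at n_max tokens.
        rho = torch.sigmoid(self.budget_head(query_tokens.mean(dim=1)))  # (B, 1)
        rho = self.rho_min + (self.rho_max - self.rho_min) * rho
        k = min(self.n_max, max(1, int(rho.mean().item() * vision_tokens.size(1))))
        keep = scores.topk(k, dim=-1).indices.sort(dim=-1).values   # sorted indices preserve temporal order
        batch_idx = torch.arange(vision_tokens.size(0)).unsqueeze(-1)
        return vision_tokens[batch_idx, keep], keep

For example, with about 3,000 vision tokens and a predicted ratio of 0.1, roughly 300 tokens would be forwarded to the LLM; the small re‑encoder mentioned above then reattaches temporal/position cues before they reach the language model.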
Intended Uses & Limitations
Intended uses
- Long‑video question answering and captioning
- Multi‑image reasoning and story understanding
- Efficient multimodal chat with reduced latency on long inputs
Limitations
- May miss fine details if the predicted retention budget is too small.
- Inherits biases and failure modes from the base Qwen2.5‑VL model and training data.
- Not a safety‑aligned system; outputs may be inaccurate or unsafe without human oversight.
Quick Start
The repository is designed around a conda‑based Python 3.11 environment with a CUDA‑enabled GPU.
- Create and activate the conda environment
conda create -n qtsplus python=3.11 -y
conda activate qtsplus
- Install toolchain and CUDA toolkit
conda install conda-forge::gcc=11 conda-forge::gxx=11 -y
conda install nvidia/label/cuda-12.8.1::cuda-toolkit -y
conda install av -c conda-forge -y
- Install PyTorch with CUDA 12.8 support
pip3 install torch==2.9.0 torchvision --index-url https://download.pytorch.org/whl/cu128
- Install core Python libraries
pip install transformers==4.57.1
DS_BUILD_CUTLASS_OPS=0 DS_BUILD_RAGGED_DEVICE_OPS=0 DS_BUILD_EVOFORMER_ATTN=0 pip install deepspeed
pip install accelerate pandas wandb matplotlib scikit-learn datasets evaluate ftfy sentencepiece bitsandbytes qwen-vl-utils
- Install FlashAttention (prebuilt wheel)
pip install https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.4.22/flash_attn-2.8.1+cu128torch2.9-cp311-cp311-linux_x86_64.whl
This wheel is specific to Linux x86_64, CUDA 12.8, PyTorch 2.9.0 and Python 3.11; if you deviate from this configuration, you will need to install a compatible FlashAttention build instead.
- Verify installation
After installation, you should be able to run:
python -c "import torch, transformers, deepspeed, accelerate; print(torch.cuda.is_available())"
which should print True on a correctly configured GPU machine.
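Optionally, a slightly broader one‑liner also confirms that the FlashAttention wheel matches the installed Python/PyTorch/CUDA stack (the exact version strings printed will vary with your setup):
python -c "import sys, torch, flash_attn; print(sys.version, torch.__version__, torch.version.cuda, flash_attn.__version__)"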
Video example
import torch, glob, os
from transformers import AutoModelForCausalLM, AutoProcessor
from qwen_vl_utils import process_vision_info
model_id = "AlpachinoNLP/QTSplus-3B-FT"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32  # fp16 is poorly supported on CPU; fall back to fp32
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to(dtype=dtype, device=device).eval()
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
question = "Summarize the key events in this video."
video_path = "/path/to/video.mp4"
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": video_path, "max_pixels": 360*420, "fps": 1.0},
        {"type": "text", "text": question},
    ],
}]
chat = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
_, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(text=[chat], images=None, videos=video_inputs, padding=True, return_tensors="pt", **video_kwargs)
inputs = inputs.to(dtype=dtype, device=device)  # match the model dtype; only floating-point tensors are cast
# Pack vision inputs for QTSplus
pixel_values_videos = inputs.pop("pixel_values_videos", None)
video_grid_thw = inputs.pop("video_grid_thw", None)
inputs.pop("second_per_grid_ts", None)
vision_input = None
if pixel_values_videos is not None and video_grid_thw is not None:
vision_input = {"pixel_values_videos": pixel_values_videos, "video_grid_thw": video_grid_thw}
# Text ids from the question only (exclude special/system/vision tokens)
question_ids = processor.tokenizer(question, return_tensors="pt", add_special_tokens=False).input_ids.to(dtype=torch.long, device=device)
out_ids = model.generate(vision_input=vision_input, input_ids=inputs.input_ids, question_input_ids=question_ids, max_new_tokens=256)
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out_ids)]
text = processor.batch_decode(trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(text[0])
Multiple images (treated as a video sequence)
images_dir = "/path/to/images"
image_list = sorted(glob.glob(os.path.join(images_dir, "*.jpg"))) or sorted(glob.glob(os.path.join(images_dir, "*.jpeg")))
question = "What story do these images tell?"
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": image_list},
        {"type": "text", "text": question},
    ],
}]
chat = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
_, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(text=[chat], images=None, videos=video_inputs, padding=True, return_tensors="pt", **video_kwargs).to(dtype=dtype, device=device)
pixel_values_videos = inputs.pop("pixel_values_videos", None)
video_grid_thw = inputs.pop("video_grid_thw", None)
inputs.pop("second_per_grid_ts", None)
vision_input = {"pixel_values_videos": pixel_values_videos, "video_grid_thw": video_grid_thw}
question_ids = processor.tokenizer(question, return_tensors="pt", add_special_tokens=False).input_ids.to(dtype=torch.long, device=device)
out_ids = model.generate(vision_input=vision_input, input_ids=inputs.input_ids, question_input_ids=question_ids, max_new_tokens=256)
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
Notes
- The chat template is applied via processor.apply_chat_template and expects the messages schema shown above.
- QTSplus expects the vision payload under the vision_input keyword argument during generation.
- For fully offline use, pass local_files_only=True to the from_pretrained calls once the files are cached locally (see the example after this list).
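For instance, once the weights and processor files are cached, loading can be pinned to the local cache like this (standard Transformers arguments; nothing QTSplus‑specific):

from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "AlpachinoNLP/QTSplus-3B-FT"
# local_files_only=True fails fast instead of contacting the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, local_files_only=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True, local_files_only=True)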
Efficiency & Controls
The following QTSplus hyperparameters in config.json control compression and selection behavior:
- qts_plus_rho_min / qts_plus_rho_max: minimum/maximum retention-ratio bounds (default: 0.05 / 0.5)
- qts_plus_tau_s: scoring temperature for the cross‑attention (default: 0.5)
- qts_plus_nmax: hard cap on the number of selected tokens per sample (default: 25600)

These settings trade off detail against speed and memory. See the paper for guidance, ablations, and latency/throughput measurements.
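As an illustration, the values can be overridden at load time through the standard Transformers config path. This sketch assumes QTSplus reads these attributes from the model config at runtime; the numbers are arbitrary examples, not recommendations.

from transformers import AutoConfig, AutoModelForCausalLM

model_id = "AlpachinoNLP/QTSplus-3B-FT"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.qts_plus_rho_max = 0.3   # keep at most 30% of vision tokens (default: 0.5)
config.qts_plus_nmax = 8192     # tighter hard cap than the default 25600
model = AutoModelForCausalLM.from_pretrained(model_id, config=config, trust_remote_code=True)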
Safety, Bias, and Limitations
- Outputs may be factually incorrect, biased, or unsafe. Do not use without human oversight.
- QTSplus compresses the vision stream; extremely small budgets may drop rare but important details.
- Inherits safety/bias characteristics from the underlying Qwen2.5‑VL model and training data.
Citation
If you find this work helpful, please cite:
@misc{li2025seeingforesttreesqueryaware,
title = {Seeing the Forest and the Trees: Query-Aware Tokenizer for Long-Video Multimodal Language Models},
author = {Siyou Li and Huanan Wu and Juexi Shao and Yinghao Ma and Yujian Gan and Yihao Luo and Yuwei Wang and Dong Nie and Lu Wang and Wengqing Wu and Le Zhang and Massimo Poesio and Juntao Yu},
year = {2025},
eprint = {2511.11910},
archivePrefix = {arXiv},
primaryClass = {cs.CV},
url = {https://arxiv.org/abs/2511.11910}
}