MLLM-Safety-Study

This is the collection for the CVPR 2025 paper "Do we really need curated malicious data for safety alignment in multi-modal large language models?"

Contents:
- palpit/MLLM-Safety-Study (dataset, updated Apr 18)
- palpit/LLaVA-v1.5-7b-2000-llava-med-lora (updated Apr 27)
- palpit/LLaVA-v1.5-13b-2000-llava-med-lora (updated Apr 27)