Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization

Weiyun Wang2,1, Zhe Chen3,1, Wenhai Wang4,1, Yue Cao3,1, Yangzhou Liu3,1,
Zhangwei Gao1, Jinguo Zhu1, Xizhou Zhu5,1, Lewei Lu6, Yu Qiao1, Jifeng Dai5,1

1OpenGVLab, Shanghai AI Laboratory, 2Fudan University, 3Nanjing University,
4The Chinese University of Hong Kong, 5Tsinghua University, 6SenseTime Research,

News

❗️ We have updated the InternVL2.5-MPO series, which outperform their counterparts without MPO by an average of 2 points across all scales on the OpenCompass leaderboard. Please refer to this page for more details.

Abstract

Existing open-source multimodal large language models (MLLMs) generally follow a training process involving pre-training and supervised fine-tuning. However, these models suffer from distribution shifts, which limit their multimodal reasoning ability, particularly their Chain-of-Thought (CoT) performance. To address this, we introduce a preference optimization (PO) process to enhance the multimodal reasoning capabilities of MLLMs. Specifically, (1) on the data side, we design an automated preference data construction pipeline to create MMPR, a high-quality, large-scale multimodal reasoning preference dataset; and (2) on the model side, we explore integrating PO with MLLMs, developing a simple yet effective method, termed Mixed Preference Optimization (MPO), which boosts multimodal CoT performance. Our approach demonstrates improved performance across multiple benchmarks, particularly in multimodal reasoning tasks. Notably, our model, InternVL2-8B-MPO, achieves an accuracy of 67.0 on MathVista, outperforming InternVL2-8B by 8.7 points and achieving performance comparable to the 10× larger InternVL2-76B. We hope this study will inspire further advancements in MLLMs.

Figure: Open-source model performance on MathVista. The X- and Y-axes represent the accuracy evaluated with direct-answer responses and CoT responses, respectively. The bubble size is positively correlated with the number of model parameters. The values in parentheses indicate the performance gap between CoT and direct-answer responses. Notably, most open-source models perform worse when answering with CoT.

Mixed Preference Optimization

The key insight behind MPO is that an effective PO process should enable the model to learn the relative preference between pairs of responses, the absolute quality of individual responses, and the process for generating preferred responses. We define the training objective as a combination of preference loss $\mathcal{L}_{p}$, quality loss $\mathcal{L}_{q}$, and generation loss $\mathcal{L}_{g}$, referred to as Mixed Preference Optimization:

$$\mathcal{L} = w_{p}\mathcal{L}_{p} + w_{q}\mathcal{L}_{q} + w_{g}\mathcal{L}_{g},$$

where $w_{*}$ represents the weight assigned to each loss component. In this work, we empirically compare different variants of preference loss. Based on the experimental results, we use DPO as our preference loss and BCO as our quality loss.

Specifically, the DPO loss serves as the preference loss to enable the model to learn the relative preference between chosen and rejected responses. This algorithm optimizes the following loss function:

$$\mathcal{L}_{p} = -\log \sigma\left(\beta \log \frac{\pi_{\theta}\left(y_{c} \mid x\right)}{\pi_{0}\left(y_{c} \mid x\right)} - \beta \log \frac{\pi_{\theta}\left(y_{r} \mid x\right)}{\pi_{0}\left(y_{r} \mid x\right)}\right),$$

where $\beta$ is the KL penalty coefficient, and $x$, $y_{c}$, and $y_{r}$ are the user query, chosen response, and rejected response, respectively. The policy model $\pi_{\theta}$ is initialized from the reference model $\pi_{0}$.
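The sketch below is a minimal PyTorch-style rendering of this preference term, not the authors' implementation. It assumes each argument holds the summed log-probability of a full response under the policy or the frozen reference model, and beta=0.1 is a placeholder value.

  import torch
  import torch.nn.functional as F

  def dpo_preference_loss(policy_logps_chosen, policy_logps_rejected,
                          ref_logps_chosen, ref_logps_rejected, beta=0.1):
      """DPO-style preference loss over a batch of (chosen, rejected) pairs.

      Each argument is a tensor of shape (batch,) holding the summed
      log-probability of the full response under the policy or the frozen
      reference model.
      """
      # Implicit rewards: beta-scaled log-ratios against the reference model.
      chosen_rewards = beta * (policy_logps_chosen - ref_logps_chosen)
      rejected_rewards = beta * (policy_logps_rejected - ref_logps_rejected)
      # -log sigma(r_chosen - r_rejected), averaged over the batch.
      return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()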

Additionally, the BCO loss is employed as the quality loss, which helps the model to understand the absolute quality of individual responses. The loss function is defined as:

$$\mathcal{L}_{q} = \mathcal{L}_{q}^{+} + \mathcal{L}_{q}^{-},$$

where $\mathcal{L}_{q}^{+}$ and $\mathcal{L}_{q}^{-}$ represent the loss for chosen and rejected responses, respectively. Each response type's loss is calculated independently, requiring the model to differentiate the absolute quality of individual responses. The loss terms are given by:

$$\mathcal{L}_{q}^{+} = -\log \sigma\left(\beta \log \frac{\pi_{\theta}\left(y_{c} \mid x\right)}{\pi_{0}\left(y_{c} \mid x\right)} - \delta\right), \qquad
\mathcal{L}_{q}^{-} = -\log \sigma\left(-\left(\beta \log \frac{\pi_{\theta}\left(y_{r} \mid x\right)}{\pi_{0}\left(y_{r} \mid x\right)} - \delta\right)\right),$$

where $\delta$ represents the reward shift, calculated as the moving average of previous rewards to stabilize training.
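A minimal sketch of this quality term, reusing the imports and assumptions from the preference-loss sketch above. Maintaining the reward shift as an exponential moving average is one plausible reading of "moving average of previous rewards", not a detail taken from the text.

  def bco_quality_loss(policy_logps_chosen, policy_logps_rejected,
                       ref_logps_chosen, ref_logps_rejected, delta, beta=0.1):
      """BCO-style quality loss: chosen and rejected responses are scored
      independently against the reward shift delta."""
      chosen_rewards = beta * (policy_logps_chosen - ref_logps_chosen)
      rejected_rewards = beta * (policy_logps_rejected - ref_logps_rejected)
      loss_chosen = -F.logsigmoid(chosen_rewards - delta)          # L_q^+
      loss_rejected = -F.logsigmoid(-(rejected_rewards - delta))   # L_q^-
      return (loss_chosen + loss_rejected).mean()

  def update_reward_shift(delta, chosen_rewards, rejected_rewards, momentum=0.9):
      """Hypothetical reward-shift update: exponential moving average of the
      rewards observed so far."""
      batch_mean = torch.cat([chosen_rewards, rejected_rewards]).mean().detach()
      return momentum * delta + (1.0 - momentum) * float(batch_mean)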

Finally, the SFT loss is used as the generation loss to help the model learn the generation process of preferred responses. The loss function is defined as:

$$\mathcal{L}_{g} = -\frac{\log \pi_{\theta}\left(y_{c} \mid x\right)}{\left|y_{c}\right|}.$$
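Putting the pieces together, a hedged sketch of the full objective might look as follows; the length-normalized SFT term follows the definition above, and the weights w_p, w_q, and w_g are placeholders rather than the values used in training.

  def sft_generation_loss(policy_logps_chosen, chosen_lengths):
      """Length-normalized negative log-likelihood of the chosen response."""
      return (-policy_logps_chosen / chosen_lengths).mean()

  def mpo_loss(policy_logps_chosen, policy_logps_rejected,
               ref_logps_chosen, ref_logps_rejected,
               chosen_lengths, delta, beta=0.1,
               w_p=1.0, w_q=1.0, w_g=1.0):
      """Mixed Preference Optimization: weighted sum of the preference,
      quality, and generation losses defined above."""
      l_p = dpo_preference_loss(policy_logps_chosen, policy_logps_rejected,
                                ref_logps_chosen, ref_logps_rejected, beta)
      l_q = bco_quality_loss(policy_logps_chosen, policy_logps_rejected,
                             ref_logps_chosen, ref_logps_rejected, delta, beta)
      l_g = sft_generation_loss(policy_logps_chosen, chosen_lengths)
      return w_p * l_p + w_q * l_q + w_g * l_g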

MMPR

To construct a large-scale preference optimization dataset, we propose an efficient data construction pipeline. Specifically, we categorize the multimodal data into samples with clear ground truths and samples without clear ground truths.

  • For samples with clear ground truths, the model is prompted to first provide the reasoning process and then give the final answer in a format like "Final Answer: xxx". Responses matching the ground-truth answer constitute the positive set $\mathcal{Y}_{p}$, while those that do not match make up the negative set $\mathcal{Y}_{n}$. Additionally, responses that fail to provide a clear final answer are also merged into $\mathcal{Y}_{n}$. Given these responses labeled as positive or negative, we build the preference pairs by selecting a chosen response from $\mathcal{Y}_{p}$ and a negative response from $\mathcal{Y}_{n}$.
  • For samples without clear ground truths, we propose a simple yet effective method: Dropout Next-Token Prediction (Dropout NTP). Specifically, we use the responses generated by InternVL2-8B as chosen answers. Given a chosen answer, we truncate it by half and then prompt InternVL2-8B to complete the remaining portion of the truncated answer without access to the image input. This generated completion serves as the rejected answer for the paired sample. While the responses generated by InternVL2-8B may not be perfect, the completions generated without the image input introduce more hallucinations than those generated with the image input, so the preference ordering between the chosen and rejected responses still holds. A minimal sketch of both construction routes is given after this list.
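The following sketch illustrates both construction routes; sample_responses, extract_final_answer, and complete_without_image are hypothetical stand-ins for the sampling and parsing utilities, which are not specified in the text.

  import random

  def build_pairs_with_ground_truth(question, ground_truth, sample_responses,
                                    extract_final_answer, num_samples=8):
      """Correctness-based pairing for samples with clear ground truths."""
      positives, negatives = [], []
      for response in sample_responses(question, n=num_samples):
          answer = extract_final_answer(response)  # parses "Final Answer: xxx"
          if answer is not None and answer == ground_truth:
              positives.append(response)
          else:
              negatives.append(response)           # wrong or no clear final answer
      if not positives or not negatives:
          return []
      return [(random.choice(positives), random.choice(negatives))]

  def build_pair_dropout_ntp(question, chosen, complete_without_image):
      """Dropout NTP for samples without clear ground truths: the rejected
      response is the first half of the chosen response, completed without
      the image, which tends to introduce more hallucinations."""
      truncated = chosen[: len(chosen) // 2]
      rejected = truncated + complete_without_image(question, truncated)
      return (chosen, rejected)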

Experimental Results

Our InternVL2-8B-MPO achieves superior performance across 8 benchmarks, particularly excelling in multimodal reasoning tasks. On the MathVista benchmark, our model achieves an accuracy of 67.0%, outperforming InternVL2-8B by 8.7 points and achieving performance comparable to the 10× larger InternVL2-76B. On the MathVision benchmark, our model achieves an accuracy of 25.7%, establishing a new state-of-the-art among open-source models. These results demonstrate the effectiveness of our preference optimization approach in enhancing multimodal reasoning capabilities. Additionally, on the POPE benchmark, our model exhibits a 1.2-point improvement over InternVL2-8B, demonstrating the effectiveness of the perception data contained in our MMPR dataset in mitigating hallucinations. Furthermore, our model also outperforms InternVL2-8B on complex VQA benchmarks, indicating that its general abilities are improved as well, benefiting from enhanced reasoning and mitigated hallucinations.

| Model Name | M3CoT | MathVista | MathVision | MMVet | LLaVA-Bench | POPE | CRPE | MMHalBench |
|---|---|---|---|---|---|---|---|---|
| Gemini-1.5-Pro | - | 63.9 | 19.2 | - | - | - | - | - |
| GPT-4o | 64.3 | 63.8 | 30.4 | 69.1 | 97.6 | 86.9 | 76.6 | 4.0 |
| GPT-4o-Mini | 61.9 | 52.4 | 27.3 | 66.9 | 95.4 | 85.1 | 73.1 | 3.6 |
| LLaVA-1.5-13B | 39.5 | 27.6 | 11.1 | 36.3 | 70.7 | 85.9 | 55.6 | 2.4 |
| Qwen2-VL-7B | 57.8 | 58.2 | 21.1 | 60.6 | 67.7 | 88.1 | 74.4 | 3.4 |
| MiniCPM-V-2.6-8B | 56.0 | 60.6 | 23.4 | 57.4 | 83.4 | 87.3 | 75.2 | 3.6 |
| LLaVA-OneVision-7B | 52.3 | 63.2 | 18.4 | 51.4 | 79.9 | 88.4 | 73.7 | 3.1 |
| InternVL2-26B | 58.2 | 59.4 | 23.4 | 62.1 | 92.3 | 88.0 | 75.6 | 3.7 |
| InternVL2-40B | 63.6 | 63.7 | 21.4 | 65.5 | 100.5 | 88.4 | 77.3 | 3.9 |
| InternVL2-76B | 65.4 | 67.5 | 23.7 | 65.7 | 99.3 | 89.0 | 77.8 | 3.8 |
| InternVL2-Pro | 65.6 | 66.3 | 18.8 | 69.4 | 99.5 | 88.2 | 77.6 | 3.7 |
| InternVL2-8B | 59.3 | 58.3 | 20.4 | 54.2 | 73.2 | 86.9 | 75.0 | 3.3 |
| InternVL2-8B-MPO (ours) | 79.2 | 67.0 | 25.7 | 56.2 | 76.7 | 88.1 | 75.4 | 3.5 |

Citation


  @article{wang2024mpo,
    title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
    author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
    journal={arXiv preprint arXiv:2411.10442},
    year={2024}
  }

  @article{chen2024far,
    title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
    author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
    journal={arXiv preprint arXiv:2404.16821},
    year={2024}
  }

  @inproceedings{chen2024internvl,
    title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
    author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={24185--24198},
    year={2024}
  }

  

Acknowledgement

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.