Merge-Bench: Resolve Merge Conflicts with Large Language Models

“Merge-Bench: Resolve Merge Conflicts with Large Language Models” by Benedikt Schesch and Michael D. Ernst. In ICPR 2026: Proceedings of the 28th International Conference on Pattern Recognition, (Lyon, France), Aug. 2026.

Abstract

This paper applies machine learning to the difficult and important task of version control merging. (1) We constructed a dataset, Merge-Bench, of 7938 real-world merge conflict hunks from 1439 GitHub repositories. The ground truth is the merge resolution that developers committed to the repository. Our dataset construction methodology scales to arbitrary amounts of data because no manual labeling is required. (2) We trained a model, LLMergeJ, to resolve merge conflicts in Java programs. Our approach uses Group Relative Policy Optimization (GRPO), an online reinforcement learning method, to train a Large Language Model (LLM). (3) We performed two evaluations of how well LLMs resolve merge conflicts. On Java programs, LLMergeJ with 14B parameters outperforms 3 commercial LLMs, trailing only Gemini 2.5 Pro. Across 11 programming languages, commercial LLM performance is largely stable from language to language. Even the best models correctly resolve fewer than 60% of merge conflicts.
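For readers unfamiliar with the task: a merge conflict hunk is the region that Git marks with `<<<<<<<`/`=======`/`>>>>>>>` when two branches edit the same lines. The following is a minimal hypothetical sketch (file names and edits invented for illustration, not drawn from the Merge-Bench dataset) that produces such a hunk in a Java file:

```shell
# Hypothetical sketch: create a Java merge conflict hunk like those collected in Merge-Bench.
set -e
work=$(mktemp -d)
cd "$work"
git init -q -b main demo        # -b requires git >= 2.28
cd demo
git config user.email demo@example.com
git config user.name demo

# Base version, committed on main.
printf 'class A {\n  int f() { return 0; }\n}\n' > A.java
git add A.java
git commit -qm base

# One branch changes the return value to 1 ...
git checkout -qb feature
printf 'class A {\n  int f() { return 1; }\n}\n' > A.java
git commit -qam feature-change

# ... while main changes the same line to 2.
git checkout -q main
printf 'class A {\n  int f() { return 2; }\n}\n' > A.java
git commit -qam main-change

# The merge fails and Git writes conflict markers into A.java.
git merge feature >/dev/null 2>&1 || true
conflict=$(cat A.java)
printf '%s\n' "$conflict"
```

The model's task, as the abstract describes it, is to map each such conflict hunk to the resolution the developer actually committed.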

BibTeX entry:

@inproceedings{ScheschE2026,
   author = {Benedikt Schesch and Michael D. Ernst},
   title = {{Merge-Bench}: Resolve Merge Conflicts with Large Language Models},
   booktitle = {ICPR 2026: Proceedings of the 28th International
	Conference on Pattern Recognition},
   address = {Lyon, France},
   month = aug,
   year = {2026}
}

(This webpage was created with bibtex2web.)
