Setup-Independent Full Projector Compensation

¹Southwest University, China  ²Westlake University, China

Abstract

Projector compensation seeks to correct the geometric and photometric distortions that arise when images are projected onto nonplanar or textured surfaces. However, most existing methods are highly setup-dependent, requiring fine-tuning or retraining whenever the surface, lighting, or projector–camera pose changes. Progress has been limited by two key challenges: (1) the absence of large, diverse training datasets, and (2) the setup-specific nature of existing geometric correction models, which are constrained to particular spatial configurations and often fail to generalize to novel geometric setups without retraining or fine-tuning. We introduce SIComp, the first setup-independent framework for full projector compensation, capable of generalizing to unseen setups without fine-tuning or retraining. To enable this, we construct a large-scale real-world dataset spanning 277 distinct projector–camera setups. SIComp adopts a co-adaptive design that decouples geometry and photometry: a carefully tailored optical-flow module performs online geometric correction, while a novel photometric network handles photometric compensation. To further enhance robustness under varying illumination, we integrate intensity-varying surface priors into the network design. Extensive experiments demonstrate that SIComp consistently produces high-quality compensation across diverse unseen setups, substantially outperforming existing methods in generalization ability and establishing the first generalizable solution to projector compensation.

Overview

Overview of SIComp

Overview of our setup-independent full projector compensation framework (SIComp). SIComp is designed to generalize across unseen projector-camera poses, surfaces, and photometric conditions, enabling high-quality compensation without setup-specific retraining or fine-tuning. (a) Composition of the projector-camera system and an overview of the projector compensation task. (b) Factors involved in setup variation. (c) Key difference from previous methods: SIComp performs compensation directly with pre-trained models, without setup-specific adaptation.

Method

Method overview of SIComp

Overview of the SIComp framework. A tailored optical-flow module first estimates geometric alignment, and a dedicated compensation network then compensates for photometric distortions. Intensity-varying surface priors are further incorporated to improve robustness and generalization across unseen setups. (a) Sampling and preprocessing of surface images and uncompensated images. (b) Joint fine-tuning of the geometric network and the photometric network. (c) Compensation pipeline in real-world scenarios: the uncompensated image and the surface image are used as inputs, and the compensation image is generated, projected, and then captured.
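The actual SIComp modules are learned networks; purely as a toy illustration of the decoupled geometry-then-photometry pipeline described above, the sketch below first backward-warps the target image with a dense flow field (standing in for the optical-flow module) and then divides out the surface reflectance under a simple multiplicative image-formation assumption (standing in for the photometric network). All function names and the reflectance model are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def warp_with_flow(img, flow):
    """Backward-warp an image with a dense flow field (nearest-neighbor).
    Stand-in for the learned geometric correction module."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[src_y, src_x]

def photometric_compensate(target, surface, eps=1e-3):
    """Toy per-pixel compensation under a multiplicative reflectance model:
    choosing compensated = target / surface makes (compensated * surface)
    approximately equal to the target. Stand-in for the photometric network."""
    return np.clip(target / np.maximum(surface, eps), 0.0, 1.0)

def compensate(target, surface, flow):
    # 1) geometric correction: align the target into the projector's frame
    aligned = warp_with_flow(target, flow)
    # 2) photometric correction: counteract surface reflectance
    return photometric_compensate(aligned, surface)

# Example: a uniform gray target on a 50%-reflectance surface, zero flow.
target = np.full((4, 4), 0.3)
surface = np.full((4, 4), 0.5)
flow = np.zeros((4, 4, 2))
out = compensate(target, surface, flow)  # each pixel becomes 0.3 / 0.5 = 0.6
```

In the real system both stages are neural networks trained jointly, and the surface inputs are captured at multiple intensities to supply the intensity-varying surface priors; this sketch only conveys the order of operations.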

Results

Qualitative results of SIComp

Qualitative results of SIComp on real compensation benchmarks. The first two rows show compensation results on unseen setups captured with the same hardware devices as the training set. The last two rows show compensation results on unseen setups captured with different hardware devices from the training set. Our method is denoted as SIComp, and surf indicates the number of multi-intensity surface inputs used for compensation.

Dataset

Overview of the SIComp benchmark dataset

Overview of the SIComp dataset. It contains 277 distinct projector-camera setups and 193,900 images, providing a large-scale benchmark for training and evaluation in setup-independent projector compensation. The benchmark dataset is hosted on Baidu Netdisk, and the download link is available on our GitHub project page.

BibTeX

@article{Li2026SIComp,
  title={Setup-Independent Full Projector Compensation},
  author={Haibo Li and Qingyue Deng and Jijiang Li and Haibin Ling and Bingyao Huang},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  note={In Press},
  year={2026}
}