Real-Time Adaptive Radiometric Compensation
IEEE Transactions on Visualization and Computer Graphics
We wish to compensate for irregularities in the output of digital projectors that occur when they are used in non-ideal situations, such as those with varying surface reflectance and ambient light. We transform the image to be displayed into a compensation image that will produce the desired appearance. In contrast to previous methods, the transformation is based on both a radiometric model of the system and the content of the image. We present a five-stage framework for content-dependent photometric compensation, and the details of a specific implementation: the original image is converted to a perceptually-uniform space; the desired chrominance is fitted to the gamut of the projector; a luminance range is calculated within which the fitted chrominance values can be produced; the original luminance is fitted to that range; and finally the fitted values are converted to a compensation image. Our method balances strict compensation against dynamic range in the final output, so we can produce a good result even when removal of all visible spatial variation is not possible.
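The balance between strict compensation and dynamic range can be sketched with a deliberately simplified setup. Everything below is an illustrative assumption, not the paper's method: a per-pixel, per-channel radiometric model C = S·(P + A) (observed intensity from reflectance S, projected light P, ambient light A) stands in for the calibrated system model, and a global linear rescaling of luminance stands in for the perceptual chrominance and luminance fitting stages.

```python
import numpy as np

# Assumed toy radiometric model (not the paper's calibrated model):
# camera-observed intensity C = S * (P + A), all quantities per-pixel
# and normalized to [0, 1].

def fit_luminance(L_target, S, A):
    """Fit target luminance into the range achievable at every pixel.

    Per pixel the projector can produce luminance in [S*A, S*(1 + A)];
    mapping the target into the intersection of these ranges trades
    strict compensation for dynamic range, as the abstract describes.
    """
    lo = (S * A).max()            # darkest level reachable everywhere
    hi = (S * (1.0 + A)).min()    # brightest level reachable everywhere
    t0, t1 = L_target.min(), L_target.max()
    return lo + (L_target - t0) * (hi - lo) / max(t1 - t0, 1e-6)

def compensate(target, S, A):
    """Invert the model: solve S * (P + A) = target for P, clipped."""
    P = target / np.maximum(S, 1e-6) - A
    return np.clip(P, 0.0, 1.0)   # crude stand-in for gamut fitting

def predicted_appearance(P, S, A):
    """Forward model: what the camera would observe for input P."""
    return S * (P + A)
```

Because the fitted luminance lies inside every pixel's achievable range, the compensation image needs no clipping and the predicted appearance matches the fitted target exactly; with an out-of-range target, the clip in `compensate` would instead surrender some compensation accuracy.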