Preference and artifact analysis for video transitions of places
ACM Transactions on Applied Perception (TAP) - Special issue SAP 2013
In this article we use an electroencephalograph (EEG) to explore the perception of artifacts that typically appear during rendering and to determine the perceptual quality of a sequence of images. Although there is emerging interest in using EEG for image quality assessment, one of the main impediments to its use is the very low signal-to-noise ratio (SNR), which makes it exceedingly difficult to distinguish neural responses from noise. Traditionally, event-related potentials (ERPs) have been used to analyze EEG data. However, ERPs rely on averaging and so require a large number of participants and trials to yield meaningful data; moreover, due to the low SNR, ERPs are not suited to single-trial classification. We propose a novel wavelet-based approach for evaluating EEG signals that allows us to predict perceived image quality from only a single trial. Our wavelet-based algorithm filters the EEG data and removes noise, eliminating the need for many participants or many trials. With this approach it is possible to use data from only 10 electrode channels for single-trial classification and to predict the presence of an artifact with an accuracy of 85%. We also show that it is possible to classify a trial by the exact type of artifact viewed. Our work is particularly useful for understanding how the human visual system responds to different types of degradation in images and videos. An understanding of the perception of typical image-based rendering artifacts forms the basis for optimizing rendering and masking algorithms.
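To illustrate the kind of wavelet-based denoising the abstract describes, the sketch below applies a single-level Haar discrete wavelet transform to a 1-D EEG trace and soft-thresholds the detail coefficients. The paper's actual wavelet family, decomposition depth, and threshold rule are not given in the abstract, so every choice here (Haar wavelet, one level, soft thresholding) is an assumption made for illustration only.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient lists; assumes even length."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfectly reconstructs the input signal."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

def soft_threshold(coeffs, thresh):
    """Shrink coefficients toward zero; small (noise-like) ones vanish."""
    return [max(abs(c) - thresh, 0.0) * (1.0 if c >= 0 else -1.0)
            for c in coeffs]

def denoise(signal, thresh):
    """Denoise by thresholding only the high-frequency detail band,
    keeping the low-frequency approximation untouched."""
    approx, detail = haar_dwt(signal)
    return haar_idwt(approx, soft_threshold(detail, thresh))
```

In a full pipeline such a filter would be run per channel before feeding features to a classifier (the reference list mentions LIBSVM, suggesting an SVM stage); a deeper decomposition with a smoother wavelet (e.g. Daubechies) would be more typical for real EEG than this minimal Haar example.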