- Context-Based Vision: Recognizing Objects Using Information from Both 2D and 3D Imagery. IEEE Transactions on Pattern Analysis and Machine Intelligence, special issue on interpretation of 3-D scenes, part I.
- WordsEye: An Automatic Text-to-Scene Conversion System. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques.
- Context-Based Search for 3D Models. ACM SIGGRAPH Asia 2010 Papers.
- Characterizing Structural Relationships in Scenes Using Graph Kernels. ACM SIGGRAPH 2011 Papers.
Creating 3D scenes requires artistic skill and is time-consuming. A key challenge is finding relevant new models to place in a partially completed scene. We present an algorithm that proposes such models by leveraging text data. Given a partial 3D scene and a user-specified region of interest, the algorithm suggests additional models ranked by the point-wise mutual information between the labels of models near the region and the labels of models in the database. Compared to a Graph Kernel system trained on 3D scene data, our text-based system suggests more models that yield arrangements not observed in the training corpus. Furthermore, combining the two systems increases the number of unobserved model arrangements over the Graph Kernel system alone, with higher precision according to human evaluators.
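The core ranking idea can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the label corpus, the document-level co-occurrence counting, and the additive scoring over nearby labels are all simplifying assumptions made for the example.

```python
import math
from collections import Counter

# Hypothetical corpus: each "document" is a set of object labels
# (the labels and corpus here are illustrative only).
corpus = [
    {"desk", "chair", "lamp", "monitor"},
    {"desk", "chair", "keyboard", "monitor"},
    {"bed", "lamp", "nightstand"},
    {"desk", "lamp", "keyboard"},
]

n_docs = len(corpus)
label_count = Counter()
pair_count = Counter()
for doc in corpus:
    for a in doc:
        label_count[a] += 1
    for a in doc:
        for b in doc:
            if a < b:  # count each unordered pair once
                pair_count[(a, b)] += 1

def pmi(a, b):
    """Point-wise mutual information between two labels,
    estimated from document-level co-occurrence counts."""
    pair = (a, b) if a < b else (b, a)
    if pair_count[pair] == 0:
        return float("-inf")  # never co-occur: strongly disfavor
    p_ab = pair_count[pair] / n_docs
    p_a = label_count[a] / n_docs
    p_b = label_count[b] / n_docs
    return math.log(p_ab / (p_a * p_b))

def suggest(nearby_labels, candidates, k=3):
    """Rank candidate database labels by their total PMI with the
    labels of models already placed near the region of interest."""
    scores = {c: sum(pmi(c, n) for n in nearby_labels)
              for c in candidates if c not in nearby_labels}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# With "desk" and "chair" near the region of interest, labels that
# co-occur with both (e.g. "monitor") rank above weaker associations.
print(suggest({"desk", "chair"}, {"monitor", "bed", "keyboard", "lamp"}))
```

Summing PMI over all nearby labels is one simple way to aggregate evidence; the paper's exact scoring may differ.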