Road scene segmentation is important in computer vision for applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information that improves scene understanding. In this paper, we use a convolutional neural network-based algorithm to learn features from noisy labels and recover the 3D scene layout of a road image. The novelty of the algorithm lies in generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on learned color plane fusion that yields maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images. Quantitative and qualitative experiments on publicly available datasets show that convolutional neural networks are suitable for learning the 3D scene layout from noisy labels, providing a relative improvement of 7% over the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity, with a relative improvement of 8% over the baseline. Finally, the improvement is even larger when acquired and current information from a single image are combined.
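The color plane fusion idea can be illustrated with a minimal sketch: search for convex weights over the R, G, and B planes that minimize the normalized variance (variance divided by squared mean) of the fused plane inside a given road region, i.e., that maximize road uniformity. This is an assumption-laden illustration, not the paper's exact descriptor; the function name `fuse_color_planes`, the grid-search strategy, and the normalized-variance criterion are choices made here for clarity.

```python
import numpy as np

def fuse_color_planes(img, road_mask, steps=10):
    """Illustrative sketch (not the paper's exact method): grid-search
    convex weights w over the R, G, B planes that minimise the
    normalised variance var/mean^2 of the fused plane w . (R, G, B)
    over the road pixels, i.e. maximise uniformity in the road area."""
    road = img[road_mask].astype(float)   # (N, 3) road pixels
    best_w, best_score = None, np.inf
    for i in range(steps + 1):            # enumerate weights on the simplex
        for j in range(steps + 1 - i):
            w = np.array([i, j, steps - i - j], dtype=float) / steps
            fused = road @ w              # fused gray plane over road pixels
            mean = fused.mean()
            if mean < 1e-9:               # skip degenerate all-zero fusions
                continue
            score = fused.var() / mean ** 2
            if score < best_score:
                best_score, best_w = score, w
    return best_w, best_score

# Synthetic example: the green plane is perfectly uniform on the "road"
# (bottom half of the image), so the search should select w = [0, 1, 0].
rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (40, 40, 3))
img[20:, :, 1] = 100.0
mask = np.zeros((40, 40), dtype=bool)
mask[20:, :] = True
w, score = fuse_color_planes(img, mask)
```

With a coarse simplex grid this already recovers the uniform plane exactly; a finer grid (larger `steps`) or a continuous optimizer would be needed when no single plane is uniform and the best fusion mixes channels.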