We propose a novel method for identifying road vehicles across two non-overlapping cameras. The problem is formulated as a same-different classification task: estimating the probability that two vehicle images, one from each camera, show the same vehicle rather than different vehicles. The key idea is to compute this probability without matching the two vehicle images directly, a process vulnerable to drastic changes in appearance and aspect. Instead, we represent each vehicle image as an embedding with respect to representative vehicle exemplars from the same camera. The embedding is a vector whose components are non-metric distances from the vehicle to the exemplars, computed by robust matching of oriented edge images. A set of ground-truthed training examples of same and different vehicle pairings across the two cameras is used to learn a classifier that encodes the probability distributions. The pair of embeddings representing two vehicles across the two cameras is then used to compute the same-different probability. So that the vehicle exemplars are representative for both cameras, we also propose a method for jointly selecting corresponding exemplars from the training data. Experiments on observations of over 400 vehicles under drastically different illumination and camera conditions demonstrate promising results.
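The pipeline described above can be summarized in a short sketch: embed each vehicle as its vector of distances to camera-specific exemplars, concatenate the two embeddings of a cross-camera pair, and feed the result to a binary classifier. The sketch below is illustrative only; the placeholder `edge_distance` function, the synthetic data, and the logistic-regression classifier are assumptions and not the authors' exact non-metric matching or probability model.

```python
# Minimal sketch of exemplar-based same-different classification, assuming a
# placeholder distance between oriented-edge images and a logistic-regression
# classifier (both hypothetical stand-ins for the paper's components).
import numpy as np
from sklearn.linear_model import LogisticRegression

def edge_distance(img_a, img_b):
    """Placeholder for a robust, non-metric matching score between two
    oriented-edge images of equal shape (assumption, not the paper's matcher)."""
    return float(np.mean(np.abs(img_a - img_b)))

def embed(vehicle_img, exemplars):
    """Represent a vehicle as the vector of its distances to same-camera exemplars."""
    return np.array([edge_distance(vehicle_img, ex) for ex in exemplars])

def pair_feature(img1, exemplars_cam1, img2, exemplars_cam2):
    """Concatenate the two per-camera embeddings into one same/different feature."""
    return np.concatenate([embed(img1, exemplars_cam1),
                           embed(img2, exemplars_cam2)])

# Synthetic stand-ins for edge images, exemplars, and ground-truthed pair labels.
rng = np.random.default_rng(0)
exemplars_cam1 = [rng.random((32, 32)) for _ in range(5)]
exemplars_cam2 = [rng.random((32, 32)) for _ in range(5)]
pairs = [(rng.random((32, 32)), rng.random((32, 32))) for _ in range(20)]
y = rng.integers(0, 2, size=len(pairs))          # 1 = same vehicle, 0 = different
X = np.array([pair_feature(a, exemplars_cam1, b, exemplars_cam2) for a, b in pairs])

clf = LogisticRegression().fit(X, y)
p_same = clf.predict_proba(X[:1])[0, 1]           # P(same vehicle) for a new pair
print(f"P(same) = {p_same:.2f}")
```

In practice the exemplars would be selected jointly across the two cameras from the training data, as the abstract describes, so that the same embedding dimensions correspond across views.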