Automatic music video editing remains a challenging task because little is known about how music and video should be matched to produce attractive effects. Previous work usually matches music and video based on assumptions or empirical knowledge. In this paper, we use a dual-wing harmonium model to learn and represent the underlying music video editing rules from a large dataset of music videos. The editing rules are extracted by clustering the low-dimensional representations of music video clips. In the experiments, we give an intuitive visualization of the discovered editing rules. These editing rules partially reflect professional music video editors' skills and can be used to further improve the quality of automatically generated music videos.
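The abstract describes extracting editing rules by clustering the low-dimensional representations that the dual-wing harmonium model assigns to music video clips. As a rough illustration only (not the authors' implementation), the sketch below runs a minimal k-means over hypothetical latent codes; the dimensions, cluster count, and synthetic data are all assumptions for demonstration.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal k-means: group latent codes into candidate 'editing rule' clusters."""
    rng = np.random.default_rng(seed)
    # initialize centers from randomly chosen data points
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each latent vector to its nearest center (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical stand-in for learned clip representations: 200 ten-dimensional
# latent codes drawn from two well-separated regimes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (100, 10)),
               rng.normal(2.0, 0.3, (100, 10))])
labels, centers = kmeans(X, k=2)
```

In the paper's setting, each cluster center would summarize one recurring music-video matching pattern; here the synthetic data merely shows the clustering step in isolation.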