Information Retrieval
This paper proposes a method for automatically annotating tennis actions through the integrated use of audio and video information. The proposed method extracts ball-hitting times, called "impact times," from the audio track, and then evaluates the positional relationship between the player and the ball at each impact time to identify the player's basic actions, such as forehand and overhead swings. Simulation results show that the impact-time detection rate directly affects the recognition rate of the player's basic actions. They also show that using audio information prevents certain event-recognition failures that cannot be avoided with video information alone, demonstrating the performance and validity of the proposed approach.
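The abstract outlines a two-stage pipeline: detect impact times from the audio signal, then label the action from the player-ball positional relation at each impact time. The Python sketch below illustrates one way such a pipeline could be wired up; the energy-based peak picking, thresholds, coordinate conventions, and function names are illustrative assumptions, not the paper's actual method or parameters.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All thresholds, features, and the peak-picking heuristic are assumptions
# made for illustration, not the paper's actual design.

import numpy as np

def detect_impact_times(audio, sr, frame_len=1024, hop=512, k=3.0):
    """Return candidate ball-hit ("impact") times from short-time energy peaks."""
    n_frames = 1 + max(0, (len(audio) - frame_len) // hop)
    energy = np.array([
        np.sum(audio[i * hop : i * hop + frame_len] ** 2) for i in range(n_frames)
    ])
    # Flag frames whose energy rises well above the global average (assumed heuristic).
    threshold = energy.mean() + k * energy.std()
    peaks = [i for i in range(1, n_frames - 1)
             if energy[i] > threshold
             and energy[i] >= energy[i - 1]
             and energy[i] >= energy[i + 1]]
    return [p * hop / sr for p in peaks]

def classify_action(player_pos, ball_pos, handedness="right"):
    """Label a basic action from the player/ball positional relation at an impact time."""
    dx = ball_pos[0] - player_pos[0]   # lateral offset of the ball from the player
    dy = ball_pos[1] - player_pos[1]   # vertical offset (image coordinates assumed)
    if dy < -0.5:                      # ball well above the player: overhead swing
        return "overhead swing"
    same_side = dx >= 0 if handedness == "right" else dx <= 0
    return "forehand swing" if same_side else "backhand swing"

if __name__ == "__main__":
    sr = 16000
    audio = 0.01 * np.random.randn(2 * sr)
    audio[sr // 2 : sr // 2 + 200] += 0.8      # synthetic "hit" burst at 0.5 s
    print(detect_impact_times(audio, sr))       # expected: a time near 0.5
    print(classify_action(player_pos=(3.0, 1.0), ball_pos=(3.8, 1.2)))
```

Under these assumptions, the accuracy of `classify_action` depends entirely on how reliably `detect_impact_times` fires, which mirrors the abstract's observation that the impact-time detection rate drives the action recognition rate.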