Tracking and Identifying in Real Time the Robots of a F-180 Team
RoboCup-99: Robot Soccer World Cup III
This paper describes the vision system that was developed for the RoboCup F180 team FU-Fighters. The system analyzes the video stream captured by a camera mounted above the field. It localizes the robots and the ball by predicting their positions in the next video frame and processing only small windows around the predicted positions. Several mechanisms were implemented to make this tracking robust. First, the size of the search windows is adjusted dynamically. Next, the quality of each detected object is evaluated, and further analysis is carried out until it is satisfactory. The system not only tracks the positions of the objects but also adapts their color and size models. If tracking fails, e.g. due to occlusions, we start a global search module that localizes the lost objects again. The pixel coordinates of the detected objects are mapped to a Cartesian coordinate system using a non-linear transformation that takes the distortions of the camera into account. To make tracking more robust against inhomogeneous lighting, we modeled the appearance of colors as a function of location using color grids. Finally, we added a module for the automatic identification of our robots. The system analyzes 30 frames per second on a standard PC, causing only light computational load in almost all situations.
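The tracking loop described above — predict the next position, search a small window, and grow the window when the match quality is poor — could be sketched as follows. This is a minimal illustration, not the paper's implementation; the constant-velocity prediction, window sizes, blending factor, and quality threshold are all assumptions chosen for the example.

```python
import numpy as np

class TrackedObject:
    """Tracks one object (robot or ball) with a constant-velocity
    prediction and a search window that widens when matches are poor.
    All numeric constants here are illustrative, not from the paper."""

    def __init__(self, pos, min_win=16, max_win=128):
        self.pos = np.asarray(pos, dtype=float)  # current position (pixels)
        self.vel = np.zeros(2)                   # velocity estimate (px/frame)
        self.win = min_win                       # search-window half-size
        self.min_win, self.max_win = min_win, max_win

    def predict(self):
        """Predicted position in the next video frame."""
        return self.pos + self.vel

    def update(self, measurement, quality, q_thresh=0.5):
        """Fold in a new measurement; widen the search window on a miss
        or a low-quality match, shrink it again when tracking succeeds."""
        if measurement is not None and quality >= q_thresh:
            new_pos = np.asarray(measurement, dtype=float)
            self.vel = 0.7 * self.vel + 0.3 * (new_pos - self.pos)
            self.pos = new_pos
            self.win = max(self.min_win, self.win // 2)
        else:
            # Miss: enlarge the window; a global search would take over
            # once the maximum size is reached and the object stays lost.
            self.win = min(self.max_win, self.win * 2)
        return self.win
```

The point of the exponential window growth is that a briefly occluded object is usually found again within a few frames, while only a persistent loss triggers the (more expensive) global search.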
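The non-linear mapping from pixel to Cartesian field coordinates could look roughly like the sketch below, assuming a simple one-parameter radial distortion model. The parameter names (`cx`, `cy`, `k1`, `scale`) are hypothetical calibration constants, not the transformation actually used by the system.

```python
def pixel_to_field(u, v, cx, cy, k1, scale):
    """Map a distorted pixel coordinate (u, v) to field coordinates.
    Assumes a first-order radial distortion model centred at (cx, cy);
    k1 (distortion) and scale (px -> field units) come from calibration.
    This is an illustrative model, not the paper's transformation."""
    # Coordinates relative to the image centre.
    xd, yd = u - cx, v - cy
    r2 = xd * xd + yd * yd
    # Undo radial distortion (first-order approximation).
    factor = 1.0 + k1 * r2
    xu, yu = xd * factor, yd * factor
    # Scale from pixels to field units (e.g. millimetres).
    return xu * scale, yu * scale
```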
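The color grids for handling inhomogeneous lighting can be pictured as a coarse grid of reference colors laid over the field: classification at an image position compares against a locally interpolated reference, and the grid cells are adapted slowly toward observed colors. The grid resolution, interpolation scheme, and adaptation rate below are assumptions for the sketch.

```python
import numpy as np

def make_color_grid(cells_x, cells_y, ref_color):
    """Initialise a color grid with one RGB reference per cell
    (illustrative: a real grid would be calibrated per location)."""
    return np.tile(np.asarray(ref_color, dtype=float), (cells_y, cells_x, 1))

def local_reference(grid, x, y, width, height):
    """Bilinearly interpolated reference color at image position (x, y)."""
    cells_y, cells_x, _ = grid.shape
    gx = x / width * (cells_x - 1)
    gy = y / height * (cells_y - 1)
    x0, y0 = int(gx), int(gy)
    x1, y1 = min(x0 + 1, cells_x - 1), min(y0 + 1, cells_y - 1)
    fx, fy = gx - x0, gy - y0
    top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]
    bot = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]
    return (1 - fy) * top + fy * bot

def adapt_cell(grid, x, y, width, height, observed, alpha=0.05):
    """Slowly blend an observed color into the cell covering (x, y),
    tracking gradual lighting changes without chasing outliers."""
    cells_y, cells_x, _ = grid.shape
    cx = min(int(x / width * cells_x), cells_x - 1)
    cy = min(int(y / height * cells_y), cells_y - 1)
    grid[cy, cx] = (1 - alpha) * grid[cy, cx] + alpha * np.asarray(observed, float)
```

Indexing the reference color by location is what lets a single logical color (say, the ball's orange) have different expected RGB values in bright and shadowed regions of the field.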