This paper presents the results of our research on automatic recognition of the Mexican Sign Language (MSL) alphabet as a control element for a service robot. The technique of active contours was used for image segmentation in order to recognize the signs. Once a sign was segmented, we computed its shape signature and trained a neural network for its recognition. Each symbol of the MSL was assigned to a task that the robotic system had to perform; we defined eight different tasks. The system was validated in both a simulation environment and on a real system. For the real case, we used a mobile platform (Powerbot) equipped with a manipulator with 6 degrees of freedom (PowerCube). The mobile platforms were simulated in the RoboWorks environment. On both the simulated and real platforms, tests were performed with images different from those the system had learned, obtaining in both cases a recognition rate of 95.8%.
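The abstract does not specify which shape-signature variant is used. A common choice for this kind of pipeline is the centroid-distance signature: the distance from the shape's centroid to each contour point, resampled to a fixed length and scale-normalized before being fed to the classifier. A minimal sketch, assuming the segmented sign's contour is available as a list of (x, y) points (the function name and parameters here are illustrative, not from the paper):

```python
import math

def shape_signature(contour, n_samples=64):
    """Centroid-distance shape signature of a closed contour.

    Computes the distance from the contour's centroid to each point,
    resamples the result to a fixed length, and normalizes by the
    maximum distance so the signature is invariant to scale.
    """
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    dists = [math.hypot(x - cx, y - cy) for x, y in contour]
    # Resample to a fixed number of samples by nearest-index lookup,
    # so contours of different lengths yield comparable vectors.
    sig = [dists[int(i * len(dists) / n_samples)] for i in range(n_samples)]
    m = max(sig)
    return [d / m for d in sig] if m > 0 else sig

# Usage: signatures of a square contour at two scales coincide.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
sig = shape_signature(square, n_samples=8)
```

The fixed-length, scale-normalized vector is what would then be presented to the neural network, one input unit per sample.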