Current Research Activities

 

Hand Gesture Recognition using a Swarm of Mobile Robots

 

This video demonstration (with audio commentary) presents a Human-Swarm Interaction (HSI) system in which hand gestures presented by a human operator are collectively recognized by a swarm of mobile robots. This work has been accepted at conferences including IROS 2012, AAMAS 2012, and HRI 2012.


A Robot Swarm Counting Fingers of Hand Gestures

 

This additional video (supporting video for IROS 2012) demonstrates our HSI system, in which a swarm of robots collectively recognizes the number of fingers shown in hand gestures given by human operators.
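The collective recognition in these demonstrations rests on the robots merging their individual, noisy opinions through distributed consensus (as in our AAMAS 2012 demonstration). A minimal sketch of such opinion averaging over a neighbourhood graph — the opinion values, line topology, and iteration count below are illustrative assumptions, not our actual implementation:

```python
import numpy as np

def swarm_consensus(opinions, adjacency, iterations=50):
    """Average each robot's class-probability opinion with its
    neighbours' until the swarm converges to a shared estimate."""
    P = np.asarray(opinions, dtype=float)   # shape: (robots, classes)
    A = np.asarray(adjacency, dtype=float)  # neighbourhood graph
    A = A + np.eye(len(A))                  # each robot keeps its own vote
    W = A / A.sum(axis=1, keepdims=True)    # row-stochastic mixing weights
    for _ in range(iterations):
        P = W @ P                           # one round of local averaging
    return int(np.argmax(P.mean(axis=0)))   # agreed class (0-indexed)

# Three robots with noisy local estimates over finger counts 1..5,
# connected in a line topology (robot 0 -- robot 1 -- robot 2).
opinions = [[0.1, 0.6, 0.1, 0.1, 0.1],
            [0.2, 0.5, 0.1, 0.1, 0.1],
            [0.5, 0.2, 0.1, 0.1, 0.1]]
adjacency = [[0, 1, 0],
             [1, 0, 1],
             [0, 1, 0]]
print(swarm_consensus(opinions, adjacency) + 1)  # prints 2, the agreed count
```

Although robot 2 individually favours a count of one, repeated local averaging pools the evidence and the swarm settles on the count the majority supports.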


Human-Robot Interaction, Learning and Cooperation using Binary Feedback with Hand Gestures

 

In this video (with audio commentary; accepted at HRI 2014), a robot (student) learns to recognize hand gestures from a human instructor (teacher) while performing tasks. The robot's task is to move to colored markers, each of which corresponds to a different hand gesture. The instructor gives binary, partial feedback (right/wrong; yes/no) after the robot predicts a hand gesture. To learn from such partial (limited) feedback in an interactive setting, we adopt the Upper Confidence Weighted Learning (UCWL) scheme.
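The exact UCWL update is described in our TiiS paper; as a rough illustration of why right/wrong feedback alone suffices for learning, here is a simplified Banditron-style sketch (an assumption for illustration only, not our UCWL implementation):

```python
import numpy as np

class BinaryFeedbackLearner:
    """Simplified multiclass learner that updates from right/wrong
    feedback only (Banditron-style; not the exact UCWL algorithm)."""

    def __init__(self, n_classes, n_features, explore=0.1, seed=0):
        self.W = np.zeros((n_classes, n_features))  # one weight row per gesture
        self.explore = explore
        self.rng = np.random.default_rng(seed)

    def predict(self, x):
        # Occasionally explore a random class; otherwise exploit the scores.
        if self.rng.random() < self.explore:
            return int(self.rng.integers(self.W.shape[0]))
        return int(np.argmax(self.W @ x))

    def update(self, x, predicted, correct):
        if correct:
            self.W[predicted] += x  # "yes": reinforce the predicted class
        else:
            self.W[predicted] -= x  # "no": penalise it; true class stays unknown
```

The key point the sketch shares with UCWL is that a "no" answer never reveals the correct label, so the learner can only push the wrong prediction down and must keep exploring.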


Controlling, Directing and Maneuvering UAVs using Hand Gestures and Face Pose Estimates

 

In this video (supporting video for a submission to HRI 2014), a UAV is controlled and directed using hand gestures presented by human operators. The UAV moves in the direction indicated by the hand gesture, interpreted relative to its position with respect to the human (i.e., using the estimated face pose of the human). Our approach aids vision-based human and multi-robot localization.
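Conceptually, the commanded direction is a heading in the human's body frame that gets rotated into the world frame using the estimated face-pose yaw. A toy sketch of that geometry (the function name and step size are assumptions, not our controller):

```python
import math

def uav_step(face_yaw_deg, gesture_dir_deg, step=0.5):
    """World-frame (dx, dy) for one UAV motion step: the commanded
    direction is given relative to the human's facing, which is
    estimated from the face pose."""
    heading = math.radians(face_yaw_deg + gesture_dir_deg)
    return (step * math.cos(heading), step * math.sin(heading))

# Human faces 90 degrees in the world frame; the gesture points
# "straight ahead" (0 degrees relative to the human), so the UAV
# moves along the human's facing direction.
dx, dy = uav_step(90.0, 0.0)
```

The same world-frame step results from a human facing 0 degrees who gestures 90 degrees to the side, which is why the face-pose estimate is essential for disambiguating the command.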


Commanding UAVs to Follow Humans using Hand Gestures

In this video demonstration (with audio included), a UAV is commanded using hand gestures to follow humans located in a given direction. To simplify recognition, the humans wear markers that the robot can detect and track.


Spatial Hand Gestures for Selecting Individuals and Groups of Robots Intuitively from a Robot Swarm

 

In this video (supporting video for a submission to IROS 2014), individuals and groups of UAVs are selected using natural, intuitive spatial gestures given by human operators wearing tangible input devices such as colored gloves. The scheme uses a cascaded machine learning approach with multiple classifiers for spatial gesture learning and recognition. In the video, the selected robots lift off, move, and land, similar to the use of the "Force" in the Star Wars movies.
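A cascade chains cheap classifiers before expensive ones, so later stages only see inputs the earlier stages accept. A minimal sketch of that control flow — the stage names, features, and thresholds below are illustrative assumptions, not our actual classifiers:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class CascadeStage:
    name: str
    classify: Callable[[Dict[str, float]], Optional[str]]  # label, or None to reject

def run_cascade(stages: List[CascadeStage], features: Dict[str, float]) -> Optional[str]:
    """Pass the observation through each stage in turn; any stage may
    reject early, so expensive classifiers only see promising inputs."""
    label = None
    for stage in stages:
        label = stage.classify(features)
        if label is None:
            return None  # rejected early: no gesture detected
    return label         # label produced by the final stage

# Illustrative two-stage cascade: first detect that a gloved hand is
# present at all, then decide which selection gesture it is.
detect_hand = CascadeStage(
    "hand detector",
    lambda f: "hand" if f["skin_ratio"] > 0.2 else None)
classify_selection = CascadeStage(
    "selection gesture",
    lambda f: "select_group" if f["arm_angle"] > 45 else "select_one")
cascade = [detect_hand, classify_selection]

print(run_cascade(cascade, {"skin_ratio": 0.4, "arm_angle": 60}))  # select_group
print(run_cascade(cascade, {"skin_ratio": 0.1, "arm_angle": 60}))  # None
```

The early-rejection structure is what makes cascades attractive for real-time gesture recognition: most frames contain no gesture and are dismissed by the cheapest stage.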


More videos coming soon...

 

 

Selected Publications

 

[1] J. Nagi, A. Giusti, L. Gambardella, G. A. Di Caro, Human-Swarm Interaction Using Spatial Gestures, in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, USA, Sep. 14-18, 2014. (accepted) [pdf]

 

[2] J. Nagi, G. A. Di Caro, A. Giusti, L. Gambardella, Learning Symmetric Face Pose Models Online Using Locally Weighted Projectron Regression, in Proc. of the 21st IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 27-30, 2014. (accepted) [pdf]

 

[3] J. Nagi, A. Giusti, F. Nagi, L. Gambardella, G. A. Di Caro, Online Feature Extraction for the Incremental Learning of Gestures in Human-Swarm Interaction, in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, May 31-Jun. 5, 2014. [pdf]

 

[4] H. Ngo, M. Luciw, N. Vien, J. Nagi, A. Forster, J. Schmidhuber, Efficient Interactive Multiclass Learning from Binary Feedback, ACM Transactions on Interactive Intelligent Systems (TiiS), March 2014. (in press) [pdf]

 

[5] J. Nagi, H. Ngo, J. Schmidhuber, L. M. Gambardella, G. A. Di Caro, Human-Robot Cooperation: Fast, Interactive Learning from Binary Feedback, in Proc. of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Video Session), Bielefeld, Germany, March 3-6, 2014, p. 107. [pdf] [online]

 

[6] J. Nagi, A. Giusti, G. A. Di Caro, L. Gambardella, Human Control of UAVs using Face Pose Estimates and Hand Gestures, in Proc. of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Late Breaking Report), Bielefeld, Germany, March 3-6, 2014, pp. 252–253. [pdf] [online]

 

[7] G. A. Di Caro, A. Giusti, J. Nagi, L. M. Gambardella, A Simple and Efficient Approach for Cooperative Incremental Learning in Robot Swarms, in Proc. of the 16th International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, Nov. 25-29, 2013, pp. 1–8. [pdf] [online]

 

[8] J. Nagi, G. A. Di Caro, A. Giusti, F. Nagi, L. Gambardella, Convolutional Neural Support Vector Machines: Hybrid visual pattern classifiers for multi-robot systems, in Proc. of the 11th International Conference on Machine Learning and Applications (ICMLA), Boca Raton, Florida, USA, Dec. 12-15, 2012, pp. 27–32. [pdf] [online]

 

[9] A. Giusti, J. Nagi, L. Gambardella, G. A. Di Caro, Cooperative Sensing and Recognition by a Swarm of Mobile Robots, in Proc. of the 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, Oct. 7-12, 2012, pp. 551–558. [pdf] [online]

 

[10] J. Nagi, H. Ngo, A. Giusti, L. M. Gambardella, J. Schmidhuber, G. A. Di Caro, Incremental Learning using Partial Feedback for Gesture-based Human-Swarm Interaction, in Proc. of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Paris, France, Sept. 9-13, 2012, pp. 898–905. [pdf] [online]

 

[11] A. Giusti, J. Nagi, L. Gambardella, G. A. Di Caro, Distributed Consensus for Interaction between Humans and Mobile Robot Swarms, in Proc. of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS) (Demonstration Track), Valencia, Spain, Jun. 4-8, 2012, pp. 1503–1504. [pdf] [online]

 

[12] A. Giusti, J. Nagi, L. Gambardella, S. Bonardi, G. A. Di Caro, Human-Swarm Interaction through Distributed Cooperative Gesture Recognition, in Proc. of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Video Session), Boston, USA, Mar. 5-8, 2012, p. 401. [pdf] [online]

 

[13] J. Nagi, F. Ducatelle, G. A. Di Caro, D. Ciresan, U. Meier, A. Giusti, F. Nagi, J. Schmidhuber and L. M. Gambardella, Max-Pooling Convolutional Neural Networks for Vision-based Hand Gesture Recognition, in Proc. of the 2nd IEEE International Conference on Signal and Image Processing and Applications (ICSIPA), Kuala Lumpur, Malaysia, Nov. 16-18, 2011, pp. 342–347. [pdf] [online]

 


Last updated on 24 June 2014.

Copyright © 2014 Jawad Nagi. All rights reserved.