News

September 2010: Moved to Lugano, Switzerland.

 

July 2010: The Mobile Robotics Laboratory, IISc, is being renamed the Aerial Robotics Laboratory.

:: My Work ::

 

 
IncSFA · Transparency · Histogramic Intensity Switching · VITAR · Glowworm Swarm Optimization

:: Incremental Slow Feature Analysis ::

The Slow Feature Analysis (SFA) unsupervised learning framework extracts features representing the underlying causes of the changes within a temporally coherent high-dimensional raw sensory input signal. We develop the first online version of SFA, via a combination of incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, online SFA adapts to non-stationary environments, which makes it a generally useful unsupervised preprocessor for autonomous learning agents. We compare online SFA to batch SFA in several experiments and show that it indeed learns without a teacher to encode the input stream by informative slow features representing meaningful abstract environmental properties. We extend online SFA to deep networks in a hierarchical fashion, and use them to successfully extract abstract object position information from high-dimensional video.
(For more details and code please check out: IncSFA).
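As a reference point for the incremental version, the batch objective that IncSFA approximates can be sketched in a few lines of linear SFA: whiten the input, then take the directions of minimal temporal variation (the minor components of the derivative covariance). The function name and parameters below are illustrative, not taken from the IncSFA code.

```python
import numpy as np

def linear_sfa(x, n_features=1):
    """Minimal batch linear SFA sketch.
    x: (T, d) signal; returns the (T, n_features) slowest features."""
    x = x - x.mean(axis=0)                      # center the signal
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    whitener = eigvec / np.sqrt(eigval)         # PCA whitening matrix
    z = x @ whitener                            # whitened signal
    zdot = np.diff(z, axis=0)                   # temporal derivative
    # Minor components of the derivative covariance = slowest directions
    dval, dvec = np.linalg.eigh(np.cov(zdot, rowvar=False))
    return z @ dvec[:, :n_features]             # slowest features first
```

On a mixture of a slow and a fast sinusoid, the first extracted feature recovers the slow source up to sign; IncSFA reaches the same solution incrementally, one sample at a time.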


:: Collective Reward-Based Approach ::
An Image-Based Detection of Semi-Transparent Objects


 

Figure 1: Sample input image and its corresponding final result.


Most computer and robot vision algorithms, whether for object detection, recognition, or reconstruction, are designed for opaque objects. Non-opaque objects have received less attention, although various special cases have been studied, especially specular objects. The main objective of this work was to address the case of semi-transparent objects, i.e. objects that are transparent but also reflect light, typically objects made of glass. Such objects are ubiquitous in man-made environments (notably windows and doors). Detecting them provides important information that can feed into a robot's navigational strategies, such as obstacle avoidance, detection of oil/water spills on the floor, and localization. To detect semi-transparent objects, we developed a novel collective-reward based technique that operates on an image captured by an uncalibrated camera. Several experiments were conducted over different scenarios to test the efficacy of the algorithm.


:: Histogramic Intensity Switching ::
with Dynamic Mask Allocation (HIS-DMA): A Vision-Based Obstacle Avoidance Algorithm for Mobile Robots



We introduce a new algorithm, histogramic intensity switching with dynamic mask allocation (HIS-DMA), which lets a robot avoid obstacles using vision as its only sensing element. The algorithm uses histograms of images captured by a monochrome camera to achieve obstacle avoidance. Histograms computed over special masks on the input images give rise to a switching phenomenon in intensities, based on the dominant regions of the masked image. The mask lengths are determined dynamically by a method called dynamic mask allocation (DMA). The method does not use any direct distance measurement to the obstacles, and indirectly captures the principle behind time-to-collision (TTC). The algorithm was tested in real time on VITAR-II. Videos of VITAR avoiding obstacles are available in the Gallery section.

      

Figure 1: Images taken from the camera

Left: original image. Left-center and right-center: painted images (obstacle not detected). Right: painted image (obstacle detected).
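As a loose illustration of the histogram idea (a hypothetical simplification, not the published HIS-DMA: the mask here is a fixed lower strip, whereas DMA allocates mask lengths dynamically), one can flag an obstacle when the dominant intensity bin of the masked region switches away from the floor's dominant bin:

```python
import numpy as np

def dominant_bin(region, bins=16):
    """Index of the dominant intensity bin in a grayscale region (0-255)."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    return int(np.argmax(hist))

def obstacle_detected(frame, floor_sample, mask_height):
    """Illustrative HIS-style check: an obstacle is flagged when the
    dominant intensity of the masked lower strip of the frame switches
    away from the dominant intensity of a known floor sample."""
    strip = frame[-mask_height:, :]     # mask: lower strip of the frame
    return dominant_bin(strip) != dominant_bin(floor_sample)
```

Note that no distance is ever measured: the decision depends only on which intensity dominates the masked region, which is the "switching" the full algorithm exploits.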


:: VITAR (VIsion based Tracked Autonomous Robot) ::

Figure 1: VITAR-I

For vision-based navigation experiments, we have built a robotic testbed christened VITAR (Vision based Tracked Autonomous Robot) that consists of a tracked mobile robot equipped with a pan-tilt mounted vision sensor, an on-board PC, driver electronics, and a wireless link to a remote PC. A novel appearance-based obstacle avoidance algorithm that uses histograms of images obtained from a monocular camera was developed and tested on the robot.

Figure 2: VITAR-II

VITAR has evolved into its newer version, VITAR-II. Many mechanical modifications were made to suit outdoor navigation; VITAR-II is more compact and lightweight than its predecessor. Modifications to the mobile robot base and the design of more complex experiments are currently in progress. Images of VITAR are available in the Gallery section.


:: Robotic Implementation of GSO using Player/Stage ::

The glowworm swarm optimization (GSO) algorithm is an optimization technique developed for the simultaneous capture of multiple optima of multimodal functions, and it can be implemented on a collection of mobile robots to carry out multiple source localization. In this work, we conduct embodied robot simulations using the multi-robot simulator Player/Stage, which provides realistic sensor and actuator models, to demonstrate the efficacy of the GSO algorithm in simultaneously detecting multiple sources. The study, based on embodied simulation experiments, also shows the robustness of the algorithm to implementation constraints.
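The core GSO update rules (luciferin decay plus fitness-proportional reinforcement, probabilistic movement toward a brighter neighbor, and an adaptive decision range) can be sketched as follows. The parameter values and names are illustrative, not the published defaults:

```python
import numpy as np

def gso(objective, n_agents=50, dim=2, lo=-3.0, hi=3.0, iters=200,
        rho=0.4, gamma=0.6, beta=0.08, n_t=5, step=0.03, r_s=2.0, seed=0):
    """Minimal GSO sketch. objective(x) maps an (n, dim) array of agent
    positions to n fitness values; returns final agent positions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_agents, dim))
    luciferin = np.full(n_agents, 5.0)          # initial luciferin level
    r_d = np.full(n_agents, r_s)                # adaptive decision range
    for _ in range(iters):
        # Luciferin update: decay plus fitness-proportional reinforcement
        luciferin = (1 - rho) * luciferin + gamma * objective(x)
        new_x = x.copy()
        for i in range(n_agents):
            dist = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((dist < r_d[i]) & (luciferin > luciferin[i]))[0]
            if len(nbrs):
                # One step toward a probabilistically chosen brighter neighbor
                p = luciferin[nbrs] - luciferin[i]
                j = rng.choice(nbrs, p=p / p.sum())
                d = x[j] - x[i]
                norm = np.linalg.norm(d)
                if norm > 0:
                    new_x[i] = x[i] + step * d / norm
            # Grow or shrink the decision range toward n_t neighbors
            r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - len(nbrs))))
        x = new_x
    return x
```

Because each agent only moves toward brighter neighbors within its own decision range, sub-swarms form around different peaks rather than collapsing onto a single optimum, which is what makes GSO suitable for multiple source localization.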


:: Autonomous Docking System for a Mobile Robot ::

This robot performs a beacon-based docking operation. It is equipped with an ultrasonic range sensor (SRF05) and a set of infrared receivers. The docking system makes the robot dock to a target at a specific location and orientation. Two active IR beacons are placed such that the docking target lies on the Voronoi partition boundary of the beacons. The beacons transmit infrared signals in all directions; the robot detects them and reaches the docking target with the required orientation. A detailed description is available here. Videos of the docking robot are available in the Gallery section.

                                

Figure 3: The robot and the docking target
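The controller itself is not spelled out above, but a hypothetical proportional controller consistent with the Voronoi-partition idea would steer so that the two beacon intensities balance: equal intensities mean the robot is on the perpendicular bisector of the beacons, which passes through the dock. The inverse-square intensity model, function names, and gain below are all assumptions for illustration:

```python
import numpy as np

def ir_reading(pos, beacon):
    """Idealized inverse-square intensity of a beacon seen from pos."""
    return 1.0 / np.sum((np.asarray(pos) - np.asarray(beacon)) ** 2)

def steering_command(i_left, i_right, gain=0.5):
    """Proportional steering on the two beacon intensities: zero when
    they balance, i.e. the robot sits on the Voronoi boundary of the
    beacons. Positive output = turn counter-clockwise (toward the
    bisector when the robot has drifted right of it)."""
    return gain * (i_right - i_left)
```

Driving forward while holding this command at zero keeps the robot on the bisector, so it arrives at the dock with the orientation fixed by the beacon geometry.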
