Matthew Luciw

Research Interests

(as of 2010)



Lobe Component Analysis (LCA)

Cortical neuronal layers in mammals develop to detect different types of features depending on their input signals, as demonstrated by studies such as the cortical rewiring experiments of Mriganka Sur and collaborators. These results suggest a data-driven feature extraction strategy, rather than genetically encoded, specialized brain areas with independent feature maps.

Inspired by Principal Component Analysis and Independent Component Analysis, we developed Lobe Component Analysis (LCA), a method for incrementally setting the weights of a neuronal layer through two biologically plausible mechanisms: Hebbian learning and lateral inhibition. LCA’s strength lies first in its simplicity and generality, which follow from those two mechanisms. It is intended to run in real time on a developmental robot to develop the robot’s internal representation, which is environment- and input-dependent. Another major strength is its mathematical optimality. We proved that LCA is optimal both (1) spatially and (2) temporally, meaning a given neuron takes its history of “observations” (its input whenever it is able to fire) into account in the best possible way at every time step of development. This contrasts with gradient-based adaptive learning algorithms, which use only the last observation. This optimality was shown to greatly increase learning speed. I have also worked on an extension of LCA that uses weighted lateral inhibitory and excitatory connections.
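
The core update loop can be sketched in a few lines. Below is a minimal, illustrative Python version, assuming a top-1 winner-take-all stand-in for lateral inhibition and illustrative constants in the amnesic learning-rate schedule; it is not the exact formulation from the paper:

```python
import numpy as np

def amnesic_rate(n, t1=20, t2=200, c=2.0, r=2000.0):
    """Amnesic-mean parameter: gradually weights recent observations
    more than a plain running average would (constants are illustrative)."""
    if n < t1:
        return 0.0
    elif n < t2:
        return c * (n - t1) / (t2 - t1)
    return c + (n - t2) / r

class SimpleLCA:
    """Minimal LCA sketch: Hebbian updates with winner-take-all
    lateral inhibition and a per-neuron firing age."""
    def __init__(self, n_neurons, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((n_neurons, dim))
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        self.age = np.ones(n_neurons)

    def update(self, x):
        resp = self.w @ x                    # pre-responses (inner products)
        winner = int(np.argmax(resp))        # lateral inhibition: only top-1 fires
        n = self.age[winner]
        mu = amnesic_rate(n)
        w1 = (n - 1 - mu) / n                # retention of the neuron's history
        w2 = (1 + mu) / n                    # weight of the current observation
        self.w[winner] = w1 * self.w[winner] + w2 * resp[winner] * x
        self.w[winner] /= np.linalg.norm(self.w[winner]) + 1e-12
        self.age[winner] += 1
        return winner
```

Note that `w1 + w2 = 1` at every step, so each winner's weight vector remains an optimally scheduled average of the observations it has fired for, which is the temporal side of the dual optimality discussed above.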



Publications

J. Weng and M. Luciw, “Dually Optimal Neuronal Layers: Lobe Component Analysis,” IEEE Transactions on Autonomous Mental Development, vol. 1, no. 1, pp. 68-85, 2009.

M. Luciw and J. Weng, “Laterally-Connected Lobe Component Analysis: Precision and Topography,” in Proc. 8th IEEE International Conference on Development and Learning, Shanghai, China, June 4-6, 2009.

M. Luciw and J. Weng, “A Model with Energy, Plasticity Scheduling and Cell Age for Neuronal In-place Adaptation,” Society for Neuroscience, San Diego, CA, November 3-7, 2007.

LCA software in MATLAB.



Recurrent Hebbian Neural Networks and Topographic Class Grouping

How is abstract representation developed in later cortical areas? How might our actions alter lower-level cortical representation? To investigate this, we placed multiple layers of LCA neurons in sequence, and added an output, or “motor” layer (inspired by motor and premotor cortices), which directly controls actions. Each internal layer utilized bottom-up and top-down connections simultaneously in learning and information processing. Such networks can be called Hebbian since connections strengthen from correlated firing.

I focused on the effects of top-down connections from later areas to earlier areas. The computational studies and results indicate that top-down connections from a later area (e.g., a motor area) to an earlier area enable the development of discriminant features and class grouping. “Discriminant features” means the firing of developed neurons becomes insensitive to irrelevant sensory information, where relevance is determined by imposed behaviors. By class grouping, I mean neurons form topographic firing areas based on semantic information from motor areas, as seen biologically in the FFA and PPA. I also showed how top-down connections provide temporal context, and how such context assists perception in a continuously but gradually changing physical world.

Experiments involving visual recognition of rotating objects showed that the recognition performance of top-down-enabled networks is greatly increased over that of networks developed without top-down connections. Discriminant features and class grouping emerged only in top-down-enabled networks. The results also show how temporal context greatly improves performance, to nearly 100% after the transition periods from one rotating object to the next.
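
The key mechanism, a layer response that mixes bottom-up and top-down pre-responses before lateral inhibition, can be sketched as follows. The mixing weight `alpha` and the top-k winner rule are illustrative assumptions, not the exact formulation used in the publications:

```python
import numpy as np

def layer_response(x, top, Wb, Wt, alpha=0.3, k=1):
    """Pre-response of an internal layer combining bottom-up input x
    (through weights Wb) and a top-down signal `top` (through Wt);
    top-k lateral inhibition keeps only the k strongest neurons firing."""
    z = (1 - alpha) * (Wb @ x) + alpha * (Wt @ top)
    y = np.zeros_like(z)
    idx = np.argsort(z)[-k:]      # winners after lateral inhibition
    y[idx] = z[idx]
    return y
```

With `alpha = 0` the layer is purely bottom-up; raising `alpha` lets a motor-area signal bias which neurons win, which is how imposed behaviors shape the developing features toward behaviorally relevant (discriminant) ones.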

Publications

M. Luciw and J. Weng, “Top-Down Connections in Self-Organizing Hebbian Networks: Topographic Class Grouping,” IEEE Transactions on Autonomous Mental Development.

M. D. Luciw and J. Weng, “Topographic Class Grouping with Applications to 3D Object Recognition,” in Proc. IEEE World Congress on Computational Intelligence: International Joint Conference on Neural Networks, Hong Kong, June 1-6, 2008.

M. D. Luciw, J. Weng and S. Zeng, “Motor Initiated Expectation through Top-Down Connections as Abstract Context in a Physical World,” in Proc. 7th IEEE International Conference on Development and Learning, Monterey, CA, Aug 9-12, 2008.

J. Weng and M. D. Luciw, “Optimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topography,” in Proc. 5th IEEE International Conference on Development and Learning, Bloomington, IN, USA, May 31 - June 3, 2006.

M. Luciw and J. Weng, “The Effects of Top-Down Connections in Computational Multilayer Two-Way Hebbian Networks: Abstract Representation and Temporal Context,” Society for Neuroscience, Chicago, IL, October 17-21, 2009.

J. Weng, T. Luwang, M. Luciw, W. Shi, H. Lu, M. Chi and X. Xue, “Multilayer In-Place Learning Networks for General Invariance,” Society for Neuroscience, San Diego, CA, November 3-7, 2007.

Mojtaba Solgi's MILN software (C++)


Where-What Networks


Pathways of information processing dealing with what (identity) and where (spatiomotor) diverge in biological visual processing before re-joining at pre-motor areas of cortex. This separation of identity information from location information motivated the design of the Where-What Networks (WWNs), biologically inspired systems that integrate attention and recognition. A WWN learns via Hebbian learning, using both bottom-up and top-down connections in both learning and performance.
I showed how WWNs develop the capability of selective attention to foregrounds over complex backgrounds. A WWN can perform in four different selective attention modes: (1) bottom-up free-viewing: the network finds and identifies a foreground object with no instruction; (2) top-down object-based: the network is instructed to find a particular object and is able to do so, even when other (distractor) objects are in the scene; (3) top-down position-based: the network is instructed to identify whatever is at a particular location and is able to do so, even when distractor objects are in the scene; (4) uncued attention shift: the network finds and identifies a foreground object, then shifts its attention to successfully find and identify another object after a short time.
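
The four modes can be mimicked by a toy selection rule over a (location, class) response grid. Everything here (the grid shape, the cue encoding, the multiplicative masking) is an illustrative simplification, not the actual WWN architecture:

```python
import numpy as np

def wwn_output(resp, what_cue=None, where_cue=None):
    """Toy analogue of the four WWN attention modes.
    resp[l, c] = strength of evidence for class c at location l.
    what_cue restricts attention to one class (object-based mode);
    where_cue restricts it to one location (position-based mode);
    with neither cue, selection is bottom-up free-viewing."""
    biased = resp.copy()
    if what_cue is not None:          # top-down object-based bias
        mask = np.zeros_like(biased)
        mask[:, what_cue] = 1.0
        biased = biased * mask
    if where_cue is not None:         # top-down position-based bias
        mask = np.zeros_like(biased)
        mask[where_cue, :] = 1.0
        biased = biased * mask
    loc, cls = np.unravel_index(np.argmax(biased), biased.shape)
    return int(loc), int(cls)
```

An uncued attention shift (mode 4) would correspond to suppressing the already-attended location in `resp` and selecting again, so a second foreground object wins on the next pass.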

Publications

J. Weng and M. Luciw. Brain-Like Emergent Spatial Processing. IEEE Transactions on Autonomous Mental Development (TAMD), 2011.

M. Luciw and J. Weng, "Where-What Network 3: Developmental Top-Down Attention for Multiple Foregrounds and Complex Backgrounds," in Proc. IEEE World Congress on Computational Intelligence: International Joint Conference on Neural Networks, Barcelona, Spain, 2010.

M. Luciw and J. Weng, “Where-What Network-4: The Effect of Multiple Internal Areas,” in Proc. 9th IEEE International Conference on Development and Learning, Ann Arbor, MI, 2010.

Also see the work by Zhengping Ji, who developed WWN-1 and WWN-2.


Perceptual Awareness in Vehicles From Radars and a Camera

Zhengping and I built an object learning system that fuses sensory information from an automotive radar system and a video camera. The radar system provides rudimentary attention, focusing visual analysis on relatively small areas within the image plane. For each image, the attended visual area is coded by LCA-developed orientation-selective features, yielding a sparse representation. This representation is then input to a recurrent Hebbian learning network, which learns an internal representation able to differentiate objects in the environment.
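
The sparse coding step can be roughly sketched as projecting the attended patch onto an oriented filter bank and keeping only the strongest responses. The hand-built edge filters below are a stand-in for the LCA-developed features, and the patch size, orientation count, and top-k rule are all illustrative:

```python
import numpy as np

def oriented_filters(size=8, n_orient=4):
    """Bank of simple Gaussian-windowed oriented edge filters
    (a stand-in for LCA-developed orientation-selective features)."""
    ys, xs = np.mgrid[:size, :size] - (size - 1) / 2.0
    bank = []
    for i in range(n_orient):
        theta = np.pi * i / n_orient
        proj = xs * np.cos(theta) + ys * np.sin(theta)
        f = np.sign(proj) * np.exp(-(xs**2 + ys**2) / (2 * (size / 4) ** 2))
        bank.append((f / np.linalg.norm(f)).ravel())
    return np.array(bank)

def sparse_code(patch, bank, k=2):
    """Project an attended patch onto the filter bank and keep only
    the k largest-magnitude coefficients (top-k sparsification)."""
    coeff = bank @ patch.ravel()
    code = np.zeros_like(coeff)
    idx = np.argsort(np.abs(coeff))[-k:]
    code[idx] = coeff[idx]
    return code
```

A patch containing a vertical edge drives the vertical-edge filter most strongly, so its coefficient survives the sparsification while the rest are zeroed, which is the sparse representation handed to the recurrent network.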

Publications


Z. Ji, M. Luciw, J. Weng and S. Zeng, “Sparse Developmental Object Learning in a Vehicular Radar-Vision Fusion Framework,” submitted to IEEE Transactions on Intelligent Transportation Systems.

Z. Ji, M. D. Luciw, and J. Weng, “Epigenetic Sensorimotor Pathways and its Application to Developmental Object Learning,” in Proc. IEEE World Congress on Computational Intelligence: Congress on Evolutionary Computation, Hong Kong, China, June 1-6, 2008.

M. D. Luciw*, Z. Ji*, J. Weng, S. Zeng, and V. Sadekar, “A Biologically-Motivated Developmental System Towards Perceptual Awareness in Vehicle-Based Robots,” in Proc. 7th International Conference on Epigenetic Robotics, Piscataway, NJ, Nov 5-7, 2007.
* - Both authors contributed equally.


Concept Development

How does our semantic understanding emerge from a stream of low-level physical data? We investigated this question and created a system in which a “semi-concrete” concept of distance traveled emerged from experience. Distance traveled involves both sensory information (e.g., movement can be perceived visually) and motor information (the actions taken to move a certain amount). First, the system must learn to fill in each piece of information when only the other is present. Second, the internal representation must be calibrated against existing semantic structure in the external environment (for example, when someone tells you that you have moved 10 meters).
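
The first requirement, filling in a missing modality from the one that is present, can be illustrated with a toy cross-modal store. Nearest-neighbor recall over stored pairs is a deliberate simplification of the autonomous association learning in the paper:

```python
import numpy as np

class AssociativeMemory:
    """Toy cross-modal association: store co-occurring (sensory, motor)
    pairs during experience, then recall the missing half of a pair
    from whichever half is present."""
    def __init__(self):
        self.pairs = []

    def observe(self, sensory, motor):
        self.pairs.append((np.asarray(sensory, float),
                           np.asarray(motor, float)))

    def recall_motor(self, sensory):
        # Nearest stored sensory pattern -> its associated motor pattern.
        s = np.asarray(sensory, float)
        d = [np.linalg.norm(s - p[0]) for p in self.pairs]
        return self.pairs[int(np.argmin(d))][1]

    def recall_sensory(self, motor):
        # Nearest stored motor pattern -> its associated sensory pattern.
        m = np.asarray(motor, float)
        d = [np.linalg.norm(m - p[1]) for p in self.pairs]
        return self.pairs[int(np.argmin(d))][0]
```

Calibration against external semantic structure (the second requirement) would then amount to attaching a symbolic label, such as "10 meters," to a region of this learned joint representation.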

Publication

L. Grabowski, M. D. Luciw, and J. Weng, “A System for Epigenetic Concept Development through Autonomous Association Learning,” in Proc. 6th IEEE International Conference on Development and Learning, Imperial College, London, July 11-13, 2007.



Real-Time Learning of Dynamic Obstacle Avoidance

We built a system that autonomously learns to avoid moving obstacles using the Hierarchical Discriminant Regression (HDR) learning engine. This project also included obstacle avoidance experiments on the Dav robot.

Publication

H. Zhao, Z. Ji, M. D. Luciw, and J. Weng, “Developmental Learning for Avoiding Dynamic Obstacles Using Attention,” in Proc. 6th IEEE International Conference on Development and Learning, London, July 11-13, 2007.