Recurrent Neural Networks
Supervised artificial recurrent neural networks (RNNs) have adaptive feedback connections that allow them to learn mappings from input sequences to output sequences. In principle, they can implement almost arbitrary sequential, algorithmic behavior. They are biologically more plausible and computationally more powerful than other adaptive models such as Hidden Markov Models and Conditional Markov Random Fields (which lack continuous internal states), and feedforward neural networks and traditional Support Vector Machines (SVMs), which have no internal states at all. The goal of this project is to further improve and analyze our state-of-the-art algorithms for supervised RNNs, especially bi-directional RNNs, and to apply them to challenging sequence learning tasks such as speech processing, sunspot prediction, and protein prediction. One focus is on novel types of recurrent SVMs; another is on RNNs with information-theoretic objectives and on combinations with statistical approaches.
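To illustrate the feedback connections described above, here is a minimal sketch of a vanilla-RNN forward pass in pure Python. All weights, sizes, and the random initialization are illustrative assumptions for this sketch, not the project's actual models or training algorithms (which would also include a learning procedure and, for bi-directional RNNs, a second pass over the reversed sequence).

```python
import math
import random

def rnn_forward(xs, W_xh, W_hh, W_hy):
    """Map an input sequence xs to an output sequence, one vector per step.

    The hidden state h is fed back into the next update (W_hh @ h), which is
    what lets the network's output depend on the whole input history.
    """
    n_hid = len(W_hh)
    h = [0.0] * n_hid                          # hidden state starts at zero
    ys = []
    for x in xs:
        # recurrent update: new state depends on the input AND the old state
        h = [math.tanh(sum(W_xh[i][j] * x[j] for j in range(len(x)))
                       + sum(W_hh[i][k] * h[k] for k in range(n_hid)))
             for i in range(n_hid)]
        # readout at every time step -> an output *sequence*, not one value
        ys.append([sum(W_hy[o][i] * h[i] for i in range(n_hid))
                   for o in range(len(W_hy))])
    return ys

# Illustrative random weights and a toy input sequence (assumed sizes).
random.seed(0)
n_in, n_hid, n_out, T = 2, 4, 1, 5
rand = lambda r, c: [[random.uniform(-0.5, 0.5) for _ in range(c)]
                     for _ in range(r)]
xs = [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(T)]
ys = rnn_forward(xs, rand(n_hid, n_in), rand(n_hid, n_hid), rand(n_out, n_hid))
print(len(ys), len(ys[0]))  # one n_out-dimensional output per input step
```

Because the same weight matrices are reused at every time step, the network can in principle process sequences of any length, which is the sense in which RNNs can implement sequential, algorithmic behavior.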
People
Santiago Fernandez
Daan Wierstra
Jürgen Schmidhuber



Copyright © 2003 - 2012 IDSIA. All Rights Reserved.