Selection of machine learning techniques requires sensitivity to the requirements of the problem. In particular, a problem can be made more tractable by deliberately choosing algorithms that are biased toward solutions of the requisite kind. In this paper, we argue that recurrent neural networks have a natural bias toward a problem domain of which biological sequence analysis tasks are a subset. We illustrate this bias with experiments on synthetic data. We then demonstrate that the bias can be exploited, using a data set of protein sequences containing several classes of subcellular localization targeting peptides. The results show that recurrent neural networks generally outperform feed-forward networks on sequence analysis tasks. Furthermore, as the patterns within the sequence become more ambiguous, the choice of specific recurrent architecture becomes more critical.
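The architectural difference at issue can be sketched concretely. The following minimal, illustrative example (not the paper's actual model; all sizes, weights, and names are hypothetical) shows why a recurrent unit is biased toward sequential structure: its hidden state depends on the whole prefix of the sequence, whereas a feed-forward unit sees only the current symbol.

```python
import numpy as np

# Illustrative sketch only: an Elman-style recurrent unit processing a
# one-hot-encoded symbol sequence. Dimensions and weights are arbitrary.
rng = np.random.default_rng(0)
n_symbols, n_hidden = 4, 8
W_in = rng.normal(scale=0.5, size=(n_hidden, n_symbols))   # input -> hidden
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # hidden -> hidden

def rnn_state(sequence):
    """Hidden state after consuming the whole sequence.

    The W_rec @ h term carries context from earlier symbols; this is
    the source of the recurrent network's bias toward sequence tasks.
    """
    h = np.zeros(n_hidden)
    for symbol in sequence:
        x = np.eye(n_symbols)[symbol]        # one-hot encoding of symbol
        h = np.tanh(W_in @ x + W_rec @ h)    # recurrence accumulates context
    return h

def ff_state(symbol):
    """A feed-forward unit: depends only on the current symbol."""
    return np.tanh(W_in @ np.eye(n_symbols)[symbol])

# Two sequences ending in the same symbol yield different recurrent
# states (context is retained) but identical feed-forward states.
s1 = rnn_state([0, 1, 2])
s2 = rnn_state([3, 3, 2])
```

Here `s1` and `s2` differ even though both sequences end in symbol `2`, while `ff_state(2)` is the same regardless of history: the feed-forward network cannot distinguish contexts that a targeting-peptide classifier may need.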