Computations by spiking neurons are performed using the timing of action potentials. We investigate the computational power of a simple model for such a spiking neuron in the Boolean domain by comparing it with traditional neuron models such as threshold gates (or McCulloch–Pitts neurons) and sigma‐pi units (or polynomial threshold gates). In particular, we estimate the number of gates required to simulate a spiking neuron by a disjunction of threshold gates and we establish tight bounds for this threshold number. Furthermore, we analyze the degree of the polynomials that a sigma‐pi unit must use for the simulation of a spiking neuron. We show that this degree cannot be bounded by any fixed value. Our results give evidence that the use of continuous time as a computational resource endows single‐cell models with substantially larger computational capabilities.
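The two traditional models the abstract compares against can be made concrete in a few lines of Python. The sketch below is illustrative only (the function names, weights, and thresholds are not from the paper): a threshold gate computes the sign of a weighted sum of the inputs, while a sigma-pi unit thresholds a polynomial in the inputs, and the parity function XOR shows why higher polynomial degree buys extra power, since no single threshold gate (degree 1) computes it.

```python
def threshold_gate(weights, theta, x):
    """McCulloch-Pitts neuron: fires iff the weighted input sum reaches theta."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

def sigma_pi_unit(monomials, theta, x):
    """Polynomial threshold gate: each monomial is (weight, input indices);
    the unit fires iff the weighted sum of input products reaches theta."""
    total = 0
    for w, idxs in monomials:
        prod = 1
        for i in idxs:
            prod *= x[i]
        total += w * prod
    return int(total >= theta)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND is computable by a single threshold gate: x1 + x2 >= 2.
assert [threshold_gate((1, 1), 2, x) for x in inputs] == [0, 0, 0, 1]

# XOR requires a degree-2 monomial: x1 + x2 - 2*x1*x2 >= 1.
xor_poly = [(1, (0,)), (1, (1,)), (-2, (0, 1))]
assert [sigma_pi_unit(xor_poly, 1, x) for x in inputs] == [0, 1, 1, 0]
```

In the simulation the abstract describes, a disjunction of threshold gates would OR the outputs of several such `threshold_gate` units, and the threshold number is the minimum number of gates needed; the degree result says that for sigma-pi units, no fixed monomial degree suffices for all spiking neurons.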