Neural network learning and expert systems
Depth-Size Tradeoffs for Neural Computation
IEEE Transactions on Computers - Special issue on artificial neural networks
Threshold circuits of bounded depth
Journal of Computer and System Sciences
Discrete neural computation: a theoretical foundation
Bounds for the Computational Power and Learning Complexity of Analog Neural Nets
SIAM Journal on Computing
Effective learning in recurrent max-min neural networks
Neural Networks
Reinforcement Learning Using the Stochastic Fuzzy Min–Max Neural Network
Neural Processing Letters
Biophysiologically plausible implementations of the maximum operation
Neural Computation
On the Computational Power of Winner-Take-All
Neural Computation
The min-max function differentiation and training of fuzzy neural networks
IEEE Transactions on Neural Networks
Learning in the combinatorial neural model
IEEE Transactions on Neural Networks
General fuzzy min-max neural network for clustering and classification
IEEE Transactions on Neural Networks
Adaptive resolution min-max classifiers
IEEE Transactions on Neural Networks
K-winners-take-all circuit with O(N) complexity
IEEE Transactions on Neural Networks
We investigate the computational power of max-min propagation (MMP) neural networks, composed of neurons with maximum (Max) or minimum (Min) activation functions applied to the weighted sums of their inputs. The main results are that a single-layer MMP network can represent exactly any pseudo-Boolean function F: {0,1}^n → [0,1], and that two-layer MMP neural networks are universal approximators. In addition, it is shown that several well-known fuzzy min-max (FMM) neural networks, such as Simpson's FMM, are representable by MMP neural networks.
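The abstract's model can be sketched in a few lines. The following is a minimal, illustrative reading (not the paper's exact definition): each MMP neuron takes the Max or Min over its weighted inputs, and a two-layer network composes a Min layer with a Max output neuron. The function names and weight layout are assumptions for illustration.

```python
def mmp_neuron(x, w, op=max):
    """One MMP neuron: Max (or Min) taken over the weighted inputs w_i * x_i."""
    return op(wi * xi for wi, xi in zip(w, x))

def mmp_two_layer(x, hidden_w, out_w):
    """Two-layer MMP network: a Min hidden layer feeding a single Max output neuron."""
    hidden = [mmp_neuron(x, w, op=min) for w in hidden_w]
    return mmp_neuron(hidden, out_w, op=max)

# Illustrative run: max over two min-units on a 2-dimensional input.
y = mmp_two_layer([0.2, 0.8], hidden_w=[[1.0, 1.0], [0.5, 1.0]], out_w=[1.0, 1.0])
```

With unit output weights, the network computes the maximum of the hidden minima; here the first hidden unit gives min(0.2, 0.8) = 0.2, the second min(0.1, 0.8) = 0.1, so the output is 0.2.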