This paper presents a novel neural network architecture, the analysis-adjustment-synthesis network (AASN), and tests its efficiency and accuracy in non-linear function modelling and classification. The AASN is a composite of three sub-networks: an analysis sub-network, an adjustment sub-network, and a synthesis sub-network. The analysis sub-network is a one-layered network that spreads the input values into a layer of 'spread input neurons'. The synthesis sub-network is a one-layered network that spreads the output values back into a layer of 'spread output neurons'. The adjustment sub-network, situated between the analysis and synthesis sub-networks, is a standard multi-layered network that serves as the learning mechanism. In the recall phase, after the adjustment sub-network has been trained, the synthesis sub-network receives the values of the spread output neurons and synthesizes them into output values with a weighted-average computation; the weights in this computation are derived by the method of Lagrange multipliers. The approach is tested on four function-mapping problems and one classification problem. The results show that combining the analysis and synthesis sub-networks with a multi-layered network significantly improves the network's efficiency and accuracy.
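The abstract does not specify the spread function or the Lagrange-derived weights, so the following is only a minimal sketch of the spread-and-synthesize idea: a scalar input is spread over a layer of neurons via assumed Gaussian basis functions centred on a grid, and the synthesis step recovers a scalar by a normalised weighted average over the same centres. The function names, the Gaussian form, and the normalisation are illustrative assumptions, not the paper's method.

```python
import numpy as np

def analysis_spread(x, centres, width=0.25):
    """Spread one input value over a layer of 'spread input neurons'.

    Assumption: Gaussian activations around fixed centres stand in for
    the paper's (unspecified) spread encoding.
    """
    return np.exp(-((x - centres) ** 2) / (2.0 * width ** 2))

def synthesis_combine(spread_outputs, centres):
    """Synthesize spread-output-neuron values back into one output value
    with a weighted-average computation.

    Assumption: weights are the normalised activations; the paper derives
    its weights via Lagrange multipliers, which is not reproduced here.
    """
    w = spread_outputs / spread_outputs.sum()
    return float(w @ centres)

centres = np.linspace(0.0, 1.0, 5)   # centres of the spread neurons
spread = analysis_spread(0.5, centres)
recovered = synthesis_combine(spread, centres)
```

With symmetric centres, spreading 0.5 and synthesizing it back returns 0.5, illustrating that the encode/decode pair is consistent for inputs the grid represents well.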