
Abstract

The effect of different kinds of weight normalization on the outcome of a simple competitive learning rule is analyzed. It is shown that there are important differences in the representation formed depending on whether the constraint is enforced by dividing each weight by the same amount ("divisive enforcement") or subtracting a fixed amount from each weight ("subtractive enforcement"). For the divisive cases, weight vectors spread out over the space so as to evenly represent "typical" inputs, whereas for the subtractive cases the weight vectors tend to the axes of the space, so as to represent "extreme" inputs. The consequences of these differences are examined.
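The contrast between the two enforcement schemes can be sketched in a few lines. This is a minimal illustration, not the paper's actual simulation: it assumes a winner-take-all competitive rule with non-negative weights constrained to unit L1 norm, and all names and parameters (`competitive_step`, `lr`, the clipping at zero) are illustrative assumptions.

```python
import numpy as np

def competitive_step(W, x, lr=0.1, enforcement="divisive"):
    """One update of simple competitive learning with weight normalization.

    W : (units, inputs) non-negative weight matrix, each row summing to 1
    x : non-negative input vector
    enforcement : "divisive" or "subtractive"
    """
    winner = np.argmax(W @ x)           # the unit with the largest response wins
    W = W.copy()
    W[winner] += lr * x                 # Hebbian-style update for the winner only
    if enforcement == "divisive":
        # divide every weight of the winner by the same factor
        W[winner] /= W[winner].sum()
    else:
        # subtract the same fixed amount from each weight of the winner
        excess = W[winner].sum() - 1.0
        W[winner] -= excess / W.shape[1]
        W[winner] = np.clip(W[winner], 0.0, None)  # keep weights non-negative
    return W
```

Divisive enforcement rescales the whole weight vector, preserving its direction, whereas subtractive enforcement shifts every component by the same amount, which over many updates drives small components toward zero (hence toward the axes of the space, the "extreme" inputs described above).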