Neural computing with small weights
K.-Y. Siu, J. Bruck
Advances in Neural Information Processing Systems, 1991
Abstract
An important issue in neural computation is the dynamic range of the weights in neural networks. Many experimental results on learning indicate that the weights in the networks can grow prohibitively large with the size of the inputs. Here we address this issue by studying the tradeoffs between the depth and the size of the weights in polynomial-size networks of linear threshold elements (LTEs). We show that there is an efficient way of simulating a network of LTEs with large weights by a network of LTEs with small weights. In particular, we prove that every depth-d, polynomial-size network of LTEs with exponentially large integer weights can be simulated by a depth-(2d+1), polynomial-size network of LTEs with polynomially bounded integer weights. To prove these results, we use tools from harmonic analysis of Boolean functions. Our technique is quite general; it provides insights into some other problems as well. For example, we are able to improve the best known results on the depth of a network of linear threshold elements that computes the COMPARISON, SUM, and PRODUCT of two n-bit numbers, and the MAXIMUM and SORTING of n n-bit numbers.
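To make the large-weight phenomenon concrete, here is a minimal illustrative sketch (my own, not code from the paper) of an LTE and of COMPARISON realized as a single depth-1 LTE. Deciding x >= y in one gate uses weights of magnitude 2^i, which grow exponentially with the input length n; the paper's result is that such large-weight networks can always be traded for slightly deeper networks whose integer weights are only polynomially bounded.

```python
def lte(weights, threshold, inputs):
    """Linear threshold element: output 1 iff the weighted sum
    of the Boolean inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def comparison(x_bits, y_bits):
    """COMPARISON(x, y) = 1 iff x >= y, as a single depth-1 LTE.
    Bits are least-significant first. The weights +/-2^i are
    exponentially large in n: the dynamic-range issue the paper studies."""
    n = len(x_bits)
    weights = [2 ** i for i in range(n)] + [-(2 ** i) for i in range(n)]
    # x >= y  iff  sum_i 2^i x_i - sum_i 2^i y_i >= 0
    return lte(weights, 0, x_bits + y_bits)

# x = 6 (110), y = 5 (101), bits least-significant first
assert comparison([0, 1, 1], [1, 0, 1]) == 1   # 6 >= 5
assert comparison([1, 0, 1], [0, 1, 1]) == 0   # 5 <  6
```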