Assignment: Learning and Relearning
Construct a simple vanilla backpropagation network, using ADOLC or
whatever other tool you wish.  Construct a random training set that
consists of two parts, A and B.
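
For concreteness, here is a minimal sketch of one possible setup in
plain numpy rather than ADOLC; the layer sizes, tanh hidden units, and
random regression targets are illustrative guesses, not requirements.

    import numpy as np

    rng = np.random.default_rng(0)

    # Arbitrary sizes; the assignment leaves these choices to you.
    n_in, n_hid, n_out = 8, 16, 4

    # One hidden layer: tanh hidden units, linear outputs.
    W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
    W2 = rng.normal(0.0, 0.5, (n_out, n_hid))

    def forward(X, W1, W2):
        H = np.tanh(X @ W1.T)          # hidden activations
        return H, H @ W2.T             # hidden layer and network output

    def mse(Y, T):
        return np.mean((Y - T) ** 2)

    # Random training set in two parts, A and B.
    XA = rng.normal(size=(20, n_in)); TA = rng.normal(size=(20, n_out))
    XB = rng.normal(size=(20, n_in)); TB = rng.normal(size=(20, n_out))
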
First train the network for a while on A+B, and plot
performance on A and on B.  Then hit
the network's weights with some noise (not too much, please).  This
will raise the error on both A and
B.  Now train the network on just A,
while continuing to plot performance on A and on
B.  Look at how performance on B
changes while the network is "relearning" A
following damage.  If you train neither too much nor too little, and
don't make the learning rate too aggressive, you should see the
performance on B improve even though you are training
only on A.  In your writeup, speculate as to what
might cause this phenomenon.
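
Continuing the numpy sketch above, the train / damage / retrain
protocol might look as follows; the epoch counts, learning rate, and
noise level are guesses you will want to tune.

    def grad_step(X, T, W1, W2, lr=0.05):
        H, Y = forward(X, W1, W2)
        dY = 2.0 * (Y - T) / T.size        # gradient of mse at the outputs
        dW2 = dY.T @ H
        dH = (dY @ W2) * (1.0 - H ** 2)    # backprop through the tanh layer
        dW1 = dH.T @ X
        return W1 - lr * dW1, W2 - lr * dW2

    def errors(W1, W2):
        return (mse(forward(XA, W1, W2)[1], TA),
                mse(forward(XB, W1, W2)[1], TB))

    curve = []

    # Phase 1: train on A and B together, recording error on each part.
    XAB, TAB = np.vstack([XA, XB]), np.vstack([TA, TB])
    for _ in range(2000):
        W1, W2 = grad_step(XAB, TAB, W1, W2)
        curve.append(errors(W1, W2))

    # Damage: hit the weights with a little noise.
    W1 = W1 + rng.normal(0.0, 0.1, W1.shape)
    W2 = W2 + rng.normal(0.0, 0.1, W2.shape)

    # Phase 2: retrain on A only, still recording error on both A and B.
    for _ in range(500):
        W1, W2 = grad_step(XA, TA, W1, W2)
        curve.append(errors(W1, W2))
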
The precise parameters (how many hidden units, how many input units,
how many output units, how much to train, how much noise to add to the
weights when you damage the network) I leave to you.  The object is to
(a) do this in a timely fashion, and (b) notice an interesting effect.
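
To get the requested plots from the sketch above, something like this
(assuming matplotlib) suffices:

    import matplotlib.pyplot as plt

    errA, errB = zip(*curve)
    plt.plot(errA, label="error on A")
    plt.plot(errB, label="error on B")
    plt.axvline(2000, ls="--", label="damage; switch to training on A only")
    plt.xlabel("epoch"); plt.ylabel("mse"); plt.legend(); plt.show()
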
    
    
      Barak Pearlmutter
      <bap@cs.unm.edu>