Multilayer In-place Learning Network (MILN)

UPDATE 5/10/07: The two example scripts have been updated to be more understandable, and the API has been made easier to use. A problem with running the examples on newer versions of MATLAB has also been fixed.  On this page, I provide some information about how to interpret the examples and how to get started with the program.

There is some documentation within miln.m, and two example scripts are provided.  Please run both examples first.

Here is a recent paper about MILN.  Sections 2-4 will be useful in understanding the ideas behind the program.

Please direct any comments, questions, etc. here

Using the program

A question asked was:

``If I have a set of images, a set of desired outputs, and then another set of test images, where do I specify this to get it to run?''

There are four steps to this process:
(1) Network creation and parameter setting
(2) Network initialization
(3) Network updating
(4) Network testing

First, note that the inputs must be represented as vectors, not matrices (images are usually stored as matrices).  I have supplied a function ``imsquash.m'' in miln_helper to change a matrix into a vector.
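If you just want to see the idea, the same matrix-to-vector conversion can be sketched with MATLAB's built-in reshape.  This is only a sketch: the supplied imsquash.m may order or scale the pixels differently, so prefer it when calling miln.

```matlab
% Flatten a 28-by-28 image matrix into a 784-by-1 column vector.
% Note: reshape uses column-major order; imsquash.m may differ,
% so this is only an illustration of the shape conversion.
img = rand(28, 28);        % a stand-in for one digit image
x = reshape(img, [], 1);   % 784-by-1 vector, suitable as a miln input
```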

(1) Network creation and parameter setting:

The network is created by setting the first argument of miln to `create', e.g.:

milns(1) = miln('create', 1, 784, 1, [100 2]);

Above, a network with ID 1 is created that has 784 sensors (input dimension) and 1 motor (output dimension).  There are two layers of neurons, with 100 on the first layer and 2 on the second.

Parameters can be set and changed by using the `change' keyword, e.g.:

miln('change', 1, 3, 'do_update', 0);   %don't allow the motor weights to change
miln('change', 1, 1, 'use_lateral_excite', 1);  %use neighborhood updating on layer-1

In the first call, the ``do_update'' parameter on layer 3 is set to 0.  This means the layer-3 weights will not update (they are frozen).

Please see the contents of miln.m for an explanation of the different parameters, such as ``do_update''.  Note that the network does not need a pre-set number of neurons: setting the layer-specific ``use_resource_control'' parameter to 1, along with e.g. the ``precision_parameter'' parameter, allows the number of neurons to grow as needed, from experience.
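Using the `change' keyword shown above, enabling resource control on layer 1 might look like the following.  The value 0.95 is only a placeholder I made up for illustration; see miln.m for sensible settings of ``precision_parameter''.

```matlab
%let layer 1 grow neurons as needed, instead of fixing the count
miln('change', 1, 1, 'use_resource_control', 1);
%placeholder value -- consult miln.m for an appropriate setting
miln('change', 1, 1, 'precision_parameter', 0.95);
```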

(2) Network initialization

The sensory neurons' weights should be initialized first, e.g.:

%initialize the sensory neuron weights
for i=1:99
    miln('initialize sensory neuron', 1, digits(:,i), i);
end
%make sure the 100-th neuron is a 9
miln('initialize sensory neuron', 1, digits(:,2), 100);

where digits(:,i) is the i-th digit vector in the digits matrix and i gives the neuron index.  The second argument is, as always, the ID of the network.

Then, the motor neuron weights must be initialized, e.g.:

%initialize the two motor neurons to the digits '4' and '9'
miln('initialize motor neuron', 1, 4, 1);
miln('initialize motor neuron', 1, 9, 2);

where the third argument gives the initial weight and the fourth argument gives the motor neuron index.

Finally, some initial sensorimotor associations can be set.  Say I want to create an initial link from a sensory neuron representing a `4' digit to the digit-4-detector motor neuron.  Then

miln('initialize association', 1, 1, 1);

will create an association, in network ID 1, from sensory neuron #1 (that I know represents a four) to motor neuron #1 (that I know is a four-detector).

miln('initialize association', 1, 100, 2);

as another example, will create a link from sensory neuron 100 to motor neuron 2.  Note that whenever the 'initialize association' action is used, all other associations to that particular motor neuron are eliminated, so only use it once per motor neuron.

(3) Network updating

The network is updated very simply by giving a coupled sensory/motor frame, e.g.:

miln('update', 1, digits(:,i), labels(i));
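In a script, updating over a whole training set is just a loop over the coupled frames.  A sketch, assuming digits is a 784-by-N matrix with one image per column and labels is a matching 1-by-N vector of class labels:

```matlab
%present each coupled sensory/motor frame to network ID 1 in turn
for i = 1:size(digits, 2)
    miln('update', 1, digits(:,i), labels(i));
end
```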

(4) Network testing

The network is tested by giving a sensory frame without the motor part.  It returns the most plausible motor part, based on its previous observations, e.g.:

predictedLabel(j) = miln('test', 1, digits(:,j));

This queries network ID 1 with the j-th digit vector (no label provided).  The return value is the label that the network predicts.
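A typical test loop, with a simple accuracy computation at the end.  Here testDigits and testLabels are assumed names for your held-out test images (one per column) and their true labels:

```matlab
%query the network with each test digit and score the predictions
nTest = size(testDigits, 2);
predictedLabel = zeros(1, nTest);
for j = 1:nTest
    predictedLabel(j) = miln('test', 1, testDigits(:,j));
end
accuracy = sum(predictedLabel == testLabels) / nTest;
fprintf('Test accuracy: %.1f%%\n', 100 * accuracy);
```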

Interpreting the provided examples

Two example scripts are provided.  They use 784-sensor, 1-motor networks.  The sensors detect the intensities of the 784 pixels of 28-by-28-pixel handwritten digit images.  The motor gives a class label ('4' or '9').  What displays as you run them is a visualization of the network weights.
The point of the two examples is to show the effect of top-down excitation on the layer-1 weights.  In the first example, no top-down excitation is used, and the layer-1 features are derived from bottom-up similarity only (inner products of pixel intensity vectors).  Since `4' and `9' digits are often written similarly, the layer-2 neurons have a tough time discriminating the two digits.  In the second example, a strong top-down weight is used from layer 2 to layer 1.  This ensures that the layer-1 features are more ``pure'' in terms of class label, and the layer-2 neurons can discriminate more easily.

Interpreting the figures

As you run the examples, what you're seeing is the development of the network weights.  MILN develops sensorimotor pathways from experience. It will automatically derive features from a set of coupled input/output pairs (sensorimotor coupling). 

In the two examples, the inputs are vectors of pixels from handwritten digit images and the outputs are class labels.  What you see as the examples run is the development of the features on the two layers of the network.  There are 100 layer-one neurons and 2 layer-two neurons.

On layer one, since each neuron's weight vector has the same dimension as the input images, the weight vectors themselves can be viewed as images.  That's what you see in the top left: the 100 layer-one neurons' weights.  These weights are visually arranged in a 10-by-10 square, since the neurons themselves are arranged in a 2D ``cortical sheet''.

The 100 features shown in the upper left are mainly expressive, in that they represent the network's sensory input (here: digits) without much influence from the top-down motor input (here: the correct classification).

On the bottom, you see each of the two layer-two neurons' afferent weights.  The weights of the neuron taught to represent the ``4'' digit are on the left; the weights of the neuron taught to represent the ``9'' digit are on the right.  A lighter color means a stronger weight.

The weights on this layer are controlled to be strictly discriminatory.  They develop invariant, motor-specific features: these features develop to detect each of the two digits, invariant to the many variations that are stored on the first layer.

After every 50 new digits/labels are experienced, the weight display updates.

Lastly, a set of 500 new digits is used to query the network, with the output part not supplied (testing).  The layer-two neuron with the largest response provides the class label.  This result is simply printed to the screen.

Each neuron on each layer does the same thing.  A neuron develops from four sources: its excitatory afferent (bottom-up) input, excitatory lateral input (short-range), inhibitory lateral input (long-range), and excitatory top-down input (from the next layer only).

Anyway, the point of the two examples I provided is to examine how the excitatory top-down input affects the development of the layer-one features.

In the first example, layer-one neurons do not use the top-down input.  In the second example, the top-down input is used with a weight of 0.3 (meaning the afferent input is weighted at 0.7).

It can be seen that when both the top-down and lateral-excitatory inputs are used, layer one develops so that features corresponding to a certain class are all stored in the same physical area of the cortical sheet.