Where-What Networks (WWNs)
WWNs are general-purpose neuromorphic networks that model the brain's dorsal ("where") and ventral ("what") pathways, aiming at engineering solutions to interactive visual recognition and attention (e.g., WWN-1, WWN-2, WWN-3).
Language Acquisition Model
The model takes inspiration from neuromorphic networks such as MILN, WWN-1, WWN-2, and WWN-3 to create a language learner that can use generalization to produce new sentences. Read more in the paper here.
Multilayer In-place Learning Network (MILN)
MILN is a biologically inspired general-purpose regression network (see Dr. Luciw's work for details). A description of the code and a tutorial are available here. A key feature of MILN is Topographic Class Grouping (TCG): top-down connections from a higher cortical area to an earlier area enable the development of discriminative features. Read more details in Topographic Class Grouping via Discriminative Sparse Coding.
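To make the idea behind TCG concrete, the following is a minimal sketch (not the published MILN algorithm) of how a top-down class signal can be blended with bottom-up input so that the winning neurons, and hence the features they develop, group by class. The layer size, the blending weight, the one-hot top-down signal, and the amnesic update rule below are all assumptions made for illustration.

```python
import numpy as np

def tcg_layer(x_stream, labels, num_classes, num_neurons=20,
              top_down_weight=0.5, amnesic=2.0):
    """Blend bottom-up input with a top-down class signal so that the
    winning neurons (and the features they learn) group by class.
    Sizes, the blending weight, and the update rule are assumptions."""
    # bottom-up weights start from the first inputs; top-down weights start uniform
    wb = x_stream[:num_neurons].astype(float).copy()
    wt = np.full((num_neurons, num_classes), 1.0 / num_classes)
    ages = np.ones(num_neurons)
    for x, label in zip(x_stream[num_neurons:], labels[num_neurons:]):
        x = x / (np.linalg.norm(x) + 1e-12)
        y = np.eye(num_classes)[label]                      # one-hot top-down signal
        rb = (wb / (np.linalg.norm(wb, axis=1, keepdims=True) + 1e-12)) @ x
        rt = (wt / (np.linalg.norm(wt, axis=1, keepdims=True) + 1e-12)) @ y
        pre = (1 - top_down_weight) * rb + top_down_weight * rt
        j = int(np.argmax(pre))                             # single winner for simplicity
        n = ages[j]
        w_old = max((n - 1 - amnesic) / n, 0.0)             # amnesic averaging weights
        w_new = 1.0 - w_old
        wb[j] = w_old * wb[j] + w_new * pre[j] * x          # winner adapts both its
        wt[j] = w_old * wt[j] + w_new * pre[j] * y          # bottom-up and top-down weights
        ages[j] += 1
    return wb, wt

# usage: two Gaussian classes; after learning, each neuron's top-down weights
# indicate which class it has grouped with
rng = np.random.default_rng(0)
x_stream = np.vstack([rng.normal(-1.0, 0.3, size=(500, 8)),
                      rng.normal(+1.0, 0.3, size=(500, 8))])
labels = np.array([0] * 500 + [1] * 500)
order = rng.permutation(1000)
wb, wt = tcg_layer(x_stream[order], labels[order], num_classes=2)
print(np.argmax(wt, axis=1))    # per-neuron class preference
```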
Lobe Component Analysis (LCA)
LCA is a biologically inspired model for a layer of neurons to develop into feature detectors. A description of the code and a brief tutorial are available here.
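As a rough illustration of the idea, here is a minimal sketch, under my own assumptions about initialization, the winner count, and the amnesic parameter, of a layer whose neurons compete for each input; the winners update their weights with a time-discounted average of response-weighted inputs, so the weight vectors gradually become feature detectors.

```python
import numpy as np

def lca_layer(inputs, num_neurons=8, top_k=1, amnesic=2.0):
    """Develop a layer of feature detectors from a stream of inputs.
    Each neuron keeps a weight vector and a firing age; the top_k most
    responsive neurons update their weights with an amnesic (time-
    discounted) average of response-weighted inputs.  Initialization
    and parameter values here are assumptions for this sketch."""
    weights = inputs[:num_neurons].astype(float).copy()    # seed neurons with the first inputs
    ages = np.ones(num_neurons)
    for x in inputs[num_neurons:]:
        x = x / (np.linalg.norm(x) + 1e-12)
        unit_w = weights / (np.linalg.norm(weights, axis=1, keepdims=True) + 1e-12)
        responses = unit_w @ x                              # pre-responses (inner products)
        winners = np.argsort(responses)[-top_k:]            # winner-take-all competition
        for j in winners:
            n = ages[j]
            w_old = max((n - 1 - amnesic) / n, 0.0)         # amnesic averaging weights
            w_new = 1.0 - w_old
            weights[j] = w_old * weights[j] + w_new * responses[j] * x
            ages[j] += 1
    return weights

# usage: learn detectors from random input vectors
rng = np.random.default_rng(1)
patches = rng.normal(size=(2000, 16))
detectors = lca_layer(patches, num_neurons=8)
print(detectors.shape)    # (8, 16)
```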
Incremental Hierarchical Discriminant Regression (IHDR)
IHDR is a general-purpose regression network using a tree structure. Read more about IHDR here. Detailed information about this classification and regression method is available here (PDF).
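The following is a highly simplified, batch sketch of tree-structured nearest-prototype regression in the spirit of IHDR; the real method grows the tree incrementally and clusters in discriminant feature subspaces at each node, both of which this sketch omits. The branching factor, leaf size, and clustering scheme below are assumptions.

```python
import numpy as np

class RegressionTreeNode:
    """One node of a coarse-to-fine regression tree (illustration only).
    Internal nodes route a query to the nearest cluster center; leaves
    answer with the output of the nearest stored sample."""
    def __init__(self, X, Y, max_leaf=20, branching=4, depth=0, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.X, self.Y = X, Y
        self.children, self.centers = [], None
        if len(X) > max_leaf and depth < 10:
            # choose cluster centers from the samples, then split by nearest center
            idx = rng.choice(len(X), size=min(branching, len(X)), replace=False)
            centers = X[idx]
            assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            kept = []
            for c in range(len(centers)):
                mask = assign == c
                if mask.any():
                    kept.append(c)
                    self.children.append(RegressionTreeNode(
                        X[mask], Y[mask], max_leaf, branching, depth + 1, rng))
            self.centers = centers[kept]

    def predict(self, x):
        if not self.children:                                # leaf: nearest stored sample
            i = np.argmin(((self.X - x) ** 2).sum(-1))
            return self.Y[i]
        c = np.argmin(((self.centers - x) ** 2).sum(-1))     # route coarse-to-fine
        return self.children[c].predict(x)

# usage: fit a noisy 2-D function and query one point
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
Y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
tree = RegressionTreeNode(X, Y)
print(tree.predict(np.array([0.2, -0.5])))
```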
Candid Covariance-free Incremental Principal Component Analysis (CCIPCA)
CCIPCA is a network that derives principal components incrementally, very fast, and without computing a covariance matrix. Read more in the paper here. A proof of convergence is here.
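For reference, here is a minimal sketch of incremental estimation of the leading components in the spirit of CCIPCA: each component estimate is updated as an amnesic average of response-weighted inputs, and each input is deflated before the next component is updated. The amnesic parameter value, the running-mean centering, and the small-n clamping are assumptions of this sketch; see the paper for the exact algorithm.

```python
import numpy as np

def ccipca(samples, k=3, amnesic=2.0):
    """Estimate the first k principal components one sample at a time,
    without forming a covariance matrix.  Parameter values, centering,
    and small-n handling are assumptions of this sketch."""
    d = samples.shape[1]
    v = np.zeros((k, d))            # current (unnormalized) component estimates
    mean = np.zeros(d)
    for n, x in enumerate(samples, start=1):
        u = x - mean                # center with the mean of the previous samples
        mean += (x - mean) / n      # then update the running mean
        for i in range(min(k, n)):
            if n == i + 1:
                v[i] = u            # initialize the i-th component with the residual
                continue
            vi_norm = np.linalg.norm(v[i]) + 1e-12
            w_old = max((n - 1 - amnesic) / n, 0.0)   # clamped for small n (simplification)
            w_new = (1 + amnesic) / n
            # amnesic average of response-weighted inputs
            v[i] = w_old * v[i] + w_new * (u @ v[i] / vi_norm) * u
            # deflate: remove the estimated component before updating the next one
            vi_unit = v[i] / (np.linalg.norm(v[i]) + 1e-12)
            u = u - (u @ vi_unit) * vi_unit
    # return unit-length component estimates
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)

# usage: estimate three components of correlated synthetic data
rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 10)) @ rng.normal(size=(10, 10))
components = ccipca(data, k=3)
print(components.shape)    # (3, 10)
```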
Download the code for the above models here.