The Developmental Approach to Natural and Machine Intelligence
for integrated vision, audition, touch, language, reasoning, robotics and the brain-mind

  Photos: SAIL on Oct. 1, 1998; SAIL on Jan. 22, 1999; SAIL on Aug. 7, 2000; Dav on Jan. 11, 2002; Dav on May 30, 2002; Dav on Aug. 24, 2003.

Last updated March 21, 2014.

Demonstrations

Video segments for demonstration of SAIL and Dav developmental robots:

Autonomous Mental Development

Traditional approaches to machine intelligence require human designers to explicitly program task-specific representations, perception and behaviors for the tasks that the machine is supposed to execute.  However, AI tasks require capabilities such as vision, speech, language, and motivation, which have proved too muddy to program effectively by hand.

What is Autonomous Mental Development?

Autonomous development is nature's approach to human intelligence. The goal of this line of research is to understand human intelligence and to advance artificial intelligence. For the former, we need to understand how the brain acquires its rich array of capabilities through autonomous development. For the latter, we aim at human-level machine performance through autonomous development.

Probably the two most striking hallmarks of autonomous mental development (AMD) are:

  1. task nonspecificity,
  2. skull-closed learning throughout the brain's lifetime.

By "task nonspecificity", we mean that the genome (i.e., the natural developmental program) is responsible for an open variety of tasks and skills that a newborn will learn through life. Those tasks (including the task environments and task goals) are unknown to prior generations and unknown at birth. Thus, a developmental program must be able to regulate the development of a variety of skills for many tasks.

By "skull-closed", we mean that the teacher is not allowed to pull out subparts from the brain, define their roles and the meanings of the input ports and output ports, train them individually outside, put them back into the brain and then manually link them. During the autonomous development of a natural or an artificial brain, the teacher has access only to two ends of the brain, its sensors and its effectors, not directly to its internal representation. Therefore, a developmental program must be able to regulate the development of internal representations using information of these two ends. Of course, internal sensation and internal action (i.e., thinking) are also important for the development.

In contrast, all traditional machine learning methods, including many neural network methods, require an "open skull" approach: their machine development is not fully autonomous inside the "brain". The human teacher acts as a holistically aware central controller, at least at the linking ports of the separately trained subparts. This holistically aware central controller implants static representations into the artificial "brain", which makes the "brain" brittle, because no static representation appears sufficient for dynamic real-world environments.

This new AMD approach is motivated by human cognitive and behavioral development from infancy to adulthood. It requires a fundamentally different way of addressing the issue of machine intelligence. We have introduced a new kind of program: a developmental program.   A robot that develops its mind through a developmental program is called  a developmental robot.   

For humans, the developmental program is in the genome inside the nucleus of every cell. According to the genomic equivalence principle, dramatically demonstrated by animal cloning, the genome is identical across all cells of a human individual. It starts to run at the conception of each life and is responsible for whatever can happen throughout that life.  For machines, the developmental program starts to run at the "birth" of the robot or machine, enabling the robot to develop its mental skills (including perception, cognition, behavior and motivation) through interactions with its environment using its sensors and effectors.  For machines to truly understand the world, the environment must be the physical world, which includes the human teachers and the robot itself.

The concept of a developmental program does not mean merely making machines grow from small to big and from simple to complex. It must enable the machine to learn new tasks that the human programmer does not know about at the time of programming.  This implies that the internal representation of any task that the robot learns must emerge internally through interactions, a well-known holy grail in AI and a great mystery about the brain.

Development does not mean learning from a tabula rasa either.  The developmental program, the body with its sensors and effectors, and the intelligent environment (with humans who are smart educators!) are all important for the success of development. Innate behaviors present at birth can greatly facilitate early mental development.

The basic nature of developmental learning plays a central role in enabling a human being to incrementally scale up intelligence from the ground up. To scale up a machine's capability to understand what happens around it, the learning mechanism embedded in a developmental program must perform systematic self-organization according to what the machine sensed, what it did, the actions imposed by the human when necessary, the punishment and reward it received from humans or the environment, the novelty it predicted, and the context.  As a fundamental requirement of scaling up, the robot must develop its value system (also called the motivational system) and must gradually develop its skill of autonomous thinking.
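The self-organization described above can be caricatured as a sense-act-learn loop. The sketch below is illustrative only: the class name, the action set, and the context-keyed memory are invented for this example and are vastly simpler than any developmental program.

```python
import random

class DevelopmentalAgent:
    """Toy sense-act-learn loop: no task-specific representation is built in;
    the agent's context-to-action memory emerges from experience and reward.
    All names and structures here are illustrative, not the published models."""

    def __init__(self):
        self.memory = {}  # context -> (action, accumulated value)

    def act(self, context):
        # Exploit a remembered, positively valued action for this context;
        # otherwise explore by picking an action at random.
        if context in self.memory and self.memory[context][1] > 0:
            return self.memory[context][0]
        return random.choice(["left", "right", "forward"])

    def learn(self, context, action, reward):
        # One-instance, incremental update: each experience is used once,
        # then discarded; only the summary value is kept.
        old_value = self.memory.get(context, (action, 0.0))[1]
        self.memory[context] = (action, old_value + reward)
```

In this caricature, reward from the environment plays the role of the value system, and exploration stands in for novelty-driven behavior.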

Why autonomous mental development?

The traditional common wisdom is that artificial intelligence should be studied within a narrow scope; otherwise the complexity gets out of hand. The developmental approach instead aims to provide a broad and unified developmental framework with detailed developmental algorithms, applicable to a wide variety of perceptual capabilities (e.g., vision, audition, and touch), cognitive capabilities (e.g., situation awareness, language understanding, reasoning, planning, communication), behavioral capabilities (e.g., speaking, dancing, walking, playing music, decision making, task execution), motivational capabilities (e.g., pain avoidance, pleasure seeking, what is right and what is wrong) and the fusion of these capabilities. By the very nature of autonomous development, a developmental program does not require humans to manually manipulate the internal content of the brain, which drastically reduces the man-hour cost of system development.  This fully autonomous development inside the brain is also a great advantage for scaling up to human-level performance. From the vast amount of evidence from natural intelligence, I predict that the singularity (machines more intelligent than humans) will never happen in a general sense without autonomous mental development.

Some recent evidence in neuroscience has suggested that the developmental mechanisms in our brain are probably very similar across different sensing modalities and across different cortical areas.  This is good news, since it means that the task of designing a developmental program is probably more tractable than traditional task-specific programming.

Although a developmental program is by no means simple, the new developmental approach does not require human programmers to understand the domain of tasks or to predict them. This approach therefore not only reduces the programming burden and increases adaptability to real-world environments, but also enables machines to develop capabilities or skills that the programmer does not have or that are too muddy for the programmer to understand adequately. Thus, in principle, a developmental machine is capable of being creative on subjects of high value.

Eight requirements for practical AMD

A developmental robot that is capable of practical  autonomous mental development (AMD) must deal with the following eight requirements:

  1. Environmental openness:  Due to the task-nonspecificity, AMD must deal with unknown and uncontrolled environments, including various human environments.
  2. High-dimensional sensors: The dimension of a sensor is the number of scalar values per unit time.  AMD must directly deal with continuous raw signals from high-dimensional sensors (e.g., vision, audition and taction).  
  3. Completeness in using sensory information:  Due to the environmental openness and task nonspecificity, it is not desirable for a developmental program to discard, at the program design stage, sensory information that may be useful for some future, unknown tasks. Of course, the task-specific representation autonomously derived after birth does discard information that is not useful for a particular task.
  4. Online processing:  At each time instant, what the machine will sense next depends on what the machine does now.
  5. Real-time speed:  The sensory/memory refreshing rate must be high enough that each physical event (e.g., motion and speech) can be temporally sampled and processed in real time (e.g., about 15 Hz for vision).   This speed must be maintained even when a full (very large but finite) physical "machine brain size" is used.  The machine must also handle one-instance learning: learning from a single instance of experience.
  6. Incremental processing:  Acquired skills must be used to assist in the acquisition of new skills, as a form of ``scaffolding.''  This requires incremental processing.  Thus, batch processing is not practical for AMD.  Each new observation must be used to update the current complex representation and the raw sensory data must be discarded after it is used for updating.
  7. Perform while learning:  Conventional machines perform after they are built.  An AMD machine must perform while it "builds" itself "mentally."
  8. Scale up to large memory: For large perceptual and cognitive tasks, an AMD machine must handle multimodal contexts, large long-term memory and generalization, and capabilities for increasing maturity, all in real time speed. 
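Requirements 4-6 above can be illustrated with an incremental, amnesic average: each sample updates the estimate exactly once and can then be discarded. The sketch below is in the spirit of the amnesic mean used in this line of work (e.g., in CCIPCA, referenced below); the small-n guard and the parameter l = 2.0 are illustrative choices, not the exact published formula.

```python
def amnesic_mean(mean, x_new, n, l=2.0):
    """One-instance incremental average of vectors (lists of floats).
    n is the sample count including x_new. The 'amnesic' parameter l
    weights recent samples slightly more; while n is small, a plain
    running average is used. A sketch under assumed notation."""
    if n <= l + 1:
        w_old, w_new = (n - 1) / n, 1 / n          # plain incremental mean
    else:
        w_old, w_new = (n - 1 - l) / n, (1 + l) / n  # amnesic weighting
    return [w_old * m + w_new * x for m, x in zip(mean, x_new)]
```

Because only the running estimate is stored, memory does not grow with experience, and each update costs constant time per dimension, which is what makes real-time, online operation feasible.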

History of Our Developmental Models

2005 - present: the Cortex-like MILN and the brain-like Developmental Networks with embodiments WWN-1 through WWN-8:
 
- Brain-like development: the brain is not just a signal processor but concurrently the developer of that signal processor; the brain wires itself through its own activities, modulated by the genome (a developmental program).
- Brain-like architecture, modeling pathways, laminar 6-layer cortex, and brain areas
- General-purpose model for developing brain areas: LCA, the first feature model that is dually optimal in both space and time
- How the brain deals with space: top-down attention, bottom-up attention, recognition, invariance, specificity, complex backgrounds
- How the brain deals with time: time warping, time duration, temporal attention, arbitrary temporal length, arbitrary temporal subsets to attend
- How the brain deals with modulation: the serotonin system (e.g., punishment and stress), dopamine system (e.g., pleasure), acetylcholine system (e.g., uncertainty), and norepinephrine system (e.g., unexpected uncertainty)
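As a loose illustration of an LCA-like layer, the sketch below runs one winner-take-all Hebbian step: the best-matching neuron moves toward the normalized input with a learning rate that decays with that neuron's age. This is not the published LCA algorithm (which is dually optimal and includes lateral mechanisms); the function names and the rate schedule are invented for this example.

```python
import math

def normalize(v):
    # Scale a vector to unit length (leave the zero vector unchanged).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def lca_step(neurons, ages, x, lr_base=0.1):
    """One winner-take-all Hebbian step of an LCA-like layer.
    neurons: list of unit-length weight vectors; ages: per-neuron
    update counts. The winner moves toward the input; its learning
    rate decays with age, an amnesic-style schedule."""
    x = normalize(x)
    sims = [sum(w * xi for w, xi in zip(n, x)) for n in neurons]
    win = sims.index(max(sims))                 # winner-take-all
    lr = lr_base / (1 + ages[win])              # age-dependent rate
    neurons[win] = normalize([(1 - lr) * w + lr * xi
                              for w, xi in zip(neurons[win], x)])
    ages[win] += 1
    return win
```

Each neuron thus becomes a feature detector for the region of input space it wins, with no labels and no external teacher touching the layer's internals.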
 
1998 - 2010: SAIL and Dav robots (see references below):
- Architecture: SASE (Self-Aware and Self-Effecting), with early sensory processing by SHM and later processing by IHDR
- SASE appears to be the first model that raised internal sensation and internal action
- SHM uses unsupervised incremental feature development. It appears to be the first model that incrementally develops a complete set of hierarchical receptive fields
- IHDR (Incremental Hierarchical Discriminant Regression): an incrementally growing, incremental mixture of Gaussians. It appears to be the first model that used continuous motor signals for mixture-of-Gaussians modeling, going beyond the restriction to discrete class labels in LDA.
1994 - 2000: SHOSLIF.
- It is a dynamically and incrementally growing tree using discrete labels.
- The nodes are clusters based on hierarchical LDA, which requires discrete class labels.
1991 (IJCNN 1992) - 1997 (IJCV): Cresceptron.
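The IHDR idea above can be caricatured as incremental clustering: each cluster keeps a running mean of both the sensory input x and the motor signal y, so regression becomes a nearest-cluster lookup. In the sketch below, the flat cluster list, the distance threshold tau, and the function names are all invented; the published IHDR is a hierarchical discriminant tree, not a flat list.

```python
def ihdr_like_update(clusters, x, y, tau=1.0):
    """Incrementally assign (x, y) to the nearest cluster in input space,
    updating that cluster's running means of x and y; spawn a new cluster
    when nothing is close enough. Each cluster is a dict with keys
    'mx' (input mean), 'my' (motor mean), and 'n' (count)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    if clusters:
        best = min(clusters, key=lambda c: dist2(c["mx"], x))
        if dist2(best["mx"], x) <= tau:
            best["n"] += 1
            a = 1.0 / best["n"]   # plain incremental-mean weight
            best["mx"] = [(1 - a) * m + a * xi for m, xi in zip(best["mx"], x)]
            best["my"] = [(1 - a) * m + a * yi for m, yi in zip(best["my"], y)]
            return best
    clusters.append({"mx": list(x), "my": list(y), "n": 1})
    return clusters[-1]

def ihdr_like_predict(clusters, x):
    # Regression output: the motor mean of the nearest cluster.
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(clusters, key=lambda c: dist2(c["mx"], x))["my"]
```

Because the motor signal is continuous, the clusters play the role that discrete class labels play in LDA, which is the restriction IHDR was designed to move beyond.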

Miscellaneous

How long can a developmental robot live? When the hardware of a developmental robot is worn out or broken, the developmental program with its learned "brain" can be downloaded from the old robot and uploaded to a new robot body. Therefore, unlike a biological brain, a developmental robot can live "mentally" for as long as we humans like.  It can have a very old "mental age" but a very young "body age."

This line of work is supported in part by NSF, DARPA, Microsoft Research, Siemens Corporate Research, GM, and Zyvex.

Here is a tutorial: Autonomous Mental Development for Robots, presented at ICRA 2001 and ICDL 2002.
A recent tutorial: Understanding the (5+1)-Chunk Brain-Mind Model Requires 6-Discipline Knowledge, presented at the Brain Workshop, Dec. 19-20, 2011.
For an introduction to computational study of mental development by robots and animals, read a paper that appeared in Science.
For importance and predicted impact of this new field, read a white paper from the Workshop on Development and Learning.

Software download

Some Publications

General philosophy: Development

J. Weng, J. McClelland, A. Pentland, O. Sporns, I. Stockman, M. Sur and E. Thelen, ``Autonomous Mental Development by Robots and Animals,'' Science, vol. 291, no. 5504, pp. 599 - 600, Jan. 26, 2001. PDF file.
J. Weng, ``The Developmental Approach to Intelligent Robots,'' in Proc. 1998 AAAI Spring Symposium Series, Integrating Robotic Research: Taking The Next Leap , Stanford University, March 23-25, 1998. PDF file.
J. Weng, ``The Living Machine Initiative,'' Technical Report MSU-CPS-96-60, Department of Computer Science, MSU, December 1996. PDF file.  The revised version appeared in J. Weng, ``Learning in Computer Vision and Beyond: Development,'' in C. W. Chen and Y. Q. Zhang (eds.),  ``Visual Communication and Image Processing , Marcel Dekker Publisher, New York, NY, 1999. PDF file.

Theory about brain-mind:

J. Weng, "Establish the Three Theorems: Brain-Like DNs Logically Reason and Optimally Generalize'', in Proc. International Conference on Brain-Mind, July 27 - 28, East Lansing, Michigan, pp. +1-8, 2013. PDF file. (This paper explains (1) how the DP self-programs the FA-equivalent logic into a DN through observations of the FA's input-output streams; (2) why the learning is immediate and error free; (3) why the DN is optimal in maximum likelihood when the DP is no longer allowed to continue to self-program; and (4) why the DN thinks optimally in maximum likelihood when it is exposed to infinitely many observations from the real physical world through the same input-output ends and the DP is allowed to continue to self-program.)
J. Weng, ``A Theory on the Completeness of the DN Logic Capability,'' In Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 35-42, 2012. PDF file. (This paper presents a theory that the apparent capability for abstraction in general, and for logic in particular, is in the eyes of the human observers. This paper presents a theory for a DN to learn any practical logic, including proposition logic, classical conditioning, instrumental conditioning, language acquisition, and autonomous planning.)
J. Weng, "Why Have We Passed `Neural Networks Do not Abstract Well'?'', Natural Intelligence: the INNS Magazine, vol. 1, no.1, pp. 13-22, 2011. PDF file. (This paper presents why a new class of neural networks DN performs abstraction as well as the common basis model of state-based symbolic models: Finite Automaton, and they are further spatiotemporally optimal.)
J. Weng, "Three Theorems: Brain-Like Networks Logically Reason and Optimally Generalize," in Proc. Int'l Joint Conference on Neural Networks, San Jose, CA, pp. +1-8, July 31 - August 5, 2011. PDF file. (Coined the new terms "symbolic network", "emergent network", "Developmental Network (DN)", "Generative DN (GDN)", and Agent Finite Automaton (AFA). Claimed, with the proofs under review by a journal, that (1) a GDN can learn any AFA incrementally, immediately and error free, (2) when such a DN is frozen for new experience it is optimal in the sense of maximal likelihood, and (3) when such a DN is allowed to continue to learning for new real-world experience it thinks optimally in the sense of maximal likelihood.)
J. Weng, "A 5-Chunk Developmental Brain-Mind Network Model for Multiple Events in Complex Backgrounds,''  International Joint Conference on Neural Networks, July 18-23, Barcelona, Spain, pp. 1-8, 2010. PDF file. (The first brain-mind model in the 5-chunk scale: development, architecture, area, space and time.) DOI: 10.1109/IJCNN.2010.5596740
J. Weng and W. S. Hwang, "From Neural Networks to the Brain: Autonomous Mental Development''  IEEE Computational Intelligence Magazine, vol. 1, no. 3, pp. 15-31, August 2006. PDF file.
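The flavor of the theorems above, a learner that acquires a finite automaton's transitions immediately and error-free from its input-output stream, can be loosely illustrated with a toy lookup memory. This is not the GDN construction; the function names are invented, and a real DN is emergent rather than a symbolic table.

```python
def learn_fa(stream):
    """Observe (state, symbol, next_state) triples, each exactly once, and
    store the transition: learning is one-shot and error-free by
    construction. A toy stand-in for GDN learning of an FA."""
    table = {}
    for s, a, s2 in stream:
        table[(s, a)] = s2
    return table

def run_fa(table, start, inputs):
    # Replay the learned transitions on a fresh input sequence.
    s = start
    for a in inputs:
        s = table[(s, a)]
    return s
```

The point of the theorems is that an emergent network can match this immediate, exact behavior while also generalizing optimally to inputs it never observed, which a bare lookup table cannot do.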

Task muddiness and intelligence metrics:

J. Weng, "Task Muddiness, Intelligence Metrics, and the Necessity of Autonomous Mental Development," Minds and Machines, vol. 19, pp. 93-115, 2009. PDF file.
J. Weng, "Muddy Tasks and the Necessity of Autonomous Mental Development," in Proc. 2005 AAAI Spring Symposium Series, Developmental Robotics Symposium, Stanford University, March 21-23, 2005. PDF file.

Representation theory:

J. Weng, "Symbolic Models and Emergent Models: A Review," IEEE Transactions on Autonomous Mental Development, vol. 4, no. 1, pp. 29-53, 2012. PDF file. [This paper gives new definitions about symbolic models (e.g., FA, HMM, POMDP, Bayesian nets, graphical models) and emergent models (e.g., brain networks and some neural nets) and explains why a symbolic model is theoretically proved to be brittle, not just experimentally brittle.]
J. Weng and M. Luciw, "Brain-Like Emergent Spatial Processing," IEEE Transactions on Autonomous Mental Development, vol. 4, no. 2, pp. 161-185, 2012. PDF file. (This seems to be the first general-purpose spatial processing model of the brain, simplified of course, as it is the first to address general-purpose top-down attention for cluttered scenes without a fixed set of concepts such as type, location, and scale.)
J. Weng, M. D. Luciw and Q. Zhang, ``Brain-Like Emergent Temporal Processing: Emergent Open States,'' IEEE transactions on Autonomous Mental Development, vol. 5, no. 2, pp. 89 - 116, 2013. PDF file. (Inspired by brain anatomy, this computational brain model integrated brain's spatial processing and temporal processing where the effector port of the brain amounts to emergent open states that compress spatial attention and temporal attention.)

Feature development: LCA area as a general-purpose "building block" of brain networks:

J. Weng and M. Luciw, "Dually Optimal Neuronal Layers: Lobe Component Analysis," IEEE Transactions on Autonomous Mental Development, vol. 1, no. 1, pp. 68-85, 2009. PDF file. (This is the archival version of LCA: each level of the brain network takes two sources of input --- ascending and descending --- and is optimal in minimizing the error of representing these two sources of input using its limited neuronal resource and limited training experience.)
J. Weng and N. Zhang, ``In-Place Learning and the Lobe Component Analysis,''  in Proc. IEEE World Congress on Computational Intelligence, International Joint Conference on Neural Networks, Vancouver, BC, Canada, pp. 1-8, July 16-21, 2006. PDF file. Matlab program. Program package with test sets.
J. Weng, Y. Zhang and W. Hwang, ``Candid Covariance-free Incremental Principal Component Analysis,'' IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 1034-1040, 2003. PDF file.

Spatial processing and spatial attention (top-down and bottom-up):

Y. Wang, X. Wu and J. Weng, ``Brain-Like Learning Directly from Dynamic Cluttered Natural Video,'' in Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 51-58, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Acetylcholine (Ach) system and norepinephrine (NE) system. The synapse maintenance leads to nearly 100% recognition rate in disjoint tests. Visualization of Ach maps (standard deviation of synapse match) and synaptogenic factors (NE).]
Y. Wang, X. Wu and J. Weng, ``Skull-Closed Autonomous Development: WWN-6 Using Natural Video,'' in Proc. Int'l Joint Conference on Neural Networks, June 10-15, Brisbane, Australia, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Three types of modes: Type-based (object search), location-based (object recognition) and free viewing (where-and-what).]
J. Weng and M. Luciw, "Brain-Like Emergent Spatial Processing," IEEE Transactions on Autonomous Mental Development, vol. 4, no. 2, pp. 161-185, 2012. PDF file. (This seems to be the first general-purpose spatial processing model of the brain, simplified of course, as it is the first to address general-purpose top-down attention for cluttered scenes without a fixed set of concepts such as type, location, and scale.)
N. Wagle and J. Weng, "Using Dually Optimal LCA Features in Sensory and Action Spaces for Classification", Technical Report MSU-CSE-12-6 ,35 pages, June 2012 .(DN method is applied to publicly available datasets and compared with well-known major techniques. For the datasets used, the performance of the DN method is better or comparable to some global template based methods and comparable to some major local template based methods while the DNs also provide statistics-based location information about the object in a cluttered scene.)
Y. Wang, X. Wu and J. Weng, "Synapse Maintenance in the Where-What Network," in Proc. Int'l Joint Conference on Neural Networks, San Jose, CA, pp. 2823-2829, July 31 - August 5, 2011. PDF file. (This seems to be the first computational model of cell-centered automatic synapse maintenance, by which each neuron can segment an object along its natural contour to remove the background from its sensory receptive field.)
Z. Ji, M. Luciw, J. Weng, and S. Zeng, "Incremental Online Object Learning in a Vehicular Radar-Vision Fusion Framework," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 2, pp. 402-411, June 2011. PDF file. (Brain-inspired deep learning using both bottom-up and top-down connections; why using top-down connections results in discriminative features.)
M. Luciw and J. Weng, "Where What Network 3: Developmental Top-Down Attention with Multiple Meaningful Foregrounds" in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4233-4240, July 18-23, 2010. PDF file. (Extension to multiple objects in complex backgrounds, type-based top-down attention, location-based top-down Attention, and homeostatic top-down attention.)
Z. Ji and J. Weng, "WWN-2: A Biologically Inspired Neural Network for Concurrent Visual Attention and Recognition,'' in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4247-4254, July 18-23, 2010. PDF file. (A single object in complex background, further extends to the free-viewing mode, all pixel locations.)
J. Weng, "A Theory of Architecture for Spatial Abstraction," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, pp. +1-8, June 4-7, 2009. PDF file. (Spatial theory: why top-down projections improve the firing sensitivity to relevant components of bottom-up inputs and make the firing less sensitive to irrelevant components.)
M. Luciw and J. Weng, "Laterally connected lobe component analysis: Precision and topography," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, June 4-7, 2009. PDF file. (Top-down connections enable clusters for the same class to group in the internal representation.)
M. Luciw, J. Weng, S. Zeng, "Motor Initiated Expectation through Top-Down Connections as Abstract Context in a Physical World," in Proc. 7th IEEE International Conference on Development and Learning, Monterey, CA, pp. 115-120, Aug. 9-12, 2008. PDF file. (Top-down connections enable classifications for disjoint tests to become almost perfect.)
Z. Ji, J. Weng, and D. Prokhorov, ``Where-What Network 1: Where and What Assist Each Other Through Top-down Connections''  in Proc. 7th International Conference on Development and Learning (ICDL'08), Monterey, CA, pp. 61-66, Aug. 9-12, 2008. PDF file. (From location to type, from type to location, single object, and natural background.)

Temporal processing and temporal attention (top-down and bottom-up):

J. Weng, M. D. Luciw and Q. Zhang, ``Brain-Like Emergent Temporal Processing: Emergent Open States,'' IEEE transactions on Autonomous Mental Development, vol. 5, no. 2, pp. 89 - 116, 2013. PDF file. (Inspired by brain anatomy, this computational brain model integrated brain's spatial processing and temporal processing where the effector port of the brain amounts to emergent open states that compress spatial attention and temporal attention.)
K. Miyan and J. Weng, ``WWN-Text: Cortex-Like Language Acquisition with ’What’ and ’Where’,'' in Proc. IEEE 9th International Conference on Development and Learning, Ann Arbor, pp. 280-285, August 18-21, 2010. PDF file. (Text as perception; early language learning and early language generalization.)
J. Weng and M. Luciw, Online Learning for Attention, Recognition, and Tracking by a Single Developmental Framework, in Proc. 23rd IEEE Conference on Computer Vision and Pattern Recognition, 4th IEEE Online Learning for Computer Vision, pp +1-8, June 13, 2010. PDF file. (Temporal tracking as a special case of type-based attention in general-purpose object recognition from complex backgrounds.)
J. Weng, "A General Purpose Brain Model For Developmental Robots: The Spatial Brain for Any Temporal Lengths," in Proc. IEEE International Conference on Robotics and Automation, Workshop on Bio-Inspired Self-Organizing Robotic Systems, Anchorage, Alaska, pp.+1- 6, May 3-8, 2010. PDF file. (Temporal theory.)
J. Weng, Q. Zhang, M. Chi, and X. Xue, "Complex Text Processing by the Temporal Context Machines," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, pp. +1-8, June 4-7, 2009. PDF file. (Temporal text.)
M. Solgi and J. Weng, ``Developmental Stereo: Emergence of Disparity Preference in Models of Visual Cortex,'' IEEE Transactions on Autonomous Mental Development, vol. 1, no. 4, pp. 238-252, 2009. PDF file. (Temporal stereo without explicit stereo matching, with analysis.)
M. Solgi and J. Weng, "Temporal information as top-down context in binocular disparity detection," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, pp. +1-7, June 4-7, 2009. PDF file. (Temporal stereo without explicit stereo matching.)
M. Luciw, J. Weng, S. Zeng, "Motor Initiated Expectation through Top-Down Connections as Abstract Context in a Physical World," IEEE International Conference on Development and Learning, Monterey, CA, pp. 115-120, Aug. 9-12, 2008. PDF file. (Temporal vision.)

Brain or mental architecture:

J. Weng, `` On Developmental Mental Architectures,''  Neurocomputing,  vol. 70, no. 13-15, pp. 2303-2323, 2007.  PDF file.
J. Weng, ``A Theory of Developmental Architecture,''  in Proc. 3rd International Conference on Development and Learning (ICDL 2004),  La Jolla, CA, pp.1- 6, Oct. 20-23, 2004.  PDF file.

Where-What Networks (WWN-1 to WWN-7):

X. Wu, G. Guo, and J. Weng, "Skull-closed Autonomous Development: WWN-7 Dealing with Scales'', in Proc. International Conference on Brain-Mind, July 27 - 28, East Lansing, Michigan, pp. +1-9, 2013. PDF file. (This paper shows how the DP self-programs abstract concepts into a DN directly from physics or training data, i.e., concepts invariant to other concepts. For example, the type concept is invariant to location and scale; the location concept is invariant to type and scale; the scale concept is invariant to location and type.)
Y. Wang, X. Wu and J. Weng, ``Brain-Like Learning Directly from Dynamic Cluttered Natural Video,'' in Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 51-58, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Acetylcholine (Ach) system and norepinephrine (NE) system. The synapse maintenance leads to nearly 100% recognition rate in disjoint tests. Visualization of Ach maps (standard deviation of synapse match) and synaptogenic factors (NE).] Also Technical Report MSU-CSE-12-5.
Y. Wang, X. Wu and J. Weng, ``Skull-Closed Autonomous Development: WWN-6 Using Natural Video,'' in Proc. Int'l Joint Conference on Neural Networks, June 10-15, Brisbane, Australia, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Three types of modes: Type-based (object search), location-based (object recognition) and free viewing (where-and-what).]
X. Song, W. Zhang, J. Weng. "Where-What Networks 5: Dealing with Scales for Objects in Complex Backgrounds," in Proc. 2011 International Joint Conference on Neural Networks, San Jose, California, USA, pp. 2795-2802, July 31 - August 5, 2011. PDF file. (This version deals with objects with multiple scales in complex backgrounds.)
M. Luciw and J. Weng, "Where What Network 4: The Effect of Multiple Internal Areas," in Proc. IEEE 9th International Conference on Development and Learning, Ann Arbor, pp. 311-316, August 18-21, 2010. PDF file. (This seems to be the first example showing that deep learning as a cascade of areas between the sensory port and the motor port does not perform as well as shallow learning --- multiple areas each having connections with both ports.)
M. Luciw and J. Weng, "Where What Network 3: Developmental Top-Down Attention with Multiple Meaningful Foregrounds" in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4233-4240, July 18-23, 2010. PDF file. (Extension to multiple objects in complex backgrounds, type-based top-down attention, location-based top-down Attention, and homeostatic top-down attention.)
Z. Ji and J. Weng, "WWN-2: A Biologically Inspired Neural Network for Concurrent Visual Attention and Recognition,'' in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4247-4254, July 18-23, 2010. PDF file. (A single object in complex background, further extends to the free-viewing mode, all pixel locations.)
Z. Ji, J. Weng, and D. Prokhorov, ``Where-What Network 1: Where and What Assist Each Other Through Top-down Connections''  in Proc. 7th International Conference on Development and Learning (ICDL'08), Monterey, CA, Aug. 9-12, pp. 1-6, 2008. PDF file. (From location to type, from type to location, single object, and natural background.)

Motor (high-dimensional motor space using reinforcement learning):

H. Ye, X. Huang, and J. Weng, ``Inconsistent Training for Developmental Networks and the Applications in Game Agents,'' In Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 43-59, 2012. PDF file. (Teaching is inconsistent in a game setting.)
A. Joshi and J. Weng, "Autonomous mental development in high dimensional context and action spaces,'' Neural Networks, vol. 16, no. 5-6, pp. 701-710, 2003. PDF file. (Learning to make sounds using reinforcement learning.)

Modulatory system (includes motivational system, reinforcement learning):

J. Weng, S. Paslaski, J. Daly, C. VanDam and J. Brown, "Modulation for Emergent Networks: Serotonin and Dopamine,'' Neural Networks, vol. 41, pp. 225-239, 2013. PDF file. [This is the journal version of our models of two modulatory systems, serotonin and dopamine, based on the DN brain model.]
Y. Wang, X. Wu and J. Weng, ``Brain-Like Learning Directly from Dynamic Cluttered Natural Video,'' in Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 51-58, 2012. PDF file. [Fully closed skull (no Y supervision); everything is moving. This work modeled two other neuromodulatory systems: the acetylcholine (ACh) system and the norepinephrine (NE) system. Synapse maintenance leads to a nearly 100% recognition rate in disjoint tests. Includes visualization of ACh maps (standard deviation of synapse match) and synaptogenic factors (NE).]
J. Daly, J. Brown and J. Weng, "Neuromorphic Motivated Systems," in Proc. Int'l Joint Conference on Neural Networks, San Jose, CA, pp. 2917-2914, July 31 - August 5, 2011. PDF file. (This appears to be the first neuromorphic model of fully emergent serotonin and dopamine systems, with navigation as experiments. Prior reinforcement learning systems were either symbolic or not fully neuromorphic.)
S. Paslaski, C. VanDam and J. Weng, "Modeling Dopamine and Serotonin Systems in a Visual Recognition Network," in Proc. Int'l Joint Conference on Neural Networks, San Jose, CA, pp. 3016-3023, July 31 - August 5, 2011. PDF file. (Models fully emergent serotonin and dopamine systems, with visual recognition as experiments.)
X. Huang and J. Weng, ``Inherent Value Systems for Autonomous Mental Development,'' International Journal of Humanoid Robotics, vol. 4, no. 2, pp. 407-433, 2007.   PDF file.
X. Huang and J. Weng, "Novelty and Reinforcement Learning in the Value System of Developmental Robots,"  in Proc. Second International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, Edinburgh, Scotland, August 10 - 11, 2002. PDF file.

Skill transfer and autonomous thinking:

M. Solgi, T. Liu, and J. Weng, ``A Computational Developmental Model for Specificity and Transfer in Perceptual Learning,'' Journal of Vision, vol. 13, no. 1, article 7, pp. 1-23, 2013. PDF file. [Although this paper is on transfer in perceptual learning only, the mechanisms explained amount to autonomous thinking.]
Y. Zhang and J. Weng, ``Task Transfer by a Developmental Robot,'' IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, pp. 226-248, 2007. PDF file.
Y. Zhang and J. Weng, ``Action Chaining by a Developmental Robot with a Value System,''  in Proc. 2nd International Conference on Development and Learning, June 12 - 15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PDF file.

Theory about developmental robotics:

J. Weng, "Developmental Robotics: Theory and Experiments," International Journal of Humanoid Robotics, vol. 1, no. 2, 2004. PDF file.
J. Weng, "A Theory for Mentally Developing Robots,''  in Proc. 2nd International Conference on Development and Learning, June 12 - 15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PS file or PDF file.

Overview of related projects at the EI Lab:

J. Weng and Y. Zhang, ``Developmental Robots: A New Paradigm,''  an invited paper in Proc. Second International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, Edinburgh, Scotland, August 10 - 11, 2002.  PDF file.

MILN: Cortex inspired sensorimotor pathway:

J. Weng, H. Lu, T. Luwang and X. Xue, ``Multilayer In-place Learning Networks for Modeling Functional Layers in the Laminar Cortex,'' Neural Networks, vol. 21, no. 2-3, pp. 150-159, 2008. PDF file.
J. Weng, H. Lu, T. Luwang and X. Xue, ``A Multilayer In-Place Learning Network for Development of General Invariances,''  International Journal of Humanoid Robotics, vol. 4, no. 2, pp. 281-320, 2007. PDF file.
J. Weng, H. Lu, T. Luwang and X. Xue, ``In-Place Learning for Positional and Scale Invariance,''  in Proc. IEEE World Congress on Computational Intelligence , International Joint Conference on Neural Networks, Vancouver, BC, Canada, July 16-21, pp. 1-10, 2006. PDF file.
J. Weng and M. D. Luciw, ``Optimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topology,'' in Proc. 5th International Conference on Development and Learning (ICDL'06), Bloomington, IN, USA, May 31 - June 3, pp. 1-7, 2006. PDF file.

Sensory mapping engine (early processing for local feature extraction with receptive fields):

N. Zhang, J. Weng and Z. Zhang, ``A Developing Sensory Mapping for Robots,''  in Proc. 2nd International Conference on Development and Learning, June 12 - 15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PDF file.
J. Weng and M. D. Luciw, ``Optimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topography,''  in Proc. 5th International Conference on Development and Learning, May 30 - June 3, Bloomington, IN, 2006. PDF file.

Cognitive mapping engine (later processing for high-dimensional regression):

J. Weng and W. Hwang, "Incremental Hierarchical Discriminant Regression,"  IEEE Transactions on Neural Networks,  vol. 18, no. 2, pp. 397-415, 2007.   PDF file.
W. Hwang and J. Weng, "Hierarchical Discriminant Regression," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1277-1293, November 2000. (Batch HDR.) PDF file.
J. Weng and W. Hwang, "An incremental learning algorithm with automatically derived discriminating features," in Proc. Asian Conference on Computer Vision, Taipei, Taiwan, pp. 426-431, Jan. 8-9, 2000. (Incremental HDR.) PDF file.
J. Weng and W. Hwang, "Online Image Classification Using IHDR,"  International Journal on Document Analysis and Recognition,  vol. 5, no. 2-3, pp. 118-125, 2003.  PDF file.

In situ co-acquisition of audition and speech recognition behavior:

Y. Zhang, J. Weng and W. Hwang, "Auditory Learning: A Developmental Method," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 601-616, 2005. PDF file.

In situ co-acquisition of vision, early spoken language and behavior:

Y. Zhang and J. Weng, "Conjunctive Visual and Auditory Development via Real-Time Dialogue,''  in Proc. 3rd International Workshop on Epigenetic Robotics, Boston, MA, pp. 974 - 980, August 4-5, 2003.  PDF file.

Application of Developmental Vision to Part Inspection in Manufacturing:

G. Abramovich, J. Weng and D. Dutta, "Adaptive Part Inspection through Developmental Vision,"  Journal of Manufacturing Science and Engineering, vol. 127, no. 4, pp. 846-856, Nov. 2005. PDF file.

Multimodal co-development:

Y. Zhang and J. Weng, "Spatiotemporal Multimodal Developmental Learning," IEEE Transactions on Autonomous Mental Development, vol. 2, no. 3, pp. 149-166, 2010. PDF file. [Vision and audition co-develop with motor actions as inputs for cross-modality concept representations.]
J. Weng, Y. Zhang and Y. Chen, "Developing Early Senses about the World: `Object Permanence' and Visuoauditory Real-time Learning,'' in Proc. International Joint Conf. on Neural Networks, Portland, OR, pp. 2710 - 2715, July 20-24, 2003.   PDF file.

Dav:

S. Zeng and J. Weng, "Online-learning and attention-based approach to obstacle avoidance using a range finder," Journal of Intelligent and Robotic Systems, vol. 43, no. 2, June 2005. PDF file.
S. Zeng and J. Weng, ``Obstacle Avoidance through Incremental Learning with Attention Selection,''  in Proc. IEEE Int'l Conf. on Robotics and Automation, New Orleans, Louisiana, pp. 115-121, April 26 - May 1, 2004. PDF file.
J. D. Han, S. W. Zeng, K. Y. Tham, M. Badgero and J. Weng, ``Dav: A Humanoid Robot Platform for Autonomous Mental Development,''  in Proc. 2nd International Conference on Development and Learning, June 12 - 15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PDF file.

SAIL-2:

Y. Zhang and J. Weng, ``Grounded Auditory Development by a Developmental Robot,''  in Proc. INNS/IEEE International Joint Conference of Neural Networks 2001 (IJCNN 2001), Washington DC, pp. 1059-1064,  July 14-19, 2001. PDF file.
J. Weng, W. S. Hwang, Y. Zhang, C. Yang and R. Smith, ``Developmental Humanoids: Humanoids that Develop Skills Automatically,'' in Proc. First IEEE-RAS International Conference on Humanoid Robots, MIT, Cambridge, MA, Sept. 7-8, 2000. PDF file.
J. Weng, C. H. Evans and W. S. Hwang, ``An Incremental Learning Method for Face Recognition under Continuous Video Stream,'' in Proc. Fourth International Conference on Automatic Face and Gesture Recognition, Grenoble, France. March 28 - 30, 2000. PDF file.
J. Weng, W. S. Hwang, Y. Zhang and C. Evans, ``Developmental robots: Theory, Method and Experimental Results,'' in Proc. 2nd Int'l Symposium on Humanoid Robots, Tokyo, Japan, pp. 57-64, Oct. 8-9, 1999. PDF file.

SAIL-1:

J. Weng, C. Evans, W. S. Hwang, and Y. B. Lee, ``The Developmental Approach to Artificial Intelligence: Concepts, Developmental Algorithms and Experimental Results,'' in Proc. NSF Design & Manufacturing Grantees Conference, Queen Mary, Long Beach, CA, Jan. 5-8, 1999. Also Technical Report MSU-CPS-98-25, July 1998. PDF file.
J. Weng, Y. B. Lee and C. H. Evans, ``The developmental approach to multimedia speech learning,'' in Proc. IEEE Int'l Conf. on Acoustics, Speech, and Signal Processing, Phoenix, Arizona, vol. 6, pp. 3093-3096, March 15-19, 1999. PDF file.
Back To Weng's Home Page: http://www.cse.msu.edu/~weng/