Video segments demonstrating the SAIL and Dav developmental robots:
SAIL on Oct. 1, 1998 | SAIL on Jan. 22, 1999 | SAIL on Aug. 7, 2000 | Dav on Jan. 11, 2002 | Dav on May 30, 2002 | Dav on Aug. 24, 2003
Last updated May 13, 2021.
Autonomous development is nature's approach to human intelligence. The goal of this line of research is twofold: to understand human intelligence and to advance artificial intelligence. For the former, we need to understand how the brain acquires its rich array of capabilities through autonomous development. For the latter, we aim at human-level machine performance through autonomous development.
Probably the two most striking hallmarks of autonomous mental development (AMD) are:
By "task nonspecificity", we mean that the genome (i.e., the natural developmental program) is responsible for an open variety of tasks and skills that a newborn will learn throughout life. Those tasks (including the task environments and task goals) are unknown to prior generations before birth. Thus a developmental program must be able to regulate the development of a variety of skills for many tasks.
By "skull-closed", we mean that the teacher is not allowed to pull subparts out of the brain, define their roles and the meanings of their input and output ports, train them individually outside, put them back into the brain, and then manually link them. During the autonomous development of a natural or an artificial brain, the teacher has access only to the two ends of the brain, its sensors and its effectors, not directly to its internal representation. Therefore, a developmental program must be able to regulate the development of internal representations using information from these two ends alone. Of course, internal sensation and internal action (i.e., thinking) are also important for development.
In contrast, all traditional machine learning methods, including many neural network methods, require an "open skull" approach; that is, their machine development is not fully autonomous inside the "brain". The holistically aware central controller, at least at the linking ports of the separately trained subparts, is the human teacher. This controller implants static representations into the artificial "brain", which makes the "brain" brittle, because no static representation appears sufficient for dynamic real-world environments.
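The skull-closed constraint can be made concrete with a toy sketch. All names and the learning rule below are illustrative assumptions of mine, not taken from any of the papers on this page: the teacher may only present sensory vectors at the input end and, when teaching, impose effector vectors at the output end; the internal representation stays private and self-organizes.

```python
import numpy as np

class DevelopmentalAgent:
    """Toy "skull-closed" learner: the teacher touches only the two ends,
    the sensory vector x (input port) and the effector vector z (output port).
    The internal representation is private and self-organizes; nobody assigns
    its meaning from outside. This is a hypothetical sketch, not a model
    from the cited papers."""

    def __init__(self, sensor_dim, effector_dim, n_neurons=20, seed=0):
        rng = np.random.default_rng(seed)
        # Private internal state (inside the "skull"):
        self._wx = rng.random((n_neurons, sensor_dim))    # sensor -> internal
        self._wz = rng.random((n_neurons, effector_dim))  # internal -> effector
        self._ages = np.zeros(n_neurons)

    def step(self, x, z_taught=None):
        """One sense-act cycle. The teacher may impose an action z_taught at
        the effector end (motor-imposed teaching); otherwise the agent acts
        on its own. Returns the effector output."""
        x = np.asarray(x, float)
        winner = int(np.argmax(self._wx @ x))             # winner-take-all match
        if z_taught is not None:                          # teaching at the two ends only
            lr = 1.0 / (self._ages[winner] + 1.0)         # amnesic-average style rate
            self._wx[winner] += lr * (x - self._wx[winner])
            self._wz[winner] += lr * (np.asarray(z_taught, float) - self._wz[winner])
            self._ages[winner] += 1.0
        return self._wz[winner].copy()
```

Note that the teacher's code never reads or writes `_wx` or `_wz` directly; it only supplies `x` and, optionally, `z_taught`, which is the point of the skull-closed metaphor.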
This new AMD approach is motivated by human cognitive and behavioral development from infancy to adulthood. It requires a fundamentally different way of addressing the issue of machine intelligence. We have introduced a new kind of program: a developmental program. A robot that develops its mind through a developmental program is called a developmental robot.
For humans, the developmental program is in the genome inside the nucleus of every cell. According to the genomic equivalence principle, dramatically demonstrated by animal cloning, the genome is identical across all cells of a human individual. It starts to run at the time of conception of each life and is responsible for whatever can happen throughout the entire life. For machines, the developmental program starts to run at the "birth" time of the robot or machine; it enables the robot to develop its mental skills (including perception, cognition, behaviors, and motivation) through interactions with its environment using its sensors and effectors. For machines to truly understand the world, the environment must be the physical world, which includes the human teachers and the robot itself.
The concept of a developmental program does not mean merely making machines grow from small to big and from simple to complex. It must enable the machine to learn new tasks that a human programmer does not know about at the time of programming. This implies that the internal representation of any task that the robot learns must emerge internally through interactions, a well-known holy grail in AI and a great mystery about the brain.
Development does not mean learning from a tabula rasa either. The developmental program, the body with its sensors and effectors, and the intelligent environment (with humans who are smart educators!) are all important for the success of development. Innate behaviors, such as those present at birth, can greatly facilitate early mental development.
The basic nature of developmental learning plays a central role in enabling a human being to incrementally scale up his or her level of intelligence from the ground up. In order to scale up a machine's capability to understand what happens around it, the learning mechanism embedded in a developmental program must perform systematic self-organization according to what it sensed, what it did, the actions imposed by the human when necessary, the punishments and rewards it received from humans or the environment, the novelty it predicted, and the context. As a fundamental requirement of scaling up, the robot must develop its value system (also called the motivational system) and must gradually develop its skills of autonomous thinking.
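One cycle of such self-organization can be sketched in a few lines. This is a hedged toy, not an algorithm from the papers below: the function name, the memory layout, and the novelty and learning rates are illustrative assumptions. It combines (a) what the learner sensed, (b) the action it chose or the teacher imposed, (c) external reward or punishment, and (d) a novelty signal that acts as internal reward.

```python
import numpy as np

def developmental_step(state, memory, reward=0.0, imposed_action=None, rng=None):
    """One toy sense-learn-act cycle. `memory` maps state -> (value, visit_count);
    novelty (inverse visit count) is added to the external reward so that
    less-visited states are intrinsically attractive. Purely illustrative."""
    rng = rng or np.random.default_rng(0)
    value, count = memory.get(state, (0.0, 0))
    novelty = 1.0 / (count + 1)                 # less-visited states are more novel
    target = reward + novelty                   # external plus intrinsic reward
    lr = 1.0 / (count + 1)                      # amnesic-average learning rate
    memory[state] = (value + lr * (target - value), count + 1)
    # Teacher-imposed action takes precedence; otherwise act autonomously.
    action = imposed_action if imposed_action is not None else int(rng.integers(0, 2))
    return action, memory[state][0]
```

Repeated visits shrink the novelty bonus, so the stored value converges toward the external reward, a crude stand-in for how habituation and reinforcement jointly shape a value system.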
Traditional common wisdom holds that artificial intelligence should be studied within a narrow scope; otherwise the complexity gets out of hand. The developmental approach instead aims to provide a broad and unified developmental framework, with detailed developmental algorithms, that is applicable to a wide variety of perceptual capabilities (e.g., vision, audition, and touch), cognitive capabilities (e.g., situational awareness, language understanding, reasoning, planning, communication), behavioral capabilities (e.g., speaking, dancing, walking, playing music, decision making, task execution), motivational capabilities (e.g., pain avoidance, pleasure seeking, what is right and what is wrong), and the fusion of these capabilities. By the very nature of autonomous development, a developmental program does not require humans to manually manipulate the internal content of the brain, which drastically reduces the man-hour cost of system development. Fully autonomous development inside the brain is also a great advantage for scaling up to human-level performance. From the vast amount of evidence from natural intelligence, I predict that the singularity (machines becoming more intelligent than humans) will never happen in a general sense without autonomous mental development.
Recent evidence in neuroscience suggests that the developmental mechanisms in the brain are probably very similar across different sensing modalities and across different cortical areas. This is good news, since it means that the task of designing a developmental program is probably more tractable than traditional task-specific programming.
Although a developmental program is by no means simple, the developmental approach does not require human programmers to understand the domain of tasks or to predict them. Therefore, this approach not only reduces the programming burden and increases adaptability to real-world environments, but also enables machines to develop capabilities or skills that the programmer does not have or that are too muddy for the programmer to understand adequately. In principle, then, a developmental machine is capable of being creative on subjects of high value.
A developmental robot that is capable of practical autonomous mental development (AMD) must deal with the following eight requirements:
How long can a developmental robot live? When the hardware of a developmental robot is worn out or broken, the developmental program with its learned "brain" can be downloaded from the robot and uploaded to a new robot body. Therefore, unlike a biological brain, a developmental robot can live "mentally" for as long as we humans like. It can have a very old "mental age" but a very young "body age."
This line of work is supported in part by NSF, DARPA, Microsoft Research, Siemens Corporate Research, GM, and Zyvex.
Here is a tutorial: Autonomous Mental Development for Robots, presented at ICRA 2001 and ICDL 2002.
A more recent tutorial: Understanding the (5+1)-Chunk Brain-Mind Model Requires 6-Discipline Knowledge, presented at the Brain-Mind Workshop, Dec. 19-20, 2011.
For an introduction to computational study of mental development by robots and animals, read a paper that appeared in Science.
For importance and predicted impact of this new field, read a white paper from the Workshop on Development and Learning.
Monograph and Textbook
Juyang Weng. Natural and Artificial Intelligence: Introduction to Computational Brain-Mind, BMI Press, 1st edition, 2012; 2nd edition, 2019.
J. Weng, "A Protocol for Testing Conscious Learning Robots," in Proc. International Joint Conference on Neural Networks, pp. 1-8, IEEE Press, Queensland, Australia, June 18 - June 23, 2023.
PDF file.
J. Weng, "20 Million-Dollar Problems for Any Brain Models and a Holistic Solution: Conscious Learning", in Proc. International Joint Conference on Neural Networks, pp. 1-9, Padua, Italy, July 18-23, 2022.
PDF file.
J. Weng, A Developmental Network Model of Conscious Learning in Biological Brains, Research Square, pp. 1-33, June 7, 2022. PDF file
J. Weng, "An Algorithmic Theory of Conscious Learning", in Proc. 2022 3rd International Conf. on Artificial Intelligence in Electronics Engineering, pp. 1-10, Bangkok, Thailand, Jan. 21-23, 2022.
PDF file.
J. Weng, "3D-to-2D-to-3D Conscious Learning", in Proc. IEEE 40th International Conference on Consumer Electronics, pp. 1-6, Las Vegas, NV, USA, Jan. 7-9, 2022.
PDF file.
X. Wu and J. Weng, "On Machine Thinking", in Proc. International Joint Conference on Neural Networks, pp. 1-8, Shenzhen, China, July 18-22, 2021.
PDF file.
J. Weng, "Conscious Intelligence Requires Developmental Autonomous Programming For General Purposes," in Proc. IEEE International Conference on Development and Learning and Epigenetic Robotics, pp. 1-7, Valparaiso, Chile, Oct. 26-27, 2020. PDF file. arXiv PDF file.
J. Weng, "Autonomous Programming for General Purposes: Theory," International Journal of Humanoid Robotics, vol. 17, no. 14, pp. 1-36, September 14, 2020. PDF file.
J. Weng, "A Model for Auto-Programming for General Purposes," arXiv:1810.05764, June 13, 2017. PDF file.
J. Weng, J. McClelland, A. Pentland, O. Sporns, I. Stockman, M. Sur and E. Thelen, "Autonomous Mental Development by Robots and Animals," Science, vol. 291, no. 5504, pp. 599 - 600, Jan. 26, 2001. PDF file.
J. Weng, "The Developmental Approach to Intelligent Robots," in Proc. 1998 AAAI Spring Symposium Series, Integrating Robotic Research: Taking The Next Leap, Stanford University, March 23-25, 1998. PDF file.
J. Weng, "The Living Machine Initiative," Technical Report MSU-CPS-96-60, Department of Computer Science, MSU, December 1996. PDF file. The revised version appeared as J. Weng, "Learning in Computer Vision and Beyond: Development," in C. W. Chen and Y. Q. Zhang (eds.), Visual Communication and Image Processing, Marcel Dekker Publisher, New York, NY, 1999. PDF file.
J. Weng and M. Luciw, "Dually Optimal Neuronal Layers: Lobe Component Analysis," IEEE Transactions on Autonomous Mental Development, vol. 1, no. 1, pp. 68-85, 2009. PDF file. (This is the archival version of LCA: each level of the brain network takes two sources of input --- ascending and descending --- and is optimal in minimizing the error of representing these two sources of input using its limited neuronal resource and limited training experience.)
J. Weng and N. Zhang, "In-Place Learning and the Lobe Component Analysis," in Proc. IEEE World Congress on Computational Intelligence, International Joint Conference on Neural Networks, Vancouver, BC, Canada, pp. 1-8, July 16-21, 2006. PDF file. Matlab program. Program package with test sets.
J. Weng, Y. Zhang and W. Hwang, "Candid Covariance-free Incremental Principal Component Analysis," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 8, pp. 1034-1040, 2003. PDF file.
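The CCIPCA entry above lends itself to a short sketch: eigenvectors are estimated incrementally, one sample at a time, without ever forming the covariance matrix. The Python rendition below is simplified and hedged (the amnesic parameter of the published algorithm is omitted, and the variable names are mine).

```python
import numpy as np

def ccipca(samples, k=2):
    """Simplified sketch of Candid Covariance-free Incremental PCA:
    v[i] is an unnormalized estimate of the i-th eigenvector, updated as a
    running average of x (x^T v / ||v||); lower components are estimated
    from the residual after deflating the higher ones."""
    v = [None] * k
    for n, x in enumerate(samples, start=1):
        u = np.asarray(x, float)
        for i in range(min(k, n)):
            if v[i] is None:
                v[i] = u.copy()                # initialize with the first residual
            else:
                vi = v[i]
                v[i] = ((n - 1) / n) * vi + (1 / n) * u * (u @ vi) / np.linalg.norm(vi)
            # Deflate: remove the component along v[i] before estimating v[i+1].
            di = v[i] / np.linalg.norm(v[i])
            u = u - (u @ di) * di
    return np.array([vi / np.linalg.norm(vi) for vi in v])
```

On strongly anisotropic data the first estimate aligns with the dominant eigenvector within a few hundred samples, while memory stays O(kd) rather than O(d^2).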
Y. Wang, X. Wu and J. Weng, "Brain-Like Learning Directly from Dynamic Cluttered Natural Video," in Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 51-58, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Acetylcholine (ACh) system and norepinephrine (NE) system. The synapse maintenance leads to a nearly 100% recognition rate in disjoint tests. Visualization of ACh maps (standard deviation of synapse match) and synaptogenic factors (NE).]
Y. Wang, X. Wu and J. Weng, "Skull-Closed Autonomous Development: WWN-6 Using Natural Video," in Proc. Int'l Joint Conference on Neural Networks, June 10-15, Brisbane, Australia, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Three types of modes: type-based (object search), location-based (object recognition), and free viewing (where-and-what).]
J. Weng and M. Luciw, "Brain-Like Emergent Spatial Processing," IEEE Transactions on Autonomous Mental Development, vol. 4, no. 2, pp. 161-185, 2012. PDF file. (This seems to be the first general-purpose spatial processing model of the brain, simplified of course, as it is the first to address general-purpose top-down attention for cluttered scenes without a fixed set of concepts, such as type, location, scale, etc.)
N. Wagle and J. Weng, "Developing Dually Optimal LCA Features in Sensory and Action Spaces for Classification," in Proc. 2nd Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, San Diego, CA, pp. 1-8, Nov. 7-9, 2012. PDF file. (The DN method is applied to publicly available datasets and compared with well-known major techniques. For the datasets used, the performance of the DN method is better than or comparable to some global template based methods and comparable to some major local template based methods, while the DNs also provide statistics-based location information about the object in a cluttered scene.)
Y. Wang, X. Wu and J. Weng, "Synapse Maintenance in the Where-What Network," in Proc. Int'l Joint Conference on Neural Networks, San Jose, CA, pp. 2823-2829, July 31 - August 5, 2011. PDF file. (This seems to be the first computational model of cell-centered automatic synapse maintenance, so that each neuron can segment an object along its natural contour to exclude the background from its sensory receptive field.)
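The synapse-maintenance idea in the entry above can be illustrated with a hedged toy: each synapse tracks how stably its input matches its weight, and chronically mismatching synapses (typically those watching a changing background) are cut, so the receptive field shrinks to the stable foreground. The class name, the deviation statistic, and the cutting threshold below are illustrative assumptions, not the paper's actual rules.

```python
import numpy as np

class MaintainedNeuron:
    """Toy cell-centered synapse maintenance. Weight learning is omitted;
    only the per-synapse mismatch statistic and the cut/keep mask are modeled."""

    def __init__(self, weight):
        self.w = np.asarray(weight, float)
        self.dev = np.zeros_like(self.w)     # running mean of |x_i - w_i| per synapse
        self.mask = np.ones_like(self.w)     # 1 = kept synapse, 0 = cut
        self.n = 0

    def update(self, x, cut_factor=1.5):
        x = np.asarray(x, float)
        self.n += 1
        lr = 1.0 / self.n                    # running-average rate
        self.dev += lr * (np.abs(x - self.w) - self.dev)
        if self.n > 10:                      # wait until the statistic stabilizes
            # Cut synapses whose mismatch is well above the neuron's average.
            self.mask = (self.dev <= cut_factor * self.dev.mean()).astype(float)

    def response(self, x):
        wm = self.w * self.mask              # only maintained synapses contribute
        return float(wm @ np.asarray(x, float))
```

After enough frames in which some input dimensions track the weight while others fluctuate, the fluctuating (background) synapses are masked out and no longer affect the response.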
Z. Ji, M. Luciw, J. Weng, and S. Zeng, "Incremental Online Object Learning in a Vehicular Radar-Vision Fusion Framework," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 2, pp. 402-411, June 2011. PDF file. (Brain-inspired deep learning using both bottom-up and top-down connections; why top-down connections result in discriminative features.)
M. Luciw and J. Weng, "Where What Network 3: Developmental Top-Down Attention with Multiple Meaningful Foregrounds," in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4233-4240, July 18-23, 2010. PDF file. (Extension to multiple objects in complex backgrounds; type-based, location-based, and homeostatic top-down attention.)
Z. Ji and J. Weng, "WWN-2: A Biologically Inspired Neural Network for Concurrent Visual Attention and Recognition," in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4247-4254, July 18-23, 2010. PDF file. (A single object in a complex background; further extends to the free-viewing mode, all pixel locations.)
J. Weng, "A Theory of Architecture for Spatial Abstraction," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, pp. 1-8, June 4-7, 2009. PDF file. (Spatial theory: why top-down projections improve the firing sensitivity for relevant components of bottom-up inputs and make the firing less sensitive to irrelevant components.)
M. Luciw and J. Weng, "Laterally connected lobe component analysis: Precision and topography," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, June 4-7, 2009. PDF file. (Top-down connections enable clusters for the same class to group in the internal representation.)
M. Luciw, J. Weng, S. Zeng, "Motor Initiated Expectation through Top-Down Connections as Abstract Context in a Physical World," in Proc. 7th IEEE International Conference on Development and Learning, Monterey, CA, pp. 115-120, Aug. 9-12, 2008. PDF file. (Top-down connections enable classifications for disjoint tests to become almost perfect.)
Z. Ji, J. Weng, and D. Prokhorov, "Where-What Network 1: Where and What Assist Each Other Through Top-down Connections," in Proc. 7th International Conference on Development and Learning (ICDL'08), Monterey, CA, pp. 61-66, Aug. 9-12, 2008. PDF file. (From location to type, from type to location, single object, and natural background.)
J. Weng, M. D. Luciw and Q. Zhang, "Brain-Like Emergent Temporal Processing: Emergent Open States," IEEE Transactions on Autonomous Mental Development, vol. 5, no. 2, pp. 89-116, 2013. PDF file. (Inspired by brain anatomy, this computational brain model integrates the brain's spatial and temporal processing, where the effector port of the brain amounts to emergent open states that compress spatial attention and temporal attention.)
J. Weng and M. Luciw, "Online Learning for Attention, Recognition, and Tracking by a Single Developmental Framework," in Proc. 23rd IEEE Conference on Computer Vision and Pattern Recognition, 4th IEEE Workshop on Online Learning for Computer Vision, pp. 1-8, June 13, 2010. PDF file. (Temporal tracking as a special case of type-based attention in general-purpose object recognition from complex backgrounds.)
J. Weng, "A General Purpose Brain Model For Developmental Robots: The Spatial Brain for Any Temporal Lengths," in Proc. IEEE International Conference on Robotics and Automation, Workshop on Bio-Inspired Self-Organizing Robotic Systems, Anchorage, Alaska, pp. 1-6, May 3-8, 2010. PDF file. (Temporal theory.)
J. Weng, Q. Zhang, M. Chi, and X. Xue, "Complex Text Processing by the Temporal Context Machines," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, pp. 1-8, June 4-7, 2009. PDF file. (Temporal text.)
Multi-sensory development
J. A. Knoll, J. Honer, S. Church, and J. Weng, "Optimal Developmental Learning for Multisensory and Multi-Teaching Modalities," in Proc. IEEE International Conference on Development and Learning, Beijing, China, pp. 1-6, Aug. 23-26, 2021. PDF file. (Development for left-sensory and right-sensory modalities using the GENISAMA VCML-100 left-right synchronized camera 3Deye for real-time on-the-fly learning, with outputs having both image disparity and object types.)
J. A. Knoll, V.-N. Hoang, J. Honer, S. Church, T.-H. Tran, and J. Weng, "Fast Developmental Stereo-Disparity Detectors," in Proc. IEEE International Conference on Development and Learning and Epigenetic Robotics, Valparaiso, Chile, pp. 1-6, Oct. 26-27, 2020. PDF file. (Development for left-sensory and right-sensory modalities using the GENISAMA VCML-100 left-right synchronized camera 3Deye for real-time on-the-fly learning, with outputs as image disparities.)
M. Solgi and J. Weng, "WWN-8: Incremental Online Stereo with Shape-from-X Using Life-Long Big Data from Multiple Modalities," in Proc. INNS Conference on Big Data, San Francisco, pp. 316-326, August 8-10, 2015. PDF file. (Development for left-sensory and right-sensory modalities with emergent concepts of location, shape, and type without explicit stereo matching.)
M. Solgi and J. Weng, "Developmental Stereo: Emergence of Disparity Preference in Models of Visual Cortex," IEEE Transactions on Autonomous Mental Development, vol. 1, no. 4, pp. 238-252, 2009. PDF file. (Temporal stereo without explicit stereo matching, with analysis.)
M. Solgi and J. Weng, "Temporal information as top-down context in binocular disparity detection," in Proc. IEEE 8th International Conference on Development and Learning, Shanghai, China, pp. 1-7, June 4-7, 2009. PDF file. (Temporal stereo without explicit stereo matching.)
M. Luciw, J. Weng, S. Zeng, "Motor Initiated Expectation through Top-Down Connections as Abstract Context in a Physical World," IEEE International Conference on Development and Learning, Monterey, CA, pp. 115-120, Aug. 9-12, 2008. PDF file. (Temporal vision.)
J. Weng and M. Luciw, "Brain-Inspired Concept Networks: Learning Concepts from Cluttered Scenes," IEEE Intelligent Systems Magazine, vol. 29, no. 6, pp. 14-22, 2014. PDF file. (This is an overview of Where-What Networks with 4 modes: (1) free viewing, (2) type context for object detection, (3) location context for object recognition, and (4) homeostasis for autonomous switching.)
M. Solgi and J. Weng, "WWN-8: Incremental Online Stereo with Shape-from-X Using Life-Long Big Data from Multiple Modalities," in Proc. INNS Conference on Big Data, San Francisco, pp. 316-326, August 8-10, 2015. PDF file. (Development for left-sensory and right-sensory modalities with emergent concepts of location, shape, and type without explicit stereo matching.)
X. Wu, G. Guo, and J. Weng, "Skull-Closed Autonomous Development: WWN-7 Dealing with Scales," in Proc. International Conference on Brain-Mind, July 27-28, East Lansing, Michigan, pp. 1-9, 2013. PDF file. (This paper shows how the DP self-programs concepts that are abstract into a DN directly from physics or training data, i.e., invariant to other concepts. For example, the type concept is invariant to location and scale; the location concept is invariant to type and scale; the scale concept is invariant to location and type.)
Y. Wang, X. Wu and J. Weng, "Brain-Like Learning Directly from Dynamic Cluttered Natural Video," in Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 51-58, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Acetylcholine (ACh) system and norepinephrine (NE) system. The synapse maintenance leads to a nearly 100% recognition rate in disjoint tests. Visualization of ACh maps (standard deviation of synapse match) and synaptogenic factors (NE).] Also Technical Report MSU-CSE-12-5.
Y. Wang, X. Wu and J. Weng, "Skull-Closed Autonomous Development: WWN-6 Using Natural Video," in Proc. Int'l Joint Conference on Neural Networks, June 10-15, Brisbane, Australia, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. Three types of modes: type-based (object search), location-based (object recognition), and free viewing (where-and-what).]
X. Song, W. Zhang, J. Weng. "Where-What Networks 5: Dealing with Scales for Objects in Complex Backgrounds," in Proc. 2011 International Joint Conference on Neural Networks, San Jose, California, USA, pp. 2795-2802, July 31 - August 5, 2011. PDF file. (This version deals with objects with multiple scales in complex backgrounds.)
M. Luciw and J. Weng, "Where What Network 4: The Effect of Multiple Internal Areas," in Proc. IEEE 9th International Conference on Development and Learning, Ann Arbor, pp. 311-316, August 18-21, 2010. PDF file. (This seems to be the first example showing that deep learning as a cascade of areas between the sensory port and the motor port does not perform as well as shallow learning --- multiple areas each having connections with both ports.)
M. Luciw and J. Weng, "Where What Network 3: Developmental Top-Down Attention with Multiple Meaningful Foregrounds," in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4233-4240, July 18-23, 2010. PDF file. (Extension to multiple objects in complex backgrounds; type-based, location-based, and homeostatic top-down attention.)
Z. Ji and J. Weng, "WWN-2: A Biologically Inspired Neural Network for Concurrent Visual Attention and Recognition," in Proc. International Joint Conference on Neural Networks, Barcelona, Spain, pp. 4247-4254, July 18-23, 2010. PDF file. (A single object in a complex background; further extends to the free-viewing mode, all pixel locations.)
Z. Ji, J. Weng, and D. Prokhorov, "Where-What Network 1: Where and What Assist Each Other Through Top-down Connections," in Proc. 7th International Conference on Development and Learning (ICDL'08), Monterey, CA, Aug. 9-12, pp. 1-6, 2008. PDF file. (From location to type, from type to location, single object, and natural background.)
Z. Zheng, X. He, and J. Weng, "Approaching Camera-Based Real-World Navigation Using Object Recognition," in Proc. INNS Conference on Big Data, San Francisco, pp. 428-436, August 8-10, 2015. PDF file. (Learning to navigate through a network of campus walkways using a monocular video camera for local behaviors and GPS with Google Maps for path planning.)
H. Ye, X. Huang, and J. Weng, "Inconsistent Training for Developmental Networks and the Applications in Game Agents," in Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 43-59, 2012. PDF file. (Teaching is inconsistent in a game setting.)
A. Joshi and J. Weng, "Autonomous mental development in high dimensional context and action spaces,'' Neural Networks, vol. 16, no. 5-6, pp. 701-710, 2003. PDF file. (Learning to make sounds using reinforcement learning.)
J. Weng, S. Paslaski, J. Daly, C. VanDam and J. Brown, "Modulation for Emergent Networks: Serotonin and Dopamine," Neural Networks, vol. 41, pp. 225-239, 2013. PDF file. [This is a journal version of our models of two modulatory systems, serotonin and dopamine, based on the DN brain model.]
Y. Wang, X. Wu and J. Weng, "Brain-Like Learning Directly from Dynamic Cluttered Natural Video," in Proc. International Conference on Brain-Mind, July 14-15, East Lansing, Michigan, USA, pp. 51-58, 2012. PDF file. [Fully closed skull (no Y supervision), everything is moving. This work modeled two other neuromodulatory systems: the acetylcholine (ACh) system and the norepinephrine (NE) system. The synapse maintenance leads to a nearly 100% recognition rate in disjoint tests. Visualization of ACh maps (standard deviation of synapse match) and synaptogenic factors (NE).]
J. Daly, J. Brown and J. Weng, "Neuromorphic Motivated Systems," in Proc. Int'l Joint Conference on Neural Networks, San Jose, CA, pp. 2917-2914, July 31 - August 5, 2011. PDF file. (This seems to be the first neuromorphic model for fully emergent serotonin and dopamine systems, with navigation as experiments. Prior reinforcement learning systems are either symbolic or not fully neuromorphic.)
S. Paslaski, C. VanDam and J. Weng, "Modeling Dopamine and Serotonin Systems in a Visual Recognition Network," in Proc. Int'l Joint Conference on Neural Networks, San Jose, CA, pp. 3016-3023, July 31 - August 5, 2011. PDF file. (This seems to be the first neuromorphic model for fully emergent serotonin and dopamine systems, with visual recognition as experiments. Prior reinforcement learning systems are either symbolic or not fully neuromorphic.)
X. Huang and J. Weng, ``Inherent Value Systems for Autonomous Mental Development,'' International Journal of Humanoid Robotics, vol. 4, no. 2, pp. 407-433, 2007. PDF file.
X. Huang and J. Weng, "Novelty and Reinforcement Learning in the Value System of Developmental Robots," in Proc. Second International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, Edinburgh, Scotland, August 10 - 11, 2002. PDF file.
M. Solgi, T. Liu, and J. Weng, "A Computational Developmental Model for Specificity and Transfer in Perceptual Learning," Journal of Vision, vol. 13, no. 1, ar. 7, pp. 1-23, 2013. PDF file. [Although this paper is on transfer in perceptual learning only, the mechanisms explained amount to autonomous thinking.]
Y. Zhang and J. Weng, "Task Transfer by a Developmental Robot," IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, pp. 226-248, 2007. PDF file.
Y. Zhang and J. Weng, "Action Chaining by a Developmental Robot with a Value System," in Proc. 2nd International Conference on Development and Learning, June 12-15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PDF file.
J. Weng, "Developmental Robotics: Theory and Experiments," International Journal of Humanoid Robotics, vol. 1, no. 2, 2004. PDF file.
J. Weng, "A Theory for Mentally Developing Robots," in Proc. 2nd International Conference on Development and Learning, June 12-15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PS file or PDF file.
J. Weng and Y. Zhang, "Developmental Robots: A New Paradigm," an invited paper in Proc. Second International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, Edinburgh, Scotland, August 10-11, 2002. PDF file.
J. Weng, H. Lu, T. Luwang and X. Xue, "Multilayer In-place Learning Networks for Modeling Functional Layers in the Laminar Cortex," Neural Networks, vol. 21, no. 2-3, pp. 150-159, 2008. PDF file.
J. Weng, H. Lu, T. Luwang and X. Xue, "A Multilayer In-Place Learning Network for Development of General Invariances," International Journal of Humanoid Robotics, vol. 4, no. 2, pp. 281-320, 2007. PDF file.
J. Weng, H. Lu, T. Luwang and X. Xue, "In-Place Learning for Positional and Scale Invariance," in Proc. IEEE World Congress on Computational Intelligence, International Joint Conference on Neural Networks, Vancouver, BC, Canada, July 16-21, pp. 1-10, 2006. PDF file.
J. Weng and M. D. Luciw, "Optimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topography," in Proc. 5th International Conference on Development and Learning (ICDL'06), Bloomington, IN, USA, May 31 - June 3, pp. 1-7, 2006. PDF file.
N. Zhang, J. Weng and Z. Zhang, "A Developing Sensory Mapping for Robots," in Proc. 2nd International Conference on Development and Learning, June 12-15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PDF file.
Y. Zhang, J. Weng and W. Hwang, "Auditory Learning: A Developmental Method," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 601-616, 2005. PDF file.
Y. Zhang and J. Weng, "Conjunctive Visual and Auditory Development via Real-Time Dialogue,'' in Proc. 3rd International Workshop on Epigenetic Robotics, Boston, MA, pp. 974 - 980, August 4-5, 2003. PDF file.
G. Abramovich, J. Weng and D. Dutta, "Adaptive Part Inspection through Developmental Vision," Journal of Manufacturing Science and Engineering, vol. 127, no. 4, pp. 846-856, Nov. 2005. PDF file.
S. Zeng and J. Weng, "Online-learning and attention-based approach to obstacle avoidance using a range finder," Journal of Intelligent and Robotic Systems, vol. 43, no. 2, June 2005. PDF file.
S. Zeng and J. Weng, "Obstacle Avoidance through Incremental Learning with Attention Selection," in Proc. IEEE Int'l Conf. on Robotics and Automation, New Orleans, Louisiana, pp. 115-121, April 26 - May 1, 2004. PDF file.
J. D. Han, S. W. Zeng, K. Y. Tham, M. Badgero and J. Weng, "Dav: A Humanoid Robot Platform for Autonomous Mental Development," in Proc. 2nd International Conference on Development and Learning, June 12-15, MIT, Cambridge, MA, IEEE Computer Society Press, 2002. PDF file.