Zhengping Ji

Office: 190A Mellon Institute, Carnegie Mellon University
Lab Phone: (517) 432-9474
Home Phone: (517) 775-1459
Email: jizhengp@andrew.cmu.edu

 
ABOUT ME 

I recently graduated from the Computer Science & Engineering Department at Michigan State University, where I was a research assistant at the Embodied Intelligence Lab under Prof. Juyang (John) Weng. I am now a postdoctoral research associate at the Center for the Neural Basis of Cognition, Carnegie Mellon University, working on the RealNose project supported by DARPA.

PREVIOUS RESEARCH WORK

Where-What Network

Developmental networks, called Where-What Networks (WWN) [8][9][10][11][12][13][14], were designed for a general sensorimotor pathway, so that recognition and attention interact with each other within a single network. The cortex-inspired neuromorphic architecture modeled three types of attention: feature-based bottom-up attention, location-based top-down attention, and object-based top-down attention. The in-place learning algorithm developed the network's internal representation (including the bottom-up and top-down synaptic weights of every neuron), such that each neuron learned its own signal processing characteristics within its connected network environment, through interactions with other neurons in the same layer.
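As an illustration only (not the published WWN algorithm), the in-place, competition-driven weight update described above can be sketched roughly as follows: each neuron keeps its own weight vector and firing age, only the top-responding neurons update, and the learning rate decays with each neuron's own age. All function names, the top-k rule, and the age-based rate here are simplifying assumptions.

```python
import numpy as np

def in_place_update(weights, x, ages, top_k=1):
    """One illustrative in-place learning step for a layer of neurons.

    Each neuron owns its weight vector and firing age; only the top-k
    responders (lateral competition) update, with a learning rate that
    decays with the neuron's own age. A simplified sketch, not the
    exact WWN/LCA update.
    """
    x = x / (np.linalg.norm(x) + 1e-12)       # normalize the input
    responses = weights @ x                    # bottom-up pre-responses
    winners = np.argsort(responses)[-top_k:]   # winners of competition
    for i in winners:
        ages[i] += 1
        lr = 1.0 / ages[i]                     # age-dependent rate
        # Hebbian-style update: presynaptic input weighted by response
        weights[i] = (1 - lr) * weights[i] + lr * responses[i] * x
        weights[i] /= np.linalg.norm(weights[i]) + 1e-12
    return responses, winners

# Toy usage: 5 neurons, 8-dimensional inputs, 100 random samples.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))
W /= np.linalg.norm(W, axis=1, keepdims=True)
ages = np.zeros(5)
for _ in range(100):
    in_place_update(W, rng.normal(size=8), ages)
```

The key property the sketch preserves is locality: each neuron's update uses only its own weights, its own age, and signals available at its connections, with no global objective function.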

 

Object Classification by the Sensor Fusion of Radar and Vision Systems

A low-cost, real-time object recognition system was developed in a sensor fusion framework [3][6][7][15][16]. A powerful vision algorithm handles the wide variation of targets provided by the radar system. The objects of interest fall into three categories: (1) vehicles (e.g., cars and trucks); (2) pedestrians; (3) other objects with radar returns (e.g., light poles, guide rails, signs, bridges, garbage cans, and road barriers). The module clusters multiple radar tracks into a single object and facilitates threat assessment algorithms by providing object category labels. It fits into an embedded system and holds promise for industrial applications.
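The track-clustering step described above can be sketched with a toy example: nearby radar tracks are merged into one object hypothesis, and a vision classifier (stubbed out here) would label the image region each clustered track points to. The gating threshold, function names, and greedy merging rule are illustrative assumptions, not the published method.

```python
import numpy as np

def cluster_tracks(tracks, gate=2.0):
    """Greedily merge radar tracks that fall within `gate` meters of an
    existing cluster's centroid; each cluster becomes one object
    hypothesis. The gate value is an illustrative assumption."""
    clusters = []
    for t in tracks:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - t) < gate:
                c.append(t)
                break
        else:
            clusters.append([t])
    return [np.mean(c, axis=0) for c in clusters]

def classify_patch(centroid):
    """Stub for the vision classifier: a real system would crop the
    image region projected from the radar centroid and return
    'vehicle', 'pedestrian', or 'other'."""
    return "other"

# Two tracks half a meter apart (one vehicle) plus one distant return.
tracks = np.array([[10.0, 1.0], [10.5, 1.2], [30.0, -2.0]])
objects = [(c, classify_patch(c)) for c in cluster_tracks(tracks)]
```

The design point is that radar supplies cheap, reliable range hypotheses while vision, run only on the clustered regions, supplies the category label, which keeps the per-frame cost low enough for an embedded system.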

 

Autonomous Mental Developmental Learning

This research aims to advance artificial intelligence using what we call the developmental approach [2][4]. The approach is motivated by human cognitive and behavioral development from infancy to adulthood, and it requires a fundamentally different way of addressing machine intelligence. Developmental programs, such as SHM, IHDR (e.g., [17]), and MILN, were applied to enable a robot to develop its mind through the learning process.

 
Autonomous Outdoor Navigation

Developed an online-learning and attention-based approach to outdoor navigation [1][5][18] for the 2005 DARPA Grand Challenge. Several sensors were used to perceive the local environment during driving, combining vision, LADAR, ultrasonic, and contact sensors. The work was deployed with Team AVS and Team Crossland in the challenge events.



[1]  Z. Ji, X. Huang, W. Tong and J. Weng, On-line Learning of Covert and Overt Perceptual Capability for Vision-based Navigation, ICDL, 2006.

[2]  M. Luciw and Z. Ji, Building Blocks of Development, IEEE Computational Intelligence Magazine, vol. 1, no. 3, pp. 5-9, 2006. 

[3]  Z. Ji, M. Luciw, J. Weng and S. Zeng, A Biologically Motivated Developmental System for Perceptual Awareness in Vehicle-based Robots, EpiRob, 2007. 

[4]  H. Zhao, Z. Ji and J. Weng, Developmental Learning for Avoiding Dynamic Obstacles Using Attention, ICDL, 2007.  

[5]  Z. Ji, X. Huang and J. Weng, Learning of Sensorimotor Behaviors by a SASE Agent for Vision-based Navigation, IJCNN, 2008.

[6]  Z. Ji, M. Luciw and J. Weng, Epigenetic Sensorimotor Pathways and Its Application to Developmental Object Learning, CEC, 2008.

[7]  Z. Ji and D. Prokhorov, Radar-camera Fusion for Object Classification, FUSION, 2008.

[8]  Z. Ji, J. Weng and D. Prokhorov, Where-What Network 1: "Where" and "What" Assist Each Other through Top-down Connections, ICDL, 2008.

[9]  Z. Ji, Cortex-inspired Developmental Learning for Vision-based Navigation, Attention and Recognition, PhD Dissertation, Computer Science Department, Michigan State University, 2009.

[10] Z. Ji, J. Weng, and D. Prokhorov, Autonomous Mental Development for Neuromorphic Robots, Neuromorphic and Brain-based Robots: Trends and Perspectives, J. Krichmar and H. Wagatsuma (eds), 2010. 

[11] Z. Ji and J. Weng, WWN-2: A Biologically Inspired Neural Network for Concurrent Visual Attention and Recognition, WCCI, 2010. (To appear)

[12] M. Luciw, Developmental Learning Networks for Visual Attention and Object Recognition, PhD Proposal, Computer Science Department, Michigan State University, 2009.

[13] M. Luciw and J. Weng, Where What Network 3: Developmental Top-Down Attention with Multiple Meaningful Foreground, WCCI, 2010. (To appear)   

[14] Z. Ji and J. Weng, Cortex-inspired Computational Modeling of Concurrent Visual Attention and Recognition, the Twelfth Annual CSE Poster Workshop, Michigan State University, 2008.

[15] Z. Ji, Developmental Object Learning Using Video and Radar for a Driver Assistance System, the Twelfth Annual CSE Poster Workshop, Michigan State University, 2008.

[16] Z. Ji, Developmental Learning of High-dimensional Sensorimotor Regression and Its Application to Car Driving Assistance, Ph.D. Dissertation Proposal MSU-CSE-07-03, Michigan State University, 2007.

[17] H. Zhao, Z. Ji, M. Luciw, J. Weng, Attention-based Developmental Learning for Indoor Navigation, the Eleventh Annual CSE Poster Workshop, Michigan State University, 2007.

[18] Z. Ji, X. Huang, J. Weng, On-Road and off-Road Navigation Through Reinforcement Learning of Local Attention Windows, the Tenth Annual CSE Poster Workshop, Michigan State University, 2006.



Updated January 2010