Cresceptron Deep Learning:
Detecting and Recognizing 3D Objects from 2D Images of Cluttered Scenes and Segmenting Recognized Objects from the 2D Images

This project develops a framework, called Cresceptron, for automatically learning to recognize and segment real-world objects from their images, based on exemplars of the performance of such tasks. Cresceptron has been tested on visual recognition: recognizing general 3-D objects from 2-D electro-optical images of natural scenes and segmenting the recognized objects from their cluttered image backgrounds. Specifically, it recognizes and segments image patterns that are similar to those learned, using a stochastic distortion model and view-based interpolation that allow viewpoints moderately different from those used in learning. It incorporates both individual learning and class learning; with the former, each training example is treated as a distinct individual, and with the latter, each example is treated as a sample of a class. Several types of network structure have been developed, and their properties are analyzed in terms of knowledge recallability, positional invariance, generalization power, discrimination power, and space complexity. Experiments with a variety of real-world images demonstrate the feasibility of the Cresceptron.
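The defining property named in the 1992 reference below is a network that "grows adaptively": new feature-detecting nodes are created during learning rather than fixed in advance. The following is only an illustrative sketch of that growth idea, not the Cresceptron algorithm itself; the class name, threshold, and similarity measure (normalized correlation) are assumptions made for the example.

```python
import numpy as np


class GrowingLayer:
    """Hypothetical self-growing feature layer.

    Stores feature templates; when no stored template matches an
    input patch well enough, a new node (template) is grown. This is
    a simplification in the spirit of adaptive growth, not the actual
    Cresceptron network.
    """

    def __init__(self, match_threshold=0.9):
        self.templates = []              # each node is a flat vector
        self.match_threshold = match_threshold

    def _similarity(self, a, b):
        # Normalized correlation, tolerant of brightness offsets.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    def respond(self, patch, learn=False):
        """Return (best_node_index, best_score); grow a node if learning
        and no existing node matches above the threshold."""
        patch = np.asarray(patch, dtype=float).ravel()
        scores = [self._similarity(patch, t) for t in self.templates]
        best = int(np.argmax(scores)) if scores else -1
        best_score = scores[best] if scores else -1.0
        if learn and best_score < self.match_threshold:
            self.templates.append(patch.copy())  # grow a new node
            best, best_score = len(self.templates) - 1, 1.0
        return best, best_score
```

Presenting the same pattern again matches the stored node and causes no growth, while a sufficiently different pattern adds a node; the number of nodes thus tracks the variety of the training exemplars.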

References

J. Weng, N. Ahuja and T. S. Huang, ``Cresceptron: a self-organizing neural network which grows adaptively,'' in Proc. Int'l Joint Conference on Neural Networks, Baltimore, Maryland, vol. 1, pp. 576-581, June 1992.
J. Weng, N. Ahuja and T. S. Huang, ``Learning recognition and segmentation of 3-D objects from 2-D images,'' in Proc. 4th International Conf. Computer Vision, Berlin, Germany, pp. 121-128, May 1993.
J. Weng, N. Ahuja and T. S. Huang, ``Learning recognition and segmentation using the Cresceptron,'' Int'l Journal of Computer Vision, vol. 25, no. 2, pp. 105-139, Nov. 1997.