Michigan State University | Embodied Intelligence Lab

 





[As of October 2013, I have graduated and moved to Microsoft Corp. This page will no longer be updated.]

Mojtaba

I am currently a Ph.D. candidate in the Department of Computer Science and Engineering at Michigan State University. I am a member of the Embodied Intelligence Lab, led by Prof. Juyang (John) Weng.

My research interests are in the areas of computer and biological vision, deep learning and autonomous mobile robotics. Specifically, I am interested in the developmental learning of visual perception, including representation, attention, recognition and tracking.


Research Highlights

(Please see Publications for related published articles.)

Developmental networks for integrated recognition of type, location and shape

Developmental networks, such as Where-What Networks (WWNs), have been shown to be promising for simultaneous attention and recognition, handling variations in scale, location and type as well as inter-class variations. This project focuses on integrating shape recognition into the WWNs with minimal supervision, utilizing both monocular and binocular depth cues.

Transfer of learning to untrained conditions

How do we learn from a few examples, and yet generalize so brilliantly to previously "unseen" situations? Based on a neuromorphic model of the visual cortex, this project explains how the human brain "transfers" learning effects to novel situations. The model replicates experimental results on human subjects in perceptual learning, and it presents a neurally plausible algorithm for the transfer and specificity of learning.

Simultaneous 3D shape and pose estimation and tracking from images

While an intern at Toyota Research Institute, N.A., in summer 2012, I adapted and extended the Pixel-Wise Posteriors 3D (PWP3D) work on real-time segmentation and tracking of 3D objects for autonomous-vehicle applications. The novel part was a learning model for shape variations in a 3D shape space. A patent regarding this work is under review and will soon be filed.
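The project above does not disclose the shape-variation model (the patent is pending), but one common way to learn a low-dimensional 3D shape space, offered here purely as a generic sketch with hypothetical names, is PCA over aligned, flattened shape vectors:

```python
import numpy as np

def learn_shape_space(shapes, n_modes=2):
    """Learn a linear shape space by PCA.

    shapes: rows are aligned, flattened training shapes.
    Returns the mean shape and the top principal variation modes.
    (Generic illustration only; not the patented method.)
    """
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal shape-variation modes.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def synthesize(mean, modes, coeffs):
    """Generate a shape from low-dimensional coefficients."""
    return mean + np.asarray(coeffs) @ modes
```

A new shape can then be tracked by optimizing only the few mode coefficients rather than the full 3D geometry, which is what makes a learned shape space attractive for real-time use.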

Convolutional Deep Belief Networks

I participated in a challenge by The DiCarlo Lab at MIT to design and implement Convolutional Deep Belief Networks in Python. The GPU-based implementation (PyCuda and Cython) performed successful reconstruction of both natural and face images. Please see Software for a technical report and code.
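The building block of a Convolutional Deep Belief Network is the convolutional restricted Boltzmann machine. A minimal CPU sketch of one inference/reconstruction pass, in plain NumPy rather than the GPU implementation described above, and with all names illustrative, looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(img, kern):
    # 'valid' cross-correlation via explicit loops (fine for a sketch)
    kh, kw = kern.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def conv2d_full(img, kern):
    kh, kw = kern.shape
    padded = np.pad(img, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return conv2d_valid(padded, kern)

class ConvRBM:
    """Minimal convolutional RBM: one visible channel, K shared filters."""

    def __init__(self, K=4, ksize=5, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = 0.01 * rng.standard_normal((K, ksize, ksize))
        self.b = np.zeros(K)   # per-filter hidden biases
        self.c = 0.0           # visible bias

    def hidden_probs(self, v):
        # One hidden probability map per filter.
        return np.array([sigmoid(conv2d_valid(v, w) + bk)
                         for w, bk in zip(self.W, self.b)])

    def reconstruct(self, h):
        # Visible mean-field: 'full' convolution with flipped filters.
        v = sum(conv2d_full(hk, w[::-1, ::-1])
                for hk, w in zip(self.W[:], self.W) and zip(h, self.W))
        return sigmoid(v + self.c)
```

Image reconstruction, as in the challenge, amounts to an up-down pass: compute `hidden_probs` from an input image, then `reconstruct` the visible layer from the hidden maps.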

Lidar and camera sensory fusion for semi-supervised object learning

During my internship with Toyota Research Institute, N.A., in summer 2011, we developed calibration and software components for mapping between laser and vision sensors. Information from the laser modality, e.g., depth and segmentation, was then used to train the vision modality. We devised a learning method based on horizon-leveled image bands that successfully detected objects on the road, such as other vehicles.
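The core of any lidar-to-camera mapping of this kind is projecting 3D laser points into the image plane through the calibrated extrinsics and intrinsics. A minimal sketch of that step, with illustrative names and assuming a standard pinhole model rather than the specific calibration used in the project:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project Nx3 lidar points into pixel coordinates.

    R, t: lidar-to-camera extrinsics (rotation, translation).
    K:    3x3 pinhole camera intrinsics.
    Returns (uv, in_front): Nx2 pixel coordinates and a mask of
    points that lie in front of the camera (valid projections).
    """
    cam = points @ R.T + t          # transform into the camera frame
    in_front = cam[:, 2] > 0        # only points ahead of the camera project
    uvw = cam @ K.T                 # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]   # perspective divide
    return uv, in_front
```

Once each laser return has a pixel location, its depth and segment label can serve as a training signal for the image at that location, which is the essence of the semi-supervised cross-modal scheme described above.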

Robotic soccer simulation

While an undergraduate student in computer science, I was involved in RoboCup Soccer competitions for three years. Our team in the 3D Soccer Simulation League, named Aria, won two world championships, in 2004 and 2005.

 

 
 
Home | Contact | Publications | Software | Hobbies