Experience

  • Aug. 2015 – Present

    Graduate Research Assistant

    Computer Vision lab, Michigan State University

  • May 2017 – Aug. 2017

    Summer Research Assistant

    NEC Labs America

  • Jan. 2015 – May 2015

    Software Engineer Intern

    Meijer Inc

  • May 2014 – May 2015

    Undergraduate Research Assistant

    3D Vision lab, Michigan State University

Education & Training

  • Ph.D., 2020

    Ph.D. in Computer Science

    Michigan State University

  • B.S., 2015

    Bachelor of Science in Computer Science

    Michigan State University

    Graduated with High Honor. GPA: 4.0/4.0

Honors, Awards and Grants

  • April 2015
    Michigan State University Board of Trustees' Award
    Awarded for outstanding academic performance to graduating seniors with the highest cumulative grade-point average.
  • 2013, 2014
    First Prize in Herzog Mathematics Competition
    The Herzog competition is a long-standing tradition at MSU, held in honor of Prof. Herzog, who devoted significant effort to undergraduate education, in particular to preparing students for the Putnam exam.
    The results of this competition are also used to rank students for the Putnam exam.
  • 2013, 2014
    First Prize in Microsoft Coding Competition at MSU
  • 2011
    Odon Vallet Scholarship
    Prof. Odon Vallet is from Sorbonne University, France. He received a large inheritance and decided to give the annual interest on this fortune to outstanding young researchers, college students, and high-school students in France and Vietnam.
  • 2010
    Second Prize in Vietnamese National Mathematics Olympiad

Research Projects

  • Image Colorization

    [Class Project] Convert grayscale images into color images

    We aim to colorize gray-scale images automatically, without direct human assistance.
    Automatic colorization is an under-constrained problem, since there is usually no one-to-one correspondence between color and local texture.
    In this work, we apply a recent advance in generative models, the Generative Adversarial Network (GAN), in a conditional setting: instead of generating realistic samples from random noise, as a conventional GAN does, we generate realistic samples (color images) conditioned on both the input gray-scale image and noise. A minimal code sketch of this conditional setup is given after the project list below.
  • Disentangled Representation Learning GAN for Pose-Invariant Face Recognition

    Simultaneously learn identity representation and synthesize face images

    The large pose discrepancy between two face images is one of the key challenges in face recognition. The conventional approach to pose-robust face recognition either performs face frontalization on, or learns a pose-invariant representation from, a non-frontal face image. We argue that, it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation Learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn the identity representation for each face image, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one integrated representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both constrained and unconstrained databases demonstrate the superiority of DR-GAN over the state of the art.
  • Learning to Fuse Information with Missing Modalities

    Classify massive amounts of multi-modal data into semantically meaningful classes when some training modalities are missing

    One of the key GEOINT capabilities is the ability to automatically recognize a large array of objects from visual data. Depending on the resolution of the imagery, objects may range from specific locations or scenes, roads, buildings, and forests to vehicles, humans, etc. This is a technically challenging problem for both computer vision and machine learning due to the large variations in the appearance of these objects in the imagery. To address it, researchers have developed various fusion methods that combine information collected from multiple sensing modalities, such as RGB imagery, LiDAR point clouds, multispectral imaging, hyperspectral imaging, and GPS, to improve the reliability and accuracy of object recognition. This research direction is motivated by ever-decreasing sensor costs and, more importantly, by the complementary characteristics of the different sensing modalities. With the well-founded promise of improved object recognition performance, a great deal of data analysis research is urgently needed to fully take advantage of this massive amount of multi-modality data.

    All prior research on information fusion requires that sensor data from every modality be available for each training instance. This requirement significantly limits the applicability of information fusion methods, since missing modalities abound in practice.

    Recognizing missing modalities as a roadblock to fulfilling this key GEOINT capability, we propose to develop powerful and computationally efficient approaches that learn to fuse information from different sensors even when a significant portion of the training data has missing modalities. The ultimate goal of our project is a suite of computer vision and machine learning tools for geographical imagery analysis that aids geo-spatial analysts in analyzing and classifying geographical images.
  • Person tracking and motion analysis for medical applications

    New RGB-D sensors enable precise tracking of human motion, which can be used for medical applications such as home care.
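
The following is a minimal sketch of the conditional-GAN colorization setup referenced in the Image Colorization project above, written in PyTorch. The layer sizes, noise handling, and training details are illustrative assumptions, not the exact architecture used in the class project.

# Minimal conditional-GAN colorization sketch (PyTorch).
# All sizes and hyper-parameters here are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a grayscale image plus a noise map to an RGB image."""
    def __init__(self, noise_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + noise_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, gray, noise):
        # Condition on the grayscale input by concatenating it with the noise map.
        return self.net(torch.cat([gray, noise], dim=1))

class Discriminator(nn.Module):
    """Scores a (grayscale, color) pair as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, gray, color):
        return self.net(torch.cat([gray, color], dim=1))

# Example forward pass on a dummy 64x64 grayscale batch.
gray = torch.rand(4, 1, 64, 64)
noise = torch.randn(4, 1, 64, 64)
G, D = Generator(), Discriminator()
fake_color = G(gray, noise)    # (4, 3, 64, 64) synthesized color images
score = D(gray, fake_color)    # (4, 1) real/fake logits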

Labmates

Dr. Xiaoming Liu

Assistant Professor

Amin Jourabloo

PhD Student

Xi Yin

PhD Student

Yousef Atoum

PhD Student

Seyed Morteza Safdarnejad

PhD Student

Garrick Brazil

PhD Student

Yaojie Liu

PhD Student

Great labmates!

I am fortunate to have the chance to work with such great people.

Publications

Disentangled Representation Learning GAN for Pose-Invariant Face Recognition

Luan Tran, Xi Yin, Xiaoming Liu
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, July 2017 (Oral Presentation)

Abstract

The large pose discrepancy between two face images is one of the key challenges in face recognition. The conventional approach to pose-robust face recognition either performs face frontalization on, or learns a pose-invariant representation from, a non-frontal face image. We argue that, it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation Learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn the identity representation for each face image, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one integrated representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both constrained and unconstrained databases demonstrate the superiority of DR-GAN over the state of the art.
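
To make the data flow described in this abstract concrete, below is a rough PyTorch sketch of the encoder-decoder generator and the multi-task discriminator. The feature size, pose-code length, noise dimension, class counts, and layer choices are illustrative assumptions, not the architecture actually used in the paper.

# Sketch of the DR-GAN data flow described in the abstract (PyTorch).
# Dimensions and class counts below are illustrative assumptions.
import torch
import torch.nn as nn

FEAT_DIM, POSE_DIM, NOISE_DIM, N_ID, N_POSE = 320, 13, 50, 500, 13

class Encoder(nn.Module):
    """G_enc: face image -> identity representation f(x)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, FEAT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """G_dec: [f(x), pose code c, noise z] -> synthetic face at pose c."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM + POSE_DIM + NOISE_DIM, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ELU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, f, c, z):
        h = self.fc(torch.cat([f, c, z], dim=1)).view(-1, 64, 8, 8)
        return self.deconv(h)

class Discriminator(nn.Module):
    """D: identity classification plus pose estimation on an input face.
    Folding real/fake into an extra identity class is one possible design
    (an assumption here), matching the adversarial signal in the abstract."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.id_head = nn.Linear(64, N_ID + 1)   # extra class for "generated"
        self.pose_head = nn.Linear(64, N_POSE)
    def forward(self, x):
        h = self.backbone(x)
        return self.id_head(h), self.pose_head(h)

# One image in, identity representation out, rotated face synthesized.
x = torch.rand(2, 3, 32, 32)
f = Encoder()(x)                                         # identity representation
c = torch.eye(POSE_DIM)[torch.randint(POSE_DIM, (2,))]   # target pose code (one-hot)
z = torch.randn(2, NOISE_DIM)
x_hat = Decoder()(f, c, z)                               # (2, 3, 32, 32) synthetic faces
id_logits, pose_logits = Discriminator()(x_hat)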

Missing Modalities Imputation via Cascaded Residual Autoencoder

Luan Tran, Xiaoming Liu, Jiayu Zhou, Rong Jin
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, July 2017

Abstract

Affordable sensors lead to an increasing interest in acquiring and modeling data with multiple modalities. Learning from multiple modalities has shown significant performance improvement in object recognition. However, in practice it is common that the sensing equipment experiences unforeseeable malfunction or configuration issues, leading to corrupted data with missing modalities. Most existing multi-modal learning algorithms could not handle missing modalities, and would discard either all modalities with missing values or all corrupted data. To leverage the valuable information in these corrupted data, we propose to impute the missing data, by leveraging the relatedness among different modalities. Specifically, we propose a novel Cascaded Residual Autoencoder (CRA) to impute missing modalities. By stacking residual autoencoders, CRA grows iteratively to model the residual between the current prediction and original data. Extensive experiments demonstrate the superior performance of CRA on both the data imputation and the object recognition task on imputed data.
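
Below is a minimal sketch of the cascaded-residual idea from this abstract: each autoencoder in the stack predicts the residual between the current imputation and the complete data, and its output is added back to the running estimate. The layer widths, depth, number of cascades, and zero-filling of missing entries are illustrative assumptions, not the configuration used in the paper.

# Minimal Cascaded Residual Autoencoder (CRA) sketch in PyTorch.
# Widths, depth, and the number of cascades are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualAE(nn.Module):
    """One autoencoder block that predicts a residual correction."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )
    def forward(self, x):
        return self.net(x)

class CRA(nn.Module):
    """Stack of residual autoencoders; each refines the previous imputation."""
    def __init__(self, dim, n_cascades=3):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualAE(dim) for _ in range(n_cascades))

    def forward(self, x_incomplete):
        x = x_incomplete          # missing entries assumed zero-filled
        for block in self.blocks:
            x = x + block(x)      # add the predicted residual to the current estimate
        return x

# Train to reconstruct complete feature vectors from zero-filled incomplete ones.
dim = 100
model = CRA(dim)
x_complete = torch.rand(8, dim)
mask = (torch.rand(8, dim) > 0.3).float()   # 1 = observed, 0 = missing-modality entries
x_incomplete = x_complete * mask
loss = nn.functional.mse_loss(model(x_incomplete), x_complete)
loss.backward()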

At My Lab

You can also find me at my lab, Room 3114, Engineering Building, Michigan State University.

I am at my lab every weekday from 8:00 am until 5:00 pm.