Tianbao Yang

Ph.D.
Michigan State University
Email: yangtia1 AT gmail DOT com



I am currently a Machine Learning Researcher at the Machine Learning Lab, GE Global Research. My advisor at MSU is Rong Jin. My CV can be found here.


Research


I am interested in statistical machine learning and its applications to real-world problems. My research topics include:
  • Social Network Analysis: Modeling Links and Content for Community Detection
    We developed algorithms for detecting communities and their dynamic evolution in social networks.
    Modeling links: We introduced popularity and productivity to model the power-law degree distributions observed in real networks.
    Modeling content: We proposed a discriminative content model and combined links and content to detect communities.
    Dynamic evolution: We proposed a dynamic stochastic block model to track the evolution of communities in dynamic social networks.

  • Learning from Noisy Labels: Algorithms and Theory
    We proposed algorithms and theory for learning from noisy labels.
    Algorithms: We proposed an algorithm that estimates the sufficient statistics of the perfect labels from the sufficient statistics of the noisy labels (a minimal sketch appears after this list).
    Theory: We showed that the model learned from noisy labels converges to the one learned from perfect labels under appropriate assumptions.

  • Multiple Kernel Learning: Efficiency, Robustness, and Sparsity
    Efficiency: We proposed several online multiple kernel learning algorithms and proved their regret (mistake) bounds.
    Robustness: We proposed a stochastic programming framework for multiple kernel learning from noisy labels.
    Sparsity: We proposed algorithms and theory for sparse multiple kernel learning involving no more than a fixed number of kernels.

  • Large-Scale Optimization, Online Optimization, and Stochastic Optimization
    Large-scale optimization: We developed an efficient primal-dual gradient method for non-smooth optimization problems that achieves an O(1/T) convergence rate.
    Online optimization: We developed algorithms and theory for online convex optimization that achieve a variation-based regret bound.
    Stochastic optimization: We developed algorithms and theory for stochastic gradient descent with only one projection (see the sketch after this list), and for online optimization with stochastic constraints.

  • Improved Bounds for the Nyström Method
    We developed several improved bounds for the Nyström method under a large eigengap condition and under a power-law eigenvalue distribution, together with the theory of their application to large-scale kernel learning (the basic construction is sketched after this list).
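
A minimal sketch of the noisy-label estimation idea above, assuming symmetric label noise with a known flip rate rho (the setting, variable names, and constants are illustrative, not the papers' notation). Because a label flipped with probability rho satisfies E[y_noisy | y] = (1 - 2*rho) * y, a clean moment statistic such as E[y*x] can be recovered from its noisy counterpart by rescaling:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, rho = 10000, 5, 0.2          # rho < 1/2 is the known flip rate

    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y_clean = np.sign(X @ w_true)      # perfect labels in {-1, +1}

    flip = rng.random(n) < rho         # each label flips independently
    y_noisy = np.where(flip, -y_clean, y_clean)

    stat_noisy = (y_noisy[:, None] * X).mean(axis=0)  # noisy estimate of E[y*x]
    stat_clean = (y_clean[:, None] * X).mean(axis=0)  # unobserved in practice
    stat_debiased = stat_noisy / (1.0 - 2.0 * rho)    # unbiased for the clean statistic

    print(np.linalg.norm(stat_debiased - stat_clean))  # small for large n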

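A toy sketch of the "only one projection" idea from the stochastic optimization item: take plain stochastic gradient steps without projecting, and project onto the feasible set (an l2 ball of radius R here, chosen purely for illustration) a single time at the end. The published algorithm additionally controls constraint violation with a penalty term during the run; this sketch shows only the overall shape:

    import numpy as np

    rng = np.random.default_rng(1)
    d, R, T = 5, 1.0, 5000
    w_star = rng.normal(size=d)

    w = np.zeros(d)
    for t in range(1, T + 1):
        x = rng.normal(size=d)              # one stochastic sample
        y = x @ w_star + 0.1 * rng.normal()
        grad = 2.0 * (x @ w - y) * x        # stochastic gradient of (x^T w - y)^2
        w -= 0.05 / np.sqrt(t) * grad       # step size eta_t = 0.05 / sqrt(t)

    if np.linalg.norm(w) > R:               # the single projection, after the last step
        w *= R / np.linalg.norm(w)
    print(np.linalg.norm(w))                # <= R by construction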

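Finally, the standard Nyström construction that the bounds above analyze: sample m landmark columns of the n x n kernel matrix K and approximate K by C W^+ C^T, where C holds the sampled columns and W their intersection block. The RBF kernel and uniform sampling below are illustrative choices:

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 500, 50
    X = rng.normal(size=(n, 3))

    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-0.5 * sq_dists)                 # RBF kernel matrix (illustrative)

    idx = rng.choice(n, size=m, replace=False)  # uniform landmark sampling
    C = K[:, idx]                               # n x m sampled columns
    W = K[np.ix_(idx, idx)]                     # m x m intersection block
    K_hat = C @ np.linalg.pinv(W) @ C.T         # rank-<=m Nystrom approximation

    err = np.linalg.norm(K - K_hat, "fro") / np.linalg.norm(K, "fro")
    print(f"relative Frobenius error: {err:.3f}")
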
Publications



Preprints

Conference and Journal Publications


Industry Lab Experience


  • NEC Laboratories America, Inc., Summer Intern, 2008, 2009, 2010
  • Yahoo! Research, Summer Intern, 2011


Services


PC Member

  • the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) 2012
  • the Association for the Advancement of Artificial Intelligence (AAAI) 2012
  • the ACM International Conference on Information and Knowledge Management (CIKM) 2012
  • the Asian Conference on Machine Learning (ACML) 2012

Reviewer

  • ACM Transactions on Knowledge Discovery from Data
  • Asian Conference on Pattern Recognition
  • IEEE Transactions on Neural Networks
  • Information Sciences