Welcome!

Contact Info

Email: hma .at. cse.msu.edu
Phone: 517-355-1646
Office: 1126 Engineering
Mailing Address:
     428 S. Shaw Lane, Room 3115
     East Lansing, MI 48824

Research Interests

High Performance Computing, Applications of Parallel Computing, Big Data Analytics, Numerical Linear Algebra

My Google Scholar Profile
My Research Gate Profile
My LinkedIn Profile

Current Research Topics

In my research, I work mainly on the design and development of parallel algorithms, numerical methods, and software systems that can harness the full potential of state-of-the-art computing platforms to address challenging problems in large-scale, data-intensive scientific applications. A distinguishing aspect of my research is the close interdisciplinary collaborations I have built with domain experts in fields as diverse as molecular modeling, nuclear physics, and computational biology. My current research interests can be grouped into three main topics:

Algorithms and data-driven techniques for molecular modeling and simulation

During my PhD studies, I developed a highly scalable parallel reactive molecular dynamics code, PuReMD [SISC12, ParCo12], which is now actively used by hundreds of researchers worldwide. During my postdoc, I worked on eigensolvers for fast and scalable electronic structure computations [ParCo13]. Recently, I have started working on a fully automated force field generation and optimization framework to streamline the design and simulation of advanced materials. The goal is to lessen, or even completely remove, the dependence on human experts by leveraging the vast amounts of experimental and computational data becoming available under the Materials Genome Initiative, together with the vast computational power of modern supercomputers. The picture below gives a good overview of the framework I am envisioning. This is a highly interdisciplinary project involving collaborations with computational physicists and materials scientists.

[Figure: overview of the envisioned automated force field generation and optimization framework]

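To make the idea concrete, here is a minimal, purely illustrative Python sketch of the parameter-fitting step at the heart of such a framework: it recovers two Lennard-Jones parameters by minimizing the squared error against reference energies. The actual framework targets far richer force fields (e.g., ReaxFF) and real quantum-chemistry training data; all names and values below are hypothetical.

# Hypothetical sketch: fitting two Lennard-Jones parameters (epsilon, sigma)
# to reference energies, standing in for the much richer reactive force field
# parameter optimization the framework targets.
import numpy as np
from scipy.optimize import minimize

# Assumed training data: pair distances and reference interaction energies
# (in practice these would come from quantum chemistry calculations).
r_train = np.linspace(3.0, 8.0, 40)  # distances in Angstroms
e_ref = 4 * 0.24 * ((3.4 / r_train)**12 - (3.4 / r_train)**6)  # synthetic "truth"

def lj_energy(params, r):
    """Lennard-Jones pair energy for given (epsilon, sigma)."""
    eps, sigma = params
    return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

def objective(params):
    """Sum of squared errors against the reference data set."""
    return np.sum((lj_energy(params, r_train) - e_ref)**2)

# Start from a deliberately poor initial guess and let the optimizer
# recover the parameters -- the automated analogue of hand tuning.
result = minimize(objective, x0=[0.1, 3.0], method="Nelder-Mead")
print("fitted (epsilon, sigma):", result.x)

In the envisioned framework, the hand-written objective above would be replaced by automated comparisons against large training databases, and the optimizer would run at scale on modern supercomputers.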
Parallel algorithms and numerical methods for emerging architectures

In response to the ever-increasing needs of large-scale applications, the architecture of modern computing systems has been undergoing major transformations. On the processor side, multi-core and many-core CPUs have emerged, and GPUs have been adopted for general-purpose computing. On the memory side, promising new technologies such as stacked memories and non-volatile memories are entering the marketplace. New network technologies are being developed to ensure fast and reliable communication on massively parallel architectures. These major changes in the architecture of large computer systems open up exciting research opportunities in parallel computing. During my postdoc, I worked on the design and development of high-performance eigensolvers with a heavy focus on these shifts in computer architecture. The application domains for our solvers include nuclear structure computations (see MFDn) and DFT-based electronic structure computations (see SHINES). My future work in this topic will focus on designing novel parallel algorithms and numerical methods that pay attention to the deep memory hierarchies and massive on-chip parallelism of emerging computer systems. My main focus will be on sparse matrix and graph computations, which are fundamental to several applications in large-scale simulations and data analytics. Energy consumption of data centers and supercomputers is increasingly becoming a major concern; so besides performance, reducing the energy consumption of the algorithms we design will be a main objective.
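As a small illustration of why sparse kernels stress the memory hierarchy, below is a minimal CSR sparse matrix-vector multiply (SpMV) in Python/NumPy; the irregular gathers through the column-index array are exactly what tuned libraries and architecture-aware kernels try to mitigate. This is a textbook sketch, not code from our solvers.

# A minimal CSR sparse matrix-vector multiply (SpMV) sketch, illustrating
# the irregular, memory-bound access pattern that makes sparse kernels
# challenging on deep memory hierarchies.
import numpy as np

def csr_spmv(row_ptr, col_idx, vals, x):
    """y = A @ x for A stored in Compressed Sparse Row format."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        # Nonzeros of row i are stored contiguously, but the gathers from x
        # (via col_idx) are irregular -- the root of SpMV's poor locality.
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(vals[start:end], x[col_idx[start:end]])
    return y

# Tiny example: a 3x3 sparse matrix with 4 nonzeros.
row_ptr = np.array([0, 2, 3, 4])
col_idx = np.array([0, 2, 1, 2])
vals    = np.array([1.0, 2.0, 3.0, 4.0])
x       = np.array([1.0, 1.0, 1.0])
print(csr_spmv(row_ptr, col_idx, vals, x))  # -> [3. 3. 4.]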



Software systems for high performance data analytics

Matrix and graph computations are among the most widely used methods in the analysis of large-scale datasets. Together with my collaborators from The Ohio State University, I am working on a task-based data-flow middleware, DOoC+LAF, that aims to enable high productivity while still ensuring good performance and scalability for data analysis by narrowing the focus to matrix- and graph-based analytics problems. The key component of this framework is a runtime environment that is aware of both the underlying architecture and the needs of its target applications, and uses this awareness to determine and execute optimal data movement policies. In our recent work [ICCPW12, Cluster12], we demonstrated the capabilities of DOoC+LAF by implementing an out-of-core iterative solver on SSD-equipped clusters. In close collaboration with scientists working on data-driven methods for scientific discovery, we aim to develop DOoC+LAF into a full-fledged graph and matrix analytics ecosystem.
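To illustrate the out-of-core principle (though not the actual DOoC+LAF API, which this sketch does not attempt to reproduce), here is a small Python example that streams matrix blocks from disk via numpy.memmap so that a simple iterative method can operate on a matrix that need not fit in memory. File names and block sizes are illustrative only.

# Hedged sketch of the out-of-core idea: stream matrix blocks from disk
# (numpy.memmap standing in for an SSD-aware runtime) so an iterative
# method can run on matrices larger than main memory.
import numpy as np

n, block = 4096, 512
rng = np.random.default_rng(0)

# Create a matrix on disk; a real workload would already have it there.
A = np.memmap("matrix.dat", dtype=np.float64, mode="w+", shape=(n, n))
for i in range(0, n, block):  # fill block-by-block to bound memory use
    A[i:i + block, :] = rng.standard_normal((block, n)) / n
A.flush()

def ooc_matvec(A, x, block):
    """y = A @ x, reading only one block of rows into memory at a time."""
    y = np.empty_like(x)
    for i in range(0, len(x), block):
        y[i:i + block] = A[i:i + block, :] @ x  # one streamed read per block
    return y

# A few power-iteration steps as the "iterative solver" driving the I/O.
x = rng.standard_normal(n)
for _ in range(5):
    x = ooc_matvec(A, x, block)
    x /= np.linalg.norm(x)

In the middleware setting, the per-block reads above would be scheduled by the runtime, which can overlap I/O with computation and choose data movement policies based on the architecture and the access pattern of the analytics task.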