To provide more research opportunities for undergraduate students, the Department of Computer Science and Engineering at Michigan State University has initiated a "Rosenblum Undergraduate Research Opportunity Award (RUROA)" starting Fall 2013. This award is a generous gift from Mr. Paul Rosenblum, who has a long-term passion for supporting undergraduate involvement in research projects. This fall we have a list of research projects available to undergraduate students, especially in the areas of biometrics, computer vision, machine learning, and graphics.
"3D Correspondence Inference" Faculty Advisor: Yiying Tong Contact e-mail: ytong@msu.edu Research Area(s): Computer Graphics
This project aims at developing tools for correspondence inference, which can be used to facilitate shape retrieval and shape analysis. As 3D scanners and 3D printers have become commonplace, there is a dire need for algorithms to search within and analyze large 3D geometric datasets. The first step toward these goals is often building a correspondence among different shapes. Most existing methods for correspondence inference focus on spatial representations of the correspondences, lacking direct access to global structures. We plan to extend the functional map representation, a compact form storing the linear mapping between low-frequency functions defined on the shapes. Our group is building a new representation based on vector fields. We seek a candidate to help build test datasets, convert various 3D model formats, and design a convenient user interface, as well as contribute to the core algorithm. Experience in C++, basic geometric modeling, and digital signal processing is required.
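The functional map idea mentioned above can be sketched in a few lines. This is a toy illustration only: it assumes descriptor functions on each shape have already been expressed in a low-frequency basis (in practice, the first Laplace-Beltrami eigenfunctions of each mesh), so that estimating the map reduces to a small least-squares problem; all names and sizes below are illustrative.

```python
import numpy as np

# Hypothetical stand-ins: in practice A and B hold the coefficients of
# corresponding descriptor functions (e.g. heat kernel signatures) of two
# shapes, each expressed in that shape's truncated eigenbasis.
rng = np.random.default_rng(0)
k, m = 20, 60                      # basis size, number of descriptor functions
A = rng.standard_normal((k, m))    # shape-1 descriptor coefficients
C_true = rng.standard_normal((k, k))
B = C_true @ A                     # shape-2 coefficients of the same descriptors

# The functional map is the k x k matrix C with C @ A ~= B, found by
# least squares (solve A.T @ C.T = B.T).
C, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
C = C.T

# A function on shape 1 with coefficient vector a is transferred to
# shape 2 simply as C @ a.
a = rng.standard_normal(k)
b = C @ a
```

Because the map is a small dense matrix rather than a point-to-point assignment, global structure (e.g. near-isometry) can be imposed directly as linear-algebraic constraints on C.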
"Empirical Downscaling of Model-Simulated Data for Climate Research" Faculty Advisor: Pang-Ning Tan Contact e-mail: tpan@cse.msu.edu Research Area(s): Data Mining
Projections of future climate scenarios are crucial for assessing and understanding the potential impacts of climate change on various human and natural systems, including agriculture, water resources, urban development, tourism, human health, and biodiversity. Recent advances in computer-simulated regional and global climate models have produced vast amounts of data that could be harnessed to improve our ability to generate more reliable projections of future climate scenarios. However, the spatial scales of these projections (ranging from 10 km to 200 km resolution) are still far too coarse to be effectively used in impact assessment studies. The goal of this project is to develop data mining approaches that integrate historical climate observations with model-simulated data to obtain more reliable high-resolution climate projections. These high-resolution projections will be used by crop models to simulate future yields in response to changing climate conditions.
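A minimal sketch of the empirical-downscaling idea, under the simplifying assumption of a linear transfer function (the project would explore richer data-mining models): fit a regression from coarse model output to co-located station observations over the historical period, then apply it to future model output. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 4   # historical days, coarse-grid predictors per day

# Synthetic historical pairs: coarse-model predictors (e.g. temperature,
# humidity, pressure, wind at the enclosing grid cell) vs. what a local
# weather station actually recorded.
X_hist = rng.standard_normal((n, p))
w_true = np.array([2.0, -1.0, 0.5, 0.0])
y_hist = X_hist @ w_true + 0.1 * rng.standard_normal(n)

# Fit the transfer function by least squares (with an intercept column).
Xb = np.column_stack([X_hist, np.ones(n)])
w, *_ = np.linalg.lstsq(Xb, y_hist, rcond=None)

# Apply it to future coarse-model output to obtain station-scale
# (downscaled) projections at that location.
X_future = rng.standard_normal((100, p))
y_downscaled = np.column_stack([X_future, np.ones(100)]) @ w
```

Repeating this per station (or per fine grid cell) turns a 100 km model field into local projections calibrated against observations.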
"Facial Analysis in First Person Vision" Faculty Advisor: Xiaoming Liu Contact e-mail: liuxm@cse.msu.edu Research Area(s): Computer Vision, Biometrics, Pattern Recognition
Conventional cameras, whether stationary or pan-tilt-zoom (PTZ), capture visual content from a third-person perspective. In contrast, the recently developed first-person vision (FPV) senses the environment and the surrounding people's activities from a wearable sensor, capturing the wearer's own viewpoint. FPV videos have a number of advantages over those from conventional cameras, such as better viewing angles, close-up distance, and multi-view observations. Hence, how to take advantage of them in visual content analysis is an important question. In the context of human-to-human interaction, this project will explore technical and practical approaches to performing facial analysis on a wearable platform. For example, suppose that while I am wearing a camera on my head, I am approached by another person whose name I cannot remember. Can the wearable camera capture a face image and match it against a list of subjects I have encountered before? This project will involve studying various communication schemes between wearable sensors and a PC/smartphone, and implementing existing visual analysis algorithms (such as face recognition or expression recognition) on a smartphone.
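The matching step in the "whose name is that" scenario can be sketched as follows. This assumes faces have already been converted to fixed-length feature vectors by some face-recognition model; the gallery names, the embedding dimension, and the threshold below are all illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 128  # hypothetical embedding dimension

# Gallery of previously encountered people (random placeholder embeddings).
gallery = {name: rng.standard_normal(d) for name in ["alice", "bob", "carol"]}

def best_match(probe, gallery, threshold=0.5):
    """Return the gallery name most similar to the probe face, or None."""
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    name, score = max(((n, cosine(probe, v)) for n, v in gallery.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

# A probe embedding close to Bob's template matches him.
probe = gallery["bob"] + 0.05 * rng.standard_normal(d)
print(best_match(probe, gallery))  # bob
```

On the wearable platform, the interesting engineering questions are where each step runs: detection on the device, embedding on the phone, and gallery matching wherever the enrolled templates live.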
"Facial Expression Recognition in HCI" Faculty Advisor: Anil Jain Contact e-mail: jain@msu.edu Research Area(s): Biometrics, Pattern Recognition
Consider a typical family room in which sensors such as an RGB camera and/or a 3D sensor such as the Microsoft Kinect are mounted on top of a TV. When a user is watching TV or playing a game, it would be nice to automatically estimate the user's emotional state. Typically this is achieved by expression recognition from conventional RGB videos of faces. In this project, we will utilize both the RGB camera and the 3D sensor to develop a real-time expression recognition system. The student is expected to review the literature, gather or collect relevant datasets, study pattern recognition algorithms, and develop a demo system for expression recognition.
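One way to combine the two modalities is simple feature-level fusion. The sketch below is a hypothetical illustration only: it assumes per-frame RGB and 3D features have already been extracted, uses synthetic vectors, and classifies with a nearest-class-mean rule rather than any particular method the project would adopt.

```python
import numpy as np

rng = np.random.default_rng(3)
classes = ["neutral", "happy", "surprised"]
# Placeholder per-class means over the fused 16-dim feature space.
means = {c: rng.standard_normal(16) for c in classes}

def extract_fused(rgb_feat, depth_feat):
    # Simplest fusion: concatenate the two modality vectors (8 + 8 dims here).
    return np.concatenate([rgb_feat, depth_feat])

def classify(fused):
    # Nearest class mean in the fused feature space.
    return min(classes, key=lambda c: np.linalg.norm(fused - means[c]))

# A sample whose RGB and depth features both lie near the "happy" pattern
# is labeled "happy".
rgb = means["happy"][:8] + 0.1 * rng.standard_normal(8)
depth = means["happy"][8:] + 0.1 * rng.standard_normal(8)
print(classify(extract_fused(rgb, depth)))  # happy
```

The 3D channel matters in the living-room setting because depth is robust to the lighting changes a TV itself causes on the viewer's face.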
"Fine-grained Ethnicity Estimation from Facial Images" Faculty Advisor: Xiaoming Liu Contact e-mail: liuxm@cse.msu.edu Research Area(s): Computer Vision, Biometrics, Pattern Recognition
We, as humans, have the capability to estimate demographic attributes, such as age, gender, and ethnicity, of the people we encounter in our daily lives. As computer scientists, we would like to give a computer the same intelligence. In the case of demographic estimation, the goal is to develop algorithms and software to automatically estimate demographics, such as age, gender, and ethnicity, from face images. This research topic is driven not only by the dream of machine intelligence, but also by potential applications in law enforcement, security control, and human-computer interaction. In the research community, automatic estimation of age, gender, and basic ethnicity groups has been well studied. However, estimating fine-grained ethnicity groups (Chinese, Arab, Japanese, Korean, white American, African American, Hispanic, etc.) is still an unaddressed problem. This research project will involve identifying major ethnicity groups, collecting face databases, developing algorithms, and evaluating performance.
"Gesture Recognition Using Smart Phones" Faculty Advisor: Arun Ross Contact e-mail: rossarun@cse.msu.edu Research Area(s): Biometrics, Computer Vision, Pattern Recognition
The
goal of this project is to develop a gesture recognition application
for a smart phone. The application on the smart phone should be able to
decipher the hand motion of a user holding it (i.e., the smart phone)
and perform certain operations based on the executed motion. For
example, if the user were to grasp the smart phone and move it from a
horizontal to a vertical position, the application should first
recognize this gesture and then connect with another smart phone device
in order to transfer data. The application should be able to recognize
at least 5 distinct hand gestures based on the accelerometer data.
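The horizontal-to-vertical example can be sketched directly from accelerometer physics: at rest, the dominant component of a 3-axis reading is gravity, so the phone's orientation is revealed by which axis carries roughly 9.81 m/s^2. The sample stream and thresholds below are illustrative, not drawn from any real device.

```python
G = 9.81  # gravity magnitude, m/s^2

def orientation(sample):
    """Classify one (x, y, z) accelerometer reading while roughly at rest."""
    x, y, z = sample
    if abs(z) > 0.8 * G:
        return "flat"      # screen facing up/down: gravity along z
    if abs(y) > 0.8 * G:
        return "upright"   # phone held vertically: gravity along y
    return "other"         # mid-rotation or in motion

def detect_flat_to_upright(samples):
    """True if the stream ever goes from 'flat' to (later) 'upright'."""
    states = [orientation(s) for s in samples]
    for i, s in enumerate(states):
        if s == "flat" and "upright" in states[i:]:
            return True
    return False

# Synthetic stream: phone lying flat, then rotated to vertical.
stream = [(0.1, 0.2, 9.7)] * 5 + [(0.2, 5.0, 8.0)] * 3 + [(0.1, 9.6, 0.3)] * 5
print(detect_flat_to_upright(stream))  # True
```

A full five-gesture recognizer would replace the hand-written thresholds with features (tilt angles, jerk, frequency content) fed to a trained classifier.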
"Iris Recognition Using Smart Phones" Faculty Advisor: Arun Ross Contact e-mail: rossarun@cse.msu.edu Research Area(s): Biometrics, Computer Vision, Pattern Recognition
The
goal of this project is to develop an application that can perform iris
recognition in a smart phone. The application should capture an image
of the iris using the camera in the smart phone, extract features from
the image, and compare the extracted features with the template data
stored in the smart phone in order to determine if the acquired iris
corresponds to the stored template. The feature extraction and matching
have to be performed within the smart phone’s computing environment.
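The comparison step can be sketched with Daugman-style binary iris codes, where matching is a fractional Hamming distance: genuine pairs disagree on few bits, impostor pairs on about half. The codes below are random bits standing in for real Gabor-phase codes, and the threshold is an illustrative operating point.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bits = 2048  # typical iris-code length

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

template = rng.integers(0, 2, n_bits)               # enrolled iris code
genuine = template.copy()                           # same eye, ~5% bit noise
flip = rng.choice(n_bits, size=100, replace=False)
genuine[flip] ^= 1
impostor = rng.integers(0, 2, n_bits)               # unrelated eye

THRESHOLD = 0.32  # illustrative decision threshold
print(hamming_distance(template, genuine) < THRESHOLD)   # True: accept
print(hamming_distance(template, impostor) < THRESHOLD)  # False: reject
```

The expensive parts on the phone are upstream of this: segmenting the iris from the camera image and computing the code itself within the device's memory and power budget.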
"Knows Who You Are from Your Hands" Faculty Advisor: Xiaoming Liu Contact e-mail: liuxm@cse.msu.edu Research Area(s): Computer Vision, Biometrics, Pattern Recognition
We hypothesize that an
individual computer user has a unique and consistent pattern of hand
appearance, shape, and movements, independent of the text, while typing
on a keyboard. We have developed a novel biometric modality named
“Typing Behavior (TB)” for continuous user authentication. Given a
webcam pointing toward a keyboard, real-time computer vision algorithms
are developed to automatically extract hand movement patterns from the
video stream. For one typing video, the hands are segmented in each
frame and a unique descriptor is extracted based on the shape and
position of hands, as well as their temporal dynamics in the video
sequence. Experimental results have shown that this is an accurate and
efficient biometric modality. Now we would like to extend this work
toward using hand appearance information for authentication. A large
database of typing has been captured in our prior study. This project will involve studying various feature representations for hand texture, developing the system components, and evaluating system performance.
"Online Movie Recommender System" Faculty Advisor: Rong Jin Contact e-mail: rongjin@cse.msu.edu Research Area(s): Machine learning
The objective of an online recommender system is to automatically recommend items to an individual user by effectively leveraging the rating information provided by other users. The best-known example is book recommendation on Amazon; another is movie recommendation on Netflix. In this project, we will focus on developing a state-of-the-art online recommendation system by exploiting matrix completion theory, which yielded the best performance in the Netflix contest. The figure on the right shows the basic idea of recommendation by matrix completion. We will utilize the movie rating data provided in the Netflix contest and meta-data from IMDB to develop an online movie recommendation system.