Developing techniques to detect and defend against misuse of generative AI models, including deepfake detection, robust unlearning, and watermarking.
Studying attack and defense strategies for machine learning models exposed to adversarial inputs during training and/or inference.
Building robust, privacy-preserving collaborative/federated learning frameworks resilient to malicious attacks and data leakage.
Optimizing collaboration mechanisms among agents in multi-agent systems built on large language models or multimodal foundation models.
Enabling efficient inference and training directly on homomorphically encrypted data to preserve privacy throughout the ML pipeline.
Investigating vulnerabilities in biometric recognition systems, including presentation attack detection and template protection.
Developing information fusion methods for combining evidence from multiple sources and modalities.
Designing fair and explainable AI models whose decisions are unbiased and interpretable.
Detecting anomalous events in surveillance videos.
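As a toy sketch of the adversarial-inputs topic above: a gradient-sign (FGSM-style) evasion attack against a linear classifier. The model weights, input, and step size `eps` here are hypothetical placeholders chosen purely for illustration, not a real trained model.

```python
import numpy as np

# Hypothetical "trained" logistic-regression model: logit = w . x + b
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """Sigmoid probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps):
    """FGSM-style step: move x in the sign of the loss gradient.
    For binary cross-entropy and a linear model, the input
    gradient is (p - y) * w, where y is the true label (0 or 1)."""
    grad = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([2.0, -1.0, 0.0])   # correctly classified positive example
y = 1
x_adv = fgsm_perturb(x, y, eps=1.5)

print(predict_proba(x) > 0.5)      # True: original prediction is correct
print(predict_proba(x_adv) > 0.5)  # False: the perturbed input flips the label
```

The same sign-of-gradient idea scales to deep networks, where the input gradient is obtained by backpropagation instead of the closed form used here.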
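As a minimal sketch of the federated-learning topic above: the server-side aggregation step of FedAvg, which averages client parameter vectors weighted by local dataset size. The function name, client values, and sizes are illustrative assumptions, and the sketch omits the local-training and privacy-protection machinery a real framework would add.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: dataset-size-weighted average of
    client parameter vectors into one global model."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape: (n_clients, n_params)
    coeffs = sizes / sizes.sum()         # each client's proportional weight
    return coeffs @ stacked              # aggregated global parameters

# Two hypothetical clients; the second holds 3x more data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [10, 30]
global_w = fed_avg(clients, sizes)
print(global_w)  # [2.5 3.5]
```

Weighting by dataset size keeps the aggregate an unbiased estimate of training on the pooled data; robustness to malicious clients then comes from replacing this plain average with a robust aggregator (e.g., a coordinate-wise median).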