Unit: San Diego State University
Title: Associate Professor
Extreme-scale learning algorithms in artificial intelligence (AI) systems often fail to deliver critical, robust decisions: they do not operate reliably and efficiently, transfer knowledge poorly, and struggle to adapt to continually evolving, noisy, and complex environments. They require an end-to-end reliable and efficient learning process that not only guarantees robust continual learning but also adapts to new online domains and complex environments. Specific shortcomings of AI-driven systems include: (i) many fail to provide consistent, reliable, and robust online predictive decisions, particularly in dynamic environments; (ii) they are vulnerable to adversarial examples, in which small and often imperceptible perturbations fool the network and change its decision; (iii) they cannot effectively transfer knowledge from pre-trained models to a target dataset; (iv) they struggle to adapt from out-of-distribution models; and (v) they are unreliable due to uncertainty and low confidence levels. Recognizing these limitations and challenges, we (1) address the foundations of reliable continual learning for AI models, (2) enhance the robustness of extreme-scale AI algorithms against adverse conditions and noise, (3) enable online adaptation from multiple out-of-distribution models, and (4) monitor confidence levels and calibration under label shift.
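As one concrete illustration of the calibration-monitoring direction in (4), the sketch below computes the expected calibration error (ECE): the weighted average gap between a model's stated confidence and its actual accuracy across confidence bins. The function name and equal-width binning scheme are illustrative assumptions, not the group's actual implementation.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error over equal-width confidence bins.

    confidences: per-prediction confidence scores in (0, 1].
    correct: per-prediction 0/1 correctness indicators.
    Returns the bin-weighted mean |accuracy - avg confidence|.
    (Illustrative sketch; bin scheme is an assumption.)
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Predictions whose confidence falls in this bin (lo, hi].
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            acc = sum(correct[i] for i in idx) / len(idx)
            avg_conf = sum(confidences[i] for i in idx) / len(idx)
            ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece
```

A perfectly calibrated model (e.g., 90% confidence with 90% accuracy) yields an ECE of zero; under label shift, tracking this quantity on a monitoring stream flags when confidence no longer matches accuracy.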
Reliable and safe artificial intelligence; Machine learning algorithms; Design, improvement, and analysis of deep learning techniques with an emphasis on efficiency and robustness; Domain adaptation and knowledge transfer; Mathematical analysis of continual learning methods; Adversarial learning; Multi-task learning problems; Multimodal learning with transformers; Graph summarization; Data mining; and High-dimensional network structure learning.