Associate Professor of Computer Science, Kean University
Some researchers suggest that Deep Learning (DL) has become a form of alchemy because it lacks a theoretical foundation. In our view, more and more people will use DL as long as it benefits them, regardless of theoretical foundation — much as humans used fire, or the metal lead (Pb), long before the invention of the periodic table of elements. To avoid harmful uses of DL, we propose to investigate its underlying theory in terms of stability. New mathematical proofs have shown that DL is universally unstable, which is consistent with our interpretation of neurons as similarity estimators. In this proposal, we will first validate computational proofs of instability using our discovery that neurons act as similarity approximators. We will then investigate ways to avoid instability — for example, to detect and mitigate it — based on our neuron-similarity interpretation. Most importantly, we will investigate how to protect safety-critical systems that use AI DL technologies. We hope to contribute to the safe use of DL across its many successful applications.
Dr. Li's research focuses on AI and Machine Learning. In AI, she invented the conditional belief method for representing knowledge uncertainty. In Machine Learning, she discovered a similarity interpretation of neurons and used it to guide a principled approach to neural network design that significantly reduces the required number of training cases. She proposes to use this similarity discovery to protect AI systems and enable safer applications. The two main application areas of her AI/ML research are software engineering and health.