Academic and Scholarly Events

  • 2/5 Statistics Colloquium, Wei Deng

    Wei Deng

    Morgan Stanley


    Non-convex Bayesian Learning via Stochastic Gradient MCMC and Schrödinger Bridge


    Generating models from data relies on Monte Carlo methods; generating data from models calls for diffusion models. The former ensures safe and reliable artificial intelligence (AI) for trustworthy decision making, while the latter fuels generative AI for applications such as artwork creation, product design, and dialogue generation. Despite significant advances in GPU computing, simulating and generating multi-modal distributions remains inefficient.


    To accelerate Monte Carlo computation, a standard tool is Langevin Monte Carlo (LMC). However, LMC often struggles to explore multi-modal posterior distributions effectively. To tackle this issue, we first introduce replica exchange Langevin Monte Carlo (also known as parallel tempering), which proposes appropriate swaps between exploration and exploitation. We show how to implement the bias-free replica exchange method in big-data settings, how to accelerate convergence using variance reduction, and how to implement multi-chain parallel tempering using non-reversibility. We then delve into deep importance sampling, proposing adaptive samplers that mitigate energy barriers and facilitate optimization. We demonstrate that the interacting algorithm can be more efficient than a single-chain alternative with an equivalent computational budget.
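The replica exchange idea above can be sketched in a few lines. This is a minimal illustration, not code from the talk: a low-temperature Langevin chain (exploitation) runs alongside a high-temperature one (exploration), with occasional Metropolis-style swaps. The double-well potential, step size, and temperatures are all assumptions chosen for the example.

```python
import numpy as np

def U(x):
    # Double-well potential U(x) = (x^2 - 1)^2: a simple bimodal target
    # with modes at x = -1 and x = +1 (illustrative choice).
    return (x**2 - 1)**2

def grad_U(x):
    return 4 * x * (x**2 - 1)

def replica_exchange_lmc(n_steps=5000, lr=0.01, temps=(1.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(2)          # one particle per temperature
    samples = []
    for _ in range(n_steps):
        # Langevin update for each replica at its own temperature
        for i, T in enumerate(temps):
            x[i] += -lr * grad_U(x[i]) + np.sqrt(2 * lr * T) * rng.standard_normal()
        # Propose swapping the two replicas; accept with the standard
        # parallel-tempering probability min(1, exp(dE))
        dE = (1 / temps[0] - 1 / temps[1]) * (U(x[0]) - U(x[1]))
        if np.log(rng.random()) < dE:
            x[0], x[1] = x[1], x[0]
        samples.append(x[0])            # keep the low-temperature chain
    return np.array(samples)

samples = replica_exchange_lmc()
```

Swaps let states discovered by the hot chain propagate to the cold chain, so the low-temperature samples visit both modes far sooner than a single LMC chain would.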


    Generative models, while powerful, pose challenges due to their training expense and slow inference. The Schrödinger Bridge (SB) is a theoretical tool for addressing these issues, but its theoretical understanding in generative modeling remains incomplete. To contribute to this understanding, we examine the stability properties of SB with respect to the marginals and apply the algorithm to probabilistic time series imputation. Since real-world data often has bounded support, we propose a reflected Schrödinger bridge based on reflected forward-backward stochastic differential equations. Exploiting the bounded domain, we obtain a linear stability property with respect to the marginals. Our algorithm yields robust generative modeling across diverse domains, and its scalability is demonstrated on real-world constrained generative modeling tasks using standard image benchmarks.
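The boundary-reflection mechanism underlying reflected SDEs can be illustrated with a toy reflected Langevin sampler on [0, 1]. This is only a sketch of the reflection idea, not the talk's forward-backward SB algorithm: the Beta(2, 2)-like target, its score, and all parameters are hypothetical choices for the example.

```python
import numpy as np

def reflect(x, lo=0.0, hi=1.0):
    # Fold a point back into [lo, hi] by reflecting at the boundaries.
    width = hi - lo
    y = (x - lo) % (2 * width)
    return lo + np.where(y > width, 2 * width - y, y)

def reflected_langevin(grad_log_p, x0, n_steps=2000, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    out = []
    for _ in range(n_steps):
        x = x + lr * grad_log_p(x) + np.sqrt(2 * lr) * rng.standard_normal(x.shape)
        x = reflect(x)                  # reflection keeps the support bounded
        out.append(x.copy())
    return np.array(out)

# Hypothetical target on [0, 1]: p(x) proportional to x(1 - x) (Beta(2, 2)),
# whose score is 1/x - 1/(1 - x), clipped away from the boundary for safety.
score = lambda x: 1 / np.clip(x, 1e-6, None) - 1 / np.clip(1 - x, 1e-6, None)
samples = reflected_langevin(score, x0=np.array([0.5]))
```

Because every step ends with a reflection, the trajectory never leaves the domain, which is the property that makes bounded-support generative modeling tractable in this framework.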

    Bio: Wei Deng is a machine learning researcher at Morgan Stanley, New York. He completed his Ph.D. in Applied Mathematics at Purdue University in December 2021 under the guidance of Drs. Faming Liang and Guang Lin. His research focuses on Monte Carlo methods and diffusion models, with a keen interest in state space models. His goal is to develop more scalable and reliable probabilistic methods for AI applications in supervised learning, generative modeling, and time series analysis.


      Monday, 2/5/2024, 3:30 PM, AUST 105

    Webex link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m0ee1039617b2dbbbede30aba3e224791

    Coffee will be served at 3:00 pm in the Noether Lounge (AUST 326)

    For more information, contact: Tracy Burke at tracy.burke@uconn.edu