Academic and Scholarly Events

  • Today: Statistics Colloquium, Apratim Dey

    Apratim Dey, PhD Student

    Department of Statistics

    Stanford University


    Compressive Sensing with Stein Shrinkage

    Abstract: We live in an era where sensors, radars and satellites all around us are collecting enormous volumes of data, raising fundamental questions about the feasibility of data acquisition, storage and transmission. Do we really need to collect all the data? Compressive sensing, introduced in the early 2000s, posits that the answer is usually no: when the data is inherently k-sparse, one can take many fewer linear measurements and still enjoy perfect recovery, provided the number of measurements exceeds a critical threshold. Two decades of research have focused on understanding this critical threshold, most of it deriving statements of the form "if n > C k log(N/k) then one achieves perfect recovery" [here n is the number of measurements, N the ambient dimension, and k the number of non-zeros in the signal to be recovered]. This naturally raises several questions: What is C? Is this rate necessary? Moreover, much emphasis has been placed on convex optimization (viz. $\ell_1$-norm minimization, also known as basis pursuit), making one wonder: can we do even better? Can, somehow, n = k (the BEST possible case)?
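
    To make the recovery problem concrete, here is a minimal basis-pursuit sketch in Python (an editorial illustration, not material from the talk; the toy sizes N, n, k and the Gaussian measurement matrix are assumptions), solving min ||x||_1 subject to Ax = y via the standard linear-programming reformulation:

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        N, n, k = 200, 60, 5          # ambient dimension, measurements, sparsity (toy values)

        x_true = np.zeros(N)          # k-sparse signal to be recovered
        x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((n, N)) / np.sqrt(n)   # generic Gaussian measurements
        y = A @ x_true

        # Basis pursuit as an LP: write x = u - v with u, v >= 0, minimize sum(u) + sum(v).
        c = np.ones(2 * N)
        res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
        x_hat = res.x[:N] - res.x[N:]
        print("recovery error:", np.linalg.norm(x_hat - x_true))  # near zero above the threshold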


    Later, inspired by ideas from statistical physics, powerful iterative algorithms known as AMP (approximate message passing) were developed; they pin down this critical threshold exactly for basis pursuit, and also provide a pathway to precisely understand the performance of potentially non-convex procedures. I shall briefly review these algorithms. Through them, we know today that with generic measurements one is usually not able to take n too close to k: there is always a gap between sparsity and undersampling.
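
    For readers new to AMP, the following Python sketch shows the soft-thresholding AMP iteration for sparse recovery; the threshold rule alpha * ||z|| / sqrt(n) is a simple heuristic assumed here, not the tuned policy analyzed in the literature:

        import numpy as np

        def soft(u, t):
            # Soft-thresholding denoiser eta(u; t).
            return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

        def amp(A, y, iters=30, alpha=1.5):
            n, N = A.shape
            x, z = np.zeros(N), y.copy()
            for _ in range(iters):
                tau = alpha * np.linalg.norm(z) / np.sqrt(n)  # noise level estimated from the residual
                x_new = soft(x + A.T @ z, tau)                # denoise the pseudo-data x + A'z
                # Onsager correction: residual feedback scaled by the denoiser's average derivative.
                z = y - A @ x_new + (z / n) * np.count_nonzero(x_new)
                x = x_new
            return x

        # e.g., with A and y from the sketch above: x_hat = amp(A, y)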


    However, when one measures multiple sparse signals sharing the same sparsity pattern (as is common in important modern technology), it turns out, pleasantly, that one can achieve n quite close to k, even with completely unstructured measurements. The usual convex optimization no longer works; one needs to resort to a non-convex procedure, constructed with the help of AMP. This procedure involves James-Stein shrinkage, and the reason behind its outstanding performance can, surprisingly, be traced to insights developed in classical statistical decision theory, particularly the minimax estimation of a Gaussian mean. Thus, the worlds of modern high dimensional signal processing and classical statistical decision theory become one and the same.
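
    As a flavor of that shrinkage step, here is a sketch (under the same illustrative assumptions) of positive-part James-Stein shrinkage applied row-wise, the kind of denoiser one can plug into AMP when L >= 3 sparse signals share a support, so each row of the pseudo-data matrix U is either pure noise or jointly active:

        import numpy as np

        def james_stein_rows(U, tau):
            # Positive-part James-Stein shrinkage of each length-L row of U,
            # treating a row as a noisy observation of a mean vector, noise std tau.
            L = U.shape[1]
            norms2 = np.sum(U**2, axis=1, keepdims=True) + 1e-12  # guard against zero rows
            shrink = np.maximum(1.0 - (L - 2) * tau**2 / norms2, 0.0)
            return shrink * U

    Rows whose squared norm falls below (L - 2) * tau**2 are shrunk exactly to zero, which is how the shared sparsity pattern gets exploited.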


    Bio: Apratim Dey is a Ph.D. student in the Department of Statistics at Stanford University, advised by David Donoho. He obtained his undergraduate degree in Statistics from the Indian Statistical Institute. His research interests span compressed sensing, high dimensional statistics, precise analysis of high dimensional optimization, and, more recently, phenomena in deep learning.


    DATE: Wednesday, November 6, 2024, 4:00 PM, AUST 202


    Webex link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6ce04569549df21ef77fc19f75b86ff4


    Coffee will be served at 3:30 in the Noether Lounge (AUST 326)

    For more information, contact: Tracy Burke at tracy.burke@uconn.edu