Dear all,
The Statistics and Data Science (SDS) group at Queen Mary University
of London organizes a series of internal seminars, which are open to
anyone interested.
Our next speaker will be Jingwei Liang <https://jliang993.github.io>
(Queen Mary University of London, UK)
*Date*: Wednesday April 7, 2021 at 3 p.m. (UK time)
*Where:* Zoom https://qmul-ac-uk.zoom.us/j/82396451820
*Title*: Screening for Sparse Online Learning
*Abstract*: Sparsity-promoting regularizers are widely used to impose
low-complexity structure (e.g. the l1-norm for sparsity) on the regression
coefficients of supervised learning. In the realm of deterministic
optimization, the sequence generated by iterative algorithms (such as
proximal gradient descent) exhibits "finite activity identification",
namely, it identifies the low-complexity structure in a finite number
of iterations. However, most online algorithms (such as proximal stochastic
gradient descent) lack this property owing to the vanishing step-size
and non-vanishing variance. In this talk, I will show how a screening
rule can eliminate useless features of the iterates generated by online
algorithms, thereby enforcing finite activity identification. One
consequence is that, when combined with any convergent
online algorithm, sparsity properties imposed by the regularizer can be
exploited for computational gains. Numerically, significant acceleration
can be obtained.
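For readers unfamiliar with "finite activity identification", here is a minimal sketch (not the speaker's method): deterministic proximal gradient descent (ISTA) on a small, synthetic lasso problem. The problem sizes, regularization weight, and variable names are all illustrative assumptions; the point is that the iterate's support (its set of nonzero entries) stops changing after finitely many iterations, long before the values themselves converge.

```python
import numpy as np

# Illustrative toy example (all parameters are assumptions, not from the talk):
# solve  min_x  0.5 * ||A x - b||^2 + lam * ||x||_1  with proximal gradient
# descent, and track when the support of x stabilizes.

rng = np.random.default_rng(0)
n, p = 50, 20
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:3] = [2.0, -1.5, 1.0]              # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.5                                   # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const. of the gradient

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(p)
supports = []
for _ in range(500):
    grad = A.T @ (A @ x - b)                # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)
    supports.append(tuple(np.flatnonzero(x)))

# Finite activity identification: the support is fixed from some finite
# iteration onwards, even though x keeps changing in value.
print("final support:", supports[-1])
print("support identified at iteration:", supports.index(supports[-1]))
```

With proximal *stochastic* gradient descent the thresholding argument is noisy and the step-size vanishes, so the support typically keeps fluctuating; the screening rule discussed in the talk is a way to restore this identification behaviour for online algorithms.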
Best Wishes,
Luca Rossini
You may leave the list at any time by sending the command
SIGNOFF allstat
to [log in to unmask], leaving the subject line blank.