Title: Fast gradient descent for drifting least squares regression, with application to bandits
Speaker: Prashanth L.A.
Date and Time: Wednesday, November 26, 2014, 3:30 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Online learning algorithms often need to recompute least squares regression estimates of parameters. We study improving the computational complexity of such algorithms by using stochastic gradient descent (SGD) type schemes in place of classic regression solvers. We show that SGD schemes efficiently track the true solutions of the regression problems, even in the presence of a drift. This finding, coupled with an O(d) improvement in complexity, where d is the dimension of the data, makes them attractive for implementation in big data settings. When strong convexity of the regression problem is guaranteed, we provide bounds on the error both in expectation and in high probability (the latter is often needed to provide theoretical guarantees for higher-level algorithms), despite the drifting least squares solution. As an example of this case, we prove that the regret of an SGD version of the PEGE linear bandit algorithm [Rusmevichientong and Tsitsiklis 2010] is worse than that of PEGE itself only by a factor of O(log^4 n). When strong convexity of the regression problem cannot be guaranteed, we investigate the use of adaptive regularisation. We empirically study an adaptively regularised SGD version of LinUCB [Li et al. 2010] on a news article recommendation application, using the large-scale news recommendation dataset from the Yahoo! front page. These experiments show a large gain in computational efficiency and a consistently low tracking error.
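The central idea of the abstract, replacing a full regression solve with a single O(d) SGD update per sample while still tracking a slowly drifting least squares solution, can be illustrated with a minimal sketch. The dimension, step-size schedule and drift model below are illustrative assumptions for this toy simulation, not the ones analysed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5          # data dimension; the SGD update costs O(d) per sample
n = 20000      # number of streaming samples
drift = 1e-4   # magnitude of the per-step drift of the true parameter

theta_true = rng.normal(size=d)   # drifting least-squares solution (unknown to the learner)
theta_sgd = np.zeros(d)           # SGD iterate that should track theta_true

for t in range(1, n + 1):
    theta_true += drift * rng.normal(size=d)     # the target slowly drifts
    x = rng.normal(size=d)                       # feature vector
    y = x @ theta_true + 0.1 * rng.normal()      # noisy linear observation
    step = 1.0 / (t ** 0.6 + 10.0)               # illustrative diminishing step size
    grad = (x @ theta_sgd - y) * x               # gradient of 0.5 * (x' theta - y)^2
    theta_sgd -= step * grad                     # O(d) update, no matrix inversion

tracking_error = np.linalg.norm(theta_sgd - theta_true)
```

In contrast, recomputing the exact regularised least squares estimate at every step costs at least O(d^2) per sample once the covariance matrix is maintained; the sketch above keeps only the d-dimensional iterate, which is the source of the O(d) saving mentioned in the abstract.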
Speaker Bio:
Prashanth L.A. is currently a research associate in the Stochastic Systems Lab (SSL) at the Department of Computer Science and Automation. Prior to this, he was a postdoctoral researcher at INRIA Lille - Team SequeL from 2012 to 2014. From 2002 to 2009, he was with Texas Instruments (India) Pvt Ltd, Bangalore, India. He received his Master's and Ph.D. degrees in Computer Science and Automation from the Indian Institute of Science in 2008 and 2013, respectively. He was awarded the third prize for his Ph.D. dissertation by the IEEE Intelligent Transportation Systems Society (ITSS). He is the coauthor of the book 'Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods', published by Springer in 2013. His research interests are in stochastic control and optimization, reinforcement learning and multi-armed bandits, with applications in transportation systems, wireless networks and recommendation systems.
Host Faculty: Shalabh Bhatnagar