Indo-US Joint Center for

Advanced Research in Machine Learning, Game Theory and Optimization


Indo-US Lectures Week in

Machine Learning, Game Theory and Optimization

January 7-10, 2014
Indian Institute of Science, Bangalore

Venue: Centre for Nano Science and Engineering (CeNSE), Indian Institute of Science

(All lectures will be held in the CeNSE Seminar Hall GF-07. The Poster Session will be held in CeNSE Room TF-08)

Registration: Participation is by invitation only. Registrations are now closed.


Tuesday, January 7

Morning Lectures 9:00 am - 1:00 pm

9:00 am

Coffee, Registration & Welcome

9:30 am

Introduction & Recent Advances: Game Theory
David Parkes
[Abstract]


David Parkes
Professor, Harvard University

Introduction & Recent Advances: Game Theory

Abstract: This tutorial will first introduce the main solution concepts (Nash equilibrium, correlated equilibrium, coarse correlated equilibrium) for simultaneous move games. From there we will discuss compact representations and characterization results (including potential games, congestion games, graphical games), and review recent results on the computational complexity of finding equilibria in games.

References:
(1) Economics and Computation
David Parkes and Sven Seuken, Upcoming
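
A minimal Python sketch of one characterization result mentioned in the abstract: in a congestion game (a potential game), best-response dynamics terminate at a pure Nash equilibrium. The game, the number of players, and the update order below are made up purely for illustration.

```python
# Best-response dynamics in a toy congestion game. Every congestion game is a
# potential game, so letting players switch to cheaper resources one at a time
# must terminate in a pure Nash equilibrium.
import random

n_players, resources = 6, ["A", "B"]
# cost of a resource = number of players currently using it (identity delay)
choice = {p: random.choice(resources) for p in range(n_players)}

def load(r):
    return sum(1 for c in choice.values() if c == r)

changed = True
while changed:
    changed = False
    for p in range(n_players):
        current = choice[p]
        # cost player p would face on each resource if it moved there
        costs = {r: load(r) + (0 if r == current else 1) for r in resources}
        best = min(costs, key=costs.get)
        if costs[best] < costs[current]:
            choice[p] = best          # strict improvement: take it
            changed = True

print("Pure Nash equilibrium loads:", {r: load(r) for r in resources})
```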


11:00 am

Coffee

11:30 am

Introduction & Recent Advances: Statistical Learning
Shivani Agarwal
[Abstract]


Shivani Agarwal
Assistant Professor, Indian Institute of Science

Introduction & Recent Advances: Statistical Learning

Abstract: This tutorial will give a brief introduction to statistical learning, and a flavor of recent advances in designing efficient learning algorithms with desirable statistical properties. We will focus mostly on supervised learning in discrete prediction spaces. The first part of the tutorial will focus on binary classification, which involves predicting one of just two labels. We will introduce notions of loss functions, generalization error, Bayes error, and statistical consistency, and will look at what is known about consistency of empirical risk minimization (ERM) and surrogate risk minimization algorithms, concluding the first part with the seminal results of Bartlett et al. (2006) on classification-calibrated surrogate losses. In the second part of the tutorial, we will consider more general discrete prediction problems, such as multiclass 0-1 classification, sequence prediction, document (subset) ranking, etc., where the prediction space is finite but can be arbitrarily large, and the loss structure can be captured by a loss matrix. We will introduce the notion of convex calibration dimension of a loss matrix, and briefly describe some recent results on designing convex calibrated surrogate losses for an arbitrary target loss matrix. We will conclude with a summary of some open directions.

References:
(1) Machine Learning Lecture Notes (Lecture 1)
Shivani Agarwal, 2013 [pdf]

(2) Machine Learning Lecture Notes (Lecture 24)
Shivani Agarwal, 2013 [pdf]

(3) Classification calibration dimension for general multiclass losses
Harish G. Ramaswamy and Shivani Agarwal, 2012 [pdf]

(4) Convex calibrated surrogates for low-rank loss matrices with applications to subset ranking losses
Harish G. Ramaswamy, Shivani Agarwal and Ambuj Tewari, 2013 [pdf]
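
A minimal sketch of the surrogate risk minimization idea from the first part of the tutorial: gradient descent on the convex logistic loss (a classification-calibrated surrogate) for binary classification, reported against the 0-1 error. The synthetic Gaussian data, step size, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 2
X = np.vstack([rng.normal(+1, 1, (n // 2, d)), rng.normal(-1, 1, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

w = np.zeros(d)
for _ in range(500):
    margins = y * (X @ w)
    # gradient of the average logistic loss log(1 + exp(-margin))
    grad = -(X * (y * (1 / (1 + np.exp(margins))))[:, None]).mean(axis=0)
    w -= 0.5 * grad                      # gradient step on the surrogate risk

zero_one_error = np.mean(np.sign(X @ w) != y)
print("training 0-1 error:", zero_one_error)
```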





Afternoon Lectures 3:00-5:30 pm

3:00 pm

Coffee

3:30 pm

Introduction & Recent Advances: Online Learning & Game Theory
Avrim Blum
[Abstract]


Avrim Blum
Professor, Carnegie Mellon University

Introduction & Recent Advances: Online Learning & Game Theory

Abstract: This tutorial will discuss several simple online learning algorithms with surprisingly strong guarantees for repeated decision-making in uncertain environments. For example, how should I decide what route I should take to work each day if I have to decide on my route before I know what traffic will be like that day? How should a seller adapt prices based on demand in real-time? The tutorial will also discuss connections of these algorithms and their guarantees to central concepts in game theory. Specific topics include: algorithms for "combining expert advice", "sleeping experts", and "bandit" problems; algorithms for implicitly specified problems; connections to game-theoretic notions of minimax optimality and Nash and correlated equilibria; and connections to classic graph algorithm problems.

References:
(1) Learning, Regret minimization, and Equilibria
Avrim Blum and Y. Mansour, 2007 [pdf]

(2) Lecture notes on Machine Learning Theory
Avrim Blum, 2012 [pdf]

(3) Lecture notes on Algorithms, Games, and Networks
Avrim Blum and Ariel Procaccia, 2013 [pdf]
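
A minimal sketch of the "combining expert advice" setting: the multiplicative-weights (Hedge) algorithm, whose regret against the best expert grows like sqrt(T ln N). The random losses and the learning rate below are illustrative assumptions, not taken from the lecture notes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, T, eta = 10, 1000, 0.1
weights = np.ones(n_experts)
alg_loss, expert_loss = 0.0, np.zeros(n_experts)

for t in range(T):
    probs = weights / weights.sum()
    losses = rng.random(n_experts)        # an adversary could choose these instead
    alg_loss += probs @ losses            # expected loss of the randomized algorithm
    expert_loss += losses
    weights *= np.exp(-eta * losses)      # downweight experts that did badly

regret = alg_loss - expert_loss.min()
print(f"regret after {T} rounds: {regret:.1f}  (bound ~ sqrt(T ln N))")
```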



Wednesday, January 8

Morning Lectures 9:00 am - 1:00 pm

9:00 am

Coffee

9:30 am

Introduction & Recent Advances: Markov Chains/Distributed Algorithms, Network Centrality and Statistical Inference
Devavrat Shah
[Abstract]


Devavrat Shah
Associate Professor, Massachusetts Institute of Technology

Introduction & Recent Advances: Markov Chains/Distributed Algorithms, Network Centrality and Statistical Inference

Abstract: Network centrality is a network or graph function that assigns a "score" to each node of the graph depending upon the network structure. Some popular examples include PageRank, Modularity and Betweenness. Historically, centralities have been proposed as heuristics to utilize the network structure for various data processing tasks, e.g. web search. Usually, such functions are expected to be computable at scale.

In this tutorial and associated short talks, we shall discuss, through concrete examples, a principled approach for finding network centrality for the task at hand. These examples include finding influential agents, rank aggregation and information aggregation/crowd-sourcing. In a large class of scenarios (including the examples discussed), network centrality boils down to finding the stationary distribution of a random walk over the network. We shall discuss a "local" algorithm to solve this using a classical result from the theory of Markov chains.
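
As a baseline illustration of the "network centrality as a stationary distribution" viewpoint, here is a minimal sketch that computes the stationary distribution of a random walk on a small made-up directed graph by power iteration (the talk's "local" algorithm is a different, more scalable procedure; this is just the global baseline).

```python
import numpy as np

# adjacency of a toy 4-node directed graph (assumption: every node has out-edges)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

pi = np.full(4, 0.25)
for _ in range(1000):
    pi = pi @ P                           # one step of the random walk
pi /= pi.sum()
print("centrality scores (stationary distribution):", np.round(pi, 3))
```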


11:00 am

Coffee

11:30 am

Introduction & Recent Advances: Mechanism Design
David Parkes
[Abstract]


David C. Parkes
Professor, Harvard University

Introduction & Recent Advances: Mechanism Design

Abstract: This tutorial will first review two basic positive results in mechanism design -- for weighted affine maximization (Groves mechanism) and for single-parameter domains (Myerson mechanism). We will also provide an example of an argument towards an impossibility result (Gibbard-Satterthwaite). The last section of the tutorial will discuss some aspects of algorithmic mechanism design (for the NP-hard knapsack auction, motivated by the more general problem of combinatorial auctions, and the min-makespan task assignment problem). Time permitting, we will introduce the price-of-anarchy extension theorem framework that is developing for prior-free auction design.

References:
(1) Economics and Computation
David Parkes and Sven Seuken, Upcoming
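
A minimal sketch of the Groves/VCG idea in its simplest instance, the single-item Vickrey auction, where the winner pays the externality it imposes on the others (the second-highest bid). The bids are made-up numbers for illustration.

```python
def vickrey(bids):
    """bids: dict bidder -> reported value. Returns (winner, payment)."""
    winner = max(bids, key=bids.get)
    # welfare the others lose because the winner exists = highest remaining bid
    payment = max(v for b, v in bids.items() if b != winner)
    return winner, payment

# truthful bidding is a dominant strategy under this payment rule
winner, payment = vickrey({"alice": 10, "bob": 7, "carol": 4})
print(winner, "wins and pays", payment)   # alice wins and pays 7
```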



Afternoon Lectures 3:00-5:30 pm

3:00 pm

Coffee

3:30 pm

Clustering Data: Does Theory help?
Ravi Kannan
[Abstract]


Ravi Kannan
Principal Researcher, Microsoft Research India

Clustering Data: Does Theory help?

Abstract: Clustering is the problem of dividing a set of data points in high-dimensional space into groups of similar points. It is an important ingredient of algorithms in many areas. Theoretical Computer Science has brought to bear powerful ideas to find nearly optimal clusterings. In Statistics, mixture models of data have been useful in understanding the structure of data and developing algorithms. In practice, many heuristics (e.g., dimension reduction, the k-means algorithm) are widely used. The talk will describe some aspects of the first two, and attempt to answer the question: Is there a happy marriage of these with practice?
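
A minimal sketch of the k-means (Lloyd's) heuristic mentioned in the abstract, run on synthetic Gaussian data; with random initialization it returns a local optimum, which is part of the gap between practice and the near-optimal guarantees discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in ((0, 0), (4, 0), (2, 3))])
k = 3
centers = X[rng.choice(len(X), k, replace=False)]

for _ in range(50):
    # assignment step: each point goes to its nearest center
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    # update step: each center moves to the mean of its points
    new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print("cluster centers:\n", np.round(centers, 2))
```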



Thursday, January 9

Morning Session 9:00 am - 12:25 pm

Learning & Incentives for Crowdsourcing and Mechanism Design
Session Chairs: David Parkes and Devavrat Shah

9:00 am

Coffee

9:30 am

Payment Rules through Discriminant-Based Classifiers
David Parkes
[Abstract]


David C. Parkes
Professor, Harvard University

Payment Rules through Discriminant-Based Classifiers

Abstract: By adopting the goal of minimizing expected ex post regret in place of incentive compatibility, we can use statistical learning techniques to design payment rules. Using a target outcome rule and oracle-access to a type distribution as input, the method trains a discriminant-based classifier, with an admissible structure imposed on the discriminant rule that yields desirable incentive properties when this discriminant is used as a payment rule. We can scale up training by adopting succinct k-wise dependent valuation models and making a connection with Markov networks. Experimental results in multi-parameter combinatorial auctions, and for an egalitarian outcome rule, show low ex post regret relative to current approaches.

References:
(1) Payment Rules through Discriminant-Based Classifiers
Paul Duetting, Felix Fischer, Pichayut Jirapinyo, John Lai, Benjamin Lubin and David Parkes, 2014 [pdf]


10:10 am

Team formation in a crowdsourcing set-up
Ankit Sharma
[Abstract]


Ankit Sharma
Graduate Student, Carnegie Mellon University

Team formation in a crowdsourcing set-up

Abstract: Crowdsourcing, in addition to being used to complete 'simple mechanical' tasks, is also being used by firms to find solutions to complex problems. Websites such as innocentive.com pose challenges that usually require significant effort and thinking. Moreover, these complex tasks require a variety of skills and knowledge in several scientific areas that a single individual need not possess. Such tasks might need a team of individuals to come together to accomplish them. This talk asks how we can form teams in a crowdsourcing set-up.

The unique difficulties for team formation in a crowdsourcing setting stem from the fact that the individual contributors come from extremely varied backgrounds and are forming teams for relatively short time periods. Forming a team that is productive requires that the members collectively have the requisite skills, and importantly, are compatible with one another so that they can work together as a team. Team formation in a crowdsourcing set-up therefore requires learning which set of individuals can form a team. We delve into both computational and game theoretic aspects of this problem.

This is work in progress, and some results presented in the talk are from joint work with Avrim Blum, Anupam Gupta and Ariel Procaccia.


10:35 am

Eliciting Honest Feedback in Crowdsourced Environments using Continuous Scoring Rules
Rohith Vallam
[Abstract]


Rohith Vallam
Graduate Student, Indian Institute of Science

Eliciting Honest Feedback in Crowdsourced Environments using Continuous Scoring Rules

Abstract: Eliciting accurate information on any object (perhaps a new product or service or person) using the wisdom of a crowd of individuals utilizing web-based platforms is an important research problem. We cast the elicitation problem in the framework of mechanism design with correlated private information and extend the standard peer prediction mechanism to incorporate multidimensional, continuous signals using strictly proper continuous scoring rules, and show that honest reporting is a Nash equilibrium when prior probabilities are common knowledge and the observations made by the raters are stochastically relevant. To compute payments for the nodes, we explore the logarithmic, quadratic, and spherical scoring rules using techniques from complex analysis. We also obtain some insights through simulations, including the relationship between the budget of the mechanism designer and the quality of the aggregated answer.
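
For reference, a minimal sketch of the three classical strictly proper scoring rules named in the abstract, written here for a report over finitely many outcomes (the talk works with their continuous counterparts). The belief vectors are made up, and the printout checks that honest reporting maximizes the expected score.

```python
import numpy as np

def log_score(q, i):        return np.log(q[i])
def quadratic_score(q, i):  return 2 * q[i] - np.sum(q ** 2)
def spherical_score(q, i):  return q[i] / np.linalg.norm(q)

truth = np.array([0.6, 0.3, 0.1])        # rater's true belief over 3 outcomes
honest, skewed = truth, np.array([0.9, 0.05, 0.05])
for name, rule in [("log", log_score), ("quadratic", quadratic_score),
                   ("spherical", spherical_score)]:
    # expected score under the true belief: honest reporting maximizes it
    exp_honest = sum(truth[i] * rule(honest, i) for i in range(3))
    exp_skewed = sum(truth[i] * rule(skewed, i) for i in range(3))
    print(f"{name:9s}  honest {exp_honest:.3f}  >  skewed {exp_skewed:.3f}")
```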


11:00 am

Coffee

11:20 am

Crowd Centrality
Devavrat Shah
[Abstract]


Devavrat Shah
Associate Professor, Massachusetts Institute of Technology

Crowd Centrality

Abstract: Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous “information piece-workers”, have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowd-sourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting.

In this talk, we shall discuss the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. The answer we obtain is in the form of an efficient task allocation (through random regular graphs) coupled with an algorithm that evaluates the "importance" of each worker in the crowd - the "crowd centrality". We evaluate it iteratively in a message-passing manner and utilize it to aggregate answers for tasks reliably.

We shall discuss variations of the basic problem in the form of experimentation, adaptation, generality of the domain of answers, and heterogeneity of workers.

The talk is based on joint work with David Karger (MIT) and Sewoong Oh (UIUC).

References:
(1) Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems
David R. Karger, Sewoong Oh and Devavrat Shah, 2014 [pdf]
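
A simplified dense-matrix sketch of the message-passing idea described above, loosely following the reference: task-to-worker and worker-to-task messages are iterated, and the final labels are reliability-weighted votes. The paper's algorithm runs on a sparse random regular assignment graph; here every worker answers every task, and the worker reliabilities are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_workers = 50, 30
true_labels = rng.choice([-1, 1], size=n_tasks)
reliability = rng.uniform(0.55, 0.95, size=n_workers)   # made-up P(correct)

# A[i, j] = worker j's +/-1 answer to task i (here everyone answers everything)
correct = rng.random((n_tasks, n_workers)) < reliability
A = np.where(correct, true_labels[:, None], -true_labels[:, None])

# task->worker messages x and worker->task messages y, iterated a few times
x = rng.normal(1.0, 1.0, size=(n_tasks, n_workers))
for _ in range(10):
    y = (A * x).sum(axis=0)[None, :] - A * x   # worker j's weight, excluding task i
    x = (A * y).sum(axis=1)[:, None] - A * y   # task i's signal, excluding worker j

estimates = np.sign((A * y).sum(axis=1))       # reliability-weighted vote
print("accuracy:", np.mean(estimates == true_labels))
```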


12:00 pm

Toward Buying Labels From the Crowd
Bo Waggoner
[Abstract]


Bo Waggoner
Graduate Student, Harvard University

Toward Buying Labels From the Crowd

Abstract: Suppose you have a machine learning task such as classification. Members of "the crowd" have drawn examples i.i.d. from an unknown distribution and have private costs for reporting their labels to you. Your goal is to design a mechanism/learning algorithm that buys these labels from the crowd, so that you have low generalization error while spending a small budget.

In this talk, I will describe some prior work in this type of setting and discuss our ongoing work. Our approach is to use a no-regret learning algorithm to learn from previously observed data and to determine the "value" or price we should pay for data held by the next crowd member. A key challenge is to characterize when such algorithms perform well; we have some results in very simple settings.

Joint work with Jacob Abernethy, Yiling Chen, and Chien-Ju Ho.



Afternoon Session 2:30-5:00 pm

Multi-armed Bandits
Session Chairs: Y. Narahari and Dinesh Garg

2:30 pm

Thompson Sampling: a provably good Bayesian heuristic for bandit problems
Shipra Agrawal
[Abstract]


Shipra Agrawal
Researcher, Microsoft Research India

Thompson Sampling: a provably good Bayesian heuristic for bandit problems

Abstract: The multi-armed bandit problem is a basic model for managing the exploration/exploitation trade-off that arises in many situations. Thompson Sampling [Thompson 1933] is one of the earliest heuristics for the multi-armed bandit problem, and it has recently seen a surge of interest due to its elegance, flexibility, efficiency, and promising empirical performance. In this talk, I will discuss recent results showing that Thompson Sampling gives near-optimal regret for several popular variants of the multi-armed bandit problem, including linear contextual bandits. Interestingly, these works provide a prior-free, frequentist-type analysis of a Bayesian heuristic, and thereby rigorous support for the intuition that once you acquire enough data, it doesn't matter what prior you started from because your posterior will be accurate enough.

References:
(1) Thompson Sampling for Contextual Bandits with Linear Payoffs
Shipra Agrawal and Navin Goyal, 2013 [pdf]

(2) Further Optimal Regret Bounds for Thompson Sampling
Shipra Agrawal and Navin Goyal, 2013 [pdf]
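
A minimal sketch of Thompson Sampling for Bernoulli-reward arms with Beta(1,1) priors; the true arm means and the horizon below are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])
successes = np.ones(3)                   # Beta posterior parameter (alpha)
failures = np.ones(3)                    # Beta posterior parameter (beta)
regret = 0.0

for t in range(5000):
    samples = rng.beta(successes, failures)   # one draw from each posterior
    arm = int(np.argmax(samples))             # play the arm that looks best
    reward = rng.random() < true_means[arm]
    successes[arm] += reward
    failures[arm] += 1 - reward
    regret += true_means.max() - true_means[arm]

print(f"cumulative regret after 5000 rounds: {regret:.1f}")
```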


3:10 pm

PAC Subset Selection in Stochastic Multi-armed Bandits
Shivaram Kalyanakrishnan
[Abstract]


Shivaram Kalyanakrishnan
Scientist, Yahoo! Labs Bangalore

PAC Subset Selection in Stochastic Multi-armed Bandits

Abstract: We consider the problem of selecting, from among n real-valued random variables, a subset of size m of those with the highest means, based on efficiently sampling the random variables. This problem, which we denote Explore-m, finds application in a variety of areas, such as stochastic optimization, simulation and industrial engineering, and on-line advertising. The theoretical basis of our work is an extension of a previous formulation using multi-armed bandits that is devoted to identifying just the single best of n random variables (Explore-1). Under a PAC setting, we provide algorithms for Explore-m and bound their sample complexity.

Our main contribution is the LUCB algorithm, which, interestingly, bears a close resemblance to the well-known UCB algorithm for regret minimization. We derive an expected-sample-complexity bound for LUCB that is novel even for single-arm selection. We then improve the problem-dependent constant in this bound through a novel algorithmic variant called KL-LUCB. Experiments affirm the relative efficiency of KL-LUCB over other algorithms for Explore-m. Our contributions also include a lower bound on the worst case sample complexity of such algorithms.

References:
(1) Efficient Selection of Multiple Bandit Arms: Theory and Practice
Shivaram Kalyanakrishnan and Peter Stone, 2010 [pdf]

(2) PAC Subset Selection in Stochastic Multi-armed Bandits
Shivaram Kalyanakrishnan, Ambuj Tewari, Peter Auer and Peter Stone, 2012 [pdf]

(3) Information Complexity in Bandit Subset Selection
Emilie Kaufmann and Shivaram Kalyanakrishnan, 2013 [pdf]
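
A minimal sketch of the LUCB idea for Explore-m on Bernoulli arms: at each round the weakest arm of the empirical top-m and its strongest challenger are sampled, and the algorithm stops when their confidence intervals separate up to epsilon. The confidence radius below is one standard choice and the arm means are made up; this is an illustration, not a faithful reimplementation of the papers' exact exploration rate.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.1, 0.3, 0.45, 0.5, 0.7, 0.8])   # made-up arm means
n, m, eps, delta = len(means), 2, 0.05, 0.1

counts = np.ones(n)
sums = (rng.random(n) < means).astype(float)        # pull every arm once

def radius(t):
    return np.sqrt(np.log(1.25 * n * t ** 4 / delta) / (2 * counts))

t = 1
while True:
    t += 1
    mu = sums / counts
    beta = radius(t)
    top = np.argsort(-mu)[:m]                       # empirically best m arms
    rest = np.setdiff1d(np.arange(n), top)
    h = top[np.argmin(mu[top] - beta[top])]         # weakest of the top set
    l = rest[np.argmax(mu[rest] + beta[rest])]      # strongest challenger
    if (mu[l] + beta[l]) - (mu[h] - beta[h]) < eps:
        break
    for a in (h, l):                                # sample the two critical arms
        sums[a] += rng.random() < means[a]
        counts[a] += 1

print("selected arms:", np.sort(top), "after", int(counts.sum()), "samples")
```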


3:50 pm

Coffee

4:10 pm

Multiarmed Bandit Mechanisms
Y. Narahari, Dinesh Garg
[Abstract]


Y. Narahari
Professor, Indian Institute of Science

Dinesh Garg
Researcher, IBM Research India

Multiarmed Bandit Mechanisms

Abstract: In this talk, we provide a view of the emerging landscape of multiarmed bandit mechanisms which represent an elegant and rich abstraction for online learning problems involving strategic agents. The talk is structured into four parts.

(1) There is an extensive body of literature on different flavors of the classical multiarmed bandit problem and we commence our talk by summarizing this body of literature through a taxonomy.

(2) When the arms are held by strategic agents, the problem comes alive with new research challenges. We attempt to provide an overview of current art in this area.

(3) We present recent work done at the Game Theory lab, IISc, on designing multiarmed bandit mechanisms for sponsored search auctions and crowdsourcing.

(4) There are exciting possibilities and opportunities for future research in this area and in the concluding part of the talk, we present some promising threads of research.

References:
(1) Truthful multi-armed bandit mechanisms for multi-slot sponsored search auctions
Akash Das Sharma, Sujit Gujar, and Y. Narahari, 2012 [pdf]

(2) Truthful Mechanism for Multi-Slot Sponsored Search Auctions
Debmalya Mandal and Y. Narahari, 2014

(3) A Quality Assuring Multi-Armed Bandit Crowdsourcing Mechanism with Incentive Compatible Learning.
Shweta Jain, Sujit Gujar, Onno Zoeter, Y. Narahari, 2014



Poster Session 5:15-6:30 pm


Friday, January 10

Morning Session 9:30 am - 12:00 pm

Rank Aggregation
Session Chairs: David Parkes and Shivani Agarwal

9:30 am

Coffee

9:55 am

Rank Centrality
Devavrat Shah
[Abstract]


Devavrat Shah
Associate Professor, Massachusetts Institute of Technology

Rank Centrality

Abstract: The question of aggregating pair-wise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding 'scores' for each object (e.g., a player's rating) is of interest for understanding the intensity of the preferences.

We propose "rank centrality" for discovering scores for objects from pairwise comparisons, along with an associated iterative algorithm to evaluate it. The algorithm has a natural random walk interpretation over the graph of objects with an edge present between a pair of objects if they are compared; the scores turn out to be the stationary probability of this random walk.

To establish the efficacy of the algorithm, the popular Bradley-Terry-Luce (BTL) model is considered. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. In particular, the number of samples required to learn the scores well with high probability depends on the structure of the comparison graph – when the Laplacian of the comparison graph has constant spectral gap, e.g. when pairs are chosen at random for comparison, this leads to near-optimal dependence on the number of samples.

This is based on joint work with Sahand Negahban (MIT) and Sewoong Oh (UIUC).

References:
(1) Iterative Ranking from Pair-wise Comparisons
Sahand Negahban, Sewoong Oh and Devavrat Shah, 2012 [pdf]
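
A minimal sketch of the Rank Centrality construction on made-up pairwise comparison counts: the chain moves from object i to object j with probability proportional to the fraction of their comparisons that j won, and the stationary distribution is used as the score vector.

```python
import numpy as np

# wins[i, j] = number of comparisons between i and j that j won (made up)
wins = np.array([[0, 8, 9, 7],
                 [2, 0, 6, 8],
                 [1, 4, 0, 6],
                 [3, 2, 4, 0]], dtype=float)
total = wins + wins.T                       # comparisons per pair
frac = np.divide(wins, total, out=np.zeros_like(wins), where=total > 0)

d_max = (total > 0).sum(axis=1).max()       # max number of compared neighbours
P = frac / d_max
np.fill_diagonal(P, 1 - P.sum(axis=1))      # lazy self-loops keep rows stochastic

pi = np.full(4, 0.25)
for _ in range(2000):
    pi = pi @ P                             # power iteration to the stationary dist.
print("scores:", np.round(pi / pi.sum(), 3))
```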


10:35 am

A Statistical convergence perspective of algorithms for rank aggregation from pairwise data
Arun Rajkumar
[Abstract]


Arun Rajkumar
Graduate Student, Indian Institute of Science

A Statistical convergence perspective of algorithms for rank aggregation from pairwise data

Abstract: There has been much interest recently in the problem of rank aggregation from pairwise data. Indeed, such problems arise in several applications, ranging from movie or webpage rankings to rankings of job candidates. A natural question that arises is: under what sorts of statistical assumptions do various rank aggregation algorithms converge to an `optimal' ranking? In this talk, we consider this question in a natural setting where pairwise comparisons are assumed to be drawn randomly and independently from some underlying probability distribution. We first show that, under a `time-reversibility' or Bradley-Terry-Luce (BTL) condition on the distribution, the rank centrality (PageRank) and least squares (HodgeRank) algorithms both converge to an optimal ranking. Next, we show that a matrix version of the Borda count algorithm, and more surprisingly, an algorithm which performs maximum likelihood estimation under a BTL assumption, both converge to an optimal ranking under a `low-noise' condition that is strictly more general than BTL. Finally, we propose a new SVM-based algorithm for rank aggregation from pairwise data, and show that this converges to an optimal ranking under an even more general condition that we term `generalized low-noise'. In all cases, we provide explicit sample complexity bounds for exact recovery of an optimal ranking. Our experiments confirm our theoretical findings and help to shed light on the statistical behavior of various rank aggregation algorithms.

References:
(1) A Statistical convergence perspective of algorithms for rank aggregation from pairwise data
Arun Rajkumar and Shivani Agarwal, 2014 [pdf]
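
A minimal sketch of the matrix version of the Borda count mentioned in the abstract: each item is scored by its average empirical probability of beating the other items, and items are ranked by that score. The pairwise probabilities below are made up for illustration.

```python
import numpy as np

# beats[i, j] = empirical probability that item i beats item j
beats = np.array([[0.0, 0.7, 0.8, 0.9],
                  [0.3, 0.0, 0.6, 0.7],
                  [0.2, 0.4, 0.0, 0.6],
                  [0.1, 0.3, 0.4, 0.0]])

n = beats.shape[0]
borda = beats.sum(axis=1) / (n - 1)        # average win probability per item
ranking = np.argsort(-borda)               # best item first
print("Borda scores:", np.round(borda, 2), "-> ranking:", ranking.tolist())
```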


11:00 am

Coffee

11:20 am

Flexible Parametric Ranking Models
David Parkes
[Abstract]


David Parkes
Professor, Harvard University

Flexible Parametric Ranking Models

Abstract: The Plackett-Luce rank model is simple and tractable to work with, but seems unlikely to be sufficient for many practical modeling and inference problems. I describe the more flexible location family of random-utility models (RUMs), which includes the Normal-RUM as a special case, and can extend to allow for mixture models. It also affords interesting new connections with canonical models used within econometrics. I demonstrate improved fit on a dataset of rank preferences, along with an application to understanding crowdsourced human judgment data. In regard to the estimation problem, I briefly mention new results on consistent rank-breaking of data in connection with Generalized Method of Moments approaches.

References:
(1) Computing Parametric Ranking Models via Rank-Breaking
Hossein Azari Soufiani, David Parkes and Lirong Xia, 2014 [pdf]
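
For concreteness, a minimal sketch of the Plackett-Luce model the talk starts from: a ranking is generated by repeatedly choosing the next item with probability proportional to exp(theta) among the items not yet placed, so the log-likelihood is a sum of log-softmax terms. The parameters and rankings below are made up.

```python
import numpy as np

def plackett_luce_log_likelihood(ranking, theta):
    """log P(ranking | theta); ranking lists item indices from best to worst."""
    ll = 0.0
    remaining = list(ranking)
    for item in ranking:
        weights = np.exp(theta[remaining])
        ll += theta[item] - np.log(weights.sum())   # chosen among the remainder
        remaining.remove(item)
    return ll

theta = np.array([2.0, 1.0, 0.5, 0.0])              # latent item "qualities"
print(plackett_luce_log_likelihood([0, 1, 2, 3], theta))   # consistent ranking
print(plackett_luce_log_likelihood([3, 2, 1, 0], theta))   # reversed, lower
```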



Afternoon Session 2:30-5:30 pm

Learning & Optimization
Session Chairs: Chiranjib Bhattacharyya and Shivani Agarwal

2:30 pm

Matrix Completion and Alternating Minimization
Prateek Jain
[Abstract]

3:10 pm

The Geometry of Diversity: Determinantal Point Processes
Alex Kulesza
[Abstract]


Alex Kulesza
Postdoctoral Fellow, University of Michigan

The Geometry of Diversity: Determinantal Point Processes

Abstract: Many real-world problems involve negative interactions; we might want search results to be diverse, sentences in a summary to cover distinct aspects of the subject, or objects in an image to occupy different regions of space. However, traditional structured probabilistic models tend to deal poorly with these kinds of problems; Markov random fields, for example, become intractable even to approximate.

In this talk we will define and describe determinantal point processes (DPPs), which behave in a complementary fashion: while they cannot encode positive interactions, they define expressive models of negative correlations that come with surprising and elegant algorithms for many types of inference, including normalization, conditioning, marginalization, and sampling. While DPPs have been studied by mathematicians for over 35 years and play an important role in random matrix theory, we will show how they can also be used as models for real-world data.

We will describe a series of new extensions, algorithms, and theoretical results that make modeling and learning with DPPs efficient and practical. Experimentally, we show that the techniques we introduce allow DPPs to be used for performing tasks like document summarization, multiple human pose estimation, search diversification, and the threading of large document collections.

References:
(1) Determinantal point processes for machine learning
A. Kulesza and B. Taskar, 2012 [pdf]
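
A minimal sketch of an L-ensemble DPP on made-up two-dimensional feature vectors: the probability of a subset S is det(L_S)/det(L+I), so sets containing near-duplicate items receive very little probability mass, which is the negative-correlation behaviour described above.

```python
import numpy as np
from itertools import combinations

# quality-weighted feature vectors for 4 items; items 0 and 1 are near-duplicates
B = np.array([[1.0, 0.0],
              [0.99, 0.1],
              [0.0, 1.0],
              [0.7, 0.7]])
L = B @ B.T                                    # DPP kernel (L-ensemble)
Z = np.linalg.det(L + np.eye(len(L)))          # normalizer sums over all subsets

def prob(S):
    S = list(S)
    return np.linalg.det(L[np.ix_(S, S)]) / Z

for S in combinations(range(4), 2):
    print(S, round(prob(S), 4))                # diverse pairs like (0, 2) dominate
```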


3:50 pm

Coffee

4:10 pm

Making SVMs robust to uncertainty in Kernel matrices
Chiranjib Bhattacharyya
[Abstract]


Chiranjib Bhattacharyya
Associate Professor, Indian Institute of Science

Making SVMs robust to uncertainty in Kernel matrices

Abstract: We study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. We explore formulations derived from two different frameworks, namely Chance Constraint Programming (CCP) and Robust Optimization (RO). Though the CCP setting leads to a non-convex formulation, it sometimes reduces to Second Order Cone Programs (SOCPs). On the other hand, we show that the RO-based formulation always yields an SOCP.

The RO-based formulation can be reformulated as a saddle point problem which can be solved by a first order algorithm with an efficiency estimate of $O(1/T^2)$, where $T$ is the number of iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets is presented to compare the proposed formulations.


4:50 pm

Bayesian Optimization for Machine Learning and Science
Jasper Snoek
[Abstract]


Jasper Snoek
Postdoctoral Researcher, Harvard University

Bayesian Optimization for Machine Learning and Science

Abstract: Recent advances in machine learning are starting to have a profound impact throughout the sciences and industry. However, many of the most powerful machine learning models remain challenging to use effectively by all but a select few domain experts. How can we make machine learning more accessible to non-experts? A major hindrance is that the use of machine learning algorithms frequently involves careful tuning of various meta-parameters such as learning parameters and model hyperparameters. Unfortunately, this tuning is often a "black art" requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal for automatic approaches that can optimize the performance of any given learning algorithm to the problem at hand. We develop a principled approach to this problem through constructing a statistical model of the functional mapping between these parameters and a given objective which we can iteratively refine and query. The resulting "Bayesian optimization" procedure was able to find better parameters for multiple recent machine learning models than the experts who developed them and achieved state of the art results on various benchmark problems. Naturally, many more general problems across the sciences involve a similar form of iterative parameter tuning and recent work has been focused on developing a general tool for tuning parameters for arbitrary problems.

In this talk I will give an overview of this approach, detail some applications to problems in rehabilitation science and assistive technology, and discuss some exciting new applications with collaborators at Harvard and MIT. Time permitting, I'll give a quick tutorial of the open-source code package we have developed to perform Bayesian optimization.

References:
(1) Practical Bayesian Optimization of Machine Learning Algorithms
Jasper Snoek, Hugo Larochelle and Ryan Prescott Adams, 2012 [pdf]

(2) Multi-Task Bayesian Optimization
Kevin Swersky, Jasper Snoek and Ryan Prescott Adams, 2013 [pdf]
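
A minimal sketch of the Bayesian optimization loop described in the talk, using only numpy/scipy: a Gaussian-process surrogate with a squared-exponential kernel and an expected-improvement acquisition maximized on a grid. The objective, kernel length-scale, and evaluation budget are made-up illustration choices; this is not the authors' open-source package.

```python
import numpy as np
from scipy.stats import norm

def f(x):                                     # black-box objective to maximize
    return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

def kernel(a, b, ls=0.1):                     # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(0)
X = rng.random(3)                             # a few random initial evaluations
y = f(X)
grid = np.linspace(0, 1, 200)

for _ in range(10):
    K = kernel(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
    Kinv = np.linalg.inv(K)
    ks = kernel(grid, X)
    mu = ks @ Kinv @ y                        # GP posterior mean on the grid
    var = 1.0 - np.einsum("ij,jk,ik->i", ks, Kinv, ks)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    best = y.max()
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print("best x found:", X[np.argmax(y)], "f:", y.max())
```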





Visitor Information

Map of Bangalore with event-related landmarks marked:


Map of IISc with event-related landmarks marked:

Map of eateries around CVH/BEL Road:


For tourism-related information, see here.

There is also a wealth of visitor information available here.

Fine dining in Bangalore.



Student Volunteer Team
  • Harikrishna Narasimhan
  • Rohit Vaish
  • Siddarth Ramamohan
  • Aadirupa Saha
  • Arpit Agarwal
  • Suprovat Ghoshal
  • Saneem Ahmed


Sponsors

The Indo-US Joint Center and Lectures Week are supported by the Indo-US Science & Technology Forum.



We also gratefully acknowledge additional support for the Lectures Week from the following sponsors:

Gold Sponsors:


Bronze Sponsor: