Announcements

  • 11/28/2017
    The schedule is now available here. The list of posters is also available here.
  • 11/07/2017
    Submit a Poster. The deadline for paper submissions for the poster session is November 22.
  • 11/07/2017
    Registration is Free. Please RSVP, as occupancy is limited.

Invited Speakers

  • Avrim Blum

    Professor and Chief Academic Officer
    Toyota Technological Institute at Chicago
    http://ttic.uchicago.edu/~avrim/

    Title. Learning about Agents and Mechanisms from Opaque Transactions

    Abstract. In this talk I will discuss the problem of trying to learn the requirements and preferences of economic agents by observing the outcomes of an allocation mechanism whose rules you also don’t initially know. As an example, consider observing web pages where the agents are advertisers and the winners are those whose ads show up on the given page. We know these ads are placed based on bids and other constraints given to some auction mechanism, but we do not get to see these bids and constraints. From repeated observations of this type, we would like to learn the requirements and preferences of the agents. Or consider observing the input-output behavior of a cloud computing service, where the input consists of a set of agents requesting service, and the output tells us which actually received service and which did not. In this case, we assume the agents who did not receive service were not served due to overlap of their resource needs with higher-priority requests. From such input-output behavior, we would like to learn the underlying structure. Our goal, from observing a series of such interactions, is to learn both the needs and preferences of the agents and perhaps also the rules of the allocation mechanism. This talk is based on joint work with Yishay Mansour and Jamie Morgenstern, as well as joint work with undergraduate student Michael Liang.

  • Cynthia Dwork

    Gordon McKay Professor of Computer Science
    Harvard Paulson School of Engineering
    Harvard University
    https://www.seas.harvard.edu/directory/dwork

    Title. Fair Questions

    Abstract. “Unfairness” of algorithms – for tasks ranging from advertising to recidivism prediction – has attracted considerable attention in the popular press. Algorithmic techniques for achieving fairness now routinely appear in dedicated workshops and symposia as well as in established research conferences. This talk will focus on the (relatively) new study of a mathematically rigorous theory of fairness: definitions, methods, and provable limits and tradeoffs, providing a lens for hot-button policy issues such as “interpretability” and raising new questions for future research.

  • Mitsuru Igami

    Associate Professor of Economics
    Department of Economics
    Yale University
    https://sites.google.com/view/igami

    Title. Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo

    Abstract. Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, including the classical games of chess, shogi, and Go, but understanding and explaining AI remain challenging. This paper studies the machine-learning algorithms for developing the game AIs, and provides their structural interpretations. Specifically, chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza represents an estimated value function via Rust's (1987) nested fixed-point method. AlphaGo's "supervised-learning policy network" is a deep neural network (DNN) version of Hotz and Miller's (1993) conditional choice probability estimates; its "reinforcement-learning value network" is equivalent to Hotz, Miller, Sanders, and Smith's (1994) simulation method for estimating the value function. Their performance suggests that DNNs are a useful functional form when the state space is large and data are sparse. Explicitly incorporating strategic interactions and unobserved heterogeneity in the data-generating process would further improve AIs' explicability.

  • Elie Tamer

    Professor of Economics
    Department of Economics
    Harvard University
    https://scholar.harvard.edu/tamer/home

    Title. Inference on Auctions with Weak Assumptions on Information

    Abstract. Given a sample of bids from independent auctions, this paper examines the question of inference on auction fundamentals (e.g., valuation distributions, welfare measures) under weak assumptions on the information structure. The question is important as it allows us to learn about the valuation distribution in a robust way, i.e., without assuming that a particular information structure holds across observations. We leverage recent contributions in the robust mechanism design literature that exploit the link between Bayesian Correlated Equilibria and Bayesian Nash Equilibria in incomplete information games to construct an econometric framework for learning about auction fundamentals using observed data on bids. We showcase our construction of identified sets in private value and common value auctions. Our approach for constructing these sets inherits the computational simplicity of solving for correlated equilibria: checking whether a particular valuation distribution belongs to the identified set is as simple as determining whether a linear program is feasible. A similar linear program can be used to construct the identified set on various welfare measures and counterfactual objects. For inference and to summarize statistical uncertainty, we propose novel finite-sample methods using tail inequalities that are used to construct confidence regions on sets. We also highlight methods based on the Bayesian bootstrap and subsampling. A set of Monte Carlo experiments shows adequate finite-sample properties of our inference procedures. We also illustrate our methods using data from OCS auctions.
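
    To make the last point concrete, here is a minimal sketch (in Python, with made-up numbers) of the kind of linear-program feasibility check described above: given a candidate valuation distribution, we ask whether some joint distribution over (valuation, bid) pairs matches the observed bid marginal. The discrete grids, the observed marginal, and the toy restriction that bids never exceed values are illustrative stand-ins, not the paper's actual equilibrium conditions.

      # Hypothetical discretization: is a candidate valuation distribution
      # consistent with an observed bid distribution? Answered via an LP.
      import numpy as np
      from scipy.optimize import linprog

      vals = [1, 2, 3]            # discrete valuation grid (illustrative)
      bids = [1, 2, 3]            # discrete bid grid (illustrative)
      n = len(vals) * len(bids)   # unknowns: joint probabilities p(v, b)

      def idx(i, j):              # flatten (value i, bid j) into one index
          return i * len(bids) + j

      # Observed marginal distribution of bids (made-up numbers).
      bid_marginal = np.array([0.5, 0.3, 0.2])

      def in_identified_set(val_marginal):
          A_eq, b_eq = [], []
          for j in range(len(bids)):      # marginal over bids matches the data
              row = np.zeros(n)
              for i in range(len(vals)):
                  row[idx(i, j)] = 1.0
              A_eq.append(row); b_eq.append(bid_marginal[j])
          for i in range(len(vals)):      # marginal over values matches candidate
              row = np.zeros(n)
              for j in range(len(bids)):
                  row[idx(i, j)] = 1.0
              A_eq.append(row); b_eq.append(val_marginal[i])
          # Probabilities nonnegative; toy restriction: bids never exceed values.
          bounds = [(0, 0) if bids[j] > vals[i] else (0, None)
                    for i in range(len(vals)) for j in range(len(bids))]
          res = linprog(np.zeros(n), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                        bounds=bounds)
          return res.status == 0          # 0: a feasible joint distribution exists

      print(in_identified_set(np.array([0.2, 0.3, 0.5])))  # True: rationalizable
      print(in_identified_set(np.array([1.0, 0.0, 0.0])))  # False: cannot explain high bids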

Program

Location: CAEC17 will take place on Friday, December 1, at Microsoft, 1 Memorial Drive, 1st Floor, Horace Mann Theater.

  • 8:30 - 9:00
    Breakfast
  • 9:00 - 9:50
    Cynthia Dwork. Fair Questions

  • 9:50 - 10:40
    • Annie Liang. Predicting and Understanding Initial Play (joint with Drew Fudenberg)
    • Hongyao Ma. Spatio-Temporal Pricing for Ridesharing Platforms
    • Georgios Zervas. Consumer Reviews and Regulation: Evidence from NYC Restaurants (joint with Chiara Farronato)

    Annie Liang. Predicting and Understanding Initial Play (joint with Drew Fudenberg)
    Abstract. We take a machine learning approach to the problem of predicting initial play in strategic-form games. We predict game play data from previous laboratory experiments, and also a new data set of 200 games with randomly distributed payoffs that were played on Mechanical Turk. We consider two approaches, with the goals of uncovering new regularities in play and improving the predictions of existing theories. First, we use machine learning algorithms to train prediction rules based on a large set of game features. Examination of the games where our algorithm predicts play correctly but the existing models do not leads us to introduce a risk aversion parameter, which we find significantly improves predictive accuracy. Second, we augment existing empirical models by using play in a set of training games to predict how the models' parameters vary across new games. This modified approach generates better out-of-sample predictions, and provides insight into how and why the parameters vary. These methodologies are not special to the problem of predicting play in games, and may be useful in other contexts.
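
    A schematic stand-in for the first approach, using synthetic data: train an off-the-shelf classifier on payoff features to predict the row player's initial action. The noisy level-1 rule that generates "play" below is purely hypothetical, in place of the laboratory and Mechanical Turk data used in the paper.

      # Synthetic sketch: predict initial play from payoff features.
      # The noisy level-1 behavioral rule is an invented stand-in for data.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_games, n_actions = 2000, 3
      X = rng.uniform(0, 100, size=(n_games, n_actions * n_actions))
      payoffs = X.reshape(n_games, n_actions, n_actions)  # row player's payoffs

      # Hypothetical behavior: best response to a uniform opponent, 20% noise.
      level1 = payoffs.mean(axis=2).argmax(axis=1)
      noise = rng.integers(0, n_actions, size=n_games)
      y = np.where(rng.random(n_games) < 0.8, level1, noise)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("out-of-sample accuracy:", clf.score(X_te, y_te))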

    Hongyao Ma. Spatio-Temporal Pricing for Ridesharing Platforms
    Abstract. Ridesharing platforms match drivers and riders and use dynamic pricing to balance supply and demand. A challenge is to set prices that are appropriately smooth in time and space, in the sense that drivers will choose to accept their dispatches rather than drive to another area, or wait for higher prices or a better trip. We develop a discrete-time, multi-period and multi-location model for which there always exist anonymous, origin-destination competitive equilibrium prices, and give an efficient planning algorithm. The Spatio-Temporal Pricing (STP) mechanism computes a driver-pessimal competitive equilibrium plan at the beginning of time and after any driver deviation. The plan (dispatches of drivers to pick up riders, driver re-positioning, and payments) is welfare-maximizing and envy-free. The STP mechanism is incentive-aligned: it is a subgame-perfect equilibrium for drivers to always accept their dispatches at all times. We also give an impossibility result: there can be no dominant-strategy, welfare-optimal, envy-free and budget-balanced mechanism. Numerical simulations show that, in comparison to a baseline, myopic pricing mechanism under which drivers incur a high regret, the STP mechanism achieves significantly higher social welfare and no regret.

    Georgios Zervas. Consumer Reviews and Regulation: Evidence from NYC Restaurants (joint with Chiara Farronato)
    Abstract. We investigate complementarities between two signals of restaurant quality: health inspections and online reviews. To protect consumers, health inspectors periodically evaluate restaurant hygiene and assign health grades to restaurants. Consumers are also able to rate restaurant quality online, through review platforms like Yelp. We investigate whether consumer reviews can predict hygiene violations. To do so, we use machine learning to predict individual restaurant violations from the text of Yelp reviews. We find that consumer reviews are good predictors of food handling violations, but are poor predictors of facilities and maintenance violations. We then investigate how the hygienic information contained in online reviews affects demand and supply. On the demand side, we find that, conditional on hygiene-related information contained in online reviews, consumers still take health grades into account when choosing restaurants. On the supply side, we find that review platforms create reputational incentives for restaurants to reduce hygiene violations that are predictable from Yelp reviews. Specifically, we find that relative to restaurants not on Yelp, restaurants on Yelp score better on hygiene dimensions detectable by consumers than on dimensions not detectable by consumers. Our results have implications for the design of government regulation in a world where consumers rate their service experiences online.
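
    A minimal sketch of the text-prediction step, assuming a corpus of (review text, violation outcome) pairs; the reviews and labels below are invented stand-ins for the Yelp and inspection data used in the paper.

      # Hypothetical sketch: predict hygiene violations from review text
      # with a simple bag-of-words model; all data below is invented.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      reviews = ["saw a mouse near the kitchen", "amazing pasta, great service",
                 "dirty tables and a strange smell", "lovely decor, friendly staff",
                 "food was cold and the bathroom was filthy", "best brunch in town"]
      had_violation = [1, 0, 1, 0, 1, 0]  # hypothetical inspection outcomes

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(reviews, had_violation)
      print(model.predict_proba(["the kitchen looked dirty"])[:, 1])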

  • 10:40 - 11:10
    Coffee Break and Poster Session.
  • 11:10 - 12:00
    Elie Tamer. Inference on Auctions with Weak Assumptions on Information

  • 12:00 - 13:10
    Lunch Break.
  • 13:10 - 14:00
    Avrim Blum. Learning about Agents and Mechanisms from Opaque Transactions

  • 14:00 - 14:30
    Coffee Break and Poster Session.
  • 14:30 - 15:15
    • Nicole Immorlica. Approximate Efficiency in Matching Markets
    • Dean Eckles. Randomization inference for spillovers in networks
    • Nir Rosenfeld. Optimal Tagging with Markov Chain Optimization

    Nicole Immorlica. Approximate Efficiency in Matching Markets
    Abstract. We propose a measure of approximate ex-ante Pareto efficiency in matching markets. Using this notion, we quantify the intuited efficiency improvement of the so-called Boston mechanism and the recently-proposed choice-augmented deferred acceptance mechanism over the random serial dictatorship mechanism. Furthermore, we provide a formal statement and analysis of a raffle-type mechanism, which is conceptually simpler than the Boston mechanism and has a comparable efficiency guarantee.

    Dean Eckles. Randomization inference for spillovers in networks
    Abstract. Social and behavioral scientists are interested in testing hypotheses about spillovers (i.e., interference, exogenous peer effects) in social networks. However, when there is a single network, this is complicated by a lack of independent observations. We explore Fisherian randomization inference as an approach to exact finite-population inference, where the main problem is that the relevant hypotheses are non-sharp null hypotheses. Fisherian randomization inference can be used to test these hypotheses either by (a) making the hypotheses sharp by assuming a model for direct effects or (b) conducting conditional randomization inference such that the hypotheses are sharp. I present both of these approaches, the latter of which is developed in Aronow (2012) and our paper (Athey, Eckles & Imbens, 2017). I illustrate these methods with an application to a large voter turnout experiment on Facebook (Jones et al., 2017).

    Nir Rosenfeld. Optimal Tagging with Markov Chain Optimization
    Abstract. In many information systems, content creators and owners use tags to describe and annotate content. Since tags can have a considerable effect on the volume of traffic that eventually reaches an item, content creators should benefit from a disciplined approach to tag selection. Here we introduce the task of optimal tagging, where the goal is to choose a subset of tags for a new item such that the probability of browsing users reaching that item is maximized. We formulate this as a general state-selection problem over Markov chains, show that it is NP-hard but monotone and submodular, and thus admits a greedy approximation. We provide an efficient implementation of the greedy step, and demonstrate how individual content creators can optimize tags by simple trial and error.
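
    A minimal sketch of the greedy selection under illustrative assumptions: a random walk over existing tag states, where tagging the new item with tag s adds a jump from s to the item taken with probability q, and the objective is the probability of reaching the item within T steps.

      # Greedy state selection over a Markov chain (illustrative parameters):
      # pick the k tags that most increase the chance a random walk over
      # existing tags reaches the newly tagged item within T steps.
      import numpy as np

      rng = np.random.default_rng(1)
      m, k, T, q = 30, 5, 20, 0.2

      P = rng.random((m, m))
      P /= P.sum(axis=1, keepdims=True)  # row-stochastic transitions over tags
      init = np.full(m, 1.0 / m)         # walker starts at a uniform random tag

      def reach_prob(tags):
          """P(walk hits the new item within T steps | item carries `tags`)."""
          f = np.zeros(m)                # f[s] = P(hit item within j steps from s)
          for _ in range(T):
              cont = P @ f               # continue the walk, no jump this step
              f_new = cont.copy()
              f_new[list(tags)] = q + (1 - q) * cont[list(tags)]  # jump to item
              f = f_new
          return init @ f

      chosen = set()
      for _ in range(k):                 # greedy; submodularity gives (1 - 1/e)
          best = max((s for s in range(m) if s not in chosen),
                     key=lambda s: reach_prob(chosen | {s}))
          chosen.add(best)
      print(sorted(chosen), round(reach_prob(chosen), 4))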

  • 15:15 - 15:45
    Coffee Break and Poster Session.
  • 15:45 - 16:35
    Mitsuru Igami. Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo

  • 16:35 - 18:00
    Poster Session.

Posters

  • Eiichiro Kazumori. On the Virtue of Being Regular and Predictable: A Structural Analysis of the Primary Dealer System in the United States Treasury Auctions.
    We analyze the policy question of whether, after the financial crisis, the US Treasury should maintain the current security distribution mechanism of the primary dealer system in the Treasury market, in order to achieve the debt management objective of the lowest funding cost over time. We partially identify the effect of auction design through a counterfactual analysis using a novel asymptotic approximation method. We find that the primary dealer system achieves the lowest funding-cost volatility while maintaining an equal level of costs in comparison with the alternative direct bidding system and the syndicate bidding system.
  • Ali Jadbabaie, Elchanan Mossel and M. Amin Rahimian. Bayesian Group Decisions: Algorithms and Complexity [PDF]
    We address the computations that Bayesian agents undertake to realize their optimal actions, as they repeatedly observe each other's actions, following an initial private observation. We use iterated eliminations of infeasible signals (IEIS) to model the thinking process as well as the calculations of a Bayesian agent in a group decision scenario. We show that IEIS runs in exponential time; however, when the group structure is a partially ordered set, the Bayesian calculations simplify and polynomial-time computation of the Bayesian recommendations is possible. We next shift attention to the case where agents reveal their beliefs (instead of actions) at every decision epoch. We analyze the computational complexity of the Bayesian belief formation in groups and show that it is NP-hard. We also investigate the factors underlying this computational complexity and show how belief calculations simplify in special network structures or cases with strong inherent symmetries. We finally give insights about the statistical efficiency (optimality) of the beliefs and its relations to computational efficiency.
  • Zhibing Zhao, Tristan Villamil and Lirong Xia. Learning Mixtures of Random Utility Models [PDF]
    We tackle the problem of identifiability and efficient learning of mixtures of Random Utility Models (RUMs). We show that when the PDFs of utility distributions are symmetric, the mixture of k RUMs (denoted by k-RUM) is not identifiable when the number of alternatives m is no more than 2k-1. On the other hand, when m >= max{4k-2,6}, any k-RUM is generically identifiable. We then propose three algorithms for learning mixtures of RUMs: an EM-based algorithm, which we call E-GMM, a direct generalized-method-of-moments (GMM) algorithm, and a sandwich (GMM-E-GMM) algorithm that combines the other two. Experiments on synthetic data show that the sandwich algorithm achieves the highest statistical efficiency and GMM is the most computationally efficient. Experiments on real-world data from Preflib show that Gaussian k-RUMs provide better fitness than a single Gaussian RUM, the Plackett-Luce model, and mixtures of Plackett-Luce models w.r.t. commonly-used model fitness criteria. To the best of our knowledge, this is the first work on learning mixtures of general RUMs.
  • Amy Greenwald, Takehiro Oyakawa and Vasilis Syrgkanis. Simple vs Optimal Contests with Convex Costs
    We study an optimal contest design problem where contributors' abilities are private, their costs are convex as a function of their effort, and the designer seeks to maximize their total effort. We address the design of approximately optimal mechanisms that are robust in the sense that they are independent of the ability distribution and the precise form of the cost function. We show that a very simple all-pay contest, where the prize is distributed equally among the top quartile of contributors, is always a constant-factor approximation to the optimal contest for a large class of convex cost functions, when the number of contributors is larger than some constant. This result stands in contrast to contests with linear costs, where awarding a prize to a single top contributor is approximately optimal; when costs are convex, this latter allocation is far from optimal. Our result is enabled by novel results in the space of optimal mechanism design with convex costs, which could be of independent interest. Finally, we validate the performance of our approximately optimal contests via simulation experiments, which show much better empirical performance than the worst-case guarantees.
  • Amy Greenwald, Takehiro Oyakawa and Vasilis Syrgkanis. On Revenue-Maximizing Mechanisms assuming Convex Costs
    We investigate revenue-maximizing mechanisms in settings where bidders' utility functions are characterized by convex costs. Such costs arise, for instance, in procurement auctions for energy. We provide a constant-factor approximation guarantee for a prior-free randomized mechanism. Additionally, we propose two heuristics that allocate proportionally, using either value or virtual value. We describe experiments which show that for randomly drawn monotone hazard rate distributions, our mechanisms can achieve near optimal performance. Perhaps surprisingly, in the convex cost setting, it is preferable to allocate to multiple relatively high bidders, rather than only to bidders with the highest (virtual) value, as is optimal in the traditional quasi-linear utility setting.
  • Gerasimos Palaiopanos, Ioannis Panageas and Georgios Piliouras. Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos
    The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm that works as follows: a distribution is maintained on a certain set, and at each step the probability assigned to action $\gamma$ is multiplied by $(1 - \epsilon C(\gamma)) > 0$, where $C(\gamma)$ is the "cost" of action $\gamma$, and then rescaled to ensure that the new values form a distribution. We analyze MWU in congestion games where agents use arbitrary admissible constants as learning rates $\epsilon$ and prove convergence to exact Nash equilibria. Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action $\gamma$ is multiplied by $(1 - \epsilon)^{C(\gamma)}$, even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior.
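
    A toy side-by-side of the two update rules in a two-agent, two-machine load balancing game (cost of a machine = its expected load); the step size, starting strategies, and cost rescaling are illustrative choices, not the paper's constructions.

      # Compare the two MWU variants from the abstract in a toy two-agent,
      # two-machine load balancing game. The paper proves the linear variant
      # converges to Nash for admissible constant step sizes, while the
      # exponential variant can cycle or be chaotic in some such games; this
      # sketch just implements the two updates side by side.
      import numpy as np

      def run(variant, eps, steps=100):
          p = np.array([[0.7, 0.3], [0.4, 0.6]])  # row i = agent i's mixed strategy
          for _ in range(steps):
              nxt = p.copy()
              for i in (0, 1):
                  c = (1.0 + p[1 - i]) / 2.0      # expected loads, rescaled to (0, 1]
                  if variant == "linear":
                      w = p[i] * (1.0 - eps * c)  # multiply by (1 - eps*C(gamma)) > 0
                  else:
                      w = p[i] * (1.0 - eps) ** c # multiply by (1 - eps)^C(gamma)
                  nxt[i] = w / w.sum()            # rescale back to a distribution
              p = nxt
          return p

      print("linear      :", run("linear", 0.9).round(3))
      print("exponential :", run("exponential", 0.9).round(3))
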
  • Sujoy Sikdar, Sibel Adali and Lirong Xia. Top-Trading-Cycles Mechanisms with Acceptable Bundles
    In this paper, we extend the Top-Trading-Cycles (TTC) mechanism to housing markets with acceptable bundles. We propose Conditionally-Most-Important trees (CMI-trees) as a new language that generalizes commonly-studied languages such as generalized lexicographic preferences and LP-trees. Our main result is that for any instance of housing markets with acceptable bundles, and any CMI-tree profile, if TTC outputs an acceptable full allocation, then TTC is strict core selecting, non-bossy and indifferent to the order of implementing cycles as they become available. We propose multi-type housing markets with size constraints as a generalization of multi-type housing markets (Moulin 1995; Sikdar, Adali, and Xia 2017) and the setting of Fujita et al. (2015), and show that our extension of TTC always outputs an acceptable full allocation. Finally, we show that TTC is computationally hard to manipulate in this setting.
  • Mohammadhossein Bateni, Hossein Esfandiari, Vahab Mirrokni and Saeed Seddighin. A Study of Compact Reserve Pricing Languages
    Online advertising allows advertisers to implement fine-tuned targeting of users. While such precise targeting leads to more effective advertising, it introduces challenging multidimensional pricing and bidding problems for publishers and advertisers. In this context, advertisers and publishers need to deal with an exponential number of possibilities. As a result, designing efficient and "compact" multidimensional bidding and pricing systems and algorithms is practically important for online advertisement. Compact bidding languages have already been studied in the context of multiplicative bidding. In this paper, we study the compact pricing problem. More specifically, we first define the "multiplicative reserve price optimization problem" (MRPOP) and show that, unlike the unrestricted reserve price system, it is NP-hard to find the best reserve price solution in this setting. Next, we present an efficient algorithm to compute a solution for MRPOP that achieves a logarithmic approximation of the optimum solution of the unrestricted setting, where we can set a reserve price for each individual impression type (i.e., one element in the Cartesian product of all features). We do so by characterizing the properties of an optimum solution. Furthermore, our empirical study confirms the effectiveness of multiplicative pricing in practice. In fact, the simulations show that our algorithm obtains 90-98% of the value of the best solution that sets the reserve prices for each auction individually (i.e., the optimum set of reserve prices). Finally, in order to establish the tightness of our results in the adversarial setting, we demonstrate that there is no "compact" pricing system (i.e., a pricing system using $O(n^{1-\epsilon})$ bits to set $n$ reserve prices) that loses, in the worst case, less than a logarithmic factor compared to the optimum set of reserve prices. Notice that this hardness result is not restricted to the multiplicative setting and holds for any compact pricing system. In summary, not only does the multiplicative reserve price system show great promise in our empirical study, it is also theoretically optimal up to a constant factor in the adversarial setting.
  • Yang Liu and Yiling Chen. Surrogate Scoring Rule and a Dominant Truth Serum for Information Elicitation
    We present a method for information elicitation where each agent truthfully reporting his information is a dominant strategy, even when there is no ground-truth verification. It is well known that truthful elicitation in dominant strategy can be achieved using proper scoring rules if we have access to the ground-truth (i.e., the outcome of the event), and that truthful elicitation at a Bayesian Nash equilibrium can be achieved with various peer prediction mechanisms if the ground-truth is not available. In this work, we first observe that if we have access to a random variable that is a noisy version of the ground-truth with known bias, we can design surrogate scoring rules to achieve truthful elicitation in dominant strategy. These surrogate scoring rules are inspired by the use of surrogate loss functions in machine learning; they remove the bias from the noisy random variable so that, in expectation, a prediction is scored as if evaluated against the ground-truth. Built upon surrogate scoring rules, we develop a dominant truth serum (DTS) to achieve truthful elicitation in dominant strategy when agents' reports are the only source of information that we have access to. In DTS, reports from other agents are treated as noisy observations of the ground-truth, and we develop a method to learn and then correct the biases in these reports without having access to the ground-truth. In expectation, predictions in DTS are scored as if evaluated against the ground-truth.
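
    A numerical sketch of the bias-removal idea, under the simplifying assumption that the flip rates of the noisy ground-truth are known (in DTS they are learned from other agents' reports); the Brier score and the specific rates are illustrative.

      # Score a report against a NOISY copy of the ground truth, reweighted
      # so that in expectation it equals the score against the true outcome.
      import numpy as np

      rng = np.random.default_rng(0)
      e0, e1 = 0.2, 0.3            # P(noisy=1 | true=0), P(noisy=0 | true=1), assumed known

      def brier(r, y):             # a standard proper score (negated Brier loss)
          return -(r - y) ** 2

      def surrogate(r, y_noisy):   # de-biased score computed from the noisy label
          e_same, e_other = (e1, e0) if y_noisy == 1 else (e0, e1)
          return ((1 - e_other) * brier(r, y_noisy)
                  - e_same * brier(r, 1 - y_noisy)) / (1 - e0 - e1)

      y, r = 1, 0.8                # true outcome and an agent's reported probability
      noisy = np.where(rng.random(100_000) < e1, 1 - y, y)  # noisy observations
      freq1 = noisy.mean()
      print("score vs ground truth:", brier(r, y))
      print("mean surrogate score :",
            freq1 * surrogate(r, 1) + (1 - freq1) * surrogate(r, 0))
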
  • Vira Semenova. Machine Learning for Partial Identification: Example of Bracketed Data
    Partially identified models occur commonly in economic applications. A common problem in this literature is a regression problem with a bracketed (interval-censored) outcome variable Y, which creates a set-identified parameter of interest. Recent studies have only considered finite-dimensional linear regression in this context. To incorporate more complex controls into the problem, we consider a partially linear projection. We characterize the identified set B for the linear component of this projection and propose an estimator of its support function. Our estimator converges at the parametric rate and is asymptotically normal. It may be useful for labor economics applications that involve bracketed salaries and rich, high-dimensional demographic data about the subjects of the study.
  • Konstantinos Stouras. Crowdsourcing Contests with Private Types and Evaluation Uncertainty [PDF]
    We introduce a general model of crowdsourcing contests in which the performance of an agent is driven by his privately known ability type (adverse selection) and his costly effort choice, and is further affected by evaluation uncertainty or luck (moral hazard). We show that when the marginal cost of effort is sufficiently small and the solvers' types, as well as the evaluation uncertainty, are each independent and identically distributed, there is a unique Bayes-Nash equilibrium effort. This equilibrium effort is in symmetric and pure strategies and involves inactive types. Our model includes as special cases the Tullock contest (Tullock, 1967), the all-pay auction model with private information of Moldovanu et al. (2001), and the symmetric effort with evaluation uncertainty model of Lazear et al. (1981), which have thus far been studied separately. Our results suggest that several comparative statics results for all-pay auctions with private information are robust in the presence of noisy feedback.
  • Mina Karzand and Guy Bresler. Regret bounds and regimes of optimality for user-user and item-item recommendation systems
    We consider an online model for recommendation systems, with each user being recommended an item at each time-step and providing 'like' or 'dislike' feedback. A latent variable model specifies the user preferences: both users and items are clustered into types. All users of a given type have identical preferences for the items, and similarly, items of a given type are either all liked or all disliked by a given user. The model captures structure in both the item and user spaces, and in this paper, we assume that the type preference matrix is randomly generated. We describe two algorithms inspired by user-user and item-item collaborative filtering (CF), modified to explicitly make exploratory recommendations, and prove performance guarantees in terms of their expected regret. For two regimes of model parameters, with structure only in item space or only in user space, we prove information-theoretic lower bounds on regret that match our upper bounds up to logarithmic factors. Our analysis elucidates system operating regimes in which existing CF algorithms are nearly optimal.
  • Jose Correa, Patricio Foncea, Ruben Hoeksma, Tim Oosterwijk and Tjark Vredeveld. Posted Price Mechanism for a Random Stream of Customers
    Posted price mechanisms constitute a widely used way of selling items to strategic consumers. Although suboptimal, the attractiveness of these mechanisms comes from their simplicity and easy implementation. In this paper, we investigate the performance of posted price mechanisms when customers arrive in an unknown random order. We compare the expected revenue of these mechanisms to the expected revenue of the optimal auction in two different settings. Namely, the nonadaptive setting in which all offers are sent to the customers beforehand, and the adaptive setting in which an offer is made when a consumer arrives. For the nonadaptive case, we obtain a strategy achieving an expected revenue within at least a 1 − 1/e fraction of that of the optimal auction. We also show that this bound is tight, even if the customers have i.i.d. valuations for the item. For the adaptive case, we exhibit a posted price mechanism that achieves a factor 0.745 of the optimal revenue, when the customers have i.i.d. valuations for the item. Furthermore, we prove that our results extend to the prophet inequality setting and in particular our result for i.i.d. random valuations resolves a problem posed by Hill and Kertz [13].
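
    A Monte Carlo sketch of the nonadaptive guarantee under illustrative assumptions: i.i.d. U[0,1] valuations, for which Myerson's optimal auction is a second-price auction with reserve 1/2.

      # Post one fixed price to customers in random order, sell to the first
      # acceptor, and compare revenue to the optimal auction benchmark.
      import numpy as np

      rng = np.random.default_rng(0)
      n, trials = 5, 200_000
      vals = rng.random((trials, n))           # i.i.d. U[0,1] valuations

      def posted_price_revenue(p):
          return p * (vals >= p).any(axis=1).mean()  # revenue p iff someone accepts

      grid = np.linspace(0.01, 0.99, 99)
      best_p = max(grid, key=posted_price_revenue)

      top2 = np.sort(vals, axis=1)[:, -2:]     # two highest valuations per trial
      second, first = top2[:, 0], top2[:, 1]
      opt = np.where(first < 0.5, 0.0,         # second-price auction, reserve 1/2
                     np.maximum(second, 0.5)).mean()

      rev = posted_price_revenue(best_p)
      print(f"posted price {best_p:.2f}: revenue {rev:.4f}")
      print(f"optimal auction revenue {opt:.4f}, ratio {rev/opt:.3f} (bound: 1-1/e = 0.632)")
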
  • Wei Yu. The Optical Transceiver as Datacenter Bottleneck: Applying inventory management models to data center capacity planning
    The optical transceiver is becoming the cost driver of data center networks. In addition, its long order lead time increases the project lead time for intra-datacenter network deployment. We introduce an inventory management model to show that this creates extra cost for data centers.
  • Zhe Feng, Chara Podimata and Vasilis Syrgkanis. Learning to Bid Without Knowing your Value
    We address online learning in complex auction settings, such as sponsored search auctions, where the value of the bidder is unknown to her, evolves in an arbitrary manner, and is observed only if the bidder wins an allocation. We leverage the structure of the bidder's utility to provide algorithms whose regret against the best fixed bid in hindsight converges exponentially faster, in its dependence on the action space, than what would be derived by applying a generic bandit algorithm. Our results are enabled by analyzing a new online learning setting with outcome-based feedback, which generalizes learning with feedback graphs. We provide an online learning algorithm for this setting, of independent interest, with regret that grows only logarithmically with the number of actions and linearly only in the number of potential outcomes (the latter being very small in most auction settings).
  • Paul Duetting, Zhe Feng, Harikrishna Narasimhan and David Parkes. Optimal Economic Design through Deep Learning (short paper)
    Designing an auction that maximizes expected revenue is an intricate task. Despite major efforts, only the single-item case is fully understood. We explore the use of tools from deep learning on this topic. The design objective is revenue-optimal, dominant-strategy incentive compatible auctions. As a baseline, we show that multi-layer neural networks can learn almost-optimal auctions for a variety of settings for which there are analytical solutions, even without encoding characterization results into the design of the network. Looking ahead, deep learning has promise for deriving high-revenue auctions for poorly understood problems.
  • Thodoris Lykouris, Vahab Mirrokni and Renato Paes Leme. Stochastic bandits robust to adversarial corruptions
    We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. We consider a setting with $K$ arms with underlying stochastic distributions such that $\Delta(a)$ is the difference between the mean of arm $a$ and that of the optimal arm $a^*$. The total corruption $C$ corresponds to the total perturbation that an adversary can add to the stochastic input. Our main result is an algorithm with regret bounded by $O\big(\sum_{a \neq a^*} \frac{K C \log(KT/\delta) + \log(T)}{\Delta(a)} \cdot \log(KT/\delta)\big)$ with probability $1-\delta$, and pseudo-regret $O\big(\sum_{a \neq a^*} \frac{K C + \log(T)}{\Delta(a)} \cdot \log(KT)\big)$. Notably, our algorithm is agnostic to the total corruption, and the bound holds with respect to the total corruption that was added in retrospect. We also provide a lower bound showing that the linear dependency on the corruption level is necessary if one wants to ensure, at the same time, asymptotically optimal guarantees in the case that the input is totally stochastic.
  • Santiago Balseiro, Anthony Kim, Mohammad Mahdian and Vahab Mirrokni. Budget Management Strategies in Repeated Auctions: Uniqueness and Stability of System Equilibria
    In online advertising, platforms sell ad placements over time using repeated auctions. Platforms often provide budget management strategies, which allow budget constrained advertisers to control their cumulative expenditures by adjusting the allocations and payments. Each budget management strategy, when applied to all budget constrained advertisers simultaneously, drives the system toward a different equilibrium. Since most budget management strategies are implemented dynamically by adjusting their parameters over time, understanding the uniqueness and stability of equilibria is of practical relevance for advertising platforms. In this paper, we study the uniqueness and stability properties of five budget management strategies in terms of an equilibrium notion called system equilibrium: probabilistic throttling, thresholding, bid shading, reserve pricing, and multiplicative boosting. In particular, we show that reserve pricing and multiplicative boosting always admit a unique and stable system equilibrium. The same holds for probabilistic throttling in the case of two buyers. The answer is mixed for bid shading: we prove uniqueness and stability for a relevant class of buyer distributions and provide a specific example showing that bid shading can admit multiple and unstable equilibria. On the other hand, we find that thresholding does not admit a unique and stable system equilibrium in general. Our study thus complements existing research on budget management strategies, which has focused mostly on comparing budget management strategies in terms of performance metrics such as the seller’s profit and buyers’ utility.
  • Yiling Chen, Chara Podimata and Nisarg Shah. Strategyproof Linear Regression
    We study the problem of linear regression when training data points are held by strategic agents who have incentives to misreport them. We extend results about previously known families of strategyproof mechanisms, introduce a novel family of such mechanisms, and provide two characterizations of strategyproof mechanisms for linear regression.
  • Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni and Manolis Zampetakis. Prior Free Mechanism Design via Differential Privacy: Beyond Exponential Mechanism
    Consider a multi-bidder, multi-item auction scenario where the auctioneer wants to maximize her revenue. In the classical formulation of this problem, the auctioneer also has access to the prior distribution of the buyers' bids. Despite the theoretical importance of this line of literature, in practice knowledge of the prior distribution is very rarely a realistic assumption. Our goal in this work is to provide a framework for mechanism design that takes advantage of our understanding of optimal auctions with priors, while also taking into account the fact that buyers can strategize when providing information about their bid distributions. We propose a revenue optimization framework that is approximately optimal and approximately incentive compatible without relying on knowledge of prior distributions.
  • Constantinos Daskalakis, Christos Tzamos and Manolis Zampetakis. A Converse to Banach’s Fixed Point Theorem and its CLS Completeness
    Banach’s fixed point theorem for contraction maps has been widely used to analyze the convergence of iterative methods in non-convex problems. It is a common experience, however, that iterative maps fail to be globally contracting under the natural metric in their domain, making the applicability of Banach’s theorem limited. We explore how generally we can apply Banach’s fixed point theorem to establish the convergence of iterative methods when pairing it with carefully designed metrics. Our first result is a strong converse of Banach’s theorem, showing that it is a universal analysis tool for establishing uniqueness of fixed points and for bounding the convergence rate of iterative maps to a unique fixed point. In other words, we show that, whenever an iterative map globally converges to a unique fixed point, there exists a metric under which the iterative map is contracting and which can be used to bound the number of iterations until convergence. We illustrate our approach on the widely used power method, providing a new way of bounding its convergence rate through contraction arguments. We next consider the computational complexity of Banach’s fixed point theorem. Making the proof of our converse theorem constructive, we show that computing a fixed point whose existence is guaranteed by Banach’s fixed point theorem is CLS-complete. We thus provide the first natural complete problem for the class CLS, which was defined in Daskalakis and Papadimitriou [DP11] to capture the complexity of problems such as P-matrix LCP, computing KKT-points, and finding mixed Nash equilibria in congestion and network coordination games.
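
    A small illustration of the power-method example: the iterate's distance to the dominant eigenvector shrinks geometrically at rate |lambda_2/lambda_1|, exactly the kind of convergence a contraction argument under a well-chosen metric certifies. The random symmetric matrix below is an arbitrary test instance.

      # Power iteration: watch the error contract geometrically toward the
      # dominant eigenvector, at the rate predicted by the spectral gap.
      import numpy as np

      rng = np.random.default_rng(0)
      M = rng.random((6, 6))
      A = M @ M.T                            # symmetric PSD: real spectrum

      eigvals, eigvecs = np.linalg.eigh(A)
      v_star = eigvecs[:, -1]                # dominant eigenvector (the fixed point)
      rate = abs(eigvals[-2] / eigvals[-1])  # predicted contraction factor

      x = rng.random(6)
      x /= np.linalg.norm(x)
      prev_err = None
      for t in range(12):
          x = A @ x
          x /= np.linalg.norm(x)             # the iterative map x -> Ax / ||Ax||
          err = min(np.linalg.norm(x - v_star), np.linalg.norm(x + v_star))
          if prev_err is not None:
              print(f"step {t}: error {err:.2e}, "
                    f"ratio {err/prev_err:.3f} (predicted {rate:.3f})")
          prev_err = err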

Organizers

Yaron Singer

Computer Science, Harvard University

Vasilis Syrgkanis

Microsoft Research, New England

Sponsors
