I am an Assistant Professor in Management Science and Engineering and (by courtesy) in Computer Science and Electrical Engineering, in the School of Engineering at Stanford University. I lead the Stanford Causal AI Lab and am an Associate Director of the Stanford Causal Science Center. I am also affiliated with the Institute for Computational and Mathematical Engineering (ICME). I am an active member of the Stanford Operations Research Group, the Statistical Machine Learning Group, and the CS Theory Group, and I am affiliated with the Stanford AI Lab (SAIL), the Stanford Institute for Human-Centered AI (HAI), and Stanford Data Science. My research interests are in machine learning, causal inference, econometrics, online and reinforcement learning, game theory/mechanism design, and algorithm design. If you are a Stanford undergraduate or master's student, feel free to come by my office hours. If you are a Stanford PhD student interested in working with me, please reach out. If you want to pursue PhD studies at Stanford, please apply to the relevant PhD programs (primarily MS&E and ICME, and potentially also CS and EE) and list me as a faculty member of interest. If you are interested in learning more about causal machine learning, I will be offering two new courses, on Applied Causal Inference Powered by ML and AI and on Foundations of Causal Machine Learning. If you are interested in topics at the intersection of ML, AI, and game theory, I am offering a new course on Game Theory, Data Science and AI.

Until August 2022, I was a Principal Researcher at Microsoft Research, New England, where I co-led the project on Automated Learning and Intelligence for Causation and Economics (ALICE) and was a member of the EconCS and StatsML groups. I received my Ph.D. in Computer Science from Cornell University, where I had the privilege of being advised by Eva Tardos, and then spent two years as a postdoctoral researcher at Microsoft Research, NYC, as part of the Algorithmic Economics and Machine Learning groups. I obtained my diploma in EECS at the National Technical University of Athens, Greece. At MSR I had the opportunity to work with an amazing set of summer interns, including Nika Haghtalab, Gautam Kamath, Jieming Mao, Jonas Mueller, Yichen Wang, Steven Wu, Juba Ziani, Dylan Foster, Khashayar Khosravi, Mert Demirer, Nishanth Dikkala, Manolis Zampetakis, Michael Celentano, Gali Noti, Rahul Singh, Dhruv Rohatgi, Anish Agarwal, Matthew O'Keefe, Andrew Bennett, Korinna Frangias.

[Contact]

vsyrgk [at] stanford.edu
Huang Engineering Center
Office 252
475 Via Ortega
Stanford, CA 94305

News

Students

Representative Publications


Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation

Greg Lewis, Vasilis Syrgkanis
Preliminary version at 35th Conference on Neural Information Processing Systems, NeurIPS'21

Abstract

We consider the estimation of treatment effects in settings where multiple treatments are assigned over time and treatments can have a causal effect on future outcomes or the state of the treated unit. We propose an extension of the double/debiased machine learning framework to estimate the dynamic effects of treatments, which can be viewed as a Neyman orthogonal (locally robust) cross-fitted version of g-estimation in the dynamic treatment regime. Our method applies to a general class of non-linear dynamic treatment models known as Structural Nested Mean Models and allows the use of machine learning methods to control for potentially high dimensional state variables, subject to a mean square error guarantee, while still allowing parametric estimation and construction of confidence intervals for the structural parameters of interest. These structural parameters can be used for off-policy evaluation of any target dynamic policy at parametric rates, subject to semi-parametric restrictions on the data generating process. Our work is based on a recursive peeling process, typical in g-estimation, and formulates a strongly convex objective at each stage, which allows us to extend the g-estimation framework in multiple directions: i) to provide finite sample guarantees, ii) to estimate non-linear effect heterogeneity with respect to fixed unit characteristics, within arbitrary function spaces, enabling a dynamic analogue of the RLearner algorithm for heterogeneous effects, iii) to allow for high-dimensional sparse parameterizations of the target structural functions, enabling automated model selection via a recursive lasso algorithm. We also provide guarantees for data stemming from a single treated unit over a long horizon and under stationarity conditions.
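A minimal numerical sketch of the cross-fitting idea behind double/debiased ML (the static, partially linear case for simplicity, not the full dynamic g-estimation procedure above; the data-generating process and polynomial nuisance model are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
theta = 2.0                                   # true treatment effect

x = rng.normal(size=n)                        # confounder
t = 0.5 * x + rng.normal(size=n)              # treatment depends on x
y = theta * t + np.sin(x) + rng.normal(size=n)

def fit_predict(x_tr, z_tr, x_te):
    # simple polynomial regression as a stand-in for an arbitrary ML nuisance model
    P = lambda v: np.column_stack([v**k for k in range(4)])
    coef, *_ = np.linalg.lstsq(P(x_tr), z_tr, rcond=None)
    return P(x_te) @ coef

# two-fold cross-fitting: nuisances are fit on one fold, residuals formed on the other
folds = np.array_split(rng.permutation(n), 2)
res_t, res_y = np.empty(n), np.empty(n)
for k in range(2):
    te, tr = folds[k], folds[1 - k]
    res_t[te] = t[te] - fit_predict(x[tr], t[tr], x[te])
    res_y[te] = y[te] - fit_predict(x[tr], y[tr], x[te])

# residual-on-residual regression recovers theta despite confounding
theta_hat = res_t @ res_y / (res_t @ res_t)
print(round(theta_hat, 2))
```

Because the residualized moment is Neyman orthogonal, first-order errors in the nuisance fits do not bias the estimate of theta.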

Minimax Estimation of Conditional Moment Models

Nishanth Dikkala, Greg Lewis, Lester Mackey, Vasilis Syrgkanis
Preliminary version at 34th Conference on Neural Information Processing Systems, NeurIPS'20
[Github Code] [MSR Blogpost]

Abstract

We develop an approach for estimating models described via conditional moment restrictions, with a prototypical application being non-parametric instrumental variable regression. We introduce a min-max criterion function, under which the estimation problem can be thought of as solving a zero-sum game between a modeler who is optimizing over the hypothesis space of the target model and an adversary who identifies violating moments over a test function space. We analyze the statistical estimation rate of the resulting estimator for arbitrary hypothesis spaces, with respect to an appropriate analogue of the mean squared error metric, for ill-posed inverse problems. We show that when the minimax criterion is regularized with a second moment penalty on the test function and the test function space is sufficiently rich, then the estimation rate scales with the critical radius of the hypothesis and test function spaces, a quantity which typically gives tight fast rates. Our main result follows from a novel localized Rademacher analysis of statistical learning problems defined via minimax objectives. We provide applications of our main results for several hypothesis spaces used in practice, such as reproducing kernel Hilbert spaces, high dimensional sparse linear functions, spaces defined via shape constraints, ensemble estimators such as random forests, and neural networks. For each of these applications we provide computationally efficient optimization methods for solving the corresponding minimax problem (e.g. stochastic first-order heuristics for neural networks). In several applications, we show how our modified mean squared error rate, combined with conditions that bound the ill-posedness of the inverse problem, leads to mean squared error rates. We conclude with an extensive experimental analysis of the proposed methods.
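A toy version of the minimax criterion, with both the hypothesis and the test function restricted to linear classes (a drastic simplification of the general function classes above; the simulated data and step sizes are illustrative): gradient descent for the modeler against gradient ascent for the second-moment-regularized adversary recovers the classical linear IV estimate on endogenous data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
u = rng.normal(size=n)                    # unobserved confounder
z = rng.normal(size=n)                    # instrument
x = z + u + 0.3 * rng.normal(size=n)      # endogenous regressor
y = 1.5 * x + u + rng.normal(size=n)      # true structural slope 1.5

# minimax criterion with h(x) = a*x and test function f(z) = b*z:
#   min_a max_b  E[b z (y - a x)] - lam * E[(b z)^2]
lam = 1.0
a, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    moment = np.mean(z * (y - a * x))             # violated moment E[f(z)(y - h(x))]
    grad_a = -b * np.mean(z * x)                  # modeler's gradient
    grad_b = moment - 2 * lam * b * np.mean(z**2) # adversary's (regularized) gradient
    a -= lr * grad_a                              # modeler descends
    b += lr * grad_b                              # adversary ascends
print(round(a, 2))
```

At the equilibrium of this game the adversary's best response is zero only when the moment condition holds, so a converges to the sample IV estimate, close to the true slope 1.5.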

Estimation and Inference with Trees and Forests in High Dimensions

Vasilis Syrgkanis, Manolis Zampetakis
Preliminary version at the 33rd Annual Conference on Learning Theory, COLT'20

Abstract

We analyze the finite sample mean squared error (MSE) performance of regression trees and forests in the high dimensional regime with binary features, under a sparsity constraint. We prove that if only r of the d features are relevant for the mean outcome function, then shallow trees built greedily via the CART empirical MSE criterion achieve MSE rates that depend only logarithmically on the ambient dimension d. We prove upper bounds, whose exact dependence on the number of relevant variables r depends on the correlation among the features and on the degree of relevance. For strongly relevant features, we also show that fully grown honest forests achieve fast MSE rates and their predictions are also asymptotically normal, enabling asymptotically valid inference that adapts to the sparsity of the regression function.
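A tiny simulation of the setting (binary features, a single relevant coordinate; the data-generating process is illustrative): one greedy CART-style split, chosen by empirical MSE reduction, locks onto the relevant feature even though the ambient dimension is large.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5000, 50                              # high-dimensional binary features
X = rng.integers(0, 2, size=(n, d))
y = 2.0 * X[:, 7] + rng.normal(size=n)       # only feature 7 is relevant

def mse_after_split(j):
    # empirical MSE after splitting on feature j (the greedy CART criterion)
    left, right = y[X[:, j] == 0], y[X[:, j] == 1]
    return (((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()) / n

best = min(range(d), key=mse_after_split)
print(best)                                  # recovers the single relevant feature
```

The relevant feature cuts the empirical MSE by roughly the signal variance, while splits on irrelevant features reduce it only by noise of order 1/n, which is why the dependence on d can be merely logarithmic.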

Orthogonal Statistical Learning (aka Statistical Learning with a Nuisance Component)

Dylan Foster, Vasilis Syrgkanis
Preliminary version at 32nd Annual Conference on Learning Theory, COLT'19
Best Paper Award
Revise and Resubmit at the Annals of Statistics

Abstract

We provide excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target model depends on an unknown model that must be estimated from data (a "nuisance model"). We analyze a two-stage sample splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target model and one for the nuisance model. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from the statistical learning and machine learning literature to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate the case where the target parameter belongs to a complex nonparametric class. We characterize conditions on the metric entropy such that oracle rates---rates of the same order as if we knew the nuisance model---are achieved. We also analyze the rates achieved by specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation and sparse high-dimensional linear model estimation. We highlight the applicability of our results in four settings of central importance in the literature: 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.

Training GANs with Optimism

Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng
Preliminary version at 6th International Conference on Learning Representations, ICLR 2018
[Github Code]

Abstract

We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Descent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror descent can enjoy faster regret rates in the context of zero-sum games, and training WGANs is exactly such a setting: a zero-sum game solved via simultaneous no-regret dynamics. Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs. We formally show that in the case of bilinear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics, which are bound to cycle. We also illustrate with toy examples the stark qualitative difference between GD and OMD dynamics, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence to the true underlying distribution than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.
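The bilinear last-iterate phenomenon is easy to reproduce; a self-contained sketch on the game min_x max_y x*y (a toy setting, not the WGAN experiments), where the unique equilibrium is the origin:

```python
import numpy as np

eta = 0.1  # step size (illustrative)

def run(optimistic, steps=3000):
    x, y = 1.0, 1.0
    gx_prev, gy_prev = 0.0, 0.0
    for _ in range(steps):
        gx, gy = y, x                    # gradients of x*y w.r.t. x and y
        if optimistic:
            # optimistic update: follow twice the current gradient minus the previous one
            x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
        else:
            # plain simultaneous gradient descent/ascent
            x, y = x - eta * gx, y + eta * gy
        gx_prev, gy_prev = gx, gy
    return np.hypot(x, y)                # distance from the equilibrium (0, 0)

print(run(optimistic=False) > 1.0)       # plain GD spirals away from equilibrium
print(run(optimistic=True) < 1e-3)       # the optimistic last iterate converges
```

The extra "look-ahead" term damps the rotation that makes plain gradient dynamics cycle on bilinear games.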

Robust Optimization for Non-Convex Objectives

Robert Chen, Brendan Lucier, Yaron Singer, Vasilis Syrgkanis
31st Annual Conference on Neural Information Processing Systems, NeurIPS'17 [Github Code]
Oral Presentation

Abstract

We consider robust optimization problems, where the goal is to optimize in the worst case over a class of objective functions. We develop a reduction from robust improper optimization to Bayesian optimization: given an oracle that returns α-approximate solutions for distributions over objectives, we compute a distribution over solutions that is α-approximate in the worst case. We show that derandomizing this solution is NP-hard in general, but can be done for a broad class of statistical learning tasks. We apply our results to robust neural network training and submodular optimization. We evaluate our approach experimentally on corrupted character classification, and robust influence maximization in networks.
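A toy instance of the reduction (the loss matrix is hypothetical; multiplicative weights plays the adversary over objectives, and an exact minimizer stands in for the α-approximate oracle): the resulting distribution over solutions achieves a worst-case loss better than any single solution.

```python
import numpy as np

# loss[i][j]: loss of candidate solution j under objective i (toy numbers)
L = np.array([[1.0, 0.0, 0.6],
              [0.0, 1.0, 0.6]])

w = np.ones(L.shape[0])           # adversary's weights over objectives
picks = []
eta = 0.1
for _ in range(1000):
    p = w / w.sum()
    j = int(np.argmin(p @ L))     # "oracle" call: best solution for the current mixture
    picks.append(j)
    w *= np.exp(eta * L[:, j])    # upweight objectives on which we did poorly

mix_loss = np.array([np.mean(L[i, picks]) for i in range(L.shape[0])])
worst_mix = mix_loss.max()        # worst case of the randomized solution
worst_pure = L.max(axis=0).min()  # best single solution's worst case (0.6 here)
print(worst_mix < worst_pure)
```

Here the empirical mixture over oracle best responses hedges between the two objectives, approaching the minimax value of 0.5, while no single solution does better than 0.6 in the worst case.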

Oracle Efficient Learning and Auction Design

Miro Dudik, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jenn Wortman Vaughan
58th Annual IEEE Symposium on Foundations of Computer Science, FOCS'17
Full version at the Journal of the ACM, September 2020

Abstract

We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle. We present an algorithm called Generalized Follow-the-Perturbed-Leader and provide conditions under which it is oracle-efficient while achieving vanishing regret. Our results make significant progress on an open problem raised by Hazan and Koren, who showed that oracle-efficient algorithms do not exist in general and asked whether one can identify properties under which oracle-efficient online learning may be possible. Our auction-design framework considers an auctioneer learning an optimal auction for a sequence of adversarially selected valuations with the goal of achieving revenue that is almost as good as the optimal auction in hindsight, among a class of auctions. We give oracle-efficient learning results for: (1) VCG auctions with bidder-specific reserves in single-parameter settings, (2) envy-free item pricing in multi-item auctions, and (3) s-level auctions of Morgenstern and Roughgarden for single-item settings. The last result leads to an approximation of the overall optimal Myerson auction when bidders' valuations are drawn according to a fast-mixing Markov process, extending prior work that only gave such guarantees for the i.i.d. setting. Finally, we derive various extensions, including: (1) oracle-efficient algorithms for the contextual learning setting in which the learner has access to side information (such as bidder demographics), (2) learning with approximate oracles such as those based on Maximal-in-Range algorithms, and (3) no-regret bidding in simultaneous auctions, resolving an open problem of Daskalakis and Syrgkanis.

Learning in Auctions: Regret is Hard, Envy is Easy

Constantinos Daskalakis, Vasilis Syrgkanis
57th Annual IEEE Symposium on Foundations of Computer Science, FOCS'16
Invited to Games and Economic Behavior

Abstract

A line of recent work provides welfare guarantees of simple combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) (Christodoulou et al. 2008, Bhawalkar and Roughgarden 2011, Feldman et al. 2013). These guarantees hold even when the auctions are repeatedly executed and players use no-regret learning algorithms. Unfortunately, off-the-shelf no-regret algorithms for these auctions are computationally inefficient as the number of actions is exponential. We show that this obstacle is insurmountable: there are no polynomial-time no-regret algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good outcomes polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have fractionally subadditive (XOS) valuations (assuming demand oracles) or coverage valuations (without demand oracles). No-envy learning outcomes are a relaxation of no-regret outcomes that maintains approximate welfare optimality while gaining computational tractability. Our results extend to other auction formats that have been studied in the literature via the smoothness paradigm. Our results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts is infinite, and the payoff function of the learner is non-linear. This algorithm has applications outside of auction settings, such as in security games. Our result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.

Efficient Algorithms for Adversarial Contextual Learning

33rd International Conference on Machine Learning, ICML'16
Best Spotlight Talk award at 10th Annual Machine Learning Symposium

Abstract

We provide the first oracle efficient sublinear regret algorithms for adversarial versions of the contextual bandit problem. In this problem, the learner repeatedly chooses an action on the basis of a context and receives reward for the chosen action, with the goal of achieving reward competitive with a large class of policies. We analyze two settings: i) in the transductive setting the learner knows the set of contexts a priori, ii) in the small separator setting, there exists a small set of contexts such that any two policies behave differently in one of the contexts in the set. Our algorithms fall into the follow-the-perturbed-leader family of Kalai and Vempala (2005) and achieve regret O(T^(3/4) sqrt(K log(N))) in the transductive setting and O(T^(2/3) d^(3/4) K sqrt(log(N))) in the separator setting, where K is the number of actions, N is the number of baseline policies, and d is the size of the separator. We actually solve the more general adversarial contextual semi-bandit linear optimization problem, whilst in the full information setting we address the even more general contextual combinatorial optimization. We provide several extensions and implications of our algorithms, such as switching regret and efficient learning with predictable sequences.

Abstract

Game-theoretic models relevant for computer science applications usually feature a large number of players. The goal of this paper is to develop an analytical framework for bounding the price of anarchy in such models. We demonstrate the wide applicability of our framework through instantiations for several well-studied models, including simultaneous single-item auctions, greedy combinatorial auctions, and routing games. In all cases, we identify conditions under which the POA of large games is much better than that of worst-case instances. Our results also give new senses in which simple auctions can perform almost as well as optimal ones in realistic settings.

Fast Convergence of Regularized Learning in Games

29th Annual Conference on Neural Information Processing Systems, NeurIPS'15
Best Paper Award

Abstract

We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal form games. When each player in a game uses an algorithm from our class, their individual regret decays at O(T^{−3/4}), while the sum of utilities converges to an approximate optimum at O(T^{−1}), an improvement upon the worst-case O(T^{−1/2}) rates. We show a black-box reduction for any algorithm in the class to achieve O~(T^{−1/2}) rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of [Rakhlin and Sridharan 2013] and [Daskalakis et al. 2014], who only analyzed two-player zero-sum games for specific algorithms.

Abstract

The main goal of this paper is to develop a theory of inference of player valuations from observed data in the generalized second price auction without relying on the Nash equilibrium assumption. Existing work in Economics on inferring agent values from data relies on the assumption that all participant strategies are best responses to the observed play of other players, i.e., they constitute a Nash equilibrium. In this paper, we show how to perform inference relying on a weaker assumption instead: assuming that players are using some form of no-regret learning. Learning outcomes have emerged in recent years as an attractive alternative to Nash equilibrium for analyzing game outcomes, modeling players who have not reached a stable equilibrium but rather use algorithmic learning, aiming to learn the best way to play from previous observations. In this paper we show how to infer values of players who use algorithmic learning strategies. Such inference is an important first step before we move to testing any learning theoretic behavioral model on auction data. We apply our techniques to a dataset from Microsoft's sponsored search ad auction system.

Abstract

We consider common-value hybrid auctions among two asymmetrically informed bidders, where the winning bidder pays his bid with some positive probability k and the losing bid otherwise. Under the assumption of discrete and affiliated signals, we give an explicit characterization of the (unique) equilibrium, based on a simple recurrence relation. We show that equilibrium revenue is decreasing in k, and that the limit second-price equilibrium that is selected entails extensive free-riding by uninformed bidders. We further show that the Linkage Principle can fail to hold even in a pure first-price auction with binary signals: public revelation of a signal to both bidders may decrease the auctioneer's revenue. Lastly, we analyze the effects of public acquisition of additional information on bidder utilities and exhibit cases in which both bidders strictly prefer for a specific bidder to receive additional information.

Bayesian Incentive-Compatible Bandit Exploration

16th ACM Conference on Economics and Computation, Portland, OR, USA, June 15-19, 2015, EC'15
Full version at Operations Research, July 2020

Abstract

Individual decision-makers consume information revealed by previous decision-makers, and produce information that may help in future decisions. This phenomenon is common in a wide range of scenarios in the Internet economy, as well as in other domains such as medical decisions. Each decision-maker would individually prefer to "exploit": select an action with the highest expected reward given her current information. At the same time, each decision-maker would prefer previous decision-makers to "explore", producing information about the rewards of various actions. A social planner, by means of carefully designed information disclosure, can incentivize the agents to balance exploration and exploitation so as to maximize social welfare. We formulate this problem as a multi-armed bandit problem (and various generalizations thereof) under incentive-compatibility constraints induced by the agents' Bayesian priors. We design an incentive-compatible bandit algorithm for the social planner whose regret is asymptotically optimal among all bandit algorithms (incentive-compatible or not). Further, we provide a black-box reduction from an arbitrary multi-armed bandit algorithm to an incentive-compatible one, with only a constant multiplicative increase in regret. This reduction works for very general bandit settings that incorporate contexts and arbitrary auxiliary feedback.

Composable and efficient mechanisms

45th Annual ACM Symposium on Theory of Computing, STOC'13
Survey on Price of Anarchy in Auctions at the Journal of Artificial Intelligence Research, May 2017
Price of Anarchy in Auctions: [ Slide Show | PDF Slides ]
Tutorial on PoA in Auctions at WINE 2013 with Jason Hartline: [ Slide Show | PDF Slides ]

Abstract

We initiate the study of efficient mechanism design with guaranteed good properties even when players participate in multiple different mechanisms simultaneously or sequentially. We define the class of smooth mechanisms, related to smooth games defined by Roughgarden, that can be thought of as mechanisms that generate approximately market clearing prices. We show that smooth mechanisms result in high-quality outcomes in equilibrium both in the full information setting and in the Bayesian setting with uncertainty about participants, as well as in learning outcomes. Our main result is to show that such mechanisms compose well: smoothness locally at each mechanism implies efficiency globally. For mechanisms where good performance requires that bidders do not bid above their value, we identify the notion of a weakly smooth mechanism. Weakly smooth mechanisms, such as the Vickrey auction, are approximately efficient under the no-overbidding assumption. Similar to smooth mechanisms, weakly smooth mechanisms behave well under composition, and have high-quality outcomes in equilibrium (assuming no overbidding) both in the full information setting and in the Bayesian setting, as well as in learning outcomes. In most of the paper we assume participants have quasi-linear valuations. We also extend some of our results to settings where participants have budget constraints.

Working Papers


Direct Preference Optimization With Unobserved Preference Heterogeneity
Keertana Chidambaram, Karthik Vinay Seetharaman, Vasilis Syrgkanis

Simultaneous Inference for Local Structural Parameters with Random Forests
David M. Ritzwoller, Vasilis Syrgkanis

Taking a Moment for Distributional Robustness
Jabari Hastings, Christopher Jung, Charlotte Peale, Vasilis Syrgkanis

Dynamic Local Average Treatment Effects
Ravi B. Sojitra, Vasilis Syrgkanis

Structure-agnostic Optimality of Doubly Robust Learning for Treatment Effect Estimation
Jikai Jin, Vasilis Syrgkanis

Regularized DeepIV with Model Selection
Zihao Li, Hui Lan, Vasilis Syrgkanis, Mengdi Wang, Masatoshi Uehara

Incentive-Aware Synthetic Control: Accurate Counterfactual Estimation via Incentivized Exploration
Daniel Ngo, Keegan Harris, Anish Agarwal, Vasilis Syrgkanis, Zhiwei Steven Wu

Source Condition Double Robust Inference on Functionals of Inverse Problems
Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara

Inference on Optimal Dynamic Policies via Softmax Approximation
Qizhao Chen, Morgane Austern, Vasilis Syrgkanis

Post Reinforcement Learning Inference
Vasilis Syrgkanis, Ruohan Zhan

Synthetic Blip Effects: Generalizing Synthetic Controls for the Dynamic Treatment Regime
Anish Agarwal, Vasilis Syrgkanis

Long Story Short: Omitted Variable Bias in Causal Machine Learning
Victor Chernozhukov, Carlos Cinelli, Whitney Newey, Amit Sharma, Vasilis Syrgkanis

Automatic Debiased Machine Learning for Dynamic Treatment Effects
Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis

Automatic Debiased Machine Learning via Neural Nets for Generalized Linear Regression
Victor Chernozhukov, Whitney K. Newey, Victor Quintas-Martinez, Vasilis Syrgkanis

Finding Subgroups with Significant Treatment Effects
Jann Spiess, Vasilis Syrgkanis, Victor Yaneng Wang

Adversarial Estimation of Reisz Representers
Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis

Inference on Auctions with Weak Assumptions on Information
Vasilis Syrgkanis, Elie Tamer, Juba Ziani [Github Code]

Surveys

The Price of Anarchy in Auctions
Tim Roughgarden, Vasilis Syrgkanis, Eva Tardos
Journal of Artificial Intelligence Research, May 2017

Algorithmic Game Theory and Econometrics
Vasilis Syrgkanis, SIGecom Exchanges, June 2015

The Dining Bidder Problem: a la russe et a la francaise
Renato Paes Leme, Vasilis Syrgkanis, Eva Tardos, SIGecom Exchanges, December 2012
A review of recent results in simultaneous and sequential item auctions.

Theses

Efficiency of Mechanisms in Complex Markets
PhD Thesis, Cornell University, Computer Science Department, August 2014

Equilibria in Congestion Game Models: Existence, Complexity and Efficiency
Vasilis Syrgkanis
Undergraduate Diploma Thesis, National Technical University of Athens, July 2009 (title is in Greek but main content, p. 6 and on, is in English)

Professional Service

Senior Program Committee: EC 2017, ICLR 2020, EC 2020
Organizing Committee: NeurIPS 2018 Smooth Games Optimization and Machine Learning Workshop, 2nd Workshop on Algorithmic Game Theory and Data Science 2016, 3rd Workshop on Algorithmic Game Theory and Data Science 2017, NeurIPS17 Workshop on Learning in the Presence of Strategic Behavior
Program Committee: EC 2013, AdAuctions 2014, EC 2015, IJCAI 2015, WINE 2015, AdAuctions 2015, ICML 2016, EC 2016, NeurIPS 2016, HCOMP 2016, WINE 2016, SAGT 2017, NeurIPS 2017, ALT 2018, ICLR 2018, ICLR 2019, ALT 2019
Journal Reviewer: Journal of the ACM, SIAM Journal on Computing, ACM Transactions on Economics and Computation, Journal of Machine Learning Research, IEEE Transactions on Automatic Control