# Paper Digest: ICML 2021 Highlights

Download ICML-2021-Paper-Digests.pdf – highlights of all ICML 2021 papers. Readers can also read this highlight article on our console, which allows filtering papers by keywords, authors, and more. The International Conference on Machine Learning (ICML) is one of the top machine learning conferences in the world. In 2021, it is being held online.

To help the community quickly catch up on the work presented in this conference, the Paper Digest Team processed all accepted papers and generated one highlight sentence (typically the main topic) for each paper. Readers are encouraged to read these machine-generated highlights / summaries to quickly get the main idea of each paper. Based in New York, Paper Digest is dedicated to producing high-quality text analysis results that people can actually use on a daily basis. For the past 4 years, we have been serving users across the world with a number of exclusive services on ranking, search, tracking and review. This month we feature the Literature Review Generator, which automatically generates a literature review around any topic.

If you do not want to miss any interesting academic paper, you are welcome to **sign up for our free daily paper digest service** to get updates on new papers published in your area every day. You are also welcome to follow us on Twitter and LinkedIn to stay up to date with new conference digests.

Paper Digest Team

team@paperdigest.org

#### TABLE 1: Paper Digest: ICML 2021 Highlights

| # | Paper | Highlight | Author(s) |
|---|---|---|---|
| 1 | A New Representation of Successor Features for Transfer Across Dissimilar Environments | To address this problem, we propose an approach based on successor features in which we model successor feature functions with Gaussian Processes permitting the source successor features to be treated as noisy measurements of the target successor feature function. | Majid Abdolshah; Hung Le; Thommen Karimpanal George; Sunil Gupta; Santu Rana; Svetha Venkatesh |
| 2 | Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling | In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck. | Kuruge Darshana Abeyrathna; Bimal Bhattarai; Morten Goodwin; Saeed Rahimi Gorji; Ole-Christoffer Granmo; Lei Jiao; Rupsa Saha; Rohan K Yadav |
| 3 | Debiasing Model Updates for Improving Personalized Federated Training | We propose a novel method for federated learning that is customized specifically to the objective of a given edge device. | Durmus Alp Emre Acar; Yue Zhao; Ruizhao Zhu; Ramon Matas; Matthew Mattina; Paul Whatmough; Venkatesh Saligrama |
| 4 | Memory Efficient Online Meta Learning | We propose a novel algorithm for online meta learning where task instances are sequentially revealed with limited supervision and a learner is expected to meta learn them in each round, so as to allow the learner to customize a task-specific model rapidly with little task-level supervision. | Durmus Alp Emre Acar; Ruizhao Zhu; Venkatesh Saligrama |
| 5 | Robust Testing and Estimation Under Manipulation Attacks | We study robust testing and estimation of discrete distributions in the strong contamination model. | Jayadev Acharya; Ziteng Sun; Huanyu Zhang |
| 6 | GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning | Here, we propose GP-Tree, a novel method for multi-class classification with Gaussian processes and DKL. | Idan Achituve; Aviv Navon; Yochai Yemini; Gal Chechik; Ethan Fetaya |
| 7 | F-Domain Adversarial Learning: Theory and Algorithms | In this paper, we introduce a novel and general domain-adversarial framework. | David Acuna; Guojun Zhang; Marc T. Law; Sanja Fidler |
| 8 | Towards Rigorous Interpretations: A Formalisation of Feature Attribution | In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. | Darius Afchar; Vincent Guigue; Romain Hennequin |
| 9 | Acceleration Via Fractal Learning Rate Schedules | We provide some experiments to challenge conventional beliefs about stable learning rates in deep learning: the fractal schedule enables training to converge with locally unstable updates which make negative progress on the objective. | Naman Agarwal; Surbhi Goel; Cyril Zhang |
| 10 | A Regret Minimization Approach to Iterative Learning Control | In this setting, we propose a new performance metric, planning regret, which replaces the standard stochastic uncertainty assumptions with worst case regret. | Naman Agarwal; Elad Hazan; Anirudha Majumdar; Karan Singh |
| 11 | Towards The Unification and Robustness of Perturbation and Gradient Based Explanations | In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad, which is a gradient based method, and a variant of LIME, which is a perturbation based method. | Sushant Agarwal; Shahin Jabbari; Chirag Agarwal; Sohini Upadhyay; Steven Wu; Himabindu Lakkaraju |
| 12 | Label Inference Attacks from Log-loss Scores | In this paper, we investigate the problem of inferring the labels of a dataset from single (or multiple) log-loss score(s), without any other access to the dataset. | Abhinav Aggarwal; Shiva Kasiviswanathan; Zekun Xu; Oluwaseyi Feyisetan; Nathanael Teissier |
| 13 | Deep Kernel Processes | We define deep kernel processes in which positive definite Gram matrices are progressively transformed by nonlinear kernel functions and by sampling from (inverse) Wishart distributions. | Laurence Aitchison; Adam Yang; Sebastian W. Ober |
| 14 | How Does Loss Function Affect Generalization Performance of Deep Learning? Application to Human Age Estimation | In summary, our main statement in this paper is: choose a stable loss function, generalize better. | Ali Akbari; Muhammad Awais; Manijeh Bashar; Josef Kittler |
| 15 | On Learnability Via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting | In this paper, we explore theoretical analysis on training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. | Shunta Akiyama; Taiji Suzuki |
| 16 | Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks | In contrast to traditional weight optimization in a continuous space, we demonstrate the existence of effective random networks whose weights are never updated. | Maxwell M Aladago; Lorenzo Torresani |
| 17 | A Large-scale Benchmark for Few-shot Program Induction and Synthesis | In this work, we propose a new way of leveraging unit tests and natural inputs for small programs as meaningful input-output examples for each sub-program of the overall program. | Ferran Alet; Javier Lopez-Contreras; James Koppel; Maxwell Nye; Armando Solar-Lezama; Tomas Lozano-Perez; Leslie Kaelbling; Joshua Tenenbaum |
| 18 | Robust Pure Exploration in Linear Bandits with Limited Budget | We consider the pure exploration problem in the fixed-budget linear bandit setting. | Ayya Alieva; Ashok Cutkosky; Abhimanyu Das |
| 19 | Communication-Efficient Distributed Optimization with Quantized Preconditioners | We investigate fast and communication-efficient algorithms for the classic problem of minimizing a sum of strongly convex and smooth functions that are distributed among $n$ different nodes, which can communicate using a limited number of bits. | Foivos Alimisis; Peter Davies; Dan Alistarh |
| 20 | Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions | In this paper, we study a generalized aggregation strategy, where the weights no longer depend exponentially on the losses. | Pierre Alquier |
| 21 | Dataset Dynamics Via Gradient Flows in Probability Space | In this work, we propose a novel framework for dataset transformation, which we cast as optimization over data-generating joint probability distributions. | David Alvarez-Melis; Nicolò Fusi |
| 22 | Submodular Maximization Subject to A Knapsack Constraint: Combinatorial Algorithms with Near-optimal Adaptive Complexity | In this work we obtain the first \emph{constant factor} approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with \emph{near-optimal} $O(\log n)$ adaptive complexity. | Georgios Amanatidis; Federico Fusco; Philip Lazos; Stefano Leonardi; Alberto Marchetti-Spaccamela; Rebecca Reiffenhäuser |
| 23 | Safe Reinforcement Learning with Linear Function Approximation | In this paper, we address both problems by first modeling safety as an unknown linear cost function of states and actions, which must always fall below a certain threshold. | Sanae Amani; Christos Thrampoulidis; Lin Yang |
| 24 | Automatic Variational Inference with Cascading Flows | Here, we combine the flexibility of normalizing flows and the prior-embedding property of ASVI in a new family of variational programs, which we named cascading flows. | Luca Ambrogioni; Gianluigi Silvestri; Marcel Van Gerven |
| 25 | Sparse Bayesian Learning Via Stepwise Regression | Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression. | Sebastian E. Ament; Carla P. Gomes |
| 26 | Locally Persistent Exploration in Continuous Control Tasks with Sparse Rewards | We propose a new exploration method, based on two intuitions: (1) the choice of the next exploratory action should depend not only on the (Markovian) state of the environment, but also on the agent’s trajectory so far, and (2) the agent should utilize a measure of spread in the state space to avoid getting stuck in a small region. | Susan Amin; Maziar Gomrokchi; Hossein Aboutalebi; Harsh Satija; Doina Precup |
| 27 | Preferential Temporal Difference Learning | We propose an approach to re-weighting states used in TD updates, both when they are the input and when they provide the target for the update. | Nishanth V. Anand; Doina Precup |
| 28 | Unitary Branching Programs: Learnability and Lower Bounds | In this work, we study a generalized version of bounded width branching programs where instructions are defined by unitary matrices of bounded dimension. | Fidel Ernesto Diaz Andino; Maria Kokkou; Mateus De Oliveira Oliveira; Farhad Vadiee |
| 29 | The Logical Options Framework | We introduce a hierarchical reinforcement learning framework called the Logical Options Framework (LOF) that learns policies that are satisfying, optimal, and composable. | Brandon Araki; Xiao Li; Kiran Vodrahalli; Jonathan Decastro; Micah Fry; Daniela Rus |
| 30 | Annealed Flow Transport Monte Carlo | We propose here a novel Monte Carlo algorithm, Annealed Flow Transport (AFT), that builds upon AIS and SMC and combines them with normalizing flows (NFs) for improved performance. | Michael Arbel; Alex Matthews; Arnaud Doucet |
| 31 | Permutation Weighting | In this work we introduce permutation weighting, a method for estimating balancing weights using a standard binary classifier (regardless of cardinality of treatment). | David Arbour; Drew Dimmery; Arjun Sondhi |
| 32 | Analyzing The Tree-layer Structure of Deep Forests | In this paper, our aim is not to benchmark DF performances but to investigate instead their underlying mechanisms. | Ludovic Arnould; Claire Boyer; Erwan Scornet |
| 33 | Dropout: Explicit Forms and Capacity Control | We investigate the capacity control provided by dropout in various machine learning problems. | Raman Arora; Peter Bartlett; Poorya Mianjy; Nathan Srebro |
| 34 | Tighter Bounds on The Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients | We propose a lower bound on the log marginal likelihood of Gaussian process regression models that can be computed without matrix factorisation of the full kernel matrix. | Artem Artemev; David R Burt; Mark Van Der Wilk |
| 35 | Deciding What to Learn: A Rate-Distortion Approach | In this work, leveraging rate-distortion theory, we automate this process such that the designer need only express their preferences via a single hyperparameter and the agent is endowed with the ability to compute its own learning targets that best achieve the desired trade-off. | Dilip Arumugam; Benjamin Van Roy |
| 36 | Private Adaptive Gradient Methods for Convex Optimization | We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm. | Hilal Asi; John Duchi; Alireza Fallah; Omid Javidbakht; Kunal Talwar |
| 37 | Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry | We show that, up to logarithmic factors, the optimal excess population loss of any $(\epsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/\epsilon n$. The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020) with a new analysis of private regularized mirror descent. | Hilal Asi; Vitaly Feldman; Tomer Koren; Kunal Talwar |
| 38 | Combinatorial Blocking Bandits with Stochastic Delays | In this work, we extend the above model in two directions: (i) we consider the general combinatorial setting where more than one arm can be played at each round, subject to feasibility constraints; (ii) we allow the blocking time of each arm to be stochastic. | Alexia Atsidakou; Orestis Papadigenopoulos; Soumya Basu; Constantine Caramanis; Sanjay Shakkottai |
| 39 | Dichotomous Optimistic Search to Quantify Human Perception | In this paper we address a variant of the continuous multi-armed bandits problem, called the threshold estimation problem, which is at the heart of many psychometric experiments. | Julien Audiffren |
| 40 | Federated Learning Under Arbitrary Communication Patterns | In this paper, we investigate the performance of an asynchronous version of local SGD wherein the clients can communicate with the server at arbitrary time intervals. | Dmitrii Avdiukhin; Shiva Kasiviswanathan |
| 41 | Asynchronous Distributed Learning: Adapting to Gradient Delays Without Prior Knowledge | We propose a robust training method for the constrained setting and derive non-asymptotic convergence guarantees that do not depend on prior knowledge of update delays, objective smoothness, and gradient variance. | Rotem Zamir Aviv; Ido Hakimi; Assaf Schuster; Kfir Yehuda Levy |
| 42 | Decomposable Submodular Function Minimization Via Maximum Flow | We solve this minimization problem by lifting the solutions of a parametric cut problem, which we obtain via a new efficient combinatorial reduction to maximum flow. | Kyriakos Axiotis; Adam Karczmarz; Anish Mukherjee; Piotr Sankowski; Adrian Vladu |
| 43 | Differentially Private Query Release Through Adaptive Projection | We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries, like k-way marginals, subject to differential privacy. | Sergul Aydore; William Brown; Michael Kearns; Krishnaram Kenthapadi; Luca Melis; Aaron Roth; Ankit A Siva |
| 44 | On The Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent | We develop a novel technique for deriving the inductive bias of gradient-flow and use it to obtain closed-form implicit regularizers for multiple cases of interest. | Shahar Azulay; Edward Moroshko; Mor Shpigel Nacson; Blake E Woodworth; Nathan Srebro; Amir Globerson; Daniel Soudry |
| 45 | On-Off Center-Surround Receptive Fields for Accurate and Robust Image Classification | To this end, our paper extends the receptive field of convolutional neural networks with two residual components, ubiquitous in the visual processing system of vertebrates: on-center and off-center pathways, with an excitatory center and inhibitory surround; OOCS for short. | Zahra Babaiee; Ramin Hasani; Mathias Lechner; Daniela Rus; Radu Grosu |
| 46 | Uniform Convergence, Adversarial Spheres and A Simple Remedy | We provide an extensive theoretical investigation of the previously studied data setting through the lens of infinitely-wide models. | Gregor Bachmann; Seyed-Mohsen Moosavi-Dezfooli; Thomas Hofmann |
| 47 | Faster Kernel Matrix Algebra Via Density Estimation | We study fast algorithms for computing basic properties of an $n \times n$ positive semidefinite kernel matrix $K$ corresponding to $n$ points $x_1, \ldots, x_n$ in $\mathbb{R}^d$. | Arturs Backurs; Piotr Indyk; Cameron Musco; Tal Wagner |
| 48 | Robust Reinforcement Learning Using Least Squares Policy Iteration with Provable Performance Guarantees | This paper addresses the problem of model-free reinforcement learning for Robust Markov Decision Process (RMDP) with large state spaces. | Kishan Panaganti Badrinath; Dileep Kalathil |
| 49 | Skill Discovery for Exploration and Planning Using Deep Skill Graphs | We introduce a new skill-discovery algorithm that builds a discrete graph representation of large continuous MDPs, where nodes correspond to skill subgoals and the edges to skill policies. | Akhil Bagaria; Jason K Senthil; George Konidaris |
| 50 | Locally Adaptive Label Smoothing Improves Predictive Churn | In this paper, we present several baselines for reducing churn and show that training on soft labels obtained by adaptively smoothing each example’s label based on the example’s neighboring labels often outperforms the baselines on churn while improving accuracy on a variety of benchmark classification tasks and model architectures. | Dara Bahri; Heinrich Jiang |
| 51 | How Important Is The Train-Validation Split in Meta-Learning? | We provide a detailed theoretical study on whether and when the train-validation split is helpful in the linear centroid meta-learning problem. | Yu Bai; Minshuo Chen; Pan Zhou; Tuo Zhao; Jason Lee; Sham Kakade; Huan Wang; Caiming Xiong |
| 52 | Stabilizing Equilibrium Models By Jacobian Regularization | In this paper, we propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models. | Shaojie Bai; Vladlen Koltun; Zico Kolter |
| 53 | Don’t Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification | In this paper, we show theoretically that over-parametrization is not the only reason for over-confidence. | Yu Bai; Song Mei; Huan Wang; Caiming Xiong |
| 54 | Principled Exploration Via Optimistic Bootstrapping and Backward Induction | In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). | Chenjia Bai; Lingxiao Wang; Lei Han; Jianye Hao; Animesh Garg; Peng Liu; Zhaoran Wang |
| 55 | GLSearch: Maximum Common Subgraph Detection Via Learning to Search | We propose GLSearch, a Graph Neural Network (GNN) based learning-to-search model. | Yunsheng Bai; Derek Xu; Yizhou Sun; Wei Wang |
| 56 | Breaking The Limits of Message Passing Graph Neural Networks | In this paper, we show that if the graph convolution supports are designed in the spectral domain by a non-linear custom function of eigenvalues and masked with an arbitrarily large receptive field, the MPNN is theoretically more powerful than the 1-WL test and experimentally as powerful as existing 3-WL models, while remaining spatially localized. | Muhammet Balcilar; Pierre Heroux; Benoit Gauzere; Pascal Vasseur; Sebastien Adam; Paul Honeine |
| 57 | Instance Specific Approximations for Submodular Maximization | We develop an algorithm that gives an instance-specific approximation for any solution of an instance of monotone submodular maximization under a cardinality constraint. | Eric Balkanski; Sharon Qian; Yaron Singer |
| 58 | Augmented World Models Facilitate Zero-Shot Dynamics Generalization From A Single Offline Environment | However, little attention has been paid to potentially changing dynamics when transferring a policy to the online setting, where performance can be up to 90% reduced for existing methods. In this paper we address this problem with Augmented World Models (AugWM). | Philip J Ball; Cong Lu; Jack Parker-Holder; Stephen Roberts |
| 59 | Regularized Online Allocation Problems: Fairness and Beyond | In this paper, we introduce the regularized online allocation problem, a variant that includes a non-linear regularizer acting on the total resource consumption. | Santiago Balseiro; Haihao Lu; Vahab Mirrokni |
| 60 | Predict Then Interpolate: A Simple Algorithm to Learn Stable Classifiers | We propose Predict then Interpolate (PI), a simple algorithm for learning correlations that are stable across environments. | Yujia Bao; Shiyu Chang; Regina Barzilay |
| 61 | Variational (Gradient) Estimate of The Score Function in Energy-based Latent Variable Models | This paper presents new estimates of the score function and its gradient with respect to the model parameters in a general energy-based latent variable model (EBLVM). | Fan Bao; Kun Xu; Chongxuan Li; Lanqing Hong; Jun Zhu; Bo Zhang |
| 62 | Compositional Video Synthesis with Action Graphs | To address this challenge, we propose to represent the actions in a graph structure called Action Graph and present the new "Action Graph To Video" synthesis task. | Amir Bar; Roei Herzig; Xiaolong Wang; Anna Rohrbach; Gal Chechik; Trevor Darrell; Amir Globerson |
| 63 | Approximating A Distribution Using Weight Queries | We propose an interactive algorithm that iteratively selects data set examples and performs corresponding weight queries. | Nadav Barak; Sivan Sabato |
| 64 | Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization | To understand the merits of this approach, we study the classification of a mixture of Gaussians, where the data corresponds to the node attributes of a stochastic block model. | Aseem Baranwal; Kimon Fountoulakis; Aukosh Jagannath |
| 65 | Training Quantized Neural Networks to Global Optimality Via Semidefinite Programming | In this work, we introduce a convex optimization strategy to train quantized NNs with polynomial activations. | Burak Bartan; Mert Pilanci |
| 66 | Beyond $\log^2(T)$ Regret for Decentralized Bandits in Matching Markets | We propose a phase-based algorithm, where in each phase, besides deleting the globally communicated dominated arms, the agents locally delete arms with which they collide often. | Soumya Basu; Karthik Abinav Sankararaman; Abishek Sankararaman |
| 67 | Optimal Thompson Sampling Strategies for Support-aware CVaR Bandits | In this paper we study a multi-arm bandit problem in which the quality of each arm is measured by the Conditional Value at Risk (CVaR) at some level $\alpha$ of the reward distribution. | Dorian Baudry; Romain Gautron; Emilie Kaufmann; Odalric Maillard |
| 68 | On Limited-Memory Subsampling Strategies for Bandits | Our first contribution is to show that a simple deterministic subsampling rule, proposed in the recent work of \citet{baudry2020sub} under the name of “last-block subsampling”, is asymptotically optimal in one-parameter exponential families. | Dorian Baudry; Yoan Russac; Olivier Cappé |
| 69 | Generalized Doubly Reparameterized Gradient Estimators | Here, we develop two generalizations of the DReGs estimator and show that they can be used to train conditional and hierarchical VAEs on image modelling tasks more effectively. | Matthias Bauer; Andriy Mnih |
70 | Directional Graph NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To overcome this limitation, we propose the first globally consistent anisotropic kernels for GNNs, allowing for graph convolutions that are defined according to topologicaly-derived directional flows. |
Dominique Beani; Saro Passaro; Vincent L?tourneau; Will Hamilton; Gabriele Corso; Pietro Li?; |

71 | Policy Analysis Using Synthetic Controls in Continuous-TimeRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a continuous-time alternative that models the latent counterfactual path explicitly using the formalism of controlled differential equations. |
Alexis Bellot; Mihaela Van Der Schaar; |

72 | Loss Surface Simplexes for Mode Connecting Volumes and Fast EnsemblingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we in fact demonstrate the existence of mode-connecting simplicial complexes that form multi-dimensional manifolds of low loss, connecting many independently trained models. |
Gregory Benton; Wesley Maddox; Sanae Lotfi; Andrew Gordon Gordon Wilson; |

73 | TFix: Learning to Fix Coding Errors with A Text-to-Text TransformerRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we address this challenge and present a new learning-based system, called TFix. |
Berkay Berabi; Jingxuan He; Veselin Raychev; Martin Vechev; |

74 | Learning Queueing Policies for Organ Transplantation Allocation Using Interpretable Counterfactual Survival AnalysisRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we develop a data-driven model for (real-time) organ allocation using observational data for transplant outcomes. Furthermore, we introduce a novel organ-allocation simulator to accurately test new policies. |
Jeroen Berrevoets; Ahmed Alaa; Zhaozhi Qian; James Jordon; Alexander E.S. Gimson; Mihaela Van Der Schaar; |

75 | Learning from Biased Data: A Semi-Parametric ApproachRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider risk minimization problems where the (source) distribution $P_S$ of the training observations $Z_1, \ldots, Z_n$ differs from the (target) distribution $P_T$ involved in the risk that one seeks to minimize. |
Patrice Bertail; Stephan Clémençon; Yannick Guyonvarch; Nathan Noiry; |

76 | Is Space-Time Attention All You Need for Video Understanding? Highlight: We present a convolution-free approach to video classification built exclusively on self-attention over space and time. |
Gedas Bertasius; Heng Wang; Lorenzo Torresani; |

77 | Confidence Scores Make Instance-dependent Label-noise Learning Possible. Highlight: To alleviate this issue, we introduce confidence-scored instance-dependent noise (CSIDN), where each instance-label pair is equipped with a confidence score. |
Antonin Berthon; Bo Han; Gang Niu; Tongliang Liu; Masashi Sugiyama; |

78 | Size-Invariant Graph Representations for Graph Classification Extrapolations. Highlight: In this work we consider an underexplored area of an otherwise rapidly developing field of graph representation learning: The task of out-of-distribution (OOD) graph classification, where train and test data have different distributions, with test data unavailable during training. |
Beatrice Bevilacqua; Yangze Zhou; Bruno Ribeiro; |

79 | Principal Bit Analysis: Autoencoding with Schur-Concave Loss. Highlight: We consider a linear autoencoder in which the latent variables are quantized, or corrupted by noise, and the constraint is Schur-concave in the set of latent variances. |
Sourbh Bhadane; Aaron B Wagner; Jayadev Acharya; |

80 | Lower Bounds on Cross-Entropy Loss in The Presence of Test-time Adversaries. Highlight: In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs. |
Arjun Nitin Bhagoji; Daniel Cullina; Vikash Sehwag; Prateek Mittal; |

81 | Additive Error Guarantees for Weighted Low Rank Approximation. Highlight: We study a natural greedy algorithm for weighted low rank approximation and develop a simple condition under which it yields bi-criteria approximation up to a small additive factor in the error. |
Aditya Bhaskara; Aravinda Kanchana Ruwanpathirana; Maheshakya Wijewardena; |

82 | Sample Complexity of Robust Linear Classification on Separated Data. Highlight: We consider the sample complexity of learning with adversarial robustness. |
Robi Bhattacharjee; Somesh Jha; Kamalika Chaudhuri; |

83 | Finding $k$ in Latent $k$-Polytope. Highlight: The first important contribution of this paper is to show that under *standard assumptions* $k$ equals the \INR of a *subset smoothed data matrix* defined from data generated from an $\LkP$. |
Chiranjib Bhattacharyya; Ravindran Kannan; Amit Kumar; |

84 | Non-Autoregressive Electron Redistribution Modeling for Reaction Prediction. Highlight: To address these issues, we devise a non-autoregressive learning paradigm that predicts reactions in one shot. |
Hangrui Bi; Hengyi Wang; Chence Shi; Connor Coley; Jian Tang; Hongyu Guo; |

85 | TempoRL: Learning When to Act. Highlight: To address this, we propose a proactive setting in which the agent selects not only an action in a state but also how long to commit to that action. |
André Biedenkapp; Raghu Rajan; Frank Hutter; Marius Lindauer; |

86 | Follow-the-Regularized-Leader Routes to Chaos in Routing Games. Highlight: We study the emergence of chaotic behavior of Follow-the-Regularized-Leader (FoReL) dynamics in games. |
Jakub Bielawski; Thiparat Chotibut; Fryderyk Falniowski; Grzegorz Kosiorowski; Michal Misiurewicz; Georgios Piliouras; |

87 | Neural Symbolic Regression That Scales. Highlight: In this paper, we introduce the first symbolic regression method that leverages large-scale pre-training. We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs. |
Luca Biggio; Tommaso Bendinelli; Alexander Neitz; Aurelien Lucchi; Giambattista Parascandolo; |

88 | Model Distillation for Revenue Optimization: Interpretable Personalized Pricing. Highlight: We present a novel, customized, prescriptive tree-based algorithm that distills knowledge from a complex black-box machine learning algorithm, segments customers with similar valuations and prescribes prices in such a way that maximizes revenue while maintaining interpretability. |
Max Biggs; Wei Sun; Markus Ettl; |

89 | Scalable Normalizing Flows for Permutation Invariant Densities. Highlight: In this work, we demonstrate how calculating the trace, a crucial step in this method, raises issues that occur both during training and inference, limiting its practicality. |
Marin Biloš; Stephan Günnemann; |

90 | Online Learning for Load Balancing of Unknown Monotone Resource Allocation Games. Highlight: To overcome this, we propose a simple algorithm that learns to shift the NE of the game to meet the total load constraints by adjusting the pricing coefficients in an online manner. |
Ilai Bistritz; Nicholas Bambos; |

91 | Low-Precision Reinforcement Learning: Running Soft Actor-Critic in Half Precision. Highlight: In this paper we consider continuous control with the state-of-the-art SAC agent and demonstrate that a naïve adaptation of low-precision methods from supervised learning fails. |
Johan Björck; Xiangyu Chen; Christopher De Sa; Carla P Gomes; Kilian Weinberger; |

92 | Multiplying Matrices Without Multiplying. Highlight: Consequently, the task of efficiently approximating matrix products has received significant attention. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. |
Davis Blalock; John Guttag; |

93 | One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning. Highlight: Inspired by game theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. |
Avrim Blum; Nika Haghtalab; Richard Lanas Phillips; Han Shao; |

94 | Black-box Density Function Estimation Using Recursive Partitioning. Highlight: We present a novel approach to Bayesian inference and general Bayesian computation that is defined through a sequential decision loop. |
Erik Bodin; Zhenwen Dai; Neill Campbell; Carl Henrik Ek; |

95 | Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks. Highlight: To overcome these limitations, we propose Message Passing Simplicial Networks (MPSNs), a class of models that perform message passing on simplicial complexes (SCs). |
Cristian Bodnar; Fabrizio Frasca; Yuguang Wang; Nina Otter; Guido F Montufar; Pietro Liò; Michael Bronstein; |

96 | The Hintons in Your Neural Network: A Quantum Field Theory View of Deep Learning. Highlight: In this work we develop a quantum field theory formalism for deep learning, where input signals are encoded in Gaussian states, a generalization of Gaussian processes which encode the agent’s uncertainty about the input signal. |
Roberto Bondesan; Max Welling; |

97 | Offline Contextual Bandits with Overparameterized Models. Highlight: We formally prove upper bounds on the regret of overparameterized value-based learning and lower bounds on the regret for policy-based algorithms. |
David Brandfonbrener; William Whitney; Rajesh Ranganath; Joan Bruna; |

98 | High-Performance Large-Scale Image Recognition Without Normalization. Highlight: In this work, we develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets. |
Andy Brock; Soham De; Samuel L Smith; Karen Simonyan; |

99 | Evaluating The Implicit Midpoint Integrator for Riemannian Hamiltonian Monte Carlo. Highlight: In this work, we examine the implicit midpoint integrator as an alternative to the generalized leapfrog integrator. |
James Brofos; Roy R Lederman; |

100 | Reinforcement Learning of Implicit and Explicit Control Flow Instructions. Highlight: We focus here on the problem of learning control flow that deviates from a strict step-by-step execution of instructions, that is, control flow that may skip forward over parts of the instructions or return backward to previously completed or skipped steps. |
Ethan Brooks; Janarthanan Rajendran; Richard L Lewis; Satinder Singh; |

101 | Machine Unlearning for Random Forests. Highlight: In this paper, we introduce data removal-enabled (DaRE) forests, a variant of random forests that enables the removal of training data with minimal retraining. |
Jonathan Brophy; Daniel Lowd; |

102 | Value Alignment Verification. Highlight: In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how to efficiently test whether the behavior of another agent is aligned with a human’s values? |
Daniel S Brown; Jordan Schneider; Anca Dragan; Scott Niekum; |

103 | Model-Free and Model-Based Policy Evaluation When Causality Is Uncertain. Highlight: We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite horizons when confounders are drawn iid each period. |
David A Bruns-Smith; |

104 | Narrow Margins: Classification, Margins and Fat Tails. Highlight: We investigate the case where this convergence property is not guaranteed to hold and show that it can be fully characterised by the distribution of error terms in the latent variable interpretation of linear classifiers. |
Francois Buet-Golfouse; |

105 | Differentially Private Correlation Clustering. Highlight: We propose an algorithm that achieves subquadratic additive error compared to the optimal cost. |
Mark Bun; Marek Elias; Janardhan Kulkarni; |

106 | Disambiguation of Weak Supervision Leading to Exponential Convergence Rates. Highlight: In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets. |
Vivien A Cabannes; Francis Bach; Alessandro Rudi; |

107 | Finite Mixture Models Do Not Reliably Learn The Number of Components. Highlight: In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data. |
Diana Cai; Trevor Campbell; Tamara Broderick; |

108 | A Theory of Label Propagation for Subpopulation Shift. Highlight: In this work, we propose a provably effective framework based on label propagation by using an input consistency loss. |
Tianle Cai; Ruiqi Gao; Jason Lee; Qi Lei; |

109 | Lenient Regret and Good-Action Identification in Gaussian Process Bandits. Highlight: In this paper, we study the problem of Gaussian process (GP) bandits under relaxed optimization criteria stating that any function value above a certain threshold is “good enough”. |
Xu Cai; Selwyn Gomes; Jonathan Scarlett; |

110 | A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization. Highlight: In this paper, we propose a novel algorithm, coined ZO-BCD, that exhibits favorable overall query complexity and has a much smaller per-iteration computational complexity. |
Hanqin Cai; Yuchen Lou; Daniel Mckenzie; Wotao Yin; |

111 | GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training. Highlight: In this paper, we study what normalization is effective for Graph Neural Networks (GNNs). |
Tianle Cai; Shengjie Luo; Keyulu Xu; Di He; Tie-Yan Liu; Liwei Wang; |

112 | On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization. Highlight: In this paper, we consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), which can be viewed as a non-Bayesian Gaussian process bandit problem. |
Xu Cai; Jonathan Scarlett; |

113 | High-dimensional Experimental Design and Kernel Bandits. Highlight: In this work, we propose a rounding procedure that frees $N$ of any dependence on the dimension $d$, while achieving nearly the same performance guarantees as existing rounding procedures. |
Romain Camilleri; Kevin Jamieson; Julian Katz-Samuels; |

114 | A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization. Highlight: Instead, we propose to optimize an objective that quantifies directly the speed of convergence to the target distribution. |
Andrew Campbell; Wenlong Chen; Vincent Stimper; Jose Miguel Hernandez-Lobato; Yichuan Zhang; |

115 | Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections. Highlight: In this paper, we focus on the so-called ‘implicit effect’ of GNIs, which is the effect of the injected noise on the dynamics of SGD. |
Alexander Camuto; Xiaoyu Wang; Lingjiong Zhu; Chris Holmes; Mert Gurbuzbalaban; Umut Simsekli; |

116 | Fold2Seq: A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design. Highlight: To overcome these challenges, we propose Fold2Seq, a novel transformer-based generative framework for designing protein sequences conditioned on a specific target fold. |
Yue Cao; Payel Das; Vijil Chenthamarakshan; Pin-Yu Chen; Igor Melnyk; Yang Shen; |

117 | Learning from Similarity-Confidence Data. Highlight: In this paper, we investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data, where only unlabeled data pairs equipped with confidence that illustrates their degree of similarity (two examples are similar if they belong to the same class) are needed for training a discriminative binary classifier. |
Yuzhou Cao; Lei Feng; Yitian Xu; Bo An; Gang Niu; Masashi Sugiyama; |

118 | Parameter-free Locally Accelerated Conditional Gradients. Highlight: We remove this limitation by introducing a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees. |
Alejandro Carderera; Jelena Diakonikolas; Cheuk Yin Lin; Sebastian Pokutta; |

119 | Optimizing Persistent Homology Based Functions. Highlight: Building on real analytic geometry arguments, we propose a general framework that allows us to define and compute gradients for persistence-based functions in a very simple way. |
Mathieu Carriere; Frederic Chazal; Marc Glisse; Yuichi Ike; Hariprasad Kannan; Yuhei Umeda; |

120 | Online Policy Gradient for Model Free Learning of Linear Quadratic Regulators with $\sqrt{T}$ Regret. Highlight: We present the first model-free algorithm that achieves similar regret guarantees. |
Asaf B Cassel; Tomer Koren; |

121 | Multi-Receiver Online Bayesian Persuasion. Highlight: We study, for the first time, an online Bayesian persuasion setting with multiple receivers. |
Matteo Castiglioni; Alberto Marchesi; Andrea Celli; Nicola Gatti; |

122 | Marginal Contribution Feature Importance – An Axiomatic Approach for Explaining Data. Highlight: Therefore, we develop a set of axioms to capture properties expected from a feature importance score when explaining data and prove that there exists only one score that satisfies all of them, the Marginal Contribution Feature Importance (MCI). |
Amnon Catav; Boyang Fu; Yazeed Zoabi; Ahuva Libi Weiss Meilik; Noam Shomron; Jason Ernst; Sriram Sankararaman; Ran Gilad-Bachrach; |

123 | Disentangling Syntax and Semantics in The Brain with Deep Networks. Highlight: Overall, this study introduces a versatile framework to isolate, in the brain activity, the distributed representations of linguistic constructs. |
Charlotte Caucheteux; Alexandre Gramfort; Jean-Remi King; |

124 | Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees. Highlight: We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes. |
L. Elisa Celis; Lingxiao Huang; Vijay Keswani; Nisheeth K. Vishnoi; |

125 | Best Model Identification: A Rested Bandit Formulation. Highlight: We introduce and analyze a best arm identification problem in the rested bandit setting, wherein arms are themselves learning algorithms whose expected losses decrease with the number of times the arm has been played. |
Leonardo Cella; Massimiliano Pontil; Claudio Gentile; |

126 | Revisiting Rainbow: Promoting More Insightful and Inclusive Deep Reinforcement Learning Research. Highlight: In this work we argue that, despite the community’s emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. |
Johan Samir Obando Ceron; Pablo Samuel Castro; |

127 | Learning Routines for Effective Off-Policy Reinforcement Learning. Highlight: We propose a novel framework for reinforcement learning that effectively lifts such constraints. |
Edoardo Cetin; Oya Celiktutan; |

128 | Learning Node Representations Using Stationary Flow Prediction on Large Payment and Cash Transaction Networks. Highlight: In this work, the gradient model is extended to a gated version and we prove that it, unlike the gradient model, is a universal approximator for flows on graphs. |
Ciwan Ceylan; Salla Franzén; Florian T. Pokorny; |

129 | GRAND: Graph Neural Diffusion. Highlight: We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. |
Ben Chamberlain; James Rowbottom; Maria I Gorinova; Michael Bronstein; Stefan Webb; Emanuele Rossi; |

130 | HoroPCA: Hyperbolic Dimensionality Reduction Via Horospherical Projections. Highlight: We generalize each of these concepts to the hyperbolic space and propose HoroPCA, a method for hyperbolic dimensionality reduction. |
Ines Chami; Albert Gu; Dat P Nguyen; Christopher Re; |

131 | Goal-Conditioned Reinforcement Learning with Imagined Subgoals. Highlight: In this work, we propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks. |
Elliot Chane-Sane; Cordelia Schmid; Ivan Laptev; |

132 | Locally Private K-Means in One Round. Highlight: We provide an approximation algorithm for k-means clustering in the *one-round* (aka *non-interactive*) local model of differential privacy (DP). |
Alisa Chang; Badih Ghazi; Ravi Kumar; Pasin Manurangsi; |

133 | Modularity in Reinforcement Learning Via Algorithmic Independence in Credit Assignment. Highlight: We introduce what we call the modularity criterion for testing whether a learning algorithm satisfies this constraint by performing causal analysis on the algorithm itself. |
Michael Chang; Sid Kaushik; Sergey Levine; Tom Griffiths; |

134 | Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection. Highlight: We address object-level resampling by introducing an object-centric sampling strategy based on a dynamic, episodic memory bank. |
Nadine Chang; Zhiding Yu; Yu-Xiong Wang; Animashree Anandkumar; Sanja Fidler; Jose M Alvarez; |

135 | DeepWalking Backwards: From Embeddings Back to Graphs. Highlight: Focusing on a variant of the popular DeepWalk method (Perozzi et al., 2014; Qiu et al., 2018), we present algorithms for accurate embedding inversion – i.e., from the low-dimensional embedding of a graph $G$, we can find a graph $\tilde G$ with a very similar embedding. |
Sudhanshu Chanpuriya; Cameron Musco; Konstantinos Sotiropoulos; Charalampos Tsourakakis; |

136 | Differentiable Spatial Planning Using Transformers. Highlight: We propose Spatial Planning Transformers (SPT), which, given an obstacle map, learn to generate actions by planning over long-range spatial dependencies, unlike prior data-driven planners that propagate information locally via convolutional structure in an iterative manner. |
Devendra Singh Chaplot; Deepak Pathak; Jitendra Malik; |

137 | Solving Challenging Dexterous Manipulation Tasks With Trajectory Optimisation and Reinforcement Learning. Highlight: In this work, we first introduce a suite of challenging simulated manipulation tasks where current reinforcement learning and trajectory optimisation techniques perform poorly. |
Henry J Charlesworth; Giovanni Montana; |

138 | Classification with Rejection Based on Cost-sensitive Classification. Highlight: In this paper, based on the relationship between classification with rejection and cost-sensitive classification, we propose a novel method of classification with rejection by learning an ensemble of cost-sensitive classifiers, which satisfies all the following properties: (i) it can avoid estimating class-posterior probabilities, resulting in improved classification accuracy. |
Nontawat Charoenphakdee; Zhenghang Cui; Yivan Zhang; Masashi Sugiyama; |

139 | Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills. Highlight: In particular, we propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset. |
Yevgen Chebotar; Karol Hausman; Yao Lu; Ted Xiao; Dmitry Kalashnikov; Jacob Varley; Alex Irpan; Benjamin Eysenbach; Ryan C Julian; Chelsea Finn; Sergey Levine; |

140 | Unified Robust Semi-Supervised Variational Autoencoder. Highlight: In this paper, we propose a novel noise-robust semi-supervised deep generative model that jointly tackles noisy labels and outliers in a unified robust semi-supervised variational autoencoder (URSVAE). |
Xu Chen; |

141 | Unsupervised Learning of Visual 3D Keypoints for Control. Highlight: In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. |
Boyuan Chen; Pieter Abbeel; Deepak Pathak; |

142 | Integer Programming for Causal Structure Learning in The Presence of Latent Variables. Highlight: We propose a novel exact score-based method that solves an integer programming (IP) formulation and returns a score-maximizing ancestral ADMG for a set of continuous variables that follow a multivariate Gaussian distribution. |
Rui Chen; Sanjeeb Dash; Tian Gao; |

143 | Improved Corruption Robust Algorithms for Episodic Reinforcement Learning. Highlight: We propose new algorithms which, compared to the existing results of Lykouris et al. (2020), achieve strictly better regret bounds in terms of total corruptions for the tabular setting. |
Yifang Chen; Simon Du; Kevin Jamieson; |

144 | Scalable Computations of Wasserstein Barycenter Via Input Convex Neural Networks. Highlight: In this work, we present a novel scalable algorithm to approximate Wasserstein barycenters, aiming at high-dimensional applications in machine learning. |
Yongxin Chen; Jiaojiao Fan; Amirhossein Taghvaei; |

145 | Neural Feature Matching in Implicit 3D Representations. Highlight: While the benefits from the global latent space do not correspond to explicit points at local level, we propose to track the continuous point trajectory by matching implicit features with the latent code interpolating between shapes, from which we corroborate the hierarchical functionality of the deep implicit functions, where early layers map the latent code to fitting the coarse shape structure, and deeper layers further refine the shape details. |
Yunlu Chen; Basura Fernando; Hakan Bilen; Thomas Mensink; Efstratios Gavves; |

146 | Decentralized Riemannian Gradient Descent on The Stiefel Manifold. Highlight: We present a decentralized Riemannian stochastic gradient method (DRSGD) with the convergence rate of $\mathcal{O}(1/\sqrt{K})$ to a stationary point. |
Shixiang Chen; Alfredo Garcia; Mingyi Hong; Shahin Shahrampour; |

147 | Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation. Highlight: In this paper, we propose a novel attention network, named *self-modulating attention*, that models the complex and non-linearly evolving dynamic user preferences. |
Chao Chen; Haoyu Geng; Nianzu Yang; Junchi Yan; Daiyue Xue; Jianping Yu; Xiaokang Yang; |

148 | Mandoline: Model Evaluation Under Distribution Shift. Highlight: Our key insight is that practitioners may have prior knowledge about the ways in which the distribution shifts, which we can use to better guide the importance weighting procedure. |
Mayee Chen; Karan Goel; Nimit S Sohoni; Fait Poms; Kayvon Fatahalian; Christopher Re; |

149 | Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation. Highlight: In this work, we provide an expression for the likelihood of a graph generative model and show that its calculation is closely related to the problem of graph automorphism. |
Xiaohui Chen; Xu Han; Jiajing Hu; Francisco Ruiz; Liping Liu; |

150 | CARTL: Cooperative Adversarially-Robust Transfer Learning. Highlight: To address such a problem, we propose a novel cooperative adversarially-robust transfer learning (CARTL) method that pre-trains the model via feature distance minimization and fine-tunes the pre-trained model with non-expansive fine-tuning for target-domain tasks. |
Dian Chen; Hongxin Hu; Qian Wang; Li Yinli; Cong Wang; Chao Shen; Qi Li; |

151 | Finding The Stochastic Shortest Path with Low Regret: The Adversarial Cost and Unknown Transition Case. Highlight: Specifically, we develop algorithms that achieve $O(\sqrt{S^2ADT_\star K})$ regret for the full-information setting and $O(\sqrt{S^3A^2DT_\star K})$ regret for the bandit feedback setting, where $D$ is the diameter, $T_\star$ is the expected hitting time of the optimal policy, $S$ is the number of states, $A$ is the number of actions, and $K$ is the number of episodes. |
Liyu Chen; Haipeng Luo; |

152 | SpreadsheetCoder: Formula Prediction from Semi-structured Context. Highlight: In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data. |
Xinyun Chen; Petros Maniatis; Rishabh Singh; Charles Sutton; Hanjun Dai; Max Lin; Denny Zhou; |

153 | Large-Margin Contrastive Learning with Distance Polarization RegularizerRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To this end, we propose *large-margin contrastive learning* (LMCL) with a *distance polarization regularizer*, motivated by the distribution characteristic of pairwise distances in *metric learning*. |
Shuo Chen; Gang Niu; Chen Gong; Jun Li; Jian Yang; Masashi Sugiyama; |

154 | Z-GCNETs: Time Zigzags at Graph Convolutional Networks for Time Series ForecastingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: As convergence of these two emerging ideas, we propose to enhance DL architectures with the most salient time-conditioned topological information of the data and introduce the concept of zigzag persistence into time-aware graph convolutional networks (GCNs). |
Yuzhou Chen; Ignacio Segovia; Yulia R. Gel; |

155 | A Unified Lottery Ticket Hypothesis for Graph Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To this end, this paper first presents a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights, for effectively accelerating GNN inference on large-scale graphs. Leveraging this new tool, we further generalize the recently popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network, which can be jointly identified from the original GNN and the full dense graph by iteratively applying UGS. |
Tianlong Chen; Yongduo Sui; Xuxi Chen; Aston Zhang; Zhangyang Wang; |

156 | Network Inference and Influence Maximization from SamplesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. |
Wei Chen; Xiaoming Sun; Jialin Zhang; Zhijie Zhang; |

157 | Data-driven Prediction of General Hamiltonian Dynamics Via Learning Exactly-Symplectic MapsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider the learning and prediction of nonlinear time series generated by a latent symplectic map. |
Renyi Chen; Molei Tao; |

158 | Analysis of Stochastic Lanczos Quadrature for Spectrum ApproximationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present an error analysis for stochastic Lanczos quadrature (SLQ). |
Tyler Chen; Thomas Trogdon; Shashanka Ubaru; |

159 | Large-Scale Multi-Agent Deep FBSDEsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we present a scalable deep learning framework for finding Markovian Nash Equilibria in multi-agent stochastic games using fictitious play. |
Tianrong Chen; Ziyi O Wang; Ioannis Exarchos; Evangelos Theodorou; |

160 | Representation Subspace Distance for Domain Adaptation RegressionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Based on this finding, we propose to close the domain gap through orthogonal bases of the representation spaces, which are free from feature scaling. |
Xinyang Chen; Sinan Wang; Jianmin Wang; Mingsheng Long; |

161 | Overcoming Catastrophic Forgetting By Bayesian Generative RegularizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to the Bayesian inference framework. |
Pei-Hung Chen; Wei Wei; Cho-Jui Hsieh; Bo Dai; |

162 | Cyclically Equivariant Neural Decoders for Cyclic CodesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property. |
Xiangyu Chen; Min Ye; |

163 | A Receptor Skeleton for Capsule Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper presents a new capsule structure, which contains a set of optimizable receptors, and devises a transmitter on the capsule’s representation. |
Jintai Chen; Hongyun Yu; Chengde Qian; Danny Z Chen; Jian Wu; |

164 | Accelerating Gossip SGD with Periodic Global AveragingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper introduces Gossip-PGA, which adds Periodic Global Averaging to accelerate Gossip SGD. |
Yiming Chen; Kun Yuan; Yingya Zhang; Pan Pan; Yinghui Xu; Wotao Yin; |
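The Gossip-PGA idea can be sketched in a few lines: each worker mixes parameters with its graph neighbors every round, and every few rounds all workers run one exact global average. Everything below (ring topology, mixing weight, period) is an illustrative toy, not the authors' implementation:

```python
import numpy as np

def gossip_pga(params, neighbors, mixing_weight=0.5, step=0, period=4):
    """One communication round: gossip mixing plus Periodic Global Averaging.

    params: list of per-worker parameter vectors (np.ndarray).
    neighbors: dict mapping worker index -> list of neighbor indices.
    Every `period` steps all workers average globally; otherwise each
    worker mixes only with its graph neighbors.
    """
    n = len(params)
    if step % period == period - 1:
        # periodic global averaging: exact consensus this round
        mean = sum(params) / n
        return [mean.copy() for _ in range(n)]
    new = []
    for i in range(n):
        nbrs = neighbors[i]
        mixed = (1 - mixing_weight) * params[i]
        mixed = mixed + mixing_weight * sum(params[j] for j in nbrs) / len(nbrs)
        new.append(mixed)
    return new

# ring of 4 workers holding values 0, 1, 2, 3 (mean = 1.5)
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
ps = [np.array([float(i)]) for i in range(4)]
for t in range(8):
    ps = gossip_pga(ps, nbrs, step=t)
```

After the global-averaging round at step 3, every worker holds the exact mean, which pure gossip would only approach asymptotically.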

165 | ActNN: Reducing Training Memory Footprint Via 2-Bit Activation Compressed TrainingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose ActNN, a memory-efficient training framework that stores randomly quantized activations for back propagation. |
Jianfei Chen; Lianmin Zheng; Zhewei Yao; Dequan Wang; Ion Stoica; Michael Mahoney; Joseph Gonzalez; |
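ActNN's core trick, storing low-bit activations for the backward pass, can be illustrated with a tiny stochastic 2-bit quantizer. The real system uses per-group quantization and substantial engineering beyond this; the functions below are a hypothetical sketch:

```python
import numpy as np

def quantize_2bit(x, rng):
    """Stochastically quantize a tensor to 2 bits (4 levels).

    Returns integer codes plus the (offset, scale) needed to dequantize.
    Stochastic rounding keeps the quantizer unbiased in expectation.
    """
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 3.0 if hi > lo else 1.0    # 4 levels -> 3 intervals
    normalized = (x - lo) / scale
    floor = np.floor(normalized)
    prob = normalized - floor                      # chance of rounding up
    codes = floor + (rng.random(x.shape) < prob)
    return np.clip(codes, 0, 3).astype(np.uint8), lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float64) * scale + lo

rng = np.random.default_rng(0)
acts = rng.normal(size=1000)                      # stand-in activations
codes, lo, scale = quantize_2bit(acts, rng)
recon = dequantize(codes, lo, scale)
```

Each element's error is bounded by one quantization step, and the stochastic rounding makes the average reconstruction error close to zero.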

166 | SPADE: A Spectral Method for Black-Box Adversarial Robustness EvaluationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: By leveraging the generalized Courant-Fischer theorem, we propose a SPADE score for evaluating the adversarial robustness of a given model, which is proved to be an upper bound of the best Lipschitz constant under the manifold setting. |
Wuxinlin Cheng; Chenhui Deng; Zhiqiang Zhao; Yaohui Cai; Zhiru Zhang; Zhuo Feng; |

167 | Self-supervised and Supervised Joint Training for Resource-rich Machine TranslationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a joint training approach, F2-XEnDec, to combine self-supervised and supervised learning to optimize NMT models. |
Yong Cheng; Wei Wang; Lu Jiang; Wolfgang Macherey; |

168 | Exact Optimization of Conformal Predictors Via Incremental and Decremental LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we show that it is possible to speed up a CP classifier considerably, by studying it in conjunction with the underlying ML method, and by exploiting incremental and decremental learning. |
Giovanni Cherubin; Konstantinos Chatzikokolakis; Martin Jaggi; |

169 | Problem Dependent View on Structured Thresholding Bandit ProblemsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We investigate the *problem dependent regime* in the stochastic *Thresholding Bandit problem* (TBP) under several *shape constraints*. |
James Cheshire; Pierre Menard; Alexandra Carpentier; |

170 | Online Optimization in Games Via Control Theory: Connecting Regret, Passivity and Poincaré RecurrenceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a novel control-theoretic understanding of online optimization and learning in games, via the notion of passivity. |
Yun Kuen Cheung; Georgios Piliouras; |

171 | Understanding and Mitigating Accuracy Disparity in RegressionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study the accuracy disparity problem in regression. |
Jianfeng Chi; Yuan Tian; Geoffrey J. Gordon; Han Zhao; |

172 | Private Alternating Least Squares: Practical Private Matrix Completion with Tighter RatesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study the problem of differentially private (DP) matrix completion under user-level privacy. |
Steve Chien; Prateek Jain; Walid Krichene; Steffen Rendle; Shuang Song; Abhradeep Thakurta; Li Zhang; |

173 | Light RUMsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we consider the question of the (lossy) compressibility of RUMs on a universe of size $n$, i.e., the minimum number of bits required to approximate the winning probabilities of each slate. |
Flavio Chierichetti; Ravi Kumar; Andrew Tomkins; |

174 | Parallelizing Legendre Memory Unit TrainingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Here we leverage the linear time-invariant (LTI) memory component of the LMU to construct a simplified variant that can be parallelized during training (and yet executed as an RNN during inference), resulting in up to 200 times faster training. |
Narsimha Reddy Chilkuri; Chris Eliasmith; |
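The observation exploited here, that a linear time-invariant (LTI) recurrence can be unrolled into a convolution with its impulse response and hence evaluated in parallel at training time, can be checked on a toy system (random `A` and `B` below, not the LMU's actual state-space matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 20
A = 0.5 * rng.standard_normal((d, d)) / np.sqrt(d)   # small spectral radius
B = rng.standard_normal((d, 1))
u = rng.standard_normal(T)

# sequential (RNN-style) evaluation of x_t = A x_{t-1} + B u_t
x = np.zeros((d, 1))
seq = []
for t in range(T):
    x = A @ x + B * u[t]
    seq.append(x.ravel().copy())
seq = np.array(seq)

# parallel evaluation: x_t = sum_k A^k B u_{t-k}, a convolution with the
# impulse response H_k = A^k B, which needs no sequential dependence
H = np.stack([np.linalg.matrix_power(A, k) @ B for k in range(T)])
par = np.array([sum(H[k].ravel() * u[t - k] for k in range(t + 1))
                for t in range(T)])

assert np.allclose(seq, par)
```

The convolutional form is what lets the memory component be trained in parallel while still being executed step-by-step as an RNN at inference.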

175 | Quantifying and Reducing Bias in Maximum Likelihood Estimation of Structured AnomaliesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we demonstrate that in the normal means setting, the bias of the MLE depends on the size of the anomaly family. |
Uthsav Chitra; Kimberly Ding; Jasper C.H. Lee; Benjamin J Raphael; |

176 | Robust Learning-Augmented Caching: An Experimental StudyRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We show that a straightforward method – blindly following either a predictor or a classical robust algorithm, and switching whenever one becomes worse than the other – has only a low overhead over a well-performing predictor, while competing with classical methods when the coupled predictor fails, thus providing a cheap worst-case insurance. |
Jakub Chledowski; Adam Polak; Bartosz Szabucki; Konrad Tomasz Zolna; |

177 | Unifying Vision-and-Language Tasks Via Text GenerationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To alleviate these hassles, in this work, we propose a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation, where our models learn to generate labels in text based on the visual and textual inputs. |
Jaemin Cho; Jie Lei; Hao Tan; Mohit Bansal; |

178 | Learning from Nested Data with Ornstein Auto-EncodersRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: After identifying several issues with RIOAE, we present the product-space OAE (PSOAE) that minimizes a tighter upper bound of the distance and achieves orthogonality in the representation space. |
Youngwon Choi; Sungdong Lee; Joong-Ho Won; |

179 | Variational Empowerment As Representation Learning for Goal-Conditioned Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we discuss how these two approaches — goal-conditioned RL (GCRL) and MI-based RL — can be generalized into a single family of methods, interpreting mutual information maximization and variational empowerment as representation learning methods that acquire functionally aware state representations for goal reaching. |
Jongwook Choi; Archit Sharma; Honglak Lee; Sergey Levine; Shixiang Shane Gu; |

180 | Label-Only Membership Inference AttacksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Whereas current attack methods all require access to the model’s predicted confidence score, we introduce a label-only attack that instead evaluates the robustness of the model’s predicted (hard) labels under perturbations of the input, to infer membership. |
Christopher A. Choquette-Choo; Florian Tramer; Nicholas Carlini; Nicolas Papernot; |
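The label-only idea can be sketched as follows: score a point by how often small input perturbations leave the hard label unchanged, since training points tend to sit further from the decision boundary. The toy version below, with Gaussian noise and a 1-D threshold classifier, only illustrates the signal, not the authors' full attack:

```python
import numpy as np

def label_flip_score(predict_label, x, n_perturb=50, sigma=0.1, rng=None):
    """Label-only robustness proxy for membership inference.

    predict_label: function mapping an input array to a hard label.
    Returns the fraction of Gaussian perturbations of x whose predicted
    label matches that of x itself; a higher score (more robust label)
    suggests x was a training point.
    """
    rng = rng or np.random.default_rng(0)
    base = predict_label(x)
    hits = 0
    for _ in range(n_perturb):
        if predict_label(x + sigma * rng.standard_normal(x.shape)) == base:
            hits += 1
    return hits / n_perturb

# toy model: a 1-D threshold classifier with its boundary at 0
clf = lambda x: int(x[0] > 0.0)
deep_inside = label_flip_score(clf, np.array([2.0]))    # far from boundary
near_boundary = label_flip_score(clf, np.array([0.01]))
```

The point far from the boundary keeps its label under essentially every perturbation, while the near-boundary point flips about half the time.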

181 | Modeling Hierarchical Structures with Continuous Recursive Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose Continuous Recursive Neural Network (CRvNN) as a backpropagation-friendly alternative to address the aforementioned limitations. |
Jishnu Ray Chowdhury; Cornelia Caragea; |

182 | Scaling Multi-Agent Reinforcement Learning with Selective Parameter SharingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel method to automatically identify agents which may benefit from sharing parameters by partitioning them based on their abilities and goals. |
Filippos Christianos; Georgios Papoudakis; Muhammad A Rahman; Stefano V Albrecht; |

183 | Beyond Variance Reduction: Understanding The True Impact of Baselines on Policy OptimizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we demonstrate that the standard view is too limited for bandit and RL problems. |
Wesley Chung; Valentin Thomas; Marlos C. Machado; Nicolas Le Roux; |

184 | First-Order Methods for Wasserstein Distributionally Robust MDPRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a framework for solving Distributionally robust MDPs via first-order methods, and instantiate it for several types of Wasserstein ambiguity sets. |
Julien Grand Clement; Christian Kroer; |

185 | Phasic Policy GradientRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce Phasic Policy Gradient (PPG), a reinforcement learning framework which modifies traditional on-policy actor-critic methods by separating policy and value function training into distinct phases. |
Karl W Cobbe; Jacob Hilton; Oleg Klimov; John Schulman; |

186 | Riemannian Convex Potential MapsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose and study a class of flows that uses convex potentials from Riemannian optimal transport. |
Samuel Cohen; Brandon Amos; Yaron Lipman; |

187 | Scaling Properties of Deep Residual NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. |
Alain-Sam Cohen; Rama Cont; Alain Rossier; Renyuan Xu; |

188 | Differentially-Private Clustering of Easy InstancesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work we aim at providing simple, implementable differentially private clustering algorithms when the data is "easy," e.g., when there exists a significant separation between the clusters. |
Edith Cohen; Haim Kaplan; Yishay Mansour; Uri Stemmer; Eliad Tsfadia; |

189 | Improving Ultrametrics Embeddings Through CoresetsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We improve the above result, sharpening the guarantee from $5c$ to $\sqrt{2}c+\varepsilon$ while achieving the same asymptotic running time. |
Vincent Cohen-Addad; Rémi De Joannis De Verclos; Guillaume Lagarde; |

190 | Correlation Clustering in Constant Many Parallel RoundsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work we propose a massively parallel computation (MPC) algorithm for this problem that is considerably faster than prior work. |
Vincent Cohen-Addad; Silvio Lattanzi; Slobodan Mitrovic; Ashkan Norouzi-Fard; Nikos Parotsidis; Jakub Tarnawski; |

191 | Concentric Mixtures of Mallows Models for Top-$k$ Rankings: Sampling and IdentifiabilityRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study mixtures of two Mallows models for top-$k$ rankings with equal location parameters but with different scale parameters (a mixture of concentric Mallows models). |
Fabien Collas; Ekhine Irurozki; |

192 | Exploiting Shared Representations for Personalized Federated LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client. |
Liam Collins; Hamed Hassani; Aryan Mokhtari; Sanjay Shakkottai; |

193 | Differentiable Particle Filtering Via Entropy-Regularized Optimal TransportRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results. |
Adrien Corenflos; James Thornton; George Deligiannidis; Arnaud Doucet; |

194 | Fairness and Bias in Online SelectionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We address the issues of fairness and bias in online selection by introducing multi-color versions of the classic secretary and prophet problem. |
Jose Correa; Andres Cristi; Paul Duetting; Ashkan Norouzi-Fard; |

195 | Relative Deviation Margin BoundsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a series of new and more favorable margin-based learning guarantees that depend on the empirical margin loss of a predictor. |
Corinna Cortes; Mehryar Mohri; Ananda Theertha Suresh; |

196 | A Discriminative Technique for Multiple-Source AdaptationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a new discriminative technique for the multiple-source adaptation (MSA) problem. |
Corinna Cortes; Mehryar Mohri; Ananda Theertha Suresh; Ningshan Zhang; |

197 | Characterizing Fairness Over The Set of Good Models Under Selective LabelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or “the set of good models.” |
Amanda Coston; Ashesh Rambachan; Alexandra Chouldechova; |

198 | Two-way Kernel Matrix Puncturing: Towards Resource-efficient PCA and Spectral ClusteringRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: The article introduces an elementary cost and storage reduction method for spectral clustering and principal component analysis. |
Romain Couillet; Florent Chatelain; Nicolas Le Bihan; |
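The puncturing idea, randomly zeroing Gram-matrix entries to cut cost and storage while keeping the spectral information needed for PCA or spectral clustering, can be demonstrated on a toy two-cluster problem. Cluster separation, keep probability, and rescaling below are illustrative choices, not the article's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, keep = 100, 50, 0.5

# two well-separated clusters; cluster labels can be read off the sign of
# the Gram matrix's leading eigenvector
labels = np.repeat([1.0, -1.0], n // 2)
mu = np.full(d, 3.0 / np.sqrt(d))                  # class mean of norm 3
X = labels[:, None] * mu + rng.standard_normal((n, d))
K = X @ X.T

# puncturing: keep each off-diagonal entry with probability `keep`
# (symmetrically), rescaled so the expectation is unchanged
mask = rng.random((n, n)) < keep
mask = np.triu(mask, 1)
mask = mask + mask.T
K_punct = np.where(mask, K / keep, 0.0)
np.fill_diagonal(K_punct, np.diag(K))

# the leading eigenvector of the punctured matrix still separates clusters
vals, vecs = np.linalg.eigh(K_punct)
pred = np.sign(vecs[:, -1])
acc = max(np.mean(pred == labels), np.mean(pred == -labels))
```

Roughly half the off-diagonal entries are discarded, yet the clustering read from the top eigenvector remains accurate.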

199 | Explaining Time Series Predictions with Dynamic MasksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To address these challenges, we propose dynamic masks (Dynamask). |
Jonathan Crabbé; Mihaela Van Der Schaar; |

200 | Generalised Lipschitz Regularisation Equals Distributional RobustnessRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In response, we have been able to significantly sharpen existing results regarding the relationship between distributional robustness and regularisation, when defined with a transportation cost uncertainty set. |
Zac Cranko; Zhan Shi; Xinhua Zhang; Richard Nock; Simon Kornblith; |

201 | Environment Inference for Invariant LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. |
Elliot Creager; Joern-Henrik Jacobsen; Richard Zemel; |

202 | Mind The Box: $l_1$-APGD for Sparse Adversarial Attacks on Image ClassifiersRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We show that when taking into account also the image domain $[0,1]^d$, established $l_1$-projected gradient descent (PGD) attacks are suboptimal as they do not consider that the effective threat model is the intersection of the $l_1$-ball and $[0,1]^d$. |
Francesco Croce; Matthias Hein; |
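To see why the intersection of the $l_1$-ball and $[0,1]^d$ matters, here is a hedged sketch: an exact Euclidean projection onto the $l_1$-ball (Duchi-style sorting algorithm), combined with the image box via alternating projections. The paper works with an exact projection onto the intersection; the alternating scheme below is only an approximation for illustration:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto the l1 ball via the sorting algorithm."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.max(ks[u - (css - radius) / ks > 0])
    theta = (css[rho - 1] - radius) / rho
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def project_l1_and_box(x, delta, radius, lo=0.0, hi=1.0, iters=50):
    """Approximate projection of perturbation `delta` onto the set
    {d : ||d||_1 <= radius and x + d in [lo, hi]^dim} by alternating the
    two individual projections (box clipping never increases the l1 norm,
    so the result is feasible for both constraints)."""
    d = delta.copy()
    for _ in range(iters):
        d = project_l1_ball(d, radius)
        d = np.clip(x + d, lo, hi) - x
    return d

x = np.array([0.05, 0.95, 0.5])
d = project_l1_and_box(x, np.array([-0.4, 0.4, 0.3]), radius=0.5)
```

Note how the box constraint reshapes the perturbation near the pixels at 0.05 and 0.95, which is exactly the effect the paper shows plain $l_1$-PGD ignores.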

203 | Parameterless Transductive Feature Re-representation for Few-Shot LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a parameterless transductive feature re-representation framework that differs from all existing solutions from the following perspectives. |
Wentao Cui; Yuhong Guo; |

204 | Randomized Algorithms for Submodular Function Maximization with A $k$-System ConstraintRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study the problem of non-negative submodular function maximization subject to a $k$-system constraint, which generalizes many other important constraints in submodular optimization such as cardinality constraint, matroid constraint, and $k$-extendible system constraint. |
Shuang Cui; Kai Han; Tianshuai Zhu; Jing Tang; Benwei Wu; He Huang; |

205 | GBHT: Gradient Boosting Histogram Transform for Density EstimationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a density estimation algorithm called *Gradient Boosting Histogram Transform* (GBHT), where we adopt the *Negative Log Likelihood* as the loss function to make the boosting procedure available for the unsupervised tasks. |
Jingyi Cui; Hanyuan Hang; Yisen Wang; Zhouchen Lin; |

206 | ProGraML: A Graph-based Program Representation for Data Flow Analysis and Compiler OptimizationsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose ProGraML – Program Graphs for Machine Learning – a language-independent, portable representation of program semantics. |
Chris Cummins; Zacharias V. Fisches; Tal Ben-Nun; Torsten Hoefler; Michael F P O'Boyle; Hugh Leather; |

207 | Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem while attaining near-optimal sample complexity guarantees. |
Sebastian Curi; Ilija Bogunovic; Andreas Krause; |

208 | Quantifying Availability and Discovery in Recommender Systems Via Stochastic ReachabilityRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we consider how preference models in interactive recommendation systems determine the availability of content and users’ opportunities for discovery. |
Mihaela Curmei; Sarah Dean; Benjamin Recht; |

209 | Dynamic Balancing for Model Selection in Bandits and RLRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a framework for model selection by combining base algorithms in stochastic bandits and reinforcement learning. |
Ashok Cutkosky; Christoph Dann; Abhimanyu Das; Claudio Gentile; Aldo Pacchiano; Manish Purohit; |

210 | ConViT: Improving Vision Transformers with Soft Convolutional Inductive BiasesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To this end, we introduce gated positional self-attention (GPSA), a form of positional self-attention which can be equipped with a “soft” convolutional inductive bias. |
Stéphane D'Ascoli; Hugo Touvron; Matthew L Leavitt; Ari S Morcos; Giulio Biroli; Levent Sagun; |

211 | Consistent Regression When Oblivious Outliers OverwhelmRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider a robust linear regression model $y=X\beta^* + \eta$, where an adversary oblivious to the design $X\in \mathbb{R}^{n\times d}$ may choose $\eta$ to corrupt all but an $\alpha$ fraction of the observations $y$ in an arbitrary way. |
Tommaso D'Orsi; Gleb Novikov; David Steurer; |

212 | Offline Reinforcement Learning with Pseudometric LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose an iterative procedure to learn a pseudometric (closely related to bisimulation metrics) from logged transitions, and use it to define this notion of closeness. |
Robert Dadashi; Shideh Rezaeifar; Nino Vieillard; Léonard Hussenot; Olivier Pietquin; Matthieu Geist; |

213 | A Tale of Two Efficient and Informative Negative Sampling DistributionsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we show two classes of distributions where the sampling scheme is truly adaptive and provably generates negative samples in near-constant time. |
Shabnam Daghaghi; Tharun Medini; Nicholas Meisburger; Beidi Chen; Mengnan Zhao; Anshumali Shrivastava; |

214 | SiameseXML: Siamese Networks Meet Extreme Classifiers with 100M LabelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To address these, this paper develops the SiameseXML framework based on a novel probabilistic model that naturally motivates a modular approach melding Siamese architectures with high-capacity extreme classifiers, and a training pipeline that effortlessly scales to tasks with 100 million labels. |
Kunal Dahiya; Ananye Agarwal; Deepak Saini; Gururaj K; Jian Jiao; Amit Singh; Sumeet Agarwal; Purushottam Kar; Manik Varma; |

215 | Fixed-Parameter and Approximation Algorithms for PCA with OutliersRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study this problem from the perspective of parameterized complexity by investigating how parameters like the dimension of the data, the subspace dimension, the number of outliers and their structure, and approximation error, influence the computational complexity of the problem. |
Yogesh Dahiya; Fedor Fomin; Fahad Panolan; Kirill Simonov; |

216 | Sliced Iterative Normalizing FlowsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We develop an iterative (greedy) deep learning (DL) algorithm which is able to transform an arbitrary probability distribution function (PDF) into the target PDF. |
Biwei Dai; Uros Seljak; |

217 | Convex Regularization in Monte-Carlo Tree SearchRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we overcome these limitations by introducing the use of convex regularization in Monte-Carlo Tree Search (MCTS) to drive exploration efficiently and to improve policy updates. |
Tuan Q Dam; Carlo D'Eramo; Jan Peters; Joni Pajarinen; |

218 | Demonstration-Conditioned Reinforcement Learning for Few-Shot ImitationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel approach to learning few-shot-imitation agents that we call demonstration-conditioned reinforcement learning (DCRL). |
Christopher R. Dance; Julien Perez; Théo Cachet; |

219 | Re-understanding Finite-State Representations of Recurrent Policy NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce an approach for understanding control policies represented as recurrent neural networks. |
Mohamad H Danesh; Anurag Koul; Alan Fern; Saeed Khorram; |

220 | Newton Method Over Networks Is Fast Up to The Statistical PrecisionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a distributed cubic regularization of the Newton method for solving (constrained) empirical risk minimization problems over a network of agents, modeled as an undirected graph. |
Amir Daneshmand; Gesualdo Scutari; Pavel Dvurechensky; Alexander Gasnikov; |

221 | BasisDeVAE: Interpretable Simultaneous Dimensionality Reduction and Feature-Level Clustering with Derivative-Based Variational AutoencodersRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present DeVAE, a novel VAE-based model with a derivative-based forward mapping, allowing for greater control over decoder behaviour via specification of the decoder function in derivative space. |
Dominic Danks; Christopher Yau; |

222 | Intermediate Layer Optimization for Inverse Problems Using Deep Generative ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models. |
Giannis Daras; Joseph Dean; Ajil Jalal; Alex Dimakis; |

223 | Measuring Robustness in Deep Learning Based Compressive SensingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In order to understand the sensitivity to such perturbations, in this work, we measure the robustness of different approaches for image reconstruction including trained and un-trained neural networks as well as traditional sparsity-based methods. |
Mohammad Zalbagi Darestani; Akshay S Chaudhari; Reinhard Heckel; |

224 | SAINT-ACC: Safety-Aware Intelligent Adaptive Cruise Control for Autonomous Vehicles Using Deep Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a novel adaptive cruise control (ACC) system, namely SAINT-ACC (Safety-Aware Intelligent ACC), that is designed to achieve simultaneous optimization of traffic efficiency, driving safety, and driving comfort through dynamic adaptation of the inter-vehicle gap based on deep reinforcement learning (RL). |
Lokesh Chandra Das; Myounggyu Won; |

225 | Lipschitz Normalization for Self-attention Layers with Application to Graph Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we show that enforcing Lipschitz continuity by normalizing the attention scores can significantly improve the performance of deep attention models. |
George Dasoulas; Kevin Scaman; Aladin Virmaux; |

226 | Householder Sketch for Accurate and Accelerated Least-Mean-Squares SolversRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In retrospect, we explore classical Householder transformation as a candidate for sketching and accurately solving LMS problems. |
Jyotikrishna Dass; Rabi Mahapatra; |
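The role of Householder transformations in least-mean-squares solvers can be illustrated with NumPy, whose QR factorization is Householder-based (via LAPACK): solving through QR avoids squaring the condition number the way the normal equations do. A minimal comparison on a synthetic regression problem:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
b_true = np.arange(1.0, 6.0)
y = A @ b_true + 0.01 * rng.standard_normal(200)

# Householder QR: A = QR with orthonormal Q, then solve R b = Q^T y
Q, R = np.linalg.qr(A)
b_qr = np.linalg.solve(R, Q.T @ y)

# normal equations A^T A b = A^T y (squares the condition number of A)
b_normal = np.linalg.solve(A.T @ A, A.T @ y)
```

On this well-conditioned toy problem both routes agree and recover `b_true`; the QR route is the one that stays accurate as `A` becomes ill-conditioned.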

227 | Byzantine-Resilient High-Dimensional SGD with Local Iterations on Heterogeneous DataRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We provide convergence analyses for both strongly-convex and non-convex smooth objectives in the heterogeneous data setting. |
Deepesh Data; Suhas Diggavi; |

228 | Catformer: Designing Stable Transformers Via Sensitivity AnalysisRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we improve upon recent analysis of Transformers and formalize a notion of sensitivity to capture the difficulty of training. |
Jared Q Davis; Albert Gu; Krzysztof Choromanski; Tri Dao; Christopher Re; Chelsea Finn; Percy Liang; |

229 | Diffusion Source Identification on Networks with Statistical ConfidenceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce a statistical framework for the study of this problem and develop a confidence set inference approach inspired by hypothesis testing. |
Quinlan E Dawkins; Tianxi Li; Haifeng Xu; |

230 | Bayesian Deep Learning Via Subnetwork InferenceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. |
Erik Daxberger; Eric Nalisnick; James U Allingham; Javier Antoran; Jose Miguel Hernandez-Lobato; |

231 | Adversarial Robustness Guarantees for Random Deep Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any $p \geq 1$, the $\ell^p$ distance of any given input from the classification boundary scales as one over the square root of the dimension of the input times the $\ell^p$ norm of the input. |
Giacomo De Palma; Bobak Kiani; Seth Lloyd; |

232 | High-Dimensional Gaussian Process Inference with DerivativesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We show that in the \emph{low-data} regime $N < d$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2d + (N^2)^3)$ (i.e., linear in the number of dimensions) and, in special cases, to $\mathcal{O}(N^2d + N^3)$. |
Filip De Roos; Alexandra Gessner; Philipp Hennig; |

233 | Transfer-Based Semantic Anomaly DetectionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we show that a previously overlooked strategy for anomaly detection (AD) is to introduce an explicit inductive bias toward representations transferred over from some large and varied semantic task. |
Lucas Deecke; Lukas Ruff; Robert A. Vandermeulen; Hakan Bilen; |

234 | Grid-Functioned Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce a new neural network architecture that we call "grid-functioned" neural networks. |
Javier Dehesa; Andrew Vidler; Julian Padget; Christof Lutteroth; |

235 | Multidimensional Scaling: Approximation and ComplexityRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we prove that minimizing the Kamada-Kawai objective is NP-hard and give a provable approximation algorithm for optimizing it, which in particular is a PTAS on low-diameter graphs. |
Erik Demaine; Adam Hesterberg; Frederic Koehler; Jayson Lynch; John Urschel; |

236 | What Does Rotation Prediction Tell Us About Classifier Accuracy Under Varying Testing Environments?Related Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we train semantic classification and rotation prediction in a multi-task way. |
Weijian Deng; Stephen Gould; Liang Zheng; |

237 | Toward Better Generalization Bounds with Locally Elastic StabilityRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Given that, we propose \emph{locally elastic stability} as a weaker and distribution-dependent stability notion, which still yields exponential generalization bounds. |
Zhun Deng; Hangfeng He; Weijie Su; |

238 | Revenue-Incentive Tradeoffs in Dynamic Reserve PricingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study how to set reserves to boost revenue based on the historical bids of strategic buyers, while controlling the impact of such a policy on the incentive compatibility of the repeated auctions. |
Yuan Deng; Sebastien Lahaie; Vahab Mirrokni; Song Zuo; |

239 | Heterogeneity for The Win: One-Shot Federated ClusteringRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we explore the unique challenges—and opportunities—of unsupervised federated learning (FL). |
Don Kurian Dennis; Tian Li; Virginia Smith; |

240 | Kernel Continual LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper introduces kernel continual learning, a simple but effective variant of continual learning that leverages the non-parametric nature of kernel methods to tackle catastrophic forgetting. |
Mohammad Mahdi Derakhshani; Xiantong Zhen; Ling Shao; Cees Snoek; |

241 | Bayesian Optimization Over Hybrid SpacesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a novel approach referred as Hybrid Bayesian Optimization (HyBO) by utilizing diffusion kernels, which are naturally defined over continuous and discrete variables. |
Aryan Deshwal; Syrine Belakaria; Janardhan Rao Doppa; |

242 | Navigation Turing Test (NTT): Learning to Evaluate Human-Like NavigationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We address these limitations through a novel automated Navigation Turing Test (ANTT) that learns to predict human judgments of human-likeness. |
Sam Devlin; Raluca Georgescu; Ida Momennejad; Jaroslaw Rzepecki; Evelyn Zuniga; Gavin Costello; Guy Leroy; Ali Shaw; Katja Hofmann; |

243 | Versatile Verification of Tree EnsemblesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper introduces a generic algorithm called Veritas that enables tackling multiple different verification tasks for tree ensemble models like random forests (RFs) and gradient boosted decision trees (GBDTs). |
Laurens Devos; Wannes Meert; Jesse Davis; |

244 | On The Inherent Regularization Effects of Noise Injection During TrainingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we present a theoretical study of one particular way of random perturbation, which corresponds to injecting artificial noise to the training data. |
Oussama Dhifallah; Yue Lu; |

245 | Hierarchical Agglomerative Graph Clustering in Nearly-Linear TimeRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study the widely-used hierarchical agglomerative clustering (HAC) algorithm on edge-weighted graphs. |
Laxman Dhulipala; David Eisenstat; Jakub Lacki; Vahab Mirrokni; Jessica Shi; |

246 | Learning Online Algorithms with Distributional AdviceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study the problem of designing online algorithms given advice about the input. |
Ilias Diakonikolas; Vasilis Kontonis; Christos Tzamos; Ali Vakilian; Nikos Zarifis; |

247 | A Wasserstein Minimax Framework for Mixed Linear RegressionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose an optimal transport-based framework for MLR problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models. |
Theo Diamandis; Yonina Eldar; Alireza Fallah; Farzan Farnia; Asuman Ozdaglar; |

248 | Context-Aware Online Collective Inference for Templated Graphical ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we examine online collective inference, the problem of maintaining and performing inference over a sequence of evolving graphical models. |
Charles Dickens; Connor Pryor; Eriq Augustine; Alexander Miller; Lise Getoor; |

249 | ARMS: Antithetic-REINFORCE-Multi-Sample Gradient for Binary VariablesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To better utilize more than two samples, we propose ARMS, an Antithetic REINFORCE-based Multi-Sample gradient estimator. |
Aleksandar Dimitriev; Mingyuan Zhou; |

250 | XOR-CD: Linearly Convergent Constrained Structure GenerationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose XOR-Contrastive Divergence learning (XOR-CD), a provable approach for constrained structure generation, which remains difficult for state-of-the-art neural network and constraint reasoning approaches. |
Fan Ding; Jianzhu Ma; Jinbo Xu; Yexiang Xue; |

251 | Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for A Holistic ApproachRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we consider a DPCP approach for simultaneously computing the entire basis of the orthogonal complement subspace (we call this a holistic approach) by solving a non-convex non-smooth optimization problem over the Grassmannian. |
Tianyu Ding; Zhihui Zhu; Rene Vidal; Daniel P Robinson; |

252 | Coded-InvNet for Resilient Prediction Serving SystemsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Inspired by a new coded computation algorithm for invertible functions, we propose Coded-InvNet a new approach to design resilient prediction serving systems that can gracefully handle stragglers or node failures. |
Tuan Dinh; Kangwook Lee; |

253 | Estimation and Quantization of Expected Persistence DiagramsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this article, we study two such summaries, the Expected Persistence Diagram (EPD), and its quantization. |
Vincent Divol; Theo Lacombe; |

254 | On Energy-Based Models with Overparametrized Shallow Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Building from the incipient theory of overparametrized neural networks, we show that models trained in the so-called ’active’ regime provide a statistical advantage over their associated ’lazy’ or kernel regime, leading to improved adaptivity to hidden low-dimensional structure in the data distribution, as already observed in supervised learning. |
Carles Domingo-Enrich; Alberto Bietti; Eric Vanden-Eijnden; Joan Bruna; |

255 | Kernel-Based Reinforcement Learning: A Finite-Time AnalysisRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards and transitions to efficiently balance exploration and exploitation. |
Omar Darwiche Domingues; Pierre Menard; Matteo Pirotta; Emilie Kaufmann; Michal Valko; |

256 | Attention Is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with DepthRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms—or paths—each involving the operation of a sequence of attention heads across layers. |
Yihe Dong; Jean-Baptiste Cordonnier; Andreas Loukas; |
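The rank-collapse claim of #256 is easy to observe numerically: iterating a pure self-attention map (no skip connections, no MLPs) drives all token representations toward a common vector, i.e. toward a rank-1 matrix. A minimal sketch with an untrained single-head attention on random inputs:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_step(X):
    # Pure self-attention with no skip connection or MLP: X <- A(X) X,
    # where A(X) is row-stochastic, so each step averages token rows.
    A = softmax(X @ X.T / np.sqrt(X.shape[1]))
    return A @ X

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
for _ in range(100):
    X = attention_step(X)

# Ratio of trailing to leading singular values: near zero means near rank 1.
s = np.linalg.svd(X, compute_uv=False)
residual = s[1:].sum() / (s[0] + 1e-12)
```

After a few dozen iterations the trailing singular values are negligible relative to the leading one, consistent with the paper's point that skip connections and MLPs are what counteract this degeneration.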

257 | How Rotational Invariance of Common Kernels Prevents Generalization in High DimensionsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we show that in high dimensions, the rotational invariance property of commonly studied kernels (such as RBF, inner product kernels and fully-connected NTK of any depth) leads to inconsistent estimation unless the ground truth is a low-degree polynomial. |
Konstantin Donhauser; Mingqi Wu; Fanny Yang; |

258 | Fast Stochastic Bregman Gradient Methods: Sharp Analysis and Variance ReductionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study the problem of minimizing a relatively-smooth convex function using stochastic Bregman gradient methods. |
Radu Alexandru Dragomir; Mathieu Even; Hadrien Hendrikx; |

259 | Bilinear Classes: A Structural Framework for Provable Generalization in RLRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This work introduces Bilinear Classes, a new structural framework, which permit generalization in reinforcement learning in a wide variety of settings through the use of function approximation. |
Simon Du; Sham Kakade; Jason Lee; Shachar Lovett; Gaurav Mahajan; Wen Sun; Ruosong Wang; |

260 | Improved Contrastive Divergence Training of Energy-Based ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose an adaptation to improve contrastive divergence training by scrutinizing a gradient term that is difficult to calculate and is often left out for convenience. |
Yilun Du; Shuang Li; Joshua Tenenbaum; Igor Mordatch; |

261 | Order-Agnostic Cross Entropy for Non-Autoregressive Machine TranslationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a new training objective named order-agnostic cross entropy (OaXE) for fully non-autoregressive translation (NAT) models. |
Cunxiao Du; Zhaopeng Tu; Jing Jiang; |
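The objective in #261 can be sketched as follows: instead of scoring the target tokens in their fixed left-to-right order, take the cross entropy under the best possible ordering of the target. The brute-force search below is only illustrative (it enumerates all permutations; an efficient implementation would use a Hungarian-style matching):

```python
import itertools
import math

def order_agnostic_ce(log_probs, target):
    # log_probs[i][w]: model log-probability of word w at output position i.
    # Hedged sketch of OaXE: minimize cross entropy over all orderings of
    # the target tokens, rather than penalizing the fixed original order.
    n = len(target)
    best = math.inf
    for perm in itertools.permutations(range(n)):
        ce = -sum(log_probs[i][target[perm[i]]] for i in range(n))
        best = min(best, ce)
    return best
```

When a non-autoregressive model predicts the right words in the wrong positions, this loss stays small where the ordinary cross entropy would be large.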

262 | Putting The Learning Into Learning-Augmented Algorithms for Frequency EstimationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Learning here is used to predict heavy hitters from a data stream, which are counted explicitly outside the sketch. |
Elbert Du; Franklyn Wang; Michael Mitzenmacher; |

263 | Estimating $\alpha$-Rank from A Few Entries with Low Rank Matrix CompletionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we aim to reduce the number of pairwise comparisons in recovering a satisfying ranking for $n$ strategies in two-player meta-games, by exploring the fact that agents with similar skills may achieve similar payoffs against others. |
Yali Du; Xue Yan; Xu Chen; Jun Wang; Haifeng Zhang; |

264 | Learning Diverse-Structured Networks for Adversarial RobustnessRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we argue that NA and AT cannot be handled independently, since given a dataset, the optimal NA in ST would be no longer optimal in AT. |
Xuefeng Du; Jingfeng Zhang; Bo Han; Tongliang Liu; Yu Rong; Gang Niu; Junzhou Huang; Masashi Sugiyama; |

265 | Risk Bounds and Rademacher Complexity in Batch Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper considers batch Reinforcement Learning (RL) with general value function approximation. |
Yaqi Duan; Chi Jin; Zhiyuan Li; |

266 | Sawtooth Factorial Topic Embeddings Guided Gamma Belief NetworkRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To relax this assumption, we propose sawtooth factorial topic embedding guided GBN, a deep generative model of documents that captures the dependencies and semantic similarities between the topics in the embedding space. |
Zhibin Duan; Dongsheng Wang; Bo Chen; Chaojie Wang; Wenchao Chen; Yewen Li; Jie Ren; Mingyuan Zhou; |

267 | Exponential Reduction in Sample Complexity with Learning of Ising Model DynamicsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study the problem of reconstructing binary graphical models from correlated samples produced by a dynamical process, which is natural in many applications. |
Arkopal Dutt; Andrey Lokhov; Marc D Vuffray; Sidhant Misra; |

268 | Reinforcement Learning Under Moral UncertaintyRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper translates such insights to the field of reinforcement learning, proposes two training methods that realize different points among competing desiderata, and trains agents in simple environments to act under moral uncertainty. |
Adrien Ecoffet; Joel Lehman; |

269 | Confidence-Budget Matching for Sequential Budgeted LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we formalize decision-making problems with querying budget, where there is a (possibly time-dependent) hard limit on the number of reward queries allowed. |
Yonathan Efroni; Nadav Merlis; Aadirupa Saha; Shie Mannor; |

270 | Self-Paced Context Evaluation for Contextual Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To improve sample efficiency for learning on such instances of a problem domain, we present Self-Paced Context Evaluation (SPaCE). |
Theresa Eimer; André Biedenkapp; Frank Hutter; Marius Lindauer; |

271 | Provably Strict Generalisation Benefit for Equivariant ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: By considering the simplest case of linear models, this paper provides the first provably non-zero improvement in generalisation for invariant/equivariant models when the target distribution is invariant/equivariant with respect to a compact group. |
Bryn Elesedy; Sheheryar Zaidi; |

272 | Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object RepresentationsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we introduce EfficientMORL, an efficient framework for the unsupervised learning of object-centric representations. |
Patrick Emami; Pan He; Sanjay Ranka; Anand Rangarajan; |

273 | Implicit Bias of Linear RNNsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: However, RNNs’ poor ability to capture long-term dependencies has not been fully understood. This paper provides a rigorous explanation of this property in the special case of linear RNNs. |
Melikasadat Emami; Mojtaba Sahraee-Ardakan; Parthe Pandit; Sundeep Rangan; Alyson K Fletcher; |

274 | Global Optimality Beyond Two Layers: Training Deep ReLU Networks Via Convex ProgramsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we develop a novel unified framework to reveal a hidden regularization mechanism through the lens of convex optimization. |
Tolga Ergen; Mert Pilanci; |

275 | Revealing The Structure of Deep Neural Networks Via Convex DualityRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study regularized deep neural networks (DNNs) and introduce a convex analytic framework to characterize the structure of the hidden layers. |
Tolga Ergen; Mert Pilanci; |

276 | Whitening for Self-Supervised Representation LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a different direction and a new loss function for SSL, which is based on the whitening of the latent-space features. |
Aleksandr Ermolov; Aliaksandr Siarohin; Enver Sangineto; Nicu Sebe; |

277 | Graph Mixture Density NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce the Graph Mixture Density Networks, a new family of machine learning models that can fit multimodal output distributions conditioned on graphs of arbitrary topology. |
Federico Errica; Davide Bacciu; Alessio Micheli; |

278 | Cross-Gradient Aggregation for Decentralized Learning from Non-IID DataRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Inspired by ideas from continual learning, we propose Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm where (i) each agent aggregates cross-gradient information, i.e., derivatives of its model with respect to its neighbors’ datasets, and (ii) updates its model using a projected gradient based on quadratic programming (QP). |
Yasaman Esfandiari; Sin Yong Tan; Zhanhong Jiang; Aditya Balu; Ethan Herron; Chinmay Hegde; Soumik Sarkar; |

279 | Weight-covariance Alignment for Adversarially Robust Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and enjoys solid theoretical justification. |
Panagiotis Eustratiadis; Henry Gouk; Da Li; Timothy Hospedales; |

280 | Data Augmentation for Deep Learning Based Accelerated MRI Reconstruction with Limited DataRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Inspired by the success of Data Augmentation (DA) for classification problems, in this paper, we propose a pipeline for data augmentation for accelerated MRI reconstruction and study its effectiveness at reducing the required training data in a variety of settings. |
Zalan Fabian; Reinhard Heckel; Mahdi Soltanolkotabi; |

281 | Poisson-Randomised DirBN: Large Mutation Is Needed in Dirichlet Belief NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose Poisson-randomised Dirichlet Belief Networks (Pois-DirBN), which allows large mutations for the latent distributions across layers to enlarge the representation capability. |
Xuhui Fan; Bin Li; Yaqiong Li; Scott A. Sisson; |

282 | Model-based Reinforcement Learning for Continuous Control with Posterior SamplingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces theoretically and empirically. |
Ying Fan; Yifei Ming; |

283 | SECANT: Self-Expert Cloning for Zero-Shot Generalization of Visual PoliciesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we consider robust policy learning which targets zero-shot generalization to unseen visual environments with large distributional shift. |
Linxi Fan; Guanzhi Wang; De-An Huang; Zhiding Yu; Li Fei-Fei; Yuke Zhu; Animashree Anandkumar; |

284 | On Estimation in Latent Variable ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we consider a gradient based method via using variance reduction technique to accelerate estimation procedure. |
Guanhua Fang; Ping Li; |

285 | On Variational Inference in Biclustering ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we develop a theory for the estimation of general biclustering models, where the data is assumed to follow certain statistical distribution with underlying biclustering structure. |
Guanhua Fang; Ping Li; |

286 | Learning Bounds for Open-Set LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where there exist test samples from the classes that are unseen during training. |
Zhen Fang; Jie Lu; Anjin Liu; Feng Liu; Guangquan Zhang; |

287 | Streaming Bayesian Deep Tensor FactorizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: More important, for highly expressive, deep factorization, we lack an effective approach to handle streaming data, which are ubiquitous in real-world applications. To address these issues, we propose SBTD, a Streaming Bayesian Deep Tensor factorization method. |
Shikai Fang; Zheng Wang; Zhimeng Pan; Ji Liu; Shandian Zhe; |

288 | PID Accelerated Value Iteration AlgorithmRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose modifications to VI in order to potentially accelerate its convergence behaviour. |
Amir-Massoud Farahmand; Mohammad Ghavamzadeh; |
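The viewpoint in #288 treats the Bellman residual as the error signal of a feedback controller. A minimal sketch for policy evaluation with illustrative, untuned gains (setting `kp=1, ki=kd=0` recovers plain value iteration):

```python
import numpy as np

def pid_value_iteration(P, R, gamma, kp=1.0, ki=0.0, kd=0.0, iters=200):
    # Hedged sketch: the Bellman residual T V - V acts as the error signal
    # of a PID controller. Gains kp, ki, kd are illustrative, not the
    # paper's tuned or adaptive choices. Shown for policy evaluation,
    # where the Bellman operator T V = R + gamma * P V is linear.
    n = len(R)
    V = np.zeros(n)
    integral = np.zeros(n)
    prev_err = np.zeros(n)
    for _ in range(iters):
        err = R + gamma * P @ V - V          # Bellman residual
        integral += err
        V = V + kp * err + ki * integral + kd * (err - prev_err)
        prev_err = err
    return V
```

With `kp=1` and the other gains at zero, the loop reduces to `V <- R + gamma * P @ V`, converging geometrically to the exact value function `(I - gamma P)^{-1} R`.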

289 | Near-Optimal Entrywise Anomaly Detection for Low-Rank Matrices with Sub-Exponential NoiseRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: So motivated, we propose a conceptually simple entrywise approach to anomaly detection in low-rank matrices. |
Vivek Farias; Andrew A Li; Tianyi Peng; |

290 | Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity ResultsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We focus on the problem of finding an optimal strategy for a team of players that faces an opponent in an imperfect-information zero-sum extensive-form game. |
Gabriele Farina; Andrea Celli; Nicola Gatti; Tuomas Sandholm; |

291 | Train Simultaneously, Generalize Better: Stability of Gradient-based Minimax LearnersRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we show that the optimization algorithm also plays a key role in the generalization performance of the trained minimax model. |
Farzan Farnia; Asuman Ozdaglar; |

292 | Unbalanced Minibatch Optimal Transport; Applications to Domain AdaptationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behaviors. |
Kilian Fatras; Thibault Sejourne; Rémi Flamary; Nicolas Courty; |

293 | Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing ApproachRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study function approximation for episodic reinforcement learning with entropic risk measure. |
Yingjie Fei; Zhuoran Yang; Zhaoran Wang; |

294 | Lossless Compression of Efficient Private Local RandomizersRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees. |
Vitaly Feldman; Kunal Talwar; |

295 | Dimensionality Reduction for The Sum-of-Distances MetricRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We give a dimensionality reduction procedure to approximate the sum of distances of a given set of n points in Rd to any shape that lies in a k-dimensional subspace. |
Zhili Feng; Praneeth Kacham; David Woodruff; |

296 | Reserve Price Optimization for First Price Auctions in Display AdvertisingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a gradient-based algorithm to adaptively update and optimize reserve prices based on estimates of bidders’ responsiveness to experimental shocks in reserves. |
Zhe Feng; Sebastien Lahaie; Jon Schneider; Jinchao Ye; |

297 | Uncertainty Principles of Encoding GANsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we study this predicament of encoding GANs, which is indispensable research for the GAN community. |
Ruili Feng; Zhouchen Lin; Jiapeng Zhu; Deli Zhao; Jingren Zhou; Zheng-Jun Zha; |

298 | Pointwise Binary Classification with Pairwise Confidence ComparisonsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Thus, in this paper, we propose a novel setting called pairwise comparison (Pcomp) classification, where we have only pairs of unlabeled data that we know one is more likely to be positive than the other. |
Lei Feng; Senlin Shu; Nan Lu; Bo Han; Miao Xu; Gang Niu; Bo An; Masashi Sugiyama; |

299 | Provably Correct Optimization and Exploration with Non-linear PoliciesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we address this question by designing ENIAC, an actor-critic method that allows non-linear function approximation in the critic. |
Fei Feng; Wotao Yin; Alekh Agarwal; Lin Yang; |

300 | KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation Via Knowledge DistillationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To address the above problems, we propose a privacy-preserving UMDA paradigm named Knowledge Distillation based Decentralized Domain Adaptation (KD3A), which performs domain adaptation through the knowledge distillation on models from different source domains. |
Haozhe Feng; Zhaoyang You; Minghao Chen; Tianye Zhang; Minfeng Zhu; Fei Wu; Chao Wu; Wei Chen; |

301 | Understanding Noise Injection in GANsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a geometric framework to theoretically analyze the role of noise injection in GANs. |
Ruili Feng; Deli Zhao; Zheng-Jun Zha; |

302 | GNNAutoScale: Scalable and Expressive Graph Neural Networks Via Historical EmbeddingsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present GNNAutoScale (GAS), a framework for scaling arbitrary message-passing GNNs to large graphs. |
Matthias Fey; Jan E. Lenssen; Frank Weichert; Jure Leskovec; |

303 | PsiPhi-Learning: Reinforcement Learning with Demonstrations Using Successor Features and Inverse Temporal Difference LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a multi-task inverse reinforcement learning (IRL) algorithm, called \emph{inverse temporal difference learning} (ITD), that learns shared state features, alongside per-agent successor features and preference vectors, purely from demonstrations without reward labels. |
Angelos Filos; Clare Lyle; Yarin Gal; Sergey Levine; Natasha Jaques; Gregory Farquhar; |

304 | A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix GroupsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work we provide a completely general algorithm for solving for the equivariant layers of matrix groups. |
Marc Finzi; Max Welling; Andrew Gordon Wilson; |

305 | Few-Shot Conformal Prediction with Auxiliary TasksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we obtain substantially tighter prediction sets while maintaining desirable marginal guarantees by casting conformal prediction as a meta-learning paradigm over exchangeable collections of auxiliary tasks. |
Adam Fisch; Tal Schuster; Tommi Jaakkola; Regina Barzilay; |

306 | Scalable Certified Segmentation Via Randomized SmoothingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a new certification method for image and point cloud segmentation based on randomized smoothing. |
Marc Fischer; Maximilian Baader; Martin Vechev; |

307 | What’s in The Box? Exploring The Inner Life of Neural Networks with Robust RulesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel method for exploring how neurons within neural networks interact. |
Jonas Fischer; Anna Olah; Jilles Vreeken; |

308 | Online Learning with Optimism and DelayRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Inspired by the demands of real-time climate and weather forecasting, we develop optimistic online learning algorithms that require no parameter tuning and have optimal regret guarantees under delayed feedback. |
Genevieve E Flaspohler; Francesco Orabona; Judah Cohen; Soukayna Mouatadid; Miruna Oprescu; Paulo Orenstein; Lester Mackey; |

309 | Online A-Optimal Design and Active Linear RegressionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider in this paper the problem of optimal experiment design where a decision maker can choose which points to sample to obtain an estimate $\hat{\beta}$ of the hidden parameter $\beta^{\star}$ of an underlying linear model. |
Xavier Fontaine; Pierre Perrault; Michal Valko; Vianney Perchet; |

310 | Deep Adaptive Design: Amortizing Sequential Bayesian Experimental DesignRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time. |
Adam Foster; Desi R Ivanova; Ilyas Malik; Tom Rainforth; |

311 | Efficient Online Learning for Dynamic K-ClusteringRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we study dynamic clustering problems from the perspective of online learning. |
Dimitris Fotakis; Georgios Piliouras; Stratis Skoulakis; |

312 | Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This work addresses the problem of optimizing communications between server and clients in federated learning (FL). |
Yann Fraboni; Richard Vidal; Laetitia Kameni; Marco Lorenzi; |

313 | Agnostic Learning of Halfspaces with Gradient Descent Via Soft MarginsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We show that when a quantity we refer to as the \textit{soft margin} is well-behaved—a condition satisfied by log-concave isotropic distributions among others—minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. |
Spencer Frei; Yuan Cao; Quanquan Gu; |

314 | Provable Generalization of SGD-trained Neural Networks of Any Width in The Presence of Adversarial Label NoiseRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To the best of our knowledge, this is the first work to show that overparameterized neural networks trained by SGD can generalize when the data is corrupted with adversarial label noise. |
Spencer Frei; Yuan Cao; Quanquan Gu; |

315 | Post-selection Inference with HSIC-LassoRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a selective inference procedure using the so-called model-free "HSIC-Lasso" based on the framework of truncated Gaussians combined with the polyhedral lemma. |
Tobias Freidling; Benjamin Poignard; Héctor Climente-González; Makoto Yamada; |

316 | Variational Data Assimilation with A Learned Inverse Observation OperatorRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We learn a mapping from observational data to physical states and show how it can be used to improve optimizability. |
Thomas Frerix; Dmitrii Kochkov; Jamie Smith; Daniel Cremers; Michael Brenner; Stephan Hoyer; |

317 | Bayesian Quadrature on Riemannian Data ManifoldsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To ease this computational burden, we advocate probabilistic numerical methods for Riemannian statistics. |
Christian Fröhlich; Alexandra Gessner; Philipp Hennig; Bernhard Schölkopf; Georgios Arvanitidis; |

318 | Learn-to-Share: A Hardware-friendly Transfer Learning Framework Exploiting Computation and Parameter SharingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose LeTS, a framework that leverages both computation and parameter sharing across multiple tasks. |
Cheng Fu; Hanxian Huang; Xinyun Chen; Yuandong Tian; Jishen Zhao; |

319 | Learning Task Informed AbstractionsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To mitigate this problem, we propose learning Task Informed Abstractions (TIA) that explicitly separates reward-correlated visual features from distractors. |
Xiang Fu; Ge Yang; Pulkit Agrawal; Tommi Jaakkola; |

320 | Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks Via Random Precision Training and InferenceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we demonstrate a new perspective regarding quantization’s role in DNNs’ robustness, advocating that quantization can be leveraged to largely boost DNNs’ robustness, and propose a framework dubbed Double-Win Quant that can boost the robustness of quantized DNNs over their full precision counterparts by a large margin. |
Yonggan Fu; Qixuan Yu; Meng Li; Vikas Chandra; Yingyan Lin; |

321 | Auto-NBA: Efficient and Effective Search Over The Joint Space of Networks, Bitwidths, and AcceleratorsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To tackle these daunting challenges towards optimal and fast development of DNN accelerators, we propose a framework dubbed Auto-NBA to enable jointly searching for the Networks, Bitwidths, and Accelerators, by efficiently localizing the optimal design within the huge joint design space for each target dataset and acceleration specification. |
Yonggan Fu; Yongan Zhang; Yang Zhang; David Cox; Yingyan Lin; |

322 | A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with The Successor RepresentationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. |
Scott Fujimoto; David Meger; Doina Precup; |

323 | Learning Disentangled Representations Via Product Manifold ProjectionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel approach to disentangle the generative factors of variation underlying a given set of observations. |
Marco Fumero; Luca Cosmo; Simone Melzi; Emanuele Rodola; |

324 | Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose policy information capacity (PIC) – the mutual information between policy parameters and episodic return – and policy-optimal information capacity (POIC) – between policy parameters and episodic optimality – as two environment-agnostic, algorithm-agnostic quantitative metrics for task difficulty. |
Hiroki Furuta; Tatsuya Matsushima; Tadashi Kozuno; Yutaka Matsuo; Sergey Levine; Ofir Nachum; Shixiang Shane Gu; |

325 | An Information-Geometric Distance on The Space of TasksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We develop an algorithm to compute the distance which iteratively transports the marginal on the data of the source task to that of the target task while updating the weights of the classifier to track this evolving data distribution. |
Yansong Gao; Pratik Chaudhari; |

326 | Maximum Mean Discrepancy Test Is Aware of Adversarial AttacksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Given this phenomenon, we raise a question: are natural and adversarial data really from different distributions? The answer is affirmative: the previous use of the MMD test for this purpose missed three key factors, and accordingly, we propose three components. |
Ruize Gao; Feng Liu; Jingfeng Zhang; Bo Han; Tongliang Liu; Gang Niu; Masashi Sugiyama; |
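
The statistic in question is the kernel MMD; a minimal unbiased estimator with a fixed RBF bandwidth is sketched below (bandwidth adaptation is one of the factors the paper argues previous work missed):

```python
import numpy as np

# Unbiased estimate of squared MMD with an RBF kernel; gamma is a fixed
# illustrative bandwidth, not the paper's adaptive choice.
def mmd2_rbf(X, Y, gamma=1.0):
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)   # drop diagonal terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(0)
natural = rng.normal(size=(200, 2))
same = rng.normal(size=(200, 2))
shifted = rng.normal(loc=2.0, size=(200, 2))
```

On this toy data the statistic is near zero for two samples from the same distribution and clearly positive for the shifted one.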

327 | Unsupervised Co-part Segmentation Through AssemblyRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose an unsupervised learning approach for co-part segmentation from images. |
Qingzhe Gao; Bin Wang; Libin Liu; Baoquan Chen; |

328 | Discriminative Complementary-Label Learning with Weighted LossRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we derive a simple and theoretically-sound \emph{discriminative} model towards $P(\bar y\mid {\bm x})$, which naturally leads to a risk estimator with estimation error bound at $\mathcal{O}(1/\sqrt{n})$ convergence rate. |
Yi Gao; Min-Ling Zhang; |

329 | RATT: Leveraging Unlabeled Data to Guarantee GeneralizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we leverage unlabeled data to produce generalization bounds. |
Saurabh Garg; Sivaraman Balakrishnan; Zico Kolter; Zachary Lipton; |

330 | On Proximal Policy Optimization’s Heavy-tailed GradientsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function. |
Saurabh Garg; Joshua Zhanson; Emilio Parisotto; Adarsh Prasad; Zico Kolter; Zachary Lipton; Sivaraman Balakrishnan; Ruslan Salakhutdinov; Pradeep Ravikumar; |

331 | What Does LIME Really See in Images?Related Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: On the theoretical side, we show that when the number of generated examples is large, LIME explanations are concentrated around a limit explanation for which we give an explicit expression. |
Damien Garreau; Dina Mardaoui; |

332 | Parametric Graph for Unimodal Ranking BanditRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose an original algorithm, easy to implement and with strong theoretical guarantees to tackle this problem in the Position-Based Model (PBM) setting, well suited for applications where items are displayed on a grid. |
Camille-Sovanneary Gauthier; Romaric Gaudel; Elisa Fromont; Boammani Aser Lompo; |

333 | Let’s Agree to Degree: Comparing Graph Convolutional Networks in The Message-Passing FrameworkRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we cast neural networks defined on graphs as message-passing neural networks (MPNNs) to study the distinguishing power of different classes of such models. |
Floris Geerts; Filip Mazowiecki; Guillermo Perez; |

334 | On The Difficulty of Unbiased Alpha Divergence MinimizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work we study unbiased methods for alpha-divergence minimization through the Signal-to-Noise Ratio (SNR) of the gradient estimator. |
Tomas Geffner; Justin Domke; |

335 | How and Why to Use Experimental Data to Evaluate Methods for Observational Causal InferenceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We describe and analyze observational sampling from randomized controlled trials (OSRCT), a method for evaluating causal inference methods using data from randomized controlled trials (RCTs). |
Amanda M Gentzel; Purva Pruthi; David Jensen; |

336 | Strategic Classification in The DarkRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we generalize the strategic classification model to such scenarios and analyze the effect of an unknown classifier. |
Ganesh Ghalme; Vineet Nair; Itay Eilat; Inbal Talgam-Cohen; Nir Rosenfeld; |

337 | EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RLRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we closely investigate an important simplification of BCQ (Fujimoto et al., 2018) – a prior approach for offline RL – removing a heuristic design choice. |
Seyed Kamyar Seyed Ghasemipour; Dale Schuurmans; Shixiang Shane Gu; |

338 | Differentially Private Aggregation in The Shuffle Model: Almost Central Accuracy in Almost A Single MessageRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we study the problem of summing (aggregating) real numbers or integers, a basic primitive in numerous machine learning tasks, in the shuffle model. |
Badih Ghazi; Ravi Kumar; Pasin Manurangsi; Rasmus Pagh; Amer Sinha; |

339 | The Power of Adaptivity for Stochastic Submodular CoverRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We ask: how well can solutions with only a few adaptive rounds approximate fully-adaptive solutions? |
Rohan Ghuge; Anupam Gupta; Viswanath Nagarajan; |

340 | Differentially Private QuantilesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work we propose an instance of the exponential mechanism that simultaneously estimates exactly $m$ quantiles from $n$ data points while guaranteeing differential privacy. |
Jennifer Gillenwater; Matthew Joseph; Alex Kulesza; |
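
For a single quantile, the exponential mechanism this paper instantiates can be sketched as below; the data bounds and the rank-distance utility are standard choices, and releasing $m$ quantiles jointly under one privacy budget is the paper's actual contribution:

```python
import numpy as np

# Exponential mechanism for one quantile (a known primitive, sketched as
# background): sample an inter-point gap with probability proportional to
# exp(eps * utility / 2) times the gap's width.
def dp_quantile(data, q, eps, lower, upper, seed=0):
    rng = np.random.default_rng(seed)
    x = np.concatenate([[lower], np.clip(np.sort(data), lower, upper), [upper]])
    n = len(data)
    # Utility of the i-th gap: negative distance between its rank and q*n;
    # this utility has sensitivity 1, hence the eps/2 factor.
    utility = -np.abs(np.arange(n + 1) - q * n)
    widths = np.maximum(np.diff(x), 1e-12)
    logp = eps * utility / 2 + np.log(widths)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    i = rng.choice(n + 1, p=p)
    return rng.uniform(x[i], x[i + 1])

rng = np.random.default_rng(1)
est = dp_quantile(rng.uniform(size=1000), q=0.5, eps=1.0, lower=0.0, upper=1.0)
```

Running this $m$ times with budget $\epsilon/m$ each is the naive baseline the paper improves on.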

341 | Query Complexity of Adversarial AttacksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: There are two main attack models considered in the adversarial robustness literature: black-box and white-box. We consider these threat models as two ends of a fine-grained spectrum, indexed by the number of queries the adversary can ask. |
Grzegorz Gluch; Rüdiger Urbanke; |

342 | Spectral Normalisation for Deep Reinforcement Learning: An Optimisation PerspectiveRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We diverge from this view and show we can recover the performance of these developments not by changing the objective, but by regularising the value-function estimator. |
Florin Gogianu; Tudor Berariu; Mihaela C Rosca; Claudia Clopath; Lucian Busoniu; Razvan Pascanu; |

343 | 12-Lead ECG Reconstruction Via Koopman OperatorsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we present a methodology to reconstruct missing or noisy leads using the theory of Koopman Operators. |
Tomer Golany; Kira Radinsky; Daniel Freedman; Saar Minha; |

344 | Function Contrastive Learning of Transferable Meta-RepresentationsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we study the implications of this joint training on the transferability of the meta-representations. |
Muhammad Waleed Gondal; Shruti Joshi; Nasim Rahaman; Stefan Bauer; Manuel Wuthrich; Bernhard Schölkopf; |

345 | Active Slices for Sliced Stein DiscrepancyRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: First, we show in theory that the requirement of using optimal slicing directions in the kernelized version of SSD can be relaxed, validating the resulting discrepancy with finite random slicing directions. Second, given that good slicing directions are crucial for practical performance, we propose a fast algorithm for finding good slicing directions based on ideas of active sub-space construction and spectral decomposition. |
Wenbo Gong; Kaibo Zhang; Yingzhen Li; Jose Miguel Hernandez-Lobato; |

346 | On The Problem of Underranking in Group-Fair RankingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we formulate the problem of underranking in group-fair rankings based on how close the group-fair rank of each item is to its original rank, and prove a lower bound on the trade-off achievable for simultaneous underranking and group fairness in ranking. |
Sruthi Gorantla; Amit Deshpande; Anand Louis; |

347 | MARINA: Faster Non-Convex Distributed Learning with CompressionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We develop and analyze MARINA: a new communication efficient method for non-convex distributed learning over heterogeneous datasets. |
Eduard Gorbunov; Konstantin P. Burlachenko; Zhize Li; Peter Richtarik; |

348 | Systematic Analysis of Cluster Similarity Indices: How to Validate Validation MeasuresRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a theoretical framework to tackle this problem: we develop a list of desirable properties and conduct an extensive theoretical analysis to verify which indices satisfy them. |
Martijn M Gösgens; Alexey Tikhonov; Liudmila Prokhorenkova; |

349 | Revisiting Point Cloud Shape Classification with A Simple and Effective BaselineRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: First, we find that auxiliary factors like different evaluation schemes, data augmentation strategies, and loss functions, which are independent of the model architecture, make a large difference in performance. |
Ankit Goyal; Hei Law; Bowei Liu; Alejandro Newell; Jia Deng; |

350 | Dissecting Supervised Contrastive LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we address the question of whether there are fundamental differences in the sought-for representation geometry in the output space of the encoder at minimal loss. |
Florian Graf; Christoph Hofer; Marc Niethammer; Roland Kwitt; |

351 | Oops I Took A Gradient: Scalable Sampling for Discrete DistributionsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables. |
Will Grathwohl; Kevin Swersky; Milad Hashemi; David Duvenaud; Chris Maddison; |

352 | Detecting Rewards Deterioration in Episodic Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we address this problem by focusing directly on the rewards and testing for degradation. We present this problem as a multivariate mean-shift detection problem with possibly partial observations. |
Ido Greenberg; Shie Mannor; |

353 | Crystallization Learning with The Delaunay TriangulationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Based on the Delaunay triangulation, we propose the crystallization learning to estimate the conditional expectation function in the framework of nonparametric regression. |
Jiaqi Gu; Guosheng Yin; |

354 | AutoAttend: Automated Attention Representation SearchRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we automate Key, Query and Value representation design, which is one of the most important steps to obtain effective self-attentions. |
Chaoyu Guan; Xin Wang; Wenwu Zhu; |

355 | Operationalizing Complex Causes: A Pragmatic View of MediationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Given a collection of candidate mediators, we propose (a) a two-step method for predicting the causal responses of crude interventions; and (b) a testing procedure to identify mediators of crude interventions. |
Limor Gultchin; David Watson; Matt Kusner; Ricardo Silva; |

356 | On A Combination of Alternating Minimization and Nesterov’s MomentumRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we combine AM and Nesterov’s acceleration to propose an accelerated alternating minimization algorithm. |
Sergey Guminov; Pavel Dvurechensky; Nazarii Tupitsa; Alexander Gasnikov; |

357 | Decentralized Single-Timescale Actor-Critic on Zero-Sum Two-Player Stochastic GamesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study the global convergence and global optimality of the actor-critic algorithm applied for the zero-sum two-player stochastic games in a decentralized manner. |
Hongyi Guo; Zuyue Fu; Zhuoran Yang; Zhaoran Wang; |

358 | Adversarial Policy Learning in Two-player Competitive GamesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose a new adversarial learning algorithm. |
Wenbo Guo; Xian Wu; Sui Huang; Xinyu Xing; |

359 | Soft Then Hard: Rethinking The Quantization in Neural Image CompressionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We thus propose a novel soft-then-hard quantization strategy for neural image compression that first learns an expressive latent space softly, then closes the train-test mismatch with hard quantization. |
Zongyu Guo; Zhizheng Zhang; Runsen Feng; Zhibo Chen; |

360 | UneVEn: Universal Value Exploration for Multi-Agent Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Specifically, we propose a novel MARL approach called Universal Value Exploration (UneVEn) that learns a set of related tasks simultaneously with a linear decomposition of universal successor features. |
Tarun Gupta; Anuj Mahajan; Bei Peng; Wendelin Boehmer; Shimon Whiteson; |

361 | Distribution-Free Calibration Guarantees for Histogram Binning Without Sample SplittingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We prove calibration guarantees for the popular histogram binning (also called uniform-mass binning) method of Zadrozny and Elkan (2001). |
Chirag Gupta; Aaditya Ramdas; |
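
Histogram (uniform-mass) binning itself is simple to state; a sketch on synthetic scores follows (the data and bin count are illustrative, not the paper's experiments):

```python
import numpy as np

# Uniform-mass histogram binning (Zadrozny & Elkan, 2001), the method this
# paper proves calibration guarantees for: bin by score quantiles, then
# replace each score with its bin's empirical label frequency.
def histogram_binning(scores, labels, n_bins=10):
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # cover the whole line
    def bin_of(s):
        return np.clip(np.searchsorted(edges, s, side="right") - 1,
                       0, n_bins - 1)
    idx = bin_of(scores)
    means = np.array([labels[idx == b].mean() if (idx == b).any() else 0.5
                      for b in range(n_bins)])
    return lambda s: means[bin_of(s)]

rng = np.random.default_rng(0)
scores = rng.uniform(size=2000)
labels = (rng.uniform(size=2000) < scores).astype(float)  # P(y=1|s) = s
calibrated = histogram_binning(scores, labels)
```

Because the recalibrated output is a bin frequency, the guarantee is distribution-free; the paper's point is that it holds without splitting the sample.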

362 | Correcting Exposure Bias for Link RecommendationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose estimators that leverage known exposure probabilities to mitigate this bias and consequent feedback loops. |
Shantanu Gupta; Hao Wang; Zachary Lipton; Yuyang Wang; |

363 | The Heavy-Tail Phenomenon in SGDRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we argue that these three seemingly unrelated perspectives for generalization are deeply linked to each other. |
Mert Gurbuzbalaban; Umut Simsekli; Lingjiong Zhu; |

364 | Knowledge Enhanced Machine Learning Pipeline Against Diverse Adversarial AttacksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we aim to enhance the ML robustness from a different perspective by leveraging domain knowledge: We propose a Knowledge Enhanced Machine Learning Pipeline (KEMLP) to integrate domain knowledge (i.e., logic relationships among different predictions) into a probabilistic graphical model via first-order logic rules. |
Nezihe Merve Gürel; Xiangyu Qi; Luka Rimanic; Ce Zhang; Bo Li; |

365 | Adapting to Delays and Data in Adversarial Multi-Armed BanditsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider the adversarial multi-armed bandit problem under delayed feedback. |
Andras Gyorgy; Pooria Joulani; |

366 | Rate-Distortion Analysis of Minimum Excess Risk in Bayesian LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we build upon and extend the recent results of (Xu & Raginsky, 2020) to analyze the MER in Bayesian learning and derive information-theoretic bounds on it. |
Hassan Hafez-Kolahi; Behrad Moniri; Shohreh Kasaei; Mahdieh Soleymani Baghshah; |

367 | Regret Minimization in Stochastic Non-Convex Learning Via A Proximal-Gradient ApproachRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: On that account, we propose a conceptual approach that leverages non-convex optimality measures, leading to a suitable generalization of the learner’s local regret. |
Nadav Hallak; Panayotis Mertikopoulos; Volkan Cevher; |

368 | Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient ExplorationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, sample-aware policy entropy regularization is proposed to enhance the conventional policy entropy regularization for better exploration. |
Seungyul Han; Youngchul Sung; |

369 | Adversarial Combinatorial Bandits with General Non-linear Reward FunctionsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we study the adversarial combinatorial bandit with a known non-linear reward function, extending existing work on adversarial linear combinatorial bandit. |
Yanjun Han; Yining Wang; Xi Chen; |

370 | A Collective Learning Framework to Boost GNN Expressiveness for Node ClassificationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we investigate this question and propose {\em collective learning} for GNNs —a general collective classification approach for node representation learning that increases their representation power. |
Mengyue Hang; Jennifer Neville; Bruno Ribeiro; |

371 | Grounding Language to Entities and Dynamics for Generalization in Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We develop a new model, EMMA (Entity Mapper with Multi-modal Attention) which uses an entity-conditioned attention module that allows for selective focus over relevant descriptions in the manual for each entity in the environment. |
Austin W. Hanjie; Victor Y Zhong; Karthik Narasimhan; |

372 | Sparse Feature Selection Makes Batch Reinforcement Learning More Sample EfficientRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper provides a statistical analysis of high-dimensional batch reinforcement learning (RL) using sparse linear function approximation. |
Botao Hao; Yaqi Duan; Tor Lattimore; Csaba Szepesvari; Mengdi Wang; |

373 | Bootstrapping Fitted Q-Evaluation for Off-Policy InferenceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study the use of bootstrapping in off-policy evaluation (OPE), and in particular, we focus on the fitted Q-evaluation (FQE) that is known to be minimax-optimal in the tabular and linear-model cases. |
Botao Hao; Xiang Ji; Yaqi Duan; Hao Lu; Csaba Szepesvari; Mengdi Wang; |

374 | Compressed Maximum LikelihoodRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Inspired by recent advances in estimating distribution functionals, we propose $\textit{compressed maximum likelihood}$ (CML) that applies ML to the compressed samples. |
Yi Hao; Alon Orlitsky; |

375 | Valid Causal Inference with (Some) Invalid InstrumentsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we show how to perform consistent IV estimation despite violations of the exclusion assumption. |
Jason S Hartford; Victor Veitch; Dhanya Sridhar; Kevin Leyton-Brown; |

376 | Model Performance Scaling with Multiple Data SourcesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We show that there is a simple scaling law that predicts the loss incurred by a model even under varying dataset composition. |
Tatsunori Hashimoto; |

377 | Hierarchical VAEs Know What They Don’t KnowRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In the context of hierarchical variational autoencoders, we provide evidence to explain this behavior by out-of-distribution data having in-distribution low-level features. |
Jakob D. Drachmann Havtorn; Jes Frellsen; Soren Hauberg; Lars Maaløe; |

378 | Defense Against Backdoor Attacks Via Robust Covariance EstimationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel defense algorithm using robust covariance estimation to amplify the spectral signature of corrupted data. |
Jonathan Hayase; Weihao Kong; Raghav Somani; Sewoong Oh; |

379 | Boosting for Online Convex OptimizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider the decision-making framework of online convex optimization with a very large number of experts. |
Elad Hazan; Karan Singh; |
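
The classical experts baseline here is Hedge (multiplicative weights), whose per-round cost scales with the number of experts; the paper's boosting view targets exactly the regime where that number is too large. A toy sketch of the baseline:

```python
import numpy as np

# Classical Hedge over N experts with losses in [0, 1]; maintained weights
# shrink exponentially in cumulative loss. Background only, not the paper's
# boosting algorithm.
def hedge(loss_matrix, eta=0.5):
    T, N = loss_matrix.shape
    w = np.ones(N)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                      # play the weighted mixture
        total += float(p @ loss_matrix[t])
        w *= np.exp(-eta * loss_matrix[t])   # downweight poor experts
    return total

# Toy run: expert 0 is perfect, the other nine are always wrong
losses = np.ones((100, 10))
losses[:, 0] = 0.0
algo_loss = hedge(losses)
```

The cumulative loss stays within the usual O(log N / eta + eta T) regret of the best expert, whose loss here is zero.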

380 | PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose PipeTransformer, which leverages automated elastic pipelining for efficient distributed training of Transformer models. |
Chaoyang He; Shen Li; Mahdi Soltanolkotabi; Salman Avestimehr; |

381 | SoundDet: Polyphonic Moving Sound Event Detection and Localization from Raw WaveformRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a new framework SoundDet, which is an end-to-end trainable and light-weight framework, for polyphonic moving sound event detection and localization. |
Yuhang He; Niki Trigoni; Andrew Markham; |

382 | Logarithmic Regret for Reinforcement Learning with Linear Function ApproximationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions provided that there exists a positive sub-optimality gap for the optimal action-value function. |
Jiafan He; Dongruo Zhou; Quanquan Gu; |

383 | Finding Relevant Information Via A Discrete Fourier ExpansionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To address this, we propose a Fourier-based approach to extract relevant information in the supervised setting. |
Mohsen Heidari; Jithin Sreedharan; Gil I Shamir; Wojciech Szpankowski; |

384 | Zeroth-Order Non-Convex Learning Via Hierarchical Dual AveragingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a hierarchical version of dual averaging for zeroth-order online non-convex optimization {–} i.e., learning processes where, at each stage, the optimizer is facing an unknown non-convex loss function and only receives the incurred loss as feedback. |
Amélie Héliou; Matthieu Martin; Panayotis Mertikopoulos; Thibaud Rahier; |

385 | Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced SparsityRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To help, we propose two simple regularization techniques to apply during the training of GCNNs: Batch Representation Orthonormalization (BRO) and Gini regularization. |
Ryan Henderson; Djork-Arné Clevert; Floriane Montanari; |

386 | Muesli: Combining Improvements in Policy OptimizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. |
Matteo Hessel; Ivo Danihelka; Fabio Viola; Arthur Guez; Simon Schmitt; Laurent Sifre; Theophane Weber; David Silver; Hado Van Hasselt; |

387 | Learning Representations By Humans, for HumansRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Here we propose a framework to directly support human decision-making, in which the role of machines is to reframe problems rather than to prescribe actions through prediction. |
Sophie Hilgard; Nir Rosenfeld; Mahzarin R Banaji; Jack Cao; David Parkes; |

388 | Optimizing Black-box Metrics with Iterative Example WeightingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Our approach is to adaptively learn example weights on the training dataset such that the resulting weighted objective best approximates the metric on the validation sample. |
Gaurush Hiranandani; Jatin Mathur; Harikrishna Narasimhan; Mahdi Milani Fard; Sanmi Koyejo; |

389 | Trees with Attention for Set Prediction TasksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Set-Tree, presented in this work, extends the support for sets to tree-based models, such as Random-Forest and Gradient-Boosting, by introducing an attention mechanism and set-compatible split criteria. |
Roy Hirsch; Ran Gilad-Bachrach; |

390 | Multiplicative Noise and Heavy Tails in Stochastic OptimizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Modeling stochastic optimization algorithms as discrete random recurrence relations, we show that multiplicative noise, as it commonly arises due to variance in local rates of convergence, results in heavy-tailed stationary behaviour in the parameters. |
Liam Hodgkinson; Michael Mahoney; |

391 | MC-LSTM: Mass-Conserving LSTM. Highlight: Our novel Mass-Conserving LSTM (MC-LSTM) adheres to these conservation laws by extending the inductive bias of LSTM to model the redistribution of those stored quantities. |
Pieter-Jan Hoedt; Frederik Kratzert; Daniel Klotz; Christina Halmich; Markus Holzleitner; Grey S Nearing; Sepp Hochreiter; Guenter Klambauer; |

392 | Learning Curves for Analysis of Deep Networks. Highlight: We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. |
Derek Hoiem; Tanmay Gupta; Zhizhong Li; Michal Shlapentokh-Rothman; |

393 | Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes. Highlight: Motivated by objects such as electric fields or fluid streams, we study the problem of learning stochastic fields, i.e. stochastic processes whose samples are fields like those occurring in physics and engineering. |
Peter Holderrieth; Michael J Hutchinson; Yee Whye Teh; |

394 | Latent Programmer: Discrete Latent Codes for Program Synthesis. Highlight: Based on these insights, we introduce the Latent Programmer (LP), a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language. |
Joey Hong; David Dohan; Rishabh Singh; Charles Sutton; Manzil Zaheer; |

395 | Chebyshev Polynomial Codes: Task Entanglement-based Coding for Distributed Matrix Multiplication. Highlight: We propose Chebyshev polynomial codes, which can achieve order-wise improvement in encoding complexity at the master and communication load in distributed matrix multiplication using task entanglement. |
Sangwoo Hong; Heecheol Yang; Youngseok Yoon; Taehyun Cho; Jungwoo Lee; |

396 | Federated Learning of User Verification Models Without Sharing Embeddings. Highlight: To address this problem, we propose Federated User Verification (FedUV), a framework in which users jointly learn a set of vectors and maximize the correlation of their instance embeddings with a secret linear combination of those vectors. |
Hossein Hosseini; Hyunsin Park; Sungrack Yun; Christos Louizos; Joseph Soriaga; Max Welling; |

397 | The Limits of Min-Max Optimization Algorithms: Convergence to Spurious Non-Critical Sets. Highlight: In particular, we show that a wide class of state-of-the-art schemes and heuristics may converge with arbitrarily high probability to attractors that are in no way min-max optimal or even stationary. |
Ya-Ping Hsieh; Panayotis Mertikopoulos; Volkan Cevher; |

398 | Near-Optimal Representation Learning for Linear Bandits and Linear RL. Highlight: We propose a sample-efficient algorithm, MTLR-OFUL, which leverages the shared representation to achieve $\tilde{O}(M\sqrt{dkT} + d\sqrt{kMT})$ regret, with $T$ being the number of total steps. |
Jiachen Hu; Xiaoyu Chen; Chi Jin; Lihong Li; Liwei Wang; |

399 | On The Random Conjugate Kernel and Neural Tangent Kernel. Highlight: We investigate the distributions of Conjugate Kernel (CK) and Neural Tangent Kernel (NTK) for ReLU networks with random initialization. |
Zhengmian Hu; Heng Huang; |

400 | Off-Belief Learning. Highlight: Policies learned through self-play may adopt arbitrary conventions and implicitly rely on multi-step reasoning based on fragile assumptions about other agents’ actions and thus fail when paired with humans or independently trained agents at test time. To address this, we present off-belief learning (OBL). |
Hengyuan Hu; Adam Lerer; Brandon Cui; Luis Pineda; Noam Brown; Jakob Foerster; |

401 | Generalizable Episodic Memory for Deep Reinforcement Learning. Highlight: To address this problem, we propose Generalizable Episodic Memory (GEM), which effectively organizes the state-action values of episodic memory in a generalizable manner and supports implicit planning on memorized trajectories. |
Hao Hu; Jianing Ye; Guangxiang Zhu; Zhizhou Ren; Chongjie Zhang; |

402 | A Scalable Deterministic Global Optimization Algorithm for Clustering Problems. Highlight: In this paper, we modelled the MSSC task as a two-stage optimization problem and proposed a tailored reduced-space branch and bound (BB) algorithm. |
Kaixun Hua; Mingfei Shi; Yankai Cao; |

403 | On Recovering from Modeling Errors Using Testing Bayesian Networks. Highlight: We consider the problem of supervised learning with Bayesian Networks when the used dependency structure is incomplete due to missing edges or missing variable states. |
Haiying Huang; Adnan Darwiche; |

404 | A Novel Sequential Coreset Method for Gradient Descent Algorithms. Highlight: In this paper, based on the “locality” property of gradient descent algorithms, we propose a new framework, termed “sequential coreset”, which effectively avoids these obstacles. |
Jiawei Huang; Ruomin Huang; Wenjie Liu; Nikolaos Freris; Hu Ding; |

405 | FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis. Highlight: The current paper presents a new class of convergence analysis for FL, Federated Neural Tangent Kernel (FL-NTK), which corresponds to overparameterized ReLU neural networks trained by gradient descent in FL and is inspired by the analysis in Neural Tangent Kernel (NTK). |
Baihe Huang; Xiaoxiao Li; Zhao Song; Xin Yang; |

406 | STRODE: Stochastic Boundary Ordinary Differential Equation. Highlight: In this paper, we present a probabilistic ordinary differential equation (ODE), called STochastic boundaRy ODE (STRODE), that learns both the timings and the dynamics of time series data without requiring any timing annotations during training. |
Hengguan Huang; Hongfu Liu; Hao Wang; Chang Xiao; Ye Wang; |

407 | A Riemannian Block Coordinate Descent Method for Computing The Projection Robust Wasserstein Distance. Highlight: In this paper, we propose a Riemannian block coordinate descent (RBCD) method to solve this problem, which is based on a novel reformulation of the regularized max-min problem over the Stiefel manifold. |
Minhui Huang; Shiqian Ma; Lifeng Lai; |

408 | Projection Robust Wasserstein Barycenters. Highlight: This paper proposes the projection robust Wasserstein barycenter (PRWB) that has the potential to mitigate the curse of dimensionality, and a relaxed PRWB (RPRWB) model that is computationally more tractable. |
Minhui Huang; Shiqian Ma; Lifeng Lai; |

409 | Accurate Post Training Quantization With Small Calibration Sets. Highlight: To this end, we minimize the quantization errors of each layer or block separately by optimizing its parameters over the calibration set. |
Itay Hubara; Yury Nahshan; Yair Hanani; Ron Banner; Daniel Soudry; |

410 | Learning and Planning in Complex Action Spaces. Highlight: In this paper, we propose a general framework to reason in a principled way about policy evaluation and improvement over such sampled action subsets. |
Thomas Hubert; Julian Schrittwieser; Ioannis Antonoglou; Mohammadamin Barekatain; Simon Schmitt; David Silver; |

411 | Generative Adversarial Transformers. Highlight: We introduce the GANsformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling. |
Drew A Hudson; Larry Zitnick; |

412 | Neural Pharmacodynamic State Space Modeling. Highlight: We propose a deep generative model that makes use of a novel attention-based neural architecture inspired by the physics of how treatments affect disease state. |
Zeshan M Hussain; Rahul G. Krishnan; David Sontag; |

413 | Hyperparameter Selection for Imitation Learning. Highlight: We address the issue of tuning hyperparameters (HPs) for imitation learning algorithms in the context of continuous-control, when the underlying reward function of the demonstrating expert cannot be observed at any time. |
Léonard Hussenot; Marcin Andrychowicz; Damien Vincent; Robert Dadashi; Anton Raichuk; Sabela Ramos; Nikola Momchev; Sertan Girgin; Raphael Marinier; Lukasz Stafiniak; Manu Orsini; Olivier Bachem; Matthieu Geist; Olivier Pietquin; |

414 | Pareto GAN: Extending The Representational Power of GANs to Heavy-Tailed Distributions. Highlight: We identify issues with standard loss functions and propose the use of alternative metric spaces that enable stable and efficient learning. |
Todd Huster; Jeremy Cohen; Zinan Lin; Kevin Chan; Charles Kamhoua; Nandi O. Leslie; Cho-Yu Jason Chiang; Vyas Sekar; |

415 | LieTransformer: Equivariant Self-Attention for Lie Groups. Highlight: We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. |
Michael J Hutchinson; Charline Le Lan; Sheheryar Zaidi; Emilien Dupont; Yee Whye Teh; Hyunjik Kim; |

416 | Crowdsourcing Via Annotator Co-occurrence Imputation and Provable Symmetric Nonnegative Matrix Factorization. Highlight: This work recasts the pairwise co-occurrence based D&S model learning problem as a symmetric NMF (SymNMF) problem—which offers enhanced identifiability relative to CNMF. |
Shahana Ibrahim; Xiao Fu; |

417 | Selecting Data Augmentation for Simulating Interventions. Highlight: In this paper, we focus on the case where the problem arises through spurious correlation between the observed domains and the actual task labels. |
Maximilian Ilse; Jakub M Tomczak; Patrick Forré; |

418 | Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning. Highlight: In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. |
Alexander Immer; Matthias Bauer; Vincent Fortuin; Gunnar Rätsch; Mohammad Emtiyaz Khan; |

419 | Active Learning for Distributionally Robust Level-Set Estimation. Highlight: In this study, we addressed this problem by considering the *distributionally robust PTR* (DRPTR) measure, which considers the worst-case PTR within given candidate distributions. |
Yu Inatsu; Shogo Iwazaki; Ichiro Takeuchi; |

420 | Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization. Highlight: In this work, we interpolate between these techniques by learning the variance of randomized structured predictors as well as their mean, in order to balance between the learned score function and the randomized noise. |
Hedda Cohen Indelman; Tamir Hazan; |

421 | Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning. Highlight: Our method aims to leverage these commonalities by asking the question: “What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?” |
Shariq Iqbal; Christian A Schroeder De Witt; Bei Peng; Wendelin Boehmer; Shimon Whiteson; Fei Sha; |

422 | Randomized Exploration in Reinforcement Learning with General Value Function Approximation. Highlight: We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. |
Haque Ishfaq; Qiwen Cui; Viet Nguyen; Alex Ayoub; Zhuoran Yang; Zhaoran Wang; Doina Precup; Lin Yang; |

423 | Distributed Second Order Methods with Fast Rates and Compressed Communication. Highlight: We develop several new communication-efficient second-order methods for distributed optimization. |
Rustem Islamov; Xun Qian; Peter Richtarik; |

424 | What Are Bayesian Neural Network Posteriors Really Like? Highlight: To investigate foundational questions in Bayesian deep learning, we instead use full batch Hamiltonian Monte Carlo (HMC) on modern architectures. |
Pavel Izmailov; Sharad Vikram; Matthew D Hoffman; Andrew Gordon Wilson; |

425 | How to Learn When Data Reacts to Your Model: Performative Gradient Descent. Highlight: Here we introduce *performative gradient descent* (PerfGD), an algorithm for computing performatively optimal points. |
Zachary Izzo; Lexing Ying; James Zou; |

426 | Perceiver: General Perception with Iterative Attention. Highlight: In this paper we introduce the Perceiver – a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. |
Andrew Jaegle; Felix Gimeno; Andy Brock; Oriol Vinyals; Andrew Zisserman; Joao Carreira; |

427 | Imitation By Predicting Observations. Highlight: We present a new method for imitation solely from observations that achieves comparable performance to experts on challenging continuous control tasks while also exhibiting robustness in the presence of observations unrelated to the task. |
Andrew Jaegle; Yury Sulsky; Arun Ahuja; Jake Bruce; Rob Fergus; Greg Wayne; |

428 | Local Correlation Clustering with Asymmetric Classification Errors. Highlight: We study the $\ell_p$ objective in Correlation Clustering under the following assumption: Every similar edge has weight in $[\alpha\mathbf{w},\mathbf{w}]$ and every dissimilar edge has weight at least $\alpha\mathbf{w}$ (where $\alpha \leq 1$ and $\mathbf{w}>0$ is a scaling parameter). |
Jafar Jafarov; Sanchit Kalhan; Konstantin Makarychev; Yury Makarychev; |

429 | Alternative Microfoundations for Strategic Classification. Highlight: In this work, we argue that a direct combination of these ingredients leads to brittle solution concepts of limited descriptive and prescriptive value. |
Meena Jagadeesan; Celestine Mendler-Dünner; Moritz Hardt; |

430 | Robust Density Estimation from Batches: The Best Things in Life Are (Nearly) Free. Highlight: We answer this question, showing that, perhaps surprisingly, up to logarithmic factors, the optimal sample complexity is the same as for genuine, non-adversarial, data! |
Ayush Jain; Alon Orlitsky; |

431 | Instance-Optimal Compressed Sensing Via Posterior Sampling. Highlight: We show for Gaussian measurements and *any* prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees. |
Ajil Jalal; Sushrut Karmalkar; Alex Dimakis; Eric Price; |

432 | Fairness for Image Generation with Uncertain Sensitive Attributes. Highlight: This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution, which entail different definitions from the standard classification setting. |
Ajil Jalal; Sushrut Karmalkar; Jessica Hoffmann; Alex Dimakis; Eric Price; |

433 | Feature Clustering for Support Identification in Extreme Regions. Highlight: The present paper develops a novel optimization-based approach to assess the dependence structure of extremes. |
Hamid Jalalzai; Rémi Leluc; |

434 | Improved Regret Bounds of Bilinear Bandits Using Action Space Analysis. Highlight: In this paper, we make progress towards closing the gap between the upper and lower bound on the optimal regret. |
Kyoungseok Jang; Kwang-Sung Jun; Se-Young Yun; Wanmo Kang; |

435 | Inverse Decision Modeling: Learning Interpretable Representations of Behavior. Highlight: In this paper, we develop an expressive, unifying perspective on *inverse decision modeling*: a framework for learning parameterized representations of sequential decision behavior. |
Daniel Jarrett; Alihan Hüyük; Mihaela Van Der Schaar; |

436 | Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization. Highlight: We highlight that poor final generalization coincides with the trace of the FIM attaining a large value early in training, to which we refer as catastrophic Fisher explosion. |
Stanislaw Jastrzebski; Devansh Arpit; Oliver Astrand; Giancarlo B Kerg; Huan Wang; Caiming Xiong; Richard Socher; Kyunghyun Cho; Krzysztof J Geras; |

437 | Policy Gradient Bayesian Robust Optimization for Imitation Learning. Highlight: We derive a novel policy gradient-style robust optimization approach, PG-BROIL, that optimizes a soft-robust objective that balances expected performance and risk. |
Zaynah Javed; Daniel S Brown; Satvik Sharma; Jerry Zhu; Ashwin Balakrishna; Marek Petrik; Anca Dragan; Ken Goldberg; |

438 | In-Database Regression in Input Sparsity Time. Highlight: In this work, we design subspace embeddings for database joins which can be computed significantly faster than computing the join. |
Rajesh Jayaram; Alireza Samadian; David Woodruff; Peng Ye; |

439 | Parallel and Flexible Sampling from Autoregressive Models Via Langevin Dynamics. Highlight: This paper introduces an alternative approach to sampling from autoregressive models. |
Vivek Jayaram; John Thickstun; |

440 | Objective Bound Conditional Gaussian Process for Bayesian Optimization. Highlight: In this paper, we propose a new surrogate model, called the objective bound conditional Gaussian process (OBCGP), to condition a Gaussian process on a bound on the optimal function value. |
Taewon Jeong; Heeyoung Kim; |

441 | Quantifying Ignorance in Individual-Level Causal-Effect Estimates Under Hidden Confounding. Highlight: We present a new parametric interval estimator suited for high-dimensional data, that estimates a range of possible CATE values when given a predefined bound on the level of hidden confounding. |
Andrew Jesson; Sören Mindermann; Yarin Gal; Uri Shalit; |

442 | DeepReDuce: ReLU Reduction for Fast Private Inference. Highlight: This paper proposes DeepReDuce: a set of optimizations for the judicious removal of ReLUs to reduce private inference latency. |
Nandan Kumar Jha; Zahra Ghodsi; Siddharth Garg; Brandon Reagen; |

443 | Factor-analytic Inverse Regression for High-dimension, Small-sample Dimensionality Reduction. Highlight: To overcome this limitation, we propose Class-conditional Factor Analytic Dimensions (CFAD), a model-based dimensionality reduction method for high-dimensional, small-sample data. |
Aditi Jha; Michael J. Morais; Jonathan W Pillow; |

444 | Fast Margin Maximization Via Dual Acceleration. Highlight: We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of $O(1/t^2)$. |
Ziwei Ji; Nathan Srebro; Matus Telgarsky; |

445 | Marginalized Stochastic Natural Gradients for Black-Box Variational Inference. Highlight: We propose a stochastic natural gradient estimator that is as broadly applicable and unbiased, but improves efficiency by exploiting the curvature of the variational bound, and provably reduces variance by marginalizing discrete latent variables. |
Geng Ji; Debora Sujono; Erik B Sudderth; |

446 | Bilevel Optimization: Convergence Analysis and Enhanced Design. Highlight: In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. |
Kaiyi Ji; Junjie Yang; Yingbin Liang; |

447 | Efficient Statistical Tests: A Neural Tangent Kernel Approach. Highlight: We propose a shift-invariant convolutional neural tangent kernel (SCNTK) based outlier detector and two-sample tests with maximum mean discrepancy (MMD) that is $O(n)$ in the number of samples due to using the random feature approximation. |
Sheng Jia; Ehsan Nezhadarya; Yuhuai Wu; Jimmy Ba; |

448 | Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision. Highlight: In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. |
Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig; |

449 | Multi-Dimensional Classification Via Sparse Label Encoding. Highlight: In this paper, we propose a novel MDC approach named SLEM which learns the predictive model in an encoded label space instead of the original heterogeneous one. |
Bin-Bin Jia; Min-Ling Zhang; |

450 | Self-Damaging Contrastive Learning. Highlight: This paper proposes to explicitly tackle this challenge, via a principled framework called Self-Damaging Contrastive Learning (SDCLR), to automatically balance the representation learning without knowing the classes. |
Ziyu Jiang; Tianlong Chen; Bobak J Mortazavi; Zhangyang Wang; |

451 | Prioritized Level Replay. Highlight: We introduce Prioritized Level Replay (PLR), a general framework for selectively sampling the next training level by prioritizing those with higher estimated learning potential when revisited in the future. |
Minqi Jiang; Edward Grefenstette; Tim Rocktäschel; |

452 | Monotonic Robust Policy Optimization with Model Discrepancy. Highlight: Since the average and worst-case performance are both important for generalization in RL, in this paper, we propose a policy optimization approach for concurrently improving the policy’s performance in the average and worst-case environment. |
Yuankun Jiang; Chenglin Li; Wenrui Dai; Junni Zou; Hongkai Xiong; |

453 | Approximation Theory of Convolutional Architectures for Time Series Modelling. Highlight: In this paper, we derive parallel results for convolutional architectures, with WaveNet being a prime example. |
Haotian Jiang; Zhong Li; Qianxiao Li; |

454 | Streaming and Distributed Algorithms for Robust Column Subset Selection. Highlight: We give the first single-pass streaming algorithm for Column Subset Selection with respect to the entrywise $\ell_p$-norm with $1 \leq p < 2$. |
Shuli Jiang; Dennis Li; Irene Mengze Li; Arvind V Mahankali; David Woodruff; |

455 | Single Pass Entrywise-Transformed Low Rank Approximation. Highlight: In this paper we resolve this open question, obtaining the first single-pass algorithm for this problem and for the same class of functions $f$ studied by Liang et al. |
Yifei Jiang; Yi Li; Yiming Sun; Jiaxin Wang; David Woodruff; |

456 | The Emergence of Individuality. Highlight: Inspired by the idea that individuality means being an individual separate from others, we propose a simple yet efficient method for the emergence of individuality (EOI) in multi-agent reinforcement learning (MARL). |
Jiechuan Jiang; Zongqing Lu; |

457 | Online Selection Problems Against Constrained Adversary. Highlight: Inspired by a recent line of work in online algorithms with predictions, we study the constrained adversary model that utilizes predictions from a different perspective. |
Zhihao Jiang; Pinyan Lu; Zhihao Gavin Tang; Yuhao Zhang; |

458 | Active Covering. Highlight: We analyze the problem of active covering, where the learner is given an unlabeled dataset and can sequentially label query examples. |
Heinrich Jiang; Afshin Rostamizadeh; |

459 | Emphatic Algorithms for Deep Reinforcement Learning. Highlight: In this paper, we extend the use of emphatic methods to deep reinforcement learning agents. |
Ray Jiang; Tom Zahavy; Zhongwen Xu; Adam White; Matteo Hessel; Charles Blundell; Hado Van Hasselt; |

460 | Characterizing Structural Regularities of Labeled Data in Overparameterized Models. Highlight: We analyze how individual instances are treated by a model via a consistency score. The score characterizes the expected accuracy for a held-out instance given training sets of varying size sampled from the data distribution. |
Ziheng Jiang; Chiyuan Zhang; Kunal Talwar; Michael C Mozer; |

461 | Optimal Streaming Algorithms for Multi-Armed Bandits. Highlight: We propose an algorithm that works for any $k$ and achieves the optimal sample complexity $O(\frac{n}{\epsilon^2} \log\frac{k}{\delta})$ using a single-arm memory and a single pass of the stream. |
Tianyuan Jin; Keke Huang; Jing Tang; Xiaokui Xiao; |

462 | Towards Tight Bounds on The Sample Complexity of Average-reward MDPs. Highlight: When the mixing time of the probability transition matrix of all policies is at most $t_\mathrm{mix}$, we provide an algorithm that solves the problem using $\widetilde{O}(t_\mathrm{mix} \epsilon^{-3})$ (oblivious) samples per state-action pair. |
Yujia Jin; Aaron Sidford; |

463 | Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits. Highlight: In this paper, we study the anytime batched multi-armed bandit problem. |
Tianyuan Jin; Jing Tang; Pan Xu; Keke Huang; Xiaokui Xiao; Quanquan Gu; |

464 | MOTS: Minimax Optimal Thompson Sampling. Highlight: In this paper we fill this long open gap by proposing a new Thompson sampling algorithm called MOTS that adaptively truncates the sampling result of the chosen arm at each time step. |
Tianyuan Jin; Pan Xu; Jieming Shi; Xiaokui Xiao; Quanquan Gu; |

465 | Is Pessimism Provably Efficient for Offline RL? Highlight: In this paper, we propose a pessimistic variant of the value iteration algorithm (PEVI), which incorporates an uncertainty quantifier as the penalty function. |
Ying Jin; Zhuoran Yang; Zhaoran Wang; |

466 | Adversarial Option-Aware Hierarchical Imitation Learning. Highlight: In this paper, we propose Option-GAIL, a novel method to learn skills at long horizon. |
Mingxuan Jing; Wenbing Huang; Fuchun Sun; Xiaojian Ma; Tao Kong; Chuang Gan; Lei Li; |

467 | Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information. Highlight: In this work, we propose a new model in which 1) the unknown latent preference matrix can have any discrete values, and 2) users can be clustered into multiple clusters, thereby relaxing the assumptions made in prior work. |
Changhun Jo; Kangwook Lee; |

468 | Provable Lipschitz Certification for Generative Models. Highlight: We present a scalable technique for upper bounding the Lipschitz constant of generative models. |
Matt Jordan; Alex Dimakis; |

469 | Isometric Gaussian Process Latent Variable Model for Dissimilarity DataRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a probabilistic model where the latent variable respects both the distances and the topology of the modeled data. |
Martin Jørgensen; Søren Hauberg; |

470 | On The Generalization Power of Overfitted Two-Layer Neural Tangent Kernel ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study the generalization performance of min $\ell_2$-norm overfitting solutions for the neural tangent kernel (NTK) model of a two-layer neural network with ReLU activation that has no bias term. |
Peizhong Ju; Xiaojun Lin; Ness Shroff; |

471 | Improved Confidence Bounds for The Linear Logistic Model and Applications to BanditsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose improved fixed-design confidence bounds for the linear logistic model. |
Kwang-Sung Jun; Lalit Jain; Houssam Nassif; Blake Mason; |

472 | Detection of Signal in The Spiked Rectangular ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider the problem of detecting signals in the rank-one signal-plus-noise data matrix models that generalize the spiked Wishart matrices. |
Ji Hyung Jung; Hye Won Chung; Ji Oon Lee; |

473 | Estimating Identifiable Causal Effects on Markov Equivalence Class Through Double Machine LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study the problem of causal estimation from a MEC represented by a partial ancestral graph (PAG), which is learnable from observational data. |
Yonghan Jung; Jin Tian; Elias Bareinboim; |

474 | A Nullspace Property for Subspace-Preserving RecoveryRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper derives a necessary and sufficient condition for subspace-preserving recovery that is inspired by the classical nullspace property. Based on this novel condition, called here the subspace nullspace property, we derive equivalent characterizations that either admit a clear geometric interpretation that relates data distribution and subspace separation to the recovery success, or can be verified using a finite set of extreme points of a properly defined set. |
Mustafa D Kaba; Chong You; Daniel P Robinson; Enrique Mallada; Rene Vidal; |

475 | Training Recurrent Neural Networks Via Forward Propagation Through TimeRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel forward-propagation algorithm, FPTT, where at each time, for an instance, we update RNN parameters by optimizing an instantaneous risk function. |
Anil Kag; Venkatesh Saligrama; |

476 | The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure AggregationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present a comprehensive end-to-end system, which appropriately discretizes the data and adds discrete Gaussian noise before performing secure aggregation. |
Peter Kairouz; Ziyu Liu; Thomas Steinke; |

477 | Practical and Private (Deep) Learning Without Sampling or ShufflingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We consider training models with differential privacy (DP) using mini-batch gradients. |
Peter Kairouz; Brendan Mcmahan; Shuang Song; Om Thakkar; Abhradeep Thakurta; Zheng Xu; |

478 | A Differentiable Point Process with Its Application to Spiking Neural NetworksRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper is concerned about a learning algorithm for a probabilistic model of spiking neural networks (SNNs). |
Hiroshi Kajino; |

479 | Projection Techniques to Update The Truncated SVD of Evolving Matrices with ApplicationsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: The algorithm presented in this paper undertakes a projection viewpoint and focuses on building a pair of subspaces which approximate the linear span of the sought singular vectors of the updated matrix. |
Vasileios Kalantzis; Georgios Kollias; Shashanka Ubaru; Athanasios N. Nikolakopoulos; Lior Horesh; Kenneth Clarkson; |

480 | Optimal Off-Policy Evaluation from Multiple Logging PoliciesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we resolve this dilemma by finding the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one. |
Nathan Kallus; Yuta Saito; Masatoshi Uehara; |
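For context on off-policy evaluation (OPE) as in the entry above: the sketch below shows the standard single-logger inverse propensity scoring (IPS) estimator, assuming logged propensities are known. It is background only; the paper's contribution is the efficient, minimum-variance combination across multiple logging policies.

```python
import numpy as np

def ips_value(rewards, logging_probs, target_probs):
    """Inverse propensity scoring estimate of a target policy's value
    from bandit data logged under a different policy (single logger)."""
    weights = target_probs / logging_probs  # importance weights
    return float(np.mean(weights * rewards))

# Sanity check: when target equals logger, IPS reduces to the mean reward.
rng = np.random.default_rng(0)
rewards = rng.random(1000)
probs = np.full(1000, 0.5)
est = ips_value(rewards, probs, probs)
```

IPS is unbiased but can have high variance when the target policy places mass where the logger rarely acted, which is what motivates variance-optimal combinations of several loggers.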

481 | Efficient Performance Bounds for Primal-Dual Reinforcement Learning from DemonstrationsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To bridge the gap between theory and practice, we introduce a novel bilinear saddle-point framework using Lagrangian duality. |
Angeliki Kamoutsi; Goran Banjac; John Lygeros; |

482 | Statistical Estimation from Dependent DataRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: As our main contribution we provide algorithms and statistically efficient estimation rates for this model, giving several instantiations of our bounds in logistic regression, sparse logistic regression, and neural network regression settings with dependent data. |
Vardis Kandiros; Yuval Dagan; Nishanth Dikkala; Surbhi Goel; Constantinos Daskalakis; |

483 | SKIing on Simplices: Kernel Interpolation on The Permutohedral Lattice for Scalable Gaussian ProcessesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we develop a connection between SKI and the permutohedral lattice used for high-dimensional fast bilateral filtering. |
Sanyam Kapoor; Marc Finzi; Ke Alexander Wang; Andrew Gordon Wilson; |

484 | Variational Auto-Regressive Gaussian Processes for Continual LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: By relying on sparse inducing point approximations for scalable posteriors, we propose a novel auto-regressive variational distribution which reveals two fruitful connections to existing results in Bayesian inference, expectation propagation and orthogonal inducing points. |
Sanyam Kapoor; Theofanis Karaletsos; Thang D Bui; |

485 | Off-Policy Confidence SequencesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We develop confidence bounds that hold uniformly over time for off-policy evaluation in the contextual bandit setting. |
Nikos Karampatziakis; Paul Mineiro; Aaditya Ramdas; |

486 | Learning from History for Byzantine Robust OptimizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To address these issues, we present two surprisingly simple strategies: a new robust iterative clipping procedure, and incorporating worker momentum to overcome time-coupled attacks. |
Sai Praneeth Karimireddy; Lie He; Martin Jaggi; |

487 | Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio EstimationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, to mitigate train-loss hacking, we propose non-negative correction for empirical BD estimators. |
Masahiro Kato; Takeshi Teshima; |

488 | Improved Algorithms for Agnostic Pool-based Active ClassificationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work we propose an algorithm that, in contrast to uniform sampling over the disagreement region, solves an experimental design problem to determine a distribution over examples from which to request labels. |
Julian Katz-Samuels; Jifan Zhang; Lalit Jain; Kevin Jamieson; |

489 | When Does Data Augmentation Help With Membership Inference Attacks?Related Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Employing two recent MIAs, we explore the lower bound on the risk in the absence of formal upper bounds. |
Yigitcan Kaya; Tudor Dumitras; |

490 | Regularized Submodular Maximization at ScaleRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose scalable methods for maximizing a regularized submodular function $f \triangleq g-\ell$ expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$. |
Ehsan Kazemi; Shervin Minaee; Moran Feldman; Amin Karbasi; |

491 | Prior Image-Constrained Reconstruction Using Style-Based Generative ModelsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this study, a framework is proposed for estimating an object of interest that is semantically related to a known prior image. |
Varun A Kelkar; Mark Anastasio; |

492 | Self Normalizing FlowsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose \emph{Self Normalizing Flows}, a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer. |
Thomas A Keller; Jorn W.T. Peters; Priyank Jaini; Emiel Hoogeboom; Patrick Forré; Max Welling; |

493 | Interpretable Stability Bounds for Spectral Graph FiltersRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we study filter stability and provide a novel and interpretable upper bound on the change of filter output, where the bound is expressed in terms of the endpoint degrees of the deleted and newly added edges, as well as the spatial proximity of those edges. |
Henry Kenlay; Dorina Thanou; Xiaowen Dong; |

494 | Affine Invariant Analysis of Frank-Wolfe on Strongly Convex SetsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we introduce new structural assumptions on the problem (such as the directional smoothness) and derive an affine invariant, norm-independent analysis of Frank-Wolfe. |
Thomas Kerdreux; Lewis Liu; Simon Lacoste-Julien; Damien Scieur; |

495 | Markpainting: Adversarial Machine Learning Meets InpaintingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper we study how to manipulate it using our markpainting technique. |
David Khachaturov; Ilia Shumailov; Yiren Zhao; Nicolas Papernot; Ross Anderson; |

496 | Finite-Sample Analysis of Off-Policy Natural Actor-Critic AlgorithmRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we provide finite-sample convergence guarantees for an off-policy variant of the natural actor-critic (NAC) algorithm based on Importance Sampling. |
Sajad Khodadadian; Zaiwei Chen; Siva Theja Maguluri; |

497 | Functional Space Analysis of Local GAN ConvergenceRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel perspective where we study the local dynamics of adversarial training in the general functional space and show how it can be represented as a system of partial differential equations. |
Valentin Khrulkov; Artem Babenko; Ivan Oseledets; |

498 | Hey, That’s Not An ODE: Faster ODE Adjoints Via SeminormsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Here, we demonstrate that the particular structure of the adjoint equations makes the usual choices of norm (such as $L^2$) unnecessarily stringent. |
Patrick Kidger; Ricky T. Q. Chen; Terry J Lyons; |

499 | Neural SDEs As Infinite-Dimensional GANsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Here, we show that the current classical approach to fitting SDEs may be approached as a special case of (Wasserstein) GANs, and in doing so the neural and classical regimes may be brought together. |
Patrick Kidger; James Foster; Xuechen Li; Terry J Lyons; |

500 | GRAD-MATCH: Gradient Matching Based Data Subset Selection for Efficient Deep Model TrainingRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose a general framework, GRAD-MATCH, which finds subsets that closely match the gradient of the \emph{training or validation} set. |
Krishnateja Killamsetty; Durga S; Ganesh Ramakrishnan; Abir De; Rishabh Iyer; |
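To illustrate the gradient-matching idea behind the entry above: the toy sketch below greedily picks examples whose averaged per-example gradient best tracks the full-data mean gradient. This is a simplified illustration only; GRAD-MATCH itself solves a weighted matching problem with an orthogonal-matching-pursuit-style solver, and all names here are made up.

```python
import numpy as np

def greedy_gradient_match(per_example_grads, k):
    """Greedily select k examples whose unweighted mean gradient
    best approximates the full mean gradient (toy sketch)."""
    n = len(per_example_grads)
    target = per_example_grads.mean(axis=0)
    chosen = []
    current = np.zeros_like(target)  # running mean of chosen gradients
    for _ in range(k):
        best, best_err = None, np.inf
        for i in range(n):
            if i in chosen:
                continue
            # Mean gradient if example i were added to the subset.
            trial = (current * len(chosen) + per_example_grads[i]) / (len(chosen) + 1)
            err = np.linalg.norm(trial - target)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        current = (current * (len(chosen) - 1) + per_example_grads[best]) / len(chosen)
    return chosen

grads = np.random.default_rng(0).normal(size=(50, 10))
subset = greedy_gradient_match(grads, k=5)
```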

501 | Improving Predictors Via Combination Across Diverse Task CategoriesRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Our algorithm aligns the heterogeneous domains of different predictors in a shared latent space to facilitate comparisons of predictors independently of the domains on which they are originally defined. |
Kwang In Kim; |

502 | Self-Improved Retrosynthetic PlanningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Motivated by this, we propose an end-to-end framework for directly training the DNNs towards generating reaction pathways with the desirable properties. |
Junsu Kim; Sungsoo Ahn; Hankook Lee; Jinwoo Shin; |

503 | Reward Identification in Inverse Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we formalize the reward identification problem in IRL and study how identifiability relates to properties of the MDP model. |
Kuno Kim; Shivam Garg; Kirankumar Shiragur; Stefano Ermon; |

504 | I-BERT: Integer-only BERT QuantizationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. |
Sehoon Kim; Amir Gholami; Zhewei Yao; Michael W. Mahoney; Kurt Keutzer; |

505 | Message Passing Adaptive Resonance Theory for Online Active Semi-supervised LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this study, we propose Message Passing Adaptive Resonance Theory (MPART) that learns the distribution and topology of input data online. |
Taehyeong Kim; Injune Hwang; Hyundo Lee; Hyunseo Kim; Won-Seok Choi; Joseph J Lim; Byoung-Tak Zhang; |

506 | Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-SpeechRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. |
Jaehyeon Kim; Jungil Kong; Juhee Son; |

507 | A Policy Gradient Algorithm for Learning to Learn in Multiagent Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we propose a novel meta-multiagent policy gradient theorem that directly accounts for the non-stationary policy dynamics inherent to multiagent learning settings. |
Dong Ki Kim; Miao Liu; Matthew D Riemer; Chuangchuang Sun; Marwa Abdulhai; Golnaz Habibi; Sebastian Lopez-Cot; Gerald Tesauro; Jonathan How; |

508 | Inferring Latent Dynamics Underlying Neural Population Activity Via Neural Differential EquationsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Here we address this problem by introducing a low-dimensional nonlinear model for latent neural population dynamics using neural ordinary differential equations (neural ODEs), with noisy sensory inputs and Poisson spike train outputs. |
Timothy D Kim; Thomas Z Luo; Jonathan W Pillow; Carlos Brody; |

509 | The Lipschitz Constant of Self-AttentionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we investigate the Lipschitz constant of self-attention, a non-linear neural network module widely used in sequence modelling. |
Hyunjik Kim; George Papamakarios; Andriy Mnih; |
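As background for the Lipschitz-constant entry above: for a linear map $x \mapsto Wx$, the Lipschitz constant (in the 2-norm) is the largest singular value of $W$, which power iteration estimates cheaply. This is standard linear-algebra background, not the paper's method; the paper's contribution is handling the nonlinear self-attention map.

```python
import numpy as np

def spectral_norm(w, iters=100, seed=0):
    """Power-iteration estimate of the largest singular value of w,
    i.e. the Lipschitz constant of the linear map x -> w @ x."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=w.shape[1])
    for _ in range(iters):
        u = w @ v
        u /= np.linalg.norm(u)
        v = w.T @ u
        v /= np.linalg.norm(v)
    return float(u @ w @ v)

w = np.diag([3.0, 1.0, 0.5])
sigma = spectral_norm(w)  # converges to the top singular value, 3.0
```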

510 | Unsupervised Skill Discovery with Bottleneck Option LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a novel unsupervised skill discovery method named Information Bottleneck Option Learning (IBOL). |
Jaekyeom Kim; Seohong Park; Gunhee Kim; |

511 | ViLT: Vision-and-Language Transformer Without Convolution or Region SupervisionRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. |
Wonjae Kim; Bokyung Son; Ildoo Kim; |

512 | Bias-Robust Bayesian Optimization Via Dueling BanditsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: Our first contribution is a reduction of the confounded setting to the dueling bandit model. Then we propose a novel approach for dueling bandits based on information-directed sampling (IDS). |
Johannes Kirschner; Andreas Krause; |

513 | CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and PatientsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose a family of contrastive learning methods, CLOCS, that encourages representations across space, time, \textit{and} patients to be similar to one another. |
Dani Kiyasseh; Tingting Zhu; David A Clifton; |

514 | Scalable Optimal Transport in High Dimensions for Graph Distances, Embedding Alignment, and MoreRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work we propose two effective log-linear time approximations of the cost matrix: First, a sparse approximation based on locality sensitive hashing (LSH) and, second, a Nystr{ö}m approximation with LSH-based sparse corrections, which we call locally corrected Nystr{ö}m (LCN). |
Johannes Klicpera; Marten Lienen; Stephan Günnemann; |

515 | Representational Aspects of Depth and Conditioning in Normalizing FlowsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In our paper, we tackle representational aspects around depth and conditioning of normalizing flows: both for general invertible architectures, and for a particular common architecture, affine couplings. |
Frederic Koehler; Viraj Mehta; Andrej Risteski; |

516 | WILDS: A Benchmark of In-the-Wild Distribution ShiftsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: To address this gap, we present WILDS, a curated benchmark of 10 datasets reflecting a diverse range of distribution shifts that naturally arise in real-world applications, such as shifts across hospitals for tumor identification; across camera traps for wildlife monitoring; and across time and location in satellite imaging and poverty mapping. |
Pang Wei Koh; Shiori Sagawa; Henrik Marklund; Sang Michael Xie; Marvin Zhang; Akshay Balsubramani; Weihua Hu; Michihiro Yasunaga; Richard Lanas Phillips; Irena Gao; Tony Lee; Etienne David; Ian Stavness; Wei Guo; Berton Earnshaw; Imran Haque; Sara M Beery; Jure Leskovec; Anshul Kundaje; Emma Pierson; Sergey Levine; Chelsea Finn; Percy Liang; |

517 | One-sided Frank-Wolfe Algorithms for Saddle ProblemsRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We study a class of convex-concave saddle-point problems of the form $\min_x\max_y \langle Kx,y\rangle+f_{\cal P}(x)-h^*(y)$ where $K$ is a linear operator, $f_{\cal P}$ is the sum of a convex function $f$ with a Lipschitz-continuous gradient and the indicator function of a bounded convex polytope ${\cal P}$, and $h^\ast$ is a convex (possibly nonsmooth) function. |
Vladimir Kolmogorov; Thomas Pock; |

518 | A Lower Bound for The Sample Complexity of Inverse Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: This paper develops an information-theoretic lower bound for the sample complexity of the finite state, finite action IRL problem. |
Abi Komanduru; Jean Honorio; |

519 | Consensus Control for Decentralized Deep LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We identify the changing consensus distance between devices as a key parameter to explain the gap between centralized and decentralized training. |
Lingjing Kong; Tao Lin; Anastasia Koloskova; Martin Jaggi; Sebastian Stich; |

520 | A Distribution-dependent Analysis of Meta LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: For this case we propose to adopt the EM method, which is shown to enjoy efficient updates in our case. |
Mikhail Konobeev; Ilja Kuzborskij; Csaba Szepesvari; |

521 | Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?Related Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: In this work, we present the first large-scale, in-depth study of the robustness of DBU models under adversarial attacks. |
Anna-Kathrin Kopetzki; Bertrand Charpentier; Daniel Zügner; Sandhya Giri; Stephan Günnemann; |

522 | Kernel Stein Discrepancy DescentRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We investigate the properties of its Wasserstein gradient flow to approximate a target probability distribution $\pi$ on $\mathbb{R}^d$, known up to a normalization constant. |
Anna Korba; Pierre-Cyril Aubin-Frankowski; Szymon Majewski; Pierre Ablin; |

523 | Boosting The Throughput and Accelerator Utilization of Specialized CNN Inference Beyond Increasing Batch SizeRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose FoldedCNNs, a new approach to CNN design that increases inference throughput and utilization beyond large batch size. |
Jack Kosaian; Amar Phanishayee; Matthai Philipose; Debadeepta Dey; Rashmi Vinayak; |

524 | NeRF-VAE: A Geometry Aware 3D Scene Generative ModelRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We propose NeRF-VAE, a 3D scene generative model that incorporates geometric structure via Neural Radiance Fields (NeRF) and differentiable volume rendering. |
Adam R Kosiorek; Heiko Strathmann; Daniel Zoran; Pol Moreno; Rosalia Schneider; Sona Mokra; Danilo Jimenez Rezende; |

525 | Active Testing: Sample-Efficient Model EvaluationRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We introduce a new framework for sample-efficient model evaluation that we call active testing. |
Jannik Kossen; Sebastian Farquhar; Yarin Gal; Tom Rainforth; |

526 | High Confidence Generalization for Reinforcement LearningRelated Papers Related Patents Related Grants Related Orgs Related Experts DetailsHighlight: We present several classes of reinforcement learning algorithms that safely generalize to Markov decision processes (MDPs) not seen during training. |
James Kostas; |