Most Influential ICLR Papers (2022-05)
The International Conference on Learning Representations (ICLR) is one of the top machine learning conferences in the world. The Paper Digest Team analyzes all papers published at ICLR in past years and presents the 15 most influential papers for each year. This ranking is constructed automatically from citations in both research papers and granted patents, and is updated frequently to reflect recent changes. To browse the most productive ICLR authors by year, ranked by number of papers accepted, see our list of most productive ICLR authors. To find the most influential papers from other conferences and journals, visit the Best Paper Digest page. Note: the most influential papers may or may not include those that won best paper awards. (Version: 2022-05)
Based in New York, Paper Digest is dedicated to producing high-quality text analysis results that people can actually use on a daily basis. Since 2018, we have been serving users across the world with a number of exclusive services for ranking, search, tracking, and automatic literature review.
If you do not want to miss interesting academic papers, you are welcome to sign up for our free daily paper digest service to get updates on new papers published in your area every day. You are also welcome to follow us on Twitter and LinkedIn to receive new conference digests.
Paper Digest Team
New York City, New York, 10017
team@paperdigest.org
TABLE 1: Most Influential ICLR Papers (2022-05)
Year | Rank | Paper | Author(s) |
---|---|---|---|
2022 | 1 | Multitask Prompted Training Enables Zero-Shot Task Generalization (IF:3). Highlight: Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping general natural language tasks into a human-readable prompted form. | Victor Sanh et al. |
2022 | 2 | VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning (IF:3). Highlight: Variance regularization prevents collapse in self-supervised representation learning. | Adrien Bardes; Jean Ponce; Yann LeCun |
2022 | 3 | SimVLM: Simple Visual Language Model Pretraining with Weak Supervision (IF:3). Highlight: In this work, we relax these constraints and present a minimalist pretraining framework, named Simple Visual Language Model (SimVLM). | Zirui Wang et al. |
2022 | 4 | How Much Can CLIP Benefit Vision-and-Language Tasks? (IF:3). Highlight: To further study the advantage brought by CLIP, we propose to use CLIP as the visual encoder in various V&L models in two typical scenarios: 1) plugging CLIP into task-specific fine-tuning; 2) combining CLIP with V&L pre-training and transferring to downstream tasks. | Sheng Shen et al. |
2022 | 5 | LoRA: Low-Rank Adaptation of Large Language Models (IF:3). Highlight: Fine-tuning updates have a low intrinsic rank, which allows us to train only the rank decomposition matrices of certain weights, yielding better performance and practical benefits. | Edward J. Hu et al. |
2022 | 6 | How Attentive Are Graph Attention Networks? (IF:3). Highlight: We identify that Graph Attention Networks (GAT) compute a very weak form of attention. We show its empirical implications and propose a fix. | Shaked Brody; Uri Alon; Eran Yahav |
2022 | 7 | AS-MLP: An Axial Shifted MLP Architecture for Vision (IF:3). Highlight: We design the first MLP-based architecture for downstream tasks. It achieves competitive performance compared to transformer-based architectures, establishing a new strong baseline for MLP-based architectures. | Dongze Lian; Zehao Yu; Xing Sun; Shenghua Gao |
2022 | 8 | Efficient Self-supervised Vision Transformers for Representation Learning (IF:3). Highlight: Achieving SoTA on the ImageNet linear-probe task with 10 times higher throughput, using the synergy of a multi-stage Transformer architecture and a non-contrastive region-matching pre-training task. | Chunyuan Li et al. |
2022 | 9 | Bayesian Neural Network Priors Revisited (IF:3). Highlight: Using BNN priors that are not isotropic Gaussians can improve performance and reduce the cold posterior effect. | Vincent Fortuin et al. |
2022 | 10 | Gradient Matching for Domain Generalization (IF:3). Highlight: We propose to learn features that are invariant across domains by maximizing the gradient inner product between domains. | Yuge Shi et al. |
2022 | 11 | Mirror Descent Policy Optimization (IF:3). Highlight: A theory-grounded practical algorithm for policy optimization in RL that is conceptually simpler and performs better than or on par with SOTA. | Manan Tomar; Lior Shani; Yonathan Efroni; Mohammad Ghavamzadeh |
2022 | 12 | Maximum Entropy RL (Provably) Solves Some Robust RL Problems (IF:3). Highlight: Maximum entropy RL (provably) solves some robust RL problems. | Benjamin Eysenbach; Sergey Levine |
2022 | 13 | Coordination Among Neural Modules Through A Shared Global Workspace (IF:3). Highlight: Communication among different specialists using a shared workspace allows higher-order interactions. | Anirudh Goyal et al. |
2022 | 14 | Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning (IF:3). Highlight: We propose a model-free off-policy algorithm for image-based continuous control that significantly outperforms previous methods in both sample and time complexity. | Denis Yarats; Rob Fergus; Alessandro Lazaric; Lerrel Pinto |
2022 | 15 | Charformer: Fast Character Transformers Via Gradient-based Subword Tokenization (IF:3). Highlight: Fast token-free models. | Yi Tay et al. |
2021 | 1 | An Image Is Worth 16×16 Words: Transformers for Image Recognition at Scale (IF:8). Highlight: Transformers applied directly to image patches and pre-trained on large datasets work really well on image classification. | Alexey Dosovitskiy et al. |
2021 | 2 | Deformable DETR: Deformable Transformers for End-to-End Object Detection (IF:7). Highlight: Deformable DETR is an efficient and fast-converging end-to-end object detector. It mitigates the high complexity and slow convergence of DETR via a novel sampling-based efficient attention mechanism. | Xizhou Zhu et al. |
2021 | 3 | Rethinking Attention with Performers (IF:6). Highlight: We introduce Performers, linear full-rank-attention Transformers via provable random feature approximation methods, without relying on sparsity or low-rankness. | Krzysztof Marcin Choromanski et al. |
2021 | 4 | DeBERTa: Decoding-Enhanced BERT with Disentangled Attention (IF:5). Highlight: A new model architecture, DeBERTa, is proposed that improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. | Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen |
2021 | 5 | Adaptive Federated Optimization (IF:5). Highlight: We propose adaptive federated optimization techniques and highlight their improved performance over popular methods such as FedAvg. | Sashank J. Reddi et al. |
2021 | 6 | FastSpeech 2: Fast and High-Quality End-to-End Text to Speech (IF:5). Highlight: We propose a non-autoregressive TTS model named FastSpeech 2 to better solve the one-to-many mapping problem in TTS and surpass autoregressive models in voice quality. | Yi Ren et al. |
2021 | 7 | Fourier Neural Operator for Parametric Partial Differential Equations (IF:5). Highlight: A novel neural operator based on the Fourier transform for learning partial differential equations. | Zongyi Li et al. |
2021 | 8 | Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval (IF:5). Highlight: This paper improves the learning of dense text retrieval using ANCE, which selects global negatives with bigger gradient norms using an asynchronously updated ANN index. | Lee Xiong et al. |
2021 | 9 | Prototypical Contrastive Learning of Unsupervised Representations (IF:5). Highlight: We propose an unsupervised representation learning method that bridges contrastive learning with clustering in an EM framework. | Junnan Li; Pan Zhou; Caiming Xiong; Steven Hoi |
2021 | 10 | Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels (IF:5). Highlight: The first successful demonstration that image augmentation can be applied to image-based deep RL to achieve SOTA performance. | Denis Yarats; Ilya Kostrikov; Rob Fergus |
2021 | 11 | In Search of Lost Domain Generalization (IF:5). Highlight: Our ERM baseline achieves state-of-the-art performance across many domain generalization benchmarks. | Ishaan Gulrajani; David Lopez-Paz |
2021 | 12 | GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (IF:4). Highlight: In this paper we demonstrate conditional computation as a remedy to the above-mentioned impediments, and demonstrate its efficacy and utility. | Dmitry Lepikhin et al. |
2021 | 13 | Score-Based Generative Modeling Through Stochastic Differential Equations (IF:4). Highlight: A general framework for training and sampling from score-based models that unifies and generalizes previous methods, allows likelihood computation, and enables controllable generation. | Yang Song et al. |
2021 | 14 | Sharpness-aware Minimization for Efficiently Improving Generalization (IF:4). Highlight: Motivated by the connection between the geometry of the loss landscape and generalization, we introduce a procedure for simultaneously minimizing loss value and loss sharpness. | Pierre Foret; Ariel Kleiner; Hossein Mobahi; Behnam Neyshabur |
2021 | 15 | Recurrent Independent Mechanisms (IF:4). Highlight: Learning recurrent mechanisms that operate independently and interact sparingly can lead to better generalization to out-of-distribution samples. | Anirudh Goyal et al. |
2020 | 1 | ALBERT: A Lite BERT For Self-supervised Learning Of Language Representations (IF:8). Highlight: A new pretraining method that establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters than BERT-large. | Zhenzhong Lan et al. |
2020 | 2 | ELECTRA: Pre-training Text Encoders As Discriminators Rather Than Generators (IF:8). Highlight: A text encoder trained to distinguish real input tokens from plausible fakes efficiently learns effective language representations. | Kevin Clark; Minh-Thang Luong; Quoc V. Le; Christopher D. Manning |
2020 | 3 | BERTScore: Evaluating Text Generation With BERT (IF:7). Highlight: We propose BERTScore, an automatic evaluation metric for text generation, which correlates better with human judgments and provides stronger model selection performance than existing metrics. | Tianyi Zhang*; Varsha Kishore*; Felix Wu*; Kilian Q. Weinberger; Yoav Artzi |
2020 | 4 | On The Variance Of The Adaptive Learning Rate And Beyond (IF:7). Highlight: If warmup is the answer, what is the question? | Liyuan Liu et al. |
2020 | 5 | The Curious Case Of Neural Text Degeneration (IF:7). Highlight: Current language generation systems either aim for high likelihood and devolve into generic repetition, or miscalibrate their stochasticity; we provide evidence of both and propose a solution: Nucleus Sampling. | Ari Holtzman; Jan Buys; Leo Du; Maxwell Forbes; Yejin Choi |
2020 | 6 | Reformer: The Efficient Transformer (IF:7). Highlight: An efficient Transformer with locality-sensitive hashing and reversible layers. | Nikita Kitaev; Lukasz Kaiser; Anselm Levskaya |
2020 | 7 | VL-BERT: Pre-training Of Generic Visual-Linguistic Representations (IF:7). Highlight: VL-BERT is a simple yet powerful pre-trainable generic representation for visual-linguistic tasks. It is pre-trained on a massive-scale caption dataset and a text-only corpus, and can be fine-tuned for various downstream visual-linguistic tasks. | Weijie Su et al. |
2020 | 8 | On The Convergence Of FedAvg On Non-IID Data (IF:7). Highlight: In this paper, we analyze the convergence of FedAvg on non-IID data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD steps. | Xiang Li; Kaixuan Huang; Wenhao Yang; Shusen Wang; Zhihua Zhang |
2020 | 9 | Once For All: Train One Network And Specialize It For Efficient Deployment (IF:6). Highlight: We introduce techniques to train a single once-for-all network that fits many hardware platforms. | Han Cai; Chuang Gan; Tianzhe Wang; Zhekai Zhang; Song Han |
2020 | 10 | Fast Is Better Than Free: Revisiting Adversarial Training (IF:6). Highlight: FGSM-based adversarial training, with randomization, works just as well as PGD-based adversarial training: we can use this to train a robust classifier in 6 minutes on CIFAR10, and 12 hours on ImageNet, on a single machine. | Eric Wong; Leslie Rice; J. Zico Kolter |
2020 | 11 | DropEdge: Towards Deep Graph Convolutional Networks On Node Classification (IF:6). Highlight: This paper proposes DropEdge, a novel and flexible technique to alleviate the over-smoothing and overfitting issues in deep Graph Convolutional Networks. | Yu Rong; Wenbing Huang; Tingyang Xu; Junzhou Huang |
2020 | 12 | AugMix: A Simple Data Processing Method To Improve Robustness And Uncertainty (IF:6). Highlight: We obtain state-of-the-art robustness to data shifts, and we maintain calibration under data shift even when accuracy drops. | Dan Hendrycks* et al. |
2020 | 13 | Dream To Control: Learning Behaviors By Latent Imagination (IF:6). Highlight: We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination using analytic value gradients. | Danijar Hafner; Timothy Lillicrap; Jimmy Ba; Mohammad Norouzi |
2020 | 14 | Large Batch Optimization For Deep Learning: Training BERT In 76 Minutes (IF:6). Highlight: A fast optimizer for general applications and large-batch training. | Yang You et al. |
2020 | 15 | Deep Double Descent: Where Bigger Models And More Data Hurt (IF:6). Highlight: We demonstrate, and characterize, realistic settings where bigger models are worse, and more data hurts. | Preetum Nakkiran et al. |
2019 | 1 | Decoupled Weight Decay Regularization (IF:8). Highlight: Novel variants of optimization methods that combine the benefits of both adaptive and non-adaptive methods. | Ilya Loshchilov; Frank Hutter |
2019 | 2 | Large Scale GAN Training For High Fidelity Natural Image Synthesis (IF:8). Highlight: GANs benefit from scaling up. | Andrew Brock; Jeff Donahue; Karen Simonyan |
2019 | 3 | GLUE: A Multi-Task Benchmark And Analysis Platform For Natural Language Understanding (IF:9). Highlight: We present a multi-task benchmark and analysis platform for evaluating generalization in natural language understanding systems. | Alex Wang et al. |
2019 | 4 | How Powerful Are Graph Neural Networks? (IF:8). Highlight: We develop theoretical foundations for the expressive power of GNNs and design a provably most powerful GNN. | Keyulu Xu*; Weihua Hu*; Jure Leskovec; Stefanie Jegelka |
2019 | 5 | DARTS: Differentiable Architecture Search (IF:8). Highlight: We propose a differentiable architecture search algorithm for both convolutional and recurrent networks, achieving competitive performance with the state of the art using orders of magnitude less computation. | Hanxiao Liu; Karen Simonyan; Yiming Yang |
2019 | 6 | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (IF:8). Highlight: Feedforward neural networks that can have weights pruned after training could have had the same weights pruned before training. | Jonathan Frankle; Michael Carbin |
2019 | 7 | Learning Deep Representations By Mutual Information Estimation And Maximization (IF:8). Highlight: We learn deep representations by maximizing mutual information, leveraging structure in the objective, and are able to compete with fully supervised classifiers with comparable architectures. | R Devon Hjelm et al. |
2019 | 8 | ImageNet-trained CNNs Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy And Robustness (IF:8). Highlight: ImageNet-trained CNNs are biased towards object texture (instead of shape, like humans). Overcoming this major difference between human and machine vision yields improved detection performance and previously unseen robustness to image distortions. | Robert Geirhos et al. |
2019 | 9 | ProxylessNAS: Direct Neural Architecture Search On Target Task And Hardware (IF:8). Highlight: Proxy-less neural architecture search for directly learning architectures on a large-scale target task (ImageNet) while reducing the cost to the same level as normal training. | Han Cai; Ligeng Zhu; Song Han |
2019 | 10 | Benchmarking Neural Network Robustness To Common Corruptions And Perturbations (IF:8). Highlight: We propose ImageNet-C to measure classifier corruption robustness and ImageNet-P to measure perturbation robustness. | Dan Hendrycks; Thomas Dietterich |
2019 | 11 | Robustness May Be At Odds With Accuracy (IF:7). Highlight: We show that adversarial robustness might come at the cost of standard classification performance, but also yields unexpected benefits. | Dimitris Tsipras; Shibani Santurkar; Logan Engstrom; Alexander Turner; Aleksander Madry |
2019 | 12 | A Closer Look At Few-shot Classification (IF:8). Highlight: A detailed empirical study in few-shot classification, revealing challenges in the standard evaluation setting and showing a new direction. | Wei-Yu Chen; Yen-Cheng Liu; Zsolt Kira; Yu-Chiang Frank Wang; Jia-Bin Huang |
2019 | 13 | Gradient Descent Provably Optimizes Over-parameterized Neural Networks (IF:7). Highlight: We prove gradient descent achieves zero training loss at a linear rate on over-parameterized neural networks. | Simon S. Du; Xiyu Zhai; Barnabas Poczos; Aarti Singh |
2019 | 14 | Rethinking The Value Of Network Pruning (IF:7). Highlight: In structured network pruning, fine-tuning a pruned model gives performance only comparable to training it from scratch. | Zhuang Liu; Mingjie Sun; Tinghui Zhou; Gao Huang; Trevor Darrell |
2019 | 15 | Meta-Learning With Latent Embedding Optimization (IF:7). Highlight: Latent Embedding Optimization (LEO) is a novel gradient-based meta-learner with state-of-the-art performance on the challenging 5-way 1-shot and 5-shot miniImageNet and tieredImageNet classification tasks. | Andrei A. Rusu et al. |
2018 | 1 | Graph Attention Networks (IF:9). Highlight: A novel approach to processing graph-structured data by neural networks, leveraging attention over a node's neighborhood. Achieves state-of-the-art results on transductive citation network tasks and an inductive protein-protein interaction task. | Petar Velickovic et al. |
2018 | 2 | Towards Deep Learning Models Resistant To Adversarial Attacks (IF:9). Highlight: We provide a principled, optimization-based re-look at the notion of adversarial examples, and develop methods that produce models that are adversarially robust against a wide range of adversaries. | Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu |
2018 | 3 | Progressive Growing Of GANs For Improved Quality, Stability, And Variation (IF:9). Highlight: We train generative adversarial networks in a progressive fashion, enabling us to generate high-resolution images with high quality. | Tero Karras; Timo Aila; Samuli Laine; Jaakko Lehtinen |
2018 | 4 | Mixup: Beyond Empirical Risk Minimization (IF:8). Highlight: Training on convex combinations of random training examples and their labels improves generalization in deep neural networks. | Hongyi Zhang; Moustapha Cisse; Yann N. Dauphin; David Lopez-Paz |
2018 | 5 | Spectral Normalization For Generative Adversarial Networks (IF:8). Highlight: We propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator of GANs. | Takeru Miyato; Toshiki Kataoka; Masanori Koyama; Yuichi Yoshida |
2018 | 6 | Ensemble Adversarial Training: Attacks And Defenses (IF:9). Highlight: Adversarial training with single-step methods overfits, and remains vulnerable to simple black-box and white-box attacks. We show that including adversarial examples from multiple sources helps defend against black-box attacks. | Florian Tramèr et al. |
2018 | 7 | Unsupervised Representation Learning By Predicting Image Rotations (IF:8). Highlight: In our work we propose to learn image features by training ConvNets to recognize the 2D rotation applied to the input image. | Spyros Gidaris; Praveer Singh; Nikos Komodakis |
2018 | 8 | Word Translation Without Parallel Data (IF:8). Highlight: Aligning languages without the Rosetta Stone: with no parallel data, we construct bilingual dictionaries using adversarial training, cross-domain local scaling, and an accurate proxy criterion for cross-validation. | Guillaume Lample; Alexis Conneau; Marc'Aurelio Ranzato; Ludovic Denoyer; Hervé Jégou |
2018 | 9 | On The Convergence Of Adam And Beyond (IF:8). Highlight: We investigate the convergence of popular optimization algorithms such as Adam and RMSProp, and propose new variants of these methods which provably converge to the optimal solution in convex settings. | Sashank J. Reddi; Satyen Kale; Sanjiv Kumar |
2018 | 10 | A Deep Reinforced Model For Abstractive Summarization (IF:9). Highlight: A summarization model combining a new intra-attention and a reinforcement learning method to increase summary ROUGE scores and quality for long sequences. | Romain Paulus; Caiming Xiong; Richard Socher |
2018 | 11 | Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting (IF:8). Highlight: A neural sequence model that learns to forecast on a directed graph. | Yaguang Li; Rose Yu; Cyrus Shahabi; Yan Liu |
2018 | 12 | Regularizing And Optimizing LSTM Language Models (IF:8). Highlight: Effective regularization and optimization strategies for LSTM-based language models achieve SOTA on PTB and WT2. | Stephen Merity; Nitish Shirish Keskar; Richard Socher |
2018 | 13 | Countering Adversarial Images Using Input Transformations (IF:8). Highlight: We apply a model-agnostic defense strategy against adversarial examples and achieve 60% white-box accuracy and 90% black-box accuracy against major attack algorithms. | Chuan Guo; Mayank Rana; Moustapha Cisse; Laurens van der Maaten |
2018 | 14 | A Simple Neural Attentive Meta-Learner (IF:7). Highlight: A simple RNN-based meta-learner that achieves SOTA performance on popular benchmarks. | Nikhil Mishra; Mostafa Rohaninejad; Xi Chen; Pieter Abbeel |
2018 | 15 | Enhancing The Reliability Of Out-of-distribution Image Detection In Neural Networks (IF:7). Highlight: We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. | Shiyu Liang; Yixuan Li; R. Srikant |