
Paper Digest: CVPR 2019 Highlights


Download CVPR-2019-Paper-Digests.pdf – highlights of all 1,294 CVPR-2019 papers (the PDF is about 1 MB).

You can also download paper highlights by sessions (15 sessions in total):

3D Multiview;    3D Single View & RGBD;    Action & Video;    Applications;    Computational Photography & Graphics;

Deep Learning;    Face & Body;    Language & Reasoning;    Low-Level & Optimization;    Motion & Biometrics;

Recognition;    Scenes & Representation;    Segmentation, Grouping, & Shape;    Statistics, Physics, Theory, & Datasets;    Synthesis.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) is one of the top computer vision conferences in the world. In 2019 it was held in Long Beach, California. Of the more than 5,000 paper submissions, 1,294 were accepted.

To help the AI community quickly catch up on the work presented at this conference, the Paper Digest team processed all CVPR-2019 accepted papers and generated one highlight sentence (typically the main topic) for each paper. Readers are encouraged to read these machine-generated highlights to quickly get the main idea of each paper.

We thank all authors for writing these interesting papers, and readers for reading our digests. If you do not want to miss any interesting paper in your areas, you are welcome to sign up for our free daily paper digest service to get new paper updates customized to your own interests.

Paper Digest Team



1, TITLE: Finding Task-Relevant Features for Few-Shot Learning by Category Traversal
AUTHORS: Hongyang Li, David Eigen, Samuel Dodge, Matthew Zeiler, Xiaogang Wang
HIGHLIGHT: In this work, we introduce a Category Traversal Module that can be inserted as a plug-and-play module into most metric-learning based few-shot learners.

2, TITLE: Edge-Labeling Graph Neural Network for Few-Shot Learning
AUTHORS: Jongmin Kim, Taesup Kim, Sungwoong Kim, Chang D. Yoo
HIGHLIGHT: In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on the edge-labeling graph, for few-shot learning.

3, TITLE: Generating Classification Weights With GNN Denoising Autoencoders for Few-Shot Learning
AUTHORS: Spyros Gidaris, Nikos Komodakis
HIGHLIGHT: Given an initial recognition model already trained on a set of base classes, the goal of this work is to develop a meta-model for few-shot learning.

4, TITLE: Kervolutional Neural Networks
AUTHORS: Chen Wang, Jianfei Yang, Lihua Xie, Junsong Yuan
HIGHLIGHT: To solve this problem, a new operation, kervolution (kernel convolution), is introduced to approximate complex behaviors of human perception systems by leveraging the kernel trick.
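As a rough illustration of the kernel-trick idea behind kervolution (a generic sketch of the general form, not the paper's implementation), the inner product that a standard convolution computes between a filter and an image patch can be replaced by a nonlinear kernel function, e.g. a polynomial kernel:

```python
import numpy as np

def kervolve_1d(signal, weights, kernel):
    """Slide `weights` over `signal`, replacing the usual inner
    product with an arbitrary kernel function (the 'kernel trick')."""
    k = len(weights)
    out = np.empty(len(signal) - k + 1)
    for i in range(len(out)):
        patch = signal[i:i + k]
        out[i] = kernel(weights, patch)
    return out

# Polynomial kernel (w.x + c)^d; a linear kernel recovers plain
# cross-correlation (the operation conv layers actually compute).
poly = lambda w, x: (np.dot(w, x) + 1.0) ** 2
linear = lambda w, x: np.dot(w, x)

sig = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, -1.0])
print(kervolve_1d(sig, w, linear))  # same as np.correlate(sig, w, 'valid')
print(kervolve_1d(sig, w, poly))
```

With the linear kernel this reduces exactly to ordinary correlation, so kervolution can be seen as a drop-in generalization of the convolution layer.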

5, TITLE: Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem
AUTHORS: Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
HIGHLIGHT: For bounded domains like images, we propose a new robust optimization technique, similar to adversarial training, which enforces low-confidence predictions far away from the training data.

6, TITLE: On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
AUTHORS: Yusuke Tsuzuku, Issei Sato
HIGHLIGHT: As a byproduct of the analysis, we propose an algorithm to create shift-invariant universal adversarial perturbations available in black-box settings.

7, TITLE: Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization
AUTHORS: Siyuan Qiao, Zhe Lin, Jianming Zhang, Alan L. Yuille
HIGHLIGHT: In this paper, we study the problem of improving computational resource utilization of neural networks.

8, TITLE: Hardness-Aware Deep Metric Learning
AUTHORS: Wenzhao Zheng, Zhaodong Chen, Jiwen Lu, Jie Zhou
HIGHLIGHT: This paper presents a hardness-aware deep metric learning (HDML) framework.

9, TITLE: Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation
AUTHORS: Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, Li Fei-Fei
HIGHLIGHT: In this paper, we study NAS for semantic image segmentation.

10, TITLE: Learning Loss for Active Learning
AUTHORS: Donggeun Yoo, In So Kweon
HIGHLIGHT: In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with deep networks.

11, TITLE: Striking the Right Balance With Uncertainty
AUTHORS: Salman Khan, Munawar Hayat, Syed Waqas Zamir, Jianbing Shen, Ling Shao
HIGHLIGHT: In this paper, we demonstrate that the Bayesian uncertainty estimates directly correlate with the rarity of classes and the difficulty level of individual samples.

12, TITLE: AutoAugment: Learning Augmentation Strategies From Data
AUTHORS: Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, Quoc V. Le
HIGHLIGHT: In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies.

13, TITLE: SDRSAC: Semidefinite-Based Randomized Approach for Robust Point Cloud Registration Without Correspondences
AUTHORS: Huu M. Le, Thanh-Toan Do, Tuan Hoang, Ngai-Man Cheung
HIGHLIGHT: This paper presents a novel randomized algorithm for robust point cloud registration without correspondences.

14, TITLE: BAD SLAM: Bundle Adjusted Direct RGB-D SLAM
AUTHORS: Thomas Schops, Torsten Sattler, Marc Pollefeys
HIGHLIGHT: In contrast, in this paper we present a novel, fast direct BA formulation which we implement in a real-time dense RGB-D SLAM algorithm.
In order to facilitate state-of-the-art research on direct RGB-D SLAM, we propose a novel, well-calibrated benchmark for this task that uses synchronized global shutter RGB and depth cameras.

15, TITLE: Revealing Scenes by Inverting Structure From Motion Reconstructions
AUTHORS: Francesco Pittaluga, Sanjeev J. Koppal, Sing Bing Kang, Sudipta N. Sinha
HIGHLIGHT: In this paper, we show, for the first time, that such point clouds retain enough information to reveal scene appearance and compromise privacy.

16, TITLE: Strand-Accurate Multi-View Hair Capture
AUTHORS: Giljoo Nam, Chenglei Wu, Min H. Kim, Yaser Sheikh
HIGHLIGHT: In this paper, we present the first method to capture high-fidelity hair geometry with strand-level accuracy.

17, TITLE: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
AUTHORS: Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove
HIGHLIGHT: In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
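For background on the representation DeepSDF learns with a network (this closed-form sphere example is illustrative only, not from the paper): a signed distance function maps a 3D point to its distance from a shape's surface, negative inside, zero on the surface, positive outside, so the shape itself is the zero level set.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Analytic signed distance to a sphere: negative inside,
    zero on the surface, positive outside. DeepSDF replaces such
    closed-form functions with a learned neural network."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center of the unit sphere (inside)
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
print(sphere_sdf(pts, np.zeros(3), 1.0))  # -> [-1.  0.  1.]
```

Because the representation is continuous, surfaces can be extracted at any resolution (e.g. via marching cubes on the zero level set), which is what makes it attractive for shape completion from partial data.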

18, TITLE: Pushing the Boundaries of View Extrapolation With Multiplane Images
AUTHORS: Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely
HIGHLIGHT: We explore the problem of view synthesis from a narrow baseline pair of images, and focus on generating high-quality view extrapolations with plausible disocclusions.

19, TITLE: GA-Net: Guided Aggregation Net for End-To-End Stereo Matching
AUTHORS: Feihu Zhang, Victor Prisacariu, Ruigang Yang, Philip H.S. Torr
HIGHLIGHT: We propose two novel neural net layers, aimed at capturing local and the whole-image cost dependencies respectively.

20, TITLE: Real-Time Self-Adaptive Deep Stereo
AUTHORS: Alessio Tonioni, Fabio Tosi, Matteo Poggi, Stefano Mattoccia, Luigi Di Stefano
HIGHLIGHT: Instead, we propose to perform unsupervised and continuous online adaptation of a deep stereo network, which allows for preserving its accuracy in any environment.


Download CVPR-2019-Paper-Digests.pdf to read the highlights of all 1,294 CVPR-2019 papers.
