Most Influential SIGGRAPH Papers (2026-03 Version)
To search or review SIGGRAPH papers on a specific topic, please use the search-by-venue and review-by-venue (SIGGRAPH) services. To browse the most productive SIGGRAPH authors, ranked by number of accepted papers and grouped by year, see the most productive SIGGRAPH authors list.
As a pioneer in the field since 2018, Paper Digest has curated thousands of such lists, drawing on years of accumulated data across decades of conferences and research topics. To ensure users never miss a breakthrough, our daily digest service sifts through tens of thousands of new papers, clinical trials, news articles, and community posts every day, delivering only what matters most to your specific interests. Beyond discovery, Paper Digest offers built-in research tools to help users read articles, write articles, get answers, conduct literature reviews, and generate research reports more efficiently.
Paper Digest Team
New York City, New York, 10017
TABLE 1: Most Influential SIGGRAPH Papers (2026-03 Version)
| Year | Rank | Paper | Author(s) |
|---|---|---|---|
| 2025 | 1 | Diffusion As Shader: 3D-aware Video Diffusion for Versatile Video Generation Control (IF: 3). Highlight: In this paper, we introduce Diffusion as Shader (DaS), a novel approach that supports multiple video control tasks within a unified architecture. | Zekai Gu et al. |
| 2025 | 2 | CAST: Component-Aligned 3D Scene Reconstruction from An RGB Image (IF: 3). Highlight: To address these, we propose CAST (Component-Aligned 3D Scene Reconstruction from a Single RGB Image), a novel method for 3D scene reconstruction. | Kaixin Yao et al. |
| 2025 | 3 | StableMakeup: When Real-World Makeup Transfer Meets Diffusion Model (IF: 3). Highlight: In this paper, we introduce Stable-Makeup, a novel diffusion-based makeup transfer method capable of robustly transferring a wide range of real-world makeup onto user-provided faces. | Yuxuan Zhang; Yirui Yuan; Yiren Song; Jiaming Liu |
| 2025 | 4 | CineMaster: A 3D-Aware and Controllable Framework for Cinematic Text-to-Video Generation (IF: 3). Highlight: In this work, we present CineMaster, a novel framework for 3D-aware and controllable text-to-video generation. | Qinghe Wang et al. |
| 2025 | 5 | VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play Context Control (IF: 3). Highlight: However, current methods relying on limited pixel propagation or single-branch image inpainting architectures face challenges with generating fully masked objects, balancing background preservation with foreground generation, and maintaining ID consistency over long videos. To address these issues, we propose VideoPainter, an efficient dual-branch framework featuring a lightweight context encoder. | Yuxuan Bian et al. |
| 2025 | 6 | LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation (IF: 3). Highlight: However, the generated scene suffers from semantic drift during expansion and is unable to handle occlusion among scene hierarchies. To tackle these challenges, we introduce LayerPano3D, a novel framework for full-view, explorable panoramic 3D scene generation from a single text prompt. | Shuai Yang et al. |
| 2025 | 7 | VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control (IF: 3). Highlight: In this paper, we propose VideoAnydoor, a zero-shot video object insertion framework with high-fidelity detail preservation and precise motion control. | Yuanpeng Tu et al. |
| 2025 | 8 | TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space (IF: 3). Highlight: We present TokenVerse, a method for multi-concept personalization, leveraging a pre-trained text-to-image diffusion model. | Daniel Garibi et al. |
| 2025 | 9 | MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation (IF: 3). Highlight: This paper presents a method that allows users to design cinematic video shots in the context of image-to-video generation. | Jinbo Xing et al. |
| 2025 | 10 | Motion Inversion for Video Customization (IF: 3). Highlight: In this work, we present a novel approach for motion customization in video generation, addressing the widespread gap in the exploration of motion representation within video generative models. | Luozhou Wang et al. |
| 2025 | 11 | RigAnything: Template-Free Autoregressive Rigging for Diverse 3D Assets (IF: 3). Highlight: We present RigAnything, a novel autoregressive transformer-based model, which makes 3D assets rig-ready by probabilistically generating joints and skeleton topologies and assigning skinning weights in a template-free manner. | Isabella Liu et al. |
| 2025 | 12 | Deformable Beta Splatting (IF: 3). Highlight: We introduce Deformable Beta Splatting (DBS), a deformable and compact approach that enhances both geometry and color representation. | Rong Liu; Dylan Sun; Meida Chen; Yue Wang; Andrew Feng |
| 2025 | 13 | FLoD: Integrating Flexible Level of Detail Into 3D Gaussian Splatting for Customizable Rendering (IF: 3). Highlight: Conversely, methods that enhance rendering quality require high-end GPUs with large VRAM, making them impractical for lower-end devices with limited memory capacity. Consequently, 3DGS-based works generally assume a single hardware setup and lack the flexibility to adapt to varying hardware constraints. To overcome this limitation, we propose Flexible Level of Detail (FLoD) for 3DGS. | Yunji Seo; Young Sun Choi; HyunSeung Son; Youngjung Uh |
| 2025 | 14 | One Model to Rig Them All: Diverse Skeleton Rigging with UniRig (IF: 3). Highlight: We introduce UniRig, a novel, unified framework for automatic skeletal rigging that leverages the power of large autoregressive models and a bone-point cross-attention mechanism to generate both high-quality skeletons and skinning weights. | Jia-Peng Zhang; Cheng-Feng Pu; Meng-Hao Guo; Yan-Pei Cao; Shi-Min Hu |
| 2025 | 15 | Image-GS: Content-Adaptive Image Representation Via 2D Gaussians (IF: 3). Highlight: Combined with learning-based workflows, they demonstrate impressive trade-offs between visual fidelity and memory footprint. Existing methods in this domain, however, often rely on fixed data structures that suboptimally allocate memory or on compute-intensive implicit models, hindering their practicality for real-time graphics applications. Inspired by recent advancements in radiance field rendering, we introduce Image-GS, a content-adaptive image representation based on 2D Gaussians. | Yunxiang Zhang et al. |
| 2024 | 1 | 2D Gaussian Splatting for Geometrically Accurate Radiance Fields (IF: 7). Highlight: We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images. | Binbin Huang; Zehao Yu; Anpei Chen; Andreas Geiger; Shenghua Gao |
| 2024 | 2 | MotionCtrl: A Unified and Flexible Motion Controller for Video Generation (IF: 6). Highlight: However, existing works either mainly focus on one type of motion or do not clearly distinguish between the two, limiting their control capabilities and diversity. Therefore, this paper presents MotionCtrl, a unified and flexible motion controller for video generation designed to effectively and independently control camera and object motion. | Zhouxia Wang et al. |
| 2024 | 3 | CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets (IF: 6). Highlight: In the realm of digital creativity, our potential to craft intricate 3D worlds from imagination is often hampered by the limitations of existing digital tools, which demand extensive expertise and effort. To narrow this disparity, we introduce CLAY, a 3D geometry and material generator designed to effortlessly transform human imagination into intricate 3D digital structures. | Longwen Zhang et al. |
| 2024 | 4 | A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets (IF: 5). Highlight: We introduce a divide-and-conquer approach that allows us to train very large scenes in independent chunks. | Bernhard Kerbl et al. |
| 2024 | 5 | Subject-Diffusion: Open Domain Personalized Text-to-Image Generation Without Test-time Fine-tuning (IF: 5). Highlight: In this paper, we propose Subject-Diffusion, a novel open-domain personalized image generation model that, in addition to not requiring test-time fine-tuning, also only requires a single reference image to support personalized generation of single- or two-subjects in any domain. | Jian Ma; Junhao Liang; Chen Chen; Haonan Lu |
| 2024 | 6 | VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality (IF: 4). Highlight: Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction, offering a seamless and intuitive user experience. | Ying Jiang et al. |
| 2024 | 7 | Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling (IF: 4). Highlight: We introduce Motion-I2V, a novel framework for consistent and controllable text-guided image-to-video generation (I2V). | Xiaoyu Shi et al. |
| 2024 | 8 | High-quality Surface Reconstruction Using Gaussian Surfels (IF: 4). Highlight: We propose a volumetric cutting method to aggregate the information of Gaussian surfels so as to remove erroneous points in depth maps generated by alpha blending. | Pinxuan Dai et al. |
| 2024 | 9 | 4D-Rotor Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes (IF: 4). Highlight: In this paper, we introduce 4D Gaussian Splatting (4DRotorGS), a novel method that represents dynamic scenes with anisotropic 4D XYZT Gaussians, inspired by the success of 3D Gaussian Splatting in static scenes [Kerbl et al. 2023]. | Yuanxing Duan et al. |
| 2024 | 10 | Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion (IF: 4). Highlight: In this paper, we introduce Direct-a-Video, a system that allows users to independently specify motions for multiple objects as well as the camera's pan and zoom movements, as if directing a video. | Shiyuan Yang et al. |
| 2024 | 11 | Cross-Image Attention for Zero-Shot Appearance Transfer (IF: 4). Highlight: Recent advancements in text-to-image generative models have demonstrated a remarkable ability to capture a deep semantic understanding of images. In this work, we leverage this semantic knowledge to transfer the visual appearance between objects that share similar semantics but may differ significantly in shape. | Yuval Alaluf; Daniel Garibi; Or Patashnik; Hadar Averbuch-Elor; Daniel Cohen-Or |
| 2024 | 12 | RGB↔X: Image Decomposition and Synthesis Using Material- and Lighting-aware Diffusion Models (IF: 4). Highlight: Our X → RGB model explores a middle ground between traditional rendering and generative models: we can specify only certain appearance properties that should be followed, and give the model freedom to hallucinate a plausible version of the rest. | Zheng Zeng et al. |
| 2024 | 13 | Training-Free Consistent Text-to-Image Generation (IF: 4). Highlight: Here, we present ConsiStory, a training-free approach that enables consistent subject generation by sharing the internal activations of the pretrained model. | Yoad Tewel et al. |
| 2024 | 14 | MonoGaussianAvatar: Monocular Gaussian Point-based Head Avatar (IF: 4). Highlight: However, 3DMM-based methods are constrained by their fixed topologies, point-based approaches suffer from a heavy training burden due to the extensive quantity of points involved, and the last ones suffer from limitations in deformation flexibility and rendering efficiency. In response to these challenges, we propose MonoGaussianAvatar (Monocular Gaussian Point-based Head Avatar), a novel approach that harnesses 3D Gaussian point representation coupled with a Gaussian deformation field to learn explicit head avatars from monocular portrait videos. | Yufan Chen et al. |
| 2024 | 15 | StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering (IF: 3). Highlight: In this paper, we present a novel hierarchical rasterization approach that systematically resorts and culls splats with minimal processing overhead. | Lukas Radl et al. |
| 2023 | 1 | 3D Gaussian Splatting for Real-Time Radiance Field Rendering (IF: 8). Highlight: We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and, importantly, allow high-quality real-time (≥ 30 fps) novel-view synthesis at 1080p resolution. | Bernhard Kerbl; Georgios Kopanas; Thomas Leimkuehler; George Drettakis |
| 2023 | 2 | Nerfstudio: A Modular Framework for Neural Radiance Field Development (IF: 7). Highlight: In order to streamline the development and deployment of NeRF research, we propose a modular PyTorch framework, Nerfstudio. | Matthew Tancik et al. |
| 2023 | 3 | Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models (IF: 7). Highlight: Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. | Hila Chefer; Yuval Alaluf; Yael Vinker; Lior Wolf; Daniel Cohen-Or |
| 2023 | 4 | Zero-shot Image-to-Image Translation (IF: 7). Highlight: In this work, we introduce pix2pix-zero, an image-to-image translation method that can preserve the original image's content without manual prompting. | Gaurav Parmar et al. |
| 2023 | 5 | Blended Latent Diffusion (IF: 6). Highlight: In this paper, we present an accelerated solution to the task of local text-driven editing of generic images, where the desired edits are confined to a user-provided mask. | Omri Avrahami; Ohad Fried; Dani Lischinski |
| 2023 | 6 | 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models (IF: 6). Highlight: We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models. | Biao Zhang; Jiapeng Tang; Matthias Nießner; Peter Wonka |
| 2023 | 7 | TEXTure: Text-Guided Texturing of 3D Shapes (IF: 6). Highlight: In this paper, we present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes. | Elad Richardson; Gal Metzer; Yuval Alaluf; Raja Giryes; Daniel Cohen-Or |
| 2023 | 8 | Drag Your GAN: Interactive Point-based Manipulation on The Generative Image Manifold (IF: 6). Highlight: In this work, we study a powerful yet much less explored way of controlling GANs, that is, to "drag" any points of the image to precisely reach target points in a user-interactive manner, as shown in Fig. 1. | Xingang Pan et al. |
| 2023 | 9 | BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis (IF: 5). Highlight: We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis. | Lior Yariv et al. |
| 2023 | 10 | MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes (IF: 5). Highlight: We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. | Christian Reiser et al. |
| 2023 | 11 | Sketch-Guided Text-to-Image Diffusion Models (IF: 5). Highlight: Our key idea is to train a Latent Guidance Predictor (LGP), a small, per-pixel Multi-Layer Perceptron (MLP) that maps latent features of noisy images to spatial maps, where the deep features are extracted from the core Denoising Diffusion Probabilistic Model (DDPM) network. | Andrey Voynov; Kfir Aberman; Daniel Cohen-Or |
| 2023 | 12 | Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models (IF: 5). Highlight: However, current personalization approaches struggle with lengthy training times, high storage requirements, or loss of identity. To overcome these limitations, we propose an encoder-based domain-tuning approach. | Rinon Gal et al. |
| 2023 | 13 | Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models (IF: 5). Highlight: Diffusion models have experienced a surge of interest as highly expressive yet efficiently trainable probabilistic models. We show that these models are an excellent fit for synthesising human motion that co-occurs with audio, e.g., dancing and co-speech gesticulation, since motion is complex and highly ambiguous given audio, calling for a probabilistic description. | Simon Alexanderson; Rajmund Nagy; Jonas Beskow; Gustav Eje Henter |
| 2023 | 14 | Key-Locked Rank One Editing for Text-to-Image Personalization (IF: 5). Highlight: The task of T2I personalization poses multiple hard challenges, such as maintaining high visual fidelity while allowing creative control, combining multiple personalized concepts in a single image, and keeping a small model size. We present Perfusion, a T2I personalization method that addresses these challenges using dynamic rank-1 updates to the underlying T2I model. | Yoad Tewel; Rinon Gal; Gal Chechik; Yuval Atzmon |
| 2023 | 15 | GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents (IF: 4). Highlight: In this work, we present GestureDiffuCLIP, a neural network framework for synthesizing realistic, stylized co-speech gestures with flexible style control. | Tenglong Ao; Zeyi Zhang; Libin Liu |
| 2022 | 1 | Palette: Image-to-Image Diffusion Models (IF: 8). Highlight: This paper develops a unified framework for image-to-image translation based on conditional diffusion models and evaluates this framework on four challenging image-to-image translation tasks, namely colorization, inpainting, uncropping, and JPEG restoration. | Chitwan Saharia et al. |
| 2022 | 2 | StyleGAN-NADA: CLIP-guided Domain Adaptation of Image Generators (IF: 7). Highlight: Leveraging the semantic power of large-scale Contrastive-Language-Image-Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image. | Rinon Gal et al. |
| 2022 | 3 | StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets (IF: 7). Highlight: Our final model, StyleGAN-XL, sets a new state-of-the-art on large-scale image synthesis and is the first to generate images at a resolution of 1024² at such a dataset scale. | Axel Sauer; Katja Schwarz; Andreas Geiger |
| 2022 | 4 | Domain Enhanced Arbitrary Image Style Transfer Via Contrastive Learning (IF: 5). Highlight: In this work, we tackle the challenging problem of arbitrary image style transfer using a novel style feature representation learning method. | Yuxin Zhang et al. |
| 2022 | 5 | EAMM: One-Shot Emotional Talking Face Via Audio-Based Emotion-Aware Motion Model (IF: 5). Highlight: In this paper, we propose the Emotion-Aware Motion Model (EAMM) to generate one-shot emotional talking faces by involving an emotion source video. | Xinya Ji et al. |
| 2022 | 6 | Authentic Volumetric Avatars from A Phone Scan (IF: 4). Highlight: Creating photorealistic avatars of existing people currently requires extensive person-specific data capture, which is usually only accessible to the VFX industry and not the general public. Our work aims to address this drawback by relying only on a short mobile phone capture to obtain a drivable 3D head avatar that matches a person's likeness faithfully. | Chen Cao et al. |
| 2022 | 7 | Variable Bitrate Neural Fields (IF: 4). Highlight: Unfortunately, these feature grids usually come at the cost of significantly increased memory consumption compared to stand-alone neural network models. We present a dictionary method for compressing such feature grids, reducing their memory consumption by up to 100× and permitting a multiresolution representation, which can be useful for out-of-core streaming. | Towaki Takikawa et al. |
| 2022 | 8 | Differentiable Signed Distance Function Rendering (IF: 4). Highlight: In this article, we show how to extend the commonly used sphere tracing algorithm so that it additionally outputs a reparameterization that provides the means to compute accurate shape parameter derivatives. | Delio Vicini; Sébastien Speierer; Wenzel Jakob |
| 2022 | 9 | Physics-based Character Controllers Using Conditional VAEs (IF: 4). Highlight: High-quality motion capture datasets are now publicly available, and researchers have used them to create kinematics-based controllers that can generate plausible and diverse human motions without conditioning on specific goals (i.e., a task-agnostic generative model). In this paper, we present an algorithm to build such controllers for physically simulated characters having many degrees of freedom. | Jungdam Won; Deepak Gopinath; Jessica Hodgins |
| 2022 | 10 | Learning Smooth Neural Functions Via Lipschitz Regularization (IF: 4). Highlight: In this work, we introduce a novel regularization designed to encourage smooth latent spaces in neural fields by penalizing the upper bound on the field's Lipschitz constant. | Hsueh-Ti Derek Liu; Francis Williams; Alec Jacobson; Sanja Fidler; Or Litany |
| 2022 | 11 | CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions (IF: 4). Highlight: In this work, we investigate how to effectively link the pretrained latent spaces of StyleGAN and CLIP, which in turn allows us to automatically extract semantically-labeled edit directions from StyleGAN, finding and naming meaningful edit operations in a fully unsupervised setup, without additional human guidance. | Rameen Abdal; Peihao Zhu; John Femiani; Niloy Mitra; Peter Wonka |
| 2022 | 12 | Neural Dual Contouring (IF: 4). Highlight: We introduce neural dual contouring (NDC), a new data-driven approach to mesh reconstruction based on dual contouring (DC). | Zhiqin Chen; Andrea Tagliasacchi; Thomas Funkhouser; Hao Zhang |
| 2022 | 13 | ReLU Fields: The Little Non-linearity That Could (IF: 4). Highlight: Hence, in this work, we investigate the smallest change to grid-based representations that retains the high-fidelity results of MLPs while enabling fast reconstruction and rendering times. | Animesh Karnewar; Tobias Ritschel; Oliver Wang; Niloy Mitra |
| 2022 | 14 | Approximate Convex Decomposition for 3D Meshes with Collision-aware Concavity and Tree Search (IF: 4). Highlight: While prior works can capture the global structure of input shapes, they may fail to preserve fine-grained details (e.g., filling a toaster's slots), which are critical for retaining the functionality of objects in interactive environments. In this paper, we propose a novel method that addresses the limitations of existing approaches from three perspectives: (a) we introduce a novel collision-aware concavity metric that examines the distance between a shape and its convex hull from both the boundary and the interior. | Xinyue Wei; Minghua Liu; Zhan Ling; Hao Su |
| 2022 | 15 | Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations (IF: 4). Highlight: We present an adaptive deep representation of volumetric fields of 3D shapes and an efficient approach to learn this deep representation for high-quality 3D shape reconstruction and auto-encoding. | Peng-Shuai Wang; Yang Liu; Xin Tong |
| 2021 | 1 | AMP: Adversarial Motion Priors for Stylized Physics-based Character Control IF:8 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: In this work, we propose to obviate the need to manually design imitation objectives and mechanisms for motion selection by utilizing a fully automated approach based on adversarial imitation learning. |
Xue Bin Peng; Ze Ma; Pieter Abbeel; Sergey Levine; Angjoo Kanazawa; |
| 2021 | 2 | Acorn: Adaptive Coordinate Networks for Neural Scene Representation IF:7 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest. |
JULIEN N. P. MARTEL et. al. |
| 2021 | 3 | Designing An Encoder for StyleGAN Image Manipulation IF:8 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: In this paper, we carefully study the latent space of StyleGAN, the state-of-the-art unconditional generator. |
Omer Tov; Yuval Alaluf; Yotam Nitzan; Or Patashnik; Daniel Cohen-Or; |
| 2021 | 4 | Learning An Animatable Detailed 3D Face Model from In-the-wild Images IF:7 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: We present the first approach that regresses 3D face shape and animatable details that are specific to an individual but change with expression. |
Yao Feng; Haiwen Feng; Michael J. Black; Timo Bolkart; |
| 2021 | 5 | Mixture of Volumetric Primitives for Efficient Neural Rendering IF:6 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: We present Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering, e.g., point-based or mesh-based methods. |
STEPHEN LOMBARDI et. al. |
| 2021 | 6 | Editable Free-viewpoint Video Using A Layered Neural Representation IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: To fill this gap, in this paper, we propose the first approach for editable free-viewpoint video generation for large-scale view-dependent dynamic scenes using only 16 cameras. |
JIAKAI ZHANG et. al. |
| 2021 | 7 | Real-time Deep Dynamic Characters IF:5 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery. |
MARC HABERMANN et. al. |
| 2021 | 8 | Only A Matter of Style: Age Transformation Using A Style-based Regression Model IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: In this work, we present an image-to-image translation method that learns to directly encode real facial images into the latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a given aging shift. |
Yuval Alaluf; Or Patashnik; Daniel Cohen-Or; |
| 2021 | 9 | Codimensional Incremental Potential Contact IF:4 Related Papers Related Patents Related Grants Related Venues Related Experts View Save Highlight: Extending the IPC model to thin structures poses new challenges in computing strain, modeling thickness and determining collisions. To address these challenges we propose three corresponding contributions. |
Minchen Li; Danny M. Kaufman; Chenfanfu Jiang; |
| 2021 | 10 | Fusion 360 Gallery: A Dataset and Environment for Programmatic CAD Construction from Human Design Sequences IF:4 Highlight: In this paper we present the Fusion 360 Gallery, consisting of a simple language with just the sketch and extrude modeling operations, and a dataset of 8,625 human design sequences expressed in this language. |
KARL D. D. WILLIS et al. |
| 2021 | 11 | Neural Monocular 3D Human Motion Capture with Physical Awareness IF:4 Highlight: We present a new trainable system for physically plausible markerless 3D human motion capture, which achieves state-of-the-art results in a broad range of challenging scenarios. |
Soshi Shimada; Vladislav Golyanik; Weipeng Xu; Patrick Pérez; Christian Theobalt; |
| 2021 | 12 | End-to-end Complex Lens Design with Differentiable Ray Tracing IF:4 Highlight: To overcome these challenges, we propose a general end-to-end complex lens design framework enabled by a differentiable ray tracing image formation model. |
Qilin Sun; Congli Wang; Qiang Fu; Xiong Dun; Wolfgang Heidrich; |
| 2021 | 13 | Driving-signal Aware Full-body Avatars IF:4 Highlight: We present a learning-based method for building driving-signal aware full-body avatars. |
TIMUR BAGAUTDINOV et al. |
| 2021 | 14 | GPU-based Simulation of Cloth Wrinkles at Submillimeter Levels IF:3 Highlight: In this paper, we study physics-based cloth simulation in a very high resolution setting, presumably at submillimeter levels with millions of vertices, to meet perceptual precision of our human eyes. |
Huamin Wang; |
| 2021 | 15 | Neural Animation Layering for Synthesizing Martial Arts Movements IF:4 Highlight: In this paper, we propose a deep learning framework to produce a large variety of martial arts movements in a controllable manner from raw motion capture data. |
Sebastian Starke; Yiwei Zhao; Fabio Zinno; Taku Komura; |
| 2020 | 1 | XNect: Real-time Multi-person 3D Motion Capture With A Single RGB Camera IF:7 Highlight: We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. |
DUSHYANT MEHTA et al. |
| 2020 | 2 | Consistent Video Depth Estimation IF:6 Highlight: We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video. |
Xuan Luo; Jia-Bin Huang; Richard Szeliski; Kevin Matzen; Johannes Kopf; |
| 2020 | 3 | Robust Motion In-betweening IF:6 Highlight: In this work we present a novel, robust transition generation technique that can serve as a new tool for 3D animators, based on adversarial recurrent neural networks. |
Félix G. Harvey; Mike Yurick; Derek Nowrouzezahrai; Christopher Pal; |
| 2020 | 4 | Immersive Light Field Video With A Layered Mesh Representation IF:6 Highlight: We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. |
MICHAEL BROXTON et al. |
| 2020 | 5 | Character Controllers Using Motion VAEs IF:5 Highlight: We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. |
Hung Yu Ling; Fabio Zinno; George Cheng; Michiel Van De Panne; |
| 2020 | 6 | Learning Temporal Coherence Via Self-supervision For GAN-based Video Generation IF:5 Highlight: In contrast, we focus on improving learning objectives and propose a temporally self-supervised algorithm. |
Mengyu Chu; You Xie; Jonas Mayer; Laura Leal-Taixé; Nils Thuerey; |
| 2020 | 7 | Skeleton-aware Networks For Deep Motion Retargeting IF:5 Highlight: We introduce a novel deep learning framework for data-driven motion retargeting between skeletons, which may have different structure, yet corresponding to homeomorphic graphs. |
KFIR ABERMAN et al. |
| 2020 | 8 | Local Motion Phases For Learning Multi-contact Character Movements IF:5 Highlight: In this paper, we propose a novel framework to learn fast and dynamic character interactions that involve multiple contacts between the body and an object, another character and the environment, from a rich, unstructured motion capture database. |
Sebastian Starke; Yiwei Zhao; Taku Komura; Kazi Zaman; |
| 2020 | 9 | Fast Tetrahedral Meshing In The Wild IF:5 Highlight: We propose a new tetrahedral meshing method, fTetWild, to convert triangle soups into high-quality tetrahedral meshes. |
Yixin Hu; Teseo Schneider; Bolun Wang; Denis Zorin; Daniele Panozzo; |
| 2020 | 10 | Spatiotemporal Reservoir Resampling For Real-time Ray Tracing With Dynamic Direct Lighting IF:5 Highlight: We introduce a new algorithm, ReSTIR, that renders such lighting interactively, at high quality, and without needing to maintain complex data structures. |
BENEDIKT BITTERLI et al. |
| 2020 | 11 | Unpaired Motion Style Transfer From Video To Animation IF:4 Highlight: In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training. |
Kfir Aberman; Yijia Weng; Dani Lischinski; Daniel Cohen-Or; Baoquan Chen; |
| 2020 | 12 | Single Image HDR Reconstruction Using A CNN With Masked Features And Perceptual Loss IF:4 Highlight: In this paper, we present a novel learning-based approach to reconstruct an HDR image by recovering the saturated pixels of an input LDR image in a visually pleasing way. |
Marcel Santana Santos; Tsang Ing Ren; Nima Khademi Kalantari; |
| 2020 | 13 | A Scalable Approach To Control Diverse Behaviors For Physically Simulated Characters IF:4 Highlight: In this paper, we develop a technique for learning controllers for a large set of heterogeneous behaviors. |
Jungdam Won; Deepak Gopinath; Jessica Hodgins; |
| 2020 | 14 | Path-space Differentiable Rendering IF:4 Highlight: In this paper, we show how path integrals can be differentiated with respect to arbitrary differentiable changes of a scene. |
Cheng Zhang; Bailey Miller; Kai Yan; Ioannis Gkioulekas; Shuang Zhao; |
| 2020 | 15 | Learned Motion Matching IF:4 Highlight: In this paper we present a learned alternative to the Motion Matching algorithm which retains the positive properties of Motion Matching but additionally achieves the scalability of neural-network-based generative models. |
Daniel Holden; Oussama Kanoun; Maksym Perepichka; Tiberiu Popa; |
| 2019 | 1 | Deferred Neural Rendering: Image Synthesis Using Neural Textures IF:8 Highlight: In this work, we explore the use of imperfect 3D content, for instance, obtained from photometric reconstructions with noisy and incomplete surface geometry, while still aiming to produce photo-realistic (re-)renderings. |
Justus Thies; Michael Zollhöfer; Matthias Nießner; |
| 2019 | 2 | Local Light Field Fusion: Practical View Synthesis With Prescriptive Sampling Guidelines IF:8 Highlight: We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration. |
BEN MILDENHALL et al. |
| 2019 | 3 | PlanIT: Planning And Instantiating Indoor Scenes With Relation Graph And Spatial Prior Networks IF:7 Highlight: We present a new framework for interior scene synthesis that combines a high-level relation graph representation with spatial prior neural networks. |
KAI WANG et al. |
| 2019 | 4 | Semantic Photo Manipulation With A Generative Image Prior IF:6 Highlight: In this paper, we address these issues by adapting the image prior learned by GANs to image statistics of an individual image. |
DAVID BAU et al. |
| 2019 | 5 | MeshCNN: A Network With An Edge IF:6 Highlight: In this paper, we utilize the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes. |
RANA HANOCKA et al. |
| 2019 | 6 | Single Image Portrait Relighting IF:6 Highlight: To this end, we present a system for portrait relighting: a neural network that takes as input a single RGB image of a portrait taken with a standard cellphone camera in an unconstrained environment, and from that image produces a relit image of that subject as though it were illuminated according to any provided environment map. |
TIANCHENG SUN et al. |
| 2019 | 7 | Text-based Editing Of Talking-head Video IF:6 Highlight: We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e., no jump cuts). |
OHAD FRIED et al. |
| 2019 | 8 | Learning To Optimize Halide With Tree Search And Random Programs IF:5 Highlight: We present a new algorithm to automatically schedule Halide programs for high-performance image processing and deep learning. |
ANDREW ADAMS et al. |
| 2019 | 9 | Handheld Multi-frame Super-resolution IF:6 Highlight: In this paper, we supplant the use of traditional demosaicing in single-frame and burst photography pipelines with a multiframe super-resolution algorithm that creates a complete RGB image directly from a burst of CFA raw images. |
BARTLOMIEJ WRONSKI et al. |
| 2019 | 10 | Scalable Muscle-actuated Human Simulation And Control IF:5 Highlight: This work aims to build a comprehensive musculoskeletal model and its control system that reproduces realistic human movements driven by muscle contraction dynamics. |
Seunghwan Lee; Moonseok Park; Kyoungmin Lee; Jehee Lee; |
| 2019 | 11 | Content-aware Generative Modeling Of Graphic Design Layouts IF:5 Highlight: In this paper, we study the problem of content-aware graphic design layout generation. To train our model, we build a large-scale magazine layout dataset with fine-grained layout annotations and keyword labeling. |
Xinru Zheng; Xiaotian Qiao; Ying Cao; Rynson W. H. Lau; |
| 2019 | 12 | Deep Inverse Rendering For High-resolution SVBRDF Estimation From An Arbitrary Number Of Images IF:4 Highlight: In this paper we present a unified deep inverse rendering framework for estimating the spatially-varying appearance properties of a planar exemplar from an arbitrary number of input photographs, ranging from just a single photograph to many photographs. |
DUAN GAO et al. |
| 2019 | 13 | Real-time Pose And Shape Reconstruction Of Two Interacting Hands With A Single Depth Camera IF:4 Highlight: We present a novel method for real-time pose and shape reconstruction of two strongly interacting hands. |
FRANZISKA MUELLER et al. |
| 2019 | 14 | Interactive Hand Pose Estimation Using A Stretch-sensing Soft Glove IF:4 Highlight: We propose a stretch-sensing soft glove to interactively capture hand poses with high accuracy and without requiring an external optical setup. |
Oliver Glauser; Shihao Wu; Daniele Panozzo; Otmar Hilliges; Olga Sorkine-Hornung; |
| 2019 | 15 | A Symmetric Objective Function For ICP IF:4 Highlight: We introduce a new symmetrized objective function that achieves the simplicity and computational efficiency of point-to-plane optimization, while yielding improved convergence speed and a wider convergence basin. |
Szymon Rusinkiewicz; |