Below is the program for Tuesday (MAM), Wednesday (EGSR), Thursday (EGSR), and Friday (EGSR). The allotted time for MAM presentations is 15 minutes, plus 5 minutes for discussion and setup; EGSR presentations take 18 minutes, plus 5 minutes for discussion and setup. Additional information for presenters is available on the venue page.
Please find the titles and summaries of the keynotes below the program.
Note that High-Performance Graphics takes place from July 8 to July 10, at the same location. Please consult the HPG website for the HPG program.
Tuesday (MAM)
08:30 – 12:00 Registration and information desk open
10:30 – 11:00 Coffee
11:00 – 11:10 Welcome and Introduction (H. Rushmeier and R. Klein)
11:10 – 12:00 Session: Practical Models
Spectral Rendering with the Bounded MESE and sRGB Data
Fresnel Equations Considered Harmful
12:00 – 13:30 Lunch (Resto’ U Paul Appell)
13:30 – 15:30 Session: Models, Fitting and Measurement
Rendering transparent materials with a complex refractive index: semi-conductor and conductor thin layers
Comparative Study of Layered Material Models
Estimating Homogeneous Data-driven BRDF Parameters from a Reflectance Map under Known Natural Lighting
What is the Reddening Effect and does it really exist?
A New Material Database
15:30 – 16:00 Coffee Break
16:00 – 17:20 Session: Perception, Neural Methods and Research Needs
On Visual Attractiveness of Anisotropic Effect Coatings
Discussion: Research and questions in perception of materials
Neural Appearance Synthesis and Transfer
Discussion: Research and questions in neural methods for material acquisition
17:20 – 17:30 Conclusion (H. Rushmeier and R. Klein)
18:00 – 20:00 Reception (Atrium building) – For MAM attendees only
Wednesday (EGSR)
08:30 – 16:00 Registration and information desk open
11:00 – 12:00 Keynote: Jaakko Lehtinen, Aalto University / NVIDIA – Shared with HPG
12:00 – 13:30 Lunch (Resto’ U Paul Appell)
13:30 – 13:45 EGSR Opening Ceremony
13:45 – 15:15 Paper Session #1: Materials & Reflectance (chair Holly Rushmeier)
Flexible SVBRDF Capture with a Multi-Image Deep Network
On-Site Example-Based Material Appearance Acquisition
Glint Rendering Based on a Multiple-Scattering Patch BRDF
Microfacet Model Regularization for Robust Light Transport
15:15 – 15:45 Coffee Break
15:45 – 17:00 Industry Track Session (chairs T. Boubekeur and P. Sen)
Implementing One-Click Caustics in Corona Renderer
De-lighting a high-resolution picture for material acquisition
The challenges of releasing the Moana Island Scene
Presentation of the Academy Software Foundation
18:00 – 20:00 Boat tour – Shared with HPG (Pier Batorama)
20:00 – 00:00 Conference Dinner – Shared with HPG (Aubette)
Thursday (EGSR)
08:30 – 12:00 Registration and information desk open
09:00 – 10:30 Paper Session #2: High Performance Rendering (chair Alexander Wilkie)
Ray Classification for Accelerated BVH Traversal
Scalable Virtual Ray Lights Rendering for Participating Media
Real-Time Hybrid Hair Rendering
Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data
10:30 – 11:00 Coffee Break
11:00 – 12:00 Keynote: Marcos Fajardo, Autodesk / Solid Angle
12:00 – 13:30 Lunch (Resto’ U Paul Appell)
13:30 – 15:00 Paper Session #3: Spectral Effects (chair Pascal Barla)
Real-time Image-based Lighting of Microfacet BRDFs with Varying Iridescence
Wide Gamut Spectral Upsampling with Fluorescence
Analytic Spectral Integration of Birefringence-Induced Iridescence
Spectral Primary Decomposition for Rendering with sRGB Reflectance
15:00 – 15:30 Coffee Break
15:30 – 17:30 Paper Session #4: Light Transport (chair George Drettakis)
Quantifying the Error of Light Transport Algorithms
Adaptive BRDF-Aware Multiple Importance Sampling of Many Lights
Progressive Transient Photon Beams
Adaptive Multi-View Path Tracing
17:30 – 18:30 EGSR Townhall Meeting
19:00 – 23:00 Reception – Sponsored by Activision (Le Jardin de l’Orangerie)
Friday (EGSR)
08:30 – 12:00 Registration and information desk open
09:00 – 10:30 Paper Session #5: Sampling (chair Laurent Belcour)
Orthogonal Array Sampling for Monte Carlo Rendering
Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames
Fourier Analysis of Correlated Monte Carlo Importance Sampling
Combining Point and Line Samples for Direct Illumination
10:30 – 11:00 Coffee Break
11:00 – 12:00 Keynote: Ali Eslami, Google DeepMind
12:00 – 13:30 Lunch (Resto’ U Paul Appell)
13:30 – 15:00 Paper Session #6: Interactive & Real-Time Rendering (chair Lingqi Yan)
Impulse Responses for Precomputing Light from Volumetric Media
Tessellated Shading Streaming
Foveated Real-Time Path Tracing in Visual-Polar Space
Global Illumination Shadow Layers
15:00 – 15:30 Coffee Break
15:30 – 17:00 Paper Session #7: Deep Learning (chair Wojciech Jarosz)
Learned Fitting of Spatially Varying BRDFs
Puppet Dubbing
Deep-learning the Latent Space of Light Transport
17:00 – 17:30 EGSR Awards & Closing Remarks
Jaakko Lehtinen, Aalto University & NVIDIA
Title: Why learn something you already know?
Summary: While computer graphics has many faces, a central one is that it enables the creation of photorealistic pictures by simulating light propagation, motion, shape, appearance, and so on. In this talk, I’ll argue that this ability puts graphics research in a unique position to make fundamental contributions to machine learning and AI while solving its own longstanding problems.
The majority of modern high-performing machine learning models are not particularly interpretable: you cannot, say, interrogate an image-generating Generative Adversarial Network (GAN) to truly tease apart shape, appearance, lighting, and motion, or directly instruct an image classifier to pay attention to shape instead of texture. Yet reasoning in such terms is the bread and butter of graphics algorithms! I argue that tightly combining the power of modern machine learning models with sophisticated graphics simulators will enable us to push learning beyond pixels, into the physically meaningful, interpretable constituents of the world, all tied together by the well-understood physical processes through which they form pictures. Such “simulator-based inference” or “analysis by synthesis” is, of course, seeing increasing interest in the research community, but I’ll try to convince you that what we’re seeing at the moment is just a small sample of things to come.
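For readers curious what “analysis by synthesis” can look like in code, here is a minimal sketch (not from the talk; the toy Lambertian renderer and all parameter names are illustrative) of recovering interpretable scene parameters by gradient descent through a differentiable simulator, written in PyTorch:

```python
# Hypothetical toy example: fit an albedo and a light direction so that a tiny
# differentiable renderer reproduces an observed image.
import torch

def render(albedo, light_dir, normals):
    # Lambertian shading: albedo times the clamped cosine between normal and light.
    l = light_dir / light_dir.norm()
    return albedo * torch.clamp((normals * l).sum(-1), min=0.0)

# A fixed field of per-pixel unit normals stands in for scene geometry.
normals = torch.nn.functional.normalize(torch.randn(64, 64, 3), dim=-1)

# The "observed" image, rendered with ground-truth parameters.
target = render(torch.tensor(0.8), torch.tensor([0.0, 0.5, 1.0]), normals)

# Analysis by synthesis: optimize interpretable parameters until re-rendering
# matches the observation.
albedo = torch.tensor(0.2, requires_grad=True)
light_dir = torch.tensor([1.0, 0.0, 0.2], requires_grad=True)
opt = torch.optim.Adam([albedo, light_dir], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((render(albedo, light_dir, normals) - target) ** 2).mean()
    loss.backward()
    opt.step()

print(f"albedo ~ {albedo.item():.2f}")  # should approach 0.8
```

The point of the toy is that the quantities being learned are an albedo and a light direction, i.e. physically meaningful constituents of the scene rather than raw pixels.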
Bio: Jaakko Lehtinen is a tenured associate professor at Aalto University, and a research scientist at NVIDIA Research. Prior to that, he spent a few years as a postdoc with Frédo Durand at MIT. He works on computer graphics and computer vision, in particular realistic image synthesis, appearance acquisition, and procedural animation.
Marcos Fajardo, Autodesk / Solid Angle
Title: Tales from Production Rendering
Summary: To be announced.
Bio: Marcos is the founder of Solid Angle, where he led the development of the Arnold path-tracing renderer, which he has worked on for 20 years. Prior to that, he was a visiting software architect at Sony Pictures Imageworks, a visiting researcher at USC’s Institute for Creative Technologies under the supervision of Dr. Paul Debevec, and a consultant at various CG studios around the world. In 2017, Marcos and his team received a Scientific and Engineering Academy Award for their work on Arnold. More recently, he co-produced the feature film “Despido Procedente” and the short film “La Noria”. He is a frequent speaker at SIGGRAPH, FMX, and EGSR. His favorite sushi is Hokkaido uni.
Ali Eslami, Google DeepMind
Title: Neural Scene Representation and Rendering
Summary: In this talk I will introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors, and to render those scenes from any new viewpoint. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning and rendering without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand and imagine the world around them.
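As a rough illustration of the interface described above, here is a heavily simplified sketch in PyTorch. It is an assumption-laden stand-in, not the published model: GQN uses convolutional encoders and a recurrent latent-variable generator, whereas this toy uses plain MLPs on flattened images, and every name and layer size here is illustrative. The one structural idea carried over is summing per-observation codes into a single scene representation.

```python
# Hypothetical, heavily simplified sketch of the GQN interface.
import torch
import torch.nn as nn

class ToyGQN(nn.Module):
    def __init__(self, img_dim=64, view_dim=7, rep_dim=32):
        super().__init__()
        # Representation network: encodes one (image, viewpoint) pair.
        self.encoder = nn.Sequential(
            nn.Linear(img_dim + view_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim))
        # Generator: predicts the image seen from a query viewpoint.
        self.generator = nn.Sequential(
            nn.Linear(rep_dim + view_dim, 64), nn.ReLU(), nn.Linear(64, img_dim))

    def forward(self, images, views, query_view):
        # Sum per-observation codes into one scene representation; summation
        # makes the representation order-invariant and lets the model accept
        # any number of observations.
        r = self.encoder(torch.cat([images, views], dim=-1)).sum(dim=1)
        return self.generator(torch.cat([r, query_view], dim=-1))

# Usage: a batch of 8 scenes, each observed from 3 viewpoints, queried from one new one.
model = ToyGQN()
images, views = torch.randn(8, 3, 64), torch.randn(8, 3, 7)
pred = model(images, views, torch.randn(8, 7))  # -> (8, 64) predicted pixels
```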
Bio: S. M. Ali Eslami is a staff research scientist at DeepMind. His research focuses on getting computers to learn generative models of images that produce not only good samples but also good explanations of their observations. Prior to this, he was a post-doctoral researcher at Microsoft Research in Cambridge. He did his PhD in the School of Informatics at the University of Edinburgh, during which he was also a visiting researcher in the Visual Geometry Group at the University of Oxford.