EGSR 2019 – Program

Strasbourg, France | July 10-12, 2019



Below is the program for Tuesday (MAM), Wednesday (EGSR), Thursday (EGSR) and Friday (EGSR). The allotted time for MAM presentations is 15 minutes plus 5 minutes for discussion and setup. EGSR presentations take 18 minutes plus 5 minutes for discussion and setup. Additional information for presenters is available on the venue page.

Please find the titles and summaries of the keynotes below the program.

Note that High-Performance Graphics takes place from July 8 to July 10, at the same location. Please consult the HPG website for the HPG program.

Tuesday, July 9 – MAM – Room AT9

08:30 – 12:00 Registration and information desk open

10:30 – 11:00 Coffee

11:00 – 11:10 Welcome and Introduction (H. Rushmeier and R. Klein)

11:10 – 12:00 Session: Practical Models

Spectral Rendering with the Bounded MESE and sRGB Data

C. Peters, S. Merzbach, J. Hanika and C. Dachsbacher

Fresnel Equations Considered Harmful

N. Hoffman

12:00 – 13:30 Lunch (Resto’ U Paul Appell)

13:30 – 15:30 Session: Models, Fitting and Measurement

Rendering transparent materials with a complex refractive index: semi-conductor and conductor thin layers

M. Gerardin, N. Holzschuch, and P. Martinetto

Comparative Study of Layered Material Models

M. Bati, R. Pacanowski, and P. Barla

Estimating Homogeneous Data-driven BRDF Parameters from a Reflectance Map under Known Natural Lighting

V. Cooper, J. Bieron, and P. Peers

What is the Reddening Effect and does it really exist?

O. Clausen, R. Marroquim, A. Fuhrmann and H. Weigand

A New Material Database

J. Dupuy, W. Jakob

15:30 – 16:00 Coffee break

16:00 – 17:20 Session: Perception, Neural Methods and Research Needs

On Visual Attractiveness of Anisotropic Effect Coatings

J. Filip and M. Kolafová

Discussion: Research and questions in perception of materials

H. Rushmeier

Neural Appearance Synthesis and Transfer

I. Mazlov, S. Merzbach, E. Trunz and R. Klein

Discussion: Research and questions in neural methods for material acquisition

V. Deschaintre

17:20 – 17:30 Conclusion (H. Rushmeier and R. Klein)

18:00 – 20:00 Reception (Atrium building) – For MAM attendees only

Wednesday, July 10 – EGSR – Room AT8

08:30 – 16:00 Registration and information desk open

11:00 – 12:00 Keynote: Jaakko Lehtinen, Aalto/Nvidia – Shared with HPG

12:00 – 13:30 Lunch (Resto’ U Paul Appell)

13:30 – 13:45 EGSR Opening Ceremony

13:45 – 15:15 Paper Session #1: Materials & Reflectance (chair Holly Rushmeier)

Flexible SVBRDF Capture with a Multi-Image Deep Network

Valentin Deschaintre, Miika Aittala, Frédo Durand, George Drettakis, Adrien Bousseau

On-Site Example-Based Material Appearance Acquisition

Yiming Lin, Pieter Peers, Abhijeet Ghosh

Glint Rendering Based on a Multiple-Scattering Patch BRDF

Xavier Chermain, Frédéric Claux, Stéphane Mérillou

Microfacet Model Regularization for Robust Light Transport

Johannes Jendersie, Thorsten Grosch

15:15 – 15:45 Coffee break

15:45 – 17:00 Industry Track Session (chairs T. Boubekeur and P. Sen)

Implementing One-Click Caustics in Corona Renderer

Martin Šik and Jaroslav Křivánek (Render Legion)

De-lighting a high-resolution picture for material acquisition

Rosalie Martin, Arthur Meyer, Davide Pesare


The challenges of releasing the Moana Island Scene

R. Tamstorf and H. Pritchett


Presentation of the Academy Software Foundation

Daniel Heckenberg (Animal Logic)

18:00 – 20:00 Boat tour – Shared with HPG (Pier Batorama)

20:00 – 00:00 Conference Dinner – Shared with HPG (Aubette)

Thursday, July 11 – EGSR – Room AT8

08:30 – 12:00 Registration and information desk open

09:00 – 10:30 Paper Session #2: High Performance Rendering (chair Alexander Wilkie)

Ray Classification for Accelerated BVH Traversal

Jakub Hendrich, Adam Pospíšil, Daniel Meister, Jiří Bittner

Scalable Virtual Ray Lights Rendering for Participating Media

Nicolas Vibert, Adrien Gruson, Heine Stokholm, Troels Mortensen, Wojciech Jarosz, Toshiya Hachisuka, Derek Nowrouzezahrai

Real-Time Hybrid Hair Rendering

Erik Sven Vasconcelos Jansson, Matthäus Chajdas, Jason Lacroix, Ingemar Ragnemalm

Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data

Jana Martschinke, Stefan Hartnagel, Benjamin Keinert, Klaus Engel, Marc Stamminger

10:30 – 11:00 Coffee Break

11:00 – 12:00 Keynote: Marcos Fajardo, Autodesk / Solid Angle

12:00 – 13:30 Lunch (Resto’ U Paul Appell)

13:30 – 15:00 Paper Session #3: Spectral Effects (chair Pascal Barla)

Real-time Image-based Lighting of Microfacet BRDFs with Varying Iridescence

Tom Kneiphof, Tim Golla, Reinhard Klein

Wide Gamut Spectral Upsampling with Fluorescence

Alisa Jung, Alexander Wilkie, Johannes Hanika, Wenzel Jakob, Carsten Dachsbacher

Analytic Spectral Integration of Birefringence-Induced Iridescence

Shlomi Steinberg

Spectral Primary Decomposition for Rendering with sRGB Reflectance

Ian Mallett, Cem Yuksel

15:00 – 15:30 Coffee Break

15:30 – 17:30 Paper Session #4: Light Transport (chair George Drettakis)

Quantifying the Error of Light Transport Algorithms

Adam Celarek, Wenzel Jakob, Michael Wimmer, Jaakko Lehtinen

Adaptive BRDF-Aware Multiple Importance Sampling of Many Lights

Yifan Liu, Kun Xu, Lingqi Yan

Progressive Transient Photon Beams

Julio Marco, Ibón Guillén, Wojciech Jarosz, Diego Gutierrez, Adrian Jarabo

Adaptive Multi-View Path Tracing

Basile Fraboni, Jean-Claude Iehl, Vincent Nivoliers

17:30 – 18:30 EGSR Townhall Meeting

19:00 – 23:00 Reception – Sponsored by Activision (Le Jardin de l’Orangerie)

Friday, July 12 – EGSR – Room AT8

08:30 – 12:00 Registration and information desk open

09:00 – 10:30 Paper Session #5: Sampling (chair Laurent Belcour)

Orthogonal Array Sampling for Monte Carlo Rendering

Wojciech Jarosz, Afnan Enayet, Andrew Kensler, Charlie Kilpatrick, Per Christensen

Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames

Eric Heitz, Laurent Belcour

Fourier Analysis of Correlated Monte Carlo Importance Sampling

Gurprit Singh, Kartic Subr, David Coeurjolly, Victor Ostromoukhov, Wojciech Jarosz

Combining Point and Line Samples for Direct Illumination

Katherine Salesin, Wojciech Jarosz

10:30 – 11:00 Coffee Break

11:00 – 12:00 Keynote: Ali Eslami, Google DeepMind

12:00 – 13:30 Lunch (Resto’ U Paul Appell)

13:30 – 15:00 Paper Session #6: Interactive & Real-Time Rendering (chair Lingqi Yan)

Impulse Responses for Precomputing Light from Volumetric Media

Adrien Dubouchet, Peter-Pike Sloan, Wojciech Jarosz, Derek Nowrouzezahrai

Tessellated Shading Streaming

Jozef Hladky, Hans-Peter Seidel, Markus Steinberger

Foveated Real-Time Path Tracing in Visual-Polar Space

Matias Koskela, Atro Lotvonen, Markku Mäkitalo, Petrus Kivi, Timo Viitanen, Pekka Jääskeläinen

Global Illumination Shadow Layers

François Desrichard, David Vanderhaeghe, Mathias Paulin

15:00 – 15:30 Coffee Break

15:30 – 17:00 Paper Session #7: Deep Learning (chair Wojciech Jarosz)

Learned Fitting of Spatially Varying BRDFs

Sebastian Merzbach, Max Hermann, Martin Rump, Reinhard Klein

Puppet Dubbing

Ohad Fried, Maneesh Agrawala

Deep-learning the Latent Space of Light Transport

Pedro Hermosilla, Sebastian Maisch, Tobias Ritschel, Timo Ropinski

17:00 – 17:30 EGSR Awards & Closing Remarks


Jaakko Lehtinen
Aalto University & NVIDIA

Title: Why learn something you already know?

Summary: While computer graphics has many faces, a central one is the fact that it enables creation of photorealistic pictures by simulating light propagation, motion, shape, appearance, and so on. In this talk, I’ll argue that this ability puts graphics research in a unique position to make fundamental contributions to machine learning and AI, while solving its own longstanding problems.

The majority of modern high-performing machine learning models are not particularly interpretable; you cannot, say, interrogate an image-generating Generative Adversarial Network (GAN) to truly tease apart shape, appearance, lighting, and motion, or directly instruct an image classifier to pay attention to shape instead of texture. Yet, reasoning in such terms is the bread and butter of graphics algorithms! I argue that tightly combining the power of modern machine learning models with sophisticated graphics simulators will enable us to push learning beyond pixels, into the physically meaningful, interpretable constituents of the world, which are all tied together by the fact that they come together under well-understood physical processes to form pictures. Of course, such “simulator-based inference” or “analysis by synthesis” is seeing increasing interest in the research community, but I’ll try to convince you that what we’re seeing at the moment is just a small sample of things to come.

Bio: Jaakko Lehtinen is a tenured associate professor at Aalto University, and a research scientist at NVIDIA Research. Prior to that, he spent a few years as a postdoc with Frédo Durand at MIT. He works on computer graphics and computer vision, in particular realistic image synthesis, appearance acquisition, and procedural animation.

Marcos Fajardo
Autodesk / Solid Angle

Title: Tales from Production Rendering

Summary: To be announced.

Bio: Marcos is the founder of Solid Angle, where he led the development of the Arnold path tracing renderer, which he has worked on for 20 years. Prior to that, he was a visiting Software Architect at Sony Pictures Imageworks, a visiting researcher at USC’s Institute for Creative Technologies under the supervision of Dr. Paul Debevec, and a consultant at various CG studios around the world. In 2017, Marcos and his team received a Scientific and Engineering Academy Award for their work on Arnold. More recently, he co-produced the feature film “Despido Procedente” and the short film “La Noria”. He is a frequent speaker at SIGGRAPH, FMX and EGSR. His favorite sushi is Hokkaido uni.

Ali Eslami
Google DeepMind

Title: Neural Scene Representation and Rendering

Summary: In this talk I will introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors, and to render those scenes from any new viewpoint. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning and rendering without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand and imagine the world around them.

Bio: S. M. Ali Eslami is a staff research scientist at DeepMind. His research is focused on getting computers to learn generative models of images that not only produce good samples but also good explanations for their observations. Prior to this, he was a post-doctoral researcher at Microsoft Research in Cambridge. He did his PhD in the School of Informatics at the University of Edinburgh, during which he was also a visiting researcher in the Visual Geometry Group at the University of Oxford.