Stochastic Screen-Space Reflections

This document presents a detailed overview of advancements in real-time rendering techniques, specifically focusing on stochastic screen-space reflections as discussed in the SIGGRAPH 2015 course. It outlines the motivation, requirements, previous work, core algorithm, and proposed improvements for rendering sharp and blurry reflections in real-time applications. The approach includes techniques such as ray allocation, hierarchical tracing, and importance sampling to enhance visual quality while maintaining performance efficiency.

SIGGRAPH 2015: Advances in Real-Time Rendering course
Stochastic Screen-Space Reflections
Tomasz Stachowiak, Electronic Arts / Frostbite
Yasin Uludag, Electronic Arts / DICE
Agenda: motivation and requirements; previous work; core algorithm; secret sauce.
Motivation and requirements
Our requirements: sharp and blurry reflections; contact hardening; specular elongation; per-pixel roughness and normal.
Previous work
Previous work — basic mirror-only SSR: shoot rays from the G-Buffer (normals just work); raymarch depth; return the color at the hit point (final lighting of the last frame, reprojected).
Previous work — Killzone Shadow Fall [Valient14]: no contact hardening.
Previous work — filtered importance sampling.
Our approach: BRDF importance sampling; hit-point reuse across neighbors; prefiltered samples, weighted by each pixel's BRDF.
Our approach — screenshots.
Algorithm breakdown: tile classification → ray allocation → cheap raytracing / HQ raytracing → color resolve → temporal filter.
Tile-based classification: split the screen into tiles; shoot tracer rays at 1/8th resolution; estimate tile importance (did we hit anything at all? what is the perceptual variance of the hit values?); allocate a ray count per tile, between a user-specified min and max.
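The tile heuristic above might look like this in outline; the exact blend of hit rate and perceptual variance is an assumption for illustration, not the production code:

```python
def tile_importance(tracer_hits):
    """Estimate tile importance from the 1/8th-resolution tracer rays.
    tracer_hits: list of (hit, luminance) pairs.  The weighting of hit
    rate vs. variance below is an illustrative assumption."""
    lums = [lum for hit, lum in tracer_hits if hit]
    if not lums:
        return 0.0  # hit nothing at all: tile gets minimum attention
    hit_rate = len(lums) / len(tracer_hits)
    mean = sum(lums) / len(lums)
    variance = sum((l - mean) ** 2 for l in lums) / len(lums)
    return min(1.0, hit_rate * (0.5 + variance))

def allocate_rays(importance, min_rays, max_rays):
    """Blend between the user-specified min and max ray counts."""
    return round(min_rays + importance * (max_rays - min_rays))
```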
Ray classification: classify each pixel based on roughness. Expensive rays: hierarchical raytracing, exact intersection. Cheap rays: simple linear march, may skip thin objects.
Hierarchical tracing — stackless ray walk of the min-Z pyramid:
level = 0;
while (level > -1)
    step through current cell;
    if (above Z plane) ++level;
    if (below Z plane) --level;
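The stackless walk can be sketched on a 1-D min-Z depth buffer; the names, the 1-D reduction, and the iteration guard here are simplifications of the slide's loop, not the production tracer:

```python
def build_min_pyramid(depth):
    """Min-Z mip pyramid over a 1-D depth buffer (power-of-two length)."""
    pyramid = [list(depth)]
    while len(pyramid[-1]) > 1:
        prev = pyramid[-1]
        pyramid.append([min(prev[2 * i], prev[2 * i + 1])
                        for i in range(len(prev) // 2)])
    return pyramid

def hiz_trace(pyramid, x, z, dz, max_iters=256):
    """Stackless walk: ray at position x, depth z, marching in +x with
    slope dz per unit x.  While the ray stays above (in front of) the
    cell's min-Z plane, skip the whole cell and coarsen (++level); when
    it dips below, refine (--level); crossing min-Z at the finest level
    is a hit.  Returns the hit cell index, or None on a miss."""
    level = 0
    n = len(pyramid[0])
    for _ in range(max_iters):
        if level < 0 or not (0.0 <= x < n):
            break
        cell = 1 << level
        idx = int(x) // cell
        boundary = (idx + 1) * cell                   # next cell edge in +x
        z_at_boundary = z + dz * (boundary - x)
        if z_at_boundary < pyramid[level][idx]:       # above the Z plane
            x, z = float(boundary), z_at_boundary     # step through the cell
            level = min(level + 1, len(pyramid) - 1)  # ++level
        elif level == 0:
            return idx                                # finest cell crossed: hit
        else:
            level -= 1                                # --level: refine in place
    return None
```

A ray sliding down over a step in the depth buffer lands in the cell where it first crosses the surface, after skipping the flat region in a few coarse steps.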
Importance sampling refresher — variance reduction for Monte Carlo: given a target function to integrate, find a probability distribution function (PDF) we can sample, as close as possible to the target; generate samples according to the PDF; calculate the mean of function / PDF.
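A tiny self-contained illustration of the refresher: estimating ∫₀¹ 3x² dx = 1 with a uniform sampler versus a PDF p(x) = 2x that roughly follows the target. Both estimators converge to 1; the importance-sampled one has a lower variance because f/p is closer to constant:

```python
import random

def mc_estimate(f, sample, pdf, n, rng):
    """Monte Carlo estimator: mean of f(x) / pdf(x) for x drawn from pdf."""
    return sum(f(x) / pdf(x) for x in (sample(rng) for _ in range(n))) / n

def f(x):
    return 3.0 * x * x  # target; integrates to exactly 1 on [0, 1]

rng = random.Random(42)

# Uniform sampling: pdf(x) = 1.
uniform_est = mc_estimate(f, lambda r: r.random(), lambda x: 1.0, 4096, rng)

# Importance sampling with pdf(x) = 2x, close to the target in shape;
# invert its CDF u = x^2 to generate samples: x = sqrt(u).
importance_est = mc_estimate(f, lambda r: r.random() ** 0.5,
                             lambda x: 2.0 * x, 4096, rng)
```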
BRDF importance sampling: any BRDF works; we use GGX. Every ray is sacred, every ray is great — make them count! Low discrepancy: Halton sequence. Some rays go under the surface; we re-generate them. Could try the Distribution of Visible Normals [Heitz14].
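For reference, a sketch of the two ingredients named above — a Halton sequence and GGX NDF importance sampling in tangent space. The sampling formula is the commonly published one (cf. [Karis13]), not the talk's exact shader code:

```python
import math

def halton(index, base):
    """Low-discrepancy Halton sequence value in [0, 1)."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def sample_ggx_half_vector(u1, u2, alpha):
    """Importance-sample the GGX normal distribution: returns a unit
    half-vector around the +Z normal in tangent space; alpha is the
    GGX roughness parameter (small alpha -> near-mirror lobe)."""
    phi = 2.0 * math.pi * u1
    cos_theta = math.sqrt((1.0 - u2) / (1.0 + (alpha * alpha - 1.0) * u2))
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

Reflecting the view vector about the sampled half-vector yields the reflection ray; samples that end up under the surface are the ones the slide says get re-generated.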
Ray reuse: neighboring pixels shoot useful rays. Visibility might be different, but we assume it is the same and reuse their intersection results.
Ray reuse: weigh by the local BRDF and divide by the original PDF (returned by the ray-trace along with the hit point). Neighbors can have vastly different properties, causing spikes in the BRDF/PDF ratio — worse results than without reuse :(
1 ray, 1 resolve sample (half-resolution)
1 ray, 4 resolve samples (half-resolution)
Variance reduction — this is what we're solving: Lo(v) = ∫Ω fs(l, v) Li(l) (n · l) dl, where fs is the BRDF and Li the incident radiance, integrated over the hemisphere. We skip the emitted light (Le) in the formulation. Variance!
Multiply and divide by the same factor, pre-integrate one of them, and do the rest with Monte Carlo:
∫ fs(l, v) Li(l) (n · l) dl ≈ [∫ fs(l, v) (n · l) dl] × [Σk Li(lk) fs(lk, v) (n · lk) / p(lk)] / [Σk fs(lk, v) (n · lk) / p(lk)]
The same thing in simple English: BRDF-weighted image contributions, normalization of the BRDF weights, and a pre-integrated BRDF.
… and pseudocode:
result = 0.0
weightSum = 0.0
for pixel in neighborhood:
    weight = localBrdf(pixel.hit) / pixel.hitPdf
    result += color(pixel.hit) * weight
    weightSum += weight
result /= weightSum
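The pseudocode is easy to make runnable. Note how the weight-sum normalization makes a constant-colored neighborhood resolve to exactly that color, whatever the BRDF/PDF spikes do — that is the variance reduction at work (a sketch; `local_brdf` and the tuple layout are mine):

```python
def resolve(neighborhood, local_brdf):
    """Ratio-estimator resolve: reuse each neighbor's hit, weigh it by
    this pixel's BRDF toward the hit direction, divide by the PDF that
    generated the ray, and normalize by the total weight.
    neighborhood: list of (hit_dir, hit_color, hit_pdf) tuples."""
    result = 0.0
    weight_sum = 0.0
    for hit_dir, hit_color, hit_pdf in neighborhood:
        w = local_brdf(hit_dir) / hit_pdf
        result += hit_color * w
        weight_sum += w
    return result / weight_sum if weight_sum > 0.0 else 0.0
```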
1 ray, 1 resolve sample, no normalization (half-resolution)
1 ray, 4 resolve samples, no normalization (half-resolution)
1 ray, 4 resolve samples, with normalization (half-resolution)
4 rays, 1 resolve sample, with normalization (half-resolution)
1 ray, 1 resolve sample, with normalization (half-resolution)
1 ray, 4 resolve samples, with normalization (half-resolution)
4 rays, 4 resolve samples, with normalization (half-resolution)
1 ray, 4 resolve samples, with normalization and temporal filter (half-resolution)
Sparse raytracing: raytrace at a reduced resolution, then reuse multiple rays at full resolution. Every pixel gets a unique blend of rays, automatic from the BRDF; per-pixel normals and roughness are preserved. Weight normalization fixes gaps.
1 ray, 4 resolve samples, with normalization and temporal filter (half-res trace; half-res resolve)
1 ray, 4 resolve samples, with normalization and temporal filter (half-res trace; full-res resolve)
Temporal reprojection: reprojection along G-Buffer depth 'smears'. Add a reflection depth (averaged from the local rays) for proper reflection parallax.
Importance sampling bias: lots of noise comes from the BRDF tail. Shift samples toward the mirror direction — we still need accurate PDF values. Truncated distribution sampling:
float2 u = halton(sampleIdx);
u.x = lerp(u.x, 1.0, bias);
importanceSample(u);
This implies a different normalization constant — but our variance reduction re-normalizes!
Filtered importance sampling: pre-filter an image pyramid and estimate the footprint of a cone at the intersection — no actual cone tracing. The mip is determined by a log-function fit of roughness, distance to hit, and elongation.
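A hedged sketch of such a log-fit mip selection. The constants and the `roughness²` lobe-width proxy are illustrative assumptions, not the fitted function from the talk — only the shape matches the slide: a wider cone footprint (rougher surface, farther hit) selects a higher, blurrier mip:

```python
import math

def cone_footprint_mip(roughness, hit_distance, texel_size):
    """Map an estimated cone footprint at the intersection to a mip level
    of the prefiltered pyramid.  Illustrative constants, not the fit."""
    cone_tangent = roughness * roughness          # crude lobe-width proxy
    footprint = 2.0 * cone_tangent * hit_distance  # cone width at the hit
    return max(0.0, math.log2(footprint / texel_size + 1.0))
```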
Filter bias: counter the sampling bias with a filter bias (the same parameter), tuned for a similar look across the range. Improves performance: better spatial coherency, smaller mips.
Bias 0.0
Bias 0.7
Multi-pixel resolve: resolve four pixels at a time using the same rays. Four running color and weight sums mean four times the VGPRs, but two-three waves are enough in practice. The same four color-buffer UVs need different mips :( — find the min and max mips and interpolate the samples: two color samples instead of four :)
Mip anchor interpolation: 1 mip fetch vs. 4 mip fetches vs. 2 mip fetches + interpolation.
Performance — PS4 timings (Frostbite testbed), 1600 x 900, all pixels reflective, HQ rays for "Disney" roughness < 20%, bias 0.7. All passes use compute and can run async.

Resolve samples | Rays / half-res pixel | Effective samples / pixel | Tile classify | Ray alloc | Linear trace | Hi-Z trace | Resolve | Temporal | Total
4 | 1 | 4 | 0.16 ms | 0.24 ms | 0.20 ms | 0.37 ms | 0.81 ms | 0.30 ms | 2.19 ms
4 | 2 | 8 | 0.16 ms | 0.34 ms | 0.34 ms | 0.65 ms | 1.46 ms | 0.30 ms | 3.34 ms
1 | 4 | 4 | 0.16 ms | 1.06 ms | 0.61 ms | 0.91 ms | 0.91 ms | 0.33 ms | 4.41 ms
Video
Conclusion: all requirements fulfilled — accuracy, per-pixel normal, roughness, etc. Tiny bright highlights still result in noise. Can we detect and sample them more? Adaptive multiple importance sampling? Track variance and blur where it's high?
Thanks! Questions? tomasz.stachowiak@frostbite.com — @h3r2tic. By the way, we're hiring!
References
[Heitz14] Eric Heitz, Eugene D'Eon, "Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals". https://hal.inria.fr/hal-00996995
[Hermanns15] Lukas Hermanns, "Screen Space Cone Tracing for Glossy Reflections". http://publica.fraunhofer.de/documents/N-336466.html
[Karis13] Brian Karis, "Real Shading in Unreal Engine 4". http://blog.selfshadow.com/publications/s2013-shading-course/
[Karis14] Brian Karis, "High-Quality Temporal Supersampling". http://advances.realtimerendering.com/s2014/
[McGuire14] Morgan McGuire, Michael Mara, "Efficient GPU Screen-Space Ray Tracing". http://jcgt.org/published/0003/04/04/
[Pearce15] William Pearce, "Screen Space Glossy Reflections". http://roar11.com/2015/07/screen-space-glossy-reflections/
[Tokuyoshi15] Yusuke Tokuyoshi, "Specular Lobe-Aware Filtering and Upsampling for Interactive Indirect Illumination". http://www.jp.square-enix.com/info/library/
[Uludag14] Yasin Uludag, "Hi-Z Screen-Space Cone-Traced Reflections". GPU Pro 5.
[Valient14] Michal Valient, "Reflections and Volumetrics of Killzone Shadow Fall". http://advances.realtimerendering.com/s2014/
Bonus slides
Previous work — lobe-aware filtering [Tokuyoshi15]: no contact hardening :(
Previous work — Hi-Z Cone Tracing [Uludag14]: some reflections can't be traced with cones.
Overblurring at grazing angles: cones poorly fit specular lobes at grazing angles, where the lobes become anisotropic in shape. A cone fit to the wider (azimuthal) angle over-blurs; fit it to the polar angle instead — effectively, shrink the cone at grazing angles.
Filter shrinking due to elongation: we found a close fit in Mathematica, and also came up with an ad-hoc one that was close enough in testing:
specularConeTangent *= lerp(saturate(NdotV * 2.0), 1.0, sqrt(roughness));
Filter shrinking due to elongation: no filter vs. naive filter vs. adjusted filter.
Pre-integrated FG note: remember this? Multiply by FG after the temporal filter — we do it when applying SSR to the screen. Reduces smearing and noise.
Multi-pixel resolve jittering: ray reuse across 2x2 quads produces 2x2 noise, which makes temporal AA unhappy — 2x2 blocks look like features, not aliasing. Spread out the target pixels and jitter them temporally to hide the artifacts.
Variance reduction for ray reuse — initial idea: Multiple Importance Sampling, treating neighboring pixels as generation strategies. Accurate, unbiased results, but expensive: ALU and VGPR heavy.
Neighborhood clamping: similar to temporal AA, tuned for some smearing over noise — we can't kill all the lag anyway, since the reflection color is from the previous frame! Expand the color bounding box; tiled, loaded from LDS.
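The clamp itself can be sketched per color channel; the `expand` parameter stands in for the slide's expanded color bounding box (names are mine, not the talk's):

```python
def clamp_history(history, neighborhood, expand=0.0):
    """Temporal-AA-style neighborhood clamp: constrain the reprojected
    history value to the min/max box of the current-frame neighborhood,
    optionally expanded by a fraction of the box size to trade some
    smearing for less noise."""
    lo, hi = min(neighborhood), max(neighborhood)
    pad = (hi - lo) * expand
    return min(max(history, lo - pad), hi + pad)
```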
Common materials — SSR comparison.

Recommended

PPTX
Physically Based and Unified Volumetric Rendering in Frostbite
PPTX
Moving Frostbite to Physically Based Rendering
PPTX
Rendering Technologies from Crysis 3 (GDC 2013)
PDF
Screen Space Reflections in The Surge
PDF
The Rendering Technology of 'Lords of the Fallen' (Game Connection Europe 2014)
PPTX
Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
PPTX
Five Rendering Ideas from Battlefield 3 & Need For Speed: The Run
PDF
Graphics Gems from CryENGINE 3 (Siggraph 2013)
PDF
Siggraph2016 - The Devil is in the Details: idTech 666
PPTX
Shiny PC Graphics in Battlefield 3
PDF
Bindless Deferred Decals in The Surge 2
PPTX
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing
PDF
Lighting Shading by John Hable
PPT
A Bit More Deferred Cry Engine3
PPT
Stable SSAO in Battlefield 3 with Selective Temporal Filtering
PDF
Rendering AAA-Quality Characters of Project A1
PDF
Taking Killzone Shadow Fall Image Quality Into The Next Generation
PPTX
Optimizing the Graphics Pipeline with Compute, GDC 2016
PPTX
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
PPSX
Vertex Shader Tricks by Bill Bilodeau - AMD at GDC14
PPTX
A Certain Slant of Light - Past, Present and Future Challenges of Global Illu...
PPTX
Hable John Uncharted2 Hdr Lighting
PPT
Crysis Next-Gen Effects (GDC 2008)
PPT
Secrets of CryENGINE 3 Graphics Technology
PPT
Frostbite Rendering Architecture and Real-time Procedural Shading & Texturing...
 
PPTX
The Rendering Technology of Killzone 2
PDF
Introduction to Bidirectional Path Tracing (BDPT) & Implementation using Open...
PPTX
SPU-Based Deferred Shading in BATTLEFIELD 3 for Playstation 3
PPTX
Past, Present and Future Challenges of Global Illumination in Games
PDF
Penner pre-integrated skin rendering (siggraph 2011 advances in real-time r...
 

More Related Content

PPTX
Physically Based and Unified Volumetric Rendering in Frostbite
PPTX
Moving Frostbite to Physically Based Rendering
PPTX
Rendering Technologies from Crysis 3 (GDC 2013)
PDF
Screen Space Reflections in The Surge
PDF
The Rendering Technology of 'Lords of the Fallen' (Game Connection Europe 2014)
PPTX
Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
PPTX
Five Rendering Ideas from Battlefield 3 & Need For Speed: The Run
PDF
Graphics Gems from CryENGINE 3 (Siggraph 2013)
Physically Based and Unified Volumetric Rendering in Frostbite
Moving Frostbite to Physically Based Rendering
Rendering Technologies from Crysis 3 (GDC 2013)
Screen Space Reflections in The Surge
The Rendering Technology of 'Lords of the Fallen' (Game Connection Europe 2014)
Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite
Five Rendering Ideas from Battlefield 3 & Need For Speed: The Run
Graphics Gems from CryENGINE 3 (Siggraph 2013)

What's hot

PDF
Siggraph2016 - The Devil is in the Details: idTech 666
PPTX
Shiny PC Graphics in Battlefield 3
PDF
Bindless Deferred Decals in The Surge 2
PPTX
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing
PDF
Lighting Shading by John Hable
PPT
A Bit More Deferred Cry Engine3
PPT
Stable SSAO in Battlefield 3 with Selective Temporal Filtering
PDF
Rendering AAA-Quality Characters of Project A1
PDF
Taking Killzone Shadow Fall Image Quality Into The Next Generation
PPTX
Optimizing the Graphics Pipeline with Compute, GDC 2016
PPTX
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
PPSX
Vertex Shader Tricks by Bill Bilodeau - AMD at GDC14
PPTX
A Certain Slant of Light - Past, Present and Future Challenges of Global Illu...
PPTX
Hable John Uncharted2 Hdr Lighting
PPT
Crysis Next-Gen Effects (GDC 2008)
PPT
Secrets of CryENGINE 3 Graphics Technology
PPT
Frostbite Rendering Architecture and Real-time Procedural Shading & Texturing...
 
PPTX
The Rendering Technology of Killzone 2
PDF
Introduction to Bidirectional Path Tracing (BDPT) & Implementation using Open...
PPTX
SPU-Based Deferred Shading in BATTLEFIELD 3 for Playstation 3
Siggraph2016 - The Devil is in the Details: idTech 666
Shiny PC Graphics in Battlefield 3
Bindless Deferred Decals in The Surge 2
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing
Lighting Shading by John Hable
A Bit More Deferred Cry Engine3
Stable SSAO in Battlefield 3 with Selective Temporal Filtering
Rendering AAA-Quality Characters of Project A1
Taking Killzone Shadow Fall Image Quality Into The Next Generation
Optimizing the Graphics Pipeline with Compute, GDC 2016
4K Checkerboard in Battlefield 1 and Mass Effect Andromeda
Vertex Shader Tricks by Bill Bilodeau - AMD at GDC14
A Certain Slant of Light - Past, Present and Future Challenges of Global Illu...
Hable John Uncharted2 Hdr Lighting
Crysis Next-Gen Effects (GDC 2008)
Secrets of CryENGINE 3 Graphics Technology
Frostbite Rendering Architecture and Real-time Procedural Shading & Texturing...
 
The Rendering Technology of Killzone 2
Introduction to Bidirectional Path Tracing (BDPT) & Implementation using Open...
SPU-Based Deferred Shading in BATTLEFIELD 3 for Playstation 3

Similar to Stochastic Screen-Space Reflections

PPTX
Past, Present and Future Challenges of Global Illumination in Games
PDF
Penner pre-integrated skin rendering (siggraph 2011 advances in real-time r...
 
PPTX
The Rendering Pipeline - Challenges & Next Steps
 
PDF
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing
PDF
Practical Spherical Harmonics Based PRT Methods
PPT
5 Major Challenges in Interactive Rendering
PPT
GRPHICS08 - Raytracing and Radiosity
PDF
Ray Tracing.pdf
PPTX
Rendering Algorithms.pptx
PDF
Rendering Techniques in Virtual Reality.pdf
PPTX
5 Major Challenges in Real-time Rendering (2012)
PDF
Ray-Tracing an Introduction - An Overview
PDF
The technology behind_the_elemental_demo_16x9-1248544805
PDF
Foveated Ray Tracing for VR on Multiple GPUs
PDF
Hello Ray-Tracing - What's it all about? Introduction to Ray-Tracing and why ...
PDF
Screen space reflections on Epsilon Engine
PDF
【Unite Tokyo 2019】Unityでレイトレーシングしよう!レイトレーシング実装と最適化の解説
PPTX
Unity AMD FSR - SIGGRAPH 2021.pptx
PPTX
Penn graphics
PPT
november29.ppt
Past, Present and Future Challenges of Global Illumination in Games
Penner pre-integrated skin rendering (siggraph 2011 advances in real-time r...
 
The Rendering Pipeline - Challenges & Next Steps
 
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing
Practical Spherical Harmonics Based PRT Methods
5 Major Challenges in Interactive Rendering
GRPHICS08 - Raytracing and Radiosity
Ray Tracing.pdf
Rendering Algorithms.pptx
Rendering Techniques in Virtual Reality.pdf
5 Major Challenges in Real-time Rendering (2012)
Ray-Tracing an Introduction - An Overview
The technology behind_the_elemental_demo_16x9-1248544805
Foveated Ray Tracing for VR on Multiple GPUs
Hello Ray-Tracing - What's it all about? Introduction to Ray-Tracing and why ...
Screen space reflections on Epsilon Engine
【Unite Tokyo 2019】Unityでレイトレーシングしよう!レイトレーシング実装と最適化の解説
Unity AMD FSR - SIGGRAPH 2021.pptx
Penn graphics
november29.ppt

More from Electronic Arts / DICE

PDF
DD18 - SEED - Raytracing in Hybrid Real-Time Rendering
PPTX
High Dynamic Range color grading and display in Frostbite
PPTX
Lighting the City of Glass
PPTX
Frostbite on Mobile
PPTX
FrameGraph: Extensible Rendering Architecture in Frostbite
PPTX
HPG 2018 - Game Ray Tracing: State-of-the-Art and Open Problems
PPTX
Photogrammetry and Star Wars Battlefront
PDF
Syysgraph 2018 - Modern Graphics Abstractions & Real-Time Ray Tracing
PPT
SIGGRAPH 2010 - Style and Gameplay in the Mirror's Edge
PPTX
Khronos Munich 2018 - Halcyon and Vulkan
PDF
SEED - Halcyon Architecture
PPTX
Future Directions for Compute-for-Graphics
PPTX
Rendering Battlefield 4 with Mantle
PPTX
SIGGRAPH 2018 - PICA PICA and NVIDIA Turing
PPTX
Shiny Pixels and Beyond: Real-Time Raytracing at SEED
PPTX
Mantle for Developers
PPTX
CEDEC 2018 - Functional Symbiosis of Art Direction and Proceduralism
PDF
EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Le...
PDF
Creativity of Rules and Patterns: Designing Procedural Systems
PPTX
GDC2019 - SEED - Towards Deep Generative Models in Game Development
DD18 - SEED - Raytracing in Hybrid Real-Time Rendering
High Dynamic Range color grading and display in Frostbite
Lighting the City of Glass
Frostbite on Mobile
FrameGraph: Extensible Rendering Architecture in Frostbite
HPG 2018 - Game Ray Tracing: State-of-the-Art and Open Problems
Photogrammetry and Star Wars Battlefront
Syysgraph 2018 - Modern Graphics Abstractions & Real-Time Ray Tracing
SIGGRAPH 2010 - Style and Gameplay in the Mirror's Edge
Khronos Munich 2018 - Halcyon and Vulkan
SEED - Halcyon Architecture
Future Directions for Compute-for-Graphics
Rendering Battlefield 4 with Mantle
SIGGRAPH 2018 - PICA PICA and NVIDIA Turing
Shiny Pixels and Beyond: Real-Time Raytracing at SEED
Mantle for Developers
CEDEC 2018 - Functional Symbiosis of Art Direction and Proceduralism
EPC 2018 - SEED - Exploring The Collaboration Between Proceduralism & Deep Le...
Creativity of Rules and Patterns: Designing Procedural Systems
GDC2019 - SEED - Towards Deep Generative Models in Game Development

Recently uploaded

PDF
DSD-INT 2025 Delft3D FM Suite 2026.01 2D3D - New features + Improvements - Sp...
PPTX
Building AI agents in Java - Devoxx Belgium 2025
PPTX
The Sync Strikes Back: Tales from the MOPs Trenches
PDF
Smarter Testing Safer Systems Balancing AI and Oversight in Regulated Environ...
PDF
IAAM Meetup #7 chez Onepoint - Construire un Rag-as-a-service en production. ...
PDF
DSD-INT 2025 DevOps - Automated testing and delivery of Delft3D FM - van West...
PDF
DSD-INT 2025 Exploring different domain decomposition approaches for enhanced...
PDF
inSis suite - Laboratory Information Management System
PPTX
Building a RAG System for Customer Support - L1
PPTX
Future of Software Testing: AI-Powered Open Source Testing Tools
PDF
DSD-INT 2025 Modernizing Hydrodynamics in Large Flood Forecasting System - Mi...
PDF
Building Custom Insurance Applications With
PDF
DSD-INT 2025 Thermal and chemical plumes of sea cooling water from PEM platfo...
PDF
DSD-INT 2025 UK Coastal Flooding Incident Guide - Dam
PDF
DSD-INT 2025 Transport and Fate of Microplastics in Fluvial System (Rhine Riv...
PDF
ECFT Case Study: Digital Pilot Transportation System
PDF
SCORM Cloud: The 5 categories of content distribution
PDF
DSD-INT 2025 Validation of SFINCS on Historical River Floods at the Global Sc...
PDF
DSD-INT 2025 Flood Early Warning System for the Trans-African Hydrometeorolog...
PPTX
AI Clinic Management Tool for Dermatologists Making Skin Care Smarter, Simple...
DSD-INT 2025 Delft3D FM Suite 2026.01 2D3D - New features + Improvements - Sp...
Building AI agents in Java - Devoxx Belgium 2025
The Sync Strikes Back: Tales from the MOPs Trenches
Smarter Testing Safer Systems Balancing AI and Oversight in Regulated Environ...
IAAM Meetup #7 chez Onepoint - Construire un Rag-as-a-service en production. ...
DSD-INT 2025 DevOps - Automated testing and delivery of Delft3D FM - van West...
DSD-INT 2025 Exploring different domain decomposition approaches for enhanced...
inSis suite - Laboratory Information Management System
Building a RAG System for Customer Support - L1
Future of Software Testing: AI-Powered Open Source Testing Tools
DSD-INT 2025 Modernizing Hydrodynamics in Large Flood Forecasting System - Mi...
Building Custom Insurance Applications With
DSD-INT 2025 Thermal and chemical plumes of sea cooling water from PEM platfo...
DSD-INT 2025 UK Coastal Flooding Incident Guide - Dam
DSD-INT 2025 Transport and Fate of Microplastics in Fluvial System (Rhine Riv...
ECFT Case Study: Digital Pilot Transportation System
SCORM Cloud: The 5 categories of content distribution
DSD-INT 2025 Validation of SFINCS on Historical River Floods at the Global Sc...
DSD-INT 2025 Flood Early Warning System for the Trans-African Hydrometeorolog...
AI Clinic Management Tool for Dermatologists Making Skin Care Smarter, Simple...
In this document
Powered by AI

Overview of SIGGRAPH 2015 presentation on stochastic reflections by Tomasz Stachowiak and Yasin Uludag.

Discusses motivation for the research and requirements including sharp and blurry reflections, contact hardening, etc.

Summary of past efforts in screen-space reflections including limitations of techniques used in 'Killzone Shadow Fall'.

Covers importance of filtered importance sampling in previous research efforts.

Introduces the approach taken which includes BRDF importance sampling, handling hit points, and use of prefiltered samples.

Details on algorithm components like tile classification and ray classification, including pixel roughness.

Explanation of stackless ray walk in hierarchical tracing using a min-Z pyramid for efficient ray tracing.Refresher on importance sampling for variance reduction and its application to BRDF importance sampling.Discusses how rays from neighboring pixels can be reused and potential pitfalls of incorrect assumptions.

Methodology on how to conduct resolve sampling using normalization and temporal filters to enhance accuracy of reflections.

Performance evaluation based on PS4 timings, detailing efficiency in rendering with various configurations.

Conclusions drawn from the study and discussions on potential enhancements like adaptive sampling.

Thanking attendees and providing references for further reading on the subject matter.

Additional insights into previous works, improvements in techniques, and challenges faced in stochastic SSR.

Stochastic Screen-Space Reflections

  • 1.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseStochastic Screen-Space ReflectionsTOMASZ STACHOWIAK ELECTRONIC ARTS / FROSTBITEYASIN ULUDAG ELECTRONIC ARTS / DICE
  • 2.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseAgenda Motivation and requirements Previous work Core algorithm Secret sauce
  • 3.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseMotivation and requirements
  • 4.
    SIGGRAPH 2015: Advancesin Real-Time Rendering course
  • 5.
    SIGGRAPH 2015: Advancesin Real-Time Rendering course
  • 6.
    SIGGRAPH 2015: Advancesin Real-Time Rendering course
  • 7.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur requirements Sharp and blurry reflections Contact hardening Specular elongation Per-pixel roughness and normal
  • 8.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur requirements Sharp and blurry reflections Contact hardening Specular elongation Per-pixel roughness and normal
  • 9.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur requirements Sharp and blurry reflections Contact hardening Specular elongation Per-pixel roughness and normal
  • 10.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur requirements Sharp and blurry reflections Contact hardening Specular elongation Per-pixel roughness and normal
  • 11.
    SIGGRAPH 2015: Advancesin Real-Time Rendering coursePrevious work
  • 12.
    SIGGRAPH 2015: Advancesin Real-Time Rendering coursePrevious work Basic mirror-only SSR Shoot rays from G-Buffer Normals just work Raymarch depth Return color at hit point Final lighting of last frame Reprojected
  • 13.
    SIGGRAPH 2015: Advancesin Real-Time Rendering coursePrevious work Killzone Shadow Fall [Valient14]
  • 14.
    SIGGRAPH 2015: Advancesin Real-Time Rendering coursePrevious work Killzone Shadow Fall [Valient14]???
  • 15.
    SIGGRAPH 2015: Advancesin Real-Time Rendering coursePrevious work Killzone Shadow Fall [Valient14]No hardening
  • 16.
    SIGGRAPH 2015: Advancesin Real-Time Rendering coursePrevious work Filtered Importance Sampling
  • 17.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur approach
  • 18.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur approach BRDF importance sampling
  • 19.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur approach Hit point reuse across neighbors
  • 20.
    SIGGRAPH 2015: Advancesin Real-Time Rendering courseOur approach Prefiltered samples Weighed by each BRDF
  • 21.
    Our approach: screenshots
  • 22.-31. (result screenshots)
  • 32.
    Algorithm breakdown: tile classification -> ray allocation -> cheap raytracing / HQ raytracing -> color resolve -> temporal filter
  • 33.
    Tile-based classification: split the screen into tiles; shoot tracer rays at 1/8th resolution; estimate tile importance (hit anything at all? perceptual variance of hit values); allocate a ray count per tile between user-specified min and max
  • 34.
    Ray classification: classify each pixel based on roughness. Expensive rays: hierarchical raytracing, exact intersection. Cheap rays: simple linear march, may skip thin objects
  • 35.
    Hierarchical tracing: stackless ray walk of a min-Z pyramid
        level = 0;
        while (level > -1)
            step through current cell;
            if (above Z plane) ++level;
            if (below Z plane) --level;
  • 36.-41. (the same slide repeated as steps of the traversal animation)
  • 42.
    Importance sampling refresher: variance reduction for Monte Carlo. Given a target function to integrate, find a probability distribution function we can sample from, as close as possible to the target; generate samples according to the PDF; calculate the mean of function / PDF
  • 43.
    BRDF importance sampling: any BRDF (we use GGX). Every ray is sacred, every ray is great… make them count! Low-discrepancy Halton sequence. Some rays go under the surface; we re-generate them. Could try the Distribution of Visible Normals [Heitz14]
  • 44.
    Ray reuse: neighboring pixels shoot useful rays. Visibility might be different; we assume it’s the same. Reuse intersection results
  • 45.
    Ray reuse: weigh by the local BRDF, divide by the original PDF (returned by the ray-trace along with the hit point). Neighbors can have vastly different properties; spikes in the BRDF/PDF ratio give worse results than without reuse :(
  • 46.
    (example scene screenshot)
  • 47.
    1 ray, 1 resolve sample; half-resolution
  • 48.
    1 ray, 4 resolve samples; half-resolution
  • 49.
    Variance reduction: this is what we’re solving, where fs is the BRDF and Li is the incident radiance. We integrate over the hemisphere; we skip emitted light (Le) in the formulation. Variance!
  • 50.
    Multiply and divide by the same factor…
  • 51.
    … pre-integrate one of them…
  • 52.
    … and do the rest with Monte Carlo.
  • 53.
    The same thing in simple English: BRDF-weighted image contributions; normalization of BRDF weights; pre-integrated BRDF
  • 54.
    … and pseudocode:
        result = 0.0
        weightSum = 0.0
        for pixel in neighborhood:
            weight = localBrdf(pixel.hit) / pixel.hitPdf
            result += color(pixel.hit) * weight
            weightSum += weight
        result /= weightSum
  • 55.
    1 ray, 1 resolve sample; no normalization; half-resolution
  • 56.
    1 ray, 4 resolve samples; no normalization; half-resolution
  • 57.
    1 ray, 4 resolve samples; with normalization; half-resolution
  • 58.
    4 rays, 1 resolve sample; with normalization; half-resolution
  • 59.
    1 ray, 1 resolve sample; with normalization; half-resolution
  • 60.
    1 ray, 4 resolve samples; with normalization; half-resolution
  • 61.
    4 rays, 4 resolve samples; with normalization; half-resolution
  • 62.
    1 ray, 4 resolve samples; with normalization and temporal filter; half-resolution
  • 63.
    Sparse raytracing: raytrace at a reduced resolution; reuse multiple rays at full-res; a unique blend of rays for every pixel, automatic from the BRDF; per-pixel normals and roughness preserved; weight normalization fixes gaps
  • 64.
    1 ray, 4 resolve samples; with normalization and temporal filter; half-res trace, half-res resolve
  • 65.
    1 ray, 4 resolve samples; with normalization and temporal filter; half-res trace, full-res resolve
  • 66.
    Temporal reprojection: reprojection along G-Buffer depth ‘smears’. Add reflection depth, averaged from local rays, for proper reflection parallax
  • 67.
    Importance sampling bias: lots of noise from the BRDF tail. Shift samples toward the mirror direction; we still need accurate PDF values. Truncated distribution sampling:
        float2 u = halton(sampleIdx);
        u.x = lerp(u.x, 1.0, bias);
        importanceSample(u);
    Different normalization constant: our variance reduction re-normalizes!
  • 68.
    Filtered importance sampling: pre-filter an image pyramid; estimate the footprint of a cone at the intersection (no actual cone tracing); mip determined by a log function fit of roughness, distance to hit, and elongation
  • 69.
    Filter bias: counter sampling bias with filter bias (same parameter, tuned for a similar look across the range). Improves performance: better spatial coherency, smaller mips
  • 70.
    Bias 0.0
  • 71.
    Bias 0.7
  • 72.
    Multi-pixel resolve: resolve four pixels at a time using the same rays; four running color and weight sums; four times the VGPRs (two to three waves are enough in practice); same four color-buffer UVs, but different mips :( Find min and max mips and interpolate samples: two color samples instead of four :)
  • 73.
    Mip anchor interpolation: 1 mip fetch vs. 2 mip fetches + interpolation vs. 4 mip fetches
  • 74.
    Performance: PS4 timings (Frostbite testbed); 1600 x 900; all pixels reflective; HQ rays for “Disney” roughness < 20%; bias 0.7; all passes use compute and can run async.
        Resolve samples | Rays / half-res pixel | Effective samples / pixel | Tile classify | Ray alloc | Linear trace | Hi-Z trace | Resolve | Temporal | Total
        4 | 1 | 4 | 0.16ms | 0.24ms | 0.20ms | 0.37ms | 0.81ms | 0.30ms | 2.19ms
        4 | 2 | 8 | 0.16ms | 0.34ms | 0.34ms | 0.65ms | 1.46ms | 0.30ms | 3.34ms
        1 | 4 | 4 | 0.16ms | 1.06ms | 0.61ms | 0.91ms | 0.91ms | 0.33ms | 4.41ms
  • 75.
    Video
  • 76.
    Conclusion: all requirements fulfilled (accuracy, per-pixel normal, roughness, etc.). Tiny bright highlights still result in noise: can we detect and sample them more? Adaptive Multiple Importance Sampling? Track variance and blur where it’s high?
  • 77.
    Thanks! Questions? tomasz.stachowiak@frostbite.com @h3r2tic. By the way, we’re hiring!
  • 78.
    References:
    [Heitz14] Eric Heitz, Eugene D'Eon, "Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals", https://hal.inria.fr/hal-00996995
    [Hermanns15] Lukas Hermanns, "Screen Space Cone Tracing for Glossy Reflections", http://publica.fraunhofer.de/documents/N-336466.html
    [Karis13] Brian Karis, "Real Shading in Unreal Engine 4", http://blog.selfshadow.com/publications/s2013-shading-course/
    [Karis14] Brian Karis, "High-Quality Temporal Supersampling", http://advances.realtimerendering.com/s2014/
    [McGuire14] Morgan McGuire, Michael Mara, "Efficient GPU Screen-Space Ray Tracing", http://jcgt.org/published/0003/04/04/
    [Pearce15] William Pearce, "Screen Space Glossy Reflections", http://roar11.com/2015/07/screen-space-glossy-reflections/
    [Tokuyoshi15] Yusuke Tokuyoshi, "Specular Lobe-Aware Filtering and Upsampling for Interactive Indirect Illumination", http://www.jp.square-enix.com/info/library/
    [Uludag14] Yasin Uludag, "Hi-Z Screen-Space Cone-Traced Reflections", GPU Pro 5
    [Valient14] Michal Valient, "Reflections and Volumetrics of Killzone Shadow Fall", http://advances.realtimerendering.com/s2014/
  • 79.
    Bonus slides
  • 80.
    Previous work: lobe-aware filtering [Tokuyoshi15]; no hardening :(
  • 81.
    Previous work: Hi-Z Cone Tracing [Uludag14]
  • 82.
    Previous work: Hi-Z Cone Tracing [Uludag14] ??? Can’t trace this
  • 83.
    Overblurring at grazing angles: cones poorly fit specular lobes at grazing angles; lobes become anisotropic in shape; a cone fit to the wider (azimuthal) angle over-blurs. Fit it to the polar angle instead; effectively, shrink the cone at grazing angles
  • 84.
    Filter shrinking due to elongation: found a close fit in Mathematica, and also came up with an ad-hoc one; the ad-hoc fit is close enough in testing:
        specularConeTangent *= lerp(saturate(NdotV * 2.0), 1.0, sqrt(roughness));
  • 85.
    Filter shrinking due to elongation: no filter vs. naive filter vs. adjusted filter
  • 86.
    Pre-integrated FG note: remember this? Multiply by FG after the temporal pass; we do it when applying SSR to the screen. Reduces smearing and noise
  • 87.
    Multi-pixel resolve jittering: ray reuse across 2x2 quads == 2x2 noise; makes Temporal AA unhappy! 2x2 blocks look like features, not aliasing. Spread out the target pixels; jitter temporally to hide artifacts
  • 88.
    Variance reduction for ray reuse: the initial idea was Multiple Importance Sampling; treat neighboring pixels as generation strategies; accurate, unbiased results, but expensive (ALU- and VGPR-heavy)
  • 89.
    Neighborhood clamping: similar to Temporal AA, tuned for some smearing over noise; can’t kill all the lag anyway (the reflection color is from the previous frame!). Expand the color bounding box; tiled, loaded from LDS
  • 90.
    Common materials: SSR
  • 91.
    Common materials: SSR

Editor's Notes

  • #5 Here’s a concept art piece from Mirror’s Edge Catalyst. As you can see, pretty much everything here is reflective. It is worth noting that while the surfaces are generally smooth, not all reflections are mirror-like. Notice the stretched reflection on the left, and the blurry reflections in the walls on the right.
  • #6 Here we have a photograph of one of our conference rooms at Frostbite. Interestingly, when looking at this picture, you don’t perceive reflections per se; instead we see some shadowing of the outdoor light. This highlights an important feature of SSR which is often underplayed. It provides specular occlusion for local and distant reflections. Additionally, in this scene, there are plenty of rougher reflections, as well as variations in the reflectivity, especially on the floor. Take notice also how the reflections become sharper as they approach the contact points of the surfaces they reflect in, for example the chair legs.
  • #7 This photo represents something that you might see quite often in a racing game. But it also highlights a few important features we would like to support. The reflections stretch vertically; there is also high frequency variation in surface roughness and normals.
  • #8 So now we can formulate a set of requirements that we would like to have for our SSR. First of all, it should support sharp and blurry reflections. But those aren’t realistic blurry reflections…
  • #9 … we would like them to sharpen as they come in contact with the reflecting surface.
  • #10 They should also stretch vertically. If you’re using a microfacet BRDF, your analytical lights do this already. SSR should also exhibit this physical phenomenon, as it’s important for realism.
  • #11 Finally, we would like to support per-pixel normal and roughness variation.
  • #12 Let’s take a look at a few SSR techniques, and see if and how they fulfill the requirements we want here. We don’t have the time to go through all techniques presented so far, so we will mostly look at the methods which inspired our work.
  • #13 Let’s start with a quick refresher of the basic SSR algorithm. There were a few publications detailing a possible implementation, so I won’t go into much detail. If you’d like to cover those basics more, check out Morgan McGuire and Michael Mara’s paper, for example [McGuire14]. Anyway, the basic algorithm only works with mirror-like reflections, so ray generation is trivial. We just take the G-Buffer’s depth and normals, and calculate the mirror directions. Then we ray-march the same depth buffer using any algorithm, for example by taking constant-sized steps in screen-space. When we intersect a surface, we must fetch the color at the intersection. There are a few options here, but the most popular one is re-projecting the previous frame’s final lighting, and returning that.
  • #14 Killzone Shadow Fall had an interesting approach for glossy SSR, similar to Image Space Gathering. They would first generate a sharp reflection image for all the pixels on the screen. Then they would create a blur pyramid of this sharp reflection, and choose the blurriness based on surface roughness. This guarantees that there are no discontinuities, although there could be other artifacts. For one, it isn’t clear what level of blurriness should correspond to a given roughness value.
  • #15 And then there’s the problem of normals. Meaning, they basically disappear on rougher surfaces, as the blur is not feature-aware.
  • #16 Plus there’s the lack of contact hardening, as the blurriness is only derived from roughness, not object proximity.
  • #17 An approach which does satisfy all the requirements is importance sampling, with or without filtering. Instead of blurring a sharp reflection, this approach produces a blurry reflection by stochastically sampling the directions. For a rough surface, the directions will be more dispersed, and for a smooth surface, they will go in a narrow beam. The filtered variant uses a pre-blurred color buffer to sample from. The only problem with this approach is that unless you use a lot of rays, it will be very noisy.
  • #18 Our approach takes ideas from both filtered importance sampling, and the Killzone approach. We stochastically shoot rays, but also perform filtering in the image space, although with a pretty fancy kernel.
  • #19 We begin by importance sampling directions from surfaces, but shoot only very few rays, even as low as just one per pixel.
  • #20 The major difference is that instead of returning the colors immediately, we merely store the intersection points. Then we have a ‘resolve’ pass, which reuses those intersection points across neighboring pixels.
  • #21 During the reuse, we calculate the desired blur level of each sample, using an approximate cone fit, and taking the reflection distance into account. We carefully weigh the contributions by each pixel’s BRDF, and that way we avoid normal smearing, over-blurring, and we retain sharp contact points.
  • #22 Here’s a few screenshots of the results. I’ll show a video later as well.
  • #33 Now that you (hopefully) liked the results, let’s take a look at the break-down of our algorithm. We begin by classifying which areas of the screen need reflections, and estimating how much work we need to spend in each tile. Then we adaptively allocate rays, and importance sample them. Next, we perform the ray-tracing in two flavors. After all the ray-tracing is done, we resolve the colors via a ray-reuse filter. Finally, we reproject the previous frame’s results, and blend them in. This temporal filter significantly reduces noise.
  • #34 Diving deeper, let’s take a closer look at all the steps. The first one is tile-based classification. We subdivide the screen into square tiles, and we estimate for each tile how many rays we need for it. We let the user specify a min and max number of rays that should be shot per pixel, and this pass decides on a value between the min and max. To do so, we actually trace some rays for each tile, and guesstimate the perceptual variance of the reflection. This catches stark contrasts in the image, and allocates work where the reflection is estimated to be more noisy. We also approximately skip tiles from further calculations if none of the generated rays hit anything.
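The note above describes the allocation heuristic only loosely; here is a minimal sketch of the idea, assuming a per-tile luminance-variance estimate from the sparse probe rays (the function name, the crude variance normalization, and the zero-ray tile skip are illustrative choices, not Frostbite's exact heuristic):

```python
def allocate_rays(tile_hit_luma, min_rays, max_rays):
    """Decide a rays-per-pixel budget for one tile from a few probe rays.

    tile_hit_luma: luminances returned by the sparse tracer rays,
                   with None for rays that missed everything.
    """
    hits = [l for l in tile_hit_luma if l is not None]
    if not hits:
        return 0  # nothing reflected here: skip the tile entirely
    mean = sum(hits) / len(hits)
    variance = sum((l - mean) ** 2 for l in hits) / len(hits)
    # Map the perceptual variance onto the user-specified ray budget.
    t = min(1.0, variance)  # crude normalization, just for the sketch
    return round(min_rays + t * (max_rays - min_rays))
```

High-contrast tiles (e.g. a bright highlight next to a dark reflection) get the maximum budget, while flat tiles fall back to the minimum.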
  • #35 Once we know which tiles need SSR, we launch an indirect compute pass which decides what kind of ray-tracing is needed for each pixel. We have two ray-tracing algorithms. There’s the super precise hierarchical Z tracer, which gives exact intersection points. We use this one on smooth surfaces, as for mirror-like reflections we would like the intersection points to be as precise as possible. For rough reflections, we cheat and use a low quality linear marcher, as the result will be heavily filtered anyway, and exact intersection points aren’t that important.
  • #36 Just a quick note about the hierarchical tracing. A description of it is in Yasin’s article in GPU Pro 5 [Uludag14] and a few other publications [Pearce15] [Hermanns15], but it’s probably good to mention it here as well. The idea is quite simple. We have a min-z pyramid, which is basically a quad-tree.
  • #37 We start at the most refined level of the hierarchy. Regardless of which level we are in at a given moment, we step the ray to the edge of the cell, or until the intersection with the Z plane in the cell. If we don’t hit the Z plane, we move to a coarser level.
  • #38 This provides efficient empty space skipping
  • #39 Once we find an intersection with a Z plane, we move to a finer level in the hierarchy.
  • #40 … again.
  • #41 … and again.
  • #42 Once we hit the Z plane at the finest level, we have found the intersection.
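The walk described in the notes above can be illustrated in one dimension. This is a sketch rather than the production shader: it assumes a 1-D depth buffer, a ray whose depth increases monotonically along the march (slope >= 0), and the convention that level 0 is the finest mip of the min-Z pyramid:

```python
def build_min_pyramid(depth):
    """Min-Z mip chain; level 0 is the full-resolution depth buffer."""
    pyramid = [depth[:]]
    while len(pyramid[-1]) > 1:
        prev = pyramid[-1]
        pyramid.append([min(prev[i], prev[i + 1] if i + 1 < len(prev) else prev[i])
                        for i in range(0, len(prev), 2)])
    return pyramid

def hi_z_trace(depth, z0, slope):
    """1-D hierarchical walk: the ray's depth at pixel x is z0 + slope * x.
    Returns the first pixel whose surface the ray passes behind, or None."""
    pyr = build_min_pyramid(depth)
    x, level = 0, 0
    while x < len(depth):
        cell = x >> level
        cell_end = (cell + 1) << level            # first pixel after this cell
        ray_z_at_end = z0 + slope * (cell_end - 1)
        if ray_z_at_end < pyr[level][cell]:
            # Ray stays above everything in the cell: skip it, go coarser.
            x = cell_end
            level = min(level + 1, len(pyr) - 1)
        elif level > 0:
            level -= 1                            # potential hit: refine
        else:
            if z0 + slope * x >= depth[x]:
                return x                          # exact hit at the finest level
            x += 1                                # step a single pixel
    return None
```

The coarse levels give the empty-space skipping the notes mention: flat far geometry is stepped over in a handful of iterations instead of pixel by pixel.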
  • #43 I have mentioned importance sampling before, but you may not be intimately familiar with the concept. It is a variance reduction technique for Monte Carlo integration. Suppose that we want to integrate this function; we need to have a probability distribution from which we can generate samples. It should be as close as possible to the original function that we’re integrating. We generate samples from the PDF, and for each one we calculate the ratio of the function value to the value of the PDF. As the number of samples increases, the average of those values approaches the value of our integral.
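The refresher above, in runnable form, estimating the integral of 3x² over [0, 1] (which is exactly 1). The target function and the pdf p(x) = 2x are illustrative choices; x = sqrt(u) is the inversion-method sampler for that pdf:

```python
import random

def estimate(f, sample, pdf, n=20000, seed=0):
    """Monte Carlo estimate: mean and variance of f(x) / pdf(x),
    with x drawn from the given sampler."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        x = sample(rng)
        vals.append(f(x) / pdf(x))
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return mean, var

f = lambda x: 3.0 * x * x                     # integral over [0,1] is exactly 1

# Uniform sampling: pdf(x) = 1 on [0,1].
uni_mean, uni_var = estimate(f, lambda r: r.random(), lambda x: 1.0)

# Importance sampling with pdf(x) = 2x, closer in shape to the target;
# the inversion method turns a uniform u into x = sqrt(u).
imp_mean, imp_var = estimate(f, lambda r: r.random() ** 0.5, lambda x: 2.0 * x)
```

Both estimators converge to 1, but the importance-sampled one does so with markedly lower variance, which is the whole point of matching the pdf to the integrand.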
  • #44 We use importance sampling of the BRDF in order to generate ray directions. Any BRDF can be used here; we happen to worship GGX. Now, we can’t generate lousy rays. Ray-tracing is expensive, so we spend some extra effort to generate very nice rays. As a bare minimum, we use a Halton sampler, with the values pre-calculated on the CPU side. Now, the classic way of importance sampling BRDFs works by generating half-angle vectors (microfacet normals), and reflecting the view vector off them. This results in some rays actually pointing below the surface. We can’t use those, so we re-generate them a few times until we get nice ones. The extra cost is actually negligible here. One could go even further, and importance sample from the Distribution of Visible Normals, as proposed by Eric Heitz and Eugene D’Eon [Heitz14]. This will probably considerably reduce the noise, and is certainly future work for us.
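A tangent-space sketch of the sampling described above (normal = +Z). The θ inversion formula is the standard GGX half-vector sampler, and the retry loop re-generates below-horizon rays as in the talk; uniform randoms stand in for the Halton sequence, and `max_tries` and the mirror fallback are illustrative details:

```python
import math
import random

def sample_ggx_direction(view, roughness, rng, max_tries=8):
    """Sample a reflected direction via a GGX half-vector in tangent space
    (normal = +Z); re-generate samples that point below the surface.
    Returns (direction, half_vector); `view` must be a unit vector."""
    alpha = roughness * roughness
    for _ in range(max_tries):
        u1, u2 = rng.random(), rng.random()   # Halton values in the real thing
        # Inversion of the GGX distribution: tan(theta) = alpha*sqrt(u/(1-u)).
        theta = math.atan(alpha * math.sqrt(u1 / max(1.0 - u1, 1e-8)))
        phi = 2.0 * math.pi * u2
        h = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
        # Reflect the view vector about the sampled microfacet normal.
        vdh = sum(v * hh for v, hh in zip(view, h))
        d = tuple(2.0 * vdh * hh - v for v, hh in zip(view, h))
        if d[2] > 0.0:                        # keep only above-horizon rays
            return d, h
    # Give up after max_tries: fall back to the mirror direction.
    return (-view[0], -view[1], view[2]), (0.0, 0.0, 1.0)

rng = random.Random(1)
view = (0.0, 0.6, 0.8)                        # unit vector toward the camera
d, h = sample_ggx_direction(view, 0.3, rng)
```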
  • #45 Once we’ve performed all the ray-tracing, it’s time to calculate the colors of our picture. The simplest way is to look up the colors at the intersections of each pixel’s rays. But then we would need plenty of rays per pixel to achieve a noise-free result. Most of the time however, we can steal some rays from neighboring pixels. Usually the surface properties will be similar, and visibility is almost always the same. Therefore we pretend the neighboring rays were shot from the local pixel, and use their intersection points.
  • #46 Of course we can’t just arbitrarily reuse the intersection points. We still need to be careful to produce the correct result. All the rays’ contributions need to be correctly weighed by the current pixel’s BRDF, and divided by the PDF that they were sampled from. And then it turns out that we have been overly optimistic. The neighboring probability distributions can be quite different from the BRDF values at the local pixel, causing pretty big spikes in the f over pdf ratio.
  • #47 Let’s take a look at an example scene, where there’s roughness and normal variation.
  • #48 Here’s just the reflections, calculated with one ray per pixel at half-res, and without any ray reuse.
  • #49 And here we use four rays in the resolve pass. A few surfaces became smoother, but we have plenty of high-intensity speckles all over the ground. Overall, this is a worse result than without any ray reuse.
  • #50 We can solve that, but we must first take a step back and take a look at what exactly we’re trying to solve here. This is a version of the scattering integral, with the emitted light removed for brevity. The reflection’s value is determined by a hemi-spherical integral of the incident radiance weighed by the BRDF. Here, the BRDF and cosine term divided by the PDF value is the source of variance. After summing up all the samples, spikes in those ratios result in pixels with overly bright or dark colors.
  • #51 We attack the variance problem directly, as will become obvious in a few moments. We take the original scattering equation, and we multiply its numerator and denominator by the same value. As long as it’s not zero, that’s an identity transformation.
  • #52 We then replace one of the integrals with a constant which we can pre-calculate offline. Chances are that you are already calculating this value in your engine; it happens to be the same thing which we normally use for the split integral formulation for image based lighting. See for example Brian Karis’s presentation in the Physically Based Rendering course from 2013 [Karis13].
  • #53 And finally, we calculate the remaining bit with Monte Carlo quadrature.
  • #54 Here’s the same thing again, with some annotations. What this basically boils down to is a weighted sum. The BRDF over PDF values are the weights, and we divide the result by their sum at the end.
  • #55 The code ends up being super simple too.
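A runnable version of the slide's pseudocode (the dictionary fields and the guard for an all-zero weight sum are illustrative). The property worth noting: because the weights are normalized, the output is a convex combination of the neighbor colors, so a spike in the BRDF/PDF ratio can dominate the blend but can no longer push the result past the color range:

```python
def resolve_pixel(neighborhood, local_brdf):
    """Ray-reuse resolve with weight normalization: weigh each reused hit
    by the local BRDF over the PDF its ray was originally drawn from."""
    result = 0.0
    weight_sum = 0.0
    for nb in neighborhood:
        weight = local_brdf(nb["hit_dir"]) / nb["pdf"]
        result += nb["color"] * weight
        weight_sum += weight
    return result / weight_sum if weight_sum > 0.0 else 0.0

# A neighbor whose ray came from a very low PDF would, unnormalized,
# contribute color * 100; normalized, the result stays within color bounds.
flat_brdf = lambda direction: 1.0
r = resolve_pixel([{"hit_dir": None, "pdf": 0.01, "color": 5.0},
                   {"hit_dir": None, "pdf": 1.0,  "color": 1.0}], flat_brdf)
```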
  • #56 Here we have the same pictures again. First, without ray reuse.
  • #57 Now with ray reuse again, without the new variance reduction trick.
  • #58 And now with the variance reduction. Much saner now.
  • #59 For a comparison, this is without ray reuse, but with four rays /traced/ per pixel. It looks a bit nicer due to more uniform noise distribution, but otherwise variance is pretty similar.
  • #60 Just one ray and no reuse again.
  • #61 And now with the variance reduction. Much saner now.
  • #62 Another image; here we trace 4 rays per pixel, and use 4 samples to resolve. So 16 color values integrated, with only four rays traced per pixel. It’s starting to look nice, but looks like it will be still too expensive to get a noise-free result.
  • #63 … which is why we use everyone’s favorite technique in the recent days, which is temporal filtering. We get a similar result to the previous image, except here we still only trace one ray per pixel.
  • #64 The weight re-normalization allows us to do one more neat thing. What we’re doing is saying “The BRDF weights *must* integrate to one”, even if we have poorly fitting rays. Thing is, we will usually find some rays in the neighbors which we can use. Even if the given pixel didn’t trace any rays for itself. What this enables us to do, is sparse ray-tracing. We don’t need to shoot rays for every pixel, we just need to resolve at full resolution. We will find *some* rays which will suit us anyway. And we will properly weigh them by the full-res BRDF.
  • #65 Here’s the last image again. This is a half-resolution result.
  • #66 … and here with a full-res resolve. We’re still ray-tracing at half-res, but we get full-res detail. Noise is lower here as well, as our temporal filter is designed with the full-res resolve in mind.
  • #67 I mentioned temporal reprojection, so let’s have a quick look at it now. The idea is simple: given a pixel’s 3D coordinate, check where it was in the previous frame, project it, and we have the texcoord to sample. Except that when we move the viewport, reflections exhibit parallax. That is, the reflected objects move according to their depth, not the depth of the surface which reflects them. Attempting to reproject reflections using the closer depth results in smearing. So we calculate the average depth of the reflected objects, and reproject using that. This results in much reduced smearing.
  • #68 We are actually quite nasty with our importance sampling. We bias it quite a bit. It is difficult to eliminate all the noise without using a crazy number of samples. A lot of it comes from the tail of the BRDF, especially for distributions like GGX. Which is why we shift the generated samples toward the mirror direction. In a fancy way, because we still need accurate PDF values for importance sampling. It’s actually quite easy to sample a ‘truncated distribution’ when using the inversion method: we just offset the pseudo-random value used to generate the offset from the mirror direction. This has the effect of redistributing the samples from the ‘truncated’ region proportionally in the ‘untruncated’ region of the function. The PDF values are the same except for a constant scale factor, and we don’t need to calculate it, since our variance reduction scheme re-normalizes all the constant factors anyway.
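A sketch of the truncation trick above. It assumes a half-vector parameterization in which u = 1 maps to the mirror direction (θ = 0), matching the slide's lerp toward 1.0; the exact parameterization in the talk may differ, but the mechanism is the same:

```python
import math
import random

def lerp(a, b, t):
    return a + (b - a) * t

def sample_theta(u, alpha):
    """Half-vector polar angle; parameterized so that u = 1 gives
    theta = 0, i.e. the mirror direction."""
    return math.atan(alpha * math.sqrt((1.0 - u) / max(u, 1e-8)))

def mean_theta(alpha, bias, n=10000, seed=0):
    """Average sampled lobe angle, with the slide's truncation bias applied."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        u = lerp(u, 1.0, bias)   # truncate: redistribute toward the mirror
        total += sample_theta(u, alpha)
    return total / n

unbiased = mean_theta(alpha=0.5, bias=0.0)
biased = mean_theta(alpha=0.5, bias=0.7)
```

Every biased sample lands closer to the mirror direction, so the average lobe angle shrinks; the samples remain usable for importance sampling because the PDF only changes by the constant factor the weight normalization absorbs.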
  • #69 Another variance reduction technique we use is filtered importance sampling. I have already covered the ‘importance sampling’ bit, so the filtering part is quite easy to add. Instead of fetching the high frequency color buffer for our reflections, we use appropriately pre-blurred versions of it. We pretend that the rays we trace are cones, and we estimate their footprint at intersection points. From that, we get the mip level of a pre-blurred image pyramid to use.
  • #70 The sampling of pre-blurred image levels can be biased to produce blurrier reflections than necessary. This is useful to counter the bias in importance sampling, which over-sharpens reflections. We wire the two together, such that we end up with a single ‘bias’ parameter, which goes from “pure unbiased Monte Carlo” to tracing in the mirror direction, and taking all reflection blurriness from the filter. Increasing the bias results in improved performance in two ways. The ray-tracing becomes more coherent, and we sample smaller color mips. Pushing it *too* high is not desirable however, as we introduce some discontinuities and leaks, since proximity in screen-space doesn’t necessarily mean proximity in the reflection. Increasing the bias also reduces the desirable elongation effect.
  • #71 Here’s our scene with a bias of zero.
  • #72 And again with 0.7. Some of the reflections changed slightly, but that is mostly due to the temporal filter’s neighborhood clamping. Otherwise the look of the image is similar, yet the noise is reduced, and the performance is improved.
  • #73 Remember the full-res color resolve? We actually run it four pixels at a time, and reuse the same rays for all four pixels. It becomes more VGPR heavy, but lets us save some memory bandwidth, so it’s a win in practice. We only need to fetch the ray data once for all four pixels, and we also calculate the hit point screen-space coordinates once, as they are the same. The thing which doesn’t necessarily have to be the same however is the mip level of the color buffer to fetch for each pixel. The pixels can have four different roughness values, and therefore should use different pre-filtered images. Here we use a similar trick as the DXT format uses for color compression. We find the min and max anchor mip levels for the four pixels, and we interpolate between them. That way we take only two color buffer samples instead of four.
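A one-dimensional sketch of the anchor trick just described (point sampling and the data layout are illustrative): instead of one fetch per pixel of the quad at four different mips, fetch only the min and max mip and linearly interpolate each pixel's value between those two anchors:

```python
def fetch(mip_chain, level, x):
    """Point-sample a 1-D mip chain (level 0 = full resolution)."""
    img = mip_chain[level]
    return img[min(x >> level, len(img) - 1)]

def resolve_quad_mips(mip_chain, x, desired_mips):
    """Mip anchor interpolation: only two fetches (min and max desired mip)
    serve all four pixels of the quad, instead of four fetches."""
    lo, hi = min(desired_mips), max(desired_mips)
    c_lo = fetch(mip_chain, lo, x)   # anchor fetch #1
    c_hi = fetch(mip_chain, hi, x)   # anchor fetch #2
    out = []
    for m in desired_mips:
        t = 0.0 if hi == lo else (m - lo) / (hi - lo)
        out.append(c_lo * (1.0 - t) + c_hi * t)
    return out
```

With a chain whose levels hold 1, 2, 4, and 8, a quad wanting mips [0, 1, 2, 2] resolves to [1.0, 2.5, 4.0, 4.0]: each pixel gets a blend of the two anchors rather than its exact mip, which is the small accuracy trade the slide accepts for halving the fetches.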
  • #74 To illustrate that the mip anchor interpolation works, here we have some comparison images. At the bottom, we’re using just one mip level for the four pixels we resolve; this results in a blurry image. Then at the top we have four unique mip levels, which is the reference. In the middle we have the result of interpolation. As you can see, the difference is barely noticeable.
  • #75 Here’s a quick slide with performance on the PS4. As you can see, the ray reuse scheme allows us to quite efficiently reduce the ray tracing cost.
  • #81 One can try to filter the importance sampling noise. Similarly to how the Killzone approach blurs mirror-like reflections in screen-space, we can filter the noisy reflections. Only we need a smarter filter, which respects surface details. Yusuke Tokuyoshi has recently published a really cool paper about lobe-aware filtering [Tokuyoshi15], which produces great results. He uses it to de-noise path-tracing and real-time cone tracing.An image is first rendered using importance sampling.Then the filtering process compares the similarity of outgoing specular lobes through the use of fitted spherical gaussians. The similarity acts as a weight in a bilateral filter.This approach fits almost all the bills, except contact hardening. The lobe similarity does not take object proximity into account, therefore we lose sharp contacts between objects.
  • #82 Yasin Uludag published a screen-space cone tracing algorithm in GPU Pro 5, which ticks many of the boxes on our list of requirements; everything except elongation. (See also [Pearce15] and [Hermanns15].) The algorithm starts by tracing a ray, not a cone. Once the ray hits something, the algorithm pretends it was a cone, and samples a pre-filtered color buffer. The contribution is weighted by the calculated coverage of the pre-filtered area. It then marches forward and accumulates more samples, until full coverage is obtained.
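The accumulation loop can be sketched like this. It is a simplified sketch assuming front-to-back alpha-style compositing of pre-filtered samples, with scalar colors and made-up names; the GPU Pro 5 chapter derives the per-sample coverage geometrically.

```python
def cone_accumulate(samples):
    """Accumulate (color, coverage) samples along the cone front-to-back,
    weighting each sample by the still-unoccluded fraction, and stop once
    full coverage is reached."""
    color, total = 0.0, 0.0
    for c, a in samples:
        w = a * (1.0 - total)  # fraction this sample actually contributes
        color += c * w
        total += w
        if total >= 1.0:
            break
    return color, total
```

The difficulty described on the next slide lives inside this loop: without knowing object thickness, the per-sample coverage `a` is guesswork, so `total` can reach 1.0 too early or too late.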
  • #83 The trouble is with the coverage calculation, and with tracing after the first hit. The depth buffer only contains the first visible surface, so we don’t actually know the thickness of objects. We can do some guesswork, or assume a constant thickness. But the algorithm still won’t know whether it just hit a solid wall, and should stop, or whether the thing it hit was a thin surface, and it should continue. The cone trace after the first hit can’t be performed accurately either, since we can’t trace behind objects; we would need to perform depth peeling, or have other information about the scene structure. Overall, in simple scenes the algorithm produces great results, but it can produce discontinuities and artifacts which are difficult to get rid of.
  • #87 Just a quick note about the pre-integrated FG term that I’d mentioned before. It is much better to multiply by it after the temporal filter, so that changes in it do not affect the neighborhood clamping. It also doesn’t need to be reprojected, since the pre-integrated value is noise free anyway. In our case, we multiply by the FG term in the pass which applies SSR to the screen. Its value is also used for image-based lights, so it effectively costs just one multiply operation.
  • #88 There’s one more detail to the multi-pixel resolve that I need to mention. We process four pixels at the same time, but those are not the four closest pixels. Doing so would produce half-resolution noise, which doesn’t play well with Temporal AA due to its clamping in a 3x3 neighborhood. We actually spread out the pixels such that the distance between them is greater than the TAA window. Every frame we change which four pixels are resolved together, and let TAA resolve that to a smooth high-resolution image.
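One way to realize such a pattern is an interleaved tiling. This is a hypothetical sketch, with an assumed 4-pixel tile size (any spacing wider than TAA’s 3x3 window works); the talk does not specify the exact pattern.

```python
def shared_ray_pixels(frame, tile=4):
    """Pick one offset inside a tile x tile cell per frame, and resolve
    together the four pixels at that offset in a 2x2 block of cells.
    Their spacing (tile pixels) exceeds TAA's 3x3 clamping window, and the
    offset cycles over the tile*tile positions from frame to frame."""
    ox, oy = frame % tile, (frame // tile) % tile
    return [(ox + tile * i, oy + tile * j) for j in range(2) for i in range(2)]
```

Because the grouping changes every frame, each screen pixel is eventually resolved with every in-tile phase, and TAA’s temporal accumulation smooths the result back to full resolution.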
  • #89 I actually got the idea for ray reuse while reading Eric Veach’s thesis, particularly the bit about multiple importance sampling. The insight I had is that I could treat the neighboring pixels as ray generation strategies. That way, if we get an unlikely ray with a high BRDF over PDF ratio, we can ask the other neighbors “so what’s the chance that /you/ would give me this ray”, and weigh down the contribution of the unlikely one. The approach worked fine, and produced great results. But it was also too expensive: all the careful weighting was ALU and VGPR hell.
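The weighting described here is Veach’s balance heuristic. A minimal sketch, with neighbors treated as extra sampling strategies for the same ray:

```python
def balance_heuristic(pdf_self, pdf_neighbors):
    """Balance heuristic weight for a sample drawn by this pixel's
    strategy, given the pdfs the neighboring pixels' strategies assign to
    the same ray. The sample's f/pdf_self estimate is multiplied by this
    weight, so a ray that neighbors were also likely to generate (or that
    was unlikely under pdf_self) gets weighed down."""
    return pdf_self / (pdf_self + sum(pdf_neighbors))
```

In practice this means evaluating every neighbor’s pdf for every reused ray, which is exactly the ALU and register pressure the note complains about.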
  • #90 To reduce temporal smearing and lag, we use neighborhood clamping. This is similar to the temporal anti-aliasing algorithm from Brian Karis [Karis14], but it’s tuned a bit differently. The values here are generally much more noisy, so the color bounding box clipping can’t be exact like in TAA. It turns out that we can’t remove all the reprojection lag anyway, since the reflection color buffer we sample in the resolve pass is from the previous frame. With that in mind, if we give some slack to the neighborhood clamping, it will only marginally increase the lag and smearing.
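The “slack” idea can be sketched as an expanded clamping box. This is a scalar, per-channel sketch with a made-up `slack` knob standing in for the tuning mentioned above; the production version clips in color-space against a bounding box, per [Karis14].

```python
def clamp_history(history, neighborhood, slack=0.5):
    """Build the min/max box of the current-frame neighborhood, expand it
    by `slack` times its extent, and clamp the reprojected history value
    into the expanded box. slack=0 reproduces strict TAA-style clamping;
    larger values trade a little lag and smearing for less noise."""
    lo, hi = min(neighborhood), max(neighborhood)
    pad = (hi - lo) * slack
    return min(max(history, lo - pad), hi + pad)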
