Cyberpunk 2077

thank you to make1 for asking this question.

the very short version is

1. ray tracing is a brute force approach, but it's limited to specific effects: reflections, maybe global illumination, etc. there are plenty of things still happening as good ol' raster (often shadows and maybe ambient occlusion).

2. path tracing is even more brute force because there is NO raster anymore, only ray tracing. at its core it isn't really different from ray tracing: a ray traces (follows) a path... the main difference is selected things having path tracing versus everything having it, and having a complete raster image to fall back on vs. not.

3. so then you have this problem: when everything is path traced, it normally takes several seconds (or minutes) per frame, or it looks awful. why is this?

4. the reason is, the engine sends a ray of light from your eye (the camera) out into the world... eventually this ray hits something. let's say a sphere in front of you. at the front of the sphere it's easy: the ray bounces straight back to you (the camera), and all you get back is one tiny dot of the sphere. let's say it's grey, because it's a grey sphere.

5. but what about the rays that go to the side of the sphere? a whole bunch of them bounce off into the scene and never return to the camera. so these rays serve no actual purpose... but we have no way of knowing this until we actually trace them. (remember also that a ray can bounce 2 or more times... but it still may end up anywhere).
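
(if you're curious what "a ray hits a sphere" actually looks like as code, here's a tiny, generic sketch of the standard ray-sphere test. this is not Cyberpunk's engine code, the names are made up, and a real engine does this on the GPU for millions of rays.)

```python
import math

def ray_hits_sphere(ray_origin, ray_dir, sphere_centre, sphere_radius):
    """Return the distance along the ray to the sphere, or None if it misses.

    ray_dir is assumed to be normalised. This is the textbook quadratic
    ray-sphere intersection test, nothing engine-specific.
    """
    # vector from the ray's starting point to the sphere's centre
    oc = [ray_origin[i] - sphere_centre[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * ray_dir[i] for i in range(3))
    c = sum(x * x for x in oc) - sphere_radius ** 2
    discriminant = b * b - 4.0 * c        # 'a' is 1 because ray_dir is normalised
    if discriminant < 0:
        return None                       # the ray misses and is wasted work
    t = (-b - math.sqrt(discriminant)) / 2.0
    return t if t > 0 else None           # a hit behind the camera doesn't count

# a camera at the origin looking down +z at a grey sphere 5m away:
# ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0) -> 4.0 (the front of the sphere)
```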

6. this is why RT and PT are brute force... we cast a whole bunch of rays out into the world, and only some come back. each ray requires one or more calculations (bounces), so usually we limit how far we trace, both by distance and by number of bounces.
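
(as a rough sketch of what "limit by distance and bounces" means: the inner loop of a path tracer looks something like the below. this is generic illustrative Python, not CDPR's or NVIDIA's code; scene.trace() and the hit object's fields are made-up placeholders, and everything is greyscale to keep it short.)

```python
MAX_BOUNCES = 2          # give up after this many bounces...
MAX_DISTANCE = 1000.0    # ...and don't trace further than this

def trace_path(origin, direction, scene):
    """Follow one ray through the scene, giving up after a few bounces."""
    light = 0.0          # how much light this ray gathers
    throughput = 1.0     # how much energy the ray still carries
    for _bounce in range(MAX_BOUNCES + 1):
        hit = scene.trace(origin, direction, max_distance=MAX_DISTANCE)
        if hit is None:
            break                            # ray left the scene: wasted work
        light += throughput * hit.emitted    # any light source the ray ran into
        throughput *= hit.reflectance        # each bounce loses some energy
        origin, direction = hit.position, hit.bounce_direction()
    return light
```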

7. so how do we fix this? well, what if instead of sending out all the rays at once, we just send out a small percentage of them, and see which ones bounce off something and return to the camera, and which ones disappear forever. this is called importance sampling.

8. we can then use this information to send more rays where some are returning to the camera, and fewer where they don't return.
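
(a hedged toy version of points 7 and 8: probe first, then spend the ray budget where the probes found something. real importance sampling weights directions by a probability distribution and divides by that probability to stay correct; this sketch only shows the "spend effort where it matters" idea, and all the names are made up.)

```python
def allocate_rays(probe_results, total_budget):
    """Decide how many follow-up rays each pixel gets, based on a few probe rays.

    probe_results maps pixel -> how much light its probe rays found.
    """
    total = sum(probe_results.values()) or 1.0
    budget = {}
    for pixel, found_light in probe_results.items():
        # pixels where probes came back with light get proportionally more rays,
        # but every pixel keeps at least one so nothing is missed entirely
        budget[pixel] = max(1, round(total_budget * found_light / total))
    return budget

# probes found lots of light at pixel "A", almost none at pixel "B":
# allocate_rays({"A": 0.9, "B": 0.1}, 100) -> {"A": 90, "B": 10}
```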

9. we can take this a bit further by using a simplified (and faster) version of the rendering equation... which is the math that describes how all the light a ray gathers is calculated.
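
(for anyone who wants to see it, the thing being simplified is kajiya's rendering equation. you don't need it to follow the rest of this article:)

$$ L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i $$

in plain english: the light leaving a point towards you equals the light that point emits itself, plus all the light arriving at it from every direction, filtered by the surface material and the angle it arrives at. path tracing estimates that integral with random rays, which is exactly why it needs so many of them.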

10. what this importance-sampled scene will look like (and this is where Zyanide's article is helpful, to see their images) is a lot better where rays are bouncing back, but still very dotty where there are only a few rays.

11. okay, so knowing it's still not perfect, here's what we can do... we send a few rays out for importance sampling, to tell us where to send more rays... then when these next rays come back, we now have even more information about which rays are important and which aren't. we start to see the finer details.

12. let's say there is the big sphere but also a bunch of small marbles. some initial importance samples hit marbles, but other marbles are missed... and when we send out more rays based on the initial importance sampling, we now get a few hits on some of the marbles we missed. this gives us better quality information again.

13. so, this is what we call resampled importance sampling (RIS). it's called "resampled" because none of the initial rays are thrown away. we store them as we build the frame, so ideally the initial importance samples are part of the final image, and the next rays are as well.
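
(here's a hedged sketch of the core trick: keeping one "winning" sample out of a stream of candidates, with each candidate's chance of winning proportional to its importance weight, without ever storing the whole stream. this is the basic weighted-reservoir update used by RIS/ReSTIR; the real version also tracks an extra contribution weight so the final image stays correct, which i've left out.)

```python
import random

class Reservoir:
    """Keeps ONE chosen sample out of many candidates, streamed in one at a time."""

    def __init__(self):
        self.sample = None      # the candidate we're currently keeping
        self.weight_sum = 0.0   # total importance weight seen so far
        self.count = 0          # how many candidates we've seen

    def consider(self, candidate, weight):
        """Stream in one candidate (e.g. a ray / light sample) with its weight."""
        self.weight_sum += weight
        self.count += 1
        # replace the kept sample with probability weight / weight_sum;
        # over the whole stream, each candidate ends up kept in proportion to its weight
        if weight > 0 and random.random() < weight / self.weight_sum:
            self.sample = candidate

# stream in a few candidate light samples; brighter ones win more often:
r = Reservoir()
for name, importance in [("dim lamp", 0.1), ("neon sign", 2.0), ("headlight", 0.7)]:
    r.consider(name, importance)
print(r.sample)   # prints "neon sign" most of the time
```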

14. there's more we can do though, because we had all the rays in the previous frame, right? they are information, too, so it would be ideal not to just throw them away — some might still be relevant. we can store these up and reuse them in the current frame if they're still visible from the camera. this can be done over several frames. this is where the term temporal (over time) comes from.

15. spatiotemporal then just means over time and space. rays are remembered and reused over several frames. because to make a whole picture, we need a LOT of rays... the above is how we go from being able to do 1 ray every few pixels, to doing dozens or hundreds of rays per pixel (and focusing most of the effort where the detail is).
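
(continuing the sketch above: temporal reuse is basically "treat last frame's reservoir as one more candidate for this frame's reservoir". this is hugely simplified; the real thing caps how much history a reservoir may carry, re-checks that the old sample is still valid and visible after the camera moved, and corrects the weights so the reuse doesn't drift.)

```python
def merge_temporal(current, previous):
    """Fold last frame's Reservoir (see the sketch above) into this frame's."""
    merged = Reservoir()
    if current.sample is not None:
        merged.consider(current.sample, current.weight_sum)
    if previous.sample is not None:
        merged.consider(previous.sample, previous.weight_sum)
    merged.count = current.count + previous.count   # remember how much history this carries
    return merged
```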

16. but there's a problem. some of the assumptions... the simpler equations used, and the reuse of spatiotemporal rays... introduce something called bias. bias is some degree of error. the problem usually is, we don't know exactly how much error. each ray might be off by "a bit", and the more we use optimisations like the above, the greater the risk of bias.

17. this is the main reason we see ghosting, smearing, edge noise, boiling noise, etc... what you're actually seeing is bias... some percentage of the rays are wrong, or in a slightly wrong place. it would be ideal not to introduce any bias, but to avoid it entirely we end up back at the very slow original path tracing that takes several seconds per frame, instead of milliseconds per frame.

18. the way we fix this is debiasing, which is essentially applying the negative of what the bias looks like to try and cancel it out. as far as i can see Cyberpunk has two different debias methods.

19. so what's ReGIR? well, this is a different way of looking at resampled importance sampling (RIS). instead of sending rays out into the whole world at random, we divide the 3D worldspace into a grid, then send out a few rays per grid sector. we then look at how many rays come back, and send more rays to just the grid sectors that have the most bounces. this is then resampled (with RIS).

(the above is simplified a little)
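
(a very loose sketch of the grid idea in point 19, reusing the Reservoir sketch from earlier. this is not NVIDIA's actual ReGIR code: the cell size is made up, and the "importance" of a light to a cell is just brightness falling off with distance, where the real thing uses proper radiance estimates on the GPU.)

```python
import math
import random

CELL_SIZE = 8.0   # metres per grid cell (made-up number)

def cell_of(position):
    """Which grid cell a 3D world position falls into."""
    return tuple(int(position[i] // CELL_SIZE) for i in range(3))

def cell_centre(cell):
    return tuple((c + 0.5) * CELL_SIZE for c in cell)

def build_light_grid(cells, lights, candidates_per_cell=8):
    """For each grid cell, stream a few random candidate lights through a
    Reservoir, weighted by a rough guess of how much they matter to that cell."""
    grid = {}
    for cell in cells:
        centre = cell_centre(cell)
        r = Reservoir()
        for _ in range(candidates_per_cell):
            light = random.choice(lights)
            d = math.dist(centre, light["position"])
            r.consider(light["name"], light["intensity"] / (1.0 + d * d))
        grid[cell] = r
    return grid

# later, a shading point just looks up cell_of(its_position) and reuses that
# cell's pre-picked candidate(s) instead of searching every light in Night City
```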

what is Ray Reconstruction then exactly?

this is where i don't really like the term "ray reconstruction", and even RIS, ReGIR etc. become a bit confusing, because they aren't exactly a bunch of separate things. they are all working very tightly together... by the same token, you can't really turn off RIS in Cyberpunk... to do so would drastically reduce the quality and/or performance of the game... (i believe the EnableRIS setting only controls RIS for very specific objects that BabaBooey's modified to now be properly included in the path tracing worldspace).

and there are several settings i disagree with CDPR on. for example they use BiasCorrectionMode 2 for RTXDI, but mode 1 looks a lot cleaner.

often what happens during software development is that a few bias methods were developed at the beginning. when RTXDI was originally released with 1.6, it looked best with bias mode 2... but RTXDI has had a few updates since, which has changed the bias, and nobody has realised... or different teams missed some communication... there are hundreds or thousands of things being coordinated every release in a game like this. it's easy to criticise, but to be too critical here would be a fundamental attribution error.

what do they all stand for?

so now that we know all of the above, i can tell you ReSTIR stands for Reservoir-based SpatioTemporal Importance Resampling. the reservoir is simply where we store the historical (spatiotemporal) ray information. GI is just global illumination. so — "secondary lighting that is not from direct light sources" is what ReSTIR is for (direct lighting is what RTXDI is being used for).

ReGIR is Reservoir-based Grid Importance Resampling (you know what this means now, too).

RTXDI is simply (NVIDIA) RTX Direct Illumination... in other words, light sources (but remember, we have to find the rays that bounce from the camera to a light, or from the camera to an object to a light... and we have to do it cheaply. THAT is the challenge).
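
(to make "do it cheaply" a bit more concrete, here's a generic sketch of direct light sampling: pick ONE candidate light, weighted by a rough guess of its importance, and pay for exactly one shadow ray to check it isn't blocked. RTXDI makes that pick with RIS/reservoirs and far better weights; scene.is_visible() is a made-up placeholder, and the falloff below ignores the surface material entirely.)

```python
import math
import random

def direct_light(hit_point, lights, scene):
    """Estimate direct lighting at a surface point with one light sample."""
    # rough importance of each light: brighter and closer is more important
    weights = [l["intensity"] / (1.0 + math.dist(hit_point, l["position"]) ** 2)
               for l in lights]
    if sum(weights) == 0.0:
        return 0.0
    chosen = random.choices(lights, weights=weights, k=1)[0]
    if not scene.is_visible(hit_point, chosen["position"]):   # the one shadow ray
        return 0.0                                            # light is blocked: shadow
    d = math.dist(hit_point, chosen["position"])
    return chosen["intensity"] / (1.0 + d * d)                # toy falloff, no BRDF
```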

now here's something I find interesting. if you sent rays out from the camera, and didn't have any optimisations, and traced all those rays to completion, you would (eventually) render the entire game world. of course this is insanely expensive.

quantum physicists suspect reality may only exist when we measure it(1). is this because fully rendering the entire universe all the time (while nobody's looking) would be expensive?

1. https://www.sciencedaily.com/releases/2015/05/150527103110.htm (or, in plain English https://www.sciencealert.com/reality-doesn-t-exist-until-we-measure-it-quantum-experiment-confirms)

Written by sammilucia

7 comments

  1. nobody91blues
    thx for this! i'm just too ignorant to understand what the point was lol
    are you trying to find a way to regain access to all the ini values?
    is what cdpr has done in 2.12 bad? is it irreversible?
    my pics definitely looked cleaner on 2.11 with your ultra plus mod installed
    1. sammilucia
      no not at all

      simply because there isn't a lot of entry-level information on this. i figure the more people can understand how things work, the more we can work together 😊
    2. nobody91blues
      got it! great work
  2. wastelandtrilla
    What a write up, I love it! Thanks for taking the time to break it all down.
    1. sammilucia
      💕
  3. gummyfresh86
    wow this is awesome. thank you so much. so interesting how it all works. kudos.
    1. sammilucia
      ty ❤️