0- What I'll describe here
What this mod does, exactly, and why.
What some of the important parameters in the mod do.
1- Why this mod?
This mod makes the lighting even more realistic.
1.1- How?
In the real world, everything you see with your eyes is the result of "tracing rays". A ray-tracing rendering technique tries to calculate lighting just like the real world does; of course, with a lot of biases to keep the performance reasonable! And some of those biases are present in Cyberpunk 2077.
1.1.1- What are those biases?
a- Limited "Tracing Radius" for both Diffuse and Specular rays.
b- Intensive Importance Sampling.
c- Denoising!!!
and more...
a- Tracing Radius - In the real world, light rays travel to infinity until they get reflected and/or absorbed by something. When one of these two events happens, the surface behind the reflector/absorber can't receive that light anymore, and we call this event "occlusion" or "shadow". There's also something called "Ambient Occlusion", which is when the ambient light gets occluded by a surface or by thick air. By "ambient light" we usually mean the light coming from the sky. An accurate, unbiased ray tracer always checks whether a surface is occluded or not. But in a game, which heavily biases its calculations in order to squeeze out every last frame, it's not "always". Instead, the ray-tracing engine has parameters that the dev/modder can set to strike a balance between accuracy and performance.
"Tracing Radius" simply means how far a ray should travel to find an occluder. In CP77 this number is 200m! That means if an occluder is farther away than 200m, the ray tracer says: "OK, it doesn't matter, I won't try to find it anymore; I guess the skylight can reach that surface from that angle", which is not always a correct guess.
The same story goes for "Tracing Radius Reflection". This one is, as the name says, for specular reflections instead of diffuse ones. Since these reflections are usually more visible than diffuse ones, the default value is higher: 2000m. But still, in some rare cases, the surface being reflected is farther away than 2000m and also very visible! So by increasing these values we can fix the inaccuracies, and in Cyberpunk, wonderfully, it just doesn't affect the performance at all! :/
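To make the idea concrete, here's a tiny Python sketch (my own illustration, not engine code) of what a limited tracing radius does to an occlusion check:

```python
def is_occluded(occluder_distance_m, tracing_radius_m):
    """Biased occlusion test. A real ray tracer would march the ray through
    the scene; here we just compare the distance to the nearest blocker
    against the search radius. Anything beyond the radius is silently
    treated as unoccluded, which is exactly the bias described above."""
    return occluder_distance_m <= tracing_radius_m

# A skyscraper 500m away really does block the sky from this surface:
print(is_occluded(500, 200))   # False -> wrongly lit (default diffuse radius)
print(is_occluded(500, 2000))  # True  -> correctly shadowed
```

Raising the radius turns the first case into the second; in theory that means longer occlusion searches, though as noted, CP77 doesn't seem to pay for it.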
b- Importance Sampling - What is this? Well, it's a super-duper complex thing to cover here. Simply put, it's an optimization technique: it checks which light sources contribute more to the lighting of the image, then traces more samples toward those light sources. This process can reduce the noise, or improve the performance, or both! But it can also bias the result, depending on the implementation method. I suspect that REDengine is using a method similar to this one from the article:
"For real-time ray tracing with many lights, Schmittler et al. restrict the influence region of lights and use a k-d tree to quickly locate the lights that affect each point. Bikker takes a similar approach in the Arauna ray tracer, but it uses a BVH with spherical nodes to more tightly bound the light volumes. Shading is done Whitted-style by evaluating all contributing lights. These methods suffer from bias as the light contributions are cut off, but that may potentially be alleviated with stochastic light ranges as mentioned earlier."
On the other hand, a side effect of this "limiting the contribution of lights" technique is that the "chance of a ray finding a light" (the hit probability) is lower, making more noise! AH... NOISE...
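As a toy illustration of that bias (pure Python, all names mine; this is not the engine's actual algorithm), here's one light being picked with probability proportional to its estimated contribution, with a hard influence range that culls distant lights:

```python
import random

def pick_light(lights, shading_point, max_range=None):
    """Pick one light with probability proportional to its estimated
    contribution (intensity / distance^2). max_range mimics the 'limited
    influence region' idea: lights beyond it get zero weight, so their
    energy is simply lost from the image -> bias."""
    weights = []
    for position, intensity in lights:
        d2 = sum((p - s) ** 2 for p, s in zip(position, shading_point))
        if max_range is not None and d2 ** 0.5 > max_range:
            weights.append(0.0)                 # culled: can never be sampled
        else:
            weights.append(intensity / max(d2, 1e-6))
    total = sum(weights)
    if total == 0.0:
        return None, 0.0                        # no reachable light at all
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i, w / total                 # chosen index and its probability
    return len(lights) - 1, weights[-1] / total

lights = [((0, 10, 0), 100.0), ((300, 10, 0), 100.0)]   # near lamp, far lamp
print(pick_light(lights, (0, 0, 0))[0])                  # almost always 0 (near)
print(pick_light(lights, (0, 0, 0), max_range=50)[0])    # always 0: far lamp culled
```

With the range limit in place, the far lamp contributes nothing, and any pixel that could only "see" culled lights ends up black or noisy, which is exactly the side effect above.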
c- Denoising - In image rendering, we have noise! IN THE REAL WORLD there's an uncountable number of light rays hitting every millimeter of a surface; maybe a billion? Billions of billions? Billions of billions of (Trump voiceover activated)... well, you get the idea ;-) But in a computer, we can't simulate that many rays. In the animation and movie industry, big studios use beefy supercomputers to render those products, and they still stick with 1,000-10,000 samples per pixel (which can take more than an hour for a single frame!), which is not even a percent of the real-world light rays hitting the same surface, and is usually still noisy as hell. So what do they do? They use different filters to interpolate between neighboring pixels and make the image look soft and natural. This step is called "Denoising". We gamers can't afford a beefy supercomputer, and we still want 60 frames per second instead of 60 minutes per frame (like how the cinema industry works). Games can only calculate one sample (or even less!) per pixel in each frame. So we still need to denoise the image, but 1,000-10,000 times harder than the cinema industry, because we have 1,000-10,000 times fewer samples!!!! WOW!!!! Not interesting...
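If you want to see why fewer samples means more noise, here's a small Monte Carlo experiment (just an illustration, nothing engine-specific): a pixel whose true brightness is 0.5, estimated with different sample counts.

```python
import random

def estimate_brightness(samples_per_pixel, seed=0):
    """One pixel's Monte Carlo estimate of a true brightness of 0.5:
    each random ray either hits the light (1.0) or misses (0.0)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples_per_pixel) if rng.random() < 0.5)
    return hits / samples_per_pixel

def noise(samples_per_pixel, trials=500):
    """Standard deviation of the estimate across many pixels = visible noise."""
    ests = [estimate_brightness(samples_per_pixel, seed=s) for s in range(trials)]
    mean = sum(ests) / trials
    return (sum((e - mean) ** 2 for e in ests) / trials) ** 0.5

for spp in (1, 16, 1024):
    print(f"{spp:5d} samples/pixel -> noise ~ {noise(spp):.3f}")
```

The noise only halves every time you quadruple the sample count (the classic 1/sqrt(N) behavior), which is why brute-forcing it is hopeless and filtering is the only realistic way out.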
Real-Time Denoising!
Cinematic denoisers are time-consuming and yet ineffective at such a low amount of samples (1, or even 0.5, or maybe 0.05 samples per pixel in games!).
We need something faster and more aggressive. Cyberpunk uses "NRD: NVIDIA Real-time Denoiser" as the solution. So how does this guy work?
1- Prerequisite knowledge
First, I need to mention that the only noisy thing we have here is the ray-tracing passes; textures and geometry can become noiseless with only TAA or DLSS.
We have a ray-traced lighting pass, a ray-traced reflections pass, and a ray-traced shadows pass.
In addition, we have:
Depth pass: indicating, per pixel, how far the rendered surface is from the camera.
Motion vector pass: indicating, per pixel, in which direction the surface is moving in the next frame, including the movement speed.
Roughness pass: exporting the roughness texture of each surface, mapped onto the surface. A roughness texture indicates how rough or glossy a surface is.
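A hypothetical per-pixel record of these auxiliary passes might look like this (field names are mine for illustration, not the engine's actual buffer layout):

```python
from dataclasses import dataclass

@dataclass
class GBufferPixel:
    """Per-pixel auxiliary data the denoiser can use as 'determinators'."""
    depth: float            # distance from camera, used to find geometric edges
    motion_vector: tuple    # (dx, dy) screen-space motion toward the next frame
    roughness: float        # 0 = mirror-glossy, 1 = fully rough/diffuse
    normal: tuple           # surface orientation, helps preserve fine detail

px = GBufferPixel(depth=12.5, motion_vector=(0.01, -0.002),
                  roughness=0.3, normal=(0.0, 1.0, 0.0))
print(px.depth)   # 12.5
```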
2- Temporal Accumulation
As I said before, game engines sample 1 or even fewer rays per pixel, which is quite low compared to the real world. Temporal accumulation is a solution that recycles the samples already calculated in past frames, using the motion vectors to approximate where a previously sampled surface is in the new frame. Accumulating the samples from many frames results in a better-sampled, less noisy final image. Unfortunately, this step can produce different artifacts: the lighting reacts to changes in the scene with (usually) visible delays, and if a new surface comes onto the screen, it can be noisier than the rest of the image since it wasn't sampled previously. But still, it's worth it when you see the benefits. In CP77, by default, it mixes a maximum of 15 frames; but this can be changed using "Diffuse/Specular max accumulated frame num(ber)" up to a maximum of 64 frames (higher values are ineffective as there are only 64 noise patterns in the game files), and -1 means infinite.
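A minimal sketch of the idea (my own simplification; NRD's real logic is far more involved) is an exponential moving average whose history length is capped, mirroring the "max accumulated frame num" setting:

```python
import random

def accumulate(history, new_sample, frame_count, max_frames=15):
    """Blend the new noisy sample into the accumulated history.
    max_frames caps the effective history length: larger values mean
    less noise but more ghosting/lag. A non-positive max_frames plays
    the role of the mod's -1 = unbounded history."""
    n = min(frame_count, max_frames) if max_frames > 0 else frame_count
    alpha = 1.0 / (n + 1)                    # weight of the newest sample
    return history + alpha * (new_sample - history)

rng = random.Random(42)
value = 0.0
for frame in range(200):
    noisy = 0.5 + rng.uniform(-0.4, 0.4)     # true signal 0.5 plus noise
    value = accumulate(value, noisy, frame, max_frames=15)
print(round(value, 2))                       # hovers near the true 0.5
```

With the cap at 15, the result stays close to the true value but keeps a little residual flicker; raising the cap smooths it further at the cost of slower reaction to scene changes.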
3- Spatial Filtering
Finally! We can get rid of all those ugly noises, hopefully. There are different approaches to spatial filtering; it simply is about blurring the image! But blurring can mess with all the edges, details, textures, etc. in the image, so I shouldn't leave it at "blurring the image". Well, thanks to the depth pass, we can have a 3D representation of the rendered image: considering the horizontal axis of the image as the X-axis and the vertical one as the Y-axis, the depth pass can be the Z-axis. This way we can know where the edges are and avoid blurring across them. We can also use the normal maps of surfaces, in addition, to preserve their details as well. (Depth and normal maps are called determinators here.)
The textures? Actually, a render engine first calculates the lighting, then multiplies the resulting value of each primary color (RGB) by the value of that color in the surface's color texture. If we do this multiplication before the denoising, we blur those textures too! But if we do it after the denoising, we keep almost all the detail in those textures. The lighting itself, after all, can still be blurry as hell. :((
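Here's a tiny 1D sketch of why the multiplication order matters (toy numbers and a toy blur, purely illustrative):

```python
def box_blur(values):
    """Toy one-neighbor-each-side average, standing in for the spatial filter."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - 1), min(len(values), i + 2)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

lighting = [1.0, 1.0, 1.0, 1.0]   # smooth lighting signal
albedo   = [1.0, 0.0, 1.0, 0.0]   # high-frequency checker texture

# Denoise the lighting first, multiply by the texture after -> texture sharp:
sharp = [l * a for l, a in zip(box_blur(lighting), albedo)]
# Multiply first, then denoise -> the checker goes through the blur and smears:
smeared = box_blur([l * a for l, a in zip(lighting, albedo)])
print(sharp)    # [1.0, 0.0, 1.0, 0.0] -> texture fully preserved
print(smeared)  # the checker pattern gets averaged away
```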
The question is: how much blur should we have? Here comes "Denoising Radius", which determines how much to blur the RT pass. The default is 60, meaning each pixel will be blurred together with neighboring pixels up to 60px away, if the determinators permit. More noise needs more denoising, so we need a higher denoising radius. But if we increase it more than we need, we also lose detail in the shadows and the lighting as a side effect. In my mod it's 30, but if you're OK with a little bit of noise and prefer more detailed lighting, tweak it to your own taste.
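And a toy 1D version of the whole idea (my own sketch, not NRD's actual filter): blur within a radius, but let the depth "determinator" veto neighbors that belong to another surface:

```python
def denoise(signal, depth, radius, depth_threshold=1.0):
    """Edge-aware 1D blur: average neighbors within `radius` pixels, skipping
    any neighbor whose depth differs too much (a geometric edge). `radius`
    plays the role of the "Denoising Radius": bigger means smoother lighting
    but more smeared detail within each surface."""
    out = []
    for i in range(len(signal)):
        total, count = 0.0, 0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            if abs(depth[j] - depth[i]) <= depth_threshold:  # same surface?
                total += signal[j]
                count += 1
        out.append(total / count)
    return out

# Two surfaces: a near bright wall (depth 5) and a far dark wall (depth 50).
signal = [0.9, 0.7, 0.8, 0.6, 0.1, 0.0, 0.2, 0.1]   # noisy lighting values
depth  = [5.0, 5.0, 5.0, 5.0, 50.0, 50.0, 50.0, 50.0]
print([round(v, 2) for v in denoise(signal, depth, radius=3)])
# [0.75, 0.75, 0.75, 0.75, 0.1, 0.1, 0.1, 0.1]
```

Each wall gets smoothed internally, but the bright/dark boundary between them survives because the depth check refuses to average across it; without that check, a radius of 3 would bleed the two walls into each other.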