
3. Real-Time Path Tracing

3.3 Feature Buffers

With path tracing it is possible to generate feature buffers like those shown in Figure 3.2. Moreover, in a system with only one primary sample per pixel, the feature buffers and primary rays can be computed with rasterization, as done in [1, 46, 47]. This also means that the feature buffers from the first ray are 'noise-free'. From this first ray intersection, for example the depth, 3D world position, texture and normal of the surface can be collected and used for post-processing. Furthermore, unlike in photographic image denoising, these feature buffers can be used to guide edge avoidance when reconstructing the final image from a low sample count.

Surface normal is a 3D value describing the normal at the hit point of the primary ray.

The normal buffer can also be projected to the camera as a Two-Dimensional (2D) value, a 'view-space surface normal'. The surface normal alone is usually not enough to avoid edges in a 3D scene, as objects at different depths can share the same normal, as seen in Figure 3.2 where the normals of the tables blend with the floor.
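As a sketch, projecting a world-space normal into view space is just a rotation by the view matrix; the helper below is illustrative, and the matrix layout and names are assumptions rather than part of any particular renderer:

```python
import numpy as np

def view_space_normal(world_normal, view_matrix):
    """Rotate a world-space normal into view space and return its 2D projection.

    view_matrix is assumed to be a 4x4 rigid camera transform; only its
    rotation part affects normals."""
    rot = view_matrix[:3, :3]                # rotation block of the view matrix
    n = rot @ np.asarray(world_normal, float)
    n /= np.linalg.norm(n)                   # renormalize after rotation
    return n[:2]                             # 'view-space surface normal' (x, y)

# A normal pointing along +z in world space, camera rotated 90 degrees about y:
theta = np.pi / 2
view = np.eye(4)
view[:3, :3] = [[np.cos(theta), 0, np.sin(theta)],
                [0, 1, 0],
                [-np.sin(theta), 0, np.cos(theta)]]
print(view_space_normal([0.0, 0.0, 1.0], view))  # close to [1, 0]
```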

Depth is the distance along the primary ray from the surface point to the camera. Depth is related to the world-space coordinates of the hit point and can be mapped back to them by an inverse transform using the view-space x and y coordinates. As the depth values may vary considerably even on a single surface, depth by itself does not help edge avoidance much. A better way to utilize the depth values is, for example, to calculate the depth gradient ∇Z and use it as done in the work by Schied et al. [1].
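A minimal sketch of computing such a depth gradient with finite differences; the array layout is an assumption and NumPy is used only for illustration:

```python
import numpy as np

def depth_gradient(depth):
    """Screen-space depth gradient, usable for depth-based edge-stopping weights."""
    dz_dy, dz_dx = np.gradient(depth)     # central differences along y and x
    return dz_dx, dz_dy

# A tilted plane: depth varies smoothly, so the gradient is constant and a
# depth test normalized by |grad Z| stays small everywhere on the surface.
y, x = np.mgrid[0:4, 0:4]
depth = 1.0 + 0.5 * x
dz_dx, dz_dy = depth_gradient(depth)
print(dz_dx[0])   # → [0.5 0.5 0.5 0.5]
```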

Albedo describes the texture color at the hit point. It is usually helpful to store the texture information in a separate buffer: it can, for example, be divided out of the noisy image so that the untextured irradiance is processed and the albedo is multiplied back in afterwards, as done in related work [1, 46, 48].
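This demodulate/denoise/remodulate idea can be sketched as follows; the function names and the epsilon guard are illustrative choices, not taken from the cited works:

```python
import numpy as np

EPS = 1e-4  # guard against division by zero on black albedo

def demodulate(noisy_rgb, albedo):
    """Remove texture detail before filtering: irradiance = color / albedo."""
    return noisy_rgb / np.maximum(albedo, EPS)

def remodulate(filtered_irradiance, albedo):
    """Re-apply the (noise-free) albedo after denoising."""
    return filtered_irradiance * albedo

# Round trip: demodulate, (denoising would happen here), remodulate.
noisy = np.array([0.2, 0.4, 0.1])
albedo = np.array([0.5, 0.8, 0.25])
irradiance = demodulate(noisy, albedo)
print(remodulate(irradiance, albedo))  # recovers [0.2, 0.4, 0.1]
```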

The variance buffer describes the statistics of the sample variance. In a system with only one sample per pixel, the variance cannot be estimated from the single sample; instead it can be estimated, for example, spatially from neighboring samples or from samples of previous frames, as done by Schied et al. [1]. In a system with multiple samples per pixel, the sample variance can be estimated directly.

The estimation is usually done only on the luminance of the RGB values, which yields a single-channel value.

Figure 3.2. Examples of feature buffers generated with path tracing: normal buffer, depth buffer, albedo buffer and variance buffer.
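A minimal sketch of this luminance-based variance estimate for a multi-sample-per-pixel system; the Rec. 709 luma weights and the array shapes are assumptions:

```python
import numpy as np

def luminance(rgb):
    """Rec. 709 luma weights; rgb of shape (..., 3) -> single channel."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def sample_variance(samples_rgb):
    """Per-pixel sample variance of luminance over the sample axis.

    samples_rgb: array of shape (n_samples, height, width, 3)."""
    lum = luminance(samples_rgb)          # (n_samples, h, w)
    return lum.var(axis=0, ddof=1)        # unbiased estimate, one channel

rng = np.random.default_rng(0)
samples = rng.random((8, 2, 2, 3))        # 8 spp on a 2x2 image
print(sample_variance(samples).shape)     # → (2, 2)
```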

In systems where there are multiple primary samples and the samples are also randomized, the G-buffer is noisy as well. However, for a system like the one described in the next section of this thesis, it is better to use just one primary sample per pixel to achieve real-time performance.

3.4 Real-Time Path Tracing

Current real-time path tracing implementations are usually based on very low sample counts, for example only 1 sample per pixel, combined with a fast blurring denoising filter to reconstruct the final image [1, 46, 48]. With dedicated hardware for ray traversal and, for example, the power of cloud computing, an interesting direction for real-time implementations would be to generate more samples per pixel. For offline rendering without a time budget this can also mean that the primary samples from the camera vary spatially, so that the final fully converged image is anti-aliased. For real-time systems this would require more ray traversal computation and would make the feature buffers noisy, which would further complicate the post-processing methods. Moreover, there are already real-time anti-aliasing methods that produce good-quality anti-aliased results, for example Temporal Anti-Aliasing (TAA) [49], and TAA can be successfully applied in real-time after other denoising methods, as done in [1, 46].
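The core of such temporal accumulation can be sketched as an exponential moving average over frames; the blend factor and the assumption that the history is already reprojected are illustrative simplifications of full TAA:

```python
import numpy as np

def taa_accumulate(history, current, alpha=0.1):
    """One TAA-style temporal blend: exponential moving average of frames.

    history is assumed to be already reprojected with motion vectors; alpha
    trades ghosting (small alpha) against residual aliasing (large alpha)."""
    return (1.0 - alpha) * history + alpha * current

# Feeding a constant frame repeatedly converges toward that frame:
frame = np.full((2, 2), 1.0)
accum = np.zeros((2, 2))
for _ in range(50):
    accum = taa_accumulate(accum, frame)
print(accum[0, 0])  # close to 1.0
```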

Furthermore, temporal reuse of samples in real-time path tracing is also used, as in related work [1, 46, 48]. This method requires post-processing of the samples by moving them with motion vectors and dropping occluded samples from reuse.

Figure 3.3. Example path tracing setup. The primary rays from the camera are illustrated as red arrows and the secondary rays as blue arrows. The dashed arrows are shadow rays. In this example there is one primary ray per pixel and two secondary rays.

Temporal reuse of samples causes problems in dynamic environments, such as moving lights and animations, where sample intensities may change heavily between frames.

Furthermore, a method to preprocess the reuse of samples has also been proposed by Schied et al. [50]. In addition, in a distributed setting where the path tracing is split between different computing units, temporal data is more difficult to reuse; for example, with a moving camera the smaller rendered tiles do not share temporal data.
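The motion-vector reprojection with occlusion rejection described above can be sketched as follows; the per-pixel loop, the integer motion vectors and the depth tolerance test are illustrative simplifications:

```python
import numpy as np

def reproject(prev_color, prev_depth, motion, cur_depth, depth_tol=0.05):
    """Fetch last frame's sample through motion vectors, rejecting occlusions.

    motion[y, x] holds the integer pixel offset back to the previous frame;
    a sample is dropped when the reprojected depth disagrees with the current
    depth by more than depth_tol. All names here are illustrative."""
    h, w = cur_depth.shape
    color = np.zeros_like(prev_color)
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            py, px = y + motion[y, x, 0], x + motion[y, x, 1]
            if 0 <= py < h and 0 <= px < w:
                # occlusion test: depths must agree for the sample to be reused
                if abs(prev_depth[py, px] - cur_depth[y, x]) < depth_tol:
                    color[y, x] = prev_color[py, px]
                    valid[y, x] = True
    return color, valid

# A static 2x2 scene: zero motion and identical depths keep every sample.
prev = np.ones((2, 2, 3))
depth = np.full((2, 2), 5.0)
mv = np.zeros((2, 2, 2), dtype=int)
color, valid = reproject(prev, depth, mv, depth)
print(valid.all())  # → True
```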

However, in this master's thesis temporal pre- and post-processing methods are not used, in order to simplify the denoising process, and denoising is applied only to a single frame of samples. Temporal denoising is an interesting topic and the direction is explored further in Chapter 6. A real-time path tracing setup without temporal reuse of samples could be as follows: one primary sample per pixel and two secondary paths from the first intersection. From each of these intersections a shadow ray is cast to a random point on a random light. This setup is illustrated in Figure 3.3.
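The described setup can be sketched structurally as follows; all scene-query helpers here are hypothetical stand-ins for a real ray tracer, so only the control flow (one deterministic primary hit, two secondary paths, one shadow ray per intersection) reflects the setup above:

```python
import random

# The helpers below are stand-ins for a real ray tracer, not actual scene code.
def first_hit(pixel):            # rasterized primary hit (noise-free)
    return {"pos": pixel, "radiance": 0.1}

def trace_secondary(hit):        # one random bounce from the first hit
    return {"pos": hit["pos"], "radiance": random.random() * 0.5}

def shadow_ray(hit):             # visibility of a random point on a random light
    return 1.0 if random.random() < 0.8 else 0.0

def shade_pixel(pixel, n_secondary=2):
    """One primary sample, n_secondary bounce paths, a shadow ray per hit."""
    hit = first_hit(pixel)                       # primary: deterministic
    color = hit["radiance"] * shadow_ray(hit)    # direct light at first hit
    for _ in range(n_secondary):                 # two secondary paths
        bounce = trace_secondary(hit)
        color += bounce["radiance"] * shadow_ray(bounce) / n_secondary
    return color

random.seed(0)
print(shade_pixel((0, 0)))
```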

In the example setup, one consideration is that the primary rays are rasterized in order to generate the noise-free feature buffers and to decrease the computational workload of multiple primary samples. As such, multiple samples per pixel in this work means that the additional samples are generated only after the first primary ray; for example, 8 spp in this setup means that 8 secondary rays are cast after the first primary ray. This is in contrast to an offline setup, for example the one used by Bako et al. [12], where the primary rays are also randomized, which produces noisy feature buffers for reconstruction. Path-traced images for different numbers of samples per pixel with the previously described online setup are shown in Figure 3.4.

Figure 3.4. Noisy path tracing outputs for low sample counts: 8, 16, 32 and 64 spp.

The Monte Carlo integration of the path tracing result is a time-consuming process. However, as the samples are independent of each other, the problem is 'embarrassingly parallel', meaning that the computation of each sample can be distributed and computed separately. For example, with a system able to compute 1 spp path tracing in real-time at 60 fps (∼16 ms per frame), a distributed system of 64 such units can generate an effective 64 spp when the results are accumulated. However, in practice the time needed to path trace more samples per pixel may instead be spent on denoising a lower-spp result, achieving better quality within the same computation time. The trade-off between spp and the denoising results is explored further in this thesis.
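A minimal sketch of this accumulation: averaging independent 1 spp estimates from several units behaves like a single higher-spp estimate. The Gaussian noise model here is a synthetic stand-in for actual path tracing noise:

```python
import numpy as np

def accumulate(unit_results):
    """Average independent 1 spp estimates from n units into an n-spp result."""
    return np.mean(unit_results, axis=0)

# 64 units each produce an independent 1 spp estimate of the same 4x4 image;
# averaging them is statistically an effective 64 spp estimate, so the error
# of the combined image is far smaller than that of any single unit.
rng = np.random.default_rng(1)
truth = np.full((4, 4), 0.5)
units = [truth + rng.normal(0, 0.2, truth.shape) for _ in range(64)]
combined = accumulate(units)
print(np.abs(combined - truth).mean() < np.abs(units[0] - truth).mean())  # → True
```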