But it is also possible that seeing the pictures, they just lost their heads, sold their PS5 and rushed to buy an XSX.

I don't think you are convincing anyone here with that picture.
Which UE5 game is most like CP?
LoD transitions in CP are most obvious when driving fast. I’d like to see the UE5 equivalent to see how that’s handled.
If they'd want to buy an XsX for awesome water they'd try Sea of Thieves.

But it is also possible that seeing the pictures, they just lost their heads, sold their PS5 and rushed to buy an XSX.
If they'd want to buy an XsX for awesome water they'd try Sea of Thieves.

Funny you said that, I bought Sea of Thieves on PS5 and it has lovely water. And they dropped a PS5 Pro patch.
RTX 'rays per second' counts are really off. RTX was introduced as 10 gigarays per second. Real-world actual rays per second is a tiny, tiny fraction of this.
10 gigarays of actual performance should provide really high-quality results. I feel the promise of realtime raytracing was overstated, and though valuable, it's a long way from the early dreams.
What's the minimum rays-per-second we'd actually need to achieve for 1080p60 full pathtracing, and how far are we from that? A recursion depth of 7 provides good results in classic Monte Carlo renderers, so 2 million pixels x 7 x 60fps = 840 million rays per second.
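As a back-of-the-envelope check on that figure, here's a minimal sketch of the same budget, assuming one ray per bounce per pixel and using the exact 1080p pixel count rather than a rounded 2 million:

```python
# Minimum ray budget for "full" path tracing at 1080p60.
# Assumes one ray per bounce per pixel and ignores shadow/AO rays,
# so this is a floor, not a realistic workload estimate.

width, height = 1920, 1080       # 1080p
fps = 60                         # target frame rate
recursion_depth = 7              # bounces per path, the classic Monte Carlo rule of thumb

pixels = width * height                     # 2,073,600
rays_per_frame = pixels * recursion_depth   # ~14.5 million
rays_per_second = rays_per_frame * fps      # ~871 million (vs the rounded 840 million above)

print(f"{rays_per_second / 1e6:.0f} million rays/sec minimum")
```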
What are the bottlenecks preventing this? I'm guessing we're not anywhere near attaining that level of raw performance and we're going to be hacking graphics with RT components for years to come.
Depends how they’re counting gigarays. Is each BVH node and triangle intersection counted as a separate ray? I assume so, as it’s impossible to define a metric that encapsulates the entire traversal of a single ray; that’s entirely content specific.

Could just be triangle intersect tests. The reference was to the 'RT Cores'. These accelerate BVH traversal and triangle intersects. From the source Turing whitepaper:
Turing can deliver far more Giga Rays/Sec than Pascal on different workloads, as shown in Figure 21. Pascal is spending approximately 1.1 Giga Rays/Sec, or 10 TFLOPS / Giga Ray to do ray tracing in software, whereas Turing can do 10+ Giga Rays/Sec using RT Cores, and run ray tracing 10 times faster.
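To make the ambiguity concrete, here's an illustrative sketch of how the same advertised figure reads under the two interpretations discussed above. The per-ray traversal costs are assumed ballpark numbers, not anything from the whitepaper:

```python
# Illustrative only: the same "Giga Rays/Sec" figure under two interpretations.
# The per-ray traversal costs below are assumptions for the sake of the example;
# real values vary wildly with scene content and BVH quality.

advertised_rate = 10e9           # "10 Giga Rays/Sec"

# Assumed average work to trace one ray through a typical game BVH:
node_tests_per_ray = 30          # bounding-box tests during traversal (assumption)
tri_tests_per_ray = 4            # triangle intersection tests at the leaves (assumption)
tests_per_ray = node_tests_per_ray + tri_tests_per_ray

# Interpretation A: the figure counts completed rays, so it's 10 billion rays/sec,
# but only for whatever content it was measured on.
rays_if_counting_rays = advertised_rate

# Interpretation B: the figure counts individual box/triangle tests, so completed
# rays/sec is the advertised rate divided by the per-ray traversal cost.
rays_if_counting_tests = advertised_rate / tests_per_ray    # ~294 million rays/sec

print(f"A: {rays_if_counting_rays / 1e9:.0f} Grays/s, B: {rays_if_counting_tests / 1e6:.0f} Mrays/s")
```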
Yeah. What he calls one of the path tracing problems in Cyberpunk 2077 has more to do with post-processing, like automatic exposure, than with path tracing.
In Cyberpunk 2077 you can also increase the number of rays via a mod, and past a relatively early point the accuracy hardly improves. With ReSTIR, which Cyberpunk 2077 got last year, you get infinite bounces.
I'll just refine my definition of the end-goal here. We're presently running at well below one sample per pixel. For ideal RT pathtracing, we need 1 sample per pixel for reflections and refractions, with multiple rays, and several samples per pixel for lighting. I guess we should stop looking at pure Monte Carlo and look at the optimised offline renderers like Cycles. The main issue is generating enough information that we eliminate noise, up to whatever standard our theoretical ideal denoising solution can work with.
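For a sense of why raw samples alone get expensive, here's a minimal sketch of the standard Monte Carlo scaling, where noise falls as 1/√N. The baseline and target noise levels are made-up illustrative numbers:

```python
import math

# Standard Monte Carlo scaling: pixel noise (standard error) falls as 1/sqrt(samples).
# The baseline and target noise levels are assumed, purely for illustration;
# the point is the quadratic cost of buying noise reduction with samples alone.

baseline_noise = 1.0     # relative noise at 1 sample per pixel (assumed)
target_noise = 0.125     # level we assume an ideal denoiser could clean up acceptably

# error(N) = baseline / sqrt(N)  =>  N = (baseline / target)^2
samples_needed = math.ceil((baseline_noise / target_noise) ** 2)
print(samples_needed)    # 64 samples per pixel just to cut the noise by 8x
```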
We are very far from what I've experienced. It would totally depend on the content (as someone else mentioned). I just made a thread about the requirements for games to be considered fully path-traced and it's a bear. From my experience, smoke, fog, clouds and fur/hair were the things that brought CPUs to their knees. Most of the participating media took hours due to the ray-marching. The fur was a big bear in interior scenes that had one light source coming into the room. Getting rid of the noise took several hours of render time from all the samples we had to cast. It's probably much, much faster now with denoising being incorporated into the pipelines. I haven't worked in film since 2016. I'm working with Unreal 5.0 nowadays.

Can you give a ballpark time-to-render in an offline renderer of gaming-type complexity? I guess that comparison is fraught with incomparable parameters, such as memory speed of render farms versus consumer-level gear. But let's say a movie frame takes 180 seconds to render; we can do a bare-minimum performance-delta consideration between the horsepower that takes 180 seconds and what it'll take to get that down to realtime. If instead it's only 3 seconds on a workstation, we're not as far.
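Taking the post's two hypothetical offline frame times at face value, the implied throughput gap looks like this (the 60 fps realtime target is my assumption; the hardware mismatch the post mentions is ignored):

```python
# Rough speedup factors implied by the two hypothetical offline render times above.
# 180 s and 3 s per frame come from the post; the 60 fps realtime target is assumed.

realtime_frame = 1 / 60                      # ~16.7 ms per frame

for offline_frame in (180.0, 3.0):           # seconds per offline frame
    speedup = offline_frame / realtime_frame
    print(f"{offline_frame:>5.0f} s/frame -> ~{speedup:,.0f}x more throughput needed")

# 180 s/frame -> ~10,800x; 3 s/frame -> ~180x, ignoring differences between
# render-farm nodes and consumer GPUs, as the post itself notes.
```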