PS5 Pro *spawn

Rereading the tweet, the manager was likely talking about the countless PS4 games still stuck at 30fps (like Bloodborne).

See, you are still gaslighting about it. "But it shouldn't be the case," as if it were actually the case. That's a new goalpost too.
It wasn't something on my mind until you pointed it out; that's not why I posted that tweet. There just wasn't any information at the time on how the PS5 Pro handled unpatched titles.
 
Honestly, anything that isn't a first-person shooter and doesn't need aiming is fine on a controller, aside from edge cases like DMC.

PS: forgot to add: 30 fps with an acceptable amount of input lag.

Also, I'm sorry, but most video game cutscenes look much better at 30 fps with motion blur. Those 60 fps or even 120 fps cutscenes look like soap operas from Turkey.
I had a fine time with Destiny at 30fps, and I spent countless hours playing it on console, and at a lower resolution.

I've switched over to PC, but I never felt that many titles were ruined for me by running at 30fps. I think The Witcher 3 was the only one that caused me issues, as some boss fights required faster reactions than the latency at 30fps allowed.
 
30 fps in a shooter can be enjoyable, but in my experience aiming can get kind of hard, especially when you need to make small adjustments.

Destiny did it right, with a lot of aim assist and low input lag. But that's Bungie; at least they're good at that.
 
Why would I need to look at this code when there is a whitepaper and guide available that explain what SER does, where to use it, and how? In the code you linked, integrate_indirect_pass is responsible for computing indirect lighting. It handles sorting materials before shading them, as I mentioned earlier. So I'll ask one more time: how does this relate to ReSTIR, which is a completely unrelated pass?
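To spell out what "sorting materials before shading" buys you, here's a minimal illustrative sketch (toy Python with made-up material IDs, not the actual RTXDI code): hits are bucketed by material ID so that each per-material shader runs over a coherent batch instead of divergent threads.

```python
# Illustrative sketch: group ray hits by material ID before shading, so work
# that runs the same material shader executes together (the coherence win that
# reordering schemes are after). Hypothetical data, not RTXDI code.
from collections import defaultdict

hits = [
    {"pixel": 0, "material": 2}, {"pixel": 1, "material": 0},
    {"pixel": 2, "material": 2}, {"pixel": 3, "material": 1},
]

def shade(material_id, batch):
    # Stand-in for a per-material shading kernel.
    return [(h["pixel"], f"shaded_with_mat_{material_id}") for h in batch]

# Bucket hits by material, then shade one coherent batch at a time.
buckets = defaultdict(list)
for h in hits:
    buckets[h["material"]].append(h)

results = []
for material_id, batch in sorted(buckets.items()):
    results.extend(shade(material_id, batch))

print(results)
```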
If you had even tried to look at the included header file, you would immediately realize that the integration pass is part of RTXDI's (ReSTIR) algorithm, but you obviously don't know any better ...
Nobody asked you for these links, and the discussion was never about tile-based GPUs in the first place. I am perfectly aware of how tile-based GPUs reduce memory accesses by tiling, which has been used for decades not only in GPUs but also as a general software optimization. I doubt hardware tiling is useful anywhere today besides particle rendering on modern GPUs, because you still need to store the buffer with all the vertices prior to tiling the screen, and unless that buffer fits into the cache it isn't feasible; the more vertices you need to store, the worse it gets.

Even if tilers had any advantage for G-buffer rendering in modern applications, which I sincerely doubt, it would be outweighed by the minimal time spent in G-buffer passes in modern games. You can't achieve a 10x speedup by accelerating a 2-3 ms fraction of a frame. You don't even need the heavy machinery that comes with a complex TBDR, because modern games are not primarily limited by memory-bound passes. That's why, as I mentioned earlier, even classic rasterization (and, for god's sake, by rasterization I meant the entire pipeline, not just the G-buffer/depth or shadow map passes) is not generally limited by memory bandwidth.
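To put numbers on that 10x point, a quick back-of-the-envelope (the frame timings below are assumed, not measured):

```python
# Amdahl's-law style check: speeding up only the G-buffer portion of a frame
# cannot give a large overall speedup. Frame timings here are illustrative.
frame_ms = 16.7          # ~60 fps frame budget
gbuffer_ms = 2.5         # assumed time spent in G-buffer passes
other_ms = frame_ms - gbuffer_ms

for gbuffer_speedup in (2, 10, 1000):
    new_frame_ms = other_ms + gbuffer_ms / gbuffer_speedup
    print(f"G-buffer {gbuffer_speedup}x faster -> "
          f"whole frame {frame_ms / new_frame_ms:.2f}x faster")
# Even a 1000x faster G-buffer only buys ~1.18x overall with these numbers.
```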
The snarky reply about TBR architectures was to demonstrate that merging/fusing render passes can be a performance win due to the reduced memory traffic of having fewer rendering passes. I find your claim that the industry is "evolving towards higher math density" to be extremely contentious now that the most popular AAA PC & console game engine has added yet ANOTHER rendering pass that performs 64-bit atomic read/modify/write memory operations to render geometry into a visibility buffer, with other hardware vendors now scrambling to implement said memory traffic compression scheme for this as well, and with results showing handheld PCs churning really hard to attain low performance. The industry is also looking to use persistent threads or Work Graphs to reduce the amount of GPU work starvation that happens with cache flushes ...

You clearly have no idea what hardware vendors have to do behind the scenes to optimize their memory system to enable high-end modern rendering ...
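For anyone following along, the 64-bit atomic visibility-buffer write mentioned above works roughly like this (an illustrative sketch with made-up values; on the GPU the max() would be a 64-bit atomic max such as InterlockedMax):

```python
# Rough sketch: pack depth into the high 32 bits and a triangle ID into the low
# 32 bits, then keep the per-pixel maximum so one atomic op resolves visibility.
import struct

def pack(depth: float, triangle_id: int) -> int:
    # For non-negative floats the raw IEEE-754 bit pattern preserves ordering,
    # so packed values can be compared directly as integers.
    depth_bits = struct.unpack("<I", struct.pack("<f", depth))[0]
    return (depth_bits << 32) | (triangle_id & 0xFFFFFFFF)

# One 64-bit cell per pixel; tiny 4-pixel "framebuffer" for the example.
visibility = [0] * 4

# (pixel, depth, triangle_id), made-up coverage; assume larger depth = closer (reverse-Z).
fragments = [(1, 0.25, 7), (1, 0.80, 3), (2, 0.10, 9)]
for pixel, depth, tri in fragments:
    visibility[pixel] = max(visibility[pixel], pack(depth, tri))

for pixel, packed in enumerate(visibility):
    print(pixel, "triangle", (packed & 0xFFFFFFFF) if packed else None)
```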
It seems your lack of understanding of LLM architecture prevents you from grasping a simple concept: LLMs are bandwidth-limited because the attention phase doesn't have enough parallelism to saturate a GPU. This is why batching and speculative decoding help improve GPU utilization. See, I don't need a thousand irrelevant links to show how wrong you are; there are numerous benchmarks with different batch sizes that clearly demonstrate you have no idea about LLM architectures or their bottlenecks.
I guess Intel's advice must be "irrelevant and wrong too" according to you, since they seem to think that the prefill phase is peanuts compared to the token phase (which EVERYONE uses as the benchmark for AI models in general) ...
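For what it's worth, the bandwidth-vs-compute split falls out of simple arithmetic; here's a rough sketch (all hardware numbers are assumed for illustration, not measurements):

```python
# Back-of-the-envelope for why single-stream token generation tends to be
# memory-bandwidth-bound while prefill / large batches become compute-bound.
params = 7e9               # a ~7B-parameter model
weights_gb = 14.0          # fp16 weights: ~2 bytes per parameter
mem_bw_gbs = 1000.0        # assumed GPU memory bandwidth (GB/s)
peak_tflops = 300.0        # assumed fp16 throughput (TFLOP/s)

for batch in (1, 8, 64, 512):
    # Each decode step streams roughly all weights once (shared by the whole
    # batch) and spends ~2 FLOPs per parameter per generated token.
    t_mem = weights_gb / mem_bw_gbs
    t_compute = 2 * params * batch / (peak_tflops * 1e12)
    bound = "bandwidth" if t_mem > t_compute else "compute"
    tok_s = batch / max(t_mem, t_compute)
    print(f"batch {batch:4d}: ~{tok_s:8.0f} tok/s ({bound}-bound)")
```

With these assumed figures the crossover only happens at a batch of a few hundred, which is the same reason batching and speculative decoding raise utilization.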
 
On PS5 I can count those games on the fingers of one hand. Incredible that even Sony execs are being gaslit by their competitor's narrative.

And yeah, you can count on GTA6 to have a mode targeting 60fps. When was the last time a game targeting 30fps was released on PS5? 2022?

30fps is dead on PlayStation, and Cerny made it official. Devs would be suicidal to release their games (on PS5 Pro) without a 60fps mode.
Nothing Rockstar does can be "suicidal". They could release GTA VI at 15 fps only and it would still be the best-selling game of the decade.
 