Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Do you think a 2070 Super - 2080 level GPU is very powerful? I think it is.

Periodic reminder: a 9 TFLOP RDNA1 GPU will never be as computationally powerful as an RTX 2070 Super, let alone an RTX 2080.

The Turing GPUs process INT32 concurrently with FP32 thanks to dedicated INT ALUs: an RTX 2070 Super does 9 TFLOPs FP32 + 9 TIOPs INT32.
Game engines can use INT32 in up to 33% of their instructions, and on Turing those integer instructions run alongside the FP32 work instead of displacing it.


RDNA1 GPUs have no separate integer pipe: INT32 instructions execute on the same vector ALUs as FP32, so every integer op consumes an FP32 issue slot.
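To put rough numbers on that, here's a back-of-the-envelope issue-slot model. This is a toy sketch, not a benchmark: the 33-per-100 instruction mix is taken from the figure above, and effectiveFp32Tflops is an illustrative helper, not anything from either vendor.

[code]
#include <algorithm>
#include <cstdio>

// Issue-slot model: how many FP32 TFLOPs actually land for a shader stream
// that issues `intPer100Fp` INT32 instructions per 100 FP32 instructions.
double effectiveFp32Tflops(double peakTflops, double intPer100Fp, bool dedicatedIntPipe)
{
    const double fp = 100.0;
    const double integer = intPer100Fp;
    // Turing-style: a separate INT pipe runs integer ops alongside FP ops,
    // so (while INT <= FP) the FP pipe stays the bottleneck.
    // RDNA1-style: INT32 shares the vector ALUs, so every integer op
    // consumes an FP32 issue slot.
    const double cycles = dedicatedIntPipe ? std::max(fp, integer) : fp + integer;
    return peakTflops * fp / cycles;
}

int main()
{
    std::printf("Turing-style: %.2f effective TFLOPs\n", effectiveFp32Tflops(9.0, 33.0, true));  // 9.00
    std::printf("RDNA1-style:  %.2f effective TFLOPs\n", effectiveFp32Tflops(9.0, 33.0, false)); // 6.77
}
[/code]

Under those assumptions, the same instruction stream costs the shared-ALU design roughly a quarter of its effective FP32 throughput.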


Point being: a 9 TFLOP Navi 10 is a midrange solution that will not be computationally competitive with a 9 TFLOP Turing.
It can only compete in bandwidth- and fillrate-limited scenarios.

If "squeezed" well enough, a Turing GPU will surpass an RDNA1 GPU with similar FP32 throughput by a large margin, and experienced devs should be able to see the difference.


Nice but not relevant
Unless you pay attention to the details:
- 1 TB storage
- Logo for the Decima Engine (Guerrilla's engine used in Killzone Shadow Fall, Horizon Zero Dawn and Death Stranding)



The 5700 XT is 9 TF at its 1755 MHz game clock. I think 9.216 TF at 2 GHz is a tier above the 5700 XT.
Only if you assume the 5700 XT isn't a balanced GPU and is substantially bottlenecked by fillrate.
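For reference, the arithmetic behind both figures. A quick sketch: the 128 FLOPs per CU per clock is standard RDNA math (64 SPs, one FMA each per clock), while the 36 CU @ 2 GHz configuration is the rumored one being referenced, assumed here for illustration and not confirmed.

[code]
#include <cstdio>

// Peak FP32 TFLOPs for an RDNA part: 64 stream processors per CU,
// 2 FLOPs (one FMA) per SP per clock -> 128 FLOPs per CU per clock.
double peakTflops(int cus, double clockGhz)
{
    return cus * 128.0 * clockGhz / 1000.0;
}

int main()
{
    std::printf("5700 XT, 40 CUs @ 1.755 GHz game clock: %.3f TF\n", peakTflops(40, 1.755)); // 8.986
    std::printf("Rumored 36 CUs @ 2.0 GHz:               %.3f TF\n", peakTflops(36, 2.0));   // 9.216
}
[/code]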
 
BTW OsirisBlack, the GAF insider who's still insisting both consoles are very similar in performance (11.4 TFLOP Series X vs 12 TFLOP PS5), is the user who posted this in March 2016 (20 months before the PS4 Pro's release):

https://www.neogaf.com/threads/ps4k...r-w-clock-new-cpu-price-tent-q1-2017.1202462/

He also claims the last time he got updated on the consoles' compute throughput was in November.

Which was pretty much bullshit: there is no 4K Blu-ray player on the PS4 Pro, and Deep Down never saw the light of day. Plus I doubt a substantial CPU upgrade was ever in the cards.
 
BTW OsirisBlack, the GAF insider who's still insisting both consoles are very similar in performance (11.4 TFLOP Series X vs 12 TFLOP PS5), is the user who posted this in March 2016 (20 months before the PS4 Pro's release):

https://www.neogaf.com/threads/ps4k...r-w-clock-new-cpu-price-tent-q1-2017.1202462/

He also claims the last time he got updated on the consoles' compute throughput was in November.
Not only was it largely wrong, especially the part about a CPU upgrade and a UHD drive six months before the console came out, it was also posted after Digital Foundry and Kotaku had their articles up on the "PS4K".

http://kotaku.com/sources-sony-is-working-on-a-ps4-5-1765723053

On the other hand, you had Ars Technica and DF reporting on specs that could reasonably be expected, with Ars Technica saying a 16 nm FinFET chip would bring roughly 2x the PS4 (the day before his GAF post).

https://arstechnica.com/gaming/2016/03/playstation-4k-4-5-announced-october/

Oh, and it was posted 6 months before the Pro came out, not 20 months. Right after GDC, actually. So basically, like a lot of his other posts on GAF, it's a lot of fluff, I would say.
 
They never said "hardware RT". They said "hardware accelerated RT".
Which was pretty much bullshit: there is no 4K Blu-ray player on the PS4 Pro, and Deep Down never saw the light of day. Plus I doubt a substantial CPU upgrade was ever in the cards.
These last two posts really highlight how subjective language is and how different people interpret the same info in different ways!
 
What's the difference? "Hardware RT" means RT accelerated using hardware.

People are getting confused about whether it's like the RTX GPUs versus some other solution, like CUs that can be used to accelerate ray tracing, I think. In the end it doesn't matter, since both are being done on the hardware? As long as it isn't like NV Pascal's ray tracing 'hardware support', then.
 
"Hardware accelerated" could mean both compute on GPU, or fixed function on GPU. It could even mean on CPU, which is hardware too. So we can't get rid if this kind of discussion completely yet.
FF GPU really seems almost for sure to me, but one could assume marketing terminology going on, so doubts are still justified eventually.
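Just to illustrate why the label alone can't settle it: on PC, the only thing software can query is whether the driver exposes DXR at all, not what backs it. A minimal D3D12 sketch (it assumes an already-created ID3D12Device5, and supportsDxr is a hypothetical helper name):

[code]
#include <d3d12.h>

// Sketch: asking D3D12 whether the driver exposes DXR at all. Note the
// limitation: a passing check does not prove dedicated RT silicon.
// Nvidia's Pascal driver implements DXR in compute shaders and still
// reports D3D12_RAYTRACING_TIER_1_0, which is exactly why "hardware
// accelerated" stays ambiguous at the API level.
bool supportsDxr(ID3D12Device5* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
[/code]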
 
"Hardware accelerated" could mean both compute on GPU, or fixed function on GPU. It could even mean on CPU, which is hardware too. So we can't get rid if this kind of discussion completely yet.
FF GPU really seems almost for sure to me, but one could assume marketing terminology going on, so doubts are still justified eventually.

So why do you think Sony and MS would say hardware RT? The average consumer doesn't know or even care anyway (and they're the biggest market share).
 
"Hardware accelerated" could mean both compute on GPU, or fixed function on GPU. It could even mean on CPU, which is hardware too. So we can't get rid if this kind of discussion completely yet.
FF GPU really seems almost for sure to me, but one could assume marketing terminology going on, so doubts are still justified eventually.
Under the definitions from MS and Nvidia that I've read, that would be considered GPU accelerated, not hardware accelerated.
Hardware accelerated is a defined term for silicon that is specific in its function, with the sole purpose of speeding up a particular workload.

Software-based ray tracing is not running on dedicated, purposed silicon, and is a shared resource (Nvidia's words for Pascal).

Going by the GitHub leaks, compute-based solutions also wouldn't show up in a hardware regression test, as they are software. That would be equivalent to saying Unity's ray tracing library will show up in every GPU regression test.
 
"Hardware accelerated" could mean both compute on GPU, or fixed function on GPU. It could even mean on CPU, which is hardware too. So we can't get rid if this kind of discussion completely yet.
OMG I can't believe it's this discussion again! :runaway:

A hardware accelerator, or hardware acceleration, is hardware designed to accelerate a workload. End of. There's zero ambiguity. You can't claim hardware accelerated is "something running on the CPU because the CPU is hardware." That's nonsensical - what's the difference in that case between hardware accelerated running on CPU hardware and a software solution running on CPU hardware?! That definition means an 8088 running floating-point calculations with its integer units is 'hardware accelerated' the same as running floating-point calcs on an 8087 maths-coprocessor, which clearly it isn't. If it's running on the CPU but isn't accelerated by hardware structures designed to accelerate the workload, such as processing floating-point maths on integer units or doing video decoding in the CPU instead of in a specific CODEC block, it is not hardware accelerated.

If XBSX (and PS5 for that matter) has "hardware accelerated RT", or "hardware RT", or "hardware RT acceleration", or "RT acceleration hardware", or "RT accelerating hardware", then it has hardware designed for the purpose of accelerating RT, whether that's a discrete chip or a specific RT block on one or other processor or some modifications to the CUs or something else - it's a specific design consideration given over to the task of accelerating one or more aspects of RT implemented in the silicon. It is not running raytracing code on the CPU or standard compute.
 