Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Well, I don't quite see where the expectation that the PS5 will have an RDNA 2 GPU comes from.

AMD presentations clearly say that the PS5 APU will use a 7nm process and have Zen 2 CPU cores with a Navi GPU.

[Image: "PS5 GPU Will Use Brand-New RDNA Architecture Developed by AMD"]


And from AMD's product roadmaps we can see that the 7nm process is used for RDNA and Zen 2, while the 7nm EUV (N7+) process is used for RDNA 2 and Zen 3.

[Image: AMD Zen architecture roadmap]

[Image: AMD RDNA architecture roadmap]


I.e. the following options are possible with a single-chip APU:

1) 7nm: Zen 2 + RDNA
2) 7nm EUV: Zen 3 + RDNA 2

We know that Sony has selected option 1. We don't know whether Microsoft has selected option 1 or option 2.

Because RDNA does not have ray tracing. It seems RDNA 2 will be the first AMD PC GPU with ray tracing. RDNA 2 is not a process node but an evolution of the architecture.
 
PS5:
_ CPU 8 cores 16 threads 3.2 GHz Zen 2 with only 8 MB of L3 cache
_ 10-11 TFLOPs RDNA GPU with RT
_ 20 GB of RAM (16 GB of 18 Gbps GDDR6 on a 256-bit bus, 576 GB/s, and 4 GB of DDR4 for the OS)
_ Bus: 100 GB/s between CPU and GPU
_ Custom SSD, 1 TB, 5 GB/s

And something nearly similar for Xbox Scarlett, probably with a PCIe 3.0 SSD at 4 GB/s.
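For what it's worth, the 576 GB/s in that list is just standard GDDR6 arithmetic. A quick sketch (Python, purely illustrative, using the 18 Gbps and 256-bit figures from the rumour above):

```python
# Peak GDDR6 bandwidth: per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
def gddr6_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(gddr6_bandwidth_gb_s(18, 256))  # 576.0 -> matches the 576 GB/s in the rumour
print(gddr6_bandwidth_gb_s(14, 256))  # 448.0 -> same bus with cheaper 14 Gbps chips
```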
 
Consoles could be getting a mix of RDNA/RDNA 2. Or RDNA with RT. AMD's highest end now is 9.75 TF without RT on an older arch. Expecting more than that TF number plus RT hardware in a max 499 dollar box seems a bit unrealistic this day and age. Not saying it's impossible, but as rumors go from 7 to 14 TF... I don't believe in any leak so far, tbh.
Really, why is it so important for that TF number to be high? If that were important to Sony, they probably would have had to compromise the hw elsewhere, like RT perf etc.

Seems that for some, the specs are more important than what will actually matter: the games and what devs will do with the hw.
 
Really, why is it so important for that TF number to be high? If that were important to Sony, they probably would have had to compromise the hw elsewhere, like RT perf etc.

Seems that for some, the specs are more important than what will actually matter: the games and what devs will do with the hw.

TF is important for things like SS fakery, denoising (think of one denoising pass per effect or even per light), and other things that neither rasterization nor RT can help much with (physics, fluids, diffuse GI, and other stuff that hasn't really made it into games yet).
 
TF is important for things like SS fakery, denoising (think of one denoising pass per effect or even per light), and other things that neither rasterization nor RT can help much with (physics, fluids, diffuse GI, and other stuff that hasn't really made it into games yet).

And RT is more shading-demanding than rasterizing. For example, for the PICA PICA demo they said the limit is shading, not ray intersection.

Consoles could be getting a mix of RDNA/RDNA 2. Or RDNA with RT. AMD's highest end now is 9.75 TF without RT on an older arch. Expecting more than that TF number plus RT hardware in a max 499 dollar box seems a bit unrealistic this day and age. Not saying it's impossible, but as rumors go from 7 to 14 TF... I don't believe in any leak so far, tbh.
Really, why is it so important for that TF number to be high? If that were important to Sony, they probably would have had to compromise the hw elsewhere, like RT perf etc.

Seems that for some, the specs are more important than what will actually matter: the games and what devs will do with the hw.

14 TFLOPs :LOL:
 
Well, I don't quite see where the expectation that the PS5 will have an RDNA 2 GPU comes from.

AMD presentations clearly say that the PS5 APU will use a 7nm process and have Zen 2 CPU cores with a Navi GPU.



And from AMD's product roadmaps we can see that the 7nm process is used for RDNA and Zen 2, while the 7nm EUV (N7+) process is used for RDNA 2 and Zen 3.




I.e. the following options are possible with a single-chip APU:

1) 7nm: Zen 2 + RDNA
2) 7nm EUV: Zen 3 + RDNA 2

We know that Sony has selected option 1. We don't know whether Microsoft has selected option 1 or option 2.
There is a third option actually, a middle ground between 7nm and 7nm EUV:

https://www.anandtech.com/show/14687/tsmc-announces-performanceenhanced-7nm-5nm-process-technologies

And welcome!
 
And RT is more shading-demanding than rasterizing. For example, for the PICA PICA demo they said the limit is shading, not ray intersection.
It's a hybrid renderer, with shaders doing a lot of work in both rasterising and clean-up of the traced information. Pure RT would be less shader intensive as you wouldn't be computing anything in shaders except materials, which would be heavily LOD'd, so slightly over one material calculation per pixel on average. The real requirements sit somewhere in between, depending on how much RT performance there is and how it can be used, as that determines how much non-surface-shading work the shaders need to do.
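To picture that scale, a toy per-pixel cost model (Python; every pass count and relative cost below is an illustrative assumption, not a measurement from PICA PICA or anything else): a hybrid frame pays for G-buffer and material shading plus clean-up/denoise work per traced effect, while a "pure RT" frame pays for roughly one heavily LOD'd material evaluation per pixel and pushes the cost into traversal/intersection instead.

```python
# Toy model of per-pixel shading work: hybrid vs. "pure" ray tracing.
# All costs are arbitrary units and illustrative assumptions only.

def hybrid_shading_cost(traced_effects: int,
                        gbuffer_cost: float = 1.0,
                        material_cost: float = 1.0,
                        denoise_cost_per_effect: float = 1.0) -> float:
    # Rasterise a G-buffer, shade materials, then clean up / denoise the
    # traced result of each effect (shadows, reflections, AO, ...).
    return gbuffer_cost + material_cost + traced_effects * denoise_cost_per_effect

def pure_rt_shading_cost(materials_per_pixel: float = 1.2,
                         material_cost: float = 1.0) -> float:
    # Slightly over one heavily LOD'd material evaluation per pixel on average.
    return materials_per_pixel * material_cost

print(hybrid_shading_cost(traced_effects=3))  # 5.0 units of shader work per pixel
print(pure_rt_shading_cost())                 # 1.2 units of shader work per pixel
```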
 
It's a hybrid renderer, with shaders doing a lot of work in both rasterising and clean-up of the traced information. Pure RT would be less shader intensive as you wouldn't be computing anything in shaders except materials, which would be heavily LOD'd, so slightly over one material calculation per pixel on average. The real requirements sit somewhere in between, depending on how much RT performance there is and how it can be used, as that determines how much non-surface-shading work the shaders need to do.

But pure RT is not an option for AAA games. Correction: hybrid rendering is more demanding than rasterizing.
 
Sure, but you stated RT is more shader demanding. That's untrue. You need to qualify how much 'raytracing' is more shader demanding than rasterising, such that adding more raytracing increases shader performance requirements. As I say, it's a scale. PICA PICA is one example of one end of that scale based on one implementation. And if we were accurate, achieving the same results with shaders alone would likely be far more intensive than hybrid RT, not less.

In short, adding RT capabilities does not increase the need for more compute power as you imply. It depends entirely on what the devs do with RT, and how that RT is implemented, as to what the compute requirement will be.
 
Sure, but you stated RT is more shader demanding. That's untrue. You need to qualify how much 'raytracing' is more shader demanding than rasterising, such that adding more raytracing increases shader performance requirements. As I say, it's a scale. PICA PICA is one example of one end of that scale based on one implementation. And if we were accurate, achieving the same results with shaders alone would likely be far more intensive than hybrid RT, not less.

In short, adding RT capabilities does not increase the need for more compute power as you imply. It depends entirely on what the devs do with RT, and how that RT is implemented, as to what the compute requirement will be.

I agree, but for the next 6 to 8 years, outside of maybe indie games or ray tracing added to old games, AAA games will probably use hybrid rendering, and the shading cost will be higher than in a purely rasterized game.

And this is why it is important for the next-generation consoles to improve shading power as much as possible. By the time pure RT arrives in AAA games, the next-generation consoles will no longer be the focus.

Edit: Maybe we will see pure RT in cloud gaming before 6 to 8 years, but that does not concern the next-generation consoles.
 
I fail to see a difference between hybrid and full RT for shading work. How does it matter if you shade primary ray intersections or screenspace pixels? It's the same number.
But to reduce shading costs for RT, it should work to use a single simplified uber shader for secondary ray hits in most cases, likely requiring a global parametrization to address a low-res virtual texture or something.
I think that's the reason we have not seen much of this optimisation in current games yet, where they just glue RT functionality on top of an engine not designed for it. But with this I expect traversal/intersection costs to become higher than shading costs.
I think Metro (which uses a flat color per object for hit-point shading, IIRC) performs worse on non-RTX GPUs than BFV, so this game might be an example of this already.
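A minimal sketch of that secondary-hit idea (Python pseudocode; the class, function names and numbers are all hypothetical, not taken from Metro, BFV or any shipping engine): spend the full material budget only on primary-ray hits and fall back to one cheap shared shader for deeper bounces.

```python
# Sketch: full material shading for primary hits, one cheap "uber shader"
# (low-res texture fetch + single light term) for secondary ray hits.
from dataclasses import dataclass

@dataclass
class Hit:
    albedo_full: float    # stand-in for the full layered material result
    albedo_lowres: float  # stand-in for a low-res virtual-texture fetch
    n_dot_l: float        # cosine toward the single key light

def shade_full_material(hit: Hit) -> float:
    # Expensive path: imagine full BRDF, many texture layers, many lights.
    return hit.albedo_full * max(0.0, hit.n_dot_l)

def shade_uber_simplified(hit: Hit) -> float:
    # Cheap path: coarse albedo and one light; good enough for bounce lighting.
    return hit.albedo_lowres * max(0.0, hit.n_dot_l)

def shade_hit(hit: Hit, ray_depth: int) -> float:
    # Depth 0 = primary visibility, so spend the full budget there;
    # deeper hits only feed indirect lighting, so go cheap.
    return shade_full_material(hit) if ray_depth == 0 else shade_uber_simplified(hit)

h = Hit(albedo_full=0.83, albedo_lowres=0.8, n_dot_l=0.7)
print(shade_hit(h, ray_depth=0), shade_hit(h, ray_depth=2))
```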
 
I fail to see a difference between hybrid and full RT for shading work. How does it matter if you shade primary ray intersections or screenspace pixels? It's the same number.
As I said, non-shading work, computing all those buffers together. Look at chris1515's example of PICA PICA for a beautiful hybrid renderer: there's a lot of work needed that isn't shading surfaces to draw pixels. But things like denoising... what if there's hardware denoising in the RT hardware? 7 TFs of GPU wouldn't be the total number of floating-point operations the whole chip performs per second; it'd just be the available programmable compute, and we've no idea how much of that is needed in this hypothetical PS5 because we've no idea what the RT side is doing and how much compute work, on top of shading surfaces, those compute-based TFs will have to apply.
 
I fail to see a difference between hybrid and full RT for shading work. How does it matter if you shade primary ray intersections or screenspace pixels? It's the same number.
But to reduce shading costs for RT, it should work to use a single simplified uber shader for secondary ray hits in most cases, likely requiring a global parametrization to address a low-res virtual texture or something.
I think that's the reason we have not seen much of this optimisation in current games yet, where they just glue RT functionality on top of an engine not designed for it. But with this I expect traversal/intersection costs to become higher than shading costs.
I think Metro (which uses a flat color per object for hit-point shading, IIRC) performs worse on non-RTX GPUs than BFV, so this game might be an example of this already.

I wait a true hybrid games with more global shader and assets made with hybrig rendering in mind. This will probably be very impressive, mabe year three to four of next-generation console.
 
As I said, non-shading work, computing all those buffers together. Look at chris1515's example of PICA PICA for a beautiful hybrid renderer: there's a lot of work needed that isn't shading surfaces to draw pixels. But things like denoising... what if there's hardware denoising in the RT hardware? 7 TFs of GPU wouldn't be the total number of floating-point operations the whole chip performs per second; it'd just be the available programmable compute, and we've no idea how much of that is needed in this hypothetical PS5 because we've no idea what the RT side is doing and how much compute work, on top of shading surfaces, those compute-based TFs will have to apply.
Ok, I thought you would say RT could reduce the amount of necessary pure shading operations, which it can not.
But I don't believe in an additional HW denoising unit either, with research being so young in this area.
What I believe is: we won't get rid of 30 fps. 7 TF Navi is barely enough for WQHD at 60 fps in demanding newer titles. Adding RT on top can only make this worse. Still hoping for 10 TF, but 7 may be what's possible at the price point.
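Just to put a number on "7 TF for WQHD at 60 fps", a back-of-the-envelope budget (Python; this ignores utilisation, fixed-function work and everything else, so treat it as rough arithmetic only):

```python
# Rough FP32 budget per pixel per frame from the figures above.
def flops_per_pixel(tflops: float, width: int, height: int, fps: int) -> float:
    return tflops * 1e12 / (width * height * fps)

print(flops_per_pixel(7, 2560, 1440, 60))   # ~31.6k FLOPs/pixel/frame at WQHD 60 fps
print(flops_per_pixel(7, 3840, 2160, 30))   # ~28.1k at 4K 30 fps
print(flops_per_pixel(10, 2560, 1440, 60))  # ~45.2k with the hoped-for 10 TF
```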
 
What I believe is: we won't get rid of 30 fps. 7 TF Navi is barely enough for WQHD at 60 fps in demanding newer titles. Adding RT on top can only make this worse. Still hoping for 10 TF, but 7 may be what's possible at the price point.

That's a well-thought-out, educated, and realistic take on what to expect in terms of hardware. What some are expecting is above-RX 5700 XT raw performance (AMD's current highest-end dGPU), with the addition of ray tracing hardware, on a newer, more efficient arch, in a 400 to 500 dollar box just shy of a year from now. Specs are most likely finalized, with final clock adjustments being made, cooling, etc.
Those Navi 2 7 TFs might actually equal ballpark 9 TF of Navi.

Yes, 30 fps will still be a thing for games that push for real next-gen graphics, like that open world game. That's to be expected too, all things considered. I'm honestly most interested in AMD's RT solution, how performant it will be, and what the on-screen results will be like at that performance. That open world game will most likely see a PC version, and then we'll see how Nvidia competes with AMD's solution. It kinda feels like it would be best for NV to have RT compatible with AMD's, thinking that 90% of the games will be focused on AMD's RT logic.
 
1 TB/sec for the SSD, we don't need memory at all. :LOL:
It's in the spirit of the thread! "Baseless Next Generation"
Sounds plausible. A 1 TeraByte per second SSD would only require a PCIe 8.0 x32 physical interface with 32 GigaBytes per second on each link, in a familiar 32-lane M.4 2201000 form factor. It will be a good match for a 1 TeraHertz 1024-core CPU with 16 TeraBytes of HBM16 DRAM and a 1 Mbit wide memory interface with a bandwidth of 1 PetaByte per second.
 
What I believe is: we won't get rid of 30 fps. 7 TF Navi is barely enough for WQHD at 60 fps in demanding newer titles. Adding RT on top can only make this worse.
Those titles are having to process all lighting and shadowing on compute. If those workloads were moved onto other hardware, those 7 TFs could achieve more.

It all depends on what this other 'RT hardware' is! ;)
 
PS5:
_ CPU 8 cores 16 threads 3.2 GHz Zen 2 with only 8 MB of L3 cache
_ 10-11 TFLOPs RDNA GPU with RT
_ 20 GB of RAM (16 GB of 18 Gbps GDDR6 on a 256-bit bus, 576 GB/s, and 4 GB of DDR4 for the OS)
_ Bus: 100 GB/s between CPU and GPU
_ Custom SSD, 1 TB, 5 GB/s

And something nearly similar for Xbox Scarlett, probably with a PCIe 3.0 SSD at 4 GB/s.

Is this coming from that neogaf mod?
Mentioning a specific CPU<->GPU bus speed could mean they're going with a chiplet design, with both the GPU and CPU having their own memory controllers (256-bit GDDR6 for the GPU, 64/128-bit DDR4 for the CPU).
Also, the rumored 300mm^2 for the PS5's GPU would fit with those 10-11 TFLOPs if it's a chip that is wider than the 250mm^2 Navi 10 while running at lower, more power-efficient clocks.
In practice, it'd be a chip with 20% more compute resources (24 DCUs) clocked at ~1750MHz (which the GPU could sustain if they use TSMC's 2nd-gen 7nm).
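The arithmetic behind those figures, as a quick sketch (Python; the 48-CU / ~1750 MHz combination is the speculation above, not a confirmed spec, and 1.905 GHz is roughly the RX 5700 XT's rated boost clock):

```python
# Peak FP32 throughput = 2 ops per lane (FMA) * 64 lanes per CU * CU count * clock (GHz).
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return 2 * 64 * cus * clock_ghz / 1000

print(peak_tflops(40, 1.905))  # ~9.75 TF: Navi 10 / RX 5700 XT at its boost clock
print(peak_tflops(48, 1.75))   # ~10.75 TF: the speculated 24-DCU chip at ~1750 MHz
```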


Consoles could be getting a mix of RDNA/RDNA 2. Or RDNA with RT. AMD's highest end now is 9.75 TF without RT on an older arch. Expecting more than that TF number plus RT hardware in a max 499 dollar box seems a bit unrealistic this day and age.
Thankfully, the PS5 isn't releasing this day. It's releasing over a year from now.

Also, we have no idea on the economics of the RX 5700 series. For all intents and purposes, they're mid-range cards using mid-range-sized chips on a 256-bit memory bus, with a PCB whose power characteristics are similar to the RX 580/RX 590.
For all we know, the RX 5700 XT could be selling for $250 and still pull a nice profit for AMD.
 
Also, the rumored 300mm^2 for the PS5's GPU would fit with those 10-11 TFLOPs if it's a chip that is wider than the 250mm^2 Navi 10 while running at lower, more power-efficient clocks.

But how much space does the RT hardware take up? Since we have zero clues about anything GPU-related, we don't know. All I'm seeing is wishes for the highest TF number possible. Probably to win an arms race against the competition, or something.

Thankfully, the PS5 isn't releasing this day. It's releasing over a year from now.

Exactly, and its general specs are set in stone.

Also, we have no idea on the economics of the RX 5700 series. For all intents and purposes, they're mid-range cards on a 256-bit memory bus.

The RX 5700 XT is AMD's highest-end dGPU at the moment; expecting the raw power of that plus ray tracing seems like a stretch. The 2013 consoles got a year-and-a-half-old mid-range AMD GPU. Now we're getting ray tracing hardware, fast 1 TB SSDs, a Zen 2 CPU, etc. We can't expect high/highest-end hardware landing in there. Like DF mentioned in one of their videos, there will be disappointments.

Btw, I think someone mentioned it, but is the PS5 going with RDNA 2 confirmed? Why go with an old Zen 2 then, why not just hop to Zen 3 as well? Both are releasing next year? Or is it Zen 2 and a modified Navi 1 architecture GPU, modified to feature RT? That could explain the hunger for higher TF numbers, as it is likely not as efficient.
 