Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

I wonder if one or even both consoles will allow dynamically adjusting the power budget between the GPU and CPU. It would allow for interesting optimizations. It would especially help if some parts of the frame are CPU-bound (serial execution) and one or more cores could clock much higher for a period of time to help. Combine a dynamic power budget with variable refresh rate and things should be pretty nice, as there is no need to try to hit a solid 30Hz or 60Hz.
 
ND once described the access latency across Jag L2 clusters on the PS4 as almost as bad as accessing DRAM.
Isn't that because the PS4 has no L3 so the data interconnect between the two 4-core modules is actually done through the RAM?
 
Isn't that because the PS4 has no L3 so the data interconnect between the two 4-core modules is actually done through the RAM?

I would be inclined to think not. Writing data out to DRAM and then reading it back shouldn't end up slightly faster than just reading from DRAM, which is what "almost as bad as DRAM" implies. Clusters have to share data over the northbridge. Forcing data out to DRAM just for cluster sharing would seem like a major design hiccup.
 
ND once described the access latency across Jag L2 clusters on the PS4 as almost as bad as accessing DRAM. A single 8-core CCX may be the better option regardless of the increase in L3 latency.

Indeed, which is why I guess Zen 3 is going for that option. I was thinking of latency reductions vs Zen 2 chiplets, rather than vs Zen 3.

I suppose we can hope that the consoles use a generational hybrid: Zen 2's core, L1 and L2, but Zen 3's unified L3 cache arrangement. Unlike with GPUs, though, this kind of bringing features forward doesn't seem to happen.

MS did make some small enhancements to the Jaguar L2 caches for the 1X, but nothing on the level of combining core clusters or changing how a cache works.
 
I wonder if one or even both consoles will allow dynamically adjusting the power budget between the GPU and CPU. It would allow for interesting optimizations. It would especially help if some parts of the frame are CPU-bound (serial execution) and one or more cores could clock much higher for a period of time to help. Combine a dynamic power budget with variable refresh rate and things should be pretty nice, as there is no need to try to hit a solid 30Hz or 60Hz.

I speculated on the same thing back before the Switch came out.

You could have a number of modes - directly developer controlled or automatically selected on the fly - to target where the bottleneck was at any particular moment. You could even do something like idle an entire CCX and boost the other.

Downside would be that it would probably make binning more complex, and that Hovis thingy with per-chip board-level optimisations that MS did on the X1X would also become more complex.
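For what it's worth, the kind of per-frame heuristic I'm imagining could be as dumb as this toy Python sketch. Pure illustration: the total budget, thresholds and frame timings are invented, and "cpu_share" just stands in for whatever knob a console platform might actually expose.

```python
# Toy per-frame power-budget arbitration. The total budget, thresholds and
# timings are invented; "cpu_share" stands in for whatever knob a console
# platform might actually expose.
TOTAL_BUDGET_W = 180.0            # assumed total SoC power budget
MIN_SHARE, MAX_SHARE = 0.2, 0.6   # clamp so neither side is starved

def rebalance(cpu_ms, gpu_ms, cpu_share):
    """Nudge the CPU's share of the budget toward whichever side is the bottleneck."""
    if cpu_ms > gpu_ms * 1.1:      # CPU-bound frame (e.g. a long serial section)
        cpu_share += 0.02
    elif gpu_ms > cpu_ms * 1.1:    # GPU-bound frame
        cpu_share -= 0.02
    return max(MIN_SHARE, min(MAX_SHARE, cpu_share))

# Example frame loop with made-up CPU/GPU frame times in milliseconds.
cpu_share = 0.35
for cpu_ms, gpu_ms in [(9.5, 14.0), (18.0, 11.0), (12.0, 12.5)]:
    cpu_share = rebalance(cpu_ms, gpu_ms, cpu_share)
    cpu_watts = TOTAL_BUDGET_W * cpu_share
    gpu_watts = TOTAL_BUDGET_W - cpu_watts
    print(f"CPU {cpu_watts:5.1f} W / GPU {gpu_watts:5.1f} W")
```

Whether that sort of decision sits with the developer or the firmware is exactly the kind of thing binning and the Hovis-style per-board tuning would complicate.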
 
I wonder if one or even both consoles will allow dynamically adjusting the power budget between the GPU and CPU. It would allow for interesting optimizations. It would especially help if some parts of the frame are CPU-bound (serial execution) and one or more cores could clock much higher for a period of time to help. Combine a dynamic power budget with variable refresh rate and things should be pretty nice, as there is no need to try to hit a solid 30Hz or 60Hz.

That's what I wrote on my friendly wager for PS5 specs.
 
Considering the efficiency benefits of RDNA compared to early GCN, why does MS only state 2x GPU performance over XBO-X and not up to 2.5x or 3x (whatever it may be) performance?
 
Considering the efficiency benefits of RDNA compared to early GCN, why does MS only state 2x GPU performance over XBO-X and not up to 2.5x or 3x (whatever it may be) performance?

Because math is math when it comes to comparing peak measurements. The net effect will be more than twice the effective performance of the One X. But if they said that, the SDF would immediately crucify the statement as PR-speak, regardless of the fact that no one has any actual performance measurements yet.
 
https://www.techquila.co.in/amd-raytracing-navi-radeon-rx-gpu/

This AMD guy seems to confirm that the PS5 uses AMD raytracing and that the PS5 is RDNA 2 as well. It's from January 2020, but everyone missed this article.

"AMD as a company… strongly believes in the value and capability of raytracing. RDNA 2, the next-gen, will support raytracing. Both the next-gen Xbox and PlayStation will support hardware raytracing with Radeon natively. We will be sure to have the content that gamers can actually use to run on those GPUs.

We believe in our raytracing, and we will have it when the time is right."

Mithun Chandrashekhar, Product Management Lead, AMD
 
"Both the next-gen Xbox and PlayStation will support hardware raytracing with Radeon natively"
This is the part which makes me believe you're right about raytracing being confirmed to be AMD-based.
I'm hardly an avid follower of these threads now, so if it was there before I definitely missed it. Thanks.
 
Considering the efficiency benefits of RDNA compared to early GCN, why does MS only state 2x GPU performance over XBO-X and not up to 2.5x or 3x (whatever it may be) performance?
Math is math. A flop is a flop. Horsepower is horsepower. Big trucks have over 1000 HP but couldn't beat a Smart car in a straight-line race.

Most people just assume everything outside of the flops is balanced enough to use flops as a measurement of performance. But the reality is that there are all sorts of bottlenecks everywhere, and since we design software for hardware, some designs will suit some software better than others.
 
Considering the efficiency benefits of RDNA compared to early GCN, why does MS only state 2x GPU performance over XBO-X and not up to 2.5x or 3x (whatever it may be) performance?
It's the only measurable metric of 'performance'. In the age-old discussion "which console is/was the most powerful?", one has to qualify in which area. If you're faster at fillrate, overdraw and raw maths, but slower at shading, vertex setup and random-access, general-purpose processing, are you faster or slower?

For example, let's say you have a machine with 1 TF of GPU power and 100 GB/s RAM bandwidth. Then you have another that's 2 TF and 200 GB/s. Is it twice as fast, or four times (double in both aspects)? What if the first machine has a CPU at 2 GHz and the second has the same CPU at 1 GHz; how does that factor into the maths?

The complexity of these machines means it's impossible to create a meaningful, scientific measure of performance. The closest you can get to accuracy is to run a load of benchmarks and provide stats on relative performance. And in the absence of a real performance metric, simple GPU speed gives you a public-friendly approximation. Sadly the general populace doesn't understand this and will compare numbers as the only meaningful way to compare. Hence nonsense like the old megapixel wars of digital cameras, as the only way to show a 'better' camera. I think people have moved on from that and look to reviews of camera performance now, actual benchmarks on quality.
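To put some toy numbers on that 1 TF / 100 GB/s vs 2 TF / 200 GB/s example, here's a quick roofline-style Python sketch. It's just the textbook attainable-throughput formula, nothing console-specific, and the arithmetic intensities are made-up values.

```python
# Toy roofline-style comparison: attainable throughput for a workload is
# capped by whichever of peak compute and memory bandwidth runs out first.
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Classic roofline: min(compute peak, bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

machine_a = (1000.0, 100.0)   # 1 TF, 100 GB/s  (the first hypothetical machine)
machine_b = (2000.0, 200.0)   # 2 TF, 200 GB/s  (double in both aspects)
machine_c = (2000.0, 100.0)   # 2 TF, 100 GB/s  (double compute only)

for intensity in (2.0, 8.0, 32.0):   # flops executed per byte fetched (assumed)
    a = attainable_gflops(*machine_a, intensity)
    b = attainable_gflops(*machine_b, intensity)
    c = attainable_gflops(*machine_c, intensity)
    print(f"{intensity:4.1f} flops/byte: B is {b / a:.1f}x of A, C is {c / a:.1f}x of A")
```

On this naive model, doubling both compute and bandwidth caps out at 2x rather than 4x, and doubling only compute is worth anywhere between 1x and 2x depending on the workload, which is exactly why a single headline number can't capture "performance".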
 
https://www.techquila.co.in/amd-raytracing-navi-radeon-rx-gpu/

This AMD guy seems to confirm that the PS5 uses AMD raytracing and that the PS5 is RDNA 2 as well. It's from January 2020, but everyone missed this article.

"Both the next-gen Xbox and Playstation will support hardware raytracing with Radeon natively".

He doesn't say "both Xbox and Playstation will use our hardware raytracing", nor "both Xbox and Playstation will use Radeon Raytracing" - either of those sentences could easily be used as 100% confirmation that both consoles are using the same approach as AMD's in the PC space.
He says both consoles will support hardware raytracing that will work natively with their Radeon GPU. What does "natively" mean here, other than "developers will have access to raytracing through official SDKs"?


Considering the efficiency benefits of RDNA compared to early GCN, why does MS only state 2x GPU performance over XBO-X and not up to 2.5x or 3x (whatever it may be) performance?
On top of what was mentioned above, there's also the fact that experienced and talented 1st-party developers tend to saturate a console GPU's compute capabilities as much as possible.
On a console where devs can squeeze out a high percentage of the GPU's compute resources at any given moment, pitching the 12TF as being worth a lot more than 12TF of GCN would be extremely deceiving to those who are already pushing e.g. 5.5 TFLOPs on an XBoneX and 3.7 TFLOPs on a PS4 Pro.

RDNA / Navi in the PC space has the advantage of being able to use its compute resources more effectively in an ecosystem where games aren't optimized for AMD's hardware. On a console that advantage might still be real for those using the high-level APIs, but for devs using low-level optimizations it might be mitigated.
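Rough numbers to illustrate that point: the peak TF figures below are the quoted specs, but the utilization fractions are purely illustrative guesses, not measurements from any real title.

```python
# Why the "effective" multiplier depends on how well the baseline was
# already being utilized. Utilization fractions are made-up assumptions.
def effective_tflops(peak_tflops, utilization):
    return peak_tflops * utilization

one_x_pc_style    = effective_tflops(6.0, 0.60)   # weakly optimized, PC-style workload (assumed)
one_x_first_party = effective_tflops(6.0, 0.90)   # well-optimized console workload (assumed)
next_gen          = effective_tflops(12.0, 0.90)  # assumed similar console-level utilization

print(f"{next_gen / one_x_pc_style:.1f}x vs a weakly utilized baseline")     # ~3.0x
print(f"{next_gen / one_x_first_party:.1f}x vs a well utilized baseline")    # ~2.0x
```

Against a poorly utilized baseline the new GPU can look like 3x, but against code that was already saturating GCN it's closer to the plain 2x that MS actually quotes.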
 
"Both the next-gen Xbox and Playstation will support hardware raytracing with Radeon natively".

He doesn't say "both Xbox and Playstation will use our hardware raytracing", nor "both Xbox and Playstation will use Radeon Raytracing" - either of those sentences could easily be used as 100% confirmation that both consoles are using the same approach as AMD's in the PC space.
He says both consoles will support hardware raytracing that will work natively with their Radeon GPU. What does "natively" mean here, other than "developers will have access to raytracing through official SDKs"?

You've got a point there... All will use AMD raytracing, but the hardware implementation may be different. At least the wording isn't definitive in killing that possibility!
 
Math is math. A flop is a flop. Horsepower is horsepower. Big trucks have over 1000 HP but couldn't beat a Smart car in a straight-line race.

Most people just assume everything outside of the flops is balanced enough to use flops as a measurement of performance. But the reality is that there are all sorts of bottlenecks everywhere, and since we design software for hardware, some designs will suit some software better than others.

It was very funny to see people thinking it is not 12 TFlops of RDNA, lol. People need to remember that the console has to be presented to all kinds of gamers; most of them, hardcore included, don't care about RDNA vs GCN. For them, 6*2=12 is easier to understand, and they did the same with the Xbox One X.
 
RIP secret sauces.

I'm not sure at all. For example, if they decide to use the photon mapping technology from the patent, they can do it. Photon mapping is a light transport algorithm and one way to do global illumination.

After that, I think it will be pretty standard, and every innovation will come from the API and software side.
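To make the reference concrete, this is the classic two-pass photon-mapping idea in a toy Python sketch (textbook Jensen-style density estimation, not whatever specific scheme the patent describes; the scene, photon count and light values are all made up):

```python
# Toy two-pass photon mapping: pass 1 traces photons from the light and
# stores their hit points; pass 2 estimates indirect irradiance at a shading
# point by gathering the k nearest stored photons.
import math, random

def emit_photons(n, light_pos, light_power):
    """Pass 1: scatter photons from a point light onto the y = 0 floor plane."""
    photons = []  # list of (hit_point, power_per_photon)
    for _ in range(n):
        # Random direction in the lower hemisphere (toward the floor).
        theta = random.uniform(0, 2 * math.pi)
        r = random.uniform(0.1, 1.0)
        d = (r * math.cos(theta), -1.0, r * math.sin(theta))
        t = -light_pos[1] / d[1]                       # intersect the plane y = 0
        hit = (light_pos[0] + t * d[0], 0.0, light_pos[2] + t * d[2])
        photons.append((hit, light_power / n))
    return photons

def estimate_irradiance(photons, x, k=32):
    """Pass 2: density estimate from the k nearest photons around point x."""
    nearest = sorted((math.dist(p, x), pw) for p, pw in photons)[:k]
    radius = nearest[-1][0]
    total_power = sum(pw for _, pw in nearest)
    return total_power / (math.pi * radius * radius)   # W / m^2

photon_map = emit_photons(20000, light_pos=(0.0, 2.0, 0.0), light_power=100.0)
print(estimate_irradiance(photon_map, (0.5, 0.0, 0.5)))
```

A real renderer would use a kd-tree for the gather and trace actual geometry, but the two-pass structure is the part that matters for any hardware or API discussion.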
 
It was very funny to see people thinking it is not 12 TFlops of RDNA, lol. People need to remember that the console has to be presented to all kinds of gamers; most of them, hardcore included, don't care about RDNA vs GCN. For them, 6*2=12 is easier to understand, and they did the same with the Xbox One X.
Yeah, indeed. I think most people made the critical mistake of assuming we should rate next-gen power the same way power is rated for the current generation.

I would be interested to know whether something like that happened when we moved from PS1 to PS2 to PS3 to PS4, which all had vastly different architectures.
 
In the PS1/PS2 days, "power" wasn't measured in flops but in polygons per second. It was sort of the same issue back then, though, because not all polygons are equal. Are they textured? Filtered? Perspective-correct? Lit? Shaded? A PS2 polygon wasn't the same as a PS1 polygon.
 