General Next Generation Rumors and Discussions [Post GDC 2020]

I think we can forget Avatar graphics or even anything close to that. Even the UE5 demo isn't in the same league (and that was an empty world with no gameplay, etc.). That won't happen even on PC.
 
I think we can forget Avatar graphics or even anything close to that. Even the UE5 demo isn't in the same league (and that was an empty world with no gameplay, etc.). That won't happen even on PC.

Avatar graphics is too high a bar, but we will see much better graphics, maybe better than the UE5 demo.


The art director of Guerrilla Games thinks that with a single light, Unreal Engine hair is easy. And it is looking great.


Besides how they store the assets, the other intriguing thing about UE5 is the AA they use. I don't think they use only temporal accumulation for AA; maybe they use analytical anti-aliasing, which is much better than TAA or MSAA.




I saw some direct-feed images of Epic's UE5 demo on my 4K TV and they are stunning, no artifacts at all. From the Digital Foundry article:

Penwarden also confirms that the temporal accumulation system seen in Unreal Engine 4 - which essentially adds detail from prior frames to increase resolution in the current one - is also used in UE5 and in this demo. The transparency here from Epic is impressive. We've spent a long time poring over a range of 3840x2160 uncompressed PNG screenshots supplied by the firm. They defy pixel-counting, with resolution as a metric pretty much as meaningless as it is for, say, a Blu-ray movie. But temporal accumulation does so much more for UE5 than just anti-aliasing or image reconstruction - it underpins the Lumen GI system.
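For what it's worth, here is a minimal sketch of the general idea behind temporal accumulation (my own simplified version, not Epic's implementation): reproject last frame's accumulated result using motion vectors, clamp it against the current frame's neighbourhood to limit ghosting, then blend a small amount of the new jittered sample in each frame so detail builds up over time.

```python
import numpy as np

def temporal_accumulate(history, current, motion, alpha=0.1):
    """One step of a generic temporal accumulation filter (illustrative only).

    history : HxWx3 accumulated colour from previous frames
    current : HxWx3 jittered colour samples for this frame
    motion  : HxWx2 per-pixel motion vectors (in pixels)
    alpha   : blend weight for the new sample
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: fetch where each pixel was last frame (nearest neighbour
    # for brevity; real implementations resample with filtering).
    py = np.clip((ys - motion[..., 1]).round().astype(int), 0, h - 1)
    px = np.clip((xs - motion[..., 0]).round().astype(int), 0, w - 1)
    reprojected = history[py, px]

    # Clamp the history to the local neighbourhood of the current frame to
    # reject stale samples (ghosting) after disocclusions.
    shifts = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
    lo = np.minimum.reduce([np.roll(current, s, axis=(0, 1)) for s in shifts])
    hi = np.maximum.reduce([np.roll(current, s, axis=(0, 1)) for s in shifts])
    reprojected = np.clip(reprojected, lo, hi)

    # Exponential blend: each frame contributes a little, so detail
    # accumulates over time, which also acts as anti-aliasing.
    return (1 - alpha) * reprojected + alpha * current
```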
 
I don't know, I feel like next-gen foliage tech will surprise us just like Nanite and Lumen do right now. Polygon count is going to be a thing of the past, lighting should be good enough to fool us, and with the ability to load source-LOD photogrammetry assets on the fly I think we will come close to Avatar. Even if it doesn't, it should still look mind-blowing.
 
Isn't the biggest improvement with RDNA 2 the 50% performance-per-watt improvement, and shouldn't that mean higher clocks than RDNA 1?

Wouldn't that depend on whether RDNA 1 was limited in its clocks by thermals, or whether they were just hitting the max of the architecture and layout?
 
Isn't the biggest improvement with RDNA 2 the 50% performance-per-watt improvement, and shouldn't that mean higher clocks than RDNA 1?
In short, yes, expect higher clocks. It will largely depend on what the vendor wants to do with it: some of the gain goes to increasing clocks and some to reducing the power footprint.
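To put rough numbers on that split (my own back-of-the-envelope figures, using the 5700 XT's ~225W board power as the RDNA 1 reference; "performance" here just means work per second, since clocks rise less than performance at the top of the voltage/frequency curve):

```python
# Back-of-the-envelope only: how a +50% perf/W gain can be spent.
gain = 1.5                      # RDNA 2's claimed perf/W improvement
rdna1_power_w = 225             # 5700 XT board power, as a reference point

same_power_perf = gain                      # keep ~225 W, get 1.5x the work
same_perf_power = rdna1_power_w / gain      # keep performance, drop to ~150 W
split_power = rdna1_power_w * 1.2 / gain    # e.g. +20% performance at ~180 W

print(same_power_perf, same_perf_power, split_power)
```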

In light of this, however, we still shouldn't expect 2230 to be a bottom-of-the-barrel number. Cerny said that with the traditional method, fixed clocks, they had trouble getting to 2000MHz. So that is a fairly solid indication of how hard they are pushing the chip. To push it to 2230MHz and then claim that it's going to be at that number the majority of the time... those two things seem at odds with each other.

I'm not saying it's impossible. But it's going to result in some form of increased cost to handle it.

I don't dislike what Sony did; it's ambitious. And ambition is what gets you far ahead if it works out. Ambition is what got SpaceX where it is. But either what he says is true to a point where it doesn't make a lot of sense to me, or his marketing isn't really representing the behaviour of the console.
 
Well, seeing as the next-gen consoles have high frequencies, I'm going to guess it was thermals.

The XSX seems to be well within a small bump of the current game clock. The PS5 seems to be the only outlier here with such a large 'base-ish' clock. If I recall correctly, didn't the PS5 APU have a large number of respins/revisions? That would be consistent with not being able to hit the desired clock, and with the possibility that we won't see most RDNA 2 cards reach those speeds, with the PS5 being an outlier.
 
Well, seeing as the next-gen consoles have high frequencies, I'm going to guess it was thermals.
MS is below the 5700 XT series and nominally above the 5700. Granted, the chip is much larger. But GPUs cannot approach CPU frequencies, which is where I find the claim that 2230 is a bottom-of-the-barrel yield at odds. If we took that as true, the implication for the high end is 3000+ or so. Way too high; GPUs don't boost like CPUs do, at least it's not effective for them to do it that way. Individual cores are weak.
 
The XSX seems to be well within a small bump of the current game clock.
Much bigger chip. I agree it's probably not bottom of the barrel, as iroboto says, but I just think RDNA 1 might not be the best comparison, although it's the only one available to us.

So I'm expecting quite high clocks for RDNA 2, or Sony and Microsoft have really pushed the high end more than I thought they would.
 
MS is below the 5700 XT series and nominally above the 5700. Granted, the chip is much larger. But GPUs cannot approach CPU frequencies, which is where I find the claim that 2230 is a bottom-of-the-barrel yield at odds. If we took that as true, the implication for the high end is 3000+ or so. Way too high; GPUs don't boost like CPUs do, at least it's not effective for them to do it that way. Individual cores are weak.
We don't know enough about RDNA2.

And with such a large chip and fixed clocks, MS would have more thermal issues to deal with than Sony, so they would go for more efficient clocking.

Sony will have a thermal density problem, but they deal with it by clocking both for equal thermal density against the 3.5GHz CPU, which indicates they are not producing any bigger hot spot than MS despite the higher GPU clock. Unless there's something funky happening with AVX256 on the MS side.
 
We don't know enough about RDNA2.

And with such a large chip and fixed clocks, MS would have more thermal issues to deal with than Sony, so they would go for more efficient clocking.

Sony will have a thermal density problem, but they deal with it by clocking both for equal thermal density against the 3.5GHz CPU, which indicates they are not producing any bigger hot spot than MS despite the higher GPU clock. Unless there's something funky happening with AVX256 on the MS side.
I suspect that for both consoles, heating issues will be on the GPU side rather than the CPU side, so the concern should really be there rather than with the CPU. The SoC itself has a voltage limit. The difference between what MS and Sony did is that MS fixed the amount of voltage for both CPU and GPU, giving it the characteristic of consistency, with the downsides that it never changes clocks to maximize potential for lighter workloads and will be pushed harder on thermals for heavier workloads.

Sony went for a boost mode that would drive consistency across all their devices. I have no issue with how the boost works, or how power transfers. I'm just at odds with the indication that it rarely goes below 2230, as if they are running fixed clocks. It should be variable as low as sub 2000, which is normal for a boost mode to drop under heavier loads.

As for the chip sizes: XSX is 363mm², smaller than the XBO and comparable to the X1X.
The 5700 XT is 251mm²; if you tack roughly 40mm² onto that, you can estimate the PS5 at around 300mm² (rough arithmetic sketched below). I don't know if this necessarily means the XSX will have a harder time with thermals. That's its heat divided over a larger surface area.
They seem to have solved all their cooling problems with a single fan for the whole console; if there were cooling issues, I think we would have heard something about it, or about the cost of assembly, etc.
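Rough arithmetic behind those die-size figures (the ~40mm² allowance is the post's ballpark, and the 200W used for heat density is just a placeholder):

```python
# Die-size bookkeeping from the figures quoted above (all approximate).
xsx_soc_mm2  = 363                      # reported Series X SoC
navi10_mm2   = 251                      # 5700 XT (Navi 10) die
extra_mm2    = 40                       # ballpark allowance tacked onto the GPU
ps5_estimate = navi10_mm2 + extra_mm2   # ~291 mm^2, i.e. "around 300"

# Same power spread over a larger die means lower heat density, so the
# bigger SoC is not automatically the harder one to cool.
hypothetical_power_w = 200
print(ps5_estimate,
      hypothetical_power_w / xsx_soc_mm2,    # ~0.55 W/mm^2
      hypothetical_power_w / ps5_estimate)   # ~0.69 W/mm^2
```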

I'm not saying Sony has a cooling issue, but I'm more inclined to believe rumours of cooling/heating/yield issues with the PS5 than with the XSX when I consider the above.
 
It should be variable as low as sub 2000, which is normal for a boost mode to drop under heavier loads.
You have no data to make that claim, since you don't know how much RDNA 2 improved efficiency, nor how much the rework AMD has done, placing data closer to where it's needed, improves clocks on the architecture. Nor the changes Sony required for their own variation of RDNA 2.

The clock difference between the two is in the same ballpark as between the XB1X and the PS4 Pro. However, the TDP-related engineering margin is not required on the PS5, while the XBSX requires it for any hypothetical future game which might reach the burn-test TDP, however unlikely. Sony doesn't need that margin.
 
You have no data to make that claim, since you don't know how much RDNA 2 improved efficiency, nor how much the rework AMD has done, placing data closer to where it's needed, improves clocks on the architecture. Nor the changes Sony required for their own variation of RDNA 2.
While it's true we don't know the exact amounts, Cerny himself made the claim that with fixed clocks it was very difficult to get over 2GHz. Meaning boost clocking is what gets it above 2GHz. Meaning when the load is high enough it should dip back to sub-2GHz, i.e. the load is high enough to remove boost from the equation and bring it down to where they could have achieved it with fixed clocks, as per his original claims.

The clock difference between the two is in the same ballpark as between the XB1X and the PS4 Pro. However, the TDP-related engineering margin is not required on the PS5, while the XBSX requires it for any hypothetical future game which might reach the burn-test TDP, however unlikely. Sony doesn't need that margin.
There's no difference between the PS5 and XSX in this regard. All Cerny did was create a boost mode for consoles so that all consoles share the same amount of boost, with the game code, not the thermals, being the factor that controls the boost.

All fixed-clock consoles will continue to try harder and harder to cool the system down as it gets hotter, until they shut off.
  • On lighter loads, less voltage is used, thus less cooling is needed.
  • On heavier loads, more voltage is used, and more cooling is needed.
  • Clocks will stay the same, but the voltage will fluctuate.

The PS5 has its voltage and clocks dependent on workload, not the heat of the chip.
  • A PS5 at the North Pole will run the same frame rates as a PS5 in the Gobi desert.
  • The PS5 in the Gobi desert is going to get a lot hotter than the one at the North Pole.
AFAICS, thermal issues are very much present on the PS5.
  • In a hotter climate, with lighter game code, the SoC will be told to run full tilt.
  • Under liquid cooling and a heavy load, it will downclock the SoC and never take advantage of the boost, something that wouldn't happen with thermal based boost.
The solution is not ideal, which is why it's never been adopted in the PC space, mainly because in the PC space we can aftermarket-cool our parts further to keep the clock rates running high.
But in this case, you never get the choice of when it should down or upclock.
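A toy sketch of the two behaviours described above, with made-up power and temperature numbers (the only figures taken from the consoles themselves are the 2230MHz and 1825MHz clocks):

```python
# Toy model of the two clocking policies. All power/thermal numbers are
# illustrative placeholders, not measured console behaviour.

MAX_MHZ     = 2230   # PS5's advertised GPU ceiling
FIXED_MHZ   = 1825   # XSX's fixed GPU clock
POWER_CAP_W = 200    # hypothetical SoC power budget for the variable design

def fixed_clock_step(load, ambient_c):
    """Clock never moves; current draw and heat track the workload, and the
    cooler works harder and harder until a thermal trip shuts the box down."""
    power = 90 + 110 * load                # heavier game code draws more
    die_temp = ambient_c + 0.3 * power     # crude lumped thermal model
    if die_temp > 95:
        return "thermal shutdown"
    return FIXED_MHZ, power

def workload_clock_step(load, ambient_c):
    """PS5-style: clock follows an activity/power model, not temperature, so
    a unit in the Gobi desert clocks the same as one at the North Pole."""
    demand = 110 + 110 * load              # power the workload would like
    if demand <= POWER_CAP_W:
        return MAX_MHZ, demand             # light load: run full tilt
    # Heavy load: shed frequency until the model fits the power budget.
    return round(MAX_MHZ * POWER_CAP_W / demand), POWER_CAP_W

for load in (0.3, 0.7, 1.0):
    print(load, fixed_clock_step(load, 25), workload_clock_step(load, 25))
```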
 
I'm not saying Sony has a cooling issue, but I'm more inclined to agree with rumours about cooling/heating/yield issues about PS5 than I am with XSX when I consider the above.

You could be on to something there; of the two consoles, the PS5 is the one that has had rumours of heating issues.

The solution is not ideal, which is why it's never been adopted in the PC space, mainly because in the PC space we can aftermarket-cool our parts further to keep the clock rates running high.

True that; for marketing alone it wouldn't work out very well for gaming GPUs. Maybe in laptop configs, where cooling/heating can be an issue, but then the other way around: advertise the base clock / minimum performance, not maximum performance/boost. I don't see AMD selling a high-end part like that anyway. Imagine a 36 CU high-end GPU advertised as "10.x TF most of the time, can drop down to 9.x when things get hammered". In the PC gaming market that's not how things work.
 
You're missing the part where it impacts the entire design. A fixed-clock console that will dissipate 150W in the majority of games, but whose calculated worst case is a 200W "burn test", must be designed for 200W even if no game ever reaches that. The other can be designed for 150W all around and will only downclock under hypothetical conditions that may or may not ever happen. Both would provide the same performance and typical power consumption, and the variable clock will have exceptions based on the success or failure of predicting what future devs will do over the next 6-7 years. The fixed clock has to be much more conservative on its clock because of that, unless it's designed to pass the thermal tests at 200W. This is why no modern GPU or CPU, from smartphones to high end servers, uses a fixed clock anymore. It's a big waste on the BOM, and a reliability risk, unless they literally build the power/cooling system around the engineering TDP instead of the real-world expected power.

The difference between cooling a 150W die and a 200W die is a significant challenge if the BOM has to stay under control.
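As a small worked example of that margin, using the post's hypothetical 150W/200W figures and an assumed 35°C ambient / 95°C junction limit:

```python
# The post's hypothetical numbers: typical games ~150 W, synthetic worst
# case ("burn test") ~200 W. The temperature limits are assumed purely
# for illustration.
typical_w, worst_case_w = 150, 200
t_ambient, t_junction_max = 35, 95            # degC, assumed
budget_c = t_junction_max - t_ambient         # 60 degC to play with

# Required cooler performance (junction-to-ambient thermal resistance):
theta_fixed    = budget_c / worst_case_w      # must survive the burn test
theta_variable = budget_c / typical_w         # only needs the typical case

print(f"fixed clock cooler:    <= {theta_fixed:.2f} degC/W")
print(f"variable clock cooler: <= {theta_variable:.2f} degC/W")
# 0.30 vs 0.40 degC/W -- the fixed-clock box needs roughly a third more
# heat-moving capacity for power it may never actually draw.
```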
 
This is why no modern GPU or CPU, from smartphones to high end servers, uses a fixed clock anymore.

As far as I'm aware, GPUs today are not doing what the PS5 is going to do, as in having a max clock and downclocking from there. It's the other way around.
 
You're missing the part where it impacts the entire design. A console that will draw 150W in the majority of games, but whose calculated worst case is a 200W "burn test", must be designed for 200W even if no game ever reaches that. The other can be designed for 150W all around and will only downclock under hypothetical conditions that may or may not ever happen. Both would provide the same performance and typical power consumption, and the variable clock will have exceptions based on the success or failure of predicting what future devs will do over the next 6-7 years. The fixed clock has to be much more conservative on its clock because of that. It needs to pass thermal tests at 200W. This is why no modern GPU or CPU, from smartphones to high end servers, uses a fixed clock anymore. It's a big waste on the BOM, and a reliability risk, unless they literally build the power/cooling system around the engineering TDP instead of the real-world expected power.

The difference between cooling 150W and 200W is a significant challenge if the BOM has to stay under control.
The real world uses thermal-based boost clocks everywhere because they only need to manage thermals, and each owner is given the ability to customize how they want to manage that; this is provided for the benefit of the owner, to manage heat within their own environment as they see fit.

As for fixed clocks, both are fixed, it's just how they are fixed.

They are fixed to ensure that each and every console performs exactly the same as the other:
One is bound by frequency;
another is bound by game code.

While you are right that fixed clocks come with inefficiencies, being more conservative and further from the limit:
With one, you know exactly how much performance to budget for; you can bank on all of that being there, and it's an issue of data and memory management for optimization.

With the other, the budget is constantly shrinking as you increase the load.
A non issue on PC, you leave the PC user to decide their own experience.
A bigger issue on console, where the developer is responsible for the user experience.
Can you imagine budgeting the game for so many millions of triangles per second at 2230MHz, and once the load gets too high and the system downclocks, your triangle load is now way out of budget? Same problem as fixed clocks now; you're just using less load to keep those clocks up. Same problem, different path.

So in theory it sounds like you're getting more performance out of boost mode, and that's true in the way the PC operates; I'm not so sure about the console space.
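A tiny illustration of that budgeting concern, with invented throughput figures (the 4 triangles per clock and the 60fps frame time are assumptions for the example, not console specs):

```python
# Invented figures to illustrate the budgeting concern above: a geometry
# budget planned against the peak clock shrinks if the system sheds frequency.
peak_mhz       = 2230
tris_per_clock = 4            # assumed front-end throughput, tris/cycle
frame_ms       = 16.7         # 60 fps target

def tri_budget(clock_mhz):
    """Triangles that fit in one frame at a given GPU clock."""
    return clock_mhz * 1e6 * tris_per_clock * frame_ms / 1000

planned = tri_budget(peak_mhz)     # ~149 M triangles per frame
dropped = tri_budget(2000)         # ~134 M if a heavy scene pulls clocks down

overage = planned / dropped - 1
print(f"budgeted at peak: {planned / 1e6:.0f} M tris/frame")
print(f"after downclock:  {dropped / 1e6:.0f} M tris/frame "
      f"(peak-sized content is {overage:.0%} over the reduced budget)")
```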
 
So in theory it sounds like you're getting more performance out of boost mode, and that's true in the way the PC operates.

In the PC space, GPUs like Turing have a base clock, say 1600MHz, but most of those GPUs allow for much higher clocks than that, depending on thermals, load, etc. The PS5 GPU is seemingly doing it the other way around: it downclocks depending on load to keep thermals and power in check.
 
The real world uses thermal-based boost clocks everywhere because they only need to manage thermals, and each owner is given the ability to customize how they want to manage that; this is provided for the benefit of the owner, to manage heat within their own environment as they see fit.

As for fixed clocks, both are fixed, it's just how they are fixed.

They are fixed to ensure that each and every console performs exactly the same as the other:
One is bound by frequency;
another is bound by game code.

While you are right that fixed clocks come with inefficiencies, being more conservative and further from the limit:
With one, you know exactly how much performance to budget for; you can bank on all of that being there, and it's an issue of data and memory management for optimization.

With the other, the budget is constantly shrinking as you increase the load.
A non issue on PC, you leave the PC user to decide their own experience.
A bigger issue on console, where the developer is responsible for the user experience.
Can you imagine budgeting the game for so many millions of triangles per second at 2230MHz, and once the load gets too high and the system downclocks, your triangle load is now way out of budget? Same problem as fixed clocks now; you're just using less load to keep those clocks up. Same problem, different path.

So in theory it sounds like you're getting more performance out of boost mode, and that's true in the way the PC operates; I'm not so sure about the console space.
You changed the subject again with baseless conjectures. I'm going to move on.
 