I think we can forget Avatar graphics or even anything close to that. Even the UE5 demo isn't in the same league (and that was an empty world with no gameplay, etc.). That won't happen even on PC.
Penwarden also confirms that the temporal accumulation system seen in Unreal Engine 4 - which essentially adds detail from prior frames to increase resolution in the current one - is also used in UE5 and in this demo. The transparency here from Epic is impressive. We've spent a long time poring over a range of 3840x2160 uncompressed PNG screenshots supplied by the firm. They defy pixel-counting, with resolution as a metric pretty much as meaningless as it is for, say, a Blu-ray movie. But temporal accumulation does so much more for UE5 than just anti-aliasing or image reconstruction - it underpins the Lumen GI system.
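(For anyone wondering what "adding detail from prior frames" means mechanically, here's a minimal sketch of temporal accumulation in general terms: an exponential blend of the current frame into a history buffer. This is only the textbook idea, not Epic's actual implementation; real temporal AA/upsampling also uses motion-vector reprojection, sub-pixel jitter and history clamping, all omitted here.)

```python
import numpy as np

def accumulate(history, current, alpha=0.1):
    # history: previously accumulated frame, assumed already reprojected to
    #          the current camera (motion-vector reprojection omitted here)
    # current: the newly rendered, noisy/jittered frame
    # alpha:   how much of the new frame is blended in each step
    return (1.0 - alpha) * history + alpha * current

# Toy usage: repeated noisy renders of a static scene converge on the clean image.
rng = np.random.default_rng(0)
scene = rng.random((8, 8, 3))                      # "ground truth" image
history = scene + rng.normal(0, 0.2, scene.shape)  # first noisy frame
for _ in range(200):
    noisy = scene + rng.normal(0, 0.2, scene.shape)
    history = accumulate(history, noisy)
print(np.abs(history - scene).mean())  # residual error well below the per-frame noise
```

Presumably that is also the sense in which it "underpins" Lumen: a partial result computed each frame gets accumulated into a stable one over several frames.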
Isn't the biggest improvement with RDNA 2 the 50% performance improvement per watt and shouldn't that mean higher clocks than RDNA 1?
Wouldn't that depend on whether RDNA1 was limited in its clocks by thermals, or whether they were just hitting the max of the architecture and layout?
In short, yes, expect higher clocks. It will largely depend on what the vendor wants to do with it: some of the gain can go to increasing clocks and some to reducing the power footprint.
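(Rough illustration of how a +50% perf-per-watt figure can be spent, with made-up baseline numbers rather than real RDNA figures, and assuming performance scales roughly linearly with clock while power scales roughly with f³ near the limit:)

```python
base_f_ghz, base_p_w = 1.9, 200.0   # hypothetical RDNA1-class baseline, not real figures
eff_gain = 1.5                      # the quoted +50% performance per watt

# Option A: keep the clock, spend the whole gain on power.
power_same_clock = base_p_w / eff_gain
print(f"same clock: ~{power_same_clock:.0f} W instead of {base_p_w:.0f} W")

# Option B: keep the power budget, spend the gain on clocks.
# With power ~ f^3, the headroom converts into roughly a 14% higher clock.
max_f_ghz = base_f_ghz * eff_gain ** (1 / 3)
print(f"same power: ~{max_f_ghz:.2f} GHz instead of {base_f_ghz:.2f} GHz")
```

In practice a vendor lands somewhere between the two options, which is the "some of it to increase clocks, some of it to reduce the power footprint" trade-off.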
Well seeing as the next gen consoles have high frequency I'm going to guess it was thermals.
MS is below the 5700XT series and nominally above the 5700. Granted, the chip is much larger. But GPUs cannot approach CPU frequencies, which is where I find the claim that 2230MHz is a bottom-of-the-barrel yield at odds. If we took that as true, the implication for the high end is 3000MHz+ or so. Way too high; GPUs don't boost like CPUs do, at least it's not effective for them to do it that way. Individual cores are weak.
The XSX seems to be well within a small bump of the current game clock.
Much bigger chip, though. I agree it's probably not bottom of the barrel, as iroboto says, but I just think comparing it with RDNA1 might not be the best comparison, although it's the only one available to us.
We don't know enough about RDNA2.
I suspect that for both consoles, heat will be more of an issue on the GPU side than the CPU side, so the concern should really be there rather than with the CPU. The SoC itself has a voltage limit. The difference between what MS and Sony did is that MS fixed the voltage for both CPU and GPU, giving it the characteristic of consistency, with the downsides that it never changes clocks to maximize potential for lighter workloads and will be pushed harder on thermals for heavier workloads.
And with such a large chip and fixed clock, MS would have more thermal issues to deal with than Sony, so they would go for a more efficient clocking.
Sony will have a thermal density problem, but they deal with it by clocking both parts for equal thermal density against the 3.5GHz CPU, which indicates they are not producing any bigger hot-spot than MS despite the higher GPU clock. Unless there's something funky happening with AVX256 on the MS side.
It should be variable as low as sub-2000, which is normal for a boost mode to drop under heavier loads.
You have no data to make that claim, since you don't know how much RDNA2 improved efficiency, nor how much the rework AMD has done, placing data closer to where it's needed, improves clocks on the architecture, nor the changes Sony required for their own variation of RDNA2.
While it's true we don't know the exact amounts, Cerny himself made the claim that with fixed clocks it was very difficult to get it over 2GHz, meaning it's the boost that gets it above 2GHz. Meaning that when the load is high enough it should dip back to sub-2GHz, as in: the load is high enough to take boost out of the equation and bring the clock down to where they could have achieved it as a fixed clock, as per his original claims.
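(To make the "dips back under 2GHz when the load is heavy enough" reading concrete, here's a toy model of an activity-driven clock cap. The wattages and the cubic power-vs-frequency assumption are invented for illustration; this is not Sony's or AMD's actual algorithm, just the general shape of a deterministic, workload-based limiter rather than a thermal one.)

```python
# Toy model: the clock is derived purely from the (modeled) workload activity,
# never from temperature, so every unit behaves identically for the same code.
F_MAX_MHZ = 2230.0             # advertised peak clock
POWER_CAP_W = 180.0            # invented GPU power budget
P_AT_FMAX_FULL_LOAD_W = 260.0  # invented: power at F_MAX if activity were 100%

def gpu_clock_mhz(activity):
    """activity in [0, 1]: fraction of the worst-case switching activity."""
    power_at_fmax = activity * P_AT_FMAX_FULL_LOAD_W
    if power_at_fmax <= POWER_CAP_W:
        return F_MAX_MHZ                           # typical load: full clock
    # Power ~ f * V^2 with V rising alongside f, so roughly f^3:
    return F_MAX_MHZ * (POWER_CAP_W / power_at_fmax) ** (1 / 3)

for a in (0.5, 0.69, 0.9, 1.0):
    print(f"activity {a:.2f} -> {gpu_clock_mhz(a):.0f} MHz")
# activity 0.50 -> 2230, 0.69 -> 2230, 0.90 -> ~2043, 1.00 -> ~1972
```

With numbers like these, only the worst-case loads pull the clock back under 2GHz while everything lighter sits at the full 2230MHz; how often real workloads cross that line is exactly the part nobody outside Sony has data on.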
The clock difference between the two is in the same ballpark as between the XB1X and PS4 Pro. However, the TDP-related engineering margin is not required on PS5, while the XSX requires it for any hypothetical future game which might reach up to the burn-test TDP, even if that's extremely unlikely. Sony doesn't need that margin.
There's no difference between PS5 and XSX in this regard. All Cerny did was create a boost mode for consoles, so that all consoles share the same amount of boost, with the game code being the factor that controls the boost rather than thermals.
I'm not saying Sony has a cooling issue, but I'm more inclined to believe the rumours about cooling/heating/yield issues for PS5 than for XSX when I consider the above.
The solution is not ideal, which is why it's never been adopted in the PC space, mainly because on PC we can add aftermarket cooling to keep clock rates running high.
You're missing the part where it impacts the entire design. A console that will draw 150W in the majority of games, but whose calculated worst case is a 200W "burn test", must be designed for 200W even if no game ever reaches that. The other can be designed for 150W all around and will only downclock under hypothetical conditions that may or may not ever happen. Both would provide the same performance and typical power consumption, and the variable clock will have exceptions based on how well they predicted what future devs will do over the next 6-7 years. The fixed clock has to be much more conservative with its clock because of that: it needs to pass thermal tests at 200W. This is why no modern GPU or CPU, from smartphones to high-end servers, uses a fixed clock anymore. It's a big waste on the BOM, and a reliability risk, unless they literally build the power/cooling system around the engineering TDP instead of the real-world expected power. The difference between cooling 150W and cooling 200W is a significant challenge if the BOM has to stay under control.
The real world uses thermal boost clocks everywhere because they only need to manage thermals, and each owner is given the ability to customize how they want to manage that; this is provided for the benefit of the owner, to manage heat within their own environment as they see fit.
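(For what it's worth, the 150W vs. 200W design-point argument is just this arithmetic; the wattages are the hypothetical figures already used in the post above, not real console numbers:)

```python
typical_power_w = 150.0   # assumed draw for the majority of games (hypothetical)
worst_case_w    = 200.0   # synthetic "burn test" worst case (hypothetical)

# A fixed-clock design has to size PSU/cooling for the outlier;
# a power-capped variable-clock design can size for the cap it enforces.
fixed_design_point    = worst_case_w
variable_design_point = typical_power_w

headroom_w = fixed_design_point - variable_design_point
print(f"fixed clock must be built for:   {fixed_design_point:.0f} W")
print(f"variable clock can be built for: {variable_design_point:.0f} W")
print(f"headroom paid for in the BOM:    {headroom_w:.0f} W "
      f"(+{headroom_w / variable_design_point:.0%} cooling/PSU capacity)")
```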
You changed the subject again with baseless conjectures. I'm going to move on.
As for fixed clocks, both are fixed, it's just how they are fixed.
They are fixed to ensure that each and every console performs exactly the same as the other:
One is bound by frequency;
the other is bound by game code.
While you are right that a fixed clock comes with its inefficiencies, being more conservative and further from the limit:
With one, you know exactly how much performance to budget for; you can bank on all of it being there, and optimization becomes a matter of data and memory management.
With the other, the budget is constantly shrinking away as you increase the load.
A non-issue on PC, where you leave the user to decide their own experience.
A bigger issue on console, where the developer is responsible for the user experience.
Can you imagine budgeting the game for so many millions of triangles per second at 2230MHz, and once the load gets too high and the system downclocks, your triangle load is way out of budget? (Some toy numbers after this post.) Same problem as fixed clocks now; you're just using a lighter load to keep those clocks up. Same problem, different path.
So in theory it sounds like you're getting more performance out of boost mode, and that's true in the way PC operates; I'm not so sure about the console space.
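(Toy numbers for the triangle-budget example above. The 4-primitives-per-clock front-end rate is a hypothetical figure purely for illustration, not a confirmed spec; only the 2230MHz clock comes from the discussion.)

```python
PRIMS_PER_CLOCK = 4  # hypothetical front-end rate, for illustration only

def tri_budget_per_sec(clock_mhz):
    # Fixed-function primitive throughput scales linearly with clock.
    return PRIMS_PER_CLOCK * clock_mhz * 1e6

peak = tri_budget_per_sec(2230)
for clock in (2230, 2100, 2000):
    b = tri_budget_per_sec(clock)
    print(f"{clock} MHz: {b / 1e9:.2f}G tris/s ({b / peak:.0%} of the peak budget)")
```

A game budgeted against the full peak figure is over budget the moment the clock sags; the alternatives are leaving margin or keeping the load light enough that it never sags, which is the trade-off described above.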