Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

It takes significantly more labour to do this correctly. And the challenge for this generation is getting lighting and shadows to the next level, which I don't think there is enough power to do properly. Even setting that aside, the increase in graphical options may not be noticeable either; whereas 4K with high-res textures and a good HDR implementation is likely cheaper and resolves textures better at a distance, giving that next-gen look.

If we want more detail or better lighting and shadows, that will have to come next gen.
Pixel quality > pixel quantity.

I agree that 60fps should be the standard. We've gone through many resolution increases but framerate is stuck at pre-Dreamcast levels.
 
Game prices stay the same :)
Development costs keep going up :)

I love reading technical stuff about how games look better with newer techniques and whatnot. But for pixel quality to keep increasing we need to keep seeing evolutions in the way games are developed to bring the $$ per pixel down, if that makes sense. This is why I think ray tracing = awesome. Better tools, less baking, etc. Pixel quality goes up, but I don't think the costs to the studio necessarily do. Well, I suppose supporting both RT and non-RT does, but perhaps if RT is standard in consoles that issue gets resolved faster.
 
The costs are mostly on the art side. I'd be fine with PS2 level assets rendered with RTRT :devilish:
 
- Establish full development studios (not just support studios making environment assets) in lower-wage countries with robust talent pools, e.g. Ukraine, Turkey, Poland, and maybe India in the future
- Automate, automate, automate those jobs away (e.g. some of those deep learning algorithms are starting to get frighteningly capable, so you might be able to cut the animation team by a quarter)
- Hire someone with a whip
- Shackle staff to their desks
 
:(
Sadly this is mainly true and likely in the process of happening.
 
I'm pretty sure it was mentioned to be some sort of SSD when the first beans were spilled by Sony?
It was in contention. I think we all knew it was, but the writer said Mark Cerny refused to acknowledge it, even though the writer used "SSD" liberally in the article.
 
I didn't understand the 3rd image comparing Vega 64, 56 and Fury X, could you please explain?

I forgot to hover over the chart, which would have made it clearer. The chart shows that clock for clock Vega (5th gen GCN) has an advantage of 6% over Fiji (3rd gen GCN), which is basically the same advantage that Polaris (4th gen GCN) has over Tonga (3rd gen GCN). That means even though Vega has quite a few architecture changes, its clock-for-clock performance seems to be the same as Polaris.

vega-vs-fiji-1.png

They tested it in the same way that they tested Polaris against its predecessors. They used cards (Vega 64, Fury X) that have the same number of shaders (4096), the same number of ROPs (64), the same number of TMUs (256) and equalized their clocks to get the same TFLOPs (8.602) and the same memory bandwidth (512 GB/s). One advantage that Vega still has is its 8 GB of memory compared to Fiji's 4 GB (which is why I used the 1080p comparison) and its 4 MB L2 cache compared to Fiji's 2 MB.
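For anyone who wants to sanity-check the equalisation, here's a minimal sketch of the arithmetic behind it. The ~1.05 GHz core clock and the 2.0 Gbps HBM2 pin rate are my assumptions back-calculated from the quoted 8.602 TFLOPs / 512 GB/s figures, not numbers taken from the Computerbase article:

```python
# Rough arithmetic behind the "equalised" Vega 64 vs Fury X comparison.
# The 1.05 GHz core clock and 2.0 Gbps HBM2 pin rate are assumptions
# derived from the quoted 8.602 TFLOPs / 512 GB/s figures.

def tflops(shaders: int, clock_ghz: float) -> float:
    """FP32 throughput: 2 FLOPs (one FMA) per shader per clock."""
    return 2 * shaders * clock_ghz / 1000.0

def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * gbps_per_pin / 8

# Both cards have 4096 shaders, so running both at ~1.05 GHz matches TFLOPs:
print(tflops(4096, 1.05))          # ~8.602 TFLOPs

# Fiji's HBM1: 4096-bit at 1.0 Gbps; Vega's HBM2 assumed raised to 2.0 Gbps
# on its 2048-bit bus so both land on the same 512 GB/s:
print(bandwidth_gbs(4096, 1.0))    # 512.0 GB/s
print(bandwidth_gbs(2048, 2.0))    # 512.0 GB/s
```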

But as you can see, 5th gen GCN (Vega) is only 6% faster than 3rd gen GCN (Fiji), which is the same advantage that Polaris (7%) has over Tonga (3rd gen GCN). Computerbase.de re-did the test a year later [0] to see if anything had changed due to newer drivers, and it did. The gap over 3rd gen GCN increased to 10%.

vega-vs-fiji-2.png

The problem is that Polaris also widened its gap over 3rd gen GCN to 10%, meaning that Vega's new features seem to do nothing to improve performance in games, which is why Computerbase concludes that the performance advantages boil down to having more shaders, clocks, bandwidth, etc.

Clockability is of course a feature of the architecture, but considering AMD's claims about the changes I would have thought that it improves over Polaris at the same clock.

Do you think the 1.8 GHz+ rumors are insane? Can different GPU architectures aim for very different clock speeds by changing things like pipeline stages as happens with CPUs?


I wouldn't say insane, let's just say that I'm skeptical unless Navi is much improved over Vega 20 (Radeon VII) or it proves to be more viable to choose a smaller die + higher clocks compared to a larger die + conservative clocks (due to yield or other factors).

The architecture definitely plays an important role for clockability considering that AMD (Vega, Vega 20) and Nvidia (Pascal) have said they made changes to allow better clocks.

Halfway through the text I think you meant to say that 60 CUs (and not 64) like in the Radeon VII would be unlikely, and I for sure agree if we're talking about the PS5. But recently there was another great post by iroboto here that suggested using different chips for different consoles and servers could allow lower-yield parts to be used in Anaconda.

Sorry, that was a leftover from the post that I forgot to delete. It was about yield: even if I find the high clock speed unlikely for a console, maybe Sony or MS decided to go for a high clock and a small die because of yield (as other users have pointed out to me a few pages back).

Yes, what iroboto wrote would certainly make sense for MS if they want to reuse as much as they can between consoles and servers, and I hope that's what they do. The thing I look forward to the most in closed systems like consoles is the non-standardized design possibilities, their "secret sauce", clever engineering or whatever we want to call it.

They can't win on raw clocks, amount of RAM, TFLOPs, etc. against upgradeable and less constrained systems like the PC, which is why I hope Sony and MS use different and clever ways to achieve their goals.

For example, MS with chiplets and/or a small APU for Lockhart and a big APU shared between Anaconda and xCloud/Azure, while Sony might impress with their custom SSD solution, maybe even using HBCC to tie different memory pools together. Or weird cooling solutions like the Sandia-inspired Thermaltake Engine:


Maybe one of them uses new packaging technologies like TSMC's InFO (or something else), which (according to TSMC and Cadence) seems to be better suited for high-volume consumer products like consoles from a cost standpoint compared to CoWoS.

While CoWoS is used in lower-volume, high-performance markets where the benefits of high performance and large dies outweigh the costs, it would be cool if something like InFO found its way into consoles as secret sauce ("targeted at a consumer price point", "high-volume consumer markets require something that is lower cost and don't need the bleeding-edge performance").

The usual die/package size of consoles seems to suit InFO and its siblings:

InFo_CoWoS.jpg

I don't know about MS, but Sony seems to be no stranger to innovating when it comes to packaging, as can be seen on the Vita, like @anexanhume pointed out somewhere.

Secret sauce like that is why I look forward to every new console launch. Plus the clever ways that engine developers have to come up with to squeeze more performance out of the consoles down the road, instead of brute forcing it like on the PC where you can just throw more hardware grunt at it.

What would be your TDP prediction for a 3.0 GHz 8 core Zen 2 used in consoles under full load? How much power would everything that is not the APU consume in a next gen console?

I don't know, unlike GPUs where we already have a 7nm product from AMD to use as a basis for speculation, we don't have anything substantial for 7nm CPUs from AMD yet. Only the "preview" / "early sample" (according to Lisa Su) of a Zen 2 CPU with non-final clocks and a TDP of 65W shown at CES this year, which can compete with Intel's 9900K (95W/105W) in a specific application that is AMD's forte.

Other than that we only have data on AMD's current Zen generation and their clock scaling behaviour [1][2], but who knows how much of that applies to Zen 2.

zen_plus.png
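If one wanted to turn that scaling data into a TDP guess, the usual relationship is dynamic power ∝ V² × f. Here's a minimal sketch of how such an extrapolation would look; the baseline wattage and the voltage points are placeholder assumptions for illustration, not measured Zen 2 figures:

```python
# Back-of-the-envelope CPU power scaling: dynamic power roughly follows
# P ~ C * V^2 * f. The baseline power and the voltage points below are
# placeholder assumptions for illustration, not measured Zen 2 data.

baseline_clock_ghz = 3.0
baseline_voltage_v = 0.95   # assumed operating voltage at 3.0 GHz
baseline_power_w   = 45.0   # assumed 8-core package power at that point

def scaled_power(clock_ghz: float, voltage_v: float) -> float:
    """Scale the assumed baseline by the V^2 * f relationship."""
    return (baseline_power_w
            * (voltage_v / baseline_voltage_v) ** 2
            * (clock_ghz / baseline_clock_ghz))

for clock, volt in [(2.8, 0.90), (3.0, 0.95), (3.2, 1.025), (3.5, 1.10)]:
    print(f"{clock:.1f} GHz @ {volt:.3f} V -> ~{scaled_power(clock, volt):.0f} W")
```

The point being that the last few hundred MHz tend to cost disproportionately more power because voltage has to rise with frequency.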

Also, since Zen 2 is already around 3 times as fast clock for clock compared to Jaguar, they probably have quite some wiggle room on clocks. Especially considering that clocks can also be used as a last-minute counter against the competition (like the clock increase of the Xbox One) or as a marketing tool.

Google Stadia's CPU comes in at 2.7 GHz, and in order to one-up them they could decide to go with 2.8 GHz. From there the magical 3 GHz barrier is not far off, which means they could decide to reach for it. 3.2 GHz could be nice marketing material for Sony if they can spare the power draw, since it doubles the 1.6 GHz of the PS4.
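To put that wiggle room in numbers, here's a rough sketch of the combined per-core speedup over the PS4's 1.6 GHz Jaguar at the clock points mentioned above, using the "about 3x clock for clock" figure from this post as an assumed IPC ratio:

```python
# Rough combined CPU speedup over the PS4's 1.6 GHz Jaguar cores,
# using the ~3x clock-for-clock figure from the post as an assumption.

jaguar_clock_ghz  = 1.6
assumed_ipc_ratio = 3.0   # assumed per-clock advantage of Zen 2 over Jaguar

for zen2_clock in (2.7, 2.8, 3.0, 3.2):
    speedup = assumed_ipc_ratio * zen2_clock / jaguar_clock_ghz
    print(f"{zen2_clock} GHz Zen 2 -> ~{speedup:.1f}x per core vs 1.6 GHz Jaguar")
```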

Can we say that the increase in clock speeds and efficiency of the PS4 Pro and X1X CPU was just to let the new GPU do its job, in the sense that it didn't itself help increase the performance of the new machines?

I would say they can't show their full performance potential due to the baseline, which is still 1st gen GCN (PS4, Xbox One).

If you downclocked the PS4 Pro and X1X to reach the same TFLOPs as the PS4 and Xbox One, they should show better performance due to architecture changes and things like DCC (just like Polaris on PC).
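Just to illustrate what such a downclock would mean with the widely reported shader counts (this is only the arithmetic behind the thought experiment, not a test anyone has actually run):

```python
# Clock each mid-gen GPU would need to match its base console's TFLOPs,
# using the widely reported shader counts; purely illustrative arithmetic.

def clock_mhz_for_tflops(target_tflops: float, shaders: int) -> float:
    """Core clock (MHz) so that 2 * shaders * clock hits the target TFLOPs."""
    return target_tflops * 1e6 / (2 * shaders)

print(f"PS4 Pro (2304 SPs): ~{clock_mhz_for_tflops(1.84, 2304):.0f} MHz to match the PS4's 1.84 TFLOPs")
print(f"X1X     (2560 SPs): ~{clock_mhz_for_tflops(1.31, 2560):.0f} MHz to match the Xbox One's 1.31 TFLOPs")
```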

But at the same time they can't show their full performance potential. Considering how clever engine programmers are at exploiting architecture features and their quirks, I wouldn't be surprised if they have already thought of many ways to increase performance even more but can't do so because the engine still has to run on 1st gen GCN. So they can sprinkle their magic here and there and hide stuff behind compiler flags for the PS4 Pro and X1X builds, but they can't exploit every performance feature to its full potential.

[0] https://translate.google.com/translate?sl=de&tl=en&u=https://www.computerbase.de/2018-05/amd-radeon-rx-vega-untersucht/2/

[1] https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/

[2] https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-72#post-39391302
 