Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

I'm sure both MS and Sony (especially the latter) will make sure the CPU, RAM and bandwidth are future proof, including bandwidth to mass storage that is properly buffered. GPU teraflops can always be increased in the "Pro" iteration, which will reasonably arrive as the 6nm process matures in 2021, or maybe on 4nm or similar... so, for marketing, it will have 10 TF (though for the PS5 itself, just doubling the PS4 Pro's TF could be enough). At the start we'll simply see PS4 Pro games that run at 30 fps running on PS5 at 60 fps. I also believe in a 7nm revamp of a really cheap, quiet, small PS4 Pro, which could even move to 6nm later. Remember, the ~100 million PS4s already out there are the REAL concern for Sony: not leaving them behind.

There will for sure be cross-gen games, but those will include the base PS4. In fact, the Pro becomes irrelevant once the PS5 comes out and will probably stop being manufactured by Sony, while the base model gets a super slim revision later this year.
 
Can we leave the pointless TF discussion and talk about performance?
No. Some people like calculating specs and talking about these numbers. None of those metrics factor in bottlenecks, so calculating RAM max BW doesn't really tell us how efficiently that RAM will be used, but it's at least something. That doesn't prohibit others from talking about features. If you don't like the stats, just ignore them.
 
Do we have any idea about the likely relative costs of GDDR5 vs 6 currently, and during 2020?

A 4 TF console might just be in the range that a 256-bit bus at 8 Gbps/pin could sustain. If Navi is a big improvement in terms of bandwidth efficiency (Nvidia works wonders with relatively little BW), and GDDR5 is cheap enough to offset the extra board and power costs, could the crusty old standard have another outing?

Obviously something like 192-bit GDDR6 would be a lot better, but I'm wondering if, in the effort to push costs as low as possible on day one, scrubbing around in the bins for GDDR5 might be a goer.
 
I think you would be right on the edge. That's 256 GB/sec that would need to be shared between the CPU and GPU. Navi would have to be extremely efficient.
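A quick back-of-envelope check of that figure (just a sketch; the 256-bit bus and 8 Gbps/pin are the assumptions from the post above):

# Peak bandwidth of a 256-bit bus at 8 Gbps per pin.
bus_width_bits = 256
gbps_per_pin = 8
peak_gb_per_s = bus_width_bits * gbps_per_pin / 8  # divide by 8 to go from Gbit to GByte
print(peak_gb_per_s)  # 256.0 GB/s, shared between CPU and GPU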
 
This got me thinking: the Chinese Subor Z-Plus console has a 4C/8T Ryzen at 3 GHz with a 24 CU / 1300 MHz GPU (3.99 TF) tied to 8 GB of GDDR5.

The next gen Xbox would likely double the CPU core count and swap out the Vega for a Navi GPU. But that kind of setup is out there.

I wish DF had done more testing with this thing, but it's been silent since the initial look (I think).
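For reference, that 3.99 TF figure drops straight out of the usual GCN throughput formula (64 shaders per CU, 2 FLOPs per shader per clock via FMA); a quick sketch:

# GCN theoretical FP32 throughput: CUs * 64 shaders * 2 FLOPs (FMA) per clock.
def gcn_tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(gcn_tflops(24, 1300))  # Subor Z-Plus: ~3.99 TF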
 
Do we have any idea about the likely relative costs of GDDR5 vs 6 currently, and during 2020? [...]

Yes, we do. As of March 28th, 2018.

https://www.gamersnexus.net/industr...-next-gen-nvidia-gpus-3-month-mass-production

Also of interest, we learned that GDDR6 will be about 20% more expensive in manufacturer cost than GDDR5 at launch. That cost will obviously be passed on from board vendors to consumers. GDDR6 should come down to +15% and +10% over time, as mass production ramps and overtakes GDDR5 production lines in the factories.
 
[...] Can we leave the pointless TF discussion and talk about performance?

But TFLOPs are not pointless when talking about GPU performance within the same family. Unless Navi is a major change to GCN that, for example, has much better efficiency and maybe additional clockability compared to Vega 20, we can roughly guesstimate what performance to expect from the next gen consoles based on the TFLOPs (in conjunction with power draw) of previous and current GCN generations.

gcn-generations.png

Looking at these charts from computerbase.de [0] we can see that the clock for clock performance increase between GCN generations is decent but not that large.

To compare the generations they picked cards (280X, 380X, RX 470) with the same number of shaders (2048) and equalized the clocks to get the same TFLOPs (~4.26) as well as the same memory bandwidth (~211 GB/s, with the exception of the 280X, which has 10 GB/s more due to its 384-bit bus).
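If I'm reading the methodology right, the equalized clock falls out of the same shader-count arithmetic: for 2048 shaders at 2 FLOPs per clock, ~4.26 TF corresponds to roughly 1040 MHz on all three cards (a sketch, not computerbase's exact numbers):

# Clock needed to hit a target FP32 throughput with a given shader count (2 FLOPs/shader/clock).
def clock_for_tflops(target_tf, shaders):
    return target_tf * 1e12 / (shaders * 2) / 1e6  # result in MHz

print(clock_for_tflops(4.26, 2048))  # ~1040 MHz for the 280X / 380X / RX 470 comparison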

3rd gen GCN (Tonga, Fiji) was the first generation with delta color compression which is probably a decent contributor to the better performance. 4th gen GCN (Polaris) has even better color compression which I would guess is again a decent contributor to the increase considering how bandwidth starved GCN seems to be.

amd-compression.png

Interestingly, if we compare 5th gen GCN (Vega) to 3rd gen GCN (Fiji) we only see similar gains to what Polaris (4th gen GCN) had, even though AMD changed the compute units quite a bit according to their marketing material. HardOCP [1] and GamersNexus [2] show similar or even worse results when comparing Fury and Vega.

v56-v64-fury.png

Some of the architectural changes in Vega have been said to be inactive due to problems, not working correctly, or just not delivering the advertised performance increases outside of special applications (e.g. DSBR). So maybe bringing those Vega features over to Navi in a working state, and adding new Navi features (possibly variable rate shading?), will lead to a bigger performance increase than between other generations.

[...] but is it crazy to believe that after 3 years we will get a GPU with more than twice the performance? It doesn't seem unlikely to me. [...]

Is it crazy? No, they would "only" need to double the TFLOPs and bring the usual generation-to-generation performance increase and we would see more than twice the performance. There are also working current gen features that will only really be used once all new consoles have them. For example, FP16 is only available on the PS4 Pro and Switch, which together have maybe ~20% market share (just spitballing), so if Lockhart, Anaconda and the PS5 all support FP16 we will probably see more engines take advantage of it, which should increase performance.
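For context on why FP16 matters: on hardware with double-rate ("rapid packed math") FP16, the theoretical throughput simply doubles wherever half precision is good enough. A trivial sketch using the PS4 Pro's numbers:

# Double-rate FP16 doubles theoretical throughput over FP32 where half precision suffices.
ps4_pro_fp32_tf = 4.2
ps4_pro_fp16_tf = ps4_pro_fp32_tf * 2
print(ps4_pro_fp16_tf)  # 8.4 TF of FP16 throughput on the PS4 Pro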

Of course, if AMD knocks it out of the park with Navi, maybe a GPU with 9 TFLOPs will already deliver twice the performance. For example, Nvidia's Turing based RTX 2060 FE with 6.5 TFLOPs is almost as fast as the Pascal based GTX 1080 FE with 8.9 TFLOPs. But I doubt it; Nvidia has better generation-to-generation increases than AMD.
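Putting rough numbers on that Turing example (treating "almost as fast" as equal performance, which is a simplification):

# Perf-per-TFLOP ratio if the RTX 2060 FE matches the GTX 1080 FE in games.
tf_2060 = 6.5
tf_1080 = 8.9
print(tf_1080 / tf_2060)  # ~1.37, i.e. roughly a third more performance per TFLOP for Turing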

Since I didn't dare to use the web to search for better comparisons because Mozilla fucked up and deactivated all the add-ons for all Firefox users - including essential stuff like NoScript and ad blockers - let's use the performance per watt data to show the difference (they compare the cards while running WoW at 60 fps). I added some scribbles to the pictures.

perf-watt-amd-nvidia.png

Since the consoles have a limited power draw budget we also have to look at the power draw in order to guesstimate what performance we can expect. For example the Xbox One X draws ~180 watt while gaming, which not only includes the GPU portion but the CPU, RAM, HDD, fans etc. as well. The original PS4 and Xbox One are in the same ballpark.

If we look at the Radeon VII we see that AMD can deliver 13.44 TFLOPs on 7nm by running 60 CUs @ 1750 MHz. But to get there it needs a power draw of 288 watts for the GPU alone, which goes far beyond the console power budget:

radeon-7-power-draw.jpg

Unlike the Radeon VII, the consoles will most likely use GDDR6, which has a bit more power draw than HBM. Sadly I don't know of a chart which shows how much the power draw of the Radeon VII changes when the clocks are lowered. Otherwise we could guesstimate better, because it would only need ~1565 MHz to reach ~12 TFLOPs, which seems like a more realistic console clock speed to me. On the other hand, the Vega 20 die used in the Radeon VII has 64 CUs, which could be a problem for consoles due to yields.
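The Radeon VII numbers in that paragraph, worked through with the same GCN formula (a sketch):

# Radeon VII: 60 active CUs, 64 shaders per CU, 2 FLOPs per clock.
cus, shaders_per_cu = 60, 64
print(cus * shaders_per_cu * 2 * 1750e6 / 1e12)  # ~13.44 TF at 1750 MHz
print(12e12 / (cus * shaders_per_cu * 2) / 1e6)  # ~1562.5 MHz needed for ~12 TF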

For a recent power draw chart with AMD cards I'm only aware of the following one for the RX 580 vs RX 590. If Vega 20 behaves similarly and the last ~200 MHz increase the power draw by 75-100 watts, then Vega 20 would consume 188-213 watts for ~12 TFLOPs (= roughly double the performance of the Xbox One X, since Polaris, which it uses, and Vega seem to perform similarly in games).

590-watt.png
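To make the arithmetic behind that 188-213 watt range explicit (the 75-100 watt saving is the assumption carried over from the RX 580/590 comparison above, not a measurement of Vega 20):

# Radeon VII gaming power minus the assumed saving from dropping the last ~200 MHz.
radeon_vii_gaming_w = 288
saving_low_w, saving_high_w = 75, 100
print(radeon_vii_gaming_w - saving_high_w, radeon_vii_gaming_w - saving_low_w)  # 188 to 213 W for ~12 TF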

Of course, if the console manufacturers really want to, they have ways to better optimise the power draw, as can be seen with the Xbox One X, which in total consumes less power than the RX 580 while having similar TFLOPs (6.001 vs 6.175), a bigger bus (384-bit vs 256-bit) and more memory (12 GB vs 8 GB), but uses the same architecture (Polaris) - only wider (40 vs 36 active CUs) and lower clocked (1172 vs 1340 MHz).
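Both TFLOPs figures in that comparison come from the same formula, just trading width for clock:

# Wider and slower vs narrower and faster, same GCN math.
print(40 * 64 * 2 * 1172e6 / 1e12)  # Xbox One X: ~6.00 TF (40 CUs @ 1172 MHz)
print(36 * 64 * 2 * 1340e6 / 1e12)  # RX 580:     ~6.17 TF (36 CUs @ 1340 MHz)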

I would guess removing gaming-irrelevant stuff like the 1:4 FP64 rate and machine learning instructions that the Radeon VII has will also make the GPU more efficient.

As the 2nd biggest consumer, the CPU is also interesting for the total power draw since it eats into the power budget for the GPU. When measuring the package of current 65W TDP Zen and Zen+ based Ryzen CPUs, they consume ~50W while gaming and ~80W under full load. But since the consoles can't run their CPU at 20 degrees Celsius, the power draw would be higher. I'm not aware of measurements for AMD's 45W TDP 8c/16t CPU - the Ryzen 7 2700E - which has a base clock of 2.8 GHz.

cpu-watt.png

The 8c/16t Zen 2 sample with a TDP of 65W that was showcased at CES can compete with Intel's 9900K in multi-threaded workloads (AMD's forte). Based on that, users have guesstimated that it would take an additional ~500 MHz plus a ~10% IPC increase over current Ryzen to reach those scores. In other words, Zen 2 running the same clocks as the current CPUs will be more power efficient, so something between 2.8 GHz and 3.2 GHz, as mentioned in almost all rumors, seems realistic for consoles without eating too much into the GPU power budget.
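Just to make the power budgeting explicit with the figures floating around in this post (every number here is a rough assumption for illustration, not a measurement of any actual console):

# Very rough next-gen power budget sketch; all values are assumptions from the discussion above.
total_console_draw_w = 180  # roughly what the Xbox One X pulls at the wall while gaming
cpu_package_w = 50          # ~65W-TDP desktop Zen/Zen+ package while gaming; Zen 2 should do better
memory_storage_fans_w = 30  # pure guess for GDDR6, SSD/HDD, fans, I/O and PSU losses
gpu_budget_w = total_console_draw_w - cpu_package_w - memory_storage_fans_w
print(gpu_budget_w)         # ~100 W left for the GPU portion in this sketch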

So, based on current GPUs' TFLOPs I expect anything between 10 and 12 TF for the PS5 and Anaconda. Which means, depending on the architecture enhancements, I think double the performance of the Xbox One X seems not crazy but realistic.

However, I'm curious if they target native 4K or if they use checkerboard 4K or something along those lines. The Xbox One X needs 4.5 times the power of the Xbox One to render Xbox One games at native 4K, even though 4K has "only" 4 times as many pixels as 1080p - probably because many Xbox One games run below 1080p and quite a few games get enhanced textures, better tessellated models etc. on the Xbox One X. According to Mark Cerny, to run PS4 games at 4K you would need around 8 TF, which seems to be in line with the increase Microsoft gave the One X (PS4 with 1.84 TF * 4.5 would be 8.28 TF).
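The arithmetic behind that paragraph, spelled out:

# 4K vs 1080p pixel ratio, and the One X-style ~4.5x scaling factor applied to the base PS4.
print((3840 * 2160) / (1920 * 1080))  # 4.0x the pixels
print(1.84 * 4.5)                     # 8.28 TF, in line with the ~8 TF Cerny figure for 4K PS4 games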

Which means developers would only have ~4 TFLOPs more to play around with compared to the PS4 (if we take the PS4 as the baseline) if devs target native 4K with consoles that have ~12 TFLOPs. Of course, depending on the architectural enhancements the difference might be more than what the TFLOPs difference suggests, but I only expect gains similar to previous GCN changes.

Luckily the best thing about the new consoles will be the powerful Zen 2 based 8 core CPUs and the SSDs anyway - at least in my opinion. Everything else is only the icing on the cake. Consoles can have advantages due to being closed systems and I hope Sony can deliver something which is even better than "just" a PCIe4 SSD with their custom SSD solution.

TL;DR: I forgot what I really wanted to write halfway through, but posted my jumbled thoughts anyway in order to tempt others with more knowledge/actual knowledge into writing their thoughts as well while correcting me, heh.

[0] https://translate.google.com/translate?sl=auto&tl=en&u=https://www.computerbase.de/2016-08/amd-radeon-polaris-architektur-performance/
[1] https://www.hardocp.com/article/2017/09/12/radeon_rx_vega_64_vs_r9_fury_x_clock_for
[2] https://www.gamersnexus.net/guides/2977-vega-fe-vs-fury-x-at-same-clocks-ipc
 

Thanks for the great post. I didn't know those comparisons existed, it's a very nice read.

I didn't understand the 3rd image comparing Vega 64, 56 and Fury X, could you please explain?

Do you think the 1.8 GHz+ rumors are insane? Can different GPU architectures aim for very different clock speeds by changing things like pipeline stages as happens with CPUs?

Halfway through the text I think you meant to say that 60 CUs (and not 64), like in the Radeon VII, would be unlikely, and I certainly agree if we're talking about the PS5. But recently there was another great post by iroboto here that suggested using different chips for different consoles and servers could allow low yield parts to be used in Anaconda.

What would be your TDP prediction for a 3.0 GHz 8 core Zen 2 used in consoles under full load? How much power would everything that is not the APU consume in a next gen console?

Can we say that the increase in clock speeds and efficiency of the PS4 Pro and X1X CPU was just to allow the new GPU to do its job, in a way that didn't help increase the performance of the new machines?
 
But recently there was another great post by iroboto here that suggested using different chips for different consoles and servers could allow low yield parts to be used in Anaconda.
It was a hypothetical option. I've never done product management for semiconductors, but I can only assume they would bin to be successful in reaching their performance targets and price points. What those bins are and how many there are is outside of my domain knowledge. It's likely going to be 2 or 3: 1 doesn't make a lot of sense if there is only 1 SKU, and anything more than 3 doesn't make a lot of sense either.

Something to keep note of that seems to be up for debate is whether it will be chiplets or an APU. If the latter, these TDP amounts (in the post above) are vastly too high and need to come down significantly, or, as I have predicted in the past, we're going to see something closer to 7-8 TF of power, though you may see the chip have other features that enable graphical fidelity in ways other than brute computational power.

The chiplet strategy would be interesting, but I just think it's too costly for the volumes required by consoles. IMO, the chiplet strategy will greatly cut into the savings at the lower end SKUs. So I don't see this strategy working for MS. Assuming they did go chiplets the power output still needs to stay low.

APU is where you're going to get the best bang for buck for binning strategies. As the labour to assemble chiplets is going to be the same cost whether you use higher binned chips or lower binned chips.

As per the AMD investor slides posted on the Navi thread
Navi_cloud.png


As per MS' commentary about the Azure team working hard with the Xbox team so that the chip is suitable for both Azure compute and game streaming, and since we're seeing in this slide that Navi is being positioned for cloud gaming, at this point in time I'm just waiting for MS to confirm this detail.

But that's where it gets interesting, since the MI25 series are all chiplet based, as are the EPYC series. So chiplets are competent in the data centres.

*shrug* and this is where I can only offer garbage unfortunately. Perhaps we will see chiplets.

So if there are chiplets and there is binning -- then perhaps we'll see bins for the top of the line go into servers since they can handle higher wattages. And the lower bins into consoles since their TDP is going to be limited.
 
Do you think the 1.8 GHz+ rumors are insane? Can different GPU architectures aim for very different clock speeds by changing things like pipeline stages as happens with CPUs?

A lot of transistors went into pushing clock speeds higher in Vega. I would assume physical layout and critical path optimisation would also have played a role as it did in Pascal.

It was a hypothetical option. [...]

Chiplets make sense when you can dip into an existing pool of supply, as will be the case with Zen 2 and accompanying IO die. Whether it offsets the higher cost of packaging is hard to say because we don’t know what those extra costs are.

What’s interesting is that it would enable upgrades of independent aspects of the chip throughout the console lifespan. Maybe a hypothetical Pro model would slot in a different GPU chiplet but retain the same Zen 2 and IO die. Perhaps Maverick and Anaconda will follow the same pattern.

It would also enable easy upgrades to 6nm, at least for the CPU chiplet. AMD will already have supply.
 
The problem with sourcing low yield chips for a specific platform is that MS and Sony will not be using the same design; both companies will make modest modifications to the GPU based on their specific needs.
 
But that's where it gets interesting, since the MI25 series are all chiplet based, as are the EPYC series. So chiplets are competent in the data centres.
I wouldn't call HBM memory chips "chiplets"; the MI25 is Vega 10.
 
Does Sony follow MS with the two-console release strategy, binning the best/worst chips? If yes, the PS4 Pro is not going to be renewed... otherwise it could be, IMHO.
 
The problem with sourcing low yield chips for a specific platform is that MS and Sony will not be using the same design; both companies will make modest modifications to the GPU based on their specific needs.

MS also has its Azure cooperation with AMD for EPYC and probably GPUs.
 
Chiplets make sense when you can dip into an existing pool of supply, as will be the case with Zen 2 and accompanying IO die. Whether it offsets the higher cost of packaging is hard to say because we don't know what those extra costs are.
Agreed. The challenge comes down to what MS and Sony are customizing. If they are customizing both then the idea of dipping into an existing supply pool doesn’t seem to work.

I just slept and woke up. But talking with Al last night while sailing, perhaps the chiplet strategy would make sense for a data centre based Xbox, where the restrictions aren't the same and the performance goals are broader, but the local units are APUs. We already know that MS chose them for EPYC; what if they could assemble the GPU as part of the package?

Just thinking out loud, but there's a possibility for both paths to be correct.
 
On the other hand, that's like a whole different SKU to support.

This seems to be the inevitable future in the console world. In for a penny, in for a pound! I know nothing about AMD's server chips - do they lack any core functionality in the same way Xeons lack some of the hardware of the Core architecture?
 