Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Status
Not open for further replies.
Vega VII is not that much larger than that at 331 mm², and it includes 60 CUs for 13.44 TFLOPS. It would easily fit 48 CUs giving 10.7 TFLOPS, all things equal, so I can't see 10 TFLOPS being out of reach. The only question being power, of course, and how much Sony and MS budgeted for that. But 10 TF is what my expectations are.
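The CU scaling above can be sanity-checked with the usual peak-throughput arithmetic, assuming GCN's 64 shaders per CU and 2 FLOPs per clock (FMA):

```python
# Peak FP32 throughput for a GCN/Navi-style GPU:
# CUs x 64 shaders per CU x 2 FLOPs per clock (FMA) x clock in GHz -> GFLOPS.
def peak_tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000.0

# Radeon VII: 60 CUs at its ~1.75 GHz boost clock
print(round(peak_tflops(60, 1.75), 2))   # 13.44
# Same clock, cut down to 48 CUs
print(round(peak_tflops(48, 1.75), 2))   # 10.75
```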

What FLOPS would Vega VII give when constrained to console power consumption, while also using less power-efficient memory (GDDR6 vs. HBM2)? Will be interesting to see how Navi ends up.
 
What FLOPS would Vega VII give when constrained to console power consumption, while also using less power-efficient memory (GDDR6 vs. HBM2)? Will be interesting to see how Navi ends up.

Don't know, but unlike Vega VII, the console GPU won't have DP hardware.
 
What FLOPS would Vega VII give when constrained to console power consumption, while also using less power-efficient memory (GDDR6 vs. HBM2)? Will be interesting to see how Navi ends up.
This is partly why I think the HBM2/DDR4 rumor has some legs. If HBM supply is set to outpace GDDR6 over the next few years, it makes sense, both from a power and a contention perspective. Sony can probably negotiate a much better price on HBM than RTG can, and DDR4 is headed for the bargain bin.
 
Lol. The GPU is the IO die. Are you expecting a ~500GB/s interface between separate IO and GPU dies? Come on.

AMD needs to have a solution to handle multiple CPU chiplets with GPU chiplet(s). From what I can find, IFOPs can use multiple connections. Ryzen Threadripper uses 2, for instance, for 85 GB/s at a MemCLK of 1333 MHz. They might scale this to deal with 500 GB/s+.
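A rough sketch of that scaling, assuming each IFOP link moves 32 bytes per MemCLK (an assumption, but one consistent with the 2-link, 85 GB/s Threadripper figure quoted above):

```python
import math

# Assumed: each IFOP link transfers 32 bytes per MemCLK cycle.
def ifop_bw_gbs(links, memclk_ghz, bytes_per_clk=32):
    return links * bytes_per_clk * memclk_ghz

# Threadripper: 2 links at MemCLK 1.333 GHz -> ~85 GB/s, matching the figure above.
print(round(ifop_bw_gbs(2, 1.333), 1))   # 85.3

# Links needed to reach 500 GB/s at the same MemCLK:
print(math.ceil(500 / ifop_bw_gbs(1, 1.333)))   # 12
```

So hitting GPU-class bandwidth over IFOP-style links would take roughly a dozen of them at that clock, which is why a much wider or faster interface would be needed.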

https://www.overclock3d.net/news/cp..._stacked_memory_and_moving_past_moore_s_law/1

If you look through these diagrams they still use the I/O chiplet separate from the GPU in a HBM setup.

Then there's AMD's planned support for Gen-Z which seems to replace Infinity Fabric.
 
In terms of GPU clocks, I'm guessing around 70% of base clocks of the desktop parts, maybe 80% with some of dat Hovis sauce, but no more than that. Navi 10 release is going to be super interesting in this thread.
 
since current gen XBO hardware feature set?
oh man that list is massive.
uh okay, this list may not be 100% accurate, but I don't think the OG XBO has these, and neither should Scorpio, at least as supported in the FL1_2 format. But I think if you were just checking for hardware features, it might be easier to just check for D3D12_FEATURE_D3D12_OPTIONS6 <- as this is the latest known one that contains everything listed below. Some of these were rebuilt because of alignment issues across multiple vendors. So Xbox may not necessarily have the same features, since it's its own walled garden.
Then from a ShaderModel level.
I think the OG XBO is mainly SM6.
But it may not contain the hardware support for some of those items.
We are now at SM6.4 - which I guess is the model that supports DXR and DML and whatever other little things that needed to be included.
To support SM6.4 there is only one bit - so you either support it all or you do not, unlike SM6, in which you can support some features and not others.
DXR and DirectML are not hardware features. Those are just software APIs.
 
yea, I don't see it either ;)

I'm not big on the chiplet idea. Assembly and chip costs are too high for a product that's sold dirt cheap.

I don't understand this belief. What's being proposed isn't really all that new to the console space. The 360 had a similar design: a CPU that depended on the GPU, because the GPU housed the northbridge. The GPU also had a chiplet, or daughter die (the EDRAM).

Exactly what's too expensive about chiplets for use in a console coming out in 2020?
 
DXR and DirectML are not hardware features. Those are just software APIs.
they have hardware requirements that are optional (but very beneficial if present). The ask was what has been released since the OG Xbox. And everything I listed is the same API. Those are features of DX12 SM6.4.

If we're going to get technical about it. DirectML and DXR are part of DX12. They aren't isolated and separate entities.
 
I don't understand this belief. What's being proposed isn't really all that new to the console space. The 360 had a similar design: a CPU that depended on the GPU, because the GPU housed the northbridge. The GPU also had a daughter die (the EDRAM).

Exactly what's too expensive about chiplets for use in a console coming out in 2020?
Chiplets require multiple dies - I/O, CPU, GPU, and any others - and then additional assembly.
Just seems more expensive than having it all on one SoC.
 
Chiplets require multiple dies - I/O, CPU, GPU, and any others - and then additional assembly.
Just seems more expensive than having it all on one SoC.

It's more expensive if your SoC can be produced at a size, and at yields, that easily fit within your cost range. But 350 mm²+ APUs at 7nm might not fit within the cost requirements at the typical price range of consoles. The 360 ended up having one of the first APUs ever released, but it started life with a chiplet-like design, and that's likely because a 360 APU at 90nm wasn't feasible.
 
It's more expensive if your SoC can be produced at a size, and at yields, that easily fit within your cost range. But 350 mm²+ APUs at 7nm might not fit within the cost requirements at the typical price range of consoles. The 360 ended up having one of the first APUs ever released, but it started life with a chiplet-like design, and that's likely because a 360 APU at 90nm wasn't feasible.
I was under the impression that the cost/yield crossover for something like Infinity Fabric was when you started to get around the 700 mm² range? Which is why we don't see Threadripper-style designs for smaller CPUs?
 
It's more expensive if your SoC can be produced at a size, and at yields, that easily fit within your cost range. But 350 mm²+ APUs at 7nm might not fit within the cost requirements at the typical price range of consoles. The 360 ended up having one of the first APUs ever released, but it started life with a chiplet-like design, and that's likely because a 360 APU at 90nm wasn't feasible.
CPU (IBM) and GPU (AMD) were sourced from different suppliers on different lithographic processes. To get them to fuse their IP into a single device was quite remarkable from an industry point of view.
 
I was under the impression that the cost/yield crossover for something like Infinity Fabric was when you started to get around the 700 mm² range? Which is why we don't see Threadripper-style designs for smaller CPUs?

Monolithic CPU vs. multi-die CPU isn't the same thing as using a separate CPU and GPU die vs. an APU.

I'm not sure where you get the 700 mm² figure outside of the comparison of a monolithic 32-core CPU vs. Epyc. But the manufacturing cost of the Epyc MCM was estimated to be 60% of a monolithic design, so I highly doubt you need to go beyond 700 mm² to see a cost savings.
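A toy Poisson yield model shows why an MCM can undercut a monolithic die on silicon cost well below 700 mm². The 0.2 defects/cm² density is an assumed placeholder, not a published 7nm figure, and packaging/assembly costs (the counterpoint raised earlier in the thread) are ignored here:

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * area).
# Defect density of 0.2 defects/cm^2 is an illustrative assumption only.
def relative_cost(area_mm2, defect_density_per_cm2=0.2):
    area_cm2 = area_mm2 / 100.0
    yield_frac = math.exp(-defect_density_per_cm2 * area_cm2)
    return area_mm2 / yield_frac   # silicon area paid per good die

mono = relative_cost(700)          # one monolithic 700 mm^2 die
chiplets = 4 * relative_cost(175)  # four 175 mm^2 chiplets, same total area
print(round(chiplets / mono, 2))   # 0.35 -> ~35% of the monolithic silicon cost
```

With these assumptions the chiplet approach pays for only about a third of the monolithic design's silicon, roughly in line with the ~60% total-cost estimate once packaging and assembly are added back in.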
 
Yeah, keep in mind that Infinity Fabric is meant to be their next-gen replacement solution for the frankenstein/duct-tape that they've been using in the past (*cough* strapping two jaguar quads together then bolting it onto a GPU stapled to some other bits >_>), so it's not specifically for Itanic-class chips.
 
8 TFLOPS isn't going to happen. What kind of CU count are you predicting for 8 TFLOPS?

I predicted 8.7TF FP32 and 17.4TF FP16 for a 40 CU GPU running at 1.7 GHz.

It's a console, it is constrained by power consumption (<250W) and cost (<$499). Your 60 CU GPU burns 300W alone, now add CPU, storage and internal PSU. Then build a memory system to sustain this GPU (Radeon VII has 1TB/s bandwidth) and keep cost below $499.

We are talking more than double the performance of the PS4 Pro, and considering FP16 adds 30% performance when utilized, we end up with ~1.9x the X1X performance. That's a similar increase to what we got from PS4 -> Pro, and for all those who upgrade from a PS4/X1/X1S it is truly a generational upgrade.
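A back-of-envelope check of that arithmetic, taking the X1X at its 6.0 TFLOPS FP32 figure:

```python
# Numbers from the post: 8.7 TFLOPS FP32, +30% when fp16/RPM is utilized.
fp32_tflops = 8.7
fp16_uplift = 1.30   # the post's assumption, not a measured figure
x1x_tflops = 6.0

effective = fp32_tflops * fp16_uplift
print(effective / x1x_tflops)   # ~1.9x the X1X
```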

Cheers
 
People here are out of their goddamn minds.

Next gen console will launch at $499 and like the past generation it is highly unlikely they will be sold at a loss.

The console vendors will need to produce a system with CPU/GPU, DRAM, storage, PSU and a controller, and sell it at the same price as an RTX 2070 8GB.

There is zero, ZERO, chance that we will see a 64 CU GPU in next-gen consoles: 1.) the cost of the die is too large, 2.) the power consumption is too large, 3.) the bandwidth demand on the memory subsystem, and consequently its price, is too large.

Both MS and Sony are going to compete against console-as-a-service providers next gen, and that puts tremendous downward pressure on the purchase price of physical consoles.

I would expect a 48 CU GPU die, with only 40 active to ensure as many usable dies as possible. I would expect MS to pair hot GPU dies with cool CPU dies to maximize the power/yield point. If they can hit 1.7 GHz, then that's 8.7 TFLOPS FP32 and 17.4 TFLOPS using packed FP16. I would expect it to be paired with 16GB GDDR6 on a 256-bit bus running at either 13 or 14 Gbps (~400 GB/s bandwidth).
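The bandwidth ballpark follows directly from bus width times per-pin data rate:

```python
# Peak memory bandwidth: bus width (bits) / 8 x per-pin data rate (Gbps) -> GB/s.
def mem_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(mem_bw_gbs(256, 13.0))   # 416.0 GB/s
print(mem_bw_gbs(256, 14.0))   # 448.0 GB/s, i.e. the ~400 GB/s ballpark above
```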

What Lockhart is/isn't is just speculation at this point; everything I've read originates from a Reddit post in February, AFAICT. If it isn't just a SKU with gimped storage (no optical drive, half the SSD), it might be a client for MS' console-in-the-cloud service. It could be an APU with limited capability - enough to play existing XB1 titles, while everything more demanding would be streamed from a server.

Cheers

I predicted 8.7TF FP32 and 17.4TF FP16 for a 40 CU GPU running at 1.7 GHz.

It's a console, it is constrained by power consumption (<250W) and cost (<$499). Your 60 CU GPU burns 300W alone, now add CPU, storage and internal PSU. Then build a memory system to sustain this GPU (Radeon VII has 1TB/s bandwidth) and keep cost below $499.

We are talking more than double the performance of the PS4 Pro, and considering FP16 adds 30% performance when utilized, we end up with ~1.9x the X1X performance. That's a similar increase to what we got from PS4 -> Pro, and for all those who upgrade from a PS4/X1/X1S it is truly a generational upgrade.

Cheers

God forbid being conservative and realistic! You won't be popular... Seriously, I think you're right there, IMO. People are expecting a huge jump from the mid-gen refreshes, foremost the One X, which is a very powerful machine for a console. We should be looking at the 1.8 TF PS4; from there, the jump to even just 8 TF isn't that bad, I think.
People are dreaming away and being very power-hungry, but that is not why we buy consoles. Consoles never really had high-end hardware on release, for the most part. Some buzzwords from Mark Cerny seem to have upped expectations and hype, which probably was the intention anyway.
 
We are talking more than double the performance of the PS4 Pro, and considering fp16 adds 30% performance when utilized, we end up with ~1.9 x the X1X performance;
This really isn't how RPM works, which is what you're talking about.
FP16 is already available and helps register pressure.
RPM can only be used in a small number of situations where you can get away with such a reduction in precision - not as many places as you seem to think.
Useful and worth using, but it's not going to give you the level of TF boost you seem to think.
 
Consoles never really had high-end hardware on release, for the most part.
I don't know if this is really true. Personally, I think the only console hardware that launched already outclassed by PC hardware has been the Xbox One and PS4. Sure, PC hardware has outpaced consoles, but if you look back at the PC hardware available in, say, 1994, the PS1 would have been considered high end. 3dfx Voodoo was at least a year away, PC hardware lacked the advanced video decompression of the PS1, and its lighting and geometry engine was very impressive for its time. The PS2 was mighty impressive: its pixel fillrate was insane, as was its geometry engine. The Xbox launched with what amounts to a GeForce 3, alongside the launch of the GeForce 3 - and even then, the Xbox had twice the vertex shaders. The 360 had a tessellator, unified shaders, 4-sample-per-pixel-per-cycle MSAA, and competitive fillrate. And the PS3... well, Cell was impressive on paper. If you go back further, consoles' ability to scroll backgrounds and draw sprites destroyed what was available on PCs when they launched.

Anyway, consoles have launched with high-end hardware; they just don't anymore. I think it's partly because the PS3/360 generation was so long that even midrange parts were a huge upgrade, and partly because PC hardware's pace, especially at the ultra high end, has really gotten out of hand. I think traditionally (since the launch of 3D hardware) the highest-end PC graphics cards have been close to the price range of a console at launch. An X800 XT was roughly $450 when the 360 launched, IIRC, and a 20GB 360 was $399.99. Now we've got ultra-high-end PC graphics cards at $1300+. There's no way a console can launch at that price unless they want to get 3DO'd - and the 3DO also launched with impressive hardware for its time.
 
The only way Sony can keep its promise of no loading times (including the initial loading of games) is if they ship the PS5 with 1 TB of non-upgradable, ultra-fast SSD. They can either have an empty slot for a user-upgradable SATA3 HDD/SSD or allow an external HDD/SSD.

Initially, about 8-10 AAA games can be installed on the non-upgradable ultra-fast SSD, but once an HDD is installed, all games will be transferred to that storage and the whole ultra-fast SSD will be turned into a scratch pad.

250 GB = scratch pad for the game actually being played
750 GB = the first 20 GB of each installed game

Up to 35 games (20 GB each) can be installed on the SSD and there will be zero loading times, not even the initial start up.
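The storage split above, as arithmetic (numbers taken from the post; 750/20 actually allows 37 slots, so 35 leaves a little headroom):

```python
# Proposed split of a 1 TB SSD: 250 GB scratch pad + install pool,
# with the first 20 GB of each game resident on the SSD.
ssd_total_gb = 1000
scratch_gb = 250
per_game_gb = 20

install_pool_gb = ssd_total_gb - scratch_gb
print(install_pool_gb)                  # 750 GB install pool
print(install_pool_gb // per_game_gb)   # 37 game slots at 20 GB each
```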

If a 256 GB SSD + 2 TB HDD is a lot cheaper, maybe Sony will still go that route.
 