Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

MS went with 8GB of DDR3 due to it being critical for the design of the console itself (always online, TV, Kinect, gaming). That was the only safe bet, because 8GB of GDDR5 looked unlikely to be available by the end of 2013. In the end it turned out to be possible, and Sony had room on the motherboard for that scenario, but if they had had to, they would have gone with 4GB, because the focus they had while designing their box was different.
 
“Definitely more powerful” was what was said.
Can you link the one you are referring to? I suspect it's the one from the main editor at Game Informer, or something along those lines?

The kits released are alpha kits / PCs in a box meant to simulate what the final hardware's potential could be. They are not to be mistaken for actual console hardware.

The argument of HBM vs GDDR6: MS and Sony have exactly the same access to AMD, so you have to ask yourself why one company would go one route over the other. If consoles could run HBM over GDDR6 by next generation, so would all consumer GPUs.

Just a thought. It would appear as though MS does not believe it to be cost-efficient, and Nvidia has also gone with GDDR6.

You’re welcome to explore HBM; I’m not wasting my time on it. As far as I’m concerned, everyone made a big deal of the difference between Xbox and PS4, and for most games that was just a 900p vs 1080p difference. And that was apparently dramatic.

A 0.5 TF power difference, dramatic.

But then we get the same 50% increase in flops and 50% more memory, and many would have you believe that the difference in their 4K output is close enough. Let’s ignore the $100 price differential.
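To put rough numbers on that point (my own back-of-envelope arithmetic, not figures from any leak), here is a quick sketch of how a similar percentage gap maps onto resolution at 1080p-class versus 4K-class targets:

```python
# Back-of-envelope sketch: how a ~40-50% flops gap maps to resolution.
# All numbers below are public specs or simple arithmetic, used only for illustration.
def pixels(w, h):
    return w * h

# Last gen: PS4 (~1.84 TF) vs Xbox One (~1.31 TF), often 1080p vs 900p.
print(1.84 / 1.31)                              # ~1.40x flops advantage
print(pixels(1920, 1080) / pixels(1600, 900))   # ~1.44x more pixels at 1080p

# Next gen: a hypothetical 50% flops gap at a ~4K target.
# The weaker box would still land around an 1800p-class pixel count:
print(pixels(3840, 2160) / 1.5)                 # ~5.5M pixels, roughly 3200x1800
```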

I see a land of diminishing returns, in which neither company wants to end up more expensive than the other.

We’ve had a lot of “what if” conjecture before. People will take the smallest of signals and amplify it into a much larger difference than it realistically is.

If HBM2 is going to be dramatically cheaper and better, and serve up more performance for less, both companies would do it. Something so glaringly obvious will not be missed this time around.
 
Even "definitely more powerful" doesn't mean that much. If we're looking at near enough identical architectures and there's a 0.5TF difference between two ~10TF systems, then one of the systems befits the definition of more powerful, even if it will only ever manifest as slightly better LOD'ing/shadows/resolution which can only be determined by magnifying identical frozen frames.

The argument of HBM vs GDDR6: MS and Sony have exactly the same access to AMD, so you have to ask yourself why one company would go one route over the other. If consoles could run HBM over GDDR6 by next generation, so would all consumer GPUs.

True, but isn't HBM set to drop in price over the coming years? It may not seem worth it at the outset, but could cost projections indicate viability within a couple of years of launch, thereby making it worth eating some cost for a while?

I would expect that GDDR6 is a safe bet in terms of dev kits, because switching from 16GB of GDDR6 to 16GB HBMx would only be an improvement. And if HBMx isn't cheap enough, soon enough (or isn't projected to be) then the cheaper memory and smaller APU is the one that enters mass production.

As for why one company would bother with it when the other wouldn't, I suppose it's just a matter of each company's engineering ethos. They may align, they may not. I hope they don't, and we see Microsoft with a GDDR6 solution and Sony with the rumoured HBMx+DDR4 solution.

I am, however, an HBM fanboy, so I'll always be itching for its inclusion in a console.
 
The earliest target specs MS gave to third parties were very conservative. They were supposed to be a baseline. I don't know whether or not that was because of Lockhart or something else.
 
Maybe they were designing against a 2019 PS5.

Internally the plan for Anaconda was 12TF, but they gave the devs a much lower baseline. It could be that the 12 TF was before finding out about the Navi performance characteristics.

Which is why I think comparing early TF targets is misleading because we don't know if they were using Vega or Navi.

If you just go by the bus bandwidth (560GB/s) in the Anaconda teaser, we should be expecting 10-11TF from the final console.
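A hedged sketch of where a 10-11 TF estimate could come from, assuming the next-gen GPU keeps roughly the Xbox One X's bandwidth-per-teraflop balance (my assumption, not anything stated in the teaser):

```python
# Extrapolate compute from bandwidth, assuming a One X-like balance.
x1x_bandwidth_gbs = 326.0   # Xbox One X memory bandwidth (GB/s)
x1x_tflops = 6.0            # Xbox One X FP32 throughput (TF)

gbs_per_tflop = x1x_bandwidth_gbs / x1x_tflops       # ~54 GB/s per TF

anaconda_bandwidth_gbs = 560.0                        # figure quoted above
implied_tflops = anaconda_bandwidth_gbs / gbs_per_tflop
print(f"~{implied_tflops:.1f} TF")                    # ~10.3 TF, i.e. the 10-11 TF ballpark
```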
 
HBM2 is far more power efficient than GDDR6, so it could free up significantly more power (100 W?) to use elsewhere: perhaps more memory and a better GPU? The choice is not an obvious one as high bandwidth memory could complicate manufacturing. The extra risk/cost could be mitigated with more investment in solving related problems and/or looking at the longer term picture: what happens to system costs 1-2 years down the road with the ‘slim version’?
100W doesn't seem realistic for power savings. For GPUs historically, the GDDR subsystem was expected to draw at the high end about a third of the TDP of the GPU chip itself. For the Radeon 290 versus Fury X, that meant savings were likely taken from a ~50W ceiling. There might be some variation based on the progress of technology, although GDDR6 is listed as at least being a little more efficient than GDDR5.
At least going from what's come before, the consoles would not be near a higher-end discrete card in TDP. The PS4 Pro and Xbox One X have been tested to be able to reach about 160W and 180W respectively, and that would be whole-system.
The PSU, VRMs, and other components would take a measurable share of that. The chip itself would be allocated the majority of the power budget, so I think the share left for memory might be worst-case closer to the 290X and any savings would be a fraction of that.
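As a rough illustration of why 100 W of headroom looks implausible, here is a back-of-envelope budget; the PSU efficiency, board overhead, and memory share below are assumed values for illustration, not measurements:

```python
# Split a ~180 W whole-system figure into rough component budgets.
system_power_w = 180.0      # ~Xbox One X wall power from the post above

psu_efficiency = 0.85       # assumed conversion losses in the PSU
board_overhead_w = 20.0     # assumed VRMs, fan, drives, wireless, etc.

soc_plus_memory_w = system_power_w * psu_efficiency - board_overhead_w  # ~133 W
memory_share = 0.25         # assume the memory subsystem gets about a quarter

memory_w = soc_plus_memory_w * memory_share
print(f"memory subsystem budget ~= {memory_w:.0f} W")  # ~33 W
# Any HBM2-vs-GDDR6 saving would be a fraction of that ~33 W, nowhere near 100 W.
```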
 

How much power does the Vega 64’s 8 GB of HBM2 use? According to the informative memory video above, that figure would then be multiplied by 7-9x for 16 GB of GDDR6.
 

I think the figures often center on the power cost of the interface itself, when discussed from the memory module's point of view. From a board perspective, there are elements like the memory controller and the memory chips themselves that scale differently.
The memory controller itself likely has a component of its scaling proportional to the voltage of the interface, while the memory arrays and modules themselves can have different voltage levels than the data lines, and the memory arrays on the chips are relatively consistent across memory types--and so their power consumption tends to match.

I'm not sure about the exact figures for Vega 64, and there may be some penalties depending on whether a given board is using a version of HBM2 that obeys the voltage specifications for the type. For many Vega 64 boards, the HBM2 stacks were running above spec, most likely if they were Hynix prior to a shift to the most recent manufacturing process.

https://www.gamersnexus.net/hwreviews/3020-amd-rx-vega-56-review-undervoltage-hbm-vs-core

From the above, the two components on the board that drive memory are a VDDM phase dedicated to the HBM stacks and a separate VDDCI phase dedicated to the memory controller.
The VDDM phase was put at anywhere from 10 to 20 to 30 amps at an assumed 1.2V (hoping it wasn't the over-specced Hynix memory), although the assumption was that at standard settings it would be between 10 and 20 amps.
The VDDCI phase for the memory controller was a 10 amp device, and this is the one that is several times smaller than a corresponding GDDR system.
That works out to (10 to 20 A) * 1.2V + 10 A * 0.9V (the 0.9V is a bit iffy, but at least part of the memory controller depends on that setting).
This may slot somewhere between 25-35W, with unknown but probably sizeable margins of error.
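For what it's worth, here is a small sketch of that arithmetic, using the assumed currents and voltages above (estimates with sizeable error bars, not measurements):

```python
# Vega 64 HBM2 subsystem estimate: VDDM (stacks) + VDDCI (memory controller).
vddm_amps_low, vddm_amps_high = 10.0, 20.0   # assumed stack supply range at standard settings
vddm_volts = 1.2                              # assuming in-spec HBM2
vddci_amps = 10.0                             # memory-controller phase rating
vddci_volts = 0.9                             # "a bit iffy", per the post

low_w = vddm_amps_low * vddm_volts + vddci_amps * vddci_volts
high_w = vddm_amps_high * vddm_volts + vddci_amps * vddci_volts
print(f"Vega 64 HBM2 subsystem ~= {low_w:.0f}-{high_w:.0f} W")  # ~21-33 W before margins
```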

A different analysis of the Radeon VII with double the stacks has two VDDM phases and a 20 amp VDDCI phase. The memory power delivery was speculated to be oversized for the Radeon VII in order to accommodate a 32 GB board, although I am unsure for reasons I'll go into next.

As far as 16GB of GDDR6 goes, you may need to specify how that capacity is reached. As noted, there's a component of power consumption that scales with the width and speed of the memory bus, and another that scales more closely with the device count.
A 256-bit GDDR6 bus can get to 16GB in several ways. It allows for 8 chips, which can get to 16GB if you splurge on a 16Gb-density version of GDDR6, assuming one is available in 2020. If not, an existing 8Gb GDDR6 device can be used with 16 chips in clamshell mode to get to 16GB.
The power budget that varies most between GDDR6 and HBM is the speed and width of the memory bus, and would be mostly the same between the 256-bit GDDR6 possibilities, assuming constant speeds.
Part of the power budget of the GDDR6 devices is likely bound to the higher interface speed per device, with the rest of the budget being per-chip elements and their DRAM arrays.
Capacity-based power consumption has been shown to be very small.
Rather than being dominated by the size of the DRAM arrays, it's how active they are that matters--and that scales with the overall bandwidth of the system.
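A quick sketch of the two 16 GB configurations described above, using the chip densities mentioned in this post:

```python
# Two ways to reach 16 GB on a 256-bit GDDR6 bus.
bus_width_bits = 256
bits_per_chip = 32                    # a GDDR6 device exposes a 32-bit interface

chips_full_width = bus_width_bits // bits_per_chip    # 8 chips, one per 32-bit slot

# Option A: 16Gb (2 GB) devices, if that density ships in time.
capacity_a_gb = chips_full_width * 2                   # 16 GB from 8 chips
# Option B: 8Gb (1 GB) devices in clamshell, two chips sharing each channel.
chips_clamshell = chips_full_width * 2                 # 16 chips
capacity_b_gb = chips_clamshell * 1                    # 16 GB from 16 chips

print(capacity_a_gb, capacity_b_gb)
# Interface power tracks the 256-bit bus and its speed, so it's similar either
# way; per-chip overhead is what grows with the 16-chip clamshell option.
```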

So if we were to take the 3.5x figure for GDDR6 versus HBM2 from the earlier video back to Vega 64, that pushes the VDDCI count to 3-4 chips, but I think the growth for the memory-module supply would possibly be one additional. The 2080 Ti has 50% more channels and a significantly overspecced memory power delivery setup. A GamersNexus evaluation of the 2080 Ti's PCB speculates that its loadout of GDDR6 would top out at ~30W for the devices in aggregate.
Clamshell does raise the number of devices, but at the same time each one uses half the interface width and its arrays will see about half the activity versus a single module serving at full bandwidth.

Maybe the ceiling goes to 60-80W, and that's going by the specifications for higher-end GPU boards. Bringing it closer to 50W rather than 80W for a 256-bit board seems reasonable, and so maybe 20-30W savings if the memory systems are otherwise comparable.
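Pulling the rough figures from this post together (all of them estimates argued above, not measurements):

```python
# GDDR6 vs HBM2 subsystem power, per the ranges discussed in this post.
gddr6_likely_w = 50.0           # "closer to 50 W than 80 W" for a 256-bit board
hbm2_range_w = (25.0, 35.0)     # scaled-up estimate from the Vega 64 numbers

savings_low = gddr6_likely_w - hbm2_range_w[1]
savings_high = gddr6_likely_w - hbm2_range_w[0]
print(f"HBM2 saving ~= {savings_low:.0f}-{savings_high:.0f} W")
# ~15-25 W: in the same ballpark as the 20-30 W above, and far from 100 W.
```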



What calendar? By the standard one, that sounds too late. The start of silicon mass production and then getting assembled consoles out through the supply chain last time was on the order of 6 months.
 
^^^ Just a reminder: Q3 of fiscal year 2020 in the US should be April, May, June 2020. If PS5 is releasing holiday 2020, then this makes sense.
Q3 of fiscal year 2020 in Japan should be October, November, December 2019. That makes sense for an April 2020 release only.
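For anyone wanting to sanity-check those mappings, here is a hedged sketch of fiscal-quarter-to-calendar-month conversion, parameterized by the fiscal year's start month (the start months below reflect the conventions this post assumes):

```python
# Map "fiscal quarter N" to calendar months, given the month the fiscal year starts.
def fiscal_quarter_months(fy_start_month, quarter):
    """Return the three calendar month numbers covered by the given fiscal quarter."""
    start = (fy_start_month - 1 + (quarter - 1) * 3) % 12
    return [(start + i) % 12 + 1 for i in range(3)]

# US federal fiscal year starts in October: Q3 -> April, May, June.
print(fiscal_quarter_months(10, 3))   # [4, 5, 6]
# Japanese fiscal year starts in April: Q3 -> October, November, December.
print(fiscal_quarter_months(4, 3))    # [10, 11, 12]
```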
 

Given how both quotes from Wccftech and DigiTimes are worded, it seems like a late 2020 launch (November more than likely).

DigiTimes just pushed out a report saying that the 7nm chips that are going to power Sony’s upcoming next-generation PlayStation 5 (or whatever it ends up being called) will not be ready before Q3 2020. This means that the earliest we can expect a new console is sometime in late 2020.

AMD's 7nm CPU and GPU are expected to be adopted by Sony in its next-generation PlayStation and the processors are estimated to be ready in the third quarter of 2020 for the games console's expected release in the second half of 2020, according to industry...

The second half of the year covers July-December. Knowing this, July-September will more than likely be for testing, debugging, and production-line provisioning, while mid-September to early October will see the start of PS5 mass production (inventory / distribution channel build-up) towards a late 2020 launch in November.
 
Who would have thought a Christmas 2020 release date? Really? I mean, really?

Edit... Wouldn’t it be funny if they were both delayed and released in March 2021? Never.
 

I am hoping for a spring 2020 release with Cyberpunk 2077 available for PS5 :D
 
The only pieces of information about PS5 that I trust:
RuthenicCookie's info. 4k 60fps beast. $499 with $100 loss. Spring 2020 or Fall 2020 launch.
Delay from Fall 2019 to H1/H2 2020. (Matt from Era backed this up)
Wired Info.
Looking at the benchmarks, it appears to me the only card that's stable at native 4K/60fps is the 2080 Ti (even on current-gen titles); not even the 2080 holds it that often, much less a 2070-tier console. The only games that hit 4K/60 for a 2070-level PS5 are BF5 and Strange Brigade, and the rest average around 30-45fps.
https://www.guru3d.com/articles_pages/msi_geforce_rtx_2080_ti_lightning_z_review,13.html
Could it be a 4K CBR 60fps, max settings, current-gen beast? Most likely. But then what of next gen?
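A rough sketch of the scaling behind that read (my own arithmetic; the 1440p/60 baseline is an assumed reference point, not a specific benchmark result):

```python
# Naive pixel-rate scaling: ignores CPU limits and non-resolution-bound work.
pixels_4k = 3840 * 2160
pixels_1440p = 2560 * 1440

scale = pixels_4k / pixels_1440p      # 2.25x the pixel work going to native 4K
print(60 / scale)                      # a 1440p/60 card falls to ~27 fps, all else equal

# Checkerboard rendering shades roughly half the 4K pixels each frame,
# which is why "4K CBR at 60 fps" looks like the more plausible target.
print(60 / (scale / 2))                # ~53 fps
```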
 