Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

Ok, now I can talk.

One of the two consoles will be composed of:
- single SoC: on TSMC's basic 7nm process, not 7nm+ or 6nm
- CPU: 8 Zen 2 cores @ 2.2GHz with SMT enabled; they will call them "custom", but it will just be normal plumbing work to adapt the cache to the SoC
- GPU: Navi-based, a hybrid between RDNA and RDNA 2, at about 7TF, frequency not finalized, hardware RT (no info on whether it is proprietary or AMD's technology)
- memory: 24GB GDDR6 on a 384bit bus, bandwidth not finalized but expected at around 800GB/s
- SSD: 2TB, soldered
- USB: 3.2 Gen 2, Type-C, 1.5A
- size: comparable to the current Xbox One X
- new very-low-power mode with less than 2 seconds from power-button push to dashboard
- controller: an evolutionary iteration of the current gen's, rechargeable with off-the-shelf cables

New Lockhart? This looks like a binned version of Anaconda without vapor chamber cooling.

In October I dismissed this because the RAM didn't make sense. But the downclocked CPU mentioned by The Verge made me revisit this...

The bandwidth is still a red flag to me. Those are 18Gbps chips...
I would expect the bandwidth to be 500-600GB/s max for 7TF, even accounting for RT footprint.

The only rational explanation to me:

Lockhart and Anaconda are confirmed to be using different APUs according to the Sparkman leak.
However, for economies of scale they share the same motherboard.

This means Lockhart will use 8Gb chips, or a mixture of 8Gb and 16Gb chips, thereby reducing the amount of RAM and the available bandwidth from the 384bit bus.

The 24GB of 18Gbps GDDR6 is just a simple mistake...
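
For anyone who wants to sanity-check the numbers (both the leak's 24GB / ~800GB/s claim and my Lockhart guess), here's a rough sketch of the arithmetic. The chip densities, speeds and populated positions are assumptions on my part, nothing confirmed:

Code:
# Back-of-the-envelope GDDR6 capacity/bandwidth check.
# All chip densities and per-pin speeds below are assumptions, not confirmed specs.

def capacity_gb(chips: int, density_gbit: int) -> float:
    """Total capacity in GB for a number of chips of a given density in gigabits."""
    return chips * density_gbit / 8

def bandwidth_gbs(bus_bits: int, pin_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_bits / 8 * pin_gbps

# Rumored Anaconda-style board: 384-bit bus = twelve 32-bit chips.
print(capacity_gb(chips=12, density_gbit=16))      # 24.0 GB
print(bandwidth_gbs(bus_bits=384, pin_gbps=18))    # 864.0 GB/s with 18Gbps parts
print(bandwidth_gbs(bus_bits=384, pin_gbps=16))    # 768.0 GB/s with 16Gbps parts; "~800" sits in between

# Hypothetical Lockhart on the same board, populating only eight positions
# (256-bit effective bus) with 8Gb parts:
print(capacity_gb(chips=8, density_gbit=8))        # 8.0 GB
print(bandwidth_gbs(bus_bits=256, pin_gbps=14))    # 448.0 GB/s

With 18Gbps parts a fully populated 384-bit bus lands at 864GB/s, not ~800GB/s, which is part of why those figures don't sit right to me.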

I tried.
 
Would you rather have a much more expensive console that performs identically but uses HBM memory instead of GDDR6?

I would take HBM if it's faster, with less latency than GDDR6, and less power consumption. And a standard 256bit or 320/384bit bus is very old, last-generation tech. I think it is not only a question of which memory solution is cheaper or more expensive; what matters is whether the supply of HBM memory, including the interposer on the GPU etc., is actually there. Are AMD and its chip producers able to produce large volumes with low reject rates, as with conventional memory technology? What technical hurdles must be overcome to make it suitable for the mass market? Money is the last thing missing in this industry. But one of the big companies has to take some risk to bring the progress to the people, the customers. The first graphics card with HBM memory came on the market in 2015, and 5 years later this technology is still far from mass production?
 
As long as it costs more for the same performance, it's the wrong choice. The lower power isn't significant enough to matter on a home console where performance per dollar is the most important metric.

GDDR6 can do 876GB/s, so using HBM is a waste of money unless the product needs 1TB/s, which isn't the case for a 10TF GPU, or until HBM gets closer to cost parity.

The hope right now is that organic panel interposers (glass instead of silicon, rectangular panels instead of round wafers) might solve the cost and volume problems. I think nobody will use HBM until these low-cost interposers are available. There have been massive roadblocks and delays with HBM, while GDDR6 launched exactly when they said it would, and there was no difficulty other than slightly more work on the PCB versus GDDR5.
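
To put the "876GB/s vs 1TB/s" comparison in rough numbers, here's a small sketch. The per-stack and per-pin rates are my ballpark figures for HBM2/HBM2E and GDDR6 parts of this era, not anything tied to either console:

Code:
# Rough peak-bandwidth comparison. The rates below are ballpark figures for
# 2019-2020 era parts (assumptions, not console specs).

HBM2_STACK_GBS = 256    # one 1024-bit stack at ~2.0 Gbps/pin
HBM2E_STACK_GBS = 410   # one 1024-bit stack at ~3.2 Gbps/pin

def hbm_total_gbs(stacks: int, per_stack_gbs: float) -> float:
    """Total peak bandwidth for a given number of HBM stacks."""
    return stacks * per_stack_gbs

def gddr6_total_gbs(chips_32bit: int, pin_gbps: float) -> float:
    """Total peak GDDR6 bandwidth: each 32-bit chip contributes (32/8) * pin_gbps GB/s."""
    return chips_32bit * 4 * pin_gbps

print(gddr6_total_gbs(chips_32bit=12, pin_gbps=18))              # 864 GB/s: roughly the GDDR6 ceiling on a 384-bit bus
print(hbm_total_gbs(stacks=2, per_stack_gbs=HBM2E_STACK_GBS))    # 820 GB/s from only two stacks
print(hbm_total_gbs(stacks=4, per_stack_gbs=HBM2E_STACK_GBS))    # 1640 GB/s: comfortably past 1TB/s

The point stands either way: below roughly 1TB/s a wide GDDR6 bus gets you there on a normal PCB, and HBM only becomes the obvious choice once you need more bandwidth than that, or once cheap interposers arrive.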
 
Nvm Fehu's post is BS.
 
If early devkits consist of PC CPUs and GPUs, how could developers accurately simulate the real performance of ray tracing, given that AMD's GPUs don't have hardware ray tracing?
 
They probably can't, and are waiting on final/test silicon. RT solutions are perhaps qualitative for now, running at a low framerate just to see that they work, and then they can optimise once the console hardware is available to test.
 
You test on Nvidia 2080s or 2080 Tis.
The important thing is the feature set. I believe this was the case for the XBO alpha kits, which ran GeForce 700 series cards or something like that.
 

At the same time, if AMD has zero silicon now for a product launching next year, that would be problematic, no? Could they at least have low-clock-speed versions of it, to let the devs know how this thing works?
 
Beta and near-retail developer kits will use close-to-release hardware. But in 2017 and 2018, both Sony and MS could have given out 2080 cards as their proxy. As I understand it, this is standard operating procedure because it's cheaper than having to make your own hardware board and send those out.
 
What?
Why would they get Kepler GPUs in the alpha kits if GCN 1 cards with an almost identical featureset and performance had been available even before Kepler was released?
 
I dunno. Good question. This kit was sold in 2012, so I think, yeah, the 7000 series GCN cards should have been available by then.
https://www.eurogamer.net/articles/digitalfoundry-the-curious-case-of-the-durango-devkit-leak
https://www.theinquirer.net/inquire...x-720-durango-sdk-sells-for-usd20-000-on-ebay

IIRC, MS caught this guy and sued him hard.
 
PS5 devkit is currently 40 CUs @ ~2.0GHz
I'll stake my account on this.
If this is proven to be wrong in the future, I'll take any punishment.

Has Sony ever released the specs of their playstation devkits? This might never be possible to verify...
 

If the PS5 releases with a CU count > 40 or a clock speed much greater than 2.0GHz, then my info is false.

Otherwise you're right.
 
I'm thinking 36 CUs. If BC1 is 18 CUs, then BC2 cannot be 40 CUs; it has to be 36.

2.0GHz is also a bit too high for a 40CU part IMO, but let's wait and see. I do think they put themselves into 9-10TF territory by opting for a 256bit bus.

If MS truly went for a 320bit bus, they did so to be able to react to any sudden moves from the competition.

If Sony goes with 36CU (9.2TF) and 16Gbps chips on a 256bit bus (512GB/s), MS can go with 14Gbps chips on their 320bit bus and get 560GB/s. If Sony uses the fastest chips on the market, 18Gbps, to get 576GB/s, MS can go with 16Gbps and get 640GB/s.

I think 320 is there for headroom.
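
For reference, here's the arithmetic behind the TF and bandwidth figures being traded in this thread. Every input (CU counts, clocks, bus widths, chip speeds) is speculation from these posts, not a confirmed spec:

Code:
# Arithmetic behind the speculated configs. All inputs are thread speculation,
# not confirmed hardware specs.

def fp32_tflops(cus: int, clock_ghz: float) -> float:
    """GCN/RDNA-style FP32 TFLOPS: CUs * 64 shaders/CU * 2 FMA ops per clock."""
    return cus * 64 * 2 * clock_ghz / 1000

def gddr6_gbs(bus_bits: int, pin_gbps: float) -> float:
    """Peak GDDR6 bandwidth in GB/s."""
    return bus_bits / 8 * pin_gbps

print(fp32_tflops(36, 2.0))     # 9.216 TF  -> the "36CU ~ 9.2TF" figure
print(fp32_tflops(40, 2.0))     # 10.24 TF  -> the 40CU devkit rumour

# Speculated Sony 256-bit bus vs MS 320-bit bus at various chip speeds:
print(gddr6_gbs(256, 16))       # 512 GB/s
print(gddr6_gbs(256, 18))       # 576 GB/s
print(gddr6_gbs(320, 14))       # 560 GB/s
print(gddr6_gbs(320, 16))       # 640 GB/s

Even with chips one speed grade slower, the 320bit bus keeps MS roughly 10% ahead of a 256bit Sony config, which is the "headroom" argument in a nutshell.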
 