I mean your sarcasm may not have come through... this may actually apply to both of you. It's quite difficult to read.
I don't get it! What is it from?
Ok, now I can talk.
One of the two consoles will be composed of:
- single SoC: on the basic 7nm process at TSMC, not 7nm+ or 6nm
- cpu: 8 Zen 2 cores @ 2.2GHz with SMT enabled; they will call them "custom", but it will just be normal plumbing work to adapt the cache to the SoC
- gpu: based on Navi, a hybrid between RDNA and RDNA 2, at about 7TF, frequency not finalized, hardware RT (no info on whether it's proprietary or AMD's technology)
- memory: 24GB GDDR6 on a 384-bit bus, bandwidth not finalized but expected at around 800GB/s (see the quick check after this list)
- ssd: 2TB, soldered
- usb: 3.2 Gen 2, Type-C, 1.5A
- size: comparable to the current Xbox One X
- new very low power mode with less than 2 seconds from power-button press to dashboard
- controller: an evolutionary iteration of the current gen's, rechargeable with off-the-shelf cables
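A quick sanity check of those memory figures, using nothing but generic GDDR6 arithmetic (standard 32-bit chips on the bus); the 24GB, 384-bit and ~800GB/s numbers come from the leak above, everything else is just math, not insider info:

```python
# GDDR6 back-of-the-envelope check for the leaked memory spec.
# Assumes standard 32-bit GDDR6 chips; no console-specific information.

BUS_WIDTH_BITS = 384    # from the leak
CAPACITY_GB = 24        # from the leak
TARGET_BW_GBPS = 800    # "around 800GB/s" from the leak

chips = BUS_WIDTH_BITS // 32             # 12 chips fill a 384-bit bus
density_gbit = CAPACITY_GB * 8 / chips   # 16Gb per chip -> 2GB modules

# Bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte
implied_pin_rate = TARGET_BW_GBPS * 8 / BUS_WIDTH_BITS   # ~16.7 Gbps

print(f"{chips} chips x {density_gbit:.0f}Gb = {CAPACITY_GB}GB")
print(f"~{TARGET_BW_GBPS}GB/s on a {BUS_WIDTH_BITS}-bit bus implies ~{implied_pin_rate:.1f}Gbps chips")
print(f"14Gbps parts would give {14 * BUS_WIDTH_BITS / 8:.0f}GB/s, 16Gbps parts {16 * BUS_WIDTH_BITS / 8:.0f}GB/s")
```

So the leak only adds up with twelve 16Gb chips running at roughly 16-17Gbps per pin, which is exactly the point the bandwidth scepticism further down the thread hinges on.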
Would you rather have a much more expensive console that performs identically but uses HBM memory instead of GDDR6?
As long as it costs more for the same performance, it's the wrong choice. The lower power isn't significant enough to matter on a home console, where performance per dollar is the most important metric.
I would take HBM if it's faster, with lower latency than GDDR6 and lower power consumption. And a standard 256-bit or 320/384-bit bus is very old, last-generation tech. I think it is not only a question of which memory solution is cheaper or more expensive; what matters is whether HBM memory, including the interposer on the GPU and so on, is actually available in volume. Are AMD and its chip producers able to produce large volumes with low rejection rates, as with conventional memory technology? What technical hurdles must be overcome to make this suitable for the mass market? Money is the last thing missing in this industry, but one of the big companies has to take some risk to bring progress to the people, the customers. The first graphics card with HBM memory came on the market in 2015, and five years later this technology is still far from mass production?
Nvm, Fehu's post is BS.
New Lockhart? This looks like a binned version of Anaconda without vapor chamber cooling.
In October I dismissed this because the RAM didn't make sense. But the downclocked CPU mentioned by The Verge made me revisit this...
The bandwidth is still a red flag to me. Those are 18Gbps chips...
I would expect the bandwidth to be 500-600GB/s max for 7TF, even accounting for the RT footprint.
The only rational explanation to me:
Lockhart and Anaconda are confirmed to be using different APUs according to the Sparkman leak.
However, for economies of scale they share the same MOBO.
This means Lockhart will use 8Gb chips or a mixture of 8Gb and 16Gb chips, thereby reducing the amount of RAM and the available bandwidth from the 384-bit bus (rough numbers in the sketch below).
The 24GB of 18Gbps GDDR6 is just a simple mistake...
I tried.
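For what it's worth, here is a rough sketch of that bus/chip-mix reasoning. The 384-bit bus and the idea of mixing 8Gb and 16Gb chips come from the posts above; the specific chip populations and clocks below are purely illustrative guesses, not anything from the leak:

```python
# Rough GDDR6 numbers for a shared 384-bit board (12 x 32-bit chip slots).
# The chip mixes below are hypothetical examples, not leaked configurations.

def gddr6(populated_chips_gbit, pin_rate_gbps):
    """Capacity (GB) and peak bandwidth (GB/s) for the populated 32-bit channels."""
    capacity_gb = sum(populated_chips_gbit) / 8
    bandwidth_gbps = len(populated_chips_gbit) * 32 * pin_rate_gbps / 8
    return capacity_gb, bandwidth_gbps

# The leaked 24GB config with the 18Gbps parts mentioned above: 24GB, 864GB/s
print(gddr6([16] * 12, 18))

# Hypothetical Lockhart fits on the same board:
print(gddr6([8] * 12, 14))            # twelve 8Gb chips       -> 12GB, 672GB/s
print(gddr6([16] * 5 + [8] * 5, 14))  # ten chips, mixed sizes -> 15GB, 560GB/s on a 320-bit slice
```

Note that 18Gbps across the full 384-bit bus would be 864GB/s, so the leak's "around 800GB/s" implies either slower chips or a number that really wasn't finalized; dropping chips (or their speed) is what actually pulls both capacity and bandwidth down.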
You test on Nvidia 2080s or 2080 Tis.
If early devkits consist of PC CPUs and GPUs, how could developers simulate the real performance of ray tracing accurately?
Since AMD's GPUs don't have hardware ray-tracing.
You test on Nvidia 2080s or 2080 Tis.
The important thing is the feature set. I believe this was the case for the XBO alpha kits, which ran GeForce 700 series cards or something like that.
Beta and close-to-retail developer kits will use close-to-release hardware. But in 2017 and 2018, both Sony and MS could have given out 2080 cards as their proxy. As I understand it, this is standard operating procedure because it's cheaper than having to make your own hardware board and send those out.
At the same time, if AMD has zero silicon now for a product launching next year, that would be problematic, no? Can they at least have low-clock-speed versions of it, to let the devs know how this thing works?
What?
You test on Nvidia 2080s or 2080 Tis.
I dunno. Good question. This kit was sold in 2012, so I think yeah, the 5000 series of GCN cards should have been available by then.
What?
Why would they get Kepler GPUs in the alpha kits if GCN 1 cards with an almost identical feature set and performance had been available even before Kepler was released?
Correction: GCN started with the 7000 series cards; the 5000 series was the old VLIW5 architecture.
Whoops, right. For some reason I thought it was the 5850, not the 7850.
PS5 devkit is currently 40 CUs @ ~2.0GHz
I'll stake my account on this.
If this is proven to be wrong in the future, I'll take any punishment.
Has Sony ever released the specs of their PlayStation devkits? This might never be possible to verify...
I'm thinking 36 CUs. If BC1 is 18 CUs, then BC2 cannot be 40 CUs, but 36.
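To put rough numbers on the 40 CU vs 36 CU debate, here is the standard GCN/RDNA peak-FLOPS formula (CUs × 64 shader lanes × 2 FLOPs per clock for FMA × clock). The 0.8GHz PS4 clock is the known retail figure; the ~2.0GHz comes from the devkit claim above, and applying it to 36 CUs is just an illustration:

```python
# Peak FP32 throughput for a GCN/RDNA-style GPU:
# CUs * 64 shader lanes * 2 FLOPs per clock (FMA) * clock in GHz -> TFLOPS.
def teraflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(teraflops(18, 0.8))   # PS4 baseline (the 18 CU "BC1" count): ~1.84 TF
print(teraflops(40, 2.0))   # claimed devkit, 40 CUs @ ~2.0GHz:     ~10.24 TF
print(teraflops(36, 2.0))   # 36 CUs at the same clock:             ~9.22 TF
```

The BC1/BC2 point presumably refers to CU counts that map cleanly onto PS4 (18 CUs) and PS4 Pro (36 CUs) for backwards compatibility; the formula just shows what 36 versus 40 CUs would deliver at that clock.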