Fuck it:
https://www.3dmark.com/fs/19011253
I think it's probably unrelated to Gonzalo. Probably a coincidence.
If GDDR as a single pool of memory is such a novel and great idea, why hasn't such a laptop been released?
(...)
And yet, here we are still waiting for an APU that can match the PS4's 2013 SoC. But yeah, we are getting Navi XT + 8-core Zen 2 with 16GB of GDDR6 relatively soon.
(...)
...And yet the best AMD APU-based laptops are 4-core parts with severely cut-down Vega chips that are using dual-channel DDR4 memory. Nothing, absolutely nothing close to a full Navi chip paired with 16GB of GDDR6.
So what you're saying is you're not aware that memory clocks and voltage can drastically throttle down when not in full load.

16GB of DDR4 (two modules) will use less than 6W. For 16GB of GDDR6 you are looking at 30W+. You do the math.
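Doing that math as a rough sketch (the ~6 W and ~30 W figures are the ones claimed above; the battery capacity and rest-of-system draw are purely illustrative assumptions, not measurements):

```python
# Back-of-envelope battery impact of DDR4 vs. GDDR6 as the single memory pool.
# The ~6 W and ~30 W figures are the ones claimed above; the battery capacity
# and the rest-of-system draw at light load are illustrative assumptions.

DDR4_POWER_W = 6.0    # 16 GB DDR4 (two modules)
GDDR6_POWER_W = 30.0  # 16 GB GDDR6 used as a single pool
BATTERY_WH = 60.0     # assumed battery capacity
OTHER_LOAD_W = 10.0   # assumed draw of everything else at light load

def battery_hours(memory_power_w: float) -> float:
    """Runtime in hours if memory plus 'everything else' were the only consumers."""
    return BATTERY_WH / (memory_power_w + OTHER_LOAD_W)

print(f"DDR4 : {battery_hours(DDR4_POWER_W):.1f} h")   # ~3.8 h
print(f"GDDR6: {battery_hours(GDDR6_POWER_W):.1f} h")  # ~1.5 h
```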
No, you really don't.

No, I have several reasons why a SINGLE POOL of GDDR memory is a bad decision in laptops.
They can, and yet you can still do that with DDR4 and waste less energy, get more battery life and much more bang for buck than going single pool for system memory.

So your infallible logic for predicting that no future laptop APU is going to use GDDR is that no laptop has used GDDR so far?
Wow.
So what you're saying is you're not aware that memory clocks and voltage can drastically throttle down when not in full load.
Nah, what you are arguing is a lost cause. There is not one single laptop in existence that uses a high-performance APU coupled with a single pool of GDDR as memory, let alone the fastest one not yet in mass production. Not one, and you are carefully dropping that point and ignoring it.

All you have is a bunch of general statements - some of them very clearly false and honestly rather ignorant - for which you provide zero proof or source despite being asked for it many times over by several different people.
Of course a gaming laptop is going to prioritize gaming performance above power consumption. If the number of Firefox tabs were a gamer's concern, there'd be no dGPUs in gaming laptops, or rather there'd be no gaming laptops period.
And a high-end APU designed for a gaming laptop would obviously need more memory bandwidth than DDR4 at any reasonable bus width can provide.
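To put rough numbers on that, here is the standard peak-bandwidth arithmetic; the DDR4 speed and the GDDR6 bus width and data rate are just common example configurations, not a claim about any specific part:

```python
# Peak bandwidth = bus width (bytes) * transfer rate. Example configurations only.

def bandwidth_gbs(bus_bits: int, mt_per_s: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_bits / 8 * mt_per_s / 1000

print(f"dual-channel DDR4-3200 (128-bit): {bandwidth_gbs(128, 3200):.1f} GB/s")   # 51.2
print(f"256-bit GDDR6 at 14 Gbps:         {bandwidth_gbs(256, 14000):.1f} GB/s")  # 448.0
```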
And yes, there are very clear advantages to using a monolithic SoC with an embedded CPU and GPU in a gaming system, such as lower VRM cost, higher power efficiency, a simpler PCB, etc. It's so important for cost saving that the console makers had little choice but to go with AMD for a high-performance gaming SoC for their latest consoles.
But I see you finally dropped the similarly erroneous latency argument, so I guess at least someone learned something here.
I won't post a thing on this discussion and I apologize for derailing the thread, but I think I have been relatively civilized even if I am not agreeing with TT.

Can you two tone it down and stick to debating facts in a civilised, respectful way? Thanks.
I don't think that's Gonzalo. Graphics score is ~2080S tier and Apisak did mention that only the overall score is visible for Gonzalo.

Fuck it:
https://www.3dmark.com/fs/19011253
I think it's probably unrelated to Gonzalo. Probably a coincidence.
I have found an interesting 3DMark Fire Strike benchmark from the day Apisak found Gonzalo: April 10th.
The overall score is 20425, unknown GPU and CPU. Driver status: Not Approved.
It's a huge market right now, to the point that every major and mid-sized laptop maker has their own gaming brand.
So yes, it's very well worth the effort of developing a custom SoC for high-end laptops.
Even more considering that AMD charged Subor around $65M for their Z-Plus SoC IIRC.
How do they define a "gaming laptop"? Marketing? Discrete GPU(Nvidia) vs. embedded, TDP?
Write single core is 33.1GB/s but read only 12.8?

Those are 2GB chips, which would be logical for a dev-kit.
Not necessarily. The total bandwidth actually comes to 529.6 GB/s using the single-chip test, but that would be expected since those are benchmarks and not theoretical figures. Using DDR4 as a reference in other tests, there is often an ~8% difference between the benchmarked result and the theoretical max. Here there is about 8.76% between 529.6 and the theoretical max of 576.
From memory (SDK docs) there was a ~10% difference between theoretical and tested max bandwidth for the PS4's GDDR5 (~160 tested vs. 176 theoretical).
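A quick sanity check on those percentages, using the figures from the two posts above; the 16-chip count is only an assumption here, used to show how the 33.1 GB/s single-chip figure lines up with the 529.6 GB/s total:

```python
# Sanity-check the measured-vs-theoretical gaps quoted above. 529.6 and 576 GB/s
# are the GDDR6 figures from the benchmark discussion; 160 and 176 GB/s are the
# PS4 GDDR5 figures. The 16-chip count is only an assumption for illustration.

def gap_percent(measured: float, theoretical: float) -> float:
    """Shortfall of the measured figure, expressed relative to the measured figure."""
    return (theoretical - measured) / measured * 100

per_chip_gbs = 33.1                                       # single-chip test result
print(f"16 chips: {per_chip_gbs * 16:.1f} GB/s")          # 529.6 GB/s total
print(f"GDDR6 gap: {gap_percent(529.6, 576):.2f} %")      # ~8.76 %
print(f"PS4 GDDR5 gap: {gap_percent(160, 176):.2f} %")    # ~10.00 %
```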
How long ago was it that computer upgrading was driven by need?

Is there any value in a laptop workstation? Emphasis on portability, not battery life? Focussed on productivity. I can't really see a need for fast VRAM for anything beyond CG(I) though, or the very most ridiculous 4K/8K video editing. So I think the idea of AMD creating an APU for workstations may be unrealistic.
I think it's mostly based on the performance class of the present discrete GPU. AFAICS most laptops with a 45W H-series CPU and a GTX 1050 or higher are coming under a gaming brand nowadays.

How do they define a "gaming laptop"? Marketing? Discrete GPU (Nvidia) vs. embedded, TDP?
There's also a rising market for laptop workstations targeted at machine learning. I think you need hefty bandwidth to feed all those tens/hundreds of IOPS. Or at least the GPUs that have high IOPS throughput are also the larger GPUs that traditionally come with higher bandwidth.

Is there any value in a laptop workstation? Emphasis on portability, not battery life? Focussed on productivity. I can't really see a need for fast VRAM for anything beyond CG(I) though, or the very most ridiculous 4K/8K video editing. So I think the idea of AMD creating an APU for workstations may be unrealistic.
I think for the battery life part things have progressed pretty well nowadays. Truth is the Intel H-series CPUs don't consume a whole lot more in light loads than the U-series, so a 15" gaming laptop with an H-series CPU and a hefty RTX 2070 can get about the same battery life in office usage as a 15" office laptop with a U-series CPU. Thanks to Optimus, the dGPU gets completely powered down.

The joys of pitiful battery life and cooling high performance silicon with itty-bitty fans will be discovered after the purchase.
Absolute minimum is 9 tflops based on Gonzalo benchmark. It can't be less.

So any new news? Are we still dancing around 8-9 RDNA Tflops here or seeing traces of more?
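For context, this is how such TFLOPS figures are usually derived for GCN/RDNA-style GPUs (peak FP32 FMA throughput); the CU counts and clock speeds below are purely illustrative, not leaked Gonzalo specs:

```python
# FP32 throughput for GCN/RDNA-style GPUs: 2 FLOPs (FMA) * 64 lanes per CU * clock.
# The CU counts and clock speeds below are illustrative guesses, not Gonzalo specs.

def tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS for a given CU count and clock."""
    return 2 * 64 * cus * clock_ghz / 1000

for cus, clock_ghz in [(36, 1.8), (40, 1.8), (40, 2.0)]:
    print(f"{cus} CUs @ {clock_ghz} GHz -> {tflops(cus, clock_ghz):.1f} TFLOPS")
# 36 CUs @ 1.8 GHz -> 8.3, 40 CUs @ 1.8 GHz -> 9.2, 40 CUs @ 2.0 GHz -> 10.2
```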
Absolute minimum is 9 tflops based on Gonzalo benchmark. It can't be less.
How can it not be over 9000 gflops
Absolute minimum is 9 tflops based on Gonzalo benchmark. It can't be less.
Why not? Even the PS4 has run it and got a 5000 overall score, so it would only be natural they updated the drivers for a more powerful SoC PS5 devkit (assuming it's near final). Not that I think this came from a PS5 devkit, since the crazy high graphics score raises eyebrows a bit. But man oh man, the amount of goats I would sacrifice to have a PS5 that powerful.

I'm awfully suspicious of this seemingly widespread idea that developers with near-final hardware dev kits of game consoles (which won't run Windows 10) are capable of installing Windows 10 + working GPU and motherboard chipset drivers for Windows 10 + a benchmark / 64bit executable and then running said benchmark to post results in an online database.
I'd find it rather believable if they were running some browser-based benchmark for Javascript performance like Octane, or at most some WebGL benchmark like Fishtank.
But full-on Windows 10 running 3DMark on the PS5 devkit using a semi-custom SoC? To me that's quite the elephant in the room.
Maybe if Gonzalo and Flute are Scarlett SoCs and the next-gen Xbox is running full-on Windows 10 I could see it happening, but even that isn't very probable, to be honest.