Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

If GDDR as a single pool of memory is such a novel and great idea, why hasn't such a laptop been released?
(...)
And yet, here we are still waiting for an APU that can match the PS4's 2013 SoC. But yeah, we are getting Navi XT + 8-core Zen 2 with 16GB of GDDR6 relatively soon.
(...)
...And yet the best AMD APU-based laptops are 4-core parts with severely cut-down Vega chips using dual-channel DDR4 memory. Nothing, absolutely nothing close to a full Navi chip paired with 16GB of GDDR6.

So your infallible logic for predicting that no future laptop APU is going to use GDDR is that no laptop has used GDDR so far?

Wow.



16GB of DDR4 (two modules) will use less than 6W. For 16GB of GDDR6 you are looking at 30W+. You do the math.
So what you're saying is you're not aware that memory clocks and voltage can throttle down drastically when not under full load.



No, I have several reasons why a SINGLE POOL of GDDR memory is a bad decision in laptops.
No, you really don't.

All you have is a bunch of general statements - some of them very clearly false and honestly rather ignorant - for which you provide zero proof or source despite being asked for it many times over by several different people.

Of course a gaming laptop is going to prioritize gaming performance above power consumption. If the number of Firefox tabs were a gamer's concern, there'd be no dGPUs in gaming laptops, or rather there'd be no gaming laptops, period.
And a high-end APU designed for a gaming laptop would obviously need more memory bandwidth than whatever DDR4 at any reasonable bus width can provide.
And yes, there are very clear advantages to using a monolithic SoC with an embedded CPU and GPU in a gaming system, such as lower VRM cost, higher power efficiency, a simpler PCB, etc. It's so important for cost saving that the console makers had little choice but to go with AMD for a high-performance gaming SoC for their latest consoles.

But I see you finally dropped the similarly erroneous latency argument, so I guess at least someone learned something here.
 
So your infallible logic for predicting that no future laptop APU is going to use GDDR is that no laptop has used GDDR so far?

Wow.

So what you're saying is you're not aware that memory clocks and voltage can throttle down drastically when not under full load.
They can, and yet you can still do that with DDR4 and waste less energy, get more battery life, and get much more bang for your buck than going single-pool for system memory.
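For anyone who wants to sanity-check the trade-off being argued here, a minimal back-of-envelope sketch follows. The ~6W (16GB DDR4) and ~30W (16GB GDDR6) load figures are the ones quoted above; the 60Wh battery, the 8W rest-of-system idle draw, and the idle down-clock factor are purely illustrative assumptions, not measurements.

```python
# Back-of-envelope sketch of the memory power / battery-life trade-off.
# Load figures are taken from the posts above; everything else is assumed.
DDR4_LOAD_W = 6.0          # 16GB DDR4, two modules (quoted above)
GDDR6_LOAD_W = 30.0        # 16GB GDDR6 (quoted above)
BATTERY_WH = 60.0          # assumed typical gaming-laptop battery
PLATFORM_IDLE_W = 8.0      # assumed rest-of-system idle draw
IDLE_FACTOR = 0.25         # assume memory drops to ~1/4 of load power at idle

def idle_runtime_hours(memory_load_w: float) -> float:
    """Rough battery runtime in light use, given the memory's full-load power."""
    return BATTERY_WH / (PLATFORM_IDLE_W + memory_load_w * IDLE_FACTOR)

print(f"DDR4 system:  {idle_runtime_hours(DDR4_LOAD_W):.1f} h")   # ~6.3 h
print(f"GDDR6 system: {idle_runtime_hours(GDDR6_LOAD_W):.1f} h")  # ~3.9 h
```

How big that gap really is hinges entirely on how aggressively GDDR6 can drop its clocks and voltage at idle, which is exactly the point being disputed.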
All you have is a bunch of general statements - some of them very clearly false and honestly rather ignorant - for which you provide zero proof or source despite being asked for it many times over by several different people.

Of course a gaming laptop is going to prioritize gaming performance above power consumption. If the number of Firefox tabs were a gamer's concern, there'd be no dGPUs in gaming laptops, or rather there'd be no gaming laptops, period.
And a high-end APU designed for a gaming laptop would obviously need more memory bandwidth than whatever DDR4 at any reasonable bus width can provide.
And yes, there are very clear advantages to using a monolithic SoC with an embedded CPU and GPU in a gaming system, such as lower VRM cost, higher power efficiency, a simpler PCB, etc. It's so important for cost saving that the console makers had little choice but to go with AMD for a high-performance gaming SoC for their latest consoles.
But I see you finally dropped the similarly erroneous latency argument, so I guess at least someone learned something here.
Nah, what you are arguing is a lost cause. There is not one single laptop in existence that uses a high-performance APU coupled with a single pool of GDDR as memory, let alone the fastest GDDR not yet in mass production. Not one, and you are carefully dropping that point and ignoring it.

There are clear advantages to a monolithic SoC in gaming systems such as consoles, but as I argued - not in laptops. This is why you WILL find consoles with high-performance APUs in the past (all last-gen consoles) and in the future (PS5 and Scarlett), but you won't find ONE SINGLE LAPTOP based on such an APU. If it had a single advantage in its design for high-performance laptops, you would see engineers going that route for at least a number of laptops. But as I already said a few times, there have been exactly 0 laptops of this kind in 8 years.
 
It's a huge market right now, to the point of every major to medium laptop maker having their own gaming brand.
So yes, it's very well worth the effort of developing a custom SoC for high-end laptops.
Even more so, considering that AMD charged Subor around $65M for their Z-Plus SoC, IIRC.

How do they define a "gaming laptop"? Marketing? Discrete GPU(Nvidia) vs. embedded, TDP?

When I got my current MBPro it came with a Radeon Pro 460 4GB. IMHO completely overblown and the only reason I got the mid GPU was the larger VRAM. I definitely don't use it for gaming whatsoever and surely could live with a scaled down GPU.

P.S. If these laptops are really bought for gaming, 12B is definitely 12-120 times larger than I imagined.
 
How do they define a "gaming laptop"? Marketing? Discrete GPU(Nvidia) vs. embedded, TDP?

It's not like there is an official spec, so it's mostly marketing. But they generally fall into the category of barely portable, with battery life being an afterthought. When they first started making them it was always a discrete GPU because there was no iGPU that could really run games well, but that line has been crossed.
 
Those are 2GB chips, which would be logical for a dev-kit.


Not necessarily. The total bandwidth actually comes to 529.6 GB/s using the single-chip test, but that would be expected since these are benchmarks, not theoretical figures. Using the DDR4 results in other tests as a reference, there is often an ~8% difference between the benched value and the theoretical maximum. Here it is about 8.76% from 529.6 to the theoretical max of 576.

From memory (SDK docs) there was a 10% difference between theoretical and tested max bandwidth for the PS4's GDDR5 (160 -> 176).
Write bandwidth on a single core is 33.1GB/s but read is only 12.8?
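A quick sanity check of the two efficiency gaps quoted above (the 529.6 vs 576 GB/s devkit figures and the PS4's 160 vs 176 GB/s), assuming both gaps are expressed relative to the benched value:

```python
# Bench-vs-theoretical bandwidth gap, expressed relative to the benched value.
def gap_pct(benched: float, theoretical: float) -> float:
    return (theoretical - benched) / benched * 100

print(f"{gap_pct(529.6, 576.0):.2f}%")  # rumored devkit GDDR6 -> ~8.76%
print(f"{gap_pct(160.0, 176.0):.2f}%")  # PS4 GDDR5 (SDK docs) -> ~10.00%
```

So the ~8.76% shortfall is in the same ballpark as the PS4's documented ~10% gap between tested and theoretical peak bandwidth.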
 
Is there any value in a laptop workstation? Emphasis on portability, not battery life? Focussed on productivity. I can't really see a need for fast VRAM for anything beyond CG(I) though, or the very most ridiculous 4K/8K video editing. So I think the idea of AMD creating an APU for workstations may be unrealistic.
 
Is there any value in a laptop workstation? Emphasis on portability, not battery life? Focussed on productivity. I can't really see a need for fast VRAM for anything beyond CG(I) though, or the very most ridiculous 4K/8K video editing. So I think the idea of AMD creating an APU for workstations may be unrealistic.
How long ago was it that computer upgrading was driven by need?
It is driven by created desire. In this case though, the justification could be value - by having a sufficiently performant laptop, you could completely get rid of a stationary computer, for instance, something that most private individuals have already done. Gaming is probably the single most common use case that benefits from increased performance (and thus a reason to hang on to a stationary solution), but if you could get enough performance from a relatively inexpensive laptop, why wouldn't many find that an attractive option? And the more performant and less expensive, the better.

The joys of pitiful battery life and cooling high performance silicon with itty-bitty fans will be discovered after the purchase.
 
How do they define a "gaming laptop"? Marketing? Discrete GPU(Nvidia) vs. embedded, TDP?
I think it's mostly based on the performance class of the discrete GPU present. AFAICS most laptops with a 45W H-series CPU and a GTX1050 or higher are coming under a gaming brand nowadays.
At the same time, almost all office-use laptops are coming with 15W U-series CPUs and either no dGPU or an MX150/MX250 at most.


And then for the sub-brands, HP has "Pavilion Gaming" and then the "Omen" laptops; MSI GT/GS/GE/GP/GF/GL series, Gigabyte Aorus and Aero, Asus ROG and TUF, Lenovo Legion, Acer Nitro and Predator, Dell G and Alienware, and there's a bunch of brands that only do gaming laptops, like Razer and Falcon. Then there are some manufacturers, like Clevo, who license a bunch of gaming laptop "chassis" for retail stores to configure and put their own branding on.

Regardless, if you go to newegg and choose "Gaming Laptops" you'll see they're selling over 3500 different variations of gaming laptop models:
https://www.newegg.com/Gaming-Lapto...ng-Laptops&Tid=167732&PageSize=96&Order=PRICE


Is there any value in a laptop workstation? Emphasis on portability, not battery life? Focussed on productivity. I can't really see a need for fast VRAM for anything beyond CG(I) though, or the very most ridiculous 4K/8K video editing. So I think the idea of AMD creating an APU for workstations may be unrealistic.
There's also a rising market for laptop workstations targeted at machine learning. I think you need hefty bandwidth to feed all those tens/hundreds of IOPS. Or at least the GPUs that have high IOPS throughput are also the larger GPUs that traditionally come with higher bandwidth.

And if I may speak for myself, even if I wanted a laptop for gaming I'd much rather find myself a "laptop workstation" with sober looks but powerful hardware than the typical flamboyant gaming laptop with Christmas-tree RGB LEDs and rounded plastic everywhere to make it look aerodynamic.




The joys of pitiful battery life and cooling high performance silicon with itty-bitty fans will be discovered after the purchase.
I think the battery life side of things has progressed pretty well nowadays. The truth is the Intel H-series CPUs don't consume a whole lot more than the U-series under light loads, so a 15" gaming laptop with an H-series CPU and a hefty RTX2070 can get about the same battery life in office usage as a 15" office laptop with a U-series CPU. Thanks to Optimus, the dGPU gets completely powered down.
 
The absolute minimum is 9 TFLOPS based on the Gonzalo benchmark. It can't be less.


The X1X is already 6 TF, granted less efficient.
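For reference, these TFLOPS figures are usually derived as CUs × 64 shaders per CU × 2 FP32 ops per clock × clock speed. Here is a minimal sketch: the One X numbers (40 CUs at 1172 MHz) are public, but the Navi CU/clock combinations below are purely illustrative ways of landing at ~9 TF, not leaked specs.

```python
# FP32 throughput for GCN/RDNA-style GPUs: CUs * 64 shaders * 2 ops/clock * clock
def tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1_000_000

print(f"{tflops(40, 1172):.2f} TF")  # Xbox One X: 40 CUs @ 1172 MHz -> ~6.0 TF
print(f"{tflops(36, 1950):.2f} TF")  # hypothetical 36 CUs @ ~1.95 GHz -> ~9.0 TF
print(f"{tflops(40, 1760):.2f} TF")  # hypothetical 40 CUs @ ~1.76 GHz -> ~9.0 TF
```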

Just like I predicted, the One X's high specs make it difficult for next gen to make a splash. At BEST they're going to have 16GB of RAM where the X already has 12GB. I would assume 12GB of RAM would also be in play, although thinking about it just now I'd lean heavily towards 16.

Of course, next gen's big weapon is that they don't have to support <2TF base consoles. Or do they...

It's gonna be expensive. RAM hurts, ask Apple. That's why they gave my otherwise nice iPad 9.7 only 2GB... I always remember that back in the Xbox days an MS guy said the things that don't decrease in cost over time like the other components are RAM and the hard drive.
 
How can it not be over 9000 gflops
 
The absolute minimum is 9 TFLOPS based on the Gonzalo benchmark. It can't be less.

I'm awfully suspicious of this seemingly widespread idea that developers with near-final hardware dev kits of game consoles (which won't run Windows 10) are capable of installing Windows 10 + working GPU and motherboard chipset drivers for Windows 10 + a 64-bit benchmark executable and then running said benchmark to post results in an online database.

I'd find it rather believable if they were running some browser-based benchmark for JavaScript performance like Octane, or at most some WebGL benchmark like Fishtank.
But full-on Windows 10 running 3DMark on a PS5 devkit using a semicustom SoC? To me that's quite the elephant in the room.

Maybe if Gonzalo and Flute are Scarlett SoCs and the next-gen Xbox is running full-on Windows 10 I could see it happening, but even that isn't very probable, to be honest.
 
I'm awfully suspicious of this seemingly widespread idea that developers with near-final hardware dev kits of game consoles (which won't run Windows 10) are capable of installing Windows 10 + working GPU and motherboard chipset drivers for Windows 10 + a 64-bit benchmark executable and then running said benchmark to post results in an online database.

I'd find it rather believable if they were running some browser-based benchmark for JavaScript performance like Octane, or at most some WebGL benchmark like Fishtank.
But full-on Windows 10 running 3DMark on a PS5 devkit using a semicustom SoC? To me that's quite the elephant in the room.

Maybe if Gonzalo and Flute are Scarlett SoCs and the next-gen Xbox is running full-on Windows 10 I could see it happening, but even that isn't very probable, to be honest.
Why not? Even the PS4 has run it and got a 5000 overall score, so it would only be natural that they updated the drivers for a more powerful PS5 devkit SoC (assuming it's near final). Not that I think this came from a PS5 devkit, since the crazy high graphics score raises eyebrows a bit. But man oh man, the amount of goats I would sacrifice to have a PS5 that powerful.
 