Why do some console ports run significantly slower on equivalent PC hardware?

Sorry, but that one very old "mobile" benchmark is not enough to establish this grand, sweeping, ridiculous claim of 3080m equivalence. This benchmark specifically doesn't scale well with high-end GPUs at all. Worse, the Aztec test is one of several tests in a suite called GFXBench, all of which have dirt-simple graphics.

Here is the desktop 3080 Ti being barely faster than the mobile 3080 in the Manhattan test; worse yet, the desktop 3080 Ti is actually slower than the mobile 3080 in the T-Rex test!
https://gfxbench.com/compare.jsp?be...type2=dGPU&hwname2=NVIDIA+GeForce+RTX+3080+Ti

For comparing GPUs, the whole suite is as useless as it gets!

Fair point. Though I didn't claim it was a great benchmark. I just said that it had a ground-up Metal implementation.

No, Geekbench (useless too, but consistent) and Shadow of the Tomb Raider, which has a Metal API implementation.

Geekbench is problematic for the M1 devices, as documented by Andrei at AnandTech; something about it not triggering the chip's performance mode.

SoTR is a rush-job port. So my point above stands.


It had better be trading blows with a 3080m, seeing the price of a 32-core M1 Max (or Pro) device built on bleeding-edge 5nm technology.

This got debunked in the other "Apple is an existential threat to the PC" thread. The devices are in the same price class, and the MBP comes with other creature comforts that the PC laptop rivals do not have.

That's probably because the mobile 3080 was designed to run stuff like this, not to top a mobile-phone benchmark like Aztec.

See my response to DavidGraham.

We're talking about subpar implementation or utilisation of resources. The point I was raising was that, much like how a lot of console games can extract more performance from a given set of hardware compared to the PC, many 3D applications also leave a lot of untapped potential performance on the macOS platform. This is most obvious when looking at the latest M1/Max/Pro, and it also applies to AMD GPUs on the Mac, and even back in the day when Nvidia GPUs were in Apple systems.
 
I've seen such statements before somewhere ;)

I'm pretty confident that it'll remain challenging to get playable frame rates with Dreams' point cloud renderer even on the most powerful PC hardware, at least until they have feature parity. ;)

They don't just use ordered append for dual contouring; they use it in several other unspecified instances as well. If emulation of modern consoles on other platforms ever sees an uptick, Dreams is the most likely candidate to remain infeasible for emulation, too. By using one of the most enigmatic hardware features, Media Molecule has by default implemented very strong countermeasures against emulation or porting!

In the past, it might've been an open possibility that we could see Dreams running on PC, before this functionality was shelved ...
 
It's not just the driver; virtually all of the GPU instruction set is exposed to developers. If performance parity meant not using console features like UMA, then proper optimization on PC wouldn't be possible regardless of how hard they try ...
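To give a rough idea of what losing UMA costs, here is a minimal CUDA sketch of my own (nothing to do with any actual console API; the 256 MB buffer size is made up purely for illustration):

```cuda
// Minimal sketch, assuming a per-frame buffer the CPU builds and the GPU reads.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    const size_t bytes = 256ull << 20;   // hypothetical 256 MB of CPU-built data per frame

    // Discrete-GPU path: the data has to be staged across PCIe before any
    // kernel can touch it, every time the CPU rewrites it.
    float *host = nullptr, *device = nullptr;
    cudaMallocHost(&host, bytes);                             // pinned staging buffer
    cudaMalloc(&device, bytes);
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);  // explicit copy, every frame

    // Closest PC approximation to UMA: managed memory. The copy disappears
    // from the source code, but pages still migrate over the bus on demand;
    // on a console the CPU and GPU share the same DRAM and the copy simply
    // never exists.
    float *shared = nullptr;
    cudaMallocManaged(&shared, bytes);

    cudaFree(device);
    cudaFree(shared);
    cudaFreeHost(host);
    printf("done\n");
    return 0;
}
```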

Dreams by Media Molecule in particular would be very hard to get running on PC. Their point cloud renderer relies on global ordered append to get Hilbert ordering and self-intersection-free dual contouring, all for free on consoles, and there's no good alternative to mimic these effects on PC without killing your framerate even on the highest-end hardware available over there. Given the extremely high density of the point clouds, I would not be surprised to find out that they use ordered append to do culling as well ...
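To make the ordered-append point a bit more concrete, here is a rough CUDA-flavoured sketch of a point-compaction pass of my own (not MM's code; keepPoint and the buffer layout are invented):

```cuda
// Minimal sketch of stream compaction with a plain atomic counter, which is
// roughly what PC hardware gives you. Console-style "global ordered append"
// hands out slots in wavefront launch order instead.

__device__ unsigned int g_writeCursor;                       // must be zeroed before each launch

__device__ bool keepPoint(float4 p) { return p.w > 0.0f; }   // stand-in culling test

__global__ void compactPoints(const float4* in, float4* out, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    float4 p = in[i];
    if (!keepPoint(p)) return;

    // Unordered atomic: the slot you get depends on scheduling, so any
    // ordering the input had (e.g. Hilbert order) is destroyed and has to be
    // recovered with an extra sort or prefix-sum pass over the whole stream.
    unsigned int slot = atomicAdd(&g_writeCursor, 1);

    // With hardware ordered append, this same write would preserve submission
    // order for free, which is what makes the console path so cheap.
    out[slot] = p;
}
```

That extra sort/scan pass over the whole point stream every frame is exactly the kind of cost I mean when I say there's no good alternative on PC ...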

You can create entire rendering pipelines that are only possible on consoles. Truly obscure stuff ...

I don't think anyone would argue that you can achieve things in ways on consoles that can't be replicated on PC. And that in many cases the console implementation will be more efficient than doing it a different way on the PC. But that's very different to saying the end result can't be achieved on PC regardless of how much power you throw at it. There's always another way to achieve something even if that way is less efficient and thus requires more horsepower. Case in point, Media Molecule do intend to bring Dreams to the PC at some point:

https://www.gamesindustry.biz/artic...-to-publish-games-to-other-devices-and-beyond

So the real question isn't whether something that can be achieved on console is impossible on PC, it's how much the additional flexibility afforded by console API's and driver models can boost performance over the best alternatives for achieving the same end result on PC. That's where the optimization comes in - finding the best alternatives where they are needed. And that's where older architectures will receive little to no effort while newer ones will - both at the game level and the driver level.
 
The devices are in the same price class

An Asus G15 laptop (15.6'') with a 130 W 3070, 32 GB of RAM, a 5800H, and a 144 Hz IPS display can be had for around 1500 USD here; you'd need the 32-core to roughly match the GPU in theoretical benchmarks. That's theoretical though; as Linus Tech Tips points out in the video in this thread, it's basically high-end video editing where you see the larger performance advantages, most likely due to the media accelerators/ProRes. Talking about this in a gaming thread, I think it's quite a bad value for the M1 Pro/Max. They are not in the same price class at all.

the MBP comes with other creature comforts that the PC laptop rivals do not have.

I've compared my GE76 against a family member's almost-maxed M1; the largest differences are battery life (and performance on battery) and the audio quality from the built-in speakers.
I'd say for creature comforts (what exactly do you mean by that, anyway?) they're not too far apart; the Apple device is more premium in build and execution, but they appeal to different markets. The gamer/all-round user will go the GE76 route while the content creator will most likely pick the Apple device. That said, the GE76 is quite good at content creation tasks as well (CUDA etc.), while also offering excellent gaming performance for a laptop. The M1 series excels only in some specific content creation tasks, usually Apple-supported ones, and basically not much else, nothing in gaming, as opposed to, say, the GE76.

I don't think anyone would argue that you can achieve things in ways on consoles that can't be replicated on PC. And that in many cases the console implementation will be more efficient than doing it a different way on the PC. But that's very different to saying the end result can't be achieved on PC regardless of how much power you throw at it. There's always another way to achieve something even if that way is less efficient and thus requires more horsepower. Case in point, Media Molecule do intend to bring Dreams to the PC at some point:

In the same vein, something done and optimized for the PC won't be as efficient on consoles. That's how it goes with different architectures. The thing is, on the PC platform you can brute-force things if needed.
We're mostly talking about different architectures here, not about one being superior to the other on an architectural level.
 
An Asus G15 laptop (15.6'') with a 130 W 3070, 32 GB of RAM, a 5800H, and a 144 Hz IPS display can be had for around 1500 USD here; you'd need the 32-core to roughly match the GPU in theoretical benchmarks.

You're about to argue a different point.

The cheapest G15 on Newegg, the Strix G15 with 16 GB of RAM, an RTX 3070, and a 300 Hz display, is about $1800:
https://www.newegg.com/original-black-asus-rog-strix-g15-g513qr-es96-gaming-entertainment/p/N82E16834235646?Description=asus g15 3070&cm_re=asus_g15 3070-_-34-235-646-_-Product
But this is an apples-to-oranges comparison. It doesn't even come with a webcam, has no Thunderbolt (only USB 3.0 at 5 Gbps), the speakers are garbage, and it's entirely plastic. The 144 Hz panel version that you mentioned also has a trash screen: 62.5% of sRGB, which makes it entirely useless for professionals. I haven't read into whether the keyboard or trackpad are any good, but a priori I doubt they are. Also, the 3070 alone consumes more power than the entire M1 Max SoC.

That's theoretical though; as Linus Tech Tips points out in the video in this thread, it's basically high-end video editing where you see the larger performance advantages, most likely due to the media accelerators/ProRes. Talking about this in a gaming thread, I think it's quite a bad value for the M1 Pro/Max. They are not in the same price class at all.

If you get a laptop that is comparable to the overall package of the MacBook Pro (display, keyboard, aluminium construction, webcam, speakers, battery life, etc.), you'll find that the M1 Max MBP is fairly price competitive. I've explained this in the other thread too.

For example, stepping up from the Strix to the Zephyrus (and getting all the bonus features such as better build quality, ports, and the RTX 3080) brings the price of the Asus to $3189:
https://www.newegg.com/p/2WC-000N-045A8?Description=Asus G15 3080&cm_re=Asus_G15 3080-_-2WC-000N-045A8-_-Product

Even then, there are still some features missing.

RE: the LTT video: Yes, this pretty much echoes what I've been saying all along -- see my point later in this message.

I'd say for creature comforts (what exactly do you mean by that, anyway?) they're not too far apart;

See the list above.

the Apple device is more premium in build and execution, but they appeal to different markets.
Yes, exactly, and the entire point I'm making -- and the point of this thread -- is to ask "why?".

Gaming performance on macOS (M1/Radeon/GeForce equipped Macs) is utter trash mostly because of subpar implementation (see my original post). Apple refuses to support Vulkan, DirectX is unavailable, macOS has transitioned to ARM, and most AAA developers will not bother spending resources to make a proper Metal implementation of their games -- and, honestly, who can blame them? The potential return is far too low.

This is basically what happens with console-focused AAA games and their transition to the PC. Developers just rely on the brute force available on PCs to run their games, but this comes with a huge performance tax.

The gamer/all-round user will go the GE76 route while the content creator will most likely pick the Apple device.

Pretty sure "all around users" will go with a MacBook Pro/Air over a GE76.
 
I don't think anyone would argue that you can achieve things in ways on consoles that can't be replicated on PC. And that in many cases the console implementation will be more efficient than doing it a different way on the PC. But that's very different to saying the end result can't be achieved on PC regardless of how much power you throw at it.

FWIW, I never made the trivial argument that Dreams can't run on PC, because you can theoretically run any code regardless of the performance profile. That's why I framed my assertion on the basis of feasibility rather than absolutes ...

There's always another way to achieve something even if that way is less efficient and thus requires more horsepower. Case in point, Media Molecule do intend to bring Dreams to the PC at some point:

https://www.gamesindustry.biz/artic...-to-publish-games-to-other-devices-and-beyond

It depends on what your interpretation of "another way" implies. You'd have a better chance at making a feasible software renderer (CPU) for Dreams than you would on a GPU, and current consumer-grade hardware wouldn't be good enough to be playable. We'd ideally want future high-end server-grade CPUs to be able to maintain a stable 30 FPS at all times with PS4-equivalent settings. Standard PC GPUs are virtually a paperweight any way you slice it, and it's hardly realistic for them to demand server-grade CPUs or make the game exclusive to a single hardware vendor (AMD) on PC. If your other potential workaround involves Dreams changing from a point cloud to a polygonal renderer (massive implications), then you've pretty much lost the argument, since the PC wasn't able to render the same content as seen on consoles ...

As for the link, it's a couple of years old now, and he shouldn't take this the wrong way, but the person they interviewed is an art director, so he's hardly an authority on appraising the technical feasibility of a PC port ...

So the real question isn't whether something that can be achieved on console is impossible on PC, it's how much the additional flexibility afforded by console API's and driver models can boost performance over the best alternatives for achieving the same end result on PC. That's where the optimization comes in - finding the best alternatives where they are needed. And that's where older architectures will receive little to no effort while newer ones will - both at the game level and the driver level.

There is no such thing as a "driver" on consoles in the same sense as on PC. This "driver" on consoles is statically linked to each and every game executable. Basically, every game ships its own "driver". There are too many people who underestimate what developers can do with console APIs from both a performance and functionality standpoint. Just because the common multiplatform developer sees that their own game's usage patterns are a friendly match for PC doesn't mean that usage patterns from other developers/games will always map well to PC. There used to be tons of content changes between console and PC releases of games back in the past, highlighting the irreconcilable differences in hardware design. Dreams is somewhat of a throwback to those times when there wasn't a lot of standardized functionality, because it highlights that even with identical hardware architectures, console APIs still leave a rift between them and PCs that, in very rare cases, can't be crossed no matter how hard you try ...
 
And that in many cases the console implementation will be more efficient than doing it a different way on the PC.
Pretty sure there are many cases where newer PC APIs, shader models and newer HW are way more efficient at doing certain things. And there are certainly many things that current PC hardware can do but consoles can't, so it's a double-edged sword.
While consoles have the advantage of lower-level optimizations mostly in the late period of their lifecycle, PC hardware and SW also evolve. People tend to talk of consoles' "API advantage" as if it were set in stone, but in reality, PC HW will gain more advantages with time due to much better HW architectures designed to robustly handle stuff that would require suboptimal SW API workarounds on older HW.
 
It depends on what your interpretation of "another way" implies. You'd have a better chance at making a feasible software renderer (CPU) for Dreams than you would on a GPU, and current consumer-grade hardware wouldn't be good enough to be playable. We'd ideally want future high-end server-grade CPUs to be able to maintain a stable 30 FPS at all times with PS4-equivalent settings. Standard PC GPUs are virtually a paperweight any way you slice it, and it's hardly realistic for them to demand server-grade CPUs or make the game exclusive to a single hardware vendor (AMD) on PC. If your other potential workaround involves Dreams changing from a point cloud to a polygonal renderer (massive implications), then you've pretty much lost the argument, since the PC wasn't able to render the same content as seen on consoles ...

"Another way" means whatever solution can produce a similar output for a similar performance cost regardless of the methods used to achieve the output.

I do think you're too quick to dismiss performant point cloud rendering on modern PC GPUs as impossible though. Take this for example, which demonstrates rendering point clouds with compute shaders via various APIs and achieving up to 50 billion points per second on an RTX 3090.

https://arxiv.org/pdf/2104.07526.pdf
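For a rough idea of the core trick in that paper, here's a CUDA-flavoured sketch of it as I understand it (not their actual shader code; the projection helper and buffer names are mine):

```cuda
// Minimal sketch: rasterize points in a compute kernel by packing depth into
// the high 32 bits and colour into the low 32 bits of a 64-bit word, then
// resolving visibility with one atomicMin per pixel (needs compute capability
// 3.5+ for the 64-bit atomicMin). The framebuffer must be cleared to
// 0xFFFFFFFFFFFFFFFF before each frame.
#include <cstdint>

__constant__ float c_viewProj[16];   // assumed column-major view-projection matrix

__device__ float4 projectToScreen(float3 p)
{
    return make_float4(
        c_viewProj[0]*p.x + c_viewProj[4]*p.y + c_viewProj[8]*p.z  + c_viewProj[12],
        c_viewProj[1]*p.x + c_viewProj[5]*p.y + c_viewProj[9]*p.z  + c_viewProj[13],
        c_viewProj[2]*p.x + c_viewProj[6]*p.y + c_viewProj[10]*p.z + c_viewProj[14],
        c_viewProj[3]*p.x + c_viewProj[7]*p.y + c_viewProj[11]*p.z + c_viewProj[15]);
}

__global__ void rasterizePoints(const float3* positions, const uint32_t* colors,
                                int numPoints, unsigned long long* framebuffer,
                                int width, int height)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPoints) return;

    float4 s = projectToScreen(positions[i]);
    if (s.w <= 0.0f) return;                                  // behind the camera
    int x = (int)((s.x / s.w * 0.5f + 0.5f) * width);
    int y = (int)((s.y / s.w * 0.5f + 0.5f) * height);
    if (x < 0 || x >= width || y < 0 || y >= height) return;

    // Depth in the upper bits means atomicMin keeps the nearest point per
    // pixel, replacing both the depth test and the colour write.
    uint32_t depthBits = __float_as_uint(s.z / s.w);
    unsigned long long packed = ((unsigned long long)depthBits << 32) | colors[i];
    atomicMin(&framebuffer[y * width + x], packed);
}
```

That only speaks to raw splatting throughput, of course, not to Dreams' full feature set.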

As for the link, it's a couple of years old now, and he shouldn't take this the wrong way, but the person they interviewed is an art director, so he's hardly an authority on appraising the technical feasibility of a PC port ...

He's also a co-founder of the company, so if he says they have plans to bring it to PC (and other platforms), I'm inclined to believe he understands that it's not actually impossible.

Just because the common multiplatform developer sees that their own game's usage patterns are a friendly match for PC doesn't mean that usage patterns from other developers/games will always map well to PC. There used to be tons of content changes between console and PC releases of games back in the past, highlighting the irreconcilable differences in hardware design.

I can understand and appreciate this, but ultimately, if the way it's done on console doesn't map very well to the PC, then find a different way to do it that does map well for the PC version. And if you still lose performance in doing so, then that's the genuine console API/driver model advantage. The real question is how much performance you actually lose in such scenarios.

The argument here seems to be that these new games running on old GPUs might be an example of the amount of performance you can lose, whereas I'd argue there are other factors at play there, such as the games not having any reasonable level of optimization for those old architectures. If you look instead at modern GPUs, the performance deltas seem to be much smaller (if they exist at all).
 
If you get a laptop that is comparable to the overall package of the MacBook Pro (display, keyboard, aluminium construction, webcam, speakers, battery life, etc.), you'll find that the M1 Max MBP is fairly price competitive. I've explained this in the other thread too.
And of course, the M1 can actually achieve very similar performance and (silent) acoustics running on battery. That's not some single bullet point when we're talking about a portable computing device; it's kind of the reason they exist!
 
"Another way" means whatever solution can produce a similar output for a similar performance cost regardless of the methods used to achieve the output.

I do think you're too quick to dismiss performant point cloud rendering on modern PC GPUs as impossible though. Take this for example, which demonstrates rendering point clouds with compute shaders via various APIs and achieving up to 50 billion points per second on an RTX 3090.

https://arxiv.org/pdf/2104.07526.pdf

With the same feature set, I presume? Deformable/dynamic point clouds, real-time editing, dual contouring, transparency, etc.?

He's also a co-founder of the company, so if he says they have plans to bring it to PC (and other platforms), I'm inclined to believe he understands that it's not actually impossible.

Yes, but being a co-founder doesn't make someone capable of making the technical judgements at hand ...

I can understand and appreciate this, but ultimately, if the way it's done on console doesn't map very well to the PC, then find a different way to do it that does map well for the PC version. And if you still lose performance in doing so, then that's the genuine console API/driver model advantage. The real question is how much performance you actually lose in such scenarios.

You do realize that concepts can have either a good or a bad implementation, right? What I'm trying to demonstrate is that the PC can potentially be left with only bad options, while the good options are only possible on consoles ...
 
It doesn't even come with a webcam

It doesn't come with a NOTCH either.

The 144 Hz panel version that you mentioned also has a trash screen: 62.5% of sRGB, which makes it entirely useless for professionals. I haven't read into whether the keyboard or trackpad are any good, but a priori I doubt they are.

For gamers, that 144 Hz panel is more important than colour accuracy, though. The G15 can be had for around 15,000 kr (around 1500 USD) here, and even at 1800 USD in the US, how are you going to match that value-wise with any of the M1s? It's impossible here.

speakers are garbage

They're not as good as the Apple's, but they ain't trash either.

RE: the LTT video: Yes, this pretty much echoes what I've been saying all along -- see my point later in this message.

The LTT video is in line with the truth: you're getting top-dog performance in specific content creation tasks. That's about it, performance-wise. Compared to the GE76, it's not entirely true that the Apple device is superior in all regards; in performance the GE76 is ahead in just about anything bar specific content creation apps, and its screen isn't worse, to me personally it's 'better'. Mine has a 4K OLED that might not be more accurate, but it's a more immersive screen for actually enjoying content instead of creating it. Audio is very good as well; it's not as good as the Apple's again, but not worth so much more money. Better efficiency and thus battery life, yes, but different architecture and node advantages aside, that's not a priority for a gamer. And yes, you'd have to go for the maxed M1 to theoretically come close to the GE76 with a 3080m. The Apple is going to be more expensive.

Gaming performance on macOS (M1/Radeon/GeForce equipped Macs) is utter trash mostly because of subpar implementation (see my original post).

Content creation is at a disadvantage on the AMD/NV hardware due to less optimization compared to macOS. It still suffices, though. The Windows laptop does both quite well, and even beats the Apple in some content creation apps. The Apple does only one thing very well.


It doesn't matter why; it's what you get today when you pay for the products.

This is basically what happens with console-focused AAA games and their transition to the PC. Developers just rely on the brute force available on PCs to run their games, but this comes with a huge performance tax.

Just that the performance deltas aren't as huge, though. A 7970 that was released 1.5 years ahead of the console didn't perform all that well; that thing probably hasn't had proper driver support since.... no idea. And again, you're looking at ports from AAA studios. Multiplatform games, which make up most of the titles, are mostly doing better performance-wise.

Pretty sure "all around users" will go with a MacBook Pro/Air over a GE76.

Pretty sure it's the other way around. No idea why everyone on Windows would consider an Apple Mac all of a sudden; nothing's changed, content creation is better done on a Mac, just like before.

We'd ideally want future high-end server-grade CPUs to be able to maintain a stable 30 FPS at all times with PS4-equivalent settings.

Lol, didn't read the rest of your post after that comment.
 
It doesn't come with a NOTCH either.

:rolleyes:
Hilarious!

What more can I say? You've missed the point of this thread entirely, and have struggled to comprehend the substance of the discussion here.

It doesn't matter why; it's what you get today when you pay for the products.

Then why are you even in this thread? We're trying to discuss apps where consoles have performance equivalent to their PC counterparts, despite having far fewer resources to work with.
 
Ok, so let's keep Apple out of this discussion then.
This is Beyond3D, not BeyondLTT.

We should have zero tolerance for cringe “PCMR” smugness or memeing about Apple’s hardware offerings and performance.

The parallel between extracting performance from consoles and PCs and extracting performance from Macs and PCs is pretty obvious and simple to appreciate.

The most performant UMA in the desktop/notebook space is invariably going to come up when we're talking about the relative performance of a very similar architecture in the console space.

Well said.
 
We should have zero tolerance for cringe “PCMR” smugness or memeing about Apple’s hardware offerings and performance.
Apparently you haven't been on B3D long enough.

The parallel between extracting performance from consoles and PCs and extracting performance from Macs and PCs is pretty obvious and simple to appreciate.
Bullshit. Apple has never, ever prioritized gaming performance on their platforms. They have always aimed at the people who want to spend a lot more money for the spit-shine and polish of their ecosystem: the well-to-do "creative types" who can be funded by rich parents, companies who want to cater to their whims, educational subsidies (Apple did really well planting their seeds, so to speak, in the '80s and '90s), and "creative shops".

Apple is not in the gaming space by any meaningful measure; their relationship to console gaming is limited to basically the fact that they're both digital devices. It has no bearing at all on why ports from consoles to PC (which share the same foundational CPU, GPU, memory, storage and network instruction sets and architectures) perform so very differently.

Truth is, despite Digi's snark on the matter, the devs likely "don't care" enough to hand-tailor a game meant for console limitations into the PC world. Despite having near-identical architectural foundations, consoles and PCs still have interesting differences in the OS and related abstraction layers. It's akin to looking at workload performance on the same application when back-ended by either an Oracle, Microsoft SQL Server, or IBM DB2 relational database platform. All three, at the end of the day, are modern relational databases based on the same foundational technologies. However, they perform very differently with different workloads, and simply "porting" code targeting one platform (Oracle) to another (IBM DB2) can result in very significant performance changes.

Usually those can be tuned out, with a lot of care and time and attention -- but often simply throwing more hardware at it (i.e. a "lazy" PC port) is just easier.
 
Fair point. Though I didn't claim it was a great benchmark. I just said that it had a ground-up Metal implementation.
Be aware also that comparisons to the 3080m are often not specific: the 3080m can be tuned by OEMs to a wide range of operating power limits, from 80 W up to 200 W, with wildly different performance profiles as a result. This is crucial information when doing comparisons between GPUs.
 