Why do some console ports run significantly slower on equivalent PC hardware?

Discussion in 'Architecture and Products' started by DavidGraham, Dec 8, 2021.

  1. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,090
    Have seen such statements before somewhere ;)
     
    OlegSH likes this.
  2. Wesker

    Regular

    Joined:
    May 3, 2008
    Messages:
    299
    Likes Received:
    186
    Location:
    Oxford, UK
    Fair point. Though I didn't claim it was a great benchmark. I just said that it had a ground-up Metal implementation.

    Geekbench is problematic for the M1 devices, as documented by Andrei at AnandTech. Something about how it doesn't trigger performance mode.

    SoTR is a rush-job port, so my point above stands.


    This got debunked in the other "Apple is an existential threat to the PC" thread. The devices are in the same price class, and the MBP comes with other creature comforts that the PC laptop rivals do not have.

    See my response to DavidGraham.

    We're talking about subpar implementation or utilisation of resources. The point I was raising was that, much like how a lot of console games can extract more performance from a given set of hardware compared to the PC, many 3D applications also leave a lot of untapped performance potential on the macOS platform. This is most obvious when looking at the latest M1 Pro/Max, and it also applies to AMD GPUs on the Mac, and even back in the day when Nvidia GPUs were in Apple systems.
     
  3. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    I'm pretty confident that it'll remain challenging to get playable frame rates with Dreams's point cloud renderer even with the most powerful PC hardware, at least until they have feature parity. :wink:

    They don't just use ordered append for dual contouring; they use it in several other unspecified instances as well. If modern console emulation of other platforms ever sees an uptick, Dreams is the most likely candidate to remain infeasible for emulation too. By using one of the most enigmatic hardware features, Media Molecule has, by default, implemented very strong countermeasures against emulation or porting!

    In the past, it might've been an open possibility that we could have seen Dreams running on PC before this functionality was shelved ...
     
    Lightman, T2098 and BRiT like this.
  4. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,235
    Likes Received:
    4,259
    Location:
    Guess...
    I don't think anyone would dispute that you can achieve things on consoles in ways that can't be replicated on PC, and that in many cases the console implementation will be more efficient than doing it a different way on the PC. But that's very different from saying the end result can't be achieved on PC regardless of how much power you throw at it. There's always another way to achieve something, even if that way is less efficient and thus requires more horsepower. Case in point: Media Molecule do intend to bring Dreams to the PC at some point:

    https://www.gamesindustry.biz/artic...-to-publish-games-to-other-devices-and-beyond

    So the real question isn't whether something that can be achieved on console is impossible on PC; it's how much the additional flexibility afforded by console APIs and driver models can boost performance over the best alternatives for achieving the same end result on PC. That's where the optimization comes in: finding the best alternatives where they're needed. And that's where older architectures will receive little to no effort while newer ones will, both at the game level and the driver level.
     
    DavidGraham and PSman1700 like this.
  5. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,090
    An Asus G15 laptop (15.6'') with a 130W 3070, 32GB of RAM, a 5800H, and a 144Hz IPS display can be had for around 1500USD here; you'd need the 32-core model to roughly match its GPU in theoretical benchmarks. That's theoretical though: as Linus Tech Tips points out in the video in this thread, it's basically high-end video editing where you see the larger performance advantages, most likely due to the media accelerators/ProRes support. Since we're talking about this in a gaming thread, I think the M1 Pro/Max is quite bad value. They are not in the same price class at all.

    I've compared my GE76 against a family member's almost-maxed M1; the largest differences are battery life (and performance on battery) and the audio quality from the built-in speakers.
    I'd say for creature comforts (what do you mean by that exactly, anyway?) they're not too far apart. The Apple device is more premium in build and execution, but they appeal to different markets: the gamer/all-round user will go the GE76 route, while the content creator will most likely pick the Apple device. That said, the GE76 is quite good at content creation tasks as well (CUDA etc.), while also offering excellent gaming performance for a laptop. The M1 series excels only in some specific content creation tasks, usually Apple-supported ones, and basically not much else; nothing in gaming, as opposed to the GE76.

    In the same vein, something written and optimized for the PC won't be as efficient on consoles. That's how it is with different architectures. The thing is, on the PC platform you can brute force things if needed.
    We're mostly talking about different architectures, not about one being superior to the other on an architectural level.
     
    #45 PSman1700, Dec 14, 2021
    Last edited: Dec 14, 2021
    pjbliverpool likes this.
  6. Wesker

    Regular

    Joined:
    May 3, 2008
    Messages:
    299
    Likes Received:
    186
    Location:
    Oxford, UK
    You're about to argue a different point.

    The cheapest G15 on Newegg is the Strix G15 with 16GB of RAM, an RTX 3070, and a 300Hz display, at about $1800:
    https://www.newegg.com/original-black-asus-rog-strix-g15-g513qr-es96-gaming-entertainment/p/N82E16834235646?Description=asus g15 3070&cm_re=asus_g15 3070-_-34-235-646-_-Product
    But this is an apples-to-oranges comparison. It doesn't even come with a webcam, has no Thunderbolt (only USB 3.0 at 5Gbps), the speakers are garbage, and it's entirely plastic. The 144Hz panel version that you mentioned also has a trash screen: 62.5% sRGB coverage, which makes it entirely useless for professionals. I haven't read into whether the keyboard or trackpad are any good, but a priori I doubt they are. Also, the 3070 alone consumes more power than the entire M1 Max SoC.

    Pick a laptop that is comparable to the overall package of the MacBook Pro (display, keyboard, aluminium construction, webcam, speakers, battery life etc.), and you'll find that the M1 Max MBP is fairly price competitive. I've explained this in the other thread too.

    For example, stepping up from Strix to the Zephyrus (and getting all the bonus features such as better build quality, ports, and the RTX 3080) brings the price of the Asus to $3189:
    https://www.newegg.com/p/2WC-000N-045A8?Description=Asus G15 3080&cm_re=Asus_G15 3080-_-2WC-000N-045A8-_-Product

    Even then, there are still some features missing.

    RE: the LTT video: Yes, this pretty much echoes what I've been saying all along -- see my point later in this message.

    See the list above.

    Yes, exactly, and the entire point I'm making -- and the point of this thread -- is to ask "why?".

    Gaming performance on macOS (M1/Radeon/GeForce equipped Macs) is utter trash mostly because of subpar implementation (see my original post). Apple refuses to support Vulkan, DirectX is unavailable, macOS has transitioned to ARM, and most AAA developers will not bother spending resources to make a proper Metal implementation of their games -- and, honestly, who can blame them? The potential return is far too low.

    This is basically what happens with console-focused AAA games and their transition to the PC. Developers just rely on the brute force available on PCs to run their games, but this comes with a huge performance tax.

    Pretty sure "all around users" will go with a MacBook Pro/Air over a GE76.
     
    Flappy Pannus likes this.
  7. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    FWIW, I never made the trivial argument that Dreams can't run on PC, because you can theoretically run any code regardless of the performance profile. That's why I framed my assertion in terms of feasibility rather than absolutes ...

    It depends on what your interpretation of "another way" implies. You'd have a better chance at making a feasible software renderer (CPU) for Dreams than you would on a GPU, and even then current consumer-grade hardware wouldn't be good enough to be playable. We'd ideally want future high-end server-grade CPUs to be able to maintain a stable 30FPS at all times with PS4-equivalent settings. Standard PC GPUs are virtual paperweights either way you slice it, and it's hardly realistic to demand server-grade CPUs or make the game exclusive to a single hardware vendor (AMD) on PC. If your other potential workaround involves Dreams changing from a point cloud to a polygonal renderer (massive implications), then you've pretty much lost the argument, since PC wasn't able to render the same content as seen on consoles ...

    As for the link, it's a couple of years old now, and (he shouldn't take this the wrong way) the interviewee was an art director, so he's hardly an authority on appraising the technical feasibility of a PC port ...

    There is no such thing as a "driver" on consoles in the same sense as on PC. The console "driver" is statically linked into each and every game executable; basically, every game ships its own "driver". Too many people underestimate what developers can do with console APIs, from both a performance and a functionality standpoint. Just because the typical multiplatform developer finds that their own game's usage patterns map well to PC doesn't mean that usage patterns from other developers/games will always map well to PC. There used to be tons of content changes between console and PC releases of games back in the day, highlighting the irreconcilable differences in hardware design. Dreams is somewhat of a throwback to those times when there wasn't a lot of standardized functionality: it shows that even with identical hardware architectures, console APIs open a rift between consoles and PCs that, in very rare cases, can't be crossed no matter how hard you try ...
     
    Lightman likes this.
  8. OlegSH

    Regular

    Joined:
    Jan 10, 2010
    Messages:
    798
    Likes Received:
    1,625
    Pretty sure there are many cases where newer PC APIs, shader models, and newer hardware are way more efficient at doing certain things. And there are certainly many things that current PC hardware can do but consoles can't, so it's a double-edged sword.
    While consoles have the advantage of lower-level optimizations mostly in the late period of their lifecycle, PC hardware and software also evolve. People tend to talk of the consoles' "API advantage" as if it were set in stone, but in reality PC hardware will gain more advantages over time thanks to better hardware architectures designed to robustly handle stuff that would require suboptimal software API workarounds on older hardware.
     
  9. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,235
    Likes Received:
    4,259
    Location:
    Guess...
    "Another way" means whatever solution can produce a similar output for a similar performance cost regardless of the methods used to achieve the output.

    I do think you're too quick to dismiss performant point cloud rendering on modern PC GPUs as impossible, though. Take this for example, which demonstrates rendering point clouds with compute shaders via various APIs and achieving up to 50 billion points per second on an RTX 3090.

    https://arxiv.org/pdf/2104.07526.pdf
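    The core idea in that line of work is to bypass the fixed-function pipeline: each point packs its depth and color into a single 64-bit integer and atomically min's it into the framebuffer, so the nearest point per pixel wins. A minimal CPU-side sketch of that resolve step (my own illustration, not code from the paper; `rasterize_points` and the 16/32-bit packing split are assumptions for demonstration, using NumPy's `ufunc.at` in place of a GPU `atomicMin`):

    ```python
    import numpy as np

    def rasterize_points(px, py, depth, color, width, height):
        """Return a (height, width) color buffer holding the nearest point per pixel."""
        # Pack depth into the high bits and color into the low bits, so that
        # taking the minimum packed value per pixel selects the closest point.
        d16 = (np.clip(depth, 0.0, 1.0) * 0xFFFF).astype(np.uint64)
        packed = (d16 << np.uint64(32)) | color.astype(np.uint64)
        # Initialize to "infinitely far" so any real point wins the min.
        framebuffer = np.full(width * height, 0xFFFFFFFFFFFFFFFF, dtype=np.uint64)
        idx = py * width + px
        # np.minimum.at is an unbuffered in-place min, emulating the
        # per-pixel atomicMin a GPU compute kernel would perform.
        np.minimum.at(framebuffer, idx, packed)
        # Unpack: the low 32 bits are the color of the winning (nearest) point.
        return (framebuffer & np.uint64(0xFFFFFFFF)).reshape(height, width).astype(np.uint32)
    ```

    The appeal of this scheme is that the whole rasterization is a single scatter pass with no sorting and no z-buffer ties to resolve, which is why it parallelizes so well on GPUs.
    
    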

    He's also co-founder of the company so if he says they have plans to bring it to PC (and other platforms) I'm inclined to believe he understands that it's not actually impossible.

    I can understand and appreciate this, but ultimately, if the way it's done on console doesn't map well to PC, then find a different way that does map well for the PC version. And if you still lose performance in doing so, then that's the genuine console API/driver-model advantage. The real question is how much performance you actually lose in such scenarios.

    The argument here seems to be that these new games running on old GPUs might be an example of how much performance you can lose. Whereas I'd argue there are other factors at play, such as the games not having any reasonable level of optimization for those old architectures. If you look instead at modern GPUs, the performance deltas seem to be much smaller (if they exist at all).
     
    PSman1700 likes this.
  10. Flappy Pannus

    Regular

    Joined:
    Jul 4, 2016
    Messages:
    329
    Likes Received:
    567
    And of course, the M1 can actually achieve very similar performance and (silent) acoustics running on battery. That's not some single bullet-point when we're talking about a portable computing device, it's kind of the reason they exist!
     
    #50 Flappy Pannus, Dec 14, 2021
    Last edited: Dec 14, 2021
    Wesker likes this.
  11. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    With the same feature set, I presume? Deformable/dynamic point clouds, real-time editing, dual contouring, transparency, etc.?

    Yes, but that doesn't mean he's capable of making the technical judgements at hand ...

    You do realize that concepts can have both good and bad implementations, right? What I'm trying to demonstrate is that PC can potentially be left with only the bad options, while the good options are only possible on consoles ...
     
  12. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,090
    It doesn't come with a NOTCH either.

    For gamers, that 144Hz panel is more important than colour accuracy though. The G15 can be had for around 15,000 kr (around 1500USD) here, and even at 1800USD in the US, how are you going to match that value-wise with any of the M1s? It's impossible here.

    They're not as good as Apple's, but they aren't trash either.

    The LTT video is in line with the truth: you're getting top-dog performance in specific content creation tasks, and that's about it performance-wise. Compared to the GE76, it's not entirely true that the Apple device is superior in all regards. Performance-wise the GE76 is ahead in just about everything bar specific content creation apps, and its screen isn't worse; to me personally it's 'better'. Mine has a 4K OLED that might not be more accurate, but it's a more immersive screen for actually enjoying content instead of creating it. Audio is very good as well; not as good as the Apple's, again, but not worth that much more money. Better efficiency and thus battery life, yes; architecture and node advantages aside, that's not a priority for a gamer. And yes, you'd have to go for the maxed M1 to theoretically come close to the GE76 with the 3080m, and the Apple is going to be more expensive.

    Content creation is at a disadvantage on the AMD/NV hardware due to less optimization than on macOS, but it still suffices. The Windows laptop does both quite well and even beats the Apple in some content creation apps. The Apple does only one thing very well.

    It doesn't matter why; it's what you get today when you pay for the products.

    Just that the performance deltas aren't as huge. A 7970 released 1.5 years ahead of the consoles didn't perform all that well, and that thing probably hasn't had proper driver support since... no idea when. And again, you're looking at ports from AAA studios. Multiplatform games, which make up most of the titles, are mostly doing better performance-wise.

    Pretty sure it's the other way around. No idea why everyone on Windows would suddenly consider an Apple Mac; nothing's changed, content creation is better done on a Mac, just like before.

    Lol, didn't read the rest of your post after that comment.
     
  13. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,987
    Likes Received:
    3,529
    Location:
    Winfield, IN USA
    Pfft, lazy programmers! Everyone knows that.

    /me runs!
     
  14. Wesker

    Regular

    Joined:
    May 3, 2008
    Messages:
    299
    Likes Received:
    186
    Location:
    Oxford, UK
    :roll:
    Hilarious!

    What more can I say? You've missed the point of this thread entirely, and have struggled to comprehend the substance of the discussion here.

    Then why are you even in this thread? We're trying to discuss apps where consoles achieve performance equivalent to their PC counterparts, despite having far fewer resources to work with.
     
  15. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,090
    Ok, so lets keep Apple out of this discussion then.
     
  16. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,987
    Likes Received:
    3,529
    Location:
    Winfield, IN USA
    I'll go one further, let's keep Apple out of ALL discussions. :D
     
    PSman1700 and Kaotik like this.
  17. Flappy Pannus

    Regular

    Joined:
    Jul 4, 2016
    Messages:
    329
    Likes Received:
    567
    The most performant UMA in the desktop/notebook space is invariably going to come up when we're talking about the relative performance of a very similar architecture in the console space.
     
    Wesker likes this.
  18. Wesker

    Regular

    Joined:
    May 3, 2008
    Messages:
    299
    Likes Received:
    186
    Location:
    Oxford, UK
    This is Beyond3D not BeyondLTT.

    We should have zero tolerance for cringe “PCMR” smugness or memeing about Apple’s hardware offerings and performance.

    The parallel between extracting performance from consoles and PCs and extracting performance from Macs and PCs is pretty obvious and simple to appreciate.

    Well said.
     
  19. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    4,309
    Likes Received:
    1,105
    Location:
    35.1415,-90.056
    Apparently you haven't been on B3D long enough.

    Bullshit. Apple has never, ever prioritized gaming performance on their platforms. They have always aimed at the people who want to spend a lot more money for the spit-shine and polish of their ecosystem: the well-to-do "creative types" who can be funded by rich parents, companies who want to cater to their whims, educational subsidies (Apple did really well planting their seeds, so to speak, in the 80's and 90's), and "creative shops".

    Apple is not in the gaming space by any meaningful measure; their relationship to console gaming is limited to basically the fact that they're both digital devices. It has no bearing at all on why ports from consoles to PC (which share the same foundational CPU, GPU, memory, storage and network instruction sets and architectures) perform so very differently.

    Truth is, despite Digi's snark on the matter, the devs likely "don't care" enough to hand-tailor a game meant for console limitations to the PC world. Despite having near-identical architectural foundations, consoles and PCs still have interesting differences in the OS and related abstraction layers. It's akin to looking at workload performance of the same application when backed by an Oracle, Microsoft SQL Server, or IBM DB2 relational database platform. All three, at the end of the day, are modern relational databases based on the same foundational technologies. However, they perform very differently with different workloads, and simply "porting" code targeting one platform (Oracle) to another (IBM DB2) can result in very significant performance changes.

    Usually those can be tuned out with a lot of care, time, and attention, but often simply throwing more hardware at it (i.e. a "lazy" PC port) is just easier.
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Be aware also that comparisons to the 3080m are often not specific: the 3080m can be tuned by OEMs to a wide range of operating power limits, from 80W up to 200W, with wildly different performance profiles as a result. This is crucial information when doing comparisons between GPUs.
     
    digitalwanderer and PSman1700 like this.