only possible for consoles
Have seen such statements before somewhere
Sorry, but that one very old "mobile" benchmark is not enough to establish the grand, sweeping claim of 3080m equivalence. This benchmark specifically doesn't scale well with high-end GPUs at all. Worse, the Aztec test is one of several tests in a suite called GFXBench, all of which have dirt-simple graphics.
Here is the desktop 3080 Ti being barely faster than the mobile 3080 in the Manhattan test; worse yet, the desktop 3080 Ti is actually slower than the mobile 3080 in the T-Rex test!
https://gfxbench.com/compare.jsp?be...type2=dGPU&hwname2=NVIDIA+GeForce+RTX+3080+Ti
For comparing GPUs, the whole suite is as useless as it gets!
No, Geekbench (useless too, but consistent) and Shadow of the Tomb Raider, which has a Metal API implementation.
It had better be trading blows with a 3080m, seeing the price of a 32-core M1 Max (or Pro) device built on bleeding-edge 5nm technology.
That's probably because the mobile 3080 was designed to run stuff like this, not to excel at a mobile-phone benchmark like Aztec.
Have seen such statements before somewhere
It's not just the driver; virtually all of the GPU instruction set is exposed to developers. If performance parity meant not using console features like UMA, then proper optimization on PC wouldn't be possible no matter how hard they tried ...
Dreams by Media Molecule in particular would be very hard to get running on PC. Their point cloud renderer relies on using global ordered append to get Hilbert ordering and self-intersection-free dual contouring, all for free, on consoles, and there's no good alternative to mimic these effects on PC without killing your framerate even on the highest-end hardware available over there. Given the extremely high density of the point clouds, I would not be surprised to find out that they use ordered append to do culling as well ...
You can create entire rendering pipelines that are only possible on consoles. Truly obscure stuff ...
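For readers unfamiliar with the Hilbert-ordering part of the post above: sorting (or ordered-appending) points by their distance along a Hilbert curve groups spatially nearby points together. A minimal Python sketch of the classic iterative xy2d Hilbert indexing, assuming a power-of-two grid; the function name and the toy point set are my own illustration, not anything from Dreams' actual renderer:

```python
def hilbert_index(n, x, y):
    """Distance of cell (x, y) along the Hilbert curve covering an
    n x n grid (n a power of two). Classic iterative xy2d mapping."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so the curve's orientation stays consistent.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sorting points by Hilbert index yields a spatially coherent ordering --
# roughly what a console's global ordered append would hand the renderer
# for free, and what a PC port would have to reproduce with extra passes.
points = [(3, 1), (0, 0), (1, 1), (0, 1)]
ordered = sorted(points, key=lambda p: hilbert_index(4, *p))
```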
The devices are in the same price class
the MBP comes with other creature comforts that the PC laptop rivals do not have.
An Asus G15 laptop (15.6'') with a 130W 3070, 32GB of RAM, a 5800H and a 144Hz IPS display can be had for around 1500 USD here; you'd need the 32-core M1 Max to roughly match that GPU in theoretical benchmarks.
That's theoretical though; as Linus Tech Tips points out in the video in this thread, it's basically high-end video editing where you see the larger performance advantages, most likely due to the media accelerators/ProRes. Talking about this in a gaming thread, I think the M1 Pro/Max is quite a bad value. They are not in the same price class at all.
I'd say for creature comforts (what exactly do you mean by that, anyway?) they're not too far apart.
Yes, exactly, and the entire point I'm making -- and the point of this thread -- is to ask "why?".

The Apple device is more premium in build and execution, but they're enticing to different markets.
The gamer/all-round user will go the GE76 route, while the content creator will most likely go for the Apple device.
I don't think anyone would argue that you can achieve things in ways on consoles that can't be replicated on PC. And that in many cases the console implementation will be more efficient than doing it a different way on the PC. But that's very different to saying the end result can't be achieved on PC regardless of how much power you throw at it.
There's always another way to achieve something even if that way is less efficient and thus requires more horsepower. Case in point, Media Molecule do intend to bring Dreams to the PC at some point:
https://www.gamesindustry.biz/artic...-to-publish-games-to-other-devices-and-beyond
So the real question isn't whether something that can be achieved on console is impossible on PC; it's how much the additional flexibility afforded by console APIs and driver models can boost performance over the best alternatives for achieving the same end result on PC. That's where the optimization comes in - finding the best alternatives where they are needed. And that's where older architectures will receive little to no effort while newer ones will - both at the game level and the driver level.
Pretty sure there are many cases where newer PC APIs, shader models and newer hardware are way more efficient at doing certain things. And there are certainly many things that current PC hardware can do but consoles can't, so that's a double-edged sword.
It depends on what your interpretation of "another way" implies. You'd have a better chance at making a feasible software renderer (CPU) for Dreams than you would on a GPU, and current consumer-grade hardware wouldn't be good enough to be playable. We'd ideally want future high-end server-grade CPUs to be able to maintain a stable 30FPS at all times with PS4-equivalent settings. Standard PC GPUs are virtually paperweights either way you slice it, and it's hardly realistic to demand server-grade CPUs or make the game exclusive to a single hardware vendor (AMD) on PC. If your other potential workaround involves Dreams changing from a point cloud to a polygonal renderer (massive implications), then you've pretty much lost the argument, since the PC wasn't able to render the same content as seen on consoles ...
As for the link, it's a couple of years old now, and he shouldn't take this the wrong way, but the person they interviewed is an art director, so he's hardly an authority on appraising the technical feasibility of a PC port ...
Just because the common multiplatform developer finds that their own game's usage patterns are a friendly match for PC doesn't mean that usage patterns from other developers/games will always map well to PC. There used to be tons of content changes between console and PC releases of games back in the past, highlighting the irreconcilable differences in hardware design.
And of course, the M1 can actually achieve very similar performance and (silent) acoustics running on battery. That's not some single bullet-point when we're talking about a portable computing device, it's kind of the reason they exist!

You get a laptop that is comparable to the overall package of the MacBook Pro (display, keyboard, aluminium construction, webcam, speakers, battery life etc.), and you find that the M1 Max MBP will be fairly price competitive. I've explained this in the other thread too.
"Another way" means whatever solution can produce a similar output for a similar performance cost regardless of the methods used to achieve the output.
I do think you're too quick to dismiss performant point cloud rendering on modern PC GPUs as impossible though. Take this for example, which demonstrates rendering point clouds with compute shaders via various APIs, achieving up to 50 billion points per second on an RTX 3090.
https://arxiv.org/pdf/2104.07526.pdf
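The core trick in that paper can be illustrated without a GPU: each point packs its depth into the high bits of a 64-bit word and its color into the low bits, so a single per-pixel minimum resolves both visibility and color at once (the paper's compute shaders do this with a 64-bit atomicMin per framebuffer texel). A NumPy sketch of the packing idea; the function name and the 32/32 bit split are my own illustration, not the paper's exact layout:

```python
import numpy as np

def rasterize_points(xy, depth, color, w, h):
    """Resolve the nearest point per pixel with one 64-bit minimum,
    mimicking the compute-shader atomicMin trick: depth lives in the
    high 32 bits, color in the low 32, so min-by-packed-value selects
    the closest point's color as a side effect."""
    # Sentinel: pixels no point touches keep the all-ones value.
    fb = np.full(w * h, np.iinfo(np.uint64).max, dtype=np.uint64)
    packed = (depth.astype(np.uint64) << 32) | color.astype(np.uint64)
    idx = xy[:, 1] * w + xy[:, 0]
    # ufunc.at applies the minimum unbuffered, even with repeated indices,
    # playing the role of the GPU's atomic operation.
    np.minimum.at(fb, idx, packed)
    return (fb & 0xFFFFFFFF).astype(np.uint32).reshape(h, w)

# Two points land on pixel (0, 0); the closer one (depth 3) wins.
out = rasterize_points(np.array([[0, 0], [0, 0]]),
                       np.array([5, 3]), np.array([111, 222]), 2, 2)
```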
He's also co-founder of the company so if he says they have plans to bring it to PC (and other platforms) I'm inclined to believe he understands that it's not actually impossible.
I can understand and appreciate this, but ultimately, if the way it's done on console doesn't map very well to PC, then find a different way to do it that does map well to the PC for the PC version. And if you still lose performance in doing so, then that's the genuine console API/driver model advantage. The real question is: how much performance do you actually lose in such scenarios?
It doesn't even come with a webcam
The 144Hz panel version that you mentioned also has a trash screen: 62.5% sRGB coverage, which makes it entirely useless for professionals. I haven't read into whether the keyboard or trackpad are any good, but a priori I doubt they are.
speakers are garbage
RE: the LTT video: Yes, this pretty much echoes what I've been saying all along -- see my point later in this message.
Gaming performance on macOS (M1/Radeon/GeForce equipped Macs) is utter trash mostly because of subpar implementation (see my original post).
"why?".
This is basically what happens with console-focused AAA games and their transition to the PC. Developers just rely on the brute force available on PCs to run their games, but this comes with a huge performance tax.
Pretty sure "all around users" will go with a MacBook Pro/Air over a GE76.
We'd ideally want future high-end server-grade CPUs to be able to maintain a stable 30FPS at all times with PS4-equivalent settings.
Pfft, lazy programmers! Everyone knows that.

Why do some console ports run significantly slower on equivalent PC hardware?
It doesn't come with a NOTCH either.
It doesn't matter why; it's what you get today when you pay for the products.
We're trying to discuss apps where consoles have equivalent performance to their PC counterparts, despite having far less resources to work with.
I'll go one further, let's keep Apple out of ALL discussions.

Ok, so let's keep Apple out of this discussion then.
The most performant UMA in the desktop/notebook space is invariably going to come up when we're talking about the relative performance of a very similar architecture in the console space.

Ok, so let's keep Apple out of this discussion then.
This is Beyond3D, not BeyondLTT.

Ok, so let's keep Apple out of this discussion then.
The most performant UMA in the desktop/notebook space is invariably going to come up when we're talking about the relative performance of a very similar architecture in the console space.
Apparently you haven't been on B3D long enough.

We should have zero tolerance for cringe "PCMR" smugness or memeing about Apple's hardware offerings and performance.
Bullshit. Apple has never, ever prioritized gaming performance on their platforms. They have always aimed towards the people who want to spend a lot more money for the spit-shine and polish of their ecosystem, which are the well-to-do "creative types" who can be funded by rich parents, companies who want to cater to their whims, educational subsidy (Apple did really well planting their seeds, so to speak, in the 80's and 90's) and "creative shops".

The parallel between extracting performance from consoles and PCs and extracting performance from Macs and PCs is pretty obvious and simple to appreciate.
Beware also that comparisons to the 3080m are often not specific: the 3080m can be tuned by OEMs to a wide range of operating power limits, from 80W up to 200W, with wildly different performance profiles as a result. This is crucial information when doing comparisons between GPUs.

Fair point. Though I didn't claim it was a great benchmark. I just said that it had a ground-up Metal implementation.