Why do some console ports run significantly slower on equivalent PC hardware?

Usually those can be tuned out, with a lot of care and time and attention -- but often simply throwing more hardware at it (i.e., a "lazy" PC port) is just easier.
It's not "just easier", it's the general expectation from PC by default.
Can you make a game which runs on a PS4 run just as fast on similarly specced PC hardware? Sure.
Why though? Who would play your game on a Jaguar 8C and a 7850 in 2021 on PC?
Hence the lack of push in this direction.
 
The most performant UMA design in the desktop/notebook space is invariably going to come up when we're talking about the relative performance of a very similar architecture in the console space.

The consoles and PCs have more in common with each other, architecturally, than Apple has with either. The only thing that's somewhat similar between the M1 and the AMD APUs in consoles is the shared memory, which has its disadvantages as well compared to a dedicated memory setup. There are AMD APUs for PCs as well ;)
The M1 is an Apple custom ARM architecture for both CPU and GPU, with options for up to 64GB of RAM on a 5nm node.

We should have zero tolerance for cringe “PCMR” smugness or memeing about Apple’s hardware offerings and performance.

This topic was created around the question of console AAA ports to PC, not Apple's architecture or how 'wrong' LTT or others are about its performance. So let's stop derailing this topic by polluting it with Apple vs PC/console stuff.
 
Apparently you haven't been on B3D long enough.

I'll respond literally: Been here for over a decade. Historically, these forums have seldom hosted knuckle-dragger levels of discussion about different hardware vendors.

Bullshit.
Relax.

Apple has never, ever prioritized gaming performance on their platforms.

Which is what I said before -- i.e., they don't even support Vulkan on macOS.

They do care about gaming on iOS, however, which generates an absurd amount of money.

They have always aimed at the people who want to spend a lot more money for the spit-shine and polish of their ecosystem: the well-to-do "creative types" who can be funded by rich parents, companies who want to cater to their whims, educational subsidies (Apple did really well planting their seeds, so to speak, in the '80s and '90s), and "creative shops".

There's some truth to this.

A lot of professionals, certainly in my industry, just prefer them for the Terminal, better OS stability, and other features (better screen, battery, keyboard and so on).


Apple is not in the gaming space by any meaningful measure; their relationship to console gaming is limited to basically the fact that they're both digital devices. It has no bearing at all on why ports from consoles to PC (which share the same foundational CPU, GPU, memory, storage and network instruction sets and architectures) perform so very differently.

I'm not arguing that there is a causal relationship between Consoles-PCs and PCs-Macs. Put very simply: I am saying that the reasoning you give for why games on the PC lose so much performance for a given set of hardware, going from console -> PC:

Truth is, despite Digi's snark on the matter, the devs likely "don't care" enough to hand-tailor a game meant for console limitations into the PC world. Despite having near-identical architectural foundations, consoles and PCs still have interesting differences in the OS and related abstraction layers. It's akin to looking at workload performance of the same application when backed by either an Oracle, Microsoft SQL Server, or IBM DB2 relational database platform. All three, at the end of the day, are modern relational databases based on the same foundational technologies. However, they perform very differently with different workloads, and simply "porting" code targeting one platform (Oracle) to another (IBM DB2) can result in very significant performance changes.

Usually those can be tuned out, with a lot of care and time and attention -- but often simply throwing more hardware at it (i.e., a "lazy" PC port) is just easier.

is akin to the reason why performance tanks on Apple's Mac platform -- even when AMD and Nvidia GPUs were in Macs.

There are various software impediments, some more fundamental than others (i.e., lack of common API, emulation, translation layers etc), and there is a general lack of optimisation by the developer when porting stuff over from the PC to the Mac.
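
To make that abstraction-layer point a bit more concrete, here is a minimal C++ sketch of one of the costs a port inherits. Every name in it is invented for illustration (it is not any real console SDK or PC graphics API): on a UMA console the CPU can often write straight into GPU-visible memory, while on a PC with a discrete GPU the same per-frame update goes through a staging copy plus a transfer across PCIe that has to be scheduled and synchronised.

Code:
// Hypothetical sketch -- the structs below are invented for illustration, not a real API.
#include <cstddef>
#include <cstring>
#include <utility>
#include <vector>

struct FrameConstants { float viewProj[16]; float time; };

// Console-style UMA: CPU and GPU share one memory pool, so the CPU can write
// directly into GPU-visible memory. One write, no extra copy or transfer.
void update_console(FrameConstants* gpu_visible, const FrameConstants& src) {
    std::memcpy(gpu_visible, &src, sizeof(src));
}

// Stand-in for a discrete GPU's transfer queue: uploads are recorded here and
// must complete (fence/barrier) before the GPU is allowed to read the data.
struct CopyQueue {
    std::vector<std::pair<const void*, std::size_t>> pending;
    void enqueue_upload(const void* src, std::size_t bytes) { pending.emplace_back(src, bytes); }
};

// PC discrete-GPU style: copy into CPU-side staging memory first, then schedule
// a second hop across PCIe into VRAM -- extra copies, extra synchronisation.
void update_pc(FrameConstants& staging, CopyQueue& queue, const FrameConstants& src) {
    staging = src;
    queue.enqueue_upload(&staging, sizeof(staging));
}

int main() {
    FrameConstants src{}, uma_memory{}, staging{};
    CopyQueue transfer_queue;
    update_console(&uma_memory, src);
    update_pc(staging, transfer_queue, src);
    return 0;
}

On top of that, a console title only ever targets one known GPU, so data layouts and shaders can be baked for it in advance, whereas the PC path also has to stay conservative enough to work across many vendors and driver versions.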

Be aware also that comparisons to the 3080m are often not specific: the 3080m can be tuned by OEMs to a wide range of operating power limits, from 80W up to 200W, with wildly different performance profiles as a result. This is crucial information when doing comparisons between GPUs.

Absolutely agree.
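
As a practical aside on that point: before comparing anyone's 3080m numbers, it's worth checking what power limit the specific laptop is actually configured to. A minimal sketch, assuming the NVIDIA driver's nvidia-smi tool is installed and on the PATH:

Code:
#include <cstdio>

// Prints the GPU name plus the currently configured and maximum board power
// limits, using nvidia-smi's CSV query mode (nvidia-smi ships with the driver).
int main() {
    FILE* pipe = popen(
        "nvidia-smi --query-gpu=name,power.limit,power.max_limit --format=csv", "r");
    if (!pipe) {
        std::fprintf(stderr, "could not run nvidia-smi\n");
        return 1;
    }
    char line[512];
    while (std::fgets(line, sizeof(line), pipe))
        std::fputs(line, stdout); // e.g. "NVIDIA GeForce RTX 3080 Laptop GPU, 105.00 W, 165.00 W" (values vary by OEM)
    pclose(pipe);
    return 0;
}

(On Windows the same query works from a command prompt; the popen call above is the POSIX spelling.)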
 
This topic was created around the question of console AAA ports to PC, not Apple's architecture or how 'wrong' LTT or others are about its performance. So let's stop derailing this topic by polluting it with Apple vs PC/console stuff.

The topic is about why performance drops, for a given set of hardware, when games are on the PC as opposed to on a console. There is some parallel between why performance drops, for a given set of hardware, when games are on the Mac as opposed to the PC.

The reasons provided so far in the thread -- lack of optimisation, OS overhead, driver issues, and so on -- are relevant to both scenarios.
 
The topic is about why performance drops, for a given set of hardware, when games are on the PC as opposed to on a console. There is some parallel between why performance drops, for a given set of hardware, when games are on the Mac as opposed to the PC.

The reasons provided so far in the thread -- lack of optimisation, OS overhead, driver issues, and so on -- are relevant to both scenarios.

Mentioning it once or so is a different thing from the Apple vs PC/console warring we landed in. Most of the discussion is now centered around Apple... The OP never even mentioned Apple to begin with.
The topic is about console ports to PC hardware, more specifically Sony AAA ports as per the OP. Apple M1 hardware has nothing to do with any of this; discuss the M1 chip in the M1 thread in the mobile section.

On topic, the huge deficit we see in HZD and other Sony AAA titles on PC has more to do with the fact that they are ports, designed for the PS4 first and then ported. It would be the same the other way around. And again, the 7970/7850 are quite a bit older than the PS4.
 
I'll respond literally: Been here for over a decade. Historically, these forums have seldom hosted knuckle-dragger levels of discussion about different hardware vendors.
Apparently you haven't been on B3D long enough, and/or haven't paid enough attention. You're welcome to refer to my own join date and post count, which both predate and significantly exceed yours ;)

The reality is as PSman just mentioned: no part of this thread had any need to devolve into (ahem) Apple vs PC. So let's stop talking about it; it has literally zero bearing on the issue at hand, even if there were once parallels.
 
With that out of the way, this thread seems to be very specific in what it's asking from the start. It's fine to mention that OS and API differences can play a role. However, if you want to delve deeper into why games perform differently between Apple and Windows OSes, or even between different versions of Windows OSes, or between Apple and PC Hardware, please create a new thread.
 
At the same time, a UMA Ryzen 4800U with a minuscule power budget shared between CPU and iGPU, roughly 20% of the 280X's memory bandwidth (also shared between CPU and iGPU), and on-paper compute performance less than half that of the 280X turns in a pretty good showing of 34fps average at 720p.


Meanwhile, TechPowerUp's overall performance comparison puts the 280X at 264% of Vega 8, and that's being charitable to the 4800U, as it's the most power-constrained and lowest-clocked of all the Renoir variants that share the same specs page.

https://www.techpowerup.com/gpu-specs/radeon-graphics-512sp.c3587
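
For reference, the rough arithmetic behind "roughly 20% of the bandwidth" and "less than half the compute", assuming the 4800U is paired with dual-channel DDR4-3200 (LPDDR4X-4266 configurations land a little higher, still roughly a fifth of the 280X):

Code:
#include <cstdio>

int main() {
    // Published specs: R9 280X = 384-bit GDDR5 @ 6 Gbps, 2048 shaders @ ~1.0 GHz boost.
    // Ryzen 7 4800U (Vega 8) = 2ch DDR4-3200 shared with the CPU, 512 shaders @ 1.75 GHz.
    const double bw_280x_gbs  = 288.0;                    // GB/s
    const double bw_4800u_gbs = 51.2;                     // GB/s (assumed DDR4-3200)
    const double tf_280x      = 2048 * 2 * 1.00e9 / 1e12; // ~4.1 TFLOPS FP32
    const double tf_4800u     =  512 * 2 * 1.75e9 / 1e12; // ~1.8 TFLOPS FP32

    std::printf("bandwidth: %.0f%% of the 280X\n", 100.0 * bw_4800u_gbs / bw_280x_gbs); // ~18%
    std::printf("compute:   %.0f%% of the 280X\n", 100.0 * tf_4800u / tf_280x);         // ~44%
    return 0;
}

Both claims check out on paper, which makes the 34 fps showing look even better for the APU.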
 
At the same time, a UMA Ryzen 4800U with a minuscule power budget shared between CPU and iGPU, roughly 20% of the 280X's memory bandwidth (also shared between CPU and iGPU), and on-paper compute performance less than half that of the 280X turns in a pretty good showing of 34fps average at 720p.


Meanwhile, TechPowerUp's overall performance comparison puts the 280X at 264% of Vega 8, and that's being charitable to the 4800U, as it's the most power-constrained and lowest-clocked of all the Renoir variants that share the same specs page.

https://www.techpowerup.com/gpu-specs/radeon-graphics-512sp.c3587

Which probably means driver/optimization is one large issue here, not what the hardware is actually capable of. Again, as I've said many posts ago, the performance deficit largely comes down to it being a port that wasn't optimized for PC.
Cross-platform games do show smaller performance deltas between equivalent hardware.
 
Something interesting to look into at some point, now that we're getting lower-level API implementations (DX12, Vulkan) spanning more hardware generations, is how that will affect performance. In theory the optimization burden becomes both more architecture-specific and more the developer's responsibility than under, say, DX11. A compounding factor would be a possible, relatively dramatic shift towards MCM GPUs.

The practical impact would potentially hit those who either buy relatively high-end and hold onto their GPUs longer, or those who buy relatively low-end and wait to play older titles.
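
To put one concrete face on "the optimization burden moves to the developers": under the explicit APIs the application itself has to choose which of a vendor's memory heaps to allocate from, something the DX11 driver used to decide. A minimal Vulkan sketch of that selection (the hand-filled properties in main() are just a toy; a real application queries them from the device, and the heap layout differs per vendor and per generation):

Code:
#include <vulkan/vulkan.h>
#include <cstdint>

// Pick a memory type that is allowed by the resource (allowed_type_bits comes from
// vkGetBufferMemoryRequirements / vkGetImageMemoryRequirements) and that has the
// properties we want (e.g. DEVICE_LOCAL for VRAM, HOST_VISIBLE for upload paths).
uint32_t pick_memory_type(const VkPhysicalDeviceMemoryProperties& props,
                          uint32_t allowed_type_bits,
                          VkMemoryPropertyFlags wanted) {
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        const bool allowed = (allowed_type_bits & (1u << i)) != 0;
        const bool matches = (props.memoryTypes[i].propertyFlags & wanted) == wanted;
        if (allowed && matches)
            return i;
    }
    return UINT32_MAX; // nothing suitable: caller must fall back (e.g. host-visible memory)
}

int main() {
    // Toy, hand-filled properties just to exercise the function; a real app gets
    // these from vkGetPhysicalDeviceMemoryProperties.
    VkPhysicalDeviceMemoryProperties props{};
    props.memoryTypeCount = 2;
    props.memoryTypes[0].propertyFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    props.memoryTypes[1].propertyFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
                                         VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    const uint32_t idx = pick_memory_type(props, 0b11u, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
    return idx == 0 ? 0 : 1;
}

A heuristic like this that happens to pick well on today's GPUs may pick poorly on a future memory layout, which is exactly the aging problem raised above.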
 
The 280X is still the same arch as the 7970; you need to move up to a 285 or 380X for an updated one.
Moreover, it's constantly running with its VRAM saturated. Look at the 2nd video in the posting below, where a 4800U with integrated Vega is doing 34 fps at 720p: it's in the 4.4 GByte range most of the time.

With my 7970 GHz Edition and 6 GByte I got the following at 1080p, favor quality (that's one above 'original' and second only to ultimate quality): 31 / 37 / 41 fps (min. / 95% / avg.), score 7473. Didn't run 720p and low details, sorry.

edit: Ran it at home on a lowly i7-4770K at 720p with "favor performance": 39/57/65 fps, score 11673. Will be doing a little gaming after X-mas.
 
Moreover, it's constantly running with its VRAM saturated. Look at the 2nd video in the posting below, where a 4800U with integrated Vega is doing 34 fps at 720p: it's in the 4.4 GByte range most of the time.

With my 7970 GHz Edition and 6 GByte I got the following at 1080p, favor quality (that's one above 'original' and second only to ultimate quality): 31 / 37 / 41 fps (min. / 95% / avg.), score 7473. Didn't run 720p and low details, sorry.

It ran quite well this summer when I tested on a 2012 7970 6GB as well; I wonder if the game's badly optimized in the VRAM category?
 
Digital Foundry tested God of War on PS4 and compared it to an R9 270X GPU at 800MHz (PS4-equivalent) and at stock; both configurations performed considerably worse than the PS4, even though the 270X has higher specs than the PS4 to begin with. DF thinks drivers are a significant factor in this outcome, as optimizations stop for older PC hardware after a certain time.
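
For context on "higher specs to begin with", a quick worked comparison using the commonly cited figures:

Code:
#include <cstdio>

int main() {
    // FP32 throughput = shader count x 2 ops (FMA) x clock.
    const double ps4_tf         = 1152 * 2 * 0.800e9 / 1e12; // 18 CUs @ 800 MHz  -> ~1.84 TFLOPS
    const double r270x_800_tf   = 1280 * 2 * 0.800e9 / 1e12; // 20 CUs @ 800 MHz  -> ~2.05 TFLOPS
    const double r270x_stock_tf = 1280 * 2 * 1.050e9 / 1e12; // 20 CUs @ 1050 MHz -> ~2.69 TFLOPS

    std::printf("PS4: %.2f TF, 270X @ 800 MHz: %.2f TF, 270X stock: %.2f TF\n",
                ps4_tf, r270x_800_tf, r270x_stock_tf);
    return 0;
}

Even downclocked to PS4 clocks the 270X still carries two extra CUs and roughly an 11% compute advantage on paper, which makes the gap DF measured easier to pin on drivers and general PC overhead than on the raw hardware.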

 