What is a hardware compatible CPU? *spawn

I think it's worth taking into consideration that the conservative design of the PS4Pro doesn't necessarily mean that the same would need to hold true for the PS5, even if they intend to support backwards compatibility.

The PS4Pro absolutely HAD to run all PS4 games without a hitch, or else it would have failed as a mid-gen console. If the PS5 plays 95% of PS4 games, that's not as disastrous, because it's pretty much just a bonus.

Also, let's say, for example, the PS5 launches late 2019. If there's any way that they can update the PS4's development tools to ensure any game released from the end of 2018 is flawlessly compatible with the next console, I don't think most people would even notice the PS5's less than 100% backwards compatibility.
 
So developers using lower-level APIs like GNM aren't going to throw roadblocks into backwards compatibility if they move to new CPUs (Ryzen, Ryzen+, etc.) or GPU architectures that might appear in next-gen consoles (Navi's successor?)?
http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4
I suppose it depends how optimised those APIs are. If they are using machine-specific instructions, the API could fail on a different CPU. I can't see why they'd optimise to such a low level though. That's inviting future issues for gains no-one would probably notice on the whole. Is 3% faster CPU code worth making future BC harder? The GPU seems a different story though, because fast access to that is more direct to the hardware, so it seems. The moment you start abstracting the GPU, you introduce notable overhead. But maybe not so in the post-Vulkan/Metal/DX12 world?
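To make the "machine-specific instructions" point concrete, here's a minimal sketch (generic x86, not from any console SDK): a routine built around an SSE4.1 intrinsic. Compile it with that extension enabled and run it on a CPU without SSE4.1 and the process dies with an illegal-instruction fault rather than degrading gracefully - exactly the kind of breakage a lower-level API could bake in.

```cpp
// Hypothetical illustration, not console code: _mm_blend_ps encodes to an
// SSE4.1-only instruction (BLENDPS). Built with -msse4.1 and run on a CPU
// that lacks SSE4.1, this crashes with SIGILL instead of falling back.
#include <immintrin.h>
#include <cstdio>

int main() {
    __m128 a = _mm_set_ps(1.f, 2.f, 3.f, 4.f);
    __m128 b = _mm_set_ps(5.f, 6.f, 7.f, 8.f);
    __m128 c = _mm_blend_ps(a, b, 0x5);   // SSE4.1 instruction; no legacy encoding
    float out[4];
    _mm_storeu_ps(out, c);
    std::printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```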
 
With rendering it's possible there could be differences that manifested as slightly different end results or glitches of varying significance, while still operating to some level of acceptability.

I imagine that with a CPU it's far more likely that if anything at all is not supported, or does not produce identical results, then your game breaks.

....Although with GPU compute maybe they're as demanding for BC as CPUs these days.
 
I suppose it depends how optimised those APIs are. If they are using machine-specific instructions, the API could fail on a different CPU. I can't see why they'd optimise to such a low level though. That's inviting future issues for gains no-one would probably notice on the whole. Is 3% faster CPU code worth making future BC harder? The GPU seems a different story though, because fast access to that is more direct to the hardware, so it seems. The moment you start abstracting the GPU, you introduce notable overhead. But maybe not so in the post-Vulkan/Metal/DX12 world?
So the use of emulation for OG Xbox backwards compatibility on Xbox One wasn't necessary? They could have gone the route of implementing some legacy driver and legacy API support?
 
OG Xbox was probably addressed at a low level, breaking the APIs - the abstraction certainly wasn't to the extent of the PC's, which afforded the console lower overhead and better relative performance. I think there are also copyright issues with nVidia when it comes to the drivers. I don't know if Jaguar could run the same P3-target x86 code. I don't see why not in principle unless, again, devs were writing low-level assembly specific to the CPU.
 
Historically backwards compatibility has been a highly desired feature across console manufacturers. They have even been willing to increase BOM costs by adding chips from the previous generation of consoles to enable it - be it Nintendo's Wii and Wii U, or their handhelds (Game Boy to Game Boy Advance, Game Boy Advance to DS, etc.), or the PS2 having built-in PS1 hardware to maximize backwards compatibility (ditto for the PS3).
However, nearly every time, regardless of whether they use the same architectural family with the same instruction set (e.g. MIPS for both PS1 & PS2, or ARM for both the Game Boy Advance and Nintendo DS), it's implemented via either hardware emulation or having the actual previous-gen architecture in the system (or a combination of the two).

In the absence of very fat high-level APIs (ala Windows' DirectX), it seems that you don't need to go as low as assembly language to throw roadblocks into backwards compatibility across a specific architectural family. Even these "low-level" console APIs (despite being nowhere near as low as assembly) throw roadblocks into backwards compatibility, necessitating emulation or previous-gen architecture on the silicon die or on separate chips.
 
Consoles also historically used weird custom hardware that differed radically between generations.
But in the instances where they don't ala MIPS on PS1 & PS2 or ARM on Nintendo handhelds, they still have built in processor architecture from the previous generation to enable backwards compatibility. Those not only use the same instruction set between generations but are the same architectural family.

So I wonder about this assumption that if they remain on x86 for next gen, an identical instruction set between PS4/X1 and PS5/X2 should mean no problem implementing almost full backwards compat if Ryzen/Ryzen+/Ryzen2 are chosen.
 
But in the instances where they don't ala MIPS on PS1 & PS2 or ARM on Nintendo handhelds, they still have built in processor architecture from the previous generation to enable backwards compatibility.
It's not just the processor architecture but the system architecture. I'm confident that the later CPUs had no trouble with the old CPU code, but devs would rely on things like timings back then. Any change in the components - RAM, system controllers, GPU, buffer - could generate a fault. What we don't have for comparison is a similar console to see how compatible it is and where the problems lie. Plus history doesn't tell us about the present and future if the way the hardware is used changes.
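As a rough illustration of the timing point (completely made-up numbers, not from any real console): old code sometimes "waited" by spinning a loop tuned to a known CPU and memory speed, so any change in those parts changes the wait.

```cpp
// Toy sketch of a hardware-timed delay loop. The iteration count is hand-tuned
// against one specific CPU/RAM combination; on faster or differently-timed
// hardware the same binary waits for a shorter (wrong) period.
#include <cstdint>

volatile uint32_t sink;   // volatile so the compiler can't delete the loop

void wait_roughly_one_millisecond() {
    const uint32_t kIterations = 33000;   // assumed figure for the original machine only
    for (uint32_t i = 0; i < kIterations; ++i) {
        sink = i;   // memory traffic makes the loop sensitive to RAM timings too
    }
}

int main() {
    wait_roughly_one_millisecond();
    return 0;
}
```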

Take home point being that previous gens tell us very little without confirmation that the problems from then still exist now. Hence the discussion is more on the theory of CPU compatibility.

I guess on the flip side, there have been cases of people taking hardware and swapping out the CPU for a more powerful one. The Amiga happily took 68000 replacements. There's a 65816 CPU alternative for the C64, it seems. There are compatibility issues with low-level code, but code that followed the system rules and didn't go beyond the official boundaries generally works.
 
Why is that interesting for BC? RDTSC nowadays often runs from a different clock rather than counting CPU cycles. If the CPU is set to 1.6 GHz regardless in the PS4, it'll be the same in the 4Pro, being the same processor. Or is that indicative of an internal working? Not sure what AMD's RDTSC is driven by.
 
I suppose it depends how optimised those APIs are. If they are using machine-specific instructions, the API could fail on a different CPU.

The API needs to stay the same, but not the API's code, which can be bespoke for each unique piece of hardware (PS4, PS4 Pro, PS5, etc.). CPU instructions used by the OS/API code on PS4/Pro that are missing on PS5 would be replaced by whatever the new equivalents are. CPU instructions used by game code can be retargeted using a JIT approach or modified by the loader.
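A loose sketch of that idea (the names are invented for illustration, not Sony's): the header the game compiles against never changes, while the platform swaps the implementation behind it per hardware revision - or replaces it entirely with a JIT or loader pass on a future machine.

```cpp
// Illustrative only: a stable, game-facing API with bespoke per-hardware backends.
#include <cstdio>

// --- stable API the game is compiled against (never changes between revisions) ---
struct GfxApi {
    void (*submit_draw)(int vertex_count);
};

// --- hypothetical backends, one per hardware revision ---
static void submit_draw_base (int n) { std::printf("base GPU path, %d verts\n", n); }
static void submit_draw_newer(int n) { std::printf("newer GPU path, %d verts\n", n); }

// The platform, not the game, decides which backend sits behind the API.
GfxApi make_api(bool newer_hardware) {
    return GfxApi{ newer_hardware ? submit_draw_newer : submit_draw_base };
}

int main() {
    GfxApi api = make_api(/*newer_hardware=*/true);
    api.submit_draw(3);   // game code is identical either way
    return 0;
}
```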

There are a lot of solutions for dealing with ever-changing CPU instruction sets in x86. GPUs should be harder, but technologies developed by Chris Lattner during his time at Apple working on LLVM, particularly dynamic compilation from an intermediate representation, could make retargeting code for different GPU hardware much easier - if Sony are leveraging this. The odd PS4 Pro APU graphics hardware suggests not, but their software stack may just not have been ready for a 2016 launch, and LLVM is part of the PS4 tool chain.

LLVM is utterly, utterly awesome :yes:
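To show what retargeting from an intermediate form buys you, here's a deliberately toy sketch - not LLVM, not GNM, just a made-up three-op IR lowered to two imaginary targets. The game ships the portable form; the platform owns the final lowering step, so a new GPU (or CPU) only needs a new back end.

```cpp
// Deliberately toy sketch: a three-address "IR" lowered to two imaginary targets.
// Nothing here is LLVM or a real GPU ISA; it only illustrates why shipping an
// intermediate form and lowering per machine makes hardware changes survivable.
#include <cstdio>
#include <string>
#include <vector>

enum class Op { LoadConst, Add, Mul, Store };
struct Instr { Op op; int a, b, dst; };   // dst <- a op b (or const load / store)

std::string lower(const std::vector<Instr>& ir, bool newer_target) {
    std::string out;
    char buf[64];
    for (const Instr& i : ir) {
        switch (i.op) {
            case Op::LoadConst:
                std::snprintf(buf, sizeof buf, "mov  r%d, #%d\n", i.dst, i.a);
                break;
            case Op::Add:
                std::snprintf(buf, sizeof buf, "add  r%d, r%d, r%d\n", i.dst, i.a, i.b);
                break;
            case Op::Mul: {
                // Pretend the two targets spell multiplication differently.
                const char* fmt = newer_target ? "mul  r%d, r%d, r%d\n"
                                               : "mull r%d, r%d, r%d\n";
                std::snprintf(buf, sizeof buf, fmt, i.dst, i.a, i.b);
                break;
            }
            case Op::Store:
                std::snprintf(buf, sizeof buf, "st   [r%d], r%d\n", i.b, i.a);
                break;
        }
        out += buf;
    }
    return out;
}

int main() {
    std::vector<Instr> program = {
        {Op::LoadConst, 3, 0, 0},   // r0 = 3
        {Op::LoadConst, 4, 0, 1},   // r1 = 4
        {Op::Mul,       0, 1, 2},   // r2 = r0 * r1
        {Op::Store,     2, 5, 0},   // [r5] = r2
    };
    std::printf("old target:\n%s\nnew target:\n%s",
                lower(program, false).c_str(), lower(program, true).c_str());
    return 0;
}
```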

I guess on the flip side, there have been cases of people taking hardware and swapping out the CPU for a more powerful one. The Amiga happily took 68000 replacements.

Not a good example. The 68000 was a 16-bit processor with 32-bit internals and a 24-bit address bus, which is why adding a 68020/30/40/60 was not just a case of slotting in a new chip but actually adding a complete new board. All upgrades to the original 16-bit platforms were 32-bit systems sitting on a 32-bit Zorro slot (the Amiga's PCI equivalent) with their own 32-bit local memory. Lots of 68000 code broke on the 68020+ as well because the cache mechanism changed. Self-modifying code that worked great on the 68000 broke on later processors, and some instructions used more registers than the older chip. Frankly it was amazing that much ran at all.
 
OG Xbox was probably addressed at a low level, breaking the APIs - the abstraction certainly wasn't to the extent of the PC's, which afforded the console lower overhead and better relative performance. I think there are also copyright issues with nVidia when it comes to the drivers. I don't know if Jaguar could run the same P3-target x86 code. I don't see why not in principle unless, again, devs were writing low-level assembly specific to the CPU.
Jaguar can do everything a P3 could (and adds SSE2 and later extensions on top of the P3's SSE). It's only a matter of speed. You can't optimise your code so deeply for a P3 that it couldn't run on Jaguar (barring bugs), because even assembly instructions are translated into micro-ops by the CPU itself. It's like an API in the OS: the API shouldn't change, just get expanded. What can change is the speed of the calculation. There can be some variations (bugs), but those can most of the time be resolved via a firmware/BIOS update.
That's why MS has DirectX (Direct2D/3D) for GPUs and Sony has GNM (and so on): there is no x86-style common ISA at the hardware level for GPUs, so the driver decides how the commands/data are sent to the GPU.
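For what it's worth, the "superset" claim is easy to check from user code (GCC/Clang builtins on generic x86, nothing console-specific): a Jaguar-class CPU reports the P3-era SSE bit alongside the newer extensions, so P3-targeted code still decodes; it just isn't the fastest way to use the chip.

```cpp
// Generic x86 feature check, illustrative only: SSE (what a Pentium III binary
// needs) is reported by every later x86 core, with SSE2/SSE4.2/AVX stacked on
// top on Jaguar-class parts.
#include <cstdio>

int main() {
    __builtin_cpu_init();
    std::printf("sse    : %d\n", __builtin_cpu_supports("sse")    != 0);
    std::printf("sse2   : %d\n", __builtin_cpu_supports("sse2")   != 0);
    std::printf("sse4.2 : %d\n", __builtin_cpu_supports("sse4.2") != 0);
    std::printf("avx    : %d\n", __builtin_cpu_supports("avx")    != 0);
    return 0;
}
```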
 
Why is that interesting for BC? RDTSC nowadays often runs from a different clock rather than counting CPU cycles. If the CPU is set to 1.6 GHz regardless in the PS4, it'll be the same in the 4Pro, being the same processor. Or is that indicative of an internal working? Not sure what AMD's RDTSC is driven by.
At this point, modern hardware purposefully keeps RDTSC at some fixed reference separate from the CPU clock. The Pentium M is one generation that had variable clocks but no fixed RDTSC tick rate, which caused problems. Later generations moved to methods that did not increment based on a core's clock.

I think AMD's modern chips use a northbridge-based fixed increment. Not so coincidentally, AMD's introduction of HSA had a requirement for a globally visible and monotonically increasing timing method, which is one of the few elements that HSA hardware-compliant architectures would need to have even as they discard many of the software or platform elements of HSA.
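For the curious, the property being discussed is queryable from user space on any modern x86 (GCC/Clang shown here; nothing PS4-specific): CPUID leaf 0x80000007 reports whether the TSC is "invariant", i.e. ticks at a fixed rate regardless of what the core clock is doing.

```cpp
// Sketch of the invariant-TSC check the posts above describe. EDX bit 8 of CPUID
// leaf 0x80000007 is "TscInvariant" on AMD / "Invariant TSC" on Intel. On a
// console this check would be moot (the answer is fixed by the hardware), but it
// shows exactly which property the BC discussion hinges on.
#include <cpuid.h>
#include <x86intrin.h>
#include <cstdio>

int main() {
    unsigned eax, ebx, ecx, edx;
    bool invariant = false;
    if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
        invariant = (edx >> 8) & 1;
    }
    unsigned long long t0 = __rdtsc();
    unsigned long long t1 = __rdtsc();
    std::printf("invariant TSC: %s, back-to-back RDTSC delta: %llu ticks\n",
                invariant ? "yes" : "no", t1 - t0);
    return 0;
}
```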

Jaguar can do everything a P3 could (and adds SSE2 and later extensions on top of the P3's SSE). It's only a matter of speed. You can't optimise your code so deeply for a P3 that it couldn't run on Jaguar (barring bugs), because even assembly instructions are translated into micro-ops by the CPU itself. It's like an API in the OS: the API shouldn't change, just get expanded. What can change is the speed of the calculation. There can be some variations (bugs), but those can most of the time be resolved via a firmware/BIOS update.
That's why MS has DirectX (Direct2D/3D) for GPUs and Sony has GNM (and so on): there is no x86-style common ISA at the hardware level for GPUs, so the driver decides how the commands/data are sent to the GPU.

I'm replying somewhat late after watching the DigitalFoundry video on Xbox One's backwards compatibility (referenced in https://forum.beyond3d.com/posts/2013012/). Without a transcript, and pending a full article, I interpreted some statements on original Xbox compatibility to mean that there's translation of the x86 binaries as well.

One reason I thought about why this would happen--besides perhaps getting more optimal code to compensate for other overhead--is that while it's true that Jaguar should be able to run any code that the P3 could, x86-64 handles 32-bit compatibility through exclusive, mutually incompatible hardware modes. x86 is not forward-compatible with x86-64 at an architectural level, and the newer architecture was not designed to let the two operate concurrently in the same context.

A definition of backwards compatibility that doesn't include active platform intervention wouldn't work here. x86-64 repurposes a number of opcodes (the byte values used for REX prefixes are single-byte INC/DEC instructions in 32-bit mode), drops some of the P3's own legacy backwards-compatibility features (unclear how often the Xbox code would have hit those), and assumes system-level changes, since its addressing and conventions do not align completely with 32-bit mode.
If it's just a case of WOW64 (albeit even this is somewhat weaker than the compatibility some consoles have had) or dedicating a system to one mode or the other, that would be fine. However, neither console vendor is likely to recode their platforms, create an even more complex system context, or lose security/modern features to switch Jaguar down into 32-bit mode.
Beyond that, it may also be that it cannot readily do so since there's a good chance that the ongoing translation and emulation activity for the GPU's compatibility is using 64-bit mode (memory addressing, protection, VM, instruction/hardware emulation, extra registers, modern extensions, etc.).
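A concrete way to see the opcode repurposing mentioned above (a toy decoder for illustration, not anyone's emulator): the byte values 0x40-0x4F mean INC/DEC of a 32-bit register in legacy/compatibility mode, but are REX prefixes in 64-bit mode, so raw bytes can't be interpreted without knowing which mode you're decoding for.

```cpp
// Toy decoder, register names only, not a full disassembler: the same byte has
// two meanings depending on the CPU mode, which is why a loader/JIT must know
// the mode before it can translate or patch anything.
#include <cstdint>
#include <cstdio>

static const char* kReg32[8] = {"eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"};

void describe(uint8_t byte) {
    if (byte < 0x40 || byte > 0x4F) {
        std::printf("0x%02X: (outside the 0x40-0x4F range)\n", byte);
        return;
    }
    // 32-bit mode meaning: 0x40+r = INC r32, 0x48+r = DEC r32.
    const char* op32 = (byte < 0x48) ? "inc" : "dec";
    const char* reg  = kReg32[byte & 0x7];
    // 64-bit mode meaning: a REX prefix carrying the W/R/X/B bits.
    std::printf("0x%02X: 32-bit mode -> %s %s | 64-bit mode -> REX prefix (W=%d R=%d X=%d B=%d)\n",
                byte, op32, reg,
                (byte >> 3) & 1, (byte >> 2) & 1, (byte >> 1) & 1, byte & 1);
}

int main() {
    describe(0x40);   // inc eax  vs plain REX
    describe(0x48);   // dec eax  vs REX.W (64-bit operand size)
    describe(0x45);   // inc ebp  vs REX.R + REX.B
    return 0;
}
```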
 
I think it's worth taking into consideration that the conservative design of the PS4Pro doesn't necessarily mean that the same would need to hold true for the PS5, even if they intend to support backwards compatibility.

The PS4Pro absolutely HAD to run all PS4 games without a hitch, or else it would have failed as a mid-gen console. If the PS5 plays 95% of PS4 games, that's not as disastrous, because it's pretty much just a bonus.

Also, let's say, for example, the PS5 launches late 2019. If there's any way that they can update the PS4's development tools to ensure any game released from the end of 2018 is flawlessly compatible with the next console, I don't think most people would even notice the PS5's less than 100% backwards compatibility.

The PS2 Slim wasn't fully compatible with all older PS2 games. Hardly anybody noticed. You don't need flawless BC, just good enough that it's not a problem for the vast majority of your users.
 
Yeah, I was right all along: the custom, (somewhat) lower-level APIs of the consoles actually impede back compat.

Takeaway: the lower-level custom APIs of the consoles, while a boon for performance, impede backwards compatibility. While PC gamers have lamented the PC's high-level DX APIs, they benefit from superior backwards compat because of them.

 
The only way I could absorb information was by closing my eyes.

With my eyes open all I can see are his hands waving all over the place. :yep2:
 