What is a hardware compatible CPU? *spawn

Very important semantics. What constitutes a new CPU? And what constitutes BC? Most of us are considering a CPU as hardware BC when it can run the same code.

That's a middleware software issue, not a CPU issue. I can find only one reference to Jag code not running on Zen and that's a highly optimised benchmark. https://www.techpowerup.com/231536/amd-ryzen-machine-crashes-to-a-sequence-of-fma3-instructions
It is a new cpu, but it could also be considered the same cpu with added enhancements. Not very important semantics unless you're dying to be right.

Good on you for proving yourself wrong. You should now consider that some developers who are trying to eke out every bit of juice they can from the PS4 (for example) with a first-party budget are using highly optimized code as well, and not with a "my opinion is far more likely than yours until you show me different" arrogance.

You're assuming a lot, which makes for a poor technical discussion, hence me moving this to its own thread. Why are you assuming devs are using to-the-metal CPU code? Is there any evidence supporting that, or are you guessing? Most code is written with high-level development languages. There's very little reason to use low-level CPU code, especially for anything multiplatform. I won't say it's impossible - Naughty Dog might well be using Intrinsics as they are platform exclusive - but it's far from a certainty unless you have reason to think that's the case beyond, "that's how it used to be and things never change so that must be how it is now."

Well, ok. I did. But it's highly probable for first party games, esp. later in the consoles' lives. But you're also assuming, and more so. Because if we can't know, my statement that "To be absolutely certain hardware is 100% compatible with older software without additional software workarounds, you have to have the same architectural design" is absolutely correct. The people designing the PlayStation 5 couldn't possibly know how every single developer wrote their PS4 code, just as we can't. You following me here? Do you have the ability to change your stance when confronted or are you just going to keep power tripping?

If not you might as well just ban me.
 
What does "to the metal" CPU code even look like? They use LLVM/Clang, x86. If they kept the same OS/APIs, I have a hard time imagining why a newer x86 CPU would be a big issue.
 
Most commonly, this is a reference to writing code using assembly language rather than a compiler.
I guess it changes nothing from a compatibility standpoint since the opcodes used are based on the compiler target. Say, using AVX opcodes in assembly or enabling the AVX flag in the compiler build gives the same end result.
 
I guess it changes nothing from a compatibility standpoint since the opcodes used are based on the compiler target. Say, using AVX opcodes in assembly or enabling the AVX flag in the compiler build gives the same end result.
The only thing I can think of is perhaps race conditions, but I haven't heard of that type of thing happening in a CPU.
 
It is a new cpu, but it could also be considered the same cpu with added enhancements. Not very important semantics unless you're dying to be right.
It's not about being right but understanding what everyone's talking about. If one person means something different by the word "CPU" than another, the discussion will always be at odds. That's why clarification is important.

Well, ok. I did. But it's highly probable for first party games, esp. later in the consoles' lives. But you're also assuming, and more so.
I wrote, "To my mind, devs are no longer hitting the CPUs that hard, but I may be wrong on that." That's a clear acknowledgement of an assumption. Not even an assumption, but a theory. You stated your POV as factual and that the alternative is ridiculous. You assumed you were right; I presented a belief that I recognised could be wrong.

Because if we can't know, my statement that "To be absolutely certain hardware is 100% compatible with older software without additional software workarounds, you have to have the same architectural design" is absolutely correct.
Yes and no. Technically, yes, to be 100% certain. But at the same time, the CPUs are designed to be perfectly compatible with old code. That's the very basis of a persistent, compatible architecture. Where you're not right is saying that we can be 100% certain that a second CPU is absolutely necessary to play the back catalogue. We don't know if devs are or are not accessing the CPU at such a low level as to make CPU compatibility an issue. For all we know, Sony have mandated against the use of inline intrinsics etc. for just that purpose. We don't know if the same code running on Zen would even fall over as, despite being a different implementation of x64 to Jaguar, AMD may have done a thorough job of implementing compatibility.

In truth, to be absolutely certain your hardware is 100% compatible with older hardware, you'd have to include the older hardware, because any change introduces the possibility of a fault. Even PS2 had hardware incompatibilities within the same product due to different hardware implementations. So it seems reasonable to me to consider '100% BC' to be less than 100% identical as long as there's a very high probability of compatibility because the hardware is designed to be compatible.

The people designing the playstation 5 couldn't possibly know how every single developer wrote their ps4 code, just as we can't. You following me here? Do you have the ability to change your stance when confronted or are you just going to keep power tripping?

If not you might as well just ban me.
I've half a mind to. You've asserted yourself rudely, failed to adjust to conversations (not giving well defined answers), turned a polite discussion into a fight, and continue to be argumentative, incapable of saying, "sorry," for being so crass with your original attitude that started all this - even after accusing me of starting it, and then getting a lengthy response pointing out the progression of events which you've now just fobbed off by claiming I'm on a power trip.

I was happily discussing the next-gen consoles. I was interested in discussing whether the CPU could be a problem even if the same x64 architecture, open to the possibility of you being someone knowledgeable and able to provide informed insight, only for you to make unjustified assertions and then ridicule those with different ideas as they tried to discuss the point. You have three options now:

1) Start engaging in a decent discussion, including clarifying points and definitions where there's uncertainty so everyone's talking about the same thing, using technical reference where available, and acknowledging when a viewpoint is just a theory or feeling instead of asserting it as if fact or where one's knowledge isn't that strong.

2) Walk away in a huff because Beyond3D is a crap forum run by Nazi tyrants.

3) Have another volley of pointing out how wrong and stupid and overbearing and up-himself I am prior to a ban.
 
Getting back to the question, what is a hardware compatible CPU? I've always taken it to be a CPU in the same family made to be compatible, executing the same source code. Conceptually, there could be faults in implementation, but otherwise you can rely on a later version of a processor to run the legacy code without issue, certainly for an iteration or two of the family. The widest variation in CPUs at the moment is probably ARM, with a crazy number of CPUs from different IHVs all running the same code. We do get compatibility issues even with hardware abstraction on Android, but then how much is CPU incompatibility and how much is the rest of the system?
 
Getting back to the question, what is a hardware compatible CPU? I've always taken it to be a CPU in the same family made to be compatible, executing the same source code. Conceptually, there could be faults in implementation, but otherwise you can rely on a later version of a processor to run the legacy code without issue, certainly for an iteration or two of the family. The widest variation in CPUs at the moment is probably ARM, with a crazy number of CPUs from different IHVs all running the same code. We do get compatibility issues even with hardware abstraction on Android, but then how much is CPU incompatibility and how much is the rest of the system?

My first thought is any CPU that is capable of running the same code unmodified and that, generally, it's enough that it's a CPU that shares the same instruction set(s). So, yeah, pretty much what you and most others seem to think.
 
I would say that in black box terms, hardware compatibility between CPUs would mean they can read the same machine code and transform the same input data into the same result. There may be performance differences because of how the black box is implemented, but it ultimately works.
 
Yeah, but in the console space there is the concern that low-level access will require stuff like same timings. AFAIK this hasn't been a real issue for a couple of generations, but that's mostly a gut feeling. I was shocked that GPU differences could have such an impact still and you couldn't just replace the GPU in PS4 with a more recent design, with Sony instead doubling up the same hardware. Would be nice to know for sure what options and limits MS and Sony have for CPU, though I'm happy to operate under the assumption that any x64/Zen will work just fine. And that's even if Sony care for BC! :runaway:
 
Yeah, but in the console space there is the concern that low-level access will require stuff like same timings. AFAIK this hasn't been a real issue for a couple of generations, but that's mostly a gut feeling. I was shocked that GPU differences could have such an impact still and you couldn't just replace the GPU in PS4 with a more recent design, with Sony instead doubling up the same hardware. Would be nice to know for sure what options and limits MS and Sony have for CPU, though I'm happy to operate under the assumption that any x64/Zen will work just fine. And that's even if Sony care for BC! :runaway:

I suspect that the move to massively multi-threaded engines and out-of-order execution in CPUs have made strict timing of code execution less of a possibility.
 
Yeah, but in the console space there is the concern that low-level access will require stuff like same timings. AFAIK this hasn't been a real issue for a couple of generations, but that's mostly a gut feeling. I was shocked that GPU differences could have such an impact still and you couldn't just replace the GPU in PS4 with a more recent design, with Sony instead doubling up the same hardware. Would be nice to know for sure what options and limits MS and Sony have for CPU, though I'm happy to operate under the assumption that any x64/Zen will work just fine. And that's even if Sony care for BC! :runaway:

But GPUs don't standardize their ISA like CPUs do with x86 and AVX, or at least that's how I'd understand it. GPUs standardize based on an API, and the underlying ISA changes significantly between families of GPUs. So if you optimize code for a specific GPU ISA, you're out of luck running the same code on a new GPU if the ISA changes. So Intel and AMD CPUs understand the same machine code, but Nvidia and AMD GPUs do not, and AMD/Nvidia GPUs do not even understand the same machine code across families of GPUs.

Someone can correct, as I'm no expert.
 
That's right. I just didn't appreciate how alien one GPU in a 'family' could be from its predecessor, but of course it's pretty obvious when you think how they need constant driver changes to hang it all together in PC space. We were talking about GCN 2 and GCN 3, and the implication was similar to Pentium 3 and Pentium 4, that GCN 3 can run GCN 2 stuff just fine. Not at all! There isn't such a thing as a clear GCN 2 demarcation, with similar architectural parts having differences enough to make them incompatible. That possibly speaks volumes about how much effort CPU makers go to in order to add improvements without screwing around with compatibility.
 
Yah, but it's x86 assembly ... not CPU specific like microcode.
The implementation of microcode in x86 is used sparingly. You can't write any semi-complex programme using only microcode, ergo the closest an OS or application developer can get "to the metal" is assembly language. Microcode is very rarely compatible across processor generations either; Intel write bespoke microcode for their CPUs.

But GPUs don't standardize their ISA like CPUs do with x86 and AVX, or at least that's how I'd understand it.

It's smoke and mirrors. If you cast your mind back across 80x86 over the decades, the core architecture, which began as what we now term conventional CISC, has veered into RISC and VLIW and now remains an abominable mishmash of all three. There is absolutely nothing standard about the Instruction Set Architecture, only the Instruction Set, and even then new x86 instructions are introduced frequently and old instructions deprecated and handled by processor firmware or just dropped completely.

The fact that you as an application or OS developer don't need to worry too much about this is testament to Intel and AMD's designs. Remember XOP? Of course you don't; it was introduced in Bulldozer and removed from Zen. There are plentiful examples of this. :yep2:

CPUs really aren't more standard than GPUs; it's simply that the abstraction between the application and hardware is handled at OS/API level for GPUs and by the processor itself for CPUs.
 
So developers using lower-level APIs like GNM aren't going to throw roadblocks into backwards compatibility if they move to new CPUs (Ryzen, Ryzen+, etc.) or GPU architectures that might appear in next-gen consoles (Navi's successor?)?
http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4

"The graphics APIs are brand new - they don't have any legacy baggage, so they're quite clean, well thought-out and match the hardware really well," says Reflections' expert programmer Simon O'Connor.
"At the lowest level there's an API called GNM. That gives you nearly full control of the GPU. It gives you a lot of potential power and flexibility on how you program things. Driving the GPU at that level means more work."

Sony has talked about its lower-level API at GDC, but wouldn't disclose its name, so at least now we know what it's called (the PS3 equivalent is GCM, for what it's worth) but what about the "wrapper" code supplied by Sony that is supposed to make development simpler?

"Most people start with the GNMX API which wraps around GNM and manages the more esoteric GPU details in a way that's a lot more familiar if you're used to platforms like D3D11. We started with the high-level one but eventually we moved to the low-level API because it suits our uses a little better," says O'Connor, explaining that while GNMX is a lot simpler to work with, it removes much of the custom access to the PS4 GPU, and also incurs a significant CPU hit.

A lot of work was put into the move to the lower-level GNM, and in the process the tech team found out just how much work DirectX does in the background in terms of memory allocation and resource management. Moving to GNM meant that the developers had to take on the burden there themselves, as O'Connor explains:
 
It all depends if the newer SoCs ever remove instructions or change the actual effect of them. On the CPU side I don't think AMD removed instructions when creating new CPUs, but some instruction timings could have taken longer to execute. On the GPU side, Sony was very careful to use a frankenstein GPU approach on the 4Pro, by using the PS4 GPU and only bolting on very specific new instructions (double-rate FP16 and gradients) while retaining all the old.

If you look through AMD GPU tech papers for GCN, just glance through the changes section. Any time they indicate instructions have been removed, that could lead to extensive trouble in a console setup like Sony's where there isn't as much of a layer of abstraction between the games and the hardware.
 