Xbox 1 backwards compatibility

If CELL is what it's cracked up to be, I rather assumed they'd be emulating the whole shebang in-processor since it's not your typical CPU-to-GPU relationship. (Or failing that, work out a place to stick in an EE+GS, though I rather don't think they want to suck up the extra cost or spread themselves thin again by adding that to the PSTwo/PSP fab-sharing mix.)

Meanwhile, can one of the insiders please answer whether it's more technical feasibility or "ownership issues" that stand in the way of backwards compatibility on the Xbox 2? At one point before, a lot was being made of NVIDIA-owned procedures that would have to be licensed separately for Microsoft to get real Xbox compatibility in Xenon.
 
Meanwhile, can one of the insiders please answer whether it's more technical feasibility or "ownership issues" that stand in the way of backwards compatibility on the Xbox 2?

ERP has mentioned a few times that he thinks there may be some issues with some of the buffer formats that NVIDIA use. One element that does stick out in my mind is the use of DST+PCF (depth stencil textures with percentage-closer filtering, a la 3DMark05); this is a format that is specific to NVIDIA at the moment. I've heard that ATI are trying to tackle that one somehow, but I don't know if that's in relation to XB2.
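For readers unfamiliar with the feature: when sampling a depth (shadow) texture, the NVIDIA hardware path returns not the stored depth but the bilinearly filtered result of four depth comparisons, which softens shadow edges for free. Here is a rough software sketch of that behaviour (my own illustration of PCF in general, not NVIDIA's actual hardware algorithm):

```python
# Software sketch of 2x2 percentage-closer filtering (PCF): sample a
# depth map, compare each of four neighbouring texels against the
# fragment's depth, and bilinearly blend the binary pass/fail results.

def pcf_sample(shadow_map, u, v, fragment_depth):
    """2x2 PCF fetch at texture coords (u, v) in [0, 1).

    shadow_map is a row-major list of lists of stored depths.
    Returns a lit fraction in [0.0, 1.0].
    """
    h = len(shadow_map)
    w = len(shadow_map[0])
    # Texel-space position; the 0.5 offset centres samples on texels.
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = int(x // 1), int(y // 1)
    fx, fy = x - x0, y - y0

    def lit(tx, ty):
        # Clamp-to-edge addressing, then a binary depth comparison.
        tx = min(max(tx, 0), w - 1)
        ty = min(max(ty, 0), h - 1)
        return 1.0 if fragment_depth <= shadow_map[ty][tx] else 0.0

    # Bilinear blend of the four comparison results -> soft shadow edge.
    top = lit(x0, y0) * (1 - fx) + lit(x0 + 1, y0) * fx
    bot = lit(x0, y0 + 1) * (1 - fx) + lit(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy
```

Sampling exactly on a lit/shadowed texel boundary yields an intermediate value (e.g. 0.5) rather than the hard 0-or-1 a plain depth compare would give; that blend is what the dedicated format buys.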
 
Fox5 said:
and I've never seen a mass market drive or adapter to play the original games of 1 system on another.
You forgot about the Master Gear convertor:
[Image: Master Gear converter]


:D
 
DaveBaumann said:
ERP has mentioned a few times that he thinks there may be some issues with some of the buffer formats that NVIDIA use. One element that does stick out in my mind is the use of DST+PCF (depth stencil textures with percentage-closer filtering, a la 3DMark05); this is a format that is specific to NVIDIA at the moment. I've heard that ATI are trying to tackle that one somehow, but I don't know if that's in relation to XB2.
There are a lot of rumors coming from very trustworthy sources stating that MS is indeed working on BC, and since NVIDIA is not helping them in any way, they are trying to do it themselves (with the help of ATI or not, that I don't know, but it's likely).
If they can have a working version before the system is considered finished they will announce it; if they can't, they'll do what they're doing now, in other words, they'll do without (they never promised BC for a reason).
 
BC is not a big deal; it's just useful in the consumer mindset. IMO they don't need 100% BC, just BC for the popular games.
 
DaveBaumann said:
Regardless of whether Sony is "assisting NVIDIA" or not (although, really, what help could they be for the graphics core?)

You never did answer what the difference in computational constructs are between an APU and an ALU in X2. I'm interested to hear how the unified SIMD|Scalar datapaths are so different.

And then you can elaborate on how, with the ATI GPU, which has unified shaders at the front end leading up to a fixed-function back end that does rasterization, NVIDIA and STI can each potentially contribute.
 
Vince said:
DaveBaumann said:
Regardless of whether Sony is "assisting NVIDIA" or not (although, really, what help could they be for the graphics core?)

You never did answer what the difference in computational constructs are between an APU and an ALU in X2. I'm interested to hear how the unified SIMD|Scalar datapaths are so different.

And then you can elaborate on how, with the ATI GPU, which has unified shaders at the front end leading up to a fixed-function back end that does rasterization, NVIDIA and STI can each potentially contribute.

Come on Vince, give it a rest.

Tommy McClain
 
V3 said:
BC is not a big deal, its just useful in consumer mind set. IMO they don't need 100% BC, just BC the popular games.
But that just screams "customer support nightmare". If they manage a compatibility rate similar to Sony's PS1-on-PS2 compatibility, that's fine, but if it's something much lower, it won't work.
The biggest culprit is still the HDD (or a gigabyte of flash).
 
Vince said:
You never did answer what the difference in computational constructs are between an APU and an ALU in X2. I'm interested to hear how the unified SIMD|Scalar datapaths are so different.

First difference: 64 to 128 hardware contexts versus 1 per pipe. The pipes designed by ATI are explicitly designed for the type of workloads they face, which have a large latency component. To overcome this large latency, a large number of hardware contexts are architected into the design. An APU is just a freaking vector unit.

And then you can elaborate on how, with the ATI GPU, which has unified shaders at the front end leading up to a fixed-function back end that does rasterization, NVIDIA and STI can each potentially contribute.

STI? Nothing design-wise. All they will probably be contributing is integration into the rest of the ASIC, and the manufacturing.

Aaron Spink
speaking for myself inc.
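Aaron's point about hardware contexts hiding memory latency can be shown with a toy round-robin model (my own illustrative sketch, not any actual ATI scheduler; the 100-cycle latency figure is made up):

```python
# Toy model of latency hiding: a pipe holds N thread contexts. A context
# that issues a memory fetch stalls for LATENCY cycles; the pipe simply
# switches to the next ready context, so with enough contexts the ALU
# never sits idle waiting on memory.

LATENCY = 100  # illustrative memory latency in cycles

def utilization(num_contexts, total_cycles=10_000):
    """Fraction of cycles the pipe issues useful work, round-robin."""
    ready_at = [0] * num_contexts  # cycle at which each context is ready
    busy = 0
    for cycle in range(total_cycles):
        for ctx in range(num_contexts):
            if ready_at[ctx] <= cycle:
                # Issue one instruction, modelled as a fetch that makes
                # this context wait LATENCY cycles for memory.
                ready_at[ctx] = cycle + LATENCY
                busy += 1
                break  # one issue slot per cycle
    return busy / total_cycles
```

With a single context the pipe idles 99% of the time; utilization rises linearly with the context count until, at one context per latency cycle, the pipe is fully covered. That is why a graphics pipe carries dozens of contexts where a plain vector unit carries one.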
 
aaronspink said:
STI? Nothing design-wise. All they will probably be contributing is integration into the rest of the ASIC, and the manufacturing.

Aaron Spink
speaking for myself inc.

Uh no.

With NVIDIA's only experience in console hardware being the horrendously overpriced and underperforming xbox GPU, you can be certain that Sony has NVIDIA on a tight leash design-wise.
 
With NVIDIA's only experience in console hardware being the horrendously overpriced and underperforming xbox GPU, you can be certain that Sony has NVIDIA on a tight leash design-wise.

:rolleyes:
 
Qroach said:
With NVIDIA's only experience in console hardware being the horrendously overpriced and underperforming xbox GPU, you can be certain that Sony has NVIDIA on a tight leash design-wise.

:rolleyes:

That's right, keep rolling your eyes.

It's amazing that a good number of people have learned nothing, absolutely NOTHING, from the past three to four years.
 
No I'm rolling my eyes at personal stabs people like yourself take at faceless corporations. It's rather lame. Then again, when it comes to console hype, it REALLY is amazing that a good number of people have learned nothing, absolutely NOTHING, from the past four to five years.

Sony is going to get technology Nvidia was/is planning for the PC. If that's what you mean by a "tight leash" then wow, impressive.
 
Qroach said:
Sony is going to get technology Nvidia was/is planning for the PC. If that's what you mean by a "tight leash" then wow, impressive.

I am somewhat reluctant to believe NV can pull another "XGPU" fiasco, especially on Sony.
 
How was what NVIDIA did with the Xbox GPU a fiasco? Because they had MS behind an eight ball and thus commanded a high price? Yes, it was overpriced, but NVIDIA could leverage that price with all the other components they provided MS. Without NVIDIA the Xbox wouldn't have happened, IMO.

I'm not going to get into the "underperforming" nonsense. The PS2, regarding graphics functionality, was just as guilty of this as any console. Anyway, Sony is getting technology originally planned for the PC, IMO, and they are licensing technology, not buying fabbed chips. I'd really like to know what a "tight leash" means.
 
Qroach said:
How was what NVIDIA did with the Xbox GPU a fiasco? Because they had MS behind an eight ball and thus commanded a high price? Yes, it was overpriced, but NVIDIA could leverage that price with all the other components they provided MS. Without NVIDIA the Xbox wouldn't have happened, IMO.

I'm not going to get into the "underperforming" nonsense. The PS2, regarding graphics functionality, was just as guilty of this as any console. Anyway, Sony is getting technology originally planned for the PC, IMO, and they are licensing technology, not buying fabbed chips. I'd really like to know what a "tight leash" means.

"over priced" is a bit of an understatement. it was way out of the console league re price/performance. and with that UMA design it was as good as 'bolted' on something originally meant to be a console. ..or again, maybe not, i'm not sure anymore. whatever, nv managed to sell a pickup to someone who was (supposedly) shopping for a porsche.
 
Vysez said:
Fox5 said:
and I've never seen a mass market drive or adapter to play the original games of 1 system on another.
You forgot about the Master Gear convertor:
[Image: Master Gear converter]


:D

Wasn't there one of those on the Genesis too? Anyhow, I was aware of it; I just didn't think it was a big seller.
 
aaronspink said:
First difference: 64 to 128 hardware contexts versus 1 per pipe.

That doesn't have to do with the actual computation pathway; that has to do with the complex built around it. This is where the distinction between an APU and an SPU comes in...

I specifically phrased the question so people wouldn't comment as you did and force a correction. Going by current SPU knowledge, there's nothing preventing the current limit of 32 (IIRC) from going higher, outside of added area costs. Although, since we are basically unsure of how the hierarchy works in a PE concerning PU-allocated workload, we can't say this either at this point.
 
So why use that if you can get better performance out of units dedicated to the task they're required to do, for similar or smaller die sizes? And how are you going to circumvent texture latency?
 