PS3 network + back-compat news

Shifty Geezer said:
I don't know what low-level access would be used on GPUs or the XB, having only ever used the higher-level APIs and abstractions myself, but the fact that MS have been having so much trouble with BC shows it's not straightforward. They themselves said it was a matter of different 'levels': games written on the highest 'levels', which I take to mean going through the official APIs only, have BC out of the box, whereas other titles need to be worked on.

The difference between that and what might be appearing on PS3 is whether the hardware is accessible at the same low level, so that if a program accesses something at the hardware level without going through an API, it still works correctly. If the previous-gen program is being translated bytewise and mapped onto different hardware without any direct access to the hardware, that's pure software emulation. I don't know to what degree that is or is not happening on XB360, but I understand the API is key to MS's programming model (it was the DirectX Box after all!) and would expect compatibility to be focused on the API rather than hardware-level compatibility.

Your first paragraph actually highlights the point that 360 BC is hardware based, by virtue of the fact that some games need more intervention than others. Your point about RSX being able to natively understand PS2 low-level calls is not feasible: they are completely different hardware, built in complete isolation from each other.

Where stuff like register calls are made directly, it's more a case of having the BC software layer make special cases for these and remap them to the new hardware. Think of something like "Bleem", which had a generic engine that mapped PS API calls onto DX calls, and then made special cases for games that strayed a little. Where PS2 emulation may have an easier time is by virtue of the fact that the hardware is more fixed function than NV2A.

However, the point is that 360 emulation is a hardware process, apart from the software layer slid between the application code and the hardware to remap / alter code pathways, and PS2 emulation on PS3 is likely to be very similar.
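To illustrate the kind of shim I mean, here's a minimal sketch; everything in it (the names, IDs and call numbering) is invented for illustration, not how Bleem or MS actually structured things:

Code:
#include <stdio.h>
#include <stdint.h>

typedef void (*fixup_fn)(void);

/* Per-title override: a special-cased translation for one call in
 * one game that bypassed the official API. */
struct override {
    uint32_t title_id;
    uint32_t call_id;
    fixup_fn fixup;
};

static void kludge_for_title_1234(void)
{
    puts("special-cased path for a title that hit the hardware directly");
}

static const struct override overrides[] = {
    { 0x1234, 42, kludge_for_title_1234 },
};

static void remap_generic(uint32_t call_id)
{
    printf("generic translation of old call %u to the new API\n",
           (unsigned)call_id);
}

static void bc_dispatch(uint32_t title_id, uint32_t call_id)
{
    for (size_t i = 0; i < sizeof overrides / sizeof overrides[0]; i++) {
        if (overrides[i].title_id == title_id &&
            overrides[i].call_id == call_id) {
            overrides[i].fixup();   /* game strayed: special case */
            return;
        }
    }
    remap_generic(call_id);         /* well-behaved title */
}

int main(void)
{
    bc_dispatch(0x1234, 42);  /* hits the per-title special case */
    bc_dispatch(0x5678, 42);  /* falls through to the generic remap */
    return 0;
}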
 
thatdude90210 said:
Why isn't "Purevideo" needed again? Does the PS3 have a dedicated chip that does video, or are they supposed to use the Cell chip to do that?

Because if it's the latter, then I would ask why take something out of one chip (work), just to have to write software (more work) to use another chip to do something you already had.

Probably because software is a one-time cost, and porting some codecs that already exist is not going to be all that much work or very expensive (especially when the hardware is likely pretty good at it). Meanwhile, adding fixed-function hardware is a per-unit cost on something that's already pretty big.

I'd say Sony would be kind of crazy not to ditch the PureVideo... you're talking about increasing the die size of a rather large chip just to do something that the Cell is probably just as good at (and which most likely already has codecs ported, from tech demos and such).

Who knows whether they actually cut it out, but to someone looking from the outside, keeping the PureVideo transistors in would seem like a waste in the end.
 
So, are they still sticking to their guns that they will have 100% BC with PS1 and PS2 games? I think that claim might have to change completely if they aren't going for a hardware solution any more. It might end up sounding similar to the line MS gave at last E3.
 
Titanio said:
...
As for save files, if there's no memory card slots in PS3, you'll need to be able to save somewhere. The HDD makes as much sense as anything else. I guess we might find out at E3.
Actually, I was referring to the possibility that there would be something like .exe files on the hard drive for each game to add emulation, no?

If not, why did MS have to go that way, if they are both using some hardware and some software emulation (a layer in between, as Dave mentioned)?

I'm confused. :oops:
 
Tap In said:
Actually, I was referring to the possibility that there would be something like .exe files on the hard drive for each game to add emulation, no?

If not, why did MS have to go that way, if they are both using some hardware and some software emulation (a layer in between, as Dave mentioned)?

I'm confused. :oops:

Well, I think the idea would be that Sony would install any and all profiles required on the PS3. So there might be various profiles for different games, but they would all be there from the get-go.

Otherwise, they would have to do an MS-like solution, i.e. a subset installed on the console and the rest available for download; I don't see that happening.
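Conceptually something like this; pure speculation on my part, with made-up title IDs and profile contents:

Code:
#include <stdio.h>
#include <string.h>

/* Every profile ships on the console; nothing is downloaded per game. */
struct profile {
    const char *title_id;   /* e.g. the disc's product code */
    const char *tweaks;     /* stand-in for whatever fixups apply */
};

static const struct profile profiles[] = {
    { "SLUS-00001", "timing hack A" },           /* made-up IDs */
    { "SLUS-00002", "GS readback workaround" },
};

static const struct profile *find_profile(const char *title_id)
{
    for (size_t i = 0; i < sizeof profiles / sizeof profiles[0]; i++)
        if (strcmp(profiles[i].title_id, title_id) == 0)
            return &profiles[i];
    return NULL;   /* no entry: use the default emulation path */
}

int main(void)
{
    const struct profile *p = find_profile("SLUS-00001");
    puts(p ? p->tweaks : "default path");
    return 0;
}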
 
Tap In said:
Actually, I was referring to the possibility that there would be something like .exe files on the hard drive for each game to add emulation, no?

If not, why did MS have to go that way, if they are both using some hardware and some software emulation (a layer in between, as Dave mentioned)?

I'm confused. :oops:

If the emulation is not solid in itself (which is how it sounds), it's some combination of the two, with the first one being relied on, I guess.
 
Shifty Geezer said:
There's a grey area between software and hardware though. Hardware tends to mean incorporating the hardware from the original platform in the form of a chip or such. If PS3 doesn't incorporate the GS as a chip, it's not a hardware BC solution, but if features of RSX aid emulation, neither is it a software-only solution as XB360's seems to be. Hence it looks to be a hardware-aided software solution ;)

Why would it need it? The PS3 easily eclipses the PS2 in features and performance in just about every way.
 
Fox5 said:
Why would it need it? The PS3 easily eclipses the PS2 in features and performance in just about every way.

I think they would need something to cover the eDRAM bandwidth in the GS; I don't think any of the buses in the PS3 will be fast enough.
 
Exactly. In fact, I'm quite confused by this information. How would one go about software-emulating the crazy framebuffer bandwidth of PS2? It will be very interesting to see how they do it once we finally get solid facts.
 
Tap In said:
Actually, I was referring to the possibility that there would be something like .exe files on the hard drive for each game to add emulation, no?

I sincerely doubt it, since that would suggest that the emulation isn't working properly and each game requires independent hacks to work fully. That would also suggest they would need to test each and every one of those 13'000 games they want to be compatible with... If the emulation system works fine, there should be no need for game-specific hacks (software emulation or not).



Tap In said:
If not, why did MS have to go that way, if they are both using some hardware and some software emulation (a layer in between, as Dave mentioned)?

Microsoft required individual hacks because some/most games didn't stick to the TRCs (I assume they didn't code only through the API) but used it as a closed platform and coded to its advantages by doing wacky stuff with the hardware. Because of this, game-specific 'hacks' are needed to emulate these special behaviours that are not covered by the emulation system. Perhaps ERP (or any other developer that has worked on Xbox) knows more about this?
 
Phil said:
That would also suggest they would need to test each and every one of those 13'000 games they want to be compatible with... If the emulation system works fine, there should be no need for game-specific hacks (software emulation or not).

There's no way in hell to somehow "prove" (as in mathematics) that the emulation works "fine". The only way would be to test the games.


Phil said:
Microsoft required individual hacks because some/most games didn't stick to the TRCs (I assume they didn't code only through the API) but used it as a closed platform and coded to its advantages by doing wacky stuff with the hardware. Because of this, game-specific 'hacks' are needed to emulate these special behaviours that are not covered by the emulation system. Perhaps ERP (or any other developer that has worked on Xbox) knows more about this?

There are no TRCs that say "don't do wacky stuff with the hardware". TRCs say stuff like "offer 50Hz and 60Hz modes" and "don't crash if the user pulls out the memory card at the wrong moment". Console developers are encouraged to do wacky stuff with the hardware.
 
Dave Baumann said:
I'm not sure I see the distinction here. You think that all the graphics rendering of XBOX titles is done without the aid of the graphics processor? For one, we know this not to be the case, as they have 4x AA applied (a hardware function), and second, both NV2A and Xenos are fundamentally DirectX devices, with Xenos being a few generations on. One of the primary issue areas is the shadowing mechanism of NV2A, which isn't directly mappable to non-NVIDIA hardware and needs alternative coding to manage.

I still think there's a difference, though, between hardware-assisted (software) emulation and "normal" emulation. The latter simply uses the available system (hardware + software) as well as possible to emulate whatever it is trying to emulate; the former has had the emulation as part of its design (which obviously mostly impacts the hardware, as software can essentially be changed at any time).

An example of what I imagine could be such hardware assistance would be the native inclusion of 1-, 2-, and 4-bit paletted formats in RSX.
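To make that concrete, this is roughly the per-texel work the emulator has to do in software when the GPU lacks native palette support (a minimal sketch, not actual RSX or emulator code, nibble order assumed):

Code:
#include <stdint.h>
#include <stddef.h>

/* Expand a 4-bit indexed texture (two texels per byte, low nibble
 * assumed first) into RGBA8 via a 16-entry palette.  With native
 * hardware support this per-texel loop simply wouldn't exist. */
void expand_4bit_palette(const uint8_t *src, const uint32_t pal[16],
                         uint32_t *dst, size_t texels)
{
    for (size_t i = 0; i < texels; i++) {
        uint8_t byte = src[i / 2];
        uint8_t idx  = (i & 1) ? (byte >> 4) : (byte & 0x0F);
        dst[i] = pal[idx];
    }
}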

Whether by that (completely arbitrary) definition the 360 also does "hardware-assisted" emulation is up in the air, as BC came rather late in the design process.
 
It does seem arbitrary - even if a device doesn't support a format natively, it's likely to be using another format. If it is a) still doing it in hardware, b) doing it with the correct performance and c) not losing quality, what's the difference?
 
Dave Baumann said:
It does seem arbitrary - even if a device doesn't support a format natively, it's likely to be using another format. If it is a) still doing it in hardware, b) doing it with the correct performance and c) not losing quality, what's the difference?
That's the point of emulation. If your host machine is fast enough, you can emulate the system at the silicon level and not have any problems whatsoever. The difference comes in when it isn't as fast as you would like. Then the "hardware-assisted" parts are simply less likely to be a bottleneck and / or suck up valuable resources for menial tasks like format conversion.
 
Fox5 said:
Why would it need it? The PS3 easily eclipses the PS2 in features and performance in just about every way.
Total power doesn't always mean you can software-emulate a less powerful system. There were Spectrum (Timex 2000) emulators on much more powerful PCs (386s, I think) that ran dog slow, because for every instruction from the Spectrum game you had to find a suitable replacement on the PC, and then, when it came to addressing the graphics, potentially go a very alien route. I guess you could say it's like a very smart person translating a rather thick person's writing into another language. No matter how smart a person may be, if they have to use a dictionary to make the translation happen, they are slowed down. A Spanish speaker, even an unintelligent one, can read a book in Spanish, while a super-smart German who can't speak Spanish has to look through lots of books to translate what the Spaniard says into German, slowing him so considerably that he reads out the book at a much, much slower rate.
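In code terms, the "dictionary lookup" is the interpreter's inner loop. A toy sketch (nothing like a real Spectrum core, just the shape of the cost), where each guest instruction costs the host a fetch, a decode and a dispatch:

Code:
#include <stdint.h>

enum { OP_NOP, OP_LOAD, OP_ADD, OP_HALT };   /* invented opcodes */

struct cpu { uint16_t pc; uint8_t a; uint8_t mem[65536]; };

static void run(struct cpu *c)
{
    for (;;) {
        uint8_t op = c->mem[c->pc++];        /* fetch */
        switch (op) {                        /* decode + dispatch */
        case OP_NOP:  break;
        case OP_LOAD: c->a = c->mem[c->pc++];  break;
        case OP_ADD:  c->a += c->mem[c->pc++]; break;
        case OP_HALT: return;
        default:      return;                /* unknown opcode */
        }
    }
}

int main(void)
{
    static struct cpu c = { 0 };
    c.mem[0] = OP_LOAD; c.mem[1] = 40;
    c.mem[2] = OP_ADD;  c.mem[3] = 2;
    c.mem[4] = OP_HALT;
    run(&c);            /* c.a ends up as 42 */
    return 0;
}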

I think they would need something to cover the eDRAM bandwidth in the GS; I don't think any of the buses in the PS3 will be fast enough.
One suggestion is just using compression. PS2 didn't have texture compression, so at 4:1 compression, RSX's roughly 20 GB/s of memory bandwidth has about 80 GB/s available relative to PS2 textures.
 
Dave Baumann said:
It does seem arbitrary - even if a device doesn't support a format natively, it's likely to be using another format. If it is a) still doing it in hardware, b) doing it with the correct performance and c) not losing quality, what's the difference?
Well, I wasn't trying to categorize one as software and another as hardware. As I said originally, this emulation malarkey is rather grey in its definitions. When the media say it's a software solution or a hardware solution, it's normally a combination of the two.

Maven provided an example of the GPU-level hardware emulation that I couldn't offer, and as he says, the more hardware-level emulation you have, the less slowdown and the fewer problems you have at the software level. If the Cell supports native EE code because of hardware features, it'll offer a high level of emulation of that part. If it has to translate EE code into Cell code, it'll be software emulation. As I understand it, reading between the lines of various comments, PS3 is offering some hardware support for instructions or features, whereas XB360 is having to translate a lot. Of course I have no figures or details on any of that, and have no idea what level of hardware compatibility exists between XB and XB360, but from the sounds of things they are two very different architectures. Sony, on the other hand, have been in a better position to consider BC from the beginning (as it's something they wanted as a feature of the console, and they own the PS2 IP) and to work with development partners to try to include some features for compatibility purposes, such as Maven's suggested low-bit textures.
 
assen said:
There's no way in hell to somehow "prove" (as in mathematics) that the emulation works "fine". The only way would be to test the games.

When the emulator does exactly what the emulated device does*, I'd say you can say with absolute certainty that the emulation works fine without testing each and every case of a game (which is not possible with today's level of complexity in games).


assen said:
Console developers are encouraged to do wacky stuff with the hardware.

I was pretty sure I read sometime in 2001 that Microsoft wasn't keen on developers 'coding to the metal' on Xbox and made doing so quite difficult. For various reasons, I can see why they would want developers to stick to coding high-level or through their API (i.e. for exactly these backwards-compatibility-specific reasons).


In any case, I didn't claim to know which TRCs were stated in the case of Xbox - however, given exactly this situation with backwards compatibility, I would assume some technical requirements were given that influence backwards compatibility today one way or the other. Perhaps you can give more detail about what they were and how they impacted backwards compatibility on Xbox360?


EDITED typo for clarification [thanks Maven].
 
[maven] said:
That's the point of emulation. If your host machine is fast enough, you can emulate the system at the silicon level and not have any problems whatsoever. The difference comes in when it isn't as fast as you would like. Then the "hardware-assisted" parts are simply less likely to be a bottleneck and / or suck up valuable resources for menial tasks like format conversion.
I don't think the issues here are about being "fast enough", but more about code specifics. For instance, the difference between the shadowing mechanisms of XBOX 360 and XBOX is fairly trivial in terms of performance, but not so in terms of code. PCF filtering can easily be emulated with higher texture rates and shader code in terms of producing the same output, but it can be a significant pain in the arse to identify the code and replace it, especially if it is intertwined with specific lighting model shader code (which can differ per title).
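The output-equivalent replacement is easy enough to write down. A rough C sketch of the 2x2 case (a real shader rewrite would look different; this is just the arithmetic, with a stubbed-in fetch):

Code:
#include <stdio.h>

/* Stand-in for a depth-texture fetch; in the rewritten shader this
 * would be an ordinary texture sample. */
static float depth_sample(float u, float v)
{
    (void)u; (void)v;
    return 0.5f;   /* stub value */
}

/* One hardware PCF tap replaced by four explicit samples plus an
 * average: same output, 4x the texture fetches. */
static float pcf_2x2(float u, float v, float ref, float texel)
{
    float sum = 0.0f;
    for (int j = 0; j < 2; j++)
        for (int i = 0; i < 2; i++) {
            float d = depth_sample(u + i * texel, v + j * texel);
            sum += (ref <= d) ? 1.0f : 0.0f;   /* per-sample compare */
        }
    return sum * 0.25f;   /* averaged shadow factor */
}

int main(void)
{
    printf("%f\n", pcf_2x2(0.25f, 0.25f, 0.4f, 1.0f / 512.0f));
    return 0;
}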

Shifty Geezer said:
Maven provided an example of the GPU-level hardware emulation that I couldn't offer, and as he says, the more hardware-level emulation you have, the less slowdown and the fewer problems you have at the software level. If the Cell supports native EE code because of hardware features, it'll offer a high level of emulation of that part. If it has to translate EE code into Cell code, it'll be software emulation. As I understand it, reading between the lines of various comments, PS3 is offering some hardware support for instructions or features, whereas XB360 is having to translate a lot. Of course I have no figures or details on any of that, and have no idea what level of hardware compatibility exists between XB and XB360, but from the sounds of things they are two very different architectures. Sony, on the other hand, have been in a better position to consider BC from the beginning (as it's something they wanted as a feature of the console, and they own the PS2 IP) and to work with development partners to try to include some features for compatibility purposes, such as Maven's suggested low-bit textures.
From a graphics perspective, how can RSX fit in with considering BC from the beginning?
 
Dave Baumann said:
From a graphics perspective, how can RSX fit in with considering BC from the beginning?
Presumably Sony wanted BC, designed their in-house (or Toshiba) GPU with BC in mind, then decided to go with nVidia instead, and for that year or whatever of development have been putting some degree of BC features into RSX, which is where the idea of losing the useless transistors and filling the space with hardware BC fits in. I don't know enough about where they'd target such hardware to give any examples myself, but just going from Maven's example, adding hardware support for low-bit textures to the TMUs would be one change adding a bit of hardware BC.
 
Phil said:
When the emulator does exactly what the emulated device is supposed to, I'd say you can say with absolute certainty that the emulation works fine without testing each and every case of a game (which is not possible with today's level of complexity in games).

Not to be pedantic, but that is one of the common caveats of emulation: the difference between writing the emulation based on the specification of the emulated device, and writing it based on its actual implementation. Correct would be
does exactly what the emulated device does

Dave Baumann said:
I don't think the issues here are about being "fast enough", but more about code specifics. For instance, the difference between the shadowing mechanisms of XBOX 360 and XBOX is fairly trivial in terms of performance, but not so in terms of code. PCF filtering can easily be emulated with higher texture rates and shader code in terms of producing the same output, but it can be a significant pain in the arse to identify the code and replace it, especially if it is intertwined with specific lighting model shader code (which can differ per title).
I think it's both. :)
But with regard to your specific example of PCF, I don't think it's much of a problem, as PCF is either a sampler state or encoded in the tex instruction (I don't know how it is accessed on the Xbox), both of which can easily be found and suffixed with the additional sampling and averaging instructions.
I think a bigger problem might be the floating-point nature of the pixel-shader pipeline; you could do some clever bit-fiddling with the 12-bit (IIRC) fixed-point values, although this (luckily) isn't common on the PC platform (as otherwise every floating-point pipeline emulating PS1.x would be in trouble).
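Roughly the kind of bit-fiddling I mean, assuming truncating 12-bit fixed point with wraparound (the real NV2A behaviour may well differ):

Code:
#include <math.h>
#include <stdio.h>

/* Mimic a 12-bit fixed-point register in float math by quantising
 * after every operation.  Truncation and wraparound are assumptions
 * here, not confirmed NV2A behaviour. */
static float quantize12(float x)
{
    long i = (long)floorf(x * 4096.0f);   /* snap to the 12-bit grid */
    i &= 0xFFF;                           /* emulate wraparound */
    return (float)i / 4096.0f;
}

int main(void)
{
    printf("%f\n", quantize12(0.123456f));  /* rounded to the grid */
    printf("%f\n", quantize12(1.5f));       /* wraps like fixed hw */
    return 0;
}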
 