Will the PS3's GPU be more modern for its time than the PS2's GS was?

In a month, ISSCC 2005 arrives and we'll know the (1st-gen) Cell details. In 90 days, in March, there'll be the PS3 unveiling event. Patience, dudes :D
 
I am stubborn, but I am not confused.

Your argument failed to prove how my two sentences CONTRADICTED each other.

The fact is that you accepted this:

My conclusion ALWAYS allowed for other conclusions. It was suggestive and NEVER definitive.

If it was not definitive, but allowed for other conclusions, you are saying this:

"The GPU might not be CELL based."

You accepted this conclusion as one of the possible outcomes of your argument or of one or more of the premises you used.

One of those premises was that Vertex Shading would be done using the APUs on the CPU.

Do you see that you are saying the same thing that you deemed to be logically contradictory?



My argument was:

....
Statement 1: With this said, I still say that IMHO the PlayStation 3 GPU is not CELL based, it does not have the SPUs/APUs.
....

Statement 2: It would seem to me a not bad idea to assign all the Vertex Shading work to the CELL based CPU[...]

For these two statements to be contradictory, it would have to be that if Statement 1 is true then Statement 2 cannot be true at the same time.

Your argument accepts as a plausible premise that "Vertex Shading is done by the APUs in the CPU".

Now, if saying that "the GPU is CELL based" is not the ONLY conclusion, but one of the possible conclusions, of an argument meant to analyze the nature of the GPU in PlayStation 3, then we might start to accept that another possible and plausible conclusion is that the GPU might not be CELL based.

If we accept that as a valid conclusion, then my two statements are not contradicting each other.
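The consistency claim above boils down to a truth-table check: two statements are contradictory only if NO truth assignment makes both true. A toy Python sketch of that check (the proposition names are mine, purely illustrative):

```python
from itertools import product

# Two independent propositions (illustrative names, not from the thread):
#   gpu_is_cell - "the PS3 GPU is CELL based"
#   vs_on_cpu   - "vertex shading is done by the APUs on the CPU"

def statement1(gpu_is_cell, vs_on_cpu):
    return not gpu_is_cell          # "the GPU is not CELL based"

def statement2(gpu_is_cell, vs_on_cpu):
    return vs_on_cpu                # "assign vertex shading to the CELL CPU"

# Contradictory iff no assignment satisfies both statements at once.
satisfiable = any(statement1(g, v) and statement2(g, v)
                  for g, v in product([True, False], repeat=2))
print("jointly satisfiable:", satisfiable)  # True, e.g. g=False, v=True
```

Since the assignment "GPU not Cell-based, vertex shading on the CPU" satisfies both, the two statements are jointly satisfiable and hence not contradictory.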

3 : to contain potentially
4 : to express indirectly <his silence implied consent>

This is from the dictionary definition of "imply"... when I was presenting the more "exclusive" version of your argument I used it in the sense of #4, while you meant it in the sense of #3.

When Immanuel Kant and his followers say "ought implies can", they mean that a moral duty (i.e. something you "ought to do/respect") makes sense only if the person has the capability to perform it, if it is possible to perform it. It would not be fair to mandate that you do something you could not possibly do.

So I was justified in not accepting the premises in that scenario, which I admit was a bit warped considering what you wanted to say.

My point is that I do not believe you can say my two Statements contradict each other, do not make sense together, etc.: if you proved that argument with my "exclusive conclusion" and my use of "implies", etc., then you would have shown how my two Statements contradict each other, as...

"Statement 2: It would seem to me a not bad idea to assign all the Vertex Shading work to the CELL based CPU"

could not lead to anything different from "the GPU is CELL based", and would therefore make a conclusion like "the GPU is not CELL based" impossible and illogical.
 
Almasy said:
So, if I understand this well, Sony had the GS completed, with everything major locked down, in 1998? And sat on it for more than a year?? If that's the truth then:

1. Wow, the GS really is one heck of a great graphics chip if it was really completed back then.

The Matrox G200, the Nvidia Riva TNT/2, the Videologic PowerVR2 and ATI Rage 128 GPUs were all impressive architectures that could render some truly amazing PS2-level graphics in a videogame if the developer's tools and code were written at the assembly level, even though they were all physically produced in 1998 on whatever process it was back then (0.35 µm?)

Unless I'm wrong, Nvidia released the Riva TNT on a 0.35 µm process and then re-released the same GPU as the TNT2 on 0.25 µm, raising fill-rate performance, and if Sony had chosen that chip instead of the GS we can argue that the graphics would have been pretty much the same, since the PS2 does most of its software tricks dependent on its CPU (a possible reason why the GS was finished first).

Fast forward to 1999: Nvidia released the GeForce 256 NV10 GPU on 0.22 µm, in 2000 they released the GF2 Ultra NV15 on 180 nm, and in 2002 they re-re-released it as the GF4 MX NV18 on a 150 nm process that is still basically an NV10, just shrunk down and stably overclocked from 120 MHz all the way to 300 MHz on the GF4 MX 460.

Now imagine a 1999 GPU core reaching breakneck speeds, thanks to a company that allowed this to go on.

2. Wasn't this kind of dumb?? To sit on a chip, no matter how good it is, for more than a year is not a very smart thing to do, IMO.

Not so, at least in the way Sony seems to think, since they also neglected to add 4MB to the already very limited 4MB of video RAM in the PS2, to at least match the Dreamcast's 8MB of video RAM.

Besides there are reasons for this:

1: At the time, 1998-1999, there were no real XBox rumors based on fact.

2: The GS is very much like the Matrox G400, Nvidia Riva TNT2, etc., in that they depend on a fast CPU for most of their functions; hence the wait until 1999 or 2000 for EE production on a smaller die process, for more speed from a custom 300 MHz chip. Compare that to the way XBox works, where the GPU handles most of the graphics tasks while the CPU is free to do things like A.I., something PS2 games are not known for doing well.

3: The GS benefited, Riva TNT2- or GeForce 2-style, from a die shrink for more speed and therefore higher fill rate and polygon count.

4: Sony was going to destroy SEGA's credibility by buying off EA and having Square and Enix produce software only for the PS2, leaving the DC high and dry without those companies' IP and with a major head start on what they thought was going to be their competition in Nintendo.

5: Sony was basically promoting the use of assembly-level software development kits so that game developers would be able to get very close to the hardware. Sony would prefer that a developer devote all their time to making games on its console, leaving any port to another console a major development effort, farmed out to another team if the third party could afford it.

Basically Sony learned something that SEGA seemed to forget during the late PSX vs. Saturn days: Saturn games could be much more technologically impressive than anything on PSX if they were programmed in assembly-level code, and the reason SEGA did that with some of its games was that it was not going to develop those games on a competing console platform.

SEGA's Saturn turned out some really technologically impressive 3D games during 1997/98, with titles like Burning Rangers, Panzer Dragoon Saga, the Sonic World part of the Sonic compilation and even Nights into Dreams. But what is really more impressive is what AM2 was working on that very few got to see: a prototype Saturn game that requires the 4MB expansion cart, Project Berkley, aka Shenmue:

http://www.sega-saturn.com/news/112102.htm

If you watch that movie, and you ever got to play the first version of VF4 on PS2 and compare it to the 3rd (or 4th?) version, VF4 Evo, you will see that AM2 is really crunching the PS2 to nearly replicate the Naomi 2 arcade hardware.

I'm not very knowledgeable on this topic, but didn't Sony have enough time to further upgrade the GS?? How difficult is it to do that??

I can assume that Sony may have had assembly-level SDKs being written so that developers could get a head start in PS2 game development. That is part of the reason why Metal Gear Solid 2 SoL pales in comparison to MGS3 Snake Eater in how the PS2 hardware is used to display much more complex graphics; the same goes for GTAIII vs. GTA:SA.

Having a head start with assembly-level SDKs and plenty of years of development time (the PS2 is the worldwide sales king) has allowed game developers to produce amazing-looking games that make people almost think XBox games are technologically weak in comparison to crippled 1998 technology like the PS2.

As for the topic poster, I feel that I agree with you.

I see the same thing that happened to XBox happening with PS3, in that the technology will definitely be bleeding edge when released in comparison to the PS2, mainly because:

1: Unlike the PS2, the PS3 will not be the first DVD console anymore; it will be the first Blu-ray console though, and as Sony has proven, their pockets are deeper than SEGA's.

2: Assembly-level development tools will be worked on; however, I feel they will do the same thing they did with PSX and release a general C++-level SDK, unless Nvidia is providing a custom version of Cg to Sony. Expect a lot of first- and second-wave games to have choppy glitches and even some slight slowdown, followed by progressively improving games that, even by the 3rd and 4th generation (unlike PS2), will not show the true potential of the console, considering that games take an average of 2 years to make; even Halo 2 illustrates this, even though it's the developer's second effort.

3: I could be wrong, and Sony and Nvidia could have a more powerful assembly SDK ready for the 2nd gen of games, to give full access to every transistor.

Now, I personally am not a Sony fan, I am more of a SEGA fan. But my taste in anime-franchise videogames that I personally find appealing, like Mobile Suit Gundam, Dragon Ball Z, Transformers and others, makes me already swing for PS3 over XBox Next, since it seems like Microsoft has no idea that people watch anime and dig franchise-related videogame products. It works for Star Wars, but those anime games have been and are being written specifically around PS2 hardware, with a slim chance of a version or port on GameCube and no chance in hell for XBox, even though that console's CPU is fully capable of emulating the PS2 hardware.
 
Akumajou said:
The Matrox G200, the Nvidia Riva TNT/2, the Videologic PowerVR2 and ATI Rage 128 GPUs were all impressive architectures that could render some truly amazing PS2-level graphics in a videogame if the developer's tools and code were written at the assembly level, even though they were all physically produced in 1998 on whatever process it was back then (0.35 µm?)

You're delusional. Sorry that's all anyone can say after that.
Not so, at least in the way Sony seems to think, since they also neglected to add 4MB to the already very limited 4MB of video RAM in the PS2, to at least match the Dreamcast's 8MB of video RAM.

They work differently; I thought this argument was closed 3 years ago. You can only store 8MB of textures on DC, while on PS2, once you've finished with your game code, you have the rest of those 32MB to store textures.
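The streaming argument can be put in rough numbers. With textures living in main RAM and being DMA'd into the GS's 4MB eDRAM as needed, the practical per-frame texture budget is set by the EE-to-GS bus, not by the 4MB. A back-of-the-envelope sketch, taking the commonly quoted ~1.2 GB/s figure for that path as an assumption:

```python
# How much texture data the PS2 can, in principle, re-upload into the
# GS's 4MB eDRAM every frame. The ~1.2 GB/s EE->GS bus figure is a
# commonly quoted spec, used here as an assumption, not a measurement.
bus_bytes_per_sec = 1.2e9
fps = 60

mb_per_frame = bus_bytes_per_sec / fps / (1024 * 1024)
print(f"~{mb_per_frame:.1f} MB of texture uploads possible per frame")
```

At 60 fps that works out to roughly 19 MB of fresh texture data per frame, which is why the 4MB vs. 8MB comparison with the DC's static VRAM isn't apples to apples.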
3: The GS benefited, Riva TNT2- or GeForce 2-style, from a die shrink for more speed and therefore higher fill rate and polygon count.

The GS of today is exactly the same speed as it was in 1999. The die shrink was for cost reasons, heat dissipation and power requirements. It is also the reason why we now have a much smaller PS2 than at launch. No speed gains.

4: Sony was going to destroy SEGA's credibility by buying off EA and having Square and Enix produce software only for the PS2, leaving the DC high and dry without those companies' IP and with a major head start on what they thought was going to be their competition in Nintendo.

What does that have to do with the main subject?

5: Sony was basically promoting the use of assembly-level software development kits so that game developers would be able to get very close to the hardware. Sony would prefer that a developer devote all their time to making games on its console, leaving any port to another console a major development effort, farmed out to another team if the third party could afford it.

Or, Developers wanted the manufacturers to let them get closer to the metal after PS1, and Sony did just that. Obviously they didn't handle it the best way they could, but that's another matter.

Basically Sony learned something that SEGA seemed to forget during the late PSX vs. Saturn days: Saturn games could be much more technologically impressive than anything on PSX if they were programmed in assembly-level code, and the reason SEGA did that with some of its games was that it was not going to develop those games on a competing console platform.

Meh... sounds to me like someone still has a chip on their shoulder...

SEGA's Saturn turned out some really technologically impressive 3D games during 1997/98, with titles like Burning Rangers, Panzer Dragoon Saga, the Sonic World part of the Sonic compilation and even Nights into Dreams. But what is really more impressive is what AM2 was working on that very few got to see: a prototype Saturn game that requires the 4MB expansion cart, Project Berkley, aka Shenmue:

http://www.sega-saturn.com/news/112102.htm

NOT AGAIN!!!

If you watch that movie, and you ever got to play the first version of VF4 on PS2 and compare it to the 3rd (or 4th?) version, VF4 Evo, you will see that AM2 is really crunching the PS2 to nearly replicate the Naomi 2 arcade hardware.

Sega was never that good at squeezing performance out of the PS2, so I'm not sure why one should use them to prove the PS2's or any other console's power. Konami and Sony's first-party teams are the best PS2 developers, Nintendo and their first-party teams are the best devs for GC, and Tecmo and some others have proved to be the best for Xbox.
Sega has not released anything exceptionally advanced for years and years. Panzer Dragoon Orta might be very pretty to look at, but it's hardly advanced, being an on-rails shooter.

I can assume that Sony may have had assembly-level SDKs being written so that developers could get a head start in PS2 game development. That is part of the reason why Metal Gear Solid 2 SoL pales in comparison to MGS3 Snake Eater in how the PS2 hardware is used to display much more complex graphics; the same goes for GTAIII vs. GTA:SA.

Now now... I wouldn't say MGS2 pales in comparison to MGS3, and the same for GTA3/GTA:SA... The later-released ones obviously look better, but it's obvious that a game that comes out 2-3 years after another looks better.

Having a head start with assembly-level SDKs and plenty of years of development time (the PS2 is the worldwide sales king) has allowed game developers to produce amazing-looking games that make people almost think XBox games are technologically weak in comparison to crippled 1998 technology like the PS2.

That's one way to put it i guess... :?

1: Unlike the PS2, the PS3 will not be the first DVD console anymore; it will be the first Blu-ray console though, and as Sony has proven, their pockets are deeper than SEGA's.

Didn't really get that...

2: Assembly-level development tools will be worked on; however, I feel they will do the same thing they did with PSX and release a general C++-level SDK, unless Nvidia is providing a custom version of Cg to Sony. Expect a lot of first- and second-wave games to have choppy glitches and even some slight slowdown, followed by progressively improving games that, even by the 3rd and 4th generation (unlike PS2), will not show the true potential of the console, considering that games take an average of 2 years to make; even Halo 2 illustrates this, even though it's the developer's second effort.

Again, stating the obvious... Later-gen games always run better than earlier ones... Also, it's a given this time around, with NVIDIA on board, that there will be plenty of libraries and documentation available to developers straight away, at least for the GPU. It's the CPU part, and how Sony will inform devs on how to work with it, that worries me a bit more... ;)

3: I could be wrong, and Sony and Nvidia could have a more powerful assembly SDK ready for the 2nd gen of games, to give full access to every transistor.

Sony, MS and N always release "new" SDKs, or at least tools, to help devs squeeze performance out of their platforms. The Performance Analyser for PS2 helped a lot.

Now I personally am not a Sony fan, I am more of a SEGA fan
Who'd have guessed?!! :devilish:
...being written specifically around PS2 hardware with a slim chance of a version or port on GameCube and no chance in hell for XBox even though the console's CPU is fully capable of emulating the PS2 hardware.

Excuse me????? Now THAT i'd love to see in action. :LOL:
 
london-boy wrote:
You're delusional. Sorry that's all anyone can say after that.

As the topic poster mentioned, the GS is 1998 technology, and therefore comparable in capabilities to graphics chips released that year; the GS cannot be compared to 1999 PC GPUs, because those chips introduced things like hardware transform & lighting (S3 and Nvidia), environment-mapped bump mapping (Matrox G400), texture compression (S3), etc.

They work differently; I thought this argument was closed 3 years ago. You can only store 8MB of textures on DC, while on PS2, once you've finished with your game code, you have the rest of those 32MB to store textures.

Duh, Sony started shifting to streaming games off the media, as opposed to just loading a level into memory, back in the PSX days; the only reason that became a standard for the console industry is that Sony was the name-brand sales/marketing leader, NOT the technological leader.

So what are you, a believer that the PS2 is so powerful because you claim the video RAM argument was closed 3 years ago? You are wrong, and I am part of the few who will say it.

Like I said, Sony's PS2 is NOT a technologically superior product compared to SEGA's Dreamcast; they are on almost similar power levels, with the PS2 only able to push more polygons.

All the 3D effects on PS2 are handled in software. If AM2 could take a 4MB cart and make a prototype Shenmue (however much you may hate that game) run on the SEGA Saturn (1994 tech), using their own custom assembly-level SDKs and their obvious experience, then it's pretty damn obvious that even the worst developer in the world, with plenty of experience using Sony's custom assembly-level SDKs, would be able to put out amazing 3D effects on either PSX or PS2.

Maybe you never knew that the SEGA Genesis was not able to do the SNES "Mode 7" effect, but a couple of years after the SNES was released there were Genesis games doing the same tricks as the infamous "Mode 7", thanks to developer experience with assembly-level tools.
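For reference, "Mode 7" is just a per-scanline affine (rotate/scale) resampling of a background layer: the SNES did that arithmetic in hardware, and Genesis games faked it in software. A miniature sketch of the idea, with toy sizes and values of my own:

```python
import math

# Miniature "Mode 7": each output scanline samples a background map
# through a rotate/scale (affine) transform, wrapping at the edges.
W, H = 8, 8
background = [[(x + y) % 10 for x in range(W)] for y in range(H)]

def render_scanline(y, angle, scale):
    c, s = math.cos(angle) / scale, math.sin(angle) / scale
    line = []
    for x in range(W):
        # map screen (x, y) back into background space around the center
        u = int(c * (x - W / 2) - s * (y - H / 2)) % W
        v = int(s * (x - W / 2) + c * (y - H / 2)) % H
        line.append(background[v][u])
    return line

frame = [render_scanline(y, angle=0.3, scale=1.5) for y in range(H)]
for row in frame:
    print(row)
```

The per-pixel work is two multiplies, two adds and two wraps, which is exactly the kind of inner loop hand-tuned 68000 assembly could make feasible.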

The GS of today is exactly the same speed as it was in 1999. The die shrink was for cost reasons, heat dissipation and power requirements. It is also the reason why we now have a much smaller PS2 than at launch. No speed gains.

Gee, I wonder why the PS2 had to have heat sinks and a big huge fan sucking all the hot air out of the console. (sarcasm)

First of all, it was 1998; second, we can assume the die shrink provided the heat dissipation and lower voltage requirements Sony desired while keeping the GS fast enough to complement the EE CPU. Again, it's worth mentioning that the GS is not a GPU like the Nvidia NV10 and ATI R100, with their on-core geometry processors; a graphics chip like it required a fast CPU, just like the Matrox G200, G400, Nvidia Riva TNT and even 3DFX Voodoo 2/3, to reach higher performance.

That also explains how A.I. has not evolved much on PS2 because the EE is all tied up.

What does that have to do with the main subject?

It has to do with the "subject" because if you lived back in 1998 and 1999, prior to the 2000 release of the PS2, you would have been bombarded with all the hype Sony was spitting about how Square would be able to display games pushing 80 million polygons; you would have read interviews with EA and Sony dev teams dismissing the DC as a "failure" months prior to the 9-9-99 DC launch, all creating a mindshare atmosphere controlled by Sony marketing evangelists.

Basically, having those "evangelists" and the suckers that believed them allows companies like Sony to release 1998 technology in the year 2000 and not get criticized or punished for it.

Or, Developers wanted the manufacturers to let them get closer to the metal after PS1, and Sony did just that. Obviously they didn't handle it the best way they could, but that's another matter.

I think my paragraph made more sense but...

Polyphony Digital in '96 (or was that '97?) was already getting closer to the "PS1 metal" with Gran Turismo (and later GT2), years before the Dreamcast, and BTW using the same software-based streaming of videogame levels on PSX hardware years before they would use it again on PS2. Nothing new there.

It's obvious Sony realized it was better to have every developer using "closer to the metal" assembly-level tools, so that a dev would devote all their resources to one platform, making a port a financial risk unless the game was using general C++-level SDKs. That ensured all, or the majority, of games being made on a "hit console", even though the hardware being used is the equivalent of 1998 technology.

Meh... sounds to me like someone still has a chip on their shoulder...

And what the hell is wrong with that? Rich-boy Sony came, saw, stole and conquered the competition. Anyone can say anything about Sony, Sega or Nintendo, just like the bunch of idiots who claim AMD is god and Intel/Microsoft is the devil.

Or you could see Sony as Walmart, taking market share away from mom & pop game shops like SEGA & Nintendo and eventually backing them into a financial corner. I never saw Nintendo making TV sets, or SEGA selling DVD players, radios, cassette players, CD players, etc.

Or maybe you just cannot imagine a world without Sony's involvement in the videogame field.

NOT AGAIN!!!

I guess a prototype Shenmue running on a Sega Saturn, a console the competition hyped as being 3D-deficient, must be very offensive to you.

Sega was never that good at squeezing performance out of the PS2, so I'm not sure why one should use them to prove the PS2's or any other console's power. Konami and Sony's first-party teams are the best PS2 developers, Nintendo and their first-party teams are the best devs for GC, and Tecmo and some others have proved to be the best for Xbox.

Can you please go out and find, rent or borrow (or download, if you mod) VF4 and compare it to VF4 Evo, then compare them to the VF4 and VF4 Evo arcade machines? You just might notice how VF4 on PS2 looks like garbage compared to the closer-to-the-arcade VF4 Evo.

I picked SEGA because they are NOT ass-kissing Konami or Sony first parties, mainly because Sega dev teams had to start learning Sony SDKs much later than other game developers, so it makes sense that their first games as a third party would never look as good as a Konami effort. That's a big change from how Sega dev teams used to have SDKs months before any third party would; the same could be said for Sony first- and second-party devs, because it's obvious Sony would want Square to have a FFXXX ready for console launch day if they could get away with it.

Now Now... I wouldn't say MGS2 pales in comparison to MGS3 and the same for GTA3/GTASA... The later released ones obviously look better, but it's obvious that a game that comes out 2-3 years after another one looks better.

Now it seems like you are just disagreeing with, or recycling, the same message I originally posted, just for the sake of finding something to do.

Gee, I wonder why a game that comes out 2-3 years later is always able to push more 3D effects and performance out of a console. Hey, maybe that's what makes consoles so great compared to PCs.

That's one way to put it i guess...

Maybe I could have said that the Sony driver has to learn how to drive a 6-speed manual-transmission touring car, and they figured that drawing the race track by hand in their backyard would let them remember all the curves come race day, even though the touring car was a tricked/pimped-out Yugo racing against SEGA Supras and Nintendo RX-7s, with drivers who had only heard of the track and assumed that with their experience they would win or complete the race.

Didn't really get that...

Do you remember the price of DVD players in the US back in 1999 and in the months prior to the PS2's 2000 launch?

Basically the only new technologies being introduced were the DVD drive and built-in, enhanced (but flawed) backward compatibility, and that was the sole reason a lot of people rushed out to get PS2s: to watch movies and play old PSX games enhanced, since the majority of first-gen PS2 games were horribly below the then-Dreamcast standard, something that should never have been.

Again, stating the obvious... Later-gen games always run better than earlier ones... Also, it's a given this time around, with NVIDIA on board, that there will be plenty of libraries and documentation available to developers straight away, at least for the GPU. It's the CPU part, and how Sony will inform devs on how to work with it, that worries me a bit more...

So what? The topic poster asked for opinions on how the PS3 would compare to the PS2 in its time of introduction. I gave my opinion; deal with it.

Sony, MS and N always release "new" SDKs, or at least tools, to help devs squeeze performance out of their platforms. The Performance Analyser for PS2 helped a lot.

Gee, I seem to remember that Genesis & SNES story about "Mode 7". How about I tell you how both PSX and Saturn dev kits were buggy, but because the Saturn used dual CPUs it was criticized more for being more complex to develop an SDK for?

I am very well informed on how SEGA, Sony, MS and N have always had to release updated SDKs in their time; that is a pattern that was beaten to death on SNK's Neo Geo 2D arcade hardware, if you compare Fatal Fury to Mark of the Wolves when it comes to fluid 2D animation.

Who'd have guessed?!!

And I am to be punished for even remotely liking SEGA??

You missed the whole point of my post, then, if you just want to narrowly prejudge me into a SEGA-fanboy box.

Excuse me????? Now THAT i'd love to see in action.

If you are familiar with emulators, you would know that the Dreamcast was capable of running an emulator that would not only raise the resolution of certain PSX hit games, but breathe new life into them.

There are PSX emulators that work on PCs whose minimum requirement is an Intel Pentium II, or a Pentium with MMX extensions, for visual parity with PSX image/sound quality. Basically the emulator uses that old Intel CPU running at 200 MHz, plus the MMX extensions, to emulate a 1994 console that had a 33 MHz CPU.

PSX emulators took a long time to develop. The console was released in '94 in Japan, and the first actually working emulators I remember were released in either 1999 or 2000, with Bleem being one of the first that tried to go commercial (and they got sued for it). If you can count, it took hobbyists (the people who make emus) over five years to get accurate emulation right, and later enhanced emulation with the help of powerful graphics cards and APIs like DirectX and OpenGL.

Now, it did not help that Sony offered jobs to those hobbyists, to prevent its fear of a hobbyist-made PS2 emulator appearing much sooner, thanks to their experience making previous emulators, from becoming reality.

Taking into account that the PS2's EE runs at 300 MHz and the GS runs at 150 MHz, it makes sense to assume that a properly written PS2 emulator could be made with a PIII as the minimum requirement for accurate emulation, and a DX/OGL graphics card to provide texture-filtering enhancements.
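That estimate is essentially clock-ratio arithmetic. Interpretive emulators typically burn many host cycles per emulated guest cycle, so the same arithmetic with an overhead factor gives very different answers. A sketch, where the 10x/50x factors are rule-of-thumb assumptions of mine rather than measurements:

```python
# Naive clock-parity estimate vs. estimates with an assumed
# host-cycles-per-guest-cycle overhead for interpretive emulation.
ee_mhz = 300  # PS2 Emotion Engine clock

estimates = {overhead: ee_mhz * overhead for overhead in (1, 10, 50)}
for overhead, host_mhz in estimates.items():
    print(f"{overhead:>2}x overhead -> ~{host_mhz} MHz host CPU")
```

Only the 1x row lands in PIII territory; with any realistic overhead factor the required host clock climbs into multi-GHz territory.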

As to when that will happen: hey, it's now 2005 and people are still working on it, but since they do not get paid, it depends on how much time they dedicate to it.

If we could get a company to do it, like the company that made Bleem, we would have had a PS2 game emulator running on XBox 2 years ago.
 
Akumajou said:
Taking into account that the PS2's EE runs at 300 MHz and the GS runs at 150 MHz, it makes sense to assume that a properly written PS2 emulator could be made with a PIII as the minimum requirement for accurate emulation, and a DX/OGL graphics card to provide texture-filtering enhancements.

The fastest P3 produced was the 1.4 GHz Tualatin, but that's hardly a commonly available CPU; the consumer's P3 would be ~1 GHz. So you're saying that a 1 GHz P3 (SSE) plus a DX7 card can emulate the PS2, did I get you right?
 
The XCPU is actually weaker than the EE, at least in some respects. And yet you're saying that it's fast enough to emulate it? Are you serious?? :oops:
 
Today's Windows PCs can play a PS2 game on an emulator at a whopping 0.99 fps :LOL: It'd require a 15 GHz or faster CPU to run it at 60 fps.

Sevensamurai_1.jpg
 
The reason there's no PS2 emulator out that runs commercial games is not lack of trying, or Sony "buying emucoders"; instead it is due to real hardware issues: 128-bit GPRs, two independent floating-point vector coprocessors, 64-bit integer instructions, the complex architecture, and many others I forget.

I've spoken with two or three emu/plugin coders, and the running consensus seems to be that we can expect a reasonably solid PS2 emulator that runs commercial games maybe a decade from now; certainly not on Xbox (or even on Xenon, for that matter), and not on a P3 either.
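The first item on that list can be shown in miniature: the EE's 128-bit general-purpose registers don't fit a 32- or 64-bit host register, so every guest register becomes several machine words and every guest operation becomes several host operations. A sketch of a 128-bit add built from two 64-bit halves plus a carry:

```python
# A 128-bit guest register modeled as two 64-bit halves on the host.
# One guest add costs two masked adds plus a carry propagation - and a
# real emulator pays this (or worse, on 32-bit hosts) per instruction.
MASK64 = (1 << 64) - 1

def add128(a_lo, a_hi, b_lo, b_hi):
    lo = (a_lo + b_lo) & MASK64
    carry = (a_lo + b_lo) >> 64          # 1 if the low half overflowed
    hi = (a_hi + b_hi + carry) & MASK64
    return lo, hi

# low half overflows and carries into the high half
print(add128(MASK64, 0, 1, 0))  # (0, 1)
```

Multiply that per-instruction cost by the VU coprocessors and the 64-bit integer ops akira888 lists, and the "decade from now" consensus looks less pessimistic.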
 
darkblu said:
The fastest P3 produced was the 1.4 GHz Tualatin, but that's hardly a commonly available CPU; the consumer's P3 would be ~1 GHz. So you're saying that a 1 GHz P3 (SSE) plus a DX7 card can emulate the PS2, did I get you right?

I understand what you are saying about the P3; still, what I meant is that a properly written emulator, providing accurate emulation and some texture filtering, would have to take full advantage of DirectX or OpenGL, the CPU, and a large amount of memory (512MB to 1024MB as a minimum).

Now, this is just my theory, not a fact, and I forgot to mention IMHO, but I feel that such an emulator probably has to be written in assembly, or very close to the OS kernel + API + CPU extensions (SSE/2/3, 3DNow!, etc.); basically the emulator has to work just like the PS2 was designed to work, with the EE and GS sharing the workload, as opposed to XBox, where the GPU (if a dev codes for it) handles the graphics load while the CPU does other tasks like A.I., etc.

The XCPU is actually weaker than the EE, at least in some respects. And yet you're saying that it's fast enough to emulate it? Are you serious??

Like I mentioned, I neglected to say "IMHO", and I understand what you mean, as I also know how slowly the DC emulator Chankast, the working Saturn emulator Cassini, and even the upcoming SEGA Model 3 emulator run on a 2 GHz+ Pentium 4. However, I feel it is possible to make the games run emulated if the coder gets close to the graphics-filtering API + CPU extensions + large memory + OS kernel, so IMO it would take a professional coder to do it; it is not impossible, however.

akira888 wrote:
The reason there's no PS2 emulator out that runs commercial games is not for lack of trying or Sony "buying emucoders" but instead it is due the existence of real hardware issues: 128 bit GPRs, two independent floating point vector coprocessors, 64 bit integer instructions, the complex architecture and many others I forget.

I've spoken with two or three emu/plugin coders and the running consensus seems to be that we can expect a reasonably solid PS2 emulator that runs commercial games maybe sometime around a decade from now; certainly not on Xbox (or even on Xenon for that matter) and not on a P3 either

Still, maybe the PS2 emulator could be limited like Bleem for DC was, in that it was best to optimize the emulator for each specific game to be emulated and enhanced (just run at 640x480 with texture filtering to smooth out the edges).

I often wondered how Shenmue II, a game written specifically for the SH4+PowerVR2 DC (and, I assume, written with second-generation, assembly-level custom SDKs) and released in 2001, was ported to Xbox so quickly with no real, major graphical enhancements (not even on the textures, since many still claim the DC's textures were more accurate), yet no major flaws.

Also, I feel that although it is not the standard now, since most emus are written in C++ for one CPU (Intel/AMD), a lot of these console emulators could greatly benefit if they required a dual-CPU setup with a large amount of memory (1024MB minimum) to compensate for the two independent floating-point vector coprocessors and the 64-bit integer instructions. Maybe with the upcoming dual-core CPUs it might be possible.

Also, the Chankast story about it running on Xbox does give hope, since those guys devoted so much hard work and time:

http://www.ngemu.com/forums/showthread.php?t=58320
 
I often wondered how Shenmue II, a game written specifically for the SH4+PowerVR2 DC (and, I assume, written with second-generation, assembly-level custom SDKs) and released in 2001, was ported to Xbox so quickly with no real, major graphical enhancements (not even on the textures, since many still claim the DC's textures were more accurate), yet no major flaws.
The Dreamcast (followed by the Xbox) was really the first console to have an industry-standard (relative to PCs) graphics subsystem. The Xbox has a faster CPU and a faster GPU with basically a superset of the PVR2DC's features. It's no surprise that Shenmue ran so well.

Still, maybe the PS2 emulator could be limited like Bleem! for DC was, in that it was best to optimize the emulator for each specific game to be emulated and enhanced (just run at 640x480 with texture filtering to smooth out the edges).
The Dreamcast's sound subsystem alone is more powerful than the PSX's, and has, what, 8 times the memory. The Xbox and PS2 are comparable in speed and overall memory, with an obvious edge going to the Xbox. You can't expect any system from this generation to emulate another; it's just not feasible.
 
I have read a lot of talk about assembly-level programming. Are console developers still using assembly for console games?
 
ondaedg said:
I have read a lot of talk about assembly-level programming. Are console developers still using assembly for console games?

Assembly-level SDKs (Software Development Kits), in custom and continually revised versions, have been made so that developers could stay competitive against the stiffer competition among today's consoles.

Back in the Genesis vs. SNES days, Sega used it to dispel the hype around the SNES's "Mode 7" scaling and rotation of 2D sprites and playfields.

It was used again by the Saturn's in-house dev teams, like AM2 and the other amusement divisions, Sonic Team, and Team Andromeda, to get closer to the Saturn's metal and produce games that dispelled the myth of the Saturn being 3D-deficient: Burning Rangers, NiGHTS into Dreams, Virtua Fighter, Virtual On, Panzer Dragoon, etc. Just look at this movie of a prototype of Project Berkley (aka Shenmue) running on the Saturn, requiring the 4MB expansion for video RAM:

http://www.sega-saturn.com/news.htm

It was also used by Sony's in-house dev teams, like Polyphony Digital, to make Gran Turismo and GT2 possible. I think Konami used it to make Metal Gear Solid, but I am not sure, even though that game was exclusive to the PSX. Namco used it too, as did other dev teams.

SNK used it to push the 1989 Neo Geo hardware to its limits in 2D graphics.

Nintendo used it to make Super Mario 64, and Rare used it for their games on the N64.

It was used somewhat less on the Dreamcast (unfortunately), as Sega was trying to get away from the Saturn's supposed "nightmare to program" reputation.

Sony absolutely had to use it at the start of the PS2's life cycle, since they had no initial plans for online games the way Sega did, and since it was a way to get a major head start in pushing the PS2 to its limits in anticipation of the more powerful Xbox and the possible NGC threat.

Nintendo used it on the GC to make dazzling 3D effects possible in many of their first-party games.

As for the Microsoft Xbox, I am not sure. I believe it was confirmed that Tecmo tried, or did use, their own in-house assembly tools as opposed to MS's SDK based on DirectX 8.0. However, I believe there were two initial versions of the DX SDK, one that was more C++-like and another that was closer to the metal but still DirectX, which is a possible draw for Tecmo and any other dev teams who want to prove their talent at pushing hardware.

However, the main idea with the Xbox's SDK was to make it easier for devs to make a PC version, which I can see is good for some devs but bad for others who are not interested in making PC ports.

As for upcoming consoles like Xbox Next (which some call Xenon), the Nintendo Revolution, and the Sony PS3, I can only assume that Nintendo and Sony will offer assembly-level SDKs immediately, instead of C++-based ones, to give themselves a head start. I may be wrong, but if I'm right, they will be able to push those consoles to their limits much faster, since they are not going to give their first-party games to competing consoles.

That's also one of the reasons why current PS2 games like MGS3: Snake Eater look so impressive even compared to the Xbox, especially when devs (some of them, at least) have gathered enough experience to know the assembly-level SDKs like the backs of their hands; they could probably rap or sing assembly if dared.

Also, keep in mind that this is one of the reasons a lot of 3rd-party games stay exclusive to a single console: re-programming a game, especially an assembly game, takes a lot of financial investment on the part of the dev company, unless they can afford two or three dev teams, each working on a different console version. This also helps Sony and Nintendo: if a 3rd party uses an assembly-level SDK, it almost assures that the game will be exclusive, and if it's a good or great game, it helps sell tons more consoles.
 
Akumajou said:
Now, this is just my theory, not a fact (I forgot to mention "IMHO"), but I feel that such an emulator would probably have to be written in assembly, or at least very close to the OS kernel, the API, and the CPU and its extensions (SSE/SSE2/SSE3, 3DNow!, etc.). Basically, the emulator has to work just like the PS2 was designed to work, with the EE and GS sharing the workload, as opposed to the Xbox, where the GPU (if a dev codes for it) handles the graphics load while the CPU does other tasks like AI, etc.
In other words, you want a PS2 emulator that goes close to the metal, and you want the emulator to act as a "simple" kind of translator, also known as a compatibility layer.
But there's a problem here: the closer you go to the PC architecture's metal, the more visible the underlying, fundamental differences with the PS2 architecture become.
And the PS2 architecture, as others have already told you, is really different from what you find in actual PCs (even future dual cores ;) ).

The closest thing in emulation that resembles your idea for obtaining accurate emulation is known as dynamic recompilation, or Dynarec.
But here, again, you have to understand that a good and efficient Dynarec requires a lot of available raw power.
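To illustrate the idea behind Dynarec versus plain interpretation, here is a toy sketch in Python. The instruction set is invented for the example, and a Python closure stands in for emitted host machine code; a real dynarec emits native code, but the caching structure is the same: decode a block once, then reuse the translation.

```python
def interpret(regs, program, pc):
    """Classic interpreter: decode + dispatch on every single step."""
    while pc < len(program):
        op, a, b = program[pc]
        if op == "add":
            regs[a] += regs[b]
        elif op == "mov":
            regs[a] = b
        pc += 1
    return regs

class Dynarec:
    def __init__(self):
        self.cache = {}  # block start address -> translated "host code"

    def translate(self, program, pc):
        # Decode the whole block ONCE and build a list of host operations.
        ops = []
        while pc < len(program):
            op, a, b = program[pc]
            if op == "add":
                ops.append(lambda r, a=a, b=b: r.__setitem__(a, r[a] + r[b]))
            elif op == "mov":
                ops.append(lambda r, a=a, b=b: r.__setitem__(a, b))
            pc += 1

        def block(regs):  # stands in for emitted native machine code
            for host_op in ops:
                host_op(regs)
            return regs
        return block

    def run(self, regs, program, pc=0):
        if pc not in self.cache:            # translate on first visit...
            self.cache[pc] = self.translate(program, pc)
        return self.cache[pc](regs)         # ...then reuse the translation

prog = [("mov", 0, 5), ("mov", 1, 7), ("add", 0, 1)]
print(interpret([0, 0], prog, 0))   # [12, 7]
jit = Dynarec()
print(jit.run([0, 0], prog))        # [12, 7], translated once, then cached
```

The "raw power" point above shows up here too: even with the decode cost paid once per block, the translated code still has to faithfully reproduce the guest machine's semantics (timings, caches, coprocessors), which is where most of the host's headroom goes.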

Akumajou said:
Also, I feel that although it is not the standard now, since most emus are written in C++ for one CPU (Intel/AMD), a lot of these console emulators could greatly benefit if they required a dual-CPU setup with a large amount of memory (1024MB minimum) to compensate for the two independent floating-point vector coprocessors and the 64-bit integer instructions. Maybe with the upcoming dual-core CPUs it might be possible.
You're really suggesting that you can "compensate" for the absence of VU0 and VU1 by having a lot of RAM available?
When it comes to PS2 emulation, RAM isn't the issue.
And lots of it won't help, at all, with accurately emulating the PS2's really powerful vector units. ;)
 
ondaedg said:
I have read a lot of talk about assembly-level programming. Are console developers still using assembly for console games?
In today's games? Some developers might throw a few lines of assembly here and there, on hardware that lacks excellent compilers *cough*PS2*cough*.

But, generally, games are written using high-level languages, and that for several reasons.
The main one is readability: the code in modern games is so complex, and dev periods are so short, that you don't have time to lose hours looking for something.

There's also portability: a lot of games and engines (graphics, sound, physics) are ported from one platform or game to another, and low-level languages are not the solution in that case.

And then you have "simplicity" (term used loosely): as I said, games are so complex, and the time available so short, that any technique that saves you some precious hours is welcome.

The last one I'll cite is the fact that today's compilers do an excellent job, making assembly obsolete for most uses. For instance, there's no need for a low-level language, AFAIK, on the Xbox, since the machine carries an x86 processor and compilers give excellent results for it.
 
Vysez said:
The last one I'll cite is the fact that today's compilers do an excellent job, making assembly obsolete for most uses. For instance, there's no need for a low-level language, AFAIK, on the Xbox, since the machine carries an x86 processor and compilers give excellent results for it.
I would guess that many Xbox games have x86 ASM for animation work, and I would guess that almost all of the shaders written for the Xbox are shader ASM.

There will always be a use for ASM. Compilers will, for the foreseeable future, never support all the features everyone wants to use. Take the cache-locking patent for the X2, for instance: either a compiler will have to be specifically modified to handle that feature, or developers will handle it through ASM (I hope I'm not presuming too much here). I guess it will be the latter.
 
Akumajou:

You don't quite grasp what was special about the PS2. It is the fact that the VU units were really fast for their time, and the same goes for the GS and the memory speeds. The GS has 16 pipelines which could really work at full efficiency, albeit for texture mapping you had to combine pipelines: 8 pipelines for single-textured surfaces, 4 pipelines for dual-textured, and so on. The GS has a whopping 48GB/s of internal bandwidth, enough to really pull off stuff not possible on a TNT2.

Based on nvidia's pages (http://www.nvidia.com/page/tnt2.html), the TNT2 Ultra had 2.9GB/s of memory bandwidth and had to use AGP 2x to transfer data from main memory to graphics memory. Compare this to the PS2 :) And yes, the TNT2 had to do T&L on the processor and transfer the vertices via AGP. The PS2 has 3.2GB/s RDRAM as main memory, dedicated vector units with a peak (not achievable) of 6.4 GFLOPS, and so on. Try to emulate that with a TNT2, or even run the same algorithms at the same speed.
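As a back-of-the-envelope check of the figures quoted above, the bandwidth numbers can be derived from bus width and clock. The bus widths and clocks below are commonly cited specs (GS: 2560-bit eDRAM bus at ~147 MHz; PS2 main RAM: dual-channel 16-bit RDRAM at 400 MHz DDR; TNT2 Ultra: 128-bit SDRAM at ~183 MHz), not taken from the post itself, so treat them as assumptions.

```python
def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock=1):
    """Bytes moved per second, expressed in GB/s (1 GB = 10**9 bytes)."""
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

# GS embedded DRAM: 2560-bit bus at ~147.456 MHz -> ~47 GB/s (the "48GB/s")
gs = bandwidth_gb_s(2560, 147.456)

# PS2 main RAM: two 16-bit RDRAM channels at 400 MHz, double data rate
rdram = bandwidth_gb_s(2 * 16, 400, transfers_per_clock=2)

# TNT2 Ultra: 128-bit SDRAM at ~183 MHz -> ~2.9 GB/s
tnt2 = bandwidth_gb_s(128, 183)

print(f"GS eDRAM : {gs:5.1f} GB/s")
print(f"PS2 RDRAM: {rdram:5.1f} GB/s")
print(f"TNT2     : {tnt2:5.1f} GB/s")
print(f"GS / TNT2: ~{gs / tnt2:.0f}x")
```

The roughly 16x gap between the GS's internal bandwidth and the TNT2's memory bandwidth is the core of the argument here: even ignoring the VUs, a TNT2-class part simply cannot move pixels at GS rates.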

Compare the PS2 to the PC solutions available at the launch of the PS2, and a year after, and you will see that even the peak specs of the TNT2 are not anywhere near it, much less the achievable results once you factor in memory speeds. I consider the memory speed one of the biggest problems of the Xbox: 6.4GB/s combined for CPU and graphics is not that much. Luckily it does have nice caches to help, but still, MS had a whole lot of time after the PS2 launch and still shipped such poor memory buses. Maybe some developer can give insight: is the memory bus in the Xbox really as bad as I think it is?


This is not to say that the PS2 is a god or anything, just that it was really, really fast, but simple. As for the complexity of coding for it... well... it's not that bad anymore, as better tools are available. Hopefully Sony has learned its lesson from the past.

If you start thinking about emulating the PS2, in the ideal case you would need the 48GB/s of internal bandwidth with 4MB of memory attached to it, plus 3.2GB/s for main memory. Also factor in the various caches and scratchpad memories in the system, which you would probably have to emulate cycle-exact. And games contain to-the-metal code that trusts strict timings, which you would need to emulate in some compatible way, adding an extra burden.

Considering the above, and all the other things that hinder efficiency, I would guesstimate needing at least twice the specs and performance of the PS2 to emulate it. Really, where do you have such a PC now? By 2007-08 or so there might be a fully working emulator for a not-too-expensive PC or similar machine... or maybe even earlier, if Sony puts an SoC implementation of the PS2 into the PS3...
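The guesstimate above can be sketched with a flat overhead model for the EE core alone (the VUs, GS, and cycle-exact timing only push the figure higher). The overhead factors here are my own illustrative assumptions, not measurements: interpreters commonly cost on the order of ten or more host cycles per guest cycle, while a good dynarec costs considerably less.

```python
EE_CLOCK_MHZ = 294.9        # Emotion Engine core clock

# Assumed cost of emulating one guest cycle, in host cycles (illustrative).
INTERPRETER_OVERHEAD = 10   # decode + dispatch every single instruction
DYNAREC_OVERHEAD = 3        # translate once, then run near-natively

def host_clock_needed(guest_mhz, overhead):
    """Host clock in GHz needed to keep up, under a flat overhead model."""
    return guest_mhz * overhead / 1000

print(f"Interpreter: ~{host_clock_needed(EE_CLOCK_MHZ, INTERPRETER_OVERHEAD):.1f} GHz host CPU")
print(f"Dynarec    : ~{host_clock_needed(EE_CLOCK_MHZ, DYNAREC_OVERHEAD):.1f} GHz host CPU")
```

Even under these generous assumptions, the EE core alone eats a circa-2005 high-end CPU; add the two VUs, the GS, and the timing constraints and the "twice the specs" figure starts to look conservative.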
 