*Sigh* The Inquirer Strikes Again: RSX "Slightly Less Powerful" than 7800

Acert93 said:
Not true.

1. Doom 3 is highly dependent on CPU performance--like I said, the shadowing techniques are offloaded to the CPU.

2. Doom 3 CPU benchmarks clearly show Doom 3 is CPU limited at low resolutions. An FX-53 can churn out 103.4 fps while an XP 2000+ can only churn out 46 fps.

That's the benchmark. The actual gameplay is not that CPU dependent, as varying the CPU while keeping the GPU the same does not change performance much. Check out the HardOCP Doom 3 benchmark and compare the results for the same GPU with different CPUs; you will see my point. Besides, if you're using a GF4 Ti 4200 to play Doom 3, the game will be GPU-limited, so the difference in CPU will not matter much.

http://www.hardocp.com/article.html?art=NjQ0LDE=

http://www.hardocp.com/image.html?image=MTA5MDc4NzE0M1RPNjJBTU9FV1hfNl8xX2wuZ2lm

http://www.hardocp.com/image.html?image=MTA5MDc4NzE0M1RPNjJBTU9FV1hfN181X2wuZ2lm



Acert93 said:
BF2 uses PS1.4. The GF4 only supports PS1.3. So it is an issue of featureset.

Exactly. Another example is the mirror effect in Max Payne 2, which requires PS 1.4. That mirror effect is also absent from the Xbox version, which suggests the Xbox's pixel shader featureset is about equal to that of the GF4, clearly below PS 1.4.

Acert93 said:
As for Far Cry Instincts, you would really argue that?

Yup. I have a GF4 Ti, and the Far Cry demo does not look any worse (if anything, better) than Far Cry Instincts, judging from the trailer (the final product may not even look as good, but who knows). Granted, it looks poor compared to the highest quality settings on high-end PCs, but that's beside the point.

Acert93 said:
1. Mainstream priced GPUs in the Xbox1 era (which is relevant in relation to market penetration and dev support) were getting 2x the memory of the *entire* Xbox1 system.

Yeah, and the highest-end GPU of that era had like 256 MB of onboard memory... wait... there was not a single GPU with 256 MB of memory at the time. That's right, the highest end ALSO had 128 MB of memory.

Acert93 said:
2. We are seeing the reverse now: Xbox 360/PS3 have 2x the amount of RAM of *high end* (e.g. 7800GTX) GPUs and 4x the ram of a *mid range* GPU (e.g. 6600GT with 128MB).

Not clearly true. I can bet that we will see a 512 MB card by the time the X360 is released. By the time the PS3 is released, we might even see a card with a unified shader architecture, or one with 1 GB of RAM. They might not be mainstream graphics cards, but you said that the X360 will have 2x the memory of the high-end GPU at the time it's released, and that is clearly NOT the case. Hell, even Nvidia itself states that there will be a PC GPU more advanced than RSX at the PS3 launch date. And if ATI has an even more advanced lineup than Nvidia... we'll see PC GPUs once again running rings around RSX and Xenos.

Anyway, I apologize for being off topic. This will be my last reply regarding the console vs. PC discussion in this thread.
 
Yeah right! I'm sure Nvidia is gonna come out and say that the RSX is less powerful than the 7800GTX. It would look kinda silly to admit that a brand new PS3 GPU will be less powerful than a year-old 7800GTX. It wouldn't make much difference to me if it was, but in marketing terms it's huge.
Why do so many of you guys take this marketing spiel as gospel?
By the way, I will be buying a PS3 no matter what. I think for the money it will be a great system.
 
Acert93 said:
Not true.

1. Doom 3 is highly dependent on CPU performance--like I said, the shadowing techniques are offloaded to the CPU.

2. Doom 3 CPU benchmarks clearly show Doom 3 is CPU limited at low resolutions. An FX-53 can churn out 103.4 fps while an XP 2000+ can only churn out 46 fps.

Thanks, that proves my point quite nicely. An XP 2000+ is capable of pushing 46 fps in Doom 3 when not GPU limited. A Ti4200 limits the game to 38 fps at the same resolution. Thus whether you are using a 2000+ or an FX-57, your framerate isn't going to go much higher than 38 fps. Unless you're suggesting that adding a faster CPU to an already GPU-limited game will greatly increase its performance? But we all know that's not the case.
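In other words, the frame rate you actually see is roughly the lower of the CPU-limited and GPU-limited rates. A toy sketch in Python, using only the figures quoted in this thread and assuming no CPU/GPU overlap:

```python
# Toy model: effective fps is capped by whichever processor is slower.
# The fps figures are the ones quoted in this thread, not new benchmarks.

def effective_fps(cpu_limited_fps: float, gpu_limited_fps: float) -> float:
    """Frame rate when the slower of CPU and GPU sets the pace."""
    return min(cpu_limited_fps, gpu_limited_fps)

TI4200_LIMIT_FPS = 38.0  # the Ti4200's GPU-limited ceiling at this resolution
for cpu_name, cpu_fps in [("XP 2000+", 46.0), ("FX-53", 103.4)]:
    print(f"{cpu_name}: {effective_fps(cpu_fps, TI4200_LIMIT_FPS):.1f} fps")
# Both CPUs land on 38.0 fps: past the GPU limit, a faster CPU buys nothing.
```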

Acert93 said:
And in our context we are talking about bringing that all the way down to a PIII 733MHz w/ 64MB of total memory. So in that context the Xbox Doom 3 game is achieving *significantly* more than the same PC hardware.

Again, we are trying to compare the GPUs, not the entire system. We are all well aware that a PC needs more CPU power and memory to be comparable because it's running the game on top of an OS. Why would you think it's fair to compare the same hardware when one is clearly having to do much more work?

Acert93 said:
BF2 uses PS1.4. The GF4 only supports PS1.3. So it is an issue of featureset.

Yes, a feature that's beyond the Xbox as well, it being only PS1.1. In fact, I'm pretty sure there is a hack to get BF2 working on the GF4 Ti anyway, so why this wasn't implemented in the actual game I don't know.

Acert93 said:
But that is the point--on a closed box this can be worked around, and indeed, *is* quite frequently. Look no further than the "normal mapping" hack on the PS2. The features and abilities of a closed-box platform are extended far beyond the PC featureset. Why? It is a closed box, exploited and exposed.

Given that the hack exists on the PC, it can clearly be worked around there as well.

Acert93 said:
And this does not even touch on the fact that early-gen console games frequently use all the features of the platform while their PC counterparts are still supporting legacy APIs and hardware. That is just another way console hardware gets exploited where PCs don't have an advantage.

If that were true then console games would totally blow away the best PC games when they launch and would get little better throughout the console's life, which they don't. Consoles are almost as badly utilised as high-end PC hardware in early games.

Acert93 said:
As for Far Cry Instincts, you would really argue that?

Of course I would. What on earth would make you think that FCI is going to look better than the PC version of Far Cry? Admittedly a GF4 Ti can only run the game at low res and medium details, but that's going to at least match Instincts.

Acert93 said:
Wrong again.

The GeForce 3, which launched in March 2001 and had a Fall 2001 refresh (Fall 2001 is when the Xbox1 shipped), had 128MB variants. You are talking a street price of below $160 for 128MB GF3 cards.

So not only was 64MB not the most, 128MB cards were affordable.

Actually, not wrong. The Ti series didn't launch with 128MB; it was added later to the low-end card (the Ti 200) but not the high-end version (the Ti 500). It was about Feb 2002 before 128MB cards appeared on the PC, over a quarter of a year after the Xbox launched:

http://graphics.tomshardware.com/graphic/20020205/index.html

Incidentally you can get 512MB cards in the mainstream today.

Acert93 said:
1. Mainstream priced GPUs in the Xbox1 era (which is relevant in relation to market penetration and dev support) were getting 2x the memory of the *entire* Xbox1 system.

2. We are seeing the reverse now: Xbox 360/PS3 have 2x the amount of RAM of *high end* (e.g. 7800GTX) GPUs and 4x the RAM of a *mid range* GPU (e.g. 6600GT with 128MB).

1. At launch, both mainstream and high-end GPUs had the same amount of RAM as the entire Xbox.

2. We are not seeing the reverse; the X360 and certainly the PS3 have not launched yet. The R520 and the GTX's successor will likely sport 512MB, and thus, as with the Xbox's launch, both mainstream and high-end GPUs will have the same amount of RAM available as the entire X360. 128MB cards will be a thing of the past by the time the X360 launches.

Acert93 said:
3. We won't be seeing 1GB cards anytime soon. We already got a taste of 512MB cards--which were insanely priced (seeing a 6800 Ultra with 512MB for $1,000) and they offer almost no benefit at this time. This is partly related to the games, but also significantly related to A) GPUs are doing more shader work, so the need for masses of textures has been alleviated somewhat, and B) memory bandwidth is frequently the limiting factor; more bandwidth, not more memory, yields the better performance boost.

On this I have already agreed. It was 3 months before the PC doubled the Xbox's memory. I doubt we will have 1GB cards by Feb next year. However, give it another 6 months after that and the first ones may start appearing in the midrange. That would be about equal to the original Xbox timeline if we are talking about the PS3 launch.

Acert93 said:
Overall your view of the market is incorrect IMO. I am not sure how this is equal:

Xbox 1 w/ 64MB memory & 2001:
• $160 GPU with 128MB memory

Xbox 360 w/ 512MB memory & 2005:
• $160 GPUs with 128MB memory
• $300 GPUs with 256MB memory (the $300 class is where you see the memory being beneficial)
• $500-$1000 GPUs w/ 512MB memory (and totally worthless due to PC game design and bandwidth limitations)

It's not surprising you think my view of the market is incorrect, given that you're working from faulty information:

1) In 2001, there were no 128MB GPUs.
2) You can get 256MB GPUs for far less than $300, and your caveat is worthless since it also applied to 128MB GPUs in 2002.
3) 512MB cards will be around $400-$600 for anyone who's not stupid enough to go out and find the most ridiculously overpriced GPU they can, and that amount of memory is far more beneficial to modern games like Doom 3, FEAR and BF2 than 128MB was to anything in Feb 2002.

Acert93 said:
Further, the Xbox1 had a mere 6.4GB/s of bandwidth for the total system. The Ti4200 had 7.1GB/s alone.

Fast forward: The 6600GT has 16GB/s of bandwidth. The 7800 GTX has 32-35GB/s (depending on model). The PS3 has 48GB/s, and the Xbox 360 has 22GB/s + 256GB/s for back-buffer processing (with a 32GB/s link between the stencil/Z/alpha/AA logic and the core shader logic).

Why are you comparing the 6600GT, a card that is 2 years older than the PS3, to a GF4 Ti, which came out after the Xbox? Aside from that, the 6600GT isn't even the Ti4200's equivalent; that would be the 6800.

So let's look at that again. Two months before the X360 launches we have the 7800GT with 32GB/s of memory bandwidth, compared to the X360 with 22.4GB/s (I'm not counting the eDRAM since it's separate from the main system memory, and both the PS2 and GC also had it, which damages your argument anyway).

And you want to compare to the PS3, which won't be out for another 6 months minimum? Well, try 22.4GB/s for the GPU again, plus another 25.6GB/s for the CPU. Within 3 months of the PS3 launching (the same as the Xbox - GF4 Ti timeframe) we could easily have a G80 launching. Would you care to compare the potential memory bandwidth of that to the PS3?
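As an aside, those peak figures all come from the same arithmetic: bus width times effective transfer rate. A minimal sketch in Python, using the clocks commonly cited in 2005 (final shipping specs may differ):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfers per second).
# The clocks below are the 2005 announced/rumoured figures, not confirmed specs.

def peak_bandwidth_gb_s(bus_bits: int, transfers_per_sec: float) -> float:
    """Theoretical peak bandwidth in GB/s for a single memory interface."""
    return (bus_bits / 8) * transfers_per_sec / 1e9

print(peak_bandwidth_gb_s(128, 1.4e9))  # X360 / PS3 GDDR3, 700MHz DDR: 22.4 GB/s
print(peak_bandwidth_gb_s(64, 3.2e9))   # PS3 XDR for the CPU, 3.2GHz eff.: 25.6 GB/s
print(peak_bandwidth_gb_s(256, 1.2e9))  # 7800 GTX GDDR3, 600MHz DDR: 38.4 GB/s
```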

Like I have said previously, I'm not expecting the PC to be at the exact same level it was last time, but it's much closer than you're making out, and given that the consoles are expected to render at PC resolutions this time, the actual difference is weighted greatly in the PC's favour compared to last time.

Acert93 said:
Whereas in 2001 the Xbox1 had less total bandwidth than a GPU alone, in 2005 we are seeing consoles with MORE bandwidth/efficiency than the top end GPUs.

No we aren't; you're wrong. If you want to talk eDRAM, then let's talk PS2 in 2000 and its 48GB/s. If you want to ignore eDRAM, let's talk 38.4GB/s on the GTX 5 months before the X360 launches with 22.4GB/s.

Acert93 said:
I am not seeing anything equal about More Memory + More Bandwidth.

That's because you're comparing completely different timeframes: 6600GT (2004) vs PS3 (2006) and GF4 Ti (2002) vs Xbox (2001). Hmmm.

Acert93 said:
Checking the Valve stats, 512MB seems to be more common than not for system memory.

Nevertheless, 1GB is fast becoming the standard, and we are still 2 months away from the X360 launch and 6+ months from the PS3.

Acert93 said:
1. The Xbox 360/PS3 have 512MB of memory completely accessible by the GPU. PCs are looking at swapping data from the system memory to the GPU. A 128MB or 256MB GPU is not enough to hold 350-450MB of graphics data. A console with 512MB of memory is not going to need to do all this swapping.

And nor is a PC with a 512MB GPU, which will be available when the X360 launches and common (in terms of new parts sold) when the PS3 launches. There were plenty of 32MB GPUs around when the Xbox was launched.
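To put a number on the swapping argument either way, here is a back-of-the-envelope sketch; the 400MB figure is just the midpoint of the 350-450MB range quoted above, purely for illustration:

```python
# How much of a graphics working set overflows VRAM and must be streamed
# over the bus. 400 MB is the midpoint of the 350-450 MB range quoted above.

GRAPHICS_WORKING_SET_MB = 400

for vram_mb in (128, 256, 512):
    overflow_mb = max(0, GRAPHICS_WORKING_SET_MB - vram_mb)
    print(f"{vram_mb} MB of VRAM: {overflow_mb} MB swapped over the bus")
# A 512 MB pool (console or card) holds the whole set; smaller cards must swap.
```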

Acert93 said:
2. Different designs. Comparing a PC game--which is very inefficient with memory--to a console is not fair. If you don't know why, please start a new thread on this and some developers can give you some lengthy replies ;)

It would probably be better if you simply didn't try changing the direction of the argument to something I have never claimed. This was originally about comparing GPUs; I have never stated that PCs have memory efficiency equal to consoles, and in fact I have explicitly stated the exact opposite.

Acert93 said:
3. Different philosophies. A PC game is going to be designed with the PC in mind. On the consoles there is a lot of dynamic streaming. Comparing the bottlenecks of a PC game is not really a 1-to-1 relevant comparison to the console space.

Ahh, the old "PC is a bottlenecked word processor" argument again. Funny how it always manages to keep up with and then quickly exceed console performance despite all those bottlenecks, isn't it? In fact, the only bottleneck you're actually referring to is the CPU --> GPU bandwidth, but history has proven that whenever this increases, performance stays the same, even running a modern high-end game over an AGP 4x interface, which is 1/4 the speed of PCI-E. The other big streaming bottleneck, loading data from the disc, is less apparent on the PC because of its reliance on fast HDDs.

Acert93 said:
Actually this line of questioning was initially based on quoting MY statements (which L-B responded to back and forth with you). The point I made, and L-B restated, was that RSX is going to be utilized better and produce better looking games than G70.

A point on which I agree. Where I didn't agree is that it would run rings around it, which suggests it would perform as if it were at least a full generation ahead. Comparing the GF4 Ti and even the GF3 to the Xbox, that's never been the case.

Acert93 said:
4. RSX and Xenos are top of the line GPUs with more memory than their contemporary GPUs and more memory bandwidth. This was not the case with NV2a, yet it can be argued persuasively with facts that the Xbox1 put its hardware to better use than the competing hardware from 2001.

Seriously, it's ridiculous to keep comparing the RSX to the GTX. It's being launched up to a year later! The RSX is to the GTX as the Xbox was to the GF3, and that's being generous! The next-gen architecture, G80, will probably launch around the same time as the PS3 (within 3 months), so you should be comparing that, not a year-old card.

And as for Xenos, like I have already said, the R520 will almost certainly have 512MB of memory running at double the speed of that in the Xenos. You're only supporting your argument with its eDRAM, but if you want to do that then you have to consider that the PS2 had eDRAM which, for its time, was larger and faster than that in the X360.

Acert93 said:
Funny thing is, the Ti 4200 shipped in April of 2002. A GF3 Ti 200 with 64MB (6.4GB/s!) memory is a MUCH more even comparison, as that shipped in Fall of 2001.

Yes, so compare it to a Radeon 9800 Pro, which has memory bandwidth similar to the Xenos (excluding eDRAM).

Acert93 said:
That kind of puts things into perspective. Comparing a GPU that was commercially available 6 months AFTER the Xbox1 launched is odd in many ways. And even then the Xbox1 holds its own with less memory, slower memory, a slower CPU, and less GPU clock speed.

Exactly, it holds its own (inferior, but not by much). It doesn't run rings around it. RSX is to GTX as NV2a was to GF3. The GF3 pretty much holds its own against the Xbox at 480p.

Acert93 said:
Comparing the results from a P4 3GHz w/ 1GB of memory, or a 2000+ AMD with 512MB of memory, with a GPU released later with better specs than what was on the market at the time sounds pretty unbalanced.

Yet you keep trying to compare the RSX to the GTX? And as I have demonstrated above, the rest of your system doesn't matter if you are already GPU limited.

Acert93 said:
Really, the simplest way to look at it is: the Xbox 1 got a lot more out of a PIII 733MHz & NV2a with 64MB of memory compared to a PIII 733MHz & Ti4200 with 64MB video memory and 128MB of system memory (to be generous).

Of course it did, because it didn't have to run the game on an OS. But comparing the systems like that totally skews the performance of the GPU, which is what this discussion was supposed to be about.

Acert93 said:
And as pointed out, the situation with memory size, speed, and CPU performance has changed this gen. Consoles are not taking a back seat this time around.

Memory size and speed are a few months better this time; it's not much, and at launch they should be about the same as last time (the PC will just take over more slowly). CPU performance is still very much up in the air, as has been discussed extensively on this site already, and GPU performance is IMO behind. Xenos may be equivalent or even slightly better than the NV2a was for its time (compared to the R520 / G70 Ultra), but RSX will be very much behind.

And the consoles will need much more comparative power this gen because of the high-res requirements. I'm not seeing it.
 
pjbliverpool said:
Exactly, it holds its own (inferior, but not by much). It doesn't run rings around it. RSX is to GTX as NV2a was to GF3. The GF3 pretty much holds its own against the Xbox at 480p.
NV2a was significantly faster than the GF3 in some areas, but that doesn't mean people would acknowledge it, especially since all the comparisons are made with ports, where the platform being ported to is almost always at a disadvantage; the best examples are your typical Halo and Unreal ports.

It remains to be seen what RSX is vs the 7800. I am not all that hopeful at the moment (and I am especially confused by all the secrecy regarding RSX; IMO it's a very counterproductive mentality), but who knows.
 
Perhaps they're still uncertain of RSX's final specs? After all, is it not supposed to tape out until the end of this year? Okay, found a link that suggests tape out this month, first silicon in December...
http://www.bit-tech.net/news/2005/05/25/rsx_still_in_development/
Burkett stated that RSX is still in development and that no actual silicon is available yet. In other words, the silicon is not even taped out thus far. If we look at Sony's schedule, we expect that RSX is being finalised right now, and should be taped out before September, and the first silicon will be available nearer Christmas, in time for enough units to be made in the run-up to the expected Spring 2006 launch.
 
Let's hope "still in development" means "evaluating the final spin for production."

If NVidia had an ATI-R520 style fcuk-up...

Jawed
 
Jawed said:
Let's hope "still in development" means "evaluating the final spin for production."

If NVidia had an ATI-R520 style fcuk-up...

Jawed

Why would you believe that a company which very successfully launched its latest line of graphics cards would 'fcuk-up' like its competitor, who has still been unable to release their latest line of graphics cards? Especially when you believe said company's console chip is very closely related to that same line of graphics cards, which, and I'll repeat, had a very successful launch?
 
Jawed said:
Let's hope "still in development" means "evaluating the final spin for production."
That was late May. This is early September. Also, considering the time it takes from tape-out to mass production nowadays, an initial tape-out in "September" is not an option. Unless they are (significantly) behind schedule, the RSX is already taped out.

Uttar
 
I don't "believe" anything. There are two key risks here:

  • the fab is Sony, a fab NVidia has never worked with before
  • the combination of a very very large die on a process (90nm) that NVidia has little experience in
I don't believe anything specific. But the risks are there.

ATI fcuked-up on a process that they'd already got "splendid" results with, on Xenos. Xenos is a more complex design than R520, so the failing is yet more surprising.

It would be sensible to expect NVidia to find it non-trivial to implement RSX on 90nm at Sony. Anything more, well we just have to see.

Jawed
 
Apologies to everyone for the off topic, but some time ago I read in a post by Dave Baumann that the NV2A had some features not present in the NV25, such as 2 Z/stencil buffers and 2x more ALUs per shader unit, among others.

Could it be that Nvidia, with a contract "theoretically as advantageous" as the NV2A's (US$5 per chip), will add some "NV2A-over-NV20"-style features to the RSX?

Personally, I do not believe Sony, even under cost pressure, will ship a GPU that is a year old [by March/April 2006] and that does not have at least 80-90% of the performance of the top PC 3D card (NV50, early-to-mid 2006?).

(My estimate, or bet, would be an RSX that is "G70-like" with small modifications [FlexIO], such as extra flops in the pixel shaders (more scalar or vec4 units), which would be natural given the roughly 6-8 extra months of development over G70 (going from 27 flops to 32 FP flops per pixel shader unit), or even 16MB of TurboCache, or Sony/Toshiba/Nvidia SRAM "tech", for a gain of 5-15% over G70 at the same clock.)
 
Diesel2 said:
Why would you believe that a company which very successfully launched its latest line of graphics cards would 'fcuk-up' like its competitor...

NV30. It can happen to anyone. We don't actually know how similar the two are. And obviously it's not the same chip or it'd be done now. So anything can happen. Isn't it going to be 90nm too? That's a big change from 110nm on 7800 right there.
 
Here is the Skinny from Gamespot

RUMOR #3: The PlayStation 3's vaunted RSX graphics processor is less powerful than NVIDIA's new 7800GTX graphics card.

Source: See below.

The official story: Sony had not returned requests for comment as of press time.

What we heard: A classic game of telephone was played out on the Web this week. The end point was the widely--and deservedly--read Team Xbox, which ran a story headlined "PlayStation 3 GPU Less Powerful than GeForce 7800." This article linked to a piece over at our friends the Inquirer called "Playstation 3 GPU slightly less powerful than GeForce 7800." This in turn linked to a discussion thread on Evil Avatar called "GeForce 7800 LESS powerful than PS3???" which featured a quote taken from the September issue of PSM. The quote was attributed to an Nvidia spokesperson, who said "There's no doubting that NVIDIA's new 7800GTX is the ultimate in PC graphics technology. The card's G70 GPU, which is more than twice as powerful as two of NVIDIA's previous top-of-the-line 6800 boards, shares a lot of similar workings with the PS3's RSX chip--only it isn't as fast." [Emphasis added.] So the article is saying that the 7800GTX isn't as fast as the RSX, as Evil Avatar pointed out, not the other way around, as the speed-readers at the Inquirer claim--at length. To their credit, the editors at Team Xbox were quick to realize their error--after a phone call from Nvidia--and set the record straight with the follow-up article/mea culpa "PlayStation 3 GPU more powerful than GeForce 7800!"

Bogus or not bogus?: Bogus.

http://www.gamespot.com/news/2005/09/02/news_6132546.html
 