Acert93 said:
Not true.
1. Doom 3 is highly dependent on CPU performance--like I said, the shadowing techniques are offloaded to the CPU.
2. Doom 3 CPU benchmarks clearly show Doom 3 is CPU limited at low resolutions. An FX-53 can churn out 103.4 FPS while an XP 2000+ can only churn out 46fps.
Thanks, that proves my point quite nicely. An XP 2000+ is capable of pushing 46fps in Doom 3 when not GPU limited. A Ti4200 limits the game to 38fps at the same resolution. Thus, whether you are using a 2000+ or an FX-57, your framerate isn't going to go much higher than 38fps. Unless you're suggesting that adding a faster CPU to an already GPU-limited game will greatly increase its performance? But we all know that's not the case.
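To put the logic in plain terms, here is a throwaway sketch using the fps figures already quoted in this thread (nothing below is a new benchmark):

```python
# Effective framerate is bounded by whichever component is slower.
def effective_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """The slower of the CPU and GPU caps wins."""
    return min(cpu_fps_cap, gpu_fps_cap)

print(effective_fps(46.0, 38.0))   # XP 2000+ feeding a Ti4200 -> 38.0, GPU limited
print(effective_fps(103.4, 38.0))  # FX-53 feeding the same Ti4200 -> still 38.0
```

A faster CPU only raises the first argument; as long as the GPU cap is the smaller of the two, the result doesn't move.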
And in our context we are talking about bringing that all the way down to a PIII 733MHz w/ 64MB of total memory. So in that context the Xbox Doom 3 game is achieving *significantly* more than the same PC hardware.
Again, we are trying to compare the GPUs, not the entire system. We are all already well aware that a PC needs more CPU power and memory to be comparable, because it's running the game on top of an OS. Why would you think it's fair to compare the same hardware when one is clearly having to do much more work?
BF2 uses PS1.4. The GF4 only supports PS1.3. So it is an issue of featureset.
Yes, a feature that's beyond the Xbox as well, it being only PS1.1. In fact I'm pretty sure there is a hack to get BF2 working on the GF4 Ti anyway, so why this wasn't implemented in the actual game I don't know.
But that is the point--on a closed box this can be worked around, and indeed, *is* quite frequently. One need look no further than the "normal mapping" hack on the PS2. The features and abilities of a closed-box platform are extended far beyond the PC featureset. Why? It is a closed box, exploited, and exposed.
Given that the hack exists on the PC, it can clearly be worked around there as well.
And this does not even begin to engage the fact that early-gen console games frequently use all the features of a platform while the PC counterparts are still supporting legacy APIs and hardware. That is just another way console hardware is used to an advantage that PCs don't get.
If that were true then console games would totally blow away the best PC games when they launch and would improve little throughout the console's life, which is not what happens. Console hardware goes almost as badly utilised in early games as high-end PC hardware does.
As for Far Cry Instincts, you would really argue that?
Of course I would. What on earth would make you think that FCI is going to look better than the PC version of Far Cry? Admittedly a GF4 Ti can only run the game at low res and medium details, but that's going to at least match Instincts.
Wrong again.
The GeForce 3, which launched in March 2001 and had a Fall 2001 refresh (Fall 2001 is when the Xbox1 shipped), had 128MB variants. You are talking a street price of below $160 for 128MB GF3 cards.
So not only was 64MB not the most, 128MB cards were affordable.
Actually, not wrong. The Ti series didn't launch with 128MB; it was added later to the low-end card (the Ti200) but not the high-end version (the Ti500). It was about Feb 2002 before 128MB cards appeared on the PC, over a quarter of a year after the Xbox launched:
http://graphics.tomshardware.com/graphic/20020205/index.html
Incidentally you can get 512MB cards in the mainstream today.
1. Mainstream priced GPUs in the Xbox1 era (which is relevant in relation to market penetration and dev support) were getting 2x the memory of the *entire* Xbox1 system.
2. We are seeing the reverse now: Xbox 360/PS3 have 2x the amount of RAM of *high end* (e.g. 7800GTX) GPUs and 4x the ram of a *mid range* GPU (e.g. 6600GT with 128MB).
1. At launch, both mainstream and high-end GPUs had the same amount of RAM as the entire Xbox.
2. We are not seeing the reverse; the X360, and certainly the PS3, haven't launched yet. The R520 and the GTX's successor will likely sport 512MB, and thus, as with the Xbox's launch, both mainstream and high-end GPUs will have the same amount of RAM available as the entire X360. 128MB cards will be a thing of the past by the time the X360 launches.
3. We won't be seeing 1GB cards anytime soon. We already got a taste of 512MB cards--which were insanely priced (seeing a 6800 Ultra with 512MB for $1,000)--and they offer almost no benefit at this time. This is partly related to the games, but also significantly related to A) GPUs are doing more shader work and the need for masses of textures has been alleviated some, and B) memory bandwidth is frequently the limiting factor, in addition to the fact that more bandwidth, and not more memory, results in a better performance boost.
On this I have already agreed. It was 3 months before the PC doubled the Xbox's memory. I doubt we will have 1GB cards by Feb next year; however, give it another 6 months after that and the first ones may start appearing in the midrange. That would be about equal to the original Xbox timeline if we are talking about the PS3 launch.
Overall your view of the market is incorrect IMO. I am not sure how this is equal:
Xbox 1 w/ 64MB memory & 2001:
• $160 GPU with 128MB memory
Xbox 360 w/ 512MB memory & 2005:
• $160 GPUs with 128MB memory
• $300 GPUs with 256MB memory (the $300 class is where you see the memory being beneficial)
• $500-$1000 GPUs w/ 512MB memory (and totally worthless due to PC game design and bandwidth limitations)
It's not surprising you think my view of the market is incorrect, given that you're working from faulty information:
1) In 2001, there were no 128MB GPUs.
2) You can get 256MB GPUs for far less than $300, and your caveat is worthless since it also applied to 128MB GPUs in 2002.
3) 512MB cards will be around $400-$600 for anyone who's not stupid enough to go out and find the most ridiculously overpriced GPU they can, and that amount of memory is far more beneficial to modern games like Doom 3, FEAR and BF2 than 128MB was to anything in Feb 2002.
Further, the Xbox1 had a mere 6.4GB/s of bandwidth for the total system. The Ti4200 had 7.1GB/s alone.
Fast forward: The 6600GT has 16GB/s of bandwidth. The 7800 GTX has 32-35GB/s (depending on model). The PS3 has 48GB/s and the Xbox 360 has 22GB/s + 256GB/s for back buffer processing (with a 32GB/s link between the logic for stencil, Z, alpha, AA to the core shader logic).
Why are you comparing the 6600GT, a card that is 2 years older than the PS3, to a GF4 Ti, which came out after the Xbox? Aside from that, the 6600GT isn't even the Ti4200's equivalent. That would be the 6800.
So let's look at that again. 2 months before the X360 launches we have the 7800GT with 32GB/sec of memory bandwidth, compared to the X360 with 22.4GB/sec (I'm not counting the eDRAM since it's separate from the main system memory, and both the PS2 and GC also had it, which damages your argument anyway).
And you want to compare to the PS3, which won't be out for another 6 months minimum? Well, try 22.4GB/sec for the GPU again and another 25.6GB/sec for the CPU. Within 3 months of the PS3 launching (the same as the Xbox to GF4 Ti timeframe) we could easily have a G80 launching. Would you care to compare the potential memory bandwidth of that to the PS3?
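For anyone who wants to sanity-check these numbers: peak bandwidth is just bus width times effective memory clock. Here is a quick sketch; the bus widths and clocks are my own recollection and are there purely for illustration:

```python
def peak_bandwidth_gb_s(bus_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s (bus width in bytes x effective clock)."""
    return (bus_bits / 8) * effective_clock_mhz * 1e6 / 1e9

print(peak_bandwidth_gb_s(128, 1400))  # X360 GDDR3 (128-bit @ 1400MHz eff.): 22.4
print(peak_bandwidth_gb_s(256, 1000))  # 7800GT (256-bit @ 1000MHz eff.): 32.0
print(peak_bandwidth_gb_s(256, 1200))  # 7800GTX (256-bit @ 1200MHz eff.): 38.4
# PS3 total = 22.4 (GDDR3 for RSX) + 25.6 (XDR for the CPU) = 48GB/sec
```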
Like I have said previously, I'm not expecting the PC to be at the exact same level it was last time, but it's much closer than you're making out, and given that the consoles are expected to render at PC resolutions this time, the actual difference is greatly weighted in the PC's favour compared to last time.
Whereas in 2001 the Xbox1 had less total bandwidth than a GPU alone, in 2005 we are seeing consoles with MORE bandwidth/efficiency than the top-end GPUs.
No we aren't; you're wrong. If you want to talk eDRAM then let's talk PS2 in 2000 and its 48GB/sec. If you want to ignore eDRAM, let's talk 38.4GB/sec on the GTX 5 months before the X360 launches with 22.4GB/sec.
I am not seeing anything equal about More Memory + More Bandwidth.
That's because you're comparing completely different timeframes: 6600GT (2004) vs PS3 (2006), and GF4 Ti (2002) vs Xbox (2001). Hmmm.
Checking the Valve stats 512MB seems to be more common than not for system memory.
Nevertheless, 1GB is fast becoming the standard, and we are still 2 months away from the X360 launch and 6+ months from the PS3's.
1. The Xbox 360/PS3 have 512MB of memory completely accessible by the GPU. PCs are looking at swapping data from the system memory to the GPU. A 128MB or 256MB GPU is not enough to hold 350-450MB of graphics data. A console with 512MB of memory is not going to need to do all this swapping.
And nor is a PC with a 512MB GPU, which will be available when the X360 launches and common (in terms of new parts sold) when the PS3 launches. There were plenty of 32MB GPUs around when the Xbox was launched.
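The swapping point itself is trivial to illustrate; the working-set size below is a hypothetical round number, not a measured figure:

```python
def overflow_mb(working_set_mb: int, vram_mb: int) -> int:
    """Graphics data that spills out of VRAM and must be streamed over the bus."""
    return max(0, working_set_mb - vram_mb)

print(overflow_mb(400, 128))  # 272MB has to be swapped on a 128MB card
print(overflow_mb(400, 512))  # 0 -> a 512MB card holds it all, console-style
```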
2. Different designs. Comparing a PC game--which is very inefficient with memory--to a console is not fair. If you don't know why, please start a new thread on this and some developers can give you some lengthy replies.
It would probably be better if you simply didn't try changing the direction of the argument to something I have never tried to claim. This was originally about comparing GPUs; I have never stated that PCs have system memory efficiency equal to consoles, and in fact I have explicitly stated the exact opposite.
3. Different philosophies. A PC game is going to be designed with the PC in mind. On the consoles there is a lot of dynamic streaming. Comparing the bottlenecks of a PC game is not really a 1-to-1 relevant comparison to the console space.
Ahh, the old "PC is a bottlenecked word processor" argument again. Funny how it always manages to keep up with and quickly exceed console performance despite all those bottlenecks, isn't it? In fact, the only bottleneck you're actually referring to is the CPU-to-GPU bandwidth, but history has proven that whenever this increases, performance stays the same, even when running a modern high-end game over an AGP 4x interface, which is 1/4 the speed of PCI-E. The other big streaming bottleneck, how to load data from the disc, is less apparent on the PC because of its reliance on fast HDDs.
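For reference, the 1/4 figure falls straight out of the published peak numbers (quoted here from memory, so treat them as approximate):

```python
agp_4x_gb_s   = 266e6 * 32 / 8 / 1e9  # AGP 4x: 32-bit @ 266MHz ~ 1.06GB/sec
pcie_x16_gb_s = 16 * 250e6 / 1e9      # PCI-E x16: 16 lanes x 250MB/sec = 4GB/sec
print(pcie_x16_gb_s / agp_4x_gb_s)    # ~3.8x, i.e. AGP 4x is roughly 1/4 the speed
```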
Actually, this line of questioning was initially based on quoting MY statements (which L-B went back and forth with you on). The point I made, and L-B restated, was that RSX is going to be utilized better and produce better looking games than the G70.
A point on which I agree. Where I didn't agree is that it would run rings around it, which suggests it would perform as if it were at least a full generation ahead. Comparing the GF4 Ti and even the GF3 to the Xbox, that's never been the case.
4. RSX and Xenos are top-of-the-line GPUs with more memory than their contemporary GPUs and more memory bandwidth. This was not the case with NV2a, yet it can be argued persuasively with facts that the Xbox1 put its hardware to better use than the competing hardware from 2001.
Seriously, it's ridiculous to keep comparing the RSX to the GTX. It's being launched up to a year later! The RSX is to the GTX as the Xbox was to the GF3, and that's being generous! The next-gen architecture, G80, will probably launch around the same time as the PS3 (within 3 months), so you should be comparing that, not a year-old card.
And as for Xenos, like I have already said, the R520 will almost certainly have 512MB of memory running at double the speed of that in Xenos. You're only supporting your argument with its eDRAM, but if you want to do that then you have to consider that the PS2 had eDRAM which, for its time, was larger and faster than that in the X360.
Funny thing is, the Ti 4200 shipped in April of 2002. A GF3 Ti 200 with 64MB (6.4GB/s!) memory is a MUCH more even comparison as that shipped in Fall of 2001.
Yes, so compare it to a Radeon 9800 Pro, which has similar memory bandwidth to Xenos (excluding eDRAM).
That kind of puts things into perspective. Comparing a GPU that was commercially available 6 months AFTER the Xbox1 launched is odd in many ways. And even then the Xbox1 holds its own with less memory, slower memory, a slower CPU, and less GPU clock speed.
Exactly, it holds its own (inferior, but not by much). It doesn't run rings around it. RSX is to GTX as NV2a was to GF3, and the GF3 pretty much holds its own against the Xbox at 480p.
Comparing the results from a P4 3GHz w/ 1GB of memory or a 2000+ AMD with 512MB of memory with a GPU released later with better specs than what was on the market at the time sounds pretty unbalanced.
Yet you keep trying to compare the RSX to the GTX? And as I have demonstrated above, the rest of your system doesn't matter if you are already GPU limited.
Really, the simplest way to look at it is: The Xbox 1 got a lot more out of a PIII 733MHz & NV2a with 64MB of memory compared to a PIII 733MHz & Ti4200 with 64MB video memory and 128MB of system memory (to be generous).
Of course it did, because it didn't have to run the game on top of an OS. But comparing the systems like that totally skews the performance of the GPU, which is what this discussion was supposed to be about.
And as pointed out the situation with memory size, speed, and CPU performance has changed this gen. Consoles are not taking a back seat this time around.
Memory size and speed are a few months better this time; it's not much, and at launch they should be about the same as last time (the PC will just take over more slowly). CPU performance is still very much up in the air, as has been discussed extensively at this site already, and GPU performance is IMO behind. Xenos may be equivalent to or even slightly better than the NV2a was for its time (compared to the R520/G70 Ultra), but RSX will be very much behind.
And the consoles will need much more comparative power this gen because of the high-res requirements. I'm not seeing it.