Untold Legends Producer claims PS3 > $3,500 PC

xbdestroya said:
Well... sort of, I do. I say - let's put your twin Ti4600 PC up against an XBox running Doom 3 and see who wins. Doom 3 would clearly be the best game to use for the comparison, since it has such clear PC roots, and is NVidia-centric to begin with.

Ummm, yeah, bring it right on! Go find yourself some benchmarks for Doom 3. You will find a single Ti4600 running it at 800x600 with medium quality (the Xbox runs at low) at over 35fps. Now multiply that by two Ti4600s and tell me the Xbox will do better...

EDIT: http://techreport.com/etc/2004q3/doom3-midlow/index.x?pg=3
 
pjbliverpool said:
Ummm, yeah, bring it right on! Go find yourself some benchmarks for Doom 3. You will find a single Ti4600 running it at 800x600 with medium quality (the Xbox runs at low) at over 35fps. Now multiply that by two Ti4600s and tell me the Xbox will do better...

EDIT: http://techreport.com/etc/2004q3/doom3-midlow/index.x?pg=3

You know what pjb - you rock my world. Will you marry me? :p

Well I came up with the perfect comparison point at least, even if it disproved my own theory. Anyway, I'll maintain that there are areas where the PS3 will outperform our hypothetical $3500 PC. If five years from now I'm utterly wrong on that point, believe me when I say I'm not that emotionally invested in being correct on the issue.

I'll stand by my initial comments in the thread as well, and re-state that these articles really aren't worth bringing up in the first place, because whatever the case, we all know there's no apples-to-apples comparison here - it's just PR talk that draws people out on a forum.
 
xbdestroya said:
You know what pjb - you rock my world. Will you marry me? :p

Well I came up with the perfect comparison point at least, even if it disproved my own theory. Anyway, I'll maintain that there are areas where the PS3 will outperform our hypothetical $3500 PC. If five years from now I'm utterly wrong on that point, believe me when I say I'm not that emotionally invested in being correct on the issue.

I'll stand by my initial comments in the thread as well, and re-state that these articles really aren't worth bringing up in the first place, because whatever the case, we all know there's no apples-to-apples comparison here - it's just PR talk that draws people out on a forum.

I appreciate your honest answer. Even though I disagree with your viewpoint, I admire the way you put it across. Sorry if I sounded a bit hostile there.
 
pjbliverpool said:
I appreciate your honest answer. Even though I disagree with your viewpoint, I admire the way you put it across. Sorry if I sounded a bit hostile there.

Don't sweat it; believe me, knowing those benchmark numbers, I can see how my comparison choice would have been a point of aggravation for you.
 
But wait! Could you not buy a PS3 and a 360 and a Wii, all their relevant peripherals and a half-dozen-or-so games on each of them for the price of that one $3500 system?

ZOMG! I have tapped into some hidden truth of the universe, I know it! :p
 
cthellis42 said:
But wait! Could you not buy a PS3 and a 360 and a Wii, all their relevant peripherals and a half-dozen-or-so games on each of them for the price of that one $3500 system?

ZOMG! I have tapped into some hidden truth of the universe, I know it! :p


:oops:
 
pjbliverpool said:
Oh please!

So all of a sudden PS3 is going to be able to outperform a quad sli PC? Exactly how much money would you be willing to put on that?
How much have you got?
You're forgetting the fact that all games are CPU limited to some degree.
A game's performance is dependent on both the CPU and the GPU, not just the GPU.
For certain things the Cell (on paper) looks better than the top CPU you can buy (by quite a large margin).
As Guden Oden pointed out, there are situations which can bring a $3500 PC to its knees, e.g. decent physics, but the Cell won't even break a sweat.
 
zed said:
How much have you got?
You're forgetting the fact that all games are CPU limited to some degree.
A game's performance is dependent on both the CPU and the GPU, not just the GPU.
For certain things the Cell (on paper) looks better than the top CPU you can buy (by quite a large margin).
As Guden Oden pointed out, there are situations which can bring a $3500 PC to its knees, e.g. decent physics, but the Cell won't even break a sweat.


Yes, yes, and the EE was 3x more powerful on paper than a Pentium Coppermine. What CPU was the Xbox based off of again? Do people not recognize the irony of the Cell gloating, when they were doing exactly the same thing back in '99 with the EE? The fact of the matter is that in general performance even a $1000 PC would have huge strengths over the PS3. People bringing up the most ideal situations where the PS3 will have a fighting chance or a lead are absolutely useless. The dumbass comment that started this thread was not made on any specific point like FP performance, which is what makes it so ignorant.


These threads are dumb. Like an automated massive pissing contest. Every thread comparing speculation of any console vs any pc should be deleted in my humble opinion. They serve zero purpose.
 
pjbliverpool said:
Oh please!

So all of a sudden PS3 is going to be able to outperform a quad sli PC? Exactly how much money would you be willing to put on that?
Quad SLI doesn't get you 4x the power of a single GPU. There are certain limits as to how you can use the GPU power too. As has been mentioned, power needs to be qualified. PS3 has some pretty awesome BW, and being a closed system it can be leveraged to great effect. I expect a lot of the potential power in a $3500 system goes to waste, as software isn't designed for it. E.g. even if an Athlon 64 could outperform the Cell on calculations, it's likely to have a third the access to RAM and will be starved for data. That was the main approach of Cell - to circumvent the memory access bottleneck that is holding back processing. Put four Athlon 64s onto that same 8 GB/s bus, and all that extra processing potential will achieve nothing. This is where a closed, designed system can outperform a massive cost escalation of an architecture not ideally suited to all that processing potential. $3500 buys you a lot of RAM and processors, but some aspects are still going to be severely bottlenecked.
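To put rough numbers on that bottleneck argument, here's a quick back-of-the-envelope sketch in Python (the bandwidth and GFLOPS figures are the illustrative ones from this discussion, not measurements):

def bytes_per_flop(mem_bw_gb_s, peak_gflops):
    # Main-memory bytes available per floating-point operation at peak rate.
    return mem_bw_gb_s / peak_gflops

print(bytes_per_flop(8.0, 20.0))    # one Athlon 64 on an ~8 GB/s bus -> 0.4 B/flop
print(bytes_per_flop(8.0, 80.0))    # four of them on the same bus    -> 0.1 B/flop
print(bytes_per_flop(25.6, 200.0))  # Cell on its XDR bus             -> ~0.13 B/flop

Adding processors without adding bandwidth just shrinks the data each flop can touch; Cell gets away with a low ratio because the SPEs work out of local stores and stream data via DMA, whereas a conventional cached CPU scaled up the same way would simply stall.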

So it's a point that can be argued. It all depends what they are trying to do. If they could move processing across four GPUs with their own massive BW, the PC would have a massive advantage. But you won't write for a PC that way. Comparing, say, Half-Life 3 optimized to make the most of PS3 (if such a thing happens) to HL3 on PC, which runs on a 9700 and scales up to whatever PC you run it on, there's a good chance the PS3 version will be better.

Of course, the real concern is, if UL is what they can achieve on a super powerful PS3, how bad would their PC developments be :p (UL isn't bad, just not great. Mediocre 1st party devs commenting on their parent company's hardware isn't the best place for balanced views of performance potential...)
 
Shifty Geezer said:
Quad SLI doesn't get you 4x the power of a single GPU. There are certain limits as to how you can use the GPU power too. As has been mentioned, power needs to be qualified. PS3 has some pretty awesome BW, and being a closed system it can be leveraged to great effect. I expect a lot of the potential power in a $3500 system goes to waste, as software isn't designed for it. E.g. even if an Athlon 64 could outperform the Cell on calculations, it's likely to have a third the access to RAM and will be starved for data. That was the main approach of Cell - to circumvent the memory access bottleneck that is holding back processing. Put four Athlon 64s onto that same 8 GB/s bus, and all that extra processing potential will achieve nothing. This is where a closed, designed system can outperform a massive cost escalation of an architecture not ideally suited to all that processing potential. $3500 buys you a lot of RAM and processors, but some aspects are still going to be severely bottlenecked.

If the Athlon64 is so bandwidth starved then why does AM2 add virtually no performance at the same clock speed despite theoretically having double the bandwidth? That alone is proof enough to conclude that the Athlon64 architecture doesn't really need more bandwidth than it already has to reach its peak performance, especially AM2.

Besides, if you throw a PPU in there, then combined with an AM2 CPU you're talking a similar amount of bandwidth to Cell. And that's assuming all of Cell's XDR bandwidth is dedicated to comparable tasks and not graphics.

Do you think if used properly an Athlon FX-62 + PPU + 2GB DDR2 800MHz really has less power than Cell?

And then that just leaves RSX to be compared to a pair of 7900GTX's (for ease of comparisons sake). And everyone in here is certainly aware enough of the architectures to know the GTX's are packing more than double the potential power and more than 4x the potential bandwidth.

I know it doesn't always get used that well, but you really do have to use best-case scenarios to show how "powerful" they are in comparison to each other. And a best-case scenario would have SLI giving you around ~80% more performance, which certainly can happen.

And that's assuming he's talking about actual delivered power and not just potential power, where the comparison gets even more absurd.

By the way, whereabouts in the video did he say this? I can't find it. :???:

EDIT: never mind, I found it in the second video. He actually says their PS3s are trouncing their $3500 PCs, which could mean pretty much anything. I.e. those PCs could be using dual Xeons as the CPU and QuadroFX 4400s as the GPU, making them cost a lot but have only a fraction of the performance of a "gaming PC". They are no doubt talking about how fast they run code which is already being optimised for the PS3 as well.
 
pjbliverpool, I agree with you.
The new Intel architecture has the same bandwidth available as Xenon, if I remember properly.
For less than $3,500, lol, you could have two dual-core CPUs, a PPU, an SLI setup and half a ton (lol) of RAM...
Sony at their best, lol. I can't understand why the buzz about Cell was so low at the last Computex... lol
And I want to add that when the PS3 is out you will probably be able to buy a G80-based GPU; not sure about R600...
 
pjbliverpool said:
If the Athlon64 is so bandwidth starved then why does AM2 add virtually no performance at the same clock speed despite theoretically having double the bandwidth? That alone is proof enough to conclude that the Athlon64 architecture doesn't really need more bandwidth than it already has to reach its peak performance, especially AM2.
You're not being obtuse on purpose are you? :) He did clearly say what would be the case if an Athlon64 system was scaled up to Cell performance. The A64 is a well-balanced product where it comes to available bandwidth vs. computational ability as-is. Emphasis: as-is.

However! The A64 is extremely weak, processing-wise, compared to Cell. A 7-SPE Cell leverages over 200 GFLOPS of computational power, even the fastest A64 less than 20. If you somehow scaled up an A64 to offer as much computational power as Cell, it would die from data starvation, because that part of the design isn't dimensioned to handle the increased demands.

The way Cell solves this problem is by having tons of registers (128, each 128-bit in size), an extremely fast SRAM-based local store, and a very wide, high-capacity bidirectional internal ring-bus hooked to all computational units and to a high-bandwidth main memory interface. The A64 is specced to offer cost-effective, so-so overall performance like most any GP CPU; it's a cost/performance trade-off. It wouldn't scale up anywhere near Cell's performance at all.

So your "proof" is in reality nothing of the sort.

Do you think if used properly an Athlon FX-62 + PPU + 2GB DDR2 800MHz really has less power than Cell?
Define "used properly".
All of us know PCs are pretty much NEVER "used properly". Bloatware is a term that was INVENTED on the PC for chrissakes.

Furthermore:
The PPU is a proprietary architecture. Only way to access it is to use Ageia's API function calls, the native instruction set isn't exposed to developers, so you can only use it for what the API allows you to use it for. Even if you could program it directly, chances are it'd be less than ideal for other things than what it was originally intended for. Also, it can only process stuff that fits entirely inside its own on-board memory, it can't use all 2GB of your 'powerful' DDR2 memory. Besides, it's sitting on the PCI bus, which is extremely slow by today's standards. All it's really good for is doing stuff that takes relatively small input parameters, does A LOT of number crunching using those parameters, and then outputs a relatively small amount of results. It's a crutch, due to the weak FP performance of modern PC CPUs. It's in no way comparable to Cell, which is a freely programmable multi-processor system designed for multiple purposes, not just running physics/fluid/cloth simulations and similar related tasks.

And everyone in here is certainly aware enough of the architectures to know the GTX's are packing more than double the potential power and more than 4x the potential bandwidth.
Yeah, you can run CoD3 in 2500*1500 or whatever on a PC SLI rig some time in the future, while PS3 runs it at 720P. Does that really make that particular system more POWERFUL tho? Do you know of any PC game with visuals similar to Warhawk's for example? Check that game out and let me know, because I'd be damn interested in playing it.

That I can run game X with fairly simple graphics and a couple units moving around on-screen in 10 times higher resolution doesn't mean I have a 10 times more powerful system.

And a best-case scenario would have SLI giving you around ~80% more performance, which certainly can happen.

And that's assuming he's talking about actual delivered power and not just potential power, where the comparison gets even more absurd.
What is absurd is using a best-case SLI scenario of 80% increase as an example of 'actual delivered power'. That strikes me as more than slightly optimistic. Sure, some games get 80%+ increase, but it's a relatively small minority.

They are no doubt talking about how fast they run code which is already being optimised for the PS3 as well.
Undoubtedly, yes. Because you actually CAN optimize code for speed on a fixed system. ;)
 
Guden Oden said:
However! The A64 is extremely weak, processing-wise, compared to Cell. A 7-SPE Cell leverages over 200 GFLOPS of computational power, even the fastest A64 less than 20. If you somehow scaled up an A64 to offer as much computational power as Cell, it would die from data starvation, because that part of the design isn't dimensioned to handle the increased demands.

From anandtech.com:

First and foremost, a floating point operation can be anything; it can be adding two floating point numbers together, or it can be performing a dot product on two floating point numbers, it can even be just calculating the complement of a fp number. Anything that is executed on a FPU is fair game to be called a floating point operation.

Secondly, both floating point power numbers refer to the whole system, CPU and GPU. Obviously a GPU's floating point processing power doesn't mean anything if you're trying to run general purpose code on it and vice versa. As we've seen from the graphics market, characterizing GPU performance in terms of generic floating point operations per second is far from the full performance story.

Third, when a manufacturer is talking about peak floating point performance there are a few things that they aren't taking into account. Being able to process billions of operations per second depends on actually being able to have that many floating point operations to work on. That means that you have to have enough bandwidth to keep the FPUs fed, no mispredicted branches, no cache misses and the right structure of code to make sure that all of the FPUs can be fed at all times so they can execute at their peak rates. Not to mention that the requirements for hitting peak theoretical performance are always ridiculous; caches are only so big and thus there will come a time where a request to main memory is needed, and you can expect that request to be fulfilled in a few hundred clock cycles, where no floating point operations will be happening at all.

The Cell processor is no different; given that its PPE is identical to one of the PowerPC cores in Xenon, it must derive its floating point performance superiority from its array of SPEs. So what's the issue with the 218 GFLOPs number (2 TFLOPs for the whole system)? Well, from what we've heard, game developers are finding that they can't use the SPEs for a lot of tasks. So in the end, it doesn't matter what the peak theoretical performance of Cell's SPE array is, if those SPEs aren't being used all the time.

Another way to look at this comparison of flops is to look at integer add latencies on the Pentium 4 vs. the Athlon 64. The Pentium 4 has two double pumped ALUs, each capable of performing two add operations per clock, that's a total of 4 add operations per clock; so we could say that a 3.8GHz Pentium 4 can perform 15.2 billion operations per second. The Athlon 64 has three ALUs each capable of executing an add every clock; so a 2.8GHz Athlon 64 can perform 8.4 billion operations per second. By this silly console marketing logic, the Pentium 4 would be almost twice as fast as the Athlon 64, and a multi-core Pentium 4 would be faster than a multi-core Athlon 64. Any AnandTech reader should know that's hardly the case. No code is composed entirely of add instructions, and even if it were, eventually the Pentium 4 and Athlon 64 will have to go out to main memory for data, and when they do, the Athlon 64 has a much lower latency access to memory than the P4. In the end, despite what these horribly concocted numbers may lead you to believe, they say absolutely nothing about performance. The exact same situation exists with the CPUs of the next-generation consoles; don't fall for it.
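The peak-rate counting in that last paragraph is easy to reproduce; a minimal sketch of the same arithmetic (which, as the article says, tells you nothing about delivered performance):

# Peak integer adds per second, counted the way the quote does.
p4_adds_per_clock = 4      # two double-pumped ALUs, two adds each per clock
a64_adds_per_clock = 3     # three ALUs, one add each per clock
print(3.8e9 * p4_adds_per_clock)   # 3.8 GHz Pentium 4 -> 15.2 billion adds/s
print(2.8e9 * a64_adds_per_clock)  # 2.8 GHz Athlon 64 ->  8.4 billion adds/s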

Guden Oden said:
So your "proof" is in reality nothing of the sort.

Neither is yours
Yeah, you can run CoD3 in 2500*1500 or whatever on a PC SLI rig some time in the future, while PS3 runs it at 720P. Does that really make that particular system more POWERFUL tho?

Yes. To use the same example: if your PC can run CoD3 maxed at 720p with 4x AA at 30fps, and my rig can run the same game maxed at 1600p with 16x AA at 60fps, then obviously mine is much more powerful.
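One crude way to quantify "much more powerful" in that example is raw pixel throughput; a rough sketch (AA and per-pixel cost ignored, and "1600p" assumed to mean 2560x1600):

def pixels_per_second(width, height, fps):
    # Pixels the GPU has to produce each second at a given resolution and frame rate.
    return width * height * fps

print(pixels_per_second(1280, 720, 30))    # ~27.6 million pixels/s
print(pixels_per_second(2560, 1600, 60))   # ~245.8 million pixels/s, roughly 9x the work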

Do you know of any PC game with visuals similar to Warhawk's for example? Check that game out and let me know, because I'd be damn interested in playing it.

Warhawk? Seriously? Warhawk, while it looks good, is not groundbreaking in any way.
Anyway, Crysis looks a million times better than any console game announced or shown in real-time form.


You also seem to be completely hung up on this marvelous CPU thing. Have you forgotten that your cute console has a half-arsed GPU? You've got a 7900GT with 128-bit memory and 8 ROPs. You have squat when it comes to graphical power compared to what a quad-SLI rig can do.

Further, talking about optimization, bla bla bla, closed box etc. does not change the fact that a quad-SLI rig is more powerful. If somebody made a game optimized for a quad-SLI rig, it would look like real life :p
 
Oh my... we are not joking? Sometimes I feel ashamed to be a console gamer.

The PS3 doesn't come out till November, so you would want to build that $3500 PC on that date. $3500 can get you a PC with Conroes, quad-SLI G80s and CrossFire R600s... That amount of power is wasted today, but it should form the base of PC game graphics and processing within 5 years.

I know B3D console gamers have a historical fetish for Cell's (unproven) powers, but Crysis is visually breathtaking while showing no deficiency in the physics department. You can have your Warhawk.



This quote started all the nonsense. Take it back.

>>IMO I think that the PS3 will pull off better graphics by the end of its life than the $3500 PC will be able to pull off five years from now; it's simply the advantage of having a closed system and being able to code to it
 
Wow! Could you be a little more biased please? I wasn't quite feeling it enough there :oops:

Guden Oden said:
You're not being obtuse on purpose are you? :) He did clearly say what would be the case if an Athlon64 system was scaled up to Cell performance. The A64 is a well-balanced product where it comes to available bandwidth vs. computational ability as-is. Emphasis: as-is.

However! The A64 is extremely weak, processing-wise, compared to Cell. A 7-SPE Cell leverages over 200 GFLOPS of computational power, even the fastest A64 less than 20. If you somehow scaled up an A64 to offer as much computational power as Cell, it would die from data starvation, because that part of the design isn't dimensioned to handle the increased demands.

The way Cell solves this problem is by having tons of registers (128, each 128-bit in size), an extremely fast SRAM-based local store, and a very wide, high-capacity bidirectional internal ring-bus hooked to all computational units and to a high-bandwidth main memory interface. The A64 is specced to offer cost-effective, so-so overall performance like most any GP CPU; it's a cost/performance trade-off. It wouldn't scale up anywhere near Cell's performance at all.

So your "proof" is in reality nothing of the sort.

I don't accept that for a second. Yes, Cell has much higher floating-point performance on paper than an A64, but achieving it is a completely different thing. Even John Carmack has said you would never be able to get anything close to that out of it in real life.

And there is far more to a gaming CPU than just floating-point performance. Sure, if you want to render your graphics on the CPU instead of the more efficient GPU then Cell is great, but for things like AI, scripting, game control and possibly even physics it's nowhere near the leap you're claiming over the Athlon64, if indeed it's an improvement at all.

The point is you can't compare Cell's performance to the A64 on the basis of it having more bandwidth, because we know full well the A64 doesn't need more bandwidth. Baseless claims about it being "massively weaker" aren't going to change that or prove anything.


Define "used properly".
All of us know PCs are pretty much NEVER "used properly". Bloatware is a term that was INVENTED on the PC for chrissakes.

I mean used properly like HL2, Doom 3 and Far Cry. Not like Halo and GRAW, where the power is clearly being wasted. I'm not saying eke out every last drop of potential from the hardware; I'm talking about a well-made and efficient engine which uses the hardware it has available well.

Furthermore:
The PPU is a proprietary architecture. Only way to access it is to use Ageia's API function calls, the native instruction set isn't exposed to developers, so you can only use it for what the API allows you to use it for. Even if you could program it directly, chances are it'd be less than ideal for other things than what it was originally intended for.

And the relevance of that is..... what? What does it matter how you go about using its power if it's there and it's available to use?

Saying a $3500 PC is weaker than the PS3 implies that the PS3 will be able to regularly output superior results. I'm saying that's false, since a good dual core with a PPU to handle the physics should be able to handle pretty much anything that Cell can, and then some. There are always going to be architecture-specific cases where Cell will handle something better, but the power should be there on the PC to produce something similar or better using a different method.

Also, it can only process stuff that fits entirely inside its own on-board memory, it can't use all 2GB of your 'powerful' DDR2 memory. Besides, it's sitting on the PCI bus, which is extremely slow by today's standards. All it's really good for is doing stuff that takes relatively small input parameters, does A LOT of number crunching using those parameters, and then outputs a relatively small amount of results. It's a crutch, due to the weak FP performance of modern PC CPUs. It's in no way comparable to Cell, which is a freely programmable multi-processor system designed for multiple purposes, not just running physics/fluid/cloth simulations and similar related tasks.

I'm not quite sure what the purpose of that paragraph was. Why would the PPU need to access main memory? It has its local store, which is presumably sufficient for modern needs - after all, it's a full 50% of what Cell has available for everything, dedicated purely to physics.

As for the information going into and out of the PPU over the PCI bus, do you have any evidence whatsoever to suggest it's a bottleneck? If it were such a huge bottleneck, why release the PPU on PCI in the first place? It's not like the vast majority of systems a PPU would be going into won't already have PCI-E. In all likelihood the data going in and out of the PPU simply doesn't require a lot of bandwidth.


Yeah, you can run CoD3 in 2500*1500 or whatever on a PC SLI rig some time in the future, while PS3 runs it at 720P. Does that really make that particular system more POWERFUL tho?

Of course it does! You have twice the computational power and twice the bandwidth.

SLI isn't just some kind of "fill rate doubler"; it will double your power under any scenario, be that fill-rate limited, bandwidth limited, shader limited, etc...

The AFR method of using SLI literally just gives each GPU twice as long to render a specific frame; that's clearly an obvious power boost across the board, and as games get more demanding, it will show up at lower resolutions as well. Crysis, for example - you're probably talking dual 7900s just to run that game at 720p with a bit of FSAA. Something you probably can't do with a single card.
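To make the scaling argument concrete, a toy sketch (the efficiency figures are the ones being argued over in this thread, not benchmark results):

def sli_fps(single_gpu_fps, scaling_efficiency):
    # With alternate-frame rendering each GPU gets a whole frame to itself,
    # so the ideal is 2x; real games land somewhere below that.
    return single_gpu_fps * (1 + scaling_efficiency)

print(sli_fps(25, 1.0))   # perfect scaling             -> 50 fps
print(sli_fps(25, 0.8))   # the ~80% best case above    -> 45 fps
print(sli_fps(25, 0.4))   # CPU-limited or unoptimised  -> 35 fps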

Do you know of any PC game with visuals similar to Warhawk's for example? Check that game out and let me know, because I'd be damn interested in playing it.

Well, Warhawk isn't even out yet, and frankly I don't think it looks all that good. Flight Simulator X looks far better IMO, but there are plenty of other games which also look better: UT2007, Alan Wake, Crysis... hell, I think Oblivion looks better today with the mods.

That I can run game X with fairly simple graphics and a couple units moving around on-screen in 10 times higher resolution doesn't mean I have a 10 times more powerful system.

No it doesn't, but that's not even remotely how SLI works, and I'm sure you know it.

What is absurd is using a best-case SLI scenario of 80% increase as an example of 'actual delivered power'. That strikes me as more than slightly optimistic. Sure, some games get 80%+ increase, but it's a relatively small minority.

Yes, games can get 80% more performance, generally when they're heavily GPU limited, which is exactly the kind of situation we are talking about when comparing "next gen" titles. With heavy GPU limitations and a little optimisation for SLI in the code, an 80% improvement should be a fairly reasonable level for any dev to aim for. When you're seeing less than that, it's usually because the game is bottlenecked by some other part of the system or is badly optimised for SLI (or not at all).

Undoubtedly, yes. Because you actually CAN optimize code for speed on a fixed system. ;)

There's a difference between code optimised for a specific hardware set rather than a more general one, and code written for one architecture (Cell) trying to run almost as-is on a completely different architecture (their PCs).

Anyway, he's a SOE employee so I wouldn't really expect him to say anything else.
 
xbdestroya said:
Ostepop I have to say, IMO I think that the PS3 will pull off better graphics by the end of its life than the $3500 PC will be able to pull off five years from now; it's simply the advantage of having a closed system and being able to code to it.

I don't think so, not by a long shot. I have long been a proponent of these closed boxes squeezing a TON out of the hardware; similarly I also believe graphics engines are substantially underutilizing the current PC hardware.

So I am 100% on board for exclusive software making a 7800 @ 550MHz look stuck in mud to a degree.

But $3,500 -- not including a display or sound system, seeing as a PS3 does not include either -- is going to get you top-of-the-line multi-core x86 CPUs and 2-4GB of very fast memory. First, let's not underestimate the overall performance of these CPUs. OK, they will lose in flop-only tests, but let's not delude ourselves that this is all a CPU does. A Conroe or FX-62 at 3GHz is no chump (I believe Conroe's peak at 3GHz for flops is ~50GFLOPs anyhow). But for graphical comparison's sake, let's say you use those 5 SPEs for graphics -- I am willing to give CELL the edge here over these processors. Based on the recent reports, CELL can contribute roughly 6800-class levels of performance to pixel shading for deferred rendering. So let's move to the GPUs.

RSX is basically a 7800GTX @ 550MHz (with 8 ROPs, some cache tweaks, etc) with 256MB of video memory @ 22.4GB/s and it also can share another 256MB pool with some extra latency.

This compares very poorly to what you would be getting on the PC. You could choose between

2x 7900GTX 650MHz 512MB w/ ~50GB/s bandwidth (~100GB/s aggregate)

or

2x X1900XTX 655MHz 512MB w/ ~50GB/s bandwidth (~100GB/s aggregate)

RSX has a decisive disadvantage head to head with even a single one of these GPUs in fillrate and bandwidth, and that's before taking into consideration that RSX will have a fraction of the performance of the SLI/Crossfire rigs, e.g. nearly 1/3rd the shader performance. It also has less dedicated memory for assets (the PC is going to have a huge HDD + 4GB system memory + 512MB [x2] GPU memory). And if you go with the X1900XTX you have 4x (!) the *raw* shader performance -- not to mention the X1900XTX eats RSX alive in regards to dynamic branching in the pixel shaders (which will become more relevant as SM3.0 takes hold).

4x the raw shader performance and a better feature set (FP16 blending and filtering with MSAA, HQ AF, superior SM3.0 implementation, etc) is a huge, huge gap.

Heck, for $250 you can toss in a PPU just in case ;) And if you were really freakish you could angle in on a quad GPU setup. As good as Xbox1 games looked on a GF3/4 hybrid, they never passed this sort of performance mark.

In the *raw* sense, clearly the $3500 PC has the greater horsepower in the GPU and RAM areas, though in the CPU arena the Cell will have the *raw* edge.

Yet even if you dedicate CELL to graphics (which is the context), the recent reports don't indicate that using CELL for shading would even remotely close the gap.

Anyway, that's my take on it. I don't think the statement is 'outrageous' or anything, you just have to filter it for PR, and then take it in context.

Yes, that it is an exclusive-platform dev PRing for his game. And to be frank, if this was a dev with a great-looking game coming down the pipe there could be some discussion, but UL is hardly a title that could not be done on a current PC -- quite easily, at that.

PS3 games are going to look great and will rival PC games for a couple of years, and surpass them in a number of situations.

But the PC space has very much moved on since the last console generation.

PCs are no longer limited to 1 CPU or 1 GPU. You can stuff more into a PC now than you could ever do in the past.

And to be even fairer, we should be comparing what we could buy on November 17th. You are looking at the possibility of Quad-Zilla from AMD with SLI G80 or Crossfire R600 GPUs. When the software catches up with the API model, the pure brute force of such rigs will offer far too many performance advantages over a PS3 (and Xbox 360!) for there not to be a noticeable gap. It may just be an extra feature there, higher resolution here, nicer texture quality all around, etc. But graphics are a pretty straightforward task -- your job is to spit out pixels. GPUs are very well streamlined for that, and SLI gives a huge realistic boost.

/PC evangelism now steps down... of course, I don't have $3,500 for a PC and if I did I would get a $1200 gaming PC and a $600 PS3 and pocket the $1700 extra! :LOL:
 
Guden Oden said:
What is absurd is using a best-case SLI scenario of 80% increase as an example of 'actual delivered power'. That strikes me as more than slightly optimistic. Sure, some games get 80%+ increase, but it's a relatively small minority.

No worse than arguing that because CELL has peak flops in excess of 200GFLOPs it is unquestionably faster than an x86 CPU. There are tons and tons of situations where an x86 chip would spank CELL, simply because flops are not the only thing a CPU has to deliver. There is a reason Apple didn't go with CELL.

But in the context of graphics CELL should be able to help more than an x86. Still, we are talking in the range of a 6800's worth of extra help. And to play number games -- ignoring all the hard-wired graphical tasks on a GPU (RSX has about 1.8TFLOPs worth of that, btw) -- CELL is still only 208GFLOPs. RSX is 255GFLOPs.

So even if CELL could nearly double RSX's performance, RSX is not even close to a G80 or R600. You are running into realistic memory limitations -- and fillrate. There is a reason for only 8 ROPs ;)

But of course, if you are using all ~200GFLOPs of CELL for graphics, you have to wonder: what is running your game engine? Sound? Renderer? AI? Physics? OS? There is no way CELL can dedicate all of its processing power to graphics in-game, and we all know that in a real in-game code situation CELL won't be turning out 200GFLOPs worth of performance.

Anyhow, if people are gonna start counting flops to poo-poo a comparison with a PC, then there is no reason why people should not start counting the *raw* PC performance. What is good for the goose...
 