More comments from John Carmack about xbox360, ps3

weaksauce said:
They haven't said whether it's pre-rendered. Maybe they're target renders, and they knew AA would cost too much.
God, please don't start another pre-rendered/in-game debate. I'm so tired of it.

weaksauce said:
And I said Cell was "more powerful on gflops".
Right, to which we said "...And?". You're comparing two numbers written on two pieces of paper, told to you by two unreliable sources. You'd make an equally informed decision if you just picked the more powerful console based on which VP spews more hyperbole. Which generally seems to be how a lot of people roll on the Internet.
 
If I remember correctly, that number should be 80 gigaflops, at least according to NVIDIA. Anyway, that is for the whole system, GPU included; it is the equivalent of the 1-2 teraflop figures for the XB360 and PS3.
 
hadareud said:
Yeah, the table where half the fields are "I dunno" shall be our reference for accurate numbers and system performance.

I don't know about the Xbox, but AFAIK 6.2 GFLOPS is indeed the PS2's total system power.
 
Those Xbox GFLOP numbers are padded with NVFLOPS (similar to the 1.8 TFLOP RSX). You'll likely have to do a bit of digging to find the programmable flops of the NV2A (or whatever it was) and the Celeron (that should be easier to come by).
 
scooby_dooby said:
So with 20 times the 'gflops' (a 20:1 ratio), the Xbox was used to create graphics only slightly to moderately better than the PS2's. So much for gflops...

You forget that it is different now, because we are talking 1 TFLOP against 2 TFLOPS, and with such BIG numbers there must be some BIG differences ;) OK?
 
czekon said:
You forget that it is different now, because we are talking 1 TFLOP against 2 TFLOPS, and with such BIG numbers there must be some BIG differences ;) OK?
Actually, we're talking 200 GFLOPS vs 100 GFLOPS, because that's the CPU portion of the numbers, and that's where everyone seems to think this 'big difference' is going to come from.

Anyway, they are marketing numbers and next to meaningless; you really have to realize that.
 
The way I see it, both consoles have CPUs at 3.2 GHz, both have 512 MB of RAM, and both have high-end GPUs. I don't believe one can be much more powerful... on the other hand, I really don't care :cool:
 
Marketing

scooby_dooby said:
Actually, we're talking 200 GFLOPS vs 100 GFLOPS, because that's the CPU portion of the numbers, and that's where everyone seems to think this 'big difference' is going to come from.

Anyway, they are marketing numbers and next to meaningless; you really have to realize that.

CELL has been benchmarked at 200 GFLOPS single precision, no? So that is a real peak performance, not a marketing number. But the Xenon CPU has not been benchmarked at its 115 GFLOPS marketing number. The real peak number for real computation is maybe the same as 3 SPEs = 77 GFLOPS. But without a benchmark we cannot be sure what Xenon's peak performance is.

But we can make an unscientific guess.

If we assume a real game situation where at minimum 1 full core handles non-floating-point tasks, then only 2 are left for floating-point processing, no?

For the PS3's CELL (not the IBM benchmark CELL), 7 SPEs are available. So for the PS3 version of the same game, if the full PPE handles the non-floating-point processing, 7 SPEs are still available.

So for a real game with 1 full PPE or Xenon core on non-floating-point processing, the floating-point comparison is 2 Xenon cores against 7 SPEs.

So we can see that in this situation the PS3 has >3x the floating-point power for a "real" game.

But if, for a different "real" game, 2 Xenon cores handle the non-floating-point work, then for the PS3 version, how many SPEs on top of the full PPE must be used for this? I do not know. Let us make a (random) guess of 4 SPEs. Then we have 1 Xenon core against 3 SPEs for floating-point processing.

So the PS3 still has at least 3x the floating-point processing power.

But these are all silly guesses. We cannot know the real-life situation without real tests from Dave Baumann or Anandtech or another hardware test site. But it is fun to make up example stories.
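For those who like their silly guesses in code form, here is a back-of-the-envelope sketch of the same allocation arithmetic, assuming a flat 25.6 GFLOPS paper peak per SPE and per 3.2 GHz Xenon core (8 single-precision flops/cycle x 3.2 GHz). The flat per-unit figure is an assumption for illustration, not a measurement.

Code:
/* Back-of-the-envelope FLOP budgets under different core allocations.
   Assumes 25.6 GFLOPS peak per SPE and per 3.2 GHz Xenon core
   (8 SP flops/cycle * 3.2 GHz). Paper peaks, not benchmark results. */
#include <stdio.h>

#define PEAK_PER_UNIT 25.6 /* GFLOPS; same assumption for SPE and core */

int main(void) {
    /* Scenario 1: 1 core/PPE reserved for non-floating-point game code. */
    double xenon_fp = 2 * PEAK_PER_UNIT; /* 2 of 3 Xenon cores left for FP */
    double cell_fp  = 7 * PEAK_PER_UNIT; /* PPE on game code, 7 SPEs on FP */
    printf("Scenario 1: Xenon %.1f vs CELL %.1f GFLOPS (%.1fx)\n",
           xenon_fp, cell_fp, cell_fp / xenon_fp);

    /* Scenario 2: 2 Xenon cores on game code; guess that the PPE plus
       4 SPEs are needed to match them on the PS3 side. */
    xenon_fp = 1 * PEAK_PER_UNIT; /* 1 Xenon core left for FP */
    cell_fp  = 3 * PEAK_PER_UNIT; /* 3 SPEs left for FP */
    printf("Scenario 2: Xenon %.1f vs CELL %.1f GFLOPS (%.1fx)\n",
           xenon_fp, cell_fp, cell_fp / xenon_fp);
    return 0;
}

This prints 3.5x for scenario 1 and 3.0x for scenario 2, which is where the ">3x" and "at least 3x" claims above come from.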
 
Let me know when a game engine uses 200 GFLOPS and I'll believe it's not a marketing number.

A synthetic benchmark paid for by IBM has nothing to do with efficiency in a real-world game scenario.

I'm not convinced an extra 100 GFLOPS of theoretical power in the CPU will have any meaningful impact on anything short of slightly better physics. I'm also not convinced that Sony's system is balanced correctly; it seems to put a huge focus on FLOPS while having a very weak in-order PPE to run its game code. MS has produced a CPU with very high peak FLOP ratings, but also has 3 homogeneous processors for running the game code.

Basically, what reason is there to believe that so much FLOP power is required in today's games? Just because Sony says so? What about the rest of the processing? What about tasks not suited to SPEs? Is a single 3.2 GHz in-order CPU enough? How do SPEs compare to PPEs in this type of situation?

GPUs will make the biggest difference, like they always do. Personally, I really believe it will come down to whichever GPU has the most legs and is most future-proofed. This CPU stuff is just hype from Sony to distinguish themselves and their product as something special.
 
The 215 and 114 numbers aren't necessarily correct (it's been talked about in other threads); I believe the real numbers should be ~200 and ~76. Not that it really matters.

Let's not jump to conclusions.
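For reference, the ~200 and ~76 figures fall out of a simple peak calculation, if you assume each SPE and each 3.2 GHz VMX-capable core can issue a 4-wide single-precision fused multiply-add per cycle (8 flops/cycle). That per-cycle figure is the usual assumption behind these numbers, not something either vendor has benchmarked; a sketch:

Code:
/* Where ~200 and ~76 GFLOPS come from, assuming 8 SP flops/cycle
   per unit (4-wide fused multiply-add) at 3.2 GHz. Paper peaks only. */
#include <stdio.h>

int main(void) {
    const double clock_ghz = 3.2;
    const double flops_per_cycle = 8.0;            /* 4-wide FMADD assumption */
    double per_unit = clock_ghz * flops_per_cycle; /* 25.6 GFLOPS per unit */

    double cell  = (1 /* PPE VMX */ + 7 /* SPEs */) * per_unit;
    double xenon = 3 /* cores */ * per_unit;

    printf("Per unit: %.1f GFLOPS\n", per_unit); /* 25.6  */
    printf("CELL:     %.1f GFLOPS\n", cell);     /* 204.8 */
    printf("Xenon:    %.1f GFLOPS\n", xenon);    /* 76.8  */
    return 0;
}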
 
Dave Baumann said:
IIRC John has been fairly vocal about the development path of DX over the past few years, especially in light of the rockier development path of OpenGL. He's said that he has no particular ties to OpenGL, but that he chose it because it was the right thing for him to do at the time; his previous comments show that he's generally been quite happy with DX9 in particular. I should imagine the primary reason for his not using it for Doom 3 was purely that it wasn't available when he started work on it.

Actually, John originally chose OpenGL and viciously attacked DX because DX was much more complex than OGL to use, not because "it was the right thing to do at the time". His language was clear.

John Carmack said:
Direct-3D IM is a horribly broken API. It inflicts great pain and
suffering on the programmers using it, without returning any
significant advantages. I don't think there is ANY market segment
that D3D is appropriate for, OpenGL seems to work just fine for
everything from quake to softimage. There is no good technical reason
for the existence of D3D.

I'm sure D3D will suck less with each forthcoming version, but this is
an opportunity to just bypass dragging the entire development community
through the messy evolution of an ill-birthed API.

The overriding reason why GL is so much better than D3D has to do with
ease of use. GL is easy to use and fun to experiment with. D3D is
not (ahem). You can make sample GL programs with a single page of
code. I think D3D has managed to make the worst possible interface
choice at every opportunity. COM. Expandable structs passed to
functions. Execute buffers. Some of these choices were made so that
the API would be able to gracefully expand in the future, but who
cares about having an API that can grow if you have forced it to be
painful to use now and forever after? Many things that are a single
line of GL code require half a page of D3D code to allocate a
structure, set a size, fill something in, call a COM routine, then
extract the result.

Ease of use is damn important. If you can program something in half
the time, you can ship earlier or explore more approaches. A clean,
readable coding interface also makes it easier to find / prevent bugs.


Now, you may argue that after Quake 3, with DirectX 9 becoming much more programmer-friendly, his views changed, and I think that is right. I still think DX's overall design requires code to be more verbose than it needs to be, and that its function names are not as elegant or as consistent in their naming rules as OGL's. DX10 is simplifying things further and removing assembly language, and that may help more.

However, it's a shame that the industry had to wait a decade for Microsoft to catch up and fix their broken API, when OpenGL has been there all along: cross-platform, highly functional, and something that could have evolved with Microsoft as a participant instead of an antagonist.
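To put the "single page of code" point in concrete terms, here is the kind of minimal GL 1.x sample Carmack is describing: a window, a clear, and one triangle. The use of GLUT and the specific triangle are illustrative choices of mine, not from his post; the D3D3-era equivalent required COM object creation and execute-buffer setup before drawing anything.

Code:
/* A complete GL 1.x sample in one page: window, clear, one triangle.
   Uses GLUT for windowing; build with -lglut -lGL. */
#include <GL/glut.h>

static void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);           /* one immediate-mode triangle */
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("one-page GL sample");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}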
 
scooby_dooby said:
Let me know when a game engine uses 200 GFLOPS and I'll believe it's not a marketing number.

A synthetic benchmark paid for by IBM has nothing to do with efficiency in a real-world game scenario.

Depends on the type of use, no? For some uses efficiency can be very high; for other uses maybe not so efficient. But because of the large number of cores, even inefficient use can give very, very high performance.

I'm not convinced an extra 100 GFLOPS of theoretical power in the CPU will have any meaningful impact on anything short of slightly better physics.

The peak difference is really 130 GFLOPS, but that is not so important, because only CELL has a benchmark for this. What is important is demos, and yes, physics is a good use for CELL, as are animation, skinning, and any graphics processing. The London and TRE demos show that graphics can run at very high efficiency on CELL, so CELL can make a very big contribution to graphics.
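As a concrete example of why skinning-type work maps well to SPE-style cores, here is a plain scalar C sketch of a matrix-palette skinning inner loop: long runs of branch-free floating-point math over streaming data. The struct, function name, and data layout are illustrative assumptions of mine, not from any SDK.

Code:
/* Matrix-palette skinning inner loop: the kind of streaming,
   branch-free floating-point work that suits SPE-style cores.
   Layout and names are illustrative only. */
typedef struct { float m[12]; } Mat34;  /* 3x4 bone matrix, row-major */

void skin_positions(const float *in_xyz, float *out_xyz, int count,
                    const Mat34 *palette,
                    const unsigned char *bone_idx,  /* 4 per vertex */
                    const float *weights)           /* 4 per vertex */
{
    for (int v = 0; v < count; ++v) {
        const float x = in_xyz[3*v], y = in_xyz[3*v+1], z = in_xyz[3*v+2];
        float ox = 0.f, oy = 0.f, oz = 0.f;
        for (int b = 0; b < 4; ++b) {               /* blend 4 bones */
            const float w = weights[4*v + b];
            const float *m = palette[bone_idx[4*v + b]].m;
            ox += w * (m[0]*x + m[1]*y + m[2] *z + m[3]);
            oy += w * (m[4]*x + m[5]*y + m[6] *z + m[7]);
            oz += w * (m[8]*x + m[9]*y + m[10]*z + m[11]);
        }
        out_xyz[3*v] = ox; out_xyz[3*v+1] = oy; out_xyz[3*v+2] = oz;
    }
}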

I'm also not convinced that Sony's system is balanced correctly; it seems to put a huge focus on FLOPS while having a very weak in-order PPE to run its game code. MS has produced a CPU with very high peak FLOP ratings, but also has 3 homogeneous processors for running the game code.

Xenon's 3 processors can each do either game code or flops, not both at once, and they are the "weak in-order" type like CELL's PPE. So if 1 core is on game code, only 2 are on flops; if 2 are on game code, only 1 is on flops. For the PS3's CELL, if the PPE is on game code, then 7 SPEs are on flops. If the PPE and 4 SPEs are on game code, then 3 are on flops.

Basically, what reason is there to believe that so much FLOP power is required in today's games? Just because Sony says so? What about the rest of the processing? What about tasks not suited to SPEs? Is a single 3.2 GHz in-order CPU enough? How do SPEs compare to PPEs in this type of situation?

We cannot know the SPE efficiency for this situation; maybe 2 are enough, but I will guess 4, so the comparison with Xenon is not too bad for Xenon. If I say 2 are enough, then for a game with 2 Xenon cores on game code and 1 on flops, the PS3 version will have the PPE and 2 SPEs on game code and 5 SPEs on flops. So if 2 SPEs can match 1 Xenon core or PPE on other code, then CELL has 5x the flops. That is not nice to say, so that is why I said 4 SPEs equal a PPE for game code in my earlier post.

GPUs will make the biggest difference, like they always do. Personally, I really believe it will come down to whichever GPU has the most legs and is most future-proofed. This CPU stuff is just hype from Sony to distinguish themselves and their product as something special.

I do not think it is hype, my friend, and there are many examples supporting my statement, but if that is your opinion, that is OK. You have the freedom to believe what you choose, no?
 
http://glowystarman.tripod.com/locopollo/id12.html

Xbox 120 GFLOPS, PS2 6.2 GFLOPS total system.
You might want to learn not to trust sources that say "I dunno" for a lot of things. There's a lot of data in there that's just plain wrong. I don't know who the hell dreamed up those figures, but the PS2's CPU alone was rated at 6.2 GFLOPS peak. The thing is that since it doesn't have any sort of shader-related hardware, they have a harder time getting away with "NVFLOPS"-type figures for the GS, like you have for the Xbox, 360, and PS3. Just as the 360's 1 TFLOP figure includes less than 100 GFLOPS from the CPU, your 120 GFLOPS for the Xbox makes about the same amount of sense.

I can't say I've ever heard the NVFLOPS figure for the Xbox, though I doubt I would have paid attention if it was ever told to me. But I can tell you the CPU alone is typically rated at only 1.5 or 3 GFLOPS (depending on who you ask). There was never any doubt that the PS2's CPU had more raw power; it was just a pain to get at. I've heard the 1-trillion-instructions-per-second tripe before, and it was a cheapo case where all but a few thousand instructions were simply not run at all.
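The 1.5-vs-3 split presumably comes down to how many single-precision SSE flops per cycle you credit the 733 MHz part with; a quick sanity check, where the 2 and 4 flops-per-cycle figures are my assumptions about how the quoted ratings were derived:

Code:
/* Sanity check on the Xbox CPU's quoted 1.5 / 3 GFLOPS figures.
   Assumes 733 MHz and either 2 or 4 SP SSE flops per cycle,
   depending on how generously you count. */
#include <stdio.h>

int main(void) {
    const double clock_ghz = 0.733;
    printf("Conservative (2 flops/cycle): %.2f GFLOPS\n", clock_ghz * 2); /* ~1.47 */
    printf("Generous     (4 flops/cycle): %.2f GFLOPS\n", clock_ghz * 4); /* ~2.93 */
    return 0;
}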
 
ihamoitc2005 said:
Xenon's 3 processors can each do either game code or flops, not both at once

Phew, I'm glad that game code doesn't use flops! What about megahurts? Does gamecode use megahurts?
 
Flops

function said:
Phew, I'm glad that game code doesn't use flops! What about megahurts? Does gamecode use megahurts?

Game code can use anything, my friend, and what counts as game code can also be anything, no? But for discussion we make a simplification. Sorry if it is confusing.
 