randycat99
You know what I meant.
randycat99 said: jvd said: Btw, 1 TFLOP is a lot of speed and power. But 1 TFLOP does not equal 100s of PCs...
Actually, 1 TFLOP would be about one hundred 3 GHz P4's. If one P4 can run the hell outta Doom 3, I don't think a lot of people will be complaining about how a game that utilizes 100x the resources of Doom III will look. So what is there to be disappointed over?
Mind you (if you hadn't read the first post in this topic), this topic isn't about whether or not you think the PS3 will deliver 1 TFLOPS. It is about what we can expect assuming it does.
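For rough context, the back-of-the-envelope version of the "one hundred P4s" claim, assuming a 3 GHz P4 can retire roughly 4 single-precision FLOPs per cycle through SSE (a peak figure, and an assumption on my part, not a measurement):

$$
100 \times 3\,\text{GHz} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle}} = 1.2 \times 10^{12}\ \text{FLOPS} \approx 1.2\ \text{TFLOPS}
$$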
Panajev2001a said: jvd... if at 65 nm (with specs probably targeted for 45-50 nm) they do not pass 1 GHz, it would mean you are getting a BEAST even bigger than the one we are expecting...
I think they will be able to push the clock-speed faster than that...
Sony has almost always delivered specs higher than the initial demos of the HW (the EE was demoed at 250 MHz and shipped at 300 MHz)...
randycat99 said: It entirely depends on what each of those PCs (of the hundred) is capable of. I think it's pretty pointless to make a statement as vague as that. Is a game console going to be rendering at movie resolutions or TV-ish resolutions? Are there shortcuts that can be exploited in a realtime videogame vs. doing everything "genuine" in a movie render?
randycat99 said: My question from before still remains unanswered: is there a renderfarm out there being used to make movie sequences that was actually the size of 100 nodes? Surely they weren't 3 GHz P4 jobs, either. My guess is that 16 or 32 would be a more likely number, and that each unit isn't sporting the latest clock speeds.
randycat99 said: ???
If it's 1 TFLOP of performance either way, why would it be substantially different?
And I never said both would be rendering in real time. I said the one that has weeks to render the scene can add more detail to it. Of course there would be no big difference between running a 100-node system and a monolithic one in real time; the monolithic one would probably be faster. But as with anything, the more time you have, the more you can actually do. If I wanted to, I could make a scene that would take 3 years to render on 100 PCs (I'm talking TV res and real artists, since I have trouble making stick figures) and then ask you to do the same scene in real time on the PS3 with all its effects, and you wouldn't come close. When did Toy Story come out? What, 97-98? Have we seen real-time graphics that good yet? I'm sure the PS2 is a lot faster than what they rendered Toy Story on.
randycat99 said: ...But you are comparing a rendering implicitly tied to movie resolution to another rendering done at TV resolution. There are a lot of demands that will radically decline if the target resolution is lower. Now throw in the factor that you may use a speedier, realtime ray-tracing procedure in a game rather than genuine, high-quality raytracing in a movie render.
So if you have a 1 TFLOP 100-node system and a 1 TFLOP monolithic system rendering to the same resolution in realtime, why would there necessarily be a difference? That was my point.
Panajev2001a said: jvd, I understand your point, yet I think guys like the Sony/IBM/Toshiba group have access to better manufacturing processes and fabs than nVIDIA (TSMC was the bottleneck...)... according to a very optimistic estimate, nVIDIA and its partners (TSMC) are approx. 6 months behind guys like Intel and IBM as far as manufacturing technology goes... it could be a bit more, but still, 6 months is a LOT in the electronics world...
I think they are in a better situation than TSMC was when it started having problems at .13 um, even if the Broadband Engine should be such a large chip...
If you have a pool of computing resources (a methodology the PC industry is moving to), why can't shader programs (e.g. vertex shaders, to start) be run on them? What prevents me from drawing 2 polygons for every pixel on a CPU, running shaders on them, and then handing them off to a highly clocked but simpler rasterizer which kicks them out to the framebuffer?
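(Purely as an illustrative sketch of that split, with invented names and a trivial stand-in for the fill stage, nothing Cell- or P10-specific: per-vertex work runs as ordinary CPU code, and only finished screen-space triangles reach a simple rasterizer.)

```python
# Minimal sketch: run the "vertex shader" as general-purpose CPU code,
# then hand screen-space triangles to a dumb rasterizer stand-in.
# All structures and names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float; y: float; z: float
    r: float; g: float; b: float

def vertex_shader(v: Vertex, scale: float) -> Vertex:
    # Stand-in for an arbitrary per-vertex program: any transform,
    # skinning, or lighting math could run here on the CPU pool.
    return Vertex(v.x * scale, v.y * scale, v.z, v.r, v.g, v.b)

def rasterize(triangle, framebuffer: list) -> None:
    # Simple fixed-function-style back end: just record the triangle.
    framebuffer.append(triangle)

def draw(triangles, framebuffer, scale=2.0):
    for tri in triangles:
        shaded = tuple(vertex_shader(v, scale) for v in tri)  # CPU-side shading
        rasterize(shaded, framebuffer)                        # simple fill stage

fb = []
draw([(Vertex(0, 0, 0, 1, 0, 0), Vertex(1, 0, 0, 0, 1, 0), Vertex(0, 1, 0, 0, 0, 1))], fb)
print(len(fb), "triangle(s) submitted to the rasterizer")
```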
Or, what prevents me from running a more Brazil-like system (as opposed to PRMan's REYES), with a ray-tracing routine divided between APUs that each trace independent rays on a Cell-like MPU? Could a fragment/vertex shader (which should be unified in DX10, IIRC) do this at a comparable speed?
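(Again as a minimal sketch, assuming nothing about the real Cell ISA: the point is only that primary rays are independent, so the frame can be split into chunks and traced by separate APU-like workers with no communication until the results are gathered.)

```python
# Sketch of splitting independent primary rays across workers, the way
# APU-like units could each take a slice of the frame. The "tracing" is
# a toy stand-in; a real tracer would intersect scene geometry per ray.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT, N_WORKERS = 64, 64, 4  # tiny frame, 4 APU-like workers

def trace_row(y: int) -> list:
    # Each ray is independent: shade purely from its own coordinates.
    return [((x / WIDTH) + (y / HEIGHT)) * 0.5 for x in range(WIDTH)]

def render() -> list:
    # Hand disjoint scanlines to the workers; results are only combined
    # at the end, when they are gathered back into the framebuffer.
    with ProcessPoolExecutor(max_workers=N_WORKERS) as pool:
        return list(pool.map(trace_row, range(HEIGHT)))

if __name__ == "__main__":
    image = render()
    print(f"rendered {len(image)}x{len(image[0])} 'frame' on {N_WORKERS} workers")
```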
Nobody is talking about software rasterization. In case you don't get it, and until someone proves or explains different: a design like Cell isn't that much different from a P10 or other advanced architecture.
Comparing physical shader speed (none of the sampling and filtering crap)...
why would a DX10 GPU be faster at executing very long shaders with conditional branches than a modern, well-designed CPU?
My main issues with your argument
BTW, you say T&L will not be a factor in the next generation... well, I disagree, as others did... fully dynamic and global lighting models (introducing things like radiosity or ray-tracing) will be a huge tax on any next-generation system, and you need RAW horsepower to do those... and IMHO the end result would be pretty good looking.
That would be informally equivalent to a renderfarm containing over one thousand 400 MHz PowerMac G4's. Those aren't exactly pokey machines even using just one of them.
Has Pixar used anything remotely as extensive as that in the past to make a movie?
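The implied arithmetic, assuming roughly 1 GFLOPS of usable throughput per 400 MHz G4 (a few FLOPs per cycle; this per-machine figure is an assumption for illustration, not a benchmark):

$$
\frac{1\ \text{TFLOPS}}{\sim 1\ \text{GFLOPS per G4}} \approx 1000\ \text{machines}
$$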
I have stated repeatedly that 1 TFLOPS of general-purpose processing won't do much against dedicated hardware, but Sony will almost certainly have a decent dedicated rasterizer.
1024 × 2.8 GHz × 4 FLOPs/cycle ≈ 11.5 TFLOPS
I mean, using Cells, assuming they hit the low 1 TFLOPS figure, it would take 1000 of those, and if each one equals one hundred 3 GHz P4s, it would take nearly 100,000 Pentiums to achieve performance near a petaflops.
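Spelling that chain out, using the same assumptions as above (1 TFLOPS per Cell, roughly one hundred 3 GHz P4s per TFLOPS):

$$
1000 \times 1\ \text{TFLOPS} = 1\ \text{PFLOPS}, \qquad 1000 \times 100\ \text{P4s} = 100{,}000\ \text{P4s}
$$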