If PS3 can really do 1Tflops

Vince-

Actually, you listed none AFAIK

Branches.

Or it's [fragment] nothing different than what will eventually happen post-DX10, or whenever they combine the processing resources in the 3D pipe. How is this impossible?

Being able to compete on a performance level is what makes it nigh impossible.

Last time I checked we were talking processing, not memory... nice try, buddy. Your points are pretty unfounded IMHO, especially on the topic of realtime raytracing, where a cellular processor would utterly destroy you doing it in a DX10 shader.

This entire quote is way out there; are we talking real world or fantasy? A processor must deal with its platform, and that is a major factor in its weakness compared to GPUs. Ray tracing as a rasterization process? You could simulate the effect near perfectly using future shaders, but try to run actual ray tracing on a CPU and the memory constraints are going to kill you. There is no way around this for the foreseeable future (not unless you plan on them having multiple GBs of eDRAM for the geometry load you are talking about).

The P10 isn't a gaming card

And how does that explain its very poor shader performance? I don't expect it to tear up UT2K3, but given all the hype around it I expected it to hold its own on shader performance against then-current GPUs (which it didn't/doesn't).

Ben, AFAIK nobody here is talking full software rasterization (maybe Faf is, in which case I apologize). And nobody is talking about a CPU architecture like that found in x86. When will you get this, bud?

Look at the title of the thread again, then reread my first post in this topic, and then look at the numerous replies discussing exactly that: full software rendering. And what does x86 have to do with the sky being blue?

Zidane-

I believe the final Cell processor could reach a TFLOPS rating beyond the speculated and officially announced number of 6+ TFLOPS; I believe it could reach slightly above 10 TFLOPS....

The general consensus is that 1 TFLOPS is pushing it and iffy; I haven't heard anyone seriously consider the possibility of Sony's initial wet dreams ;)

Panajev-

Ben, the funny thing is that you invented your own argument and set up your own assumptions and restrictions that declared you the winner...

Are you reading one thread and then posting to this one?

If PS3 can really do 1Tflops won't it completely destroy GC2 and Xbox2 in the graphics department?

I did not post that, someone else did. I pointed out that 1 TFLOPS alone wouldn't assure Sony of so much as parity; then everyone jumped in to claim that Sony created God and can do things we can't imagine ;) :p

Who in this thread was taking 1 TFLOPS and thinking the WHOLE 3D pipeline was going to be implemented in software?

Considering the thread title, I don't see why people were thinking otherwise. I made it quite clear in my first post that of course Sony would have a rasterizer, so it wouldn't need to rely on pure software. So the topic was based on what I responded to, and then a few wanted to change it into something else.

You say that we should not be amazed by 1 TFLOPS because a PURE software renderer would not be competitive in 2005 even with 1 TFLOPS and then you say that it doesn't really matter as PS3 will not really go on a purely software route and there will be a rasterizer with 3D dedicated silicon...

Read the thread title and first post and maybe you will understand. I was pointing out that 1 TFLOPS wouldn't do much if it was pure software; we need to know where Sony's rasterizer is going to fall before any sort of reasonable speculation can start as to where it will land relative to its peers, unless you worship Sony as the ultimate deity, that is ;)

Faf-

I was talking about geometry benchmarks.

I wasn't :)

Actually, from what I remember, high-end CPUs typically even outperformed GPUs in those situations.

They used to; GPUs are running them significantly faster now, and the gap is growing very quickly.

Marco-

Offline renderers calculate everything with enormous precision - more than likely what they are doing can be reasonably approximated with simpler algorithms, speeding up the process significantly?

DX9 has increased precision fourfold in terms of what GPUs can do. GPU precision levels are advancing much faster than those of offline renderers.

Randy-

It certainly is a grand departure from a generic x86 CPU implementation sitting on your desk. I think that's where the bulk of these "software renderers can never be fast" arguments crumble to the ground.

Pixar just moved their render farm to x86. Steve Jobs' own company now runs x86 for their most demanding tasks. In terms of cost vs. performance, x86 obliterates the higher-end offerings.

CG looks pretty damn good (maybe not even today-today, but lets just say FF:TSW movie sort of CG), and that level is plausibly achievable in realtime CG on next generation hardware.

A good approximation, perhaps; there is no way they are going to pull off true radiosity in real time (a full breakdown on my 2GHz rig takes about half a day per frame).
 
They used to; GPUs are running them significantly faster now, and the gap is growing very quickly.
The GF4 was still behind, or roughly on par, so it isn't that far back...
Not really sure how things stand with the R300/GFFX now, but that's beside the point I was making anyhow.
 
Read the thread title and first post and maybe you will understand. I was pointing out that 1 TFLOPS wouldn't do much if it was pure software; we need to know where Sony's rasterizer is going to fall before any sort of reasonable speculation can start as to where it will land relative to its peers, unless you worship Sony as the ultimate deity, that is ;)

I agree that 1 TFLOPS for pure software rendering would not be that much against dedicated silicon plus a nice 3-4 GHz Prescott-like CPU...

Again, the thing is that I have seen NOBODY arguing that as a pure software renderer 1 TFLOPS would be competitive by 2005 standards...

My point was that this is not because of how Cell works, nor because the APUs cannot efficiently be used for Vertex and Fragment Shading ( they can be used quite effectively IMHO )... but because basic things like FSAA, texture filtering, etc... would require LOTS of FLOPS and would be much more efficiently implemented in silicon.
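Just to give a rough idea of why filtering is so FLOPS-hungry in software, here is a minimal Python sketch of a single bilinear texture fetch (the function name and texture layout are mine, purely for illustration): even this simplest filter costs on the order of a dozen arithmetic ops per sample, before trilinear or anisotropic even enter the picture...

```python
def bilinear_fetch(tex, u, v):
    """One software bilinear texture fetch on a 2D grid of floats.

    tex: list of rows, u/v in [0, 1]. Every textured pixel needs at
    least one of these; trilinear doubles the cost and anisotropic
    multiplies it again, which is why dedicated silicon wins here.
    """
    h, w = len(tex), len(tex[0])
    x = u * (w - 1)              # map [0,1] into texel space
    y = v * (h - 1)
    x0, y0 = int(x), int(y)      # top-left texel of the 2x2 footprint
    x1 = min(x0 + 1, w - 1)      # clamp at the texture edge
    y1 = min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0      # fractional blend weights
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling a 2x2 texture [[0.0, 1.0], [2.0, 3.0]] at (0.5, 0.5) blends all four texels into 1.5. Multiply those few ops by millions of pixels, several textures per pixel, 60 frames a second... a big slice of that 1 TFLOPS disappears into work a fixed-function unit does for free.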

I read clam64's posts and he was not saying anything about full software rasterization...

Your first post was about "tsk... 1 TFLOPS for the CPU + current GS == crapola"

I do not find the next GS or Visualizer to be in the same situation as the current GS... You have seen the patent, you have seen that the GPU would have part Cell elements ( 4 PUs + 4 APUs ) and silicon dedicated to basic HW ops ( AA, texture filtering, etc... )... The current GS is not programmable, certainly not at DX8 level, let alone DX9... Mixing custom silicon and the Cell architecture brings something programmable enough IMHO ( you saw that patent too )...

Sorry if I appeared insulting, but we DO have an idea of what the Rasterizer looks like... the patent doesn't only describe the TFLOPS-class Broadband Engine, but also a very programmable Visualizer...

You basically predicted doom and gloom if PS3 had only 1 TFLOPS, because in pure software rendering it would have the "power of a 2005 e-machine" ( BTW, I bet the 2005 e-machine would have that as its max, but in terms of actually achieving it... it would fall quite behind... I somehow doubt we will see large quantities of low-latency, high-bandwidth e-DRAM embedded on its CPU and GPU... ), and then, after some posts, you went "oh well, I do believe they will have a decent GPU this time, so pure software rasterization performance won't be the issue"... :?

You could have expressed your feeling of "1 TFLOPS is not much by 2005 standards" in a better way...
 
DX9 has increased precision fourfold in terms of what GPUs can do. GPU precision levels are advancing much faster than those of offline renderers.

Will this trend continue ( Marconelly makes a good point... but maybe by DX10-11 we will be able to HW accelerate RT... GPUs will be very fast programmable HW that can be used for off-line rendering ) ?

Personally I believe that GPUs might reach the point where they can be used for off-line rendering... by that time we might have a PFLOPS Cell v2 or IA-64, and GPUs might be replaced by CPUs... ;)
 
DX9 has increased precision fourfold in terms of what GPUs can do. GPU precision levels are advancing much faster than those of offline renderers.
I used the term 'precision' very loosely, to describe the amount of sampling, reflection and tracing depth, antialiasing, etc. that goes into offline-rendered movies. Enviro-mapping in racing games is an example of what I was talking about. Today's games all have simplified enviro models that are used for realtime enviro-map calculation. It's reasonably close to the real thing, but so much faster to perform than real ray-traced reflections.
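The difference is easy to see in code. Here's a minimal Python sketch (the function is mine, just for illustration): the realtime approach computes one cheap reflection vector per pixel and uses it to index a pre-baked environment map, while true ray tracing would have to march that reflected ray through the whole scene.

```python
def reflect(d, n):
    """Reflect incident direction d about unit normal n: r = d - 2(d.n)n.

    This one formula plus a cube-map lookup is the entire realtime
    'reflection'; an offline renderer instead traces the reflected ray
    against the full scene geometry, recursively, per bounce.
    """
    dot = sum(di * ni for di, ni in zip(d, n))
    return [di - 2 * dot * ni for di, ni in zip(d, n)]
```

A ray hitting a floor (normal (0, 0, 1)) straight on, d = (0, 0, -1), bounces straight back up to (0, 0, 1); in a game that vector just picks a texel out of the environment map, with no scene traversal at all.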
 
BenSkywalker said:
Pixar just moved their render farm to x86. Steve Jobs' own company now runs x86 for their most demanding tasks. In terms of cost vs. performance, x86 obliterates the higher-end offerings.

Yes, I'm well aware of that. It still doesn't change the limitations of the typical Intel CPU/memory setup with regard to software rendering, which you yourself keep harping on. My earlier remark still stands: the supposed PS3 architecture is such a grand departure from that that one cannot automatically make the same associations with rendering limitations as they do with the P4 sitting on their desk. You seem to be unconvinced it would make any difference, which is fine as your own opinion. For everyone else, there is a lot of promise in this peculiar architecture.
 
To be realistic, we can expect to see the PS3 worldwide in late 2005.
Look back 3 years from today and then "try" to look forward 3 years in time.
It's impossible to see what the future holds in its hands, but we can always speculate.
The possibilities/limits of the PS3 will simply come down to the process, IMO.
 
BenSkywalker said:
Look at the title of the thread again, then reread my first post in this topic and then look at numerous replies discussing exactly full software rendering.

Topic = If PS3 can really do 1TFlop

And believe it or not, conversations exist before you join them and spread your infinite wisdom upon us all. Just because you enter, start talking out of nowhere about full rasterization, and then get replies doesn't mean that's what we were talking about, or should be, as it's a purely theoretical argument that's pointless.

Or it's [fragment] nothing different than what will eventually happen post-DX10, or whenever they combine the processing resources in the 3D pipe. How is this impossible?

Being able to compete on a performance level is what makes it nigh impossible.

Ben, nobody is talking about pure software rendering; as Panajev said, stop making up your own arguments with your own conclusions.

This entire quote is way out there; are we talking real world or fantasy? A processor must deal with its platform, and that is a major factor in its weakness compared to GPUs.

Ben, stop being like this. We're talking about computational ability. When talking about ability or potential, you must remove all other bottlenecks and then see what the actual upper bound on the processing elements is.

Your argument is so ridiculous. An R300 core must deal with its platform; thus we'll only bench it at 640*480 and conclude that's the core's performance. :rolleyes:

And how does that explain its very poor shader performance? I don't expect it to tear up UT2K3, but given all the hype around it I expected it to hold its own on shader performance against then-current GPUs (which it didn't/doesn't).

(a) I don't know - it could be anything.
(b) It's not Cell
 
IF? IF? I...F?!! There are no IFs in $ony world. PS2 was 50X the DC and boasted a cool 6.2 GFLOPS of supercomputer power!

PS3 WILL deliver SUPERSUPER KOMPUTATIONAL POWERS, 100X TODAy FaSTEST pC, all IN A BOX WITH BLURAY AND HISPEED FIBEROPTIC adaptor, for a dream price of USD299.

1999,
"We want to propose 'emotion synthesis,' " said Ken Kutaragi, executive vice president and co-chief operating officer of SCE, who has led Playstation R&D. "With this keyword, we want to realize the generation of emotion by calculation."

http://www.eetimes.com/story/OEG19990302S0026


2003,
"With hundreds of CELL chips and the speed of Broadband, you can now play games so realistic that you will feel for the game. Cry with sadness, feel the anger of revenge, save the princess and more, all in realtime FFTSW quality!" said Ken Kutaragi, the visionary creator of Playstation and THE dance master. "I call it Broadband Cellular Realization. Prepare for the next coming." , Kutaragi winked.

http://www.bullshit.com/story/usualsonyhype#141

8)
 
Another thing: is the PS3 architecture more or less set in stone now? The patent? They could up some hertz and RAM, but the main thing is there already, right?

So technically, PS3 is like 2003 hardware? As in, the types of rendering/shading/color precision/FSAA are today's technology? Will it not be like PS2 again? Outdated tech?

The good thing is Sony has a good plan outline. They would be better able to produce PS3s more cost-effectively and all, but will the performance suffer?
 
chaperone said:
Another thing: is the PS3 architecture more or less set in stone now? The patent? They could up some hertz and RAM, but the main thing is there already, right?

So technically, PS3 is like 2003 hardware? As in, the types of rendering/shading/color precision/FSAA are today's technology? Will it not be like PS2 again? Outdated tech?

The good thing is Sony has a good plan outline. They would be better able to produce PS3s more cost-effectively and all, but will the performance suffer?



Oh god................ :cry:
 
I agree. :rolleyes:

It is not "outdated" technology, because it cannot even be realized in hardware using today's lithography standards (not in anything you could sell on the market, that is). The entire architecture is betting on a specific lithography threshold being feasible in 2005. Additionally, this is slightly different from a GPU design being solidified today. The functions in a GPU are typically set in stone, whereas the programmability of a CPU-centric design offers considerable flexibility for meeting the needs and challenges of some future date. As long as the brute processing resources are available, you can be assured the design will hold its own in 2005.
 
The patent originates from before IBM's involvement... I would not put a lot of stock in it.

Some things won't change, of course, simply because they are the patently obvious way of doing things (one of the reasons the patent had to be filed, probably). The hierarchical organization of general and low-level processors, networking, and memory is inescapable... as is the reactive nature of the processing (although it is not a model very popular with most of the software development crowd).
 
I see some of you have strong faith in $ony, the Sony that promises this
ps2demo.jpg

Clean and sharp looking textures seem to be so far away now. :(

yet could only deliver this:
ffx_eng_018.jpg


Grainy textures, low-res video and tons o' shimmering. If only the PS2 had image quality as good as the DC's, you would not see so much PS2 bashing.

Right now, how much graphiX rendering expertise do Sony, Toshiba and IBM have? How do they compare to heavyweights like ATi? I fear PS3 might be another engineering disaster. :cry:
 
Clean and sharp looking textures seem to be so far away now.
Far or not, there was none of that in that demo to begin with. (but then you would have needed to actually see the demo at least once to be able to tell that... ;))
 
"Another thing, is the PS3 architecture more or less set in stone now? The patent? They could up some hertz and ram but the main thing is there already right?"

First off, smarty, NOTHING has been confirmed. The patent tells a lot about Cell, but it says nothing about what the performance of the Graphics Synthesizer 3 will be.

"So technically, PS3 is like a 2003 hardware? As in the types of rendering/shading/color precision/fsaa are today technology? Will it not be like PS2 again? Outdated tech?"

PS2... outdated? You do remember that the PS2 is based on 1999 technology, right? And the specs were incredible for the time; how could you *not* expect an Xbox or GameCube to be technically better when they came out waaaay after it?

"Clean and sharp looking textures seem to be so far away now.

yet could only deliver this:"

Good job, fanboy, comparing a high-res picture to a low-res screenshot taken from a TV screen with a digital camera. Nice job trying to spread your crappy rhetoric around; too bad it's not going to work.


silenthill3_screen001.jpg


Why not be fair? Compare high res with high res, don't use little fanboy tactics.
 
Paul:
You do remember that PS2 is based on 1999 technology right? And the specs were incredible on the thing, how could you *not* expect a Xbox or gamecube to be better than it technicaly when they come out waaaay after it?
The technology in all chips has to be finalized well before production. It takes time to make and test samples and to ramp up the fabrication processes. The problem is even worse when you have to share production lines with chips running on other processes... I remember days of downtime on fab implanters simply for switching the gases that had to be used, and months of partial downtime on new implanters that were being calibrated to the production line.
 