Is the PS3 able to perform ray tracing via Cell?

Vysez said:
Intel17 said:
Will people be able to buy Cell based computers in the near future, or is the PS3 the only way to get your hands on them?
Regardless of the fact that this is slightly off-topic, I don't expect Cell-based workstations to be available to a mainstream, or even enthusiast, base, if you ask me.

IBM, with the loss of Apple, no matter how you see it, is in a shit position when it comes to anything other than their xServers business. (I'm talking about personal computers here.)

On the other hand, I could see Cell-based computers replacing SGI workstations in some fields, like medical imaging, basic VR simulations, etc.

In other words, Cell is definitely not an x86 contender.
Hell, I don't even think it's a PPC contender (and that despite the Cell being, in a way, a PPC + 8 "DSPs").

Isn't the CELL architecture superior to x86 when it comes to heavy math stuff? Also, isn't the bus significantly wider than any Intel or AMD chip?

I'm sorry, I might have gotten lost in the hype!
 
Intel17 said:
Isn't the CELL architecture superior to x86 when it comes to heavy math stuff? Also, isn't the bus significantly wider than any Intel or AMD chip?

I'm sorry, I might have gotten lost in the hype!

Far superior. The FFT example that IBM discussed at the Power.org conference in Barcelona showed a 10x improvement over a Xeon on a 256-point FFT, and two orders of magnitude greater performance on a 16-million-point FFT. They had to do some heavy-duty thinking about how to handle the quite enormous data flow.

http://www.beyond3d.com/forum/viewt...storder=asc&highlight=power+org+barcelona
 
Intel17 said:
Vysez, why don't you think Cell is a competitor to the PowerPC architecture or x86?

Aren't the Cell and more standard PowerPC processors (let's say the 970) and x86 in different markets, although there is some crossover from the non-Cell side?
 
a688 said:
Intel17 said:
Vysez, why don't you think Cell is a competitor to the PowerPC architecture or x86?

Aren't the Cell and more standard PowerPC processors (let's say the 970) and x86 in different markets, although there is some crossover from the non-Cell side?
Yeah, that's what I think.

The problem is not Cell itself. If software were platform-agnostic, new CPU architectures could have a chance in the workstation/server arena.
 
64-bit floating-point performance is important in the workstation market. At ~25 (DP) GFLOPS, Cell might look nice at first glance, but considering its die size and rather poor efficiency (in-order execution, no branch prediction on the vector units, etc.), even a quad-core PPC970 (which should amount to about the same die size) would slap it silly performance-wise. In short, I don't see many reasons why Cell should replace proven solutions in workstations (see Apple) and servers. It might have a future in digital cameras, Blu-ray players and the like.
 
PiNkY said:
In short, I don't see many reasons why Cell should replace proven solutions in workstations (see Apple) and servers.
The way I see it, DP was probably included as a requirement of the ISA; the current chip was obviously aimed primarily at multimedia apps, so DP performance would have been one of the last priorities.
If I understand things correctly, SPE DP works as 2-way SIMD, so a potential revision optimized for DP throughput could run it at half the SP peak.
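
To put rough numbers on that "half the SP peak" point, here's a back-of-the-envelope sketch. The figures it assumes (8 SPEs, 4-wide SP SIMD with fused multiply-add, 3.2 GHz) are the commonly quoted ones, not anything stated in this thread:

// Back-of-the-envelope peak-FLOPS sketch for the "DP at half the SP peak" point above.
// Assumptions: 8 SPEs, 4-wide single-precision SIMD with fused multiply-add
// (2 flops/lane/cycle), 3.2 GHz clock.
#include <cstdio>

int main() {
    const double ghz       = 3.2;  // assumed clock
    const double spes      = 8;    // assumed SPE count
    const double sp_lanes  = 4;    // 4 x 32-bit floats per 128-bit register
    const double dp_lanes  = 2;    // 2 x 64-bit doubles per 128-bit register
    const double flops_fma = 2;    // multiply + add counted as 2 flops

    double sp_peak = spes * sp_lanes * flops_fma * ghz;  // ~204.8 GFLOPS
    double dp_full = spes * dp_lanes * flops_fma * ghz;  // ~102.4 GFLOPS if DP issued every cycle
    printf("SP peak: %.1f GFLOPS\n", sp_peak);
    printf("DP peak at full issue rate (hypothetical revision): %.1f GFLOPS\n", dp_full);
    // The ~25 DP GFLOPS figure quoted above implies the current chip issues DP far
    // less often than once per cycle; a DP-optimised revision could close that gap.
    return 0;
}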
 
I imagine DP-specialised Cells will appear; as long as they're accessed the same way, they can just have their priorities shifted - lower peak SP FP, higher peak DP FP.

Though all we know for sure is that it's appearing in the PS3, in some custom developments, and supposedly in a Toshiba TV or two. I think there's a good dose of 'wait and see' attitude before anyone will even consider using it in a desktop (save the content-creation workstations Sony has mentioned).
 
> "even a quad core ppc970 (which should amount to about equal die size) will slap it silly performance wise."

But why bring up things that can't even be produced? Your solution does not even exist at 3 GHz above, or even as a dual core, never mind quad core! I like to see the heat output of that quad core ppc970 versus CELL at 3.2 GHz that is if the quad core ppc970 could even run it at that clock rate.

A number of important goals of the CELL project was 1) high clock rate, and 2) a number of highly efficient vectorized processors, 3) with reasonable heat output.

Saying CELL is not efficient is like saying it was designed for running word processors only. CELL is incredibly efficient for what it was designed to do. The current PPE is overkill for most desktop apps, and you have the SPE's for everything else.

Apple did not go with CELL, because they would have to re-write all their code, and they already had 5 years porting experience over to the x86. That's the main reason, and the fact the x86 market can offer more variations of a CPU overtime due to the size of that market. Economies of scale and production, and amount of R&D dollars must amount to something!
 
There has been a lot of talk around the internet in the past few weeks since E3 about real-time raytracing and the possibility of next-generation consoles being able to incorporate it. This topic had also come up in early 2003, when a few unknown companies began experimenting with hardware accelerators for raytracers and 3D artists began making their own homebrewed real-time raytracing demos. Back in 2003, the issue wasn't whether real-time raytracing could be done, it was whether it would become a standard. I distinctly remember having a conversation with some of my Blender buddies about this very issue. However, it has become apparent that the issue has shifted from whether it will become a standard to whether it's possible at all, and that makes me wonder why people would have such a problem with accepting real-time raytracing as a possibility for next-generation consoles.

I think I have come up with an answer to that question, but before I state the answer, I'll try to show that real-time raytracing is possible. I'll do that by saying this: raytracing in the past was completely CPU-based, meaning 100% of the raytracer's instructions were carried out by the CPU. The average CPU in late 2002/early 2003 was around 500MHz - give or take a few hundred MHz, depending on your budget at the time. A CPU at those speeds could not handle raytracing in real time at a decent framerate, but it could still be done. Just not comfortably.

Now here's the good part. In late 2003, these guys http://www.saarcor.de/ made a raytracing accelerator that allowed real-time raytracing in 3D computer applications. This was in 2003, and yet for some reason there's an issue today over whether it's possible. The first credible test of real-time raytracing was accomplished on a 500MHz CPU - the Cell is said to run at 3.2GHz. What was an issue in the past is no longer an issue today with advancing technology.

I believe the reason this issue has blown up so much is the misconception that certain rendering techniques are so expensive in the power department that it takes a render farm to produce them, when in fact that is not true. Will real-time raytracing become a standard during the next generation? That depends on the developers and the need for such a lighting technique. But it is possible.

P.S. Here's a real-time raytracing demo if anyone is interested: http://www.realstorm.de

Hope this somewhat clears up the issue.

Edit: 500MHz, not 1300MHz - my mistake.
 
cobragt said:
The average CPU in late 2002/early 2003 was around 500MHz - give or take a few hundred MHz, depending on your budget at the time.
In 2000 I bought a 900MHz Athlon, and it was not even top of the line. I think your budget would have had to be extremely small to buy a 500MHz CPU in 2002-2003.

Edit: 500MHz, not 1300MHz - my mistake.
I think even 1300MHz is a bit low for that timeframe.

Just noticed you probably meant the average speed people owned, not what they were buying. In that case, even today the average speed of all my computers is only about 200MHz :p
 
Well, the problem of realtime raytracing on next-gen hardware is... well, you can take something like CELL, and say quite easily that it is extremely well-suited to the task of raytracing. But that doesn't really say whether or not it's "good enough." I mean, for something with no recursion depth (i.e. raycasting w/ shadows) at 720p 4x antialiased, yeah, CELL should probably handle that without too much of a problem. But then your end result will basically be Doom3 level.
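
For a sense of scale, here's a rough ray-budget sketch for that raycasting-with-shadows case. All the numbers are my own assumptions (1280x720, 4 samples per pixel, a single light contributing one shadow ray per sample, 30 fps), not figures from anyone in this thread:

// Rough ray-budget sketch for "raycasting w/ shadows at 720p 4xAA".
#include <cstdio>

int main() {
    const double width = 1280, height = 720;    // assumed 720p
    const double samples_per_pixel = 4;         // 4x antialiasing
    const double shadow_rays_per_sample = 1;    // one light, no recursion
    const double fps = 30;                      // assumed target framerate

    double primary = width * height * samples_per_pixel;       // ~3.7M primary rays/frame
    double per_frame = primary * (1 + shadow_rays_per_sample); // ~7.4M rays/frame
    double per_second = per_frame * fps;                       // ~220M rays/s
    printf("rays per frame: %.1fM, rays per second: %.0fM\n",
           per_frame / 1e6, per_second / 1e6);
    return 0;
}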

When you're at that level, what's the point? If there isn't enough reason to do a software realtime raytracing engine, why would you? It would boil down to what sorts of gains you'd see. Bear in mind that raytracing polygons is definitely not fast, either. The net losses might outweigh the gains in the end.
 
cobragt said:
Will real-time raytracing become a standard during the next generation? That depends on the developers and the need for such a lighting technique. But it is possible.

If real-time raytracing is possible now on older CPUs, why couldn't it be done with Cell or the XCPU? What reason could there be not to make this tool available in this new generation of game systems? :?
 
I guess the nature of a dynamic, high-geometry environment prohibits it. We're talking about per-frame geometry counts in excess of 10 million triangles in the next gen; calculating ray tracing over such high poly counts would be formidable indeed, especially since everything is zapping along at 30-60fps. We'll probably have to wait another 5 years at least. Tomasi suggested that photorealism has always been a moving target: ask a guy 5 years ago when it would be possible and he'd have said 5 years. That's today, and he suggested such lighting and shading tech is still at least 10 years away.
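
To show why "formidable" is the right word, a quick count of brute-force intersection tests with no acceleration structure at all; the resolution is an assumption of mine, and the 10M triangle figure is the one mentioned above:

// Naive ray-triangle test count per frame, no acceleration structure.
#include <cstdio>

int main() {
    const double rays_per_frame = 1280.0 * 720.0;  // ~0.92M primary rays, no AA or shadows
    const double triangles      = 10e6;            // the 10M-triangle figure mentioned above
    double tests = rays_per_frame * triangles;     // ~9.2e12 intersection tests per frame
    printf("naive tests per frame: %.2e\n", tests);
    printf("per second at 60 fps: %.2e\n", tests * 60);
    return 0;
}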
 
Actually, raytracing (AFAIK) is more efficient for high-poly scenes. It scales better in system demands. It also allows non-tessellated surfaces for perfectly smooth-edged objects at a very low vertex count using NURBS/SDS.

The problem ATM is that it's too processor-intensive. Realtime raytracers (I tried one a couple of years back) are slow and pixelated - 15 fps of blocky graphics... no thanks!

So for the time being, if raytracing does feature, it'll be in an ancillary role - but by PS5, maybe it'll be realtime?
 
Actually, raytracing (AFAIK) is more efficient for high-poly scenes. It scales better in system demands. It also allows non-tessellated surfaces for perfectly smooth-edged objects at a very low vertex count using NURBS/SDS.
Well, yes, you can apply all manner of spatial subdivision schemes to minimize the number of scene elements you actually test per ray, and the scaling with complexity becomes logarithmic. Hell, we use that all the time to speed up raycasts for line-of-sight tests in console-land even now.

However, consider the idea of ray tests against a sphere defined as a center and radius... pretty fast, obviously. Now try to imagine ray tests against a sphere defined as a polygon mesh - now you've got several hundred or several thousand potential scene elements making up a single object to test against. Using some sort of spatial subdivision does help by culling individual elements, but the end result will still be slower than having a single scene element that defines the whole object.
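
To make that concrete, here's a minimal sketch (mine, not from any real engine) of the analytic center-plus-radius test being described - one small quadratic solve, versus hundreds of per-triangle tests if the same sphere were a polygon mesh:

// Analytic ray-vs-sphere test: ray origin o, normalized direction d, sphere (center, radius).
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns the nearest hit distance t >= 0 along a normalized direction, or -1 on a miss.
// (The origin-inside-the-sphere case is ignored for brevity.)
float raySphere(const Vec3& origin, const Vec3& dir,
                const Vec3& center, float radius) {
    Vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    float b = dot(oc, dir);                   // projection of oc onto the ray
    float c = dot(oc, oc) - radius * radius;  // |oc|^2 - r^2: positive when the origin is outside
    float disc = b * b - c;                   // quadratic discriminant
    if (disc < 0.0f) return -1.0f;            // ray misses the sphere
    float t = -b - std::sqrt(disc);           // nearer of the two intersections
    return (t >= 0.0f) ? t : -1.0f;
}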

The same thing could be said of raytracing NURBS. If you implicitly calculate the surface at any point, it's a lot faster than, say, computing a certain tessellation depth in polygons and then raytracing against the polygon mesh. This is also because a model built in NURBS will have a comparatively low-complexity control-point mesh. The same could be said of Catmull-Clark or Doo-Sabin subdivision surfaces, which converge to B-spline surfaces in the limit (bicubic and biquadratic respectively).

The problem ATM is that it's too processor-intensive. Realtime raytracers (I tried one a couple of years back) are slow and pixelated - 15 fps of blocky graphics... no thanks!
They have improved over the years. I can get a good 24 fps at 800x600 on current PC CPUs. However, every scene in those cases was still constructed of spheres, cylinders, boxes, and CSG combinations thereof, and the scenes were simple enough that spatial subdivision hierarchies would actually have slowed you down. Doing the same in a game level is a far cry from that.

The math works well in SPE-land, though, since you do have quite a few nice built-in instructions, and each individual calculation is pretty compact in terms of complexity. Plus the nice thing about raytracing is that every ray is totally independent, so you can basically multithread infinitely (until you run out of pixels, anyway).
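
A rough sketch of that "every ray is independent" point, using plain worker threads rather than SPEs; the thread count, resolution and the tracePixel() stub are placeholders of mine:

// Scanlines handed out to worker threads; no synchronization needed beyond the final join.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

static uint32_t tracePixel(int x, int y) {
    // Placeholder: a real tracer would cast the ray(s) for pixel (x, y) here.
    return static_cast<uint32_t>((x ^ y) & 0xFF);
}

int main() {
    const int width = 1280, height = 720;
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<uint32_t> framebuffer(static_cast<size_t>(width) * height);

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // Interleaved scanlines: worker w takes rows w, w+workers, w+2*workers, ...
            for (int y = static_cast<int>(w); y < height; y += static_cast<int>(workers))
                for (int x = 0; x < width; ++x)
                    framebuffer[static_cast<size_t>(y) * width + x] = tracePixel(x, y);
        });
    }
    for (auto& t : pool) t.join();  // each pixel was written by exactly one worker
    return 0;
}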

So I basically have to say... yeah, CELL might be able to do something basic in realtime, but at what cost and for what gain? I doubt it would be enough to justify the trouble. I mean, the landscape demo was cool and all, but if you look back at Outcast, which did the same sort of thing in-game back then, scaling up to that demo's level isn't really that out of this world.

Where raytracing would really show a clear benefit is if you had some kind of stochastic GI system; even stochastic area-light sampling would show SOME improvement. And full-on MCPT of a dynamic scene every frame would easily require something 1000x CELL.
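
For anyone unfamiliar with stochastic area-light sampling, here's a minimal sketch of the idea: fire a handful of shadow rays at random points on a rectangular light and use the unoccluded fraction as a soft-shadow term. The light layout, sample count and the occluded() stub are all made up for illustration:

// Fraction of a rectangular area light visible from a shaded point.
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Placeholder visibility test; a real renderer would trace a shadow ray here.
static bool occluded(const Vec3& from, const Vec3& to) { (void)from; (void)to; return false; }

static float frand() { return static_cast<float>(std::rand()) / static_cast<float>(RAND_MAX); }

// Light is given as a corner point plus two edge vectors spanning the rectangle.
float softShadow(const Vec3& point, const Vec3& corner,
                 const Vec3& edgeU, const Vec3& edgeV, int samples) {
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        float u = frand(), v = frand();  // random point on the light surface
        Vec3 lightPos = { corner.x + u * edgeU.x + v * edgeV.x,
                          corner.y + u * edgeU.y + v * edgeV.y,
                          corner.z + u * edgeU.z + v * edgeV.z };
        if (!occluded(point, lightPos)) ++visible;
    }
    return static_cast<float>(visible) / static_cast<float>(samples);  // 0 = fully shadowed, 1 = fully lit
}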
 
Perhaps a puzzle game using CSGs could be raytraced in realtime, with a little recursion too? It could make for very realistic objects. I mean, there's only so much you can do with Tetris visually, but it's still a compelling game!
 
ShootMyMonkey said:
And full-on MCPT of a dynamic scene every frame would easily require something 1000x CELL.

I wouldn't worry about that then - Kutaragi called me up yesterday and told me that we could be expecting 1000x the power of Cell from PS4. 8)
 