A new Sony patent application...

pcostabel said:
Actually, that demo proves my point. It's super noisy because with ray tracing you need to supersample 4 or 16 times to get decent results, and it doesn't look any better than traditional hardware rendering.
That might be true for now, but there are a number of inherent advantages, such as flexibility (almost free choice of primitive type), no need for a z-buffer, and correct lighting (shadows, specular reflections, refraction, etc.).
 
Squeak said:
pcostabel said:
Actually, that demo proves my point. It's super noisy because with ray tracing you need to supersample 4 or 16 times to get decent results, and it doesn't look any better than traditional hardware rendering.
That might be true for now, but there are a number of inherent advantages, such as flexibility (almost free choice of primitive type), no need for a z-buffer, and correct lighting (shadows, specular reflections, refraction, etc.).

Not sure what you mean by flexibility... raytracing works best with implicit surfaces (spheres, cylinders, etc.). Intersecting NURBS or even polygon meshes is very expensive and requires tessellation.
Yes, you don't need a z-buffer for raytracing, but I wouldn't call that a big advantage. Besides, a z-buffer is not needed in scanline rendering either.
Correct lighting is arguable, since global illumination is more correct in most situations (although you don't get reflections or highlights).
Raytracing is great for volume rendering, though (actually, raymarching is the correct term), and refractions. That's why it is usually used in conjunction with more efficient algorithms, to render only translucent/reflective objects.
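
Just to illustrate the point about implicit surfaces being the friendly case: a ray-sphere test is a few lines of algebra, while meshes need tessellation plus an acceleration structure before you even start. A minimal sketch in C++ (all names are mine, not from anything linked in this thread):

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2, a quadratic in t.
// Returns the nearest positive hit distance, or -1.0f on a miss.
float intersectSphere(Vec3 o, Vec3 d, Vec3 c, float r)
{
    Vec3 oc = sub(o, c);
    float A = dot(d, d);
    float B = 2.0f * dot(oc, d);
    float C = dot(oc, oc) - r * r;
    float disc = B * B - 4.0f * A * C;
    if (disc < 0.0f) return -1.0f;              // no real roots: miss
    float t = (-B - sqrtf(disc)) / (2.0f * A);  // nearer root first
    if (t > 0.0f) return t;
    t = (-B + sqrtf(disc)) / (2.0f * A);        // ray origin inside the sphere
    return t > 0.0f ? t : -1.0f;
}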
 
Rule of thumb ... if you can do an analytical intersection test with something, it is only good for demos. Outside of demos you will be using tris or tessellated surfaces even with raytracing.

You still need to buffer a Z value for each ray, since with useful scene descriptions (again, ignore demos) you won't be able to get perfect front-to-back sorting of primitives ... without that, you still need the Z values of potentially multiple intersections to determine the front-most hit.

Better lighting, sure ... but that's where things really start getting slow (ray coherency goes down the drain, and that's important for parallel tracers).

Basically, if you ignore demos, raytracing makes sense for only part of the rendering; the problem is that at the moment the rest of it is too much to ignore ... rasterizers are a necessity, raytracing is a cherry on top. Nice to have, but the cake is almost as good without.
 
One of those links I posted shows a 350 million polygon (triangle) model being raytraced with shading and shadowing at 1-3 fps on a 1.8GHz Opteron. Now that isn't a game situation, but it is still impressive. Raytracers certainly scale well.

You still need to buffer a Z value for each ray, since with useful scene descriptions (again, ignore demos) you won't be able to get perfect front-to-back sorting of primitives ... without that, you still need the Z values of potentially multiple intersections to determine the front-most hit.
To be sure, you don't need an entire z-buffer, though you might want one.
Lack of a z-buffer isn't really a compelling argument for me, though.
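
Right, the per-ray state amounts to a single "closest hit so far" value rather than a full-screen buffer. A rough sketch of what replaces the z-buffer, with hypothetical Ray/Primitive types (the convention of intersect() returning -1 on a miss is my assumption, not from any implementation discussed here):

#include <vector>
#include <limits>

// Hypothetical scene types, for illustration only.
struct Ray { /* origin, direction */ };
struct Primitive {
    // Returns hit distance along the ray, or -1.0f on a miss.
    float intersect(const Ray& ray) const;
};

struct Hit { float t; const Primitive* prim; };

// Per-ray "z-buffer": one closest-t value, updated as primitives are
// tested in arbitrary order -- no front-to-back sorting required.
Hit traceClosest(const Ray& ray, const std::vector<Primitive>& prims)
{
    Hit best = { std::numeric_limits<float>::max(), nullptr };
    for (const Primitive& p : prims) {
        float t = p.intersect(ray);
        if (t > 0.0f && t < best.t) {   // keep only the front-most hit
            best.t = t;
            best.prim = &p;
        }
    }
    return best;
}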

Correct lighting is arguable, since global illumination is more correct in most situations (although you don't get reflections or highlights).
Global illumination isn't a rendering technique. It is a lighting term. Many global illumination techniques use raytracing.
 
Did anyone notice the new Sony controller?

Number 7 on the search list.

To paraphrase its description: Sony Wireless PowerGlove
#4
http://appft1.uspto.gov/netacgi/nph...ment"&FIELD1=AS&co1=AND&TERM2=&FIELD2=&d=PG01

I didn't read it through, so here are some creative ideas.

A. My guess is that such a device could use piezoelectric charges, produced by flexing the device, to drive a Bluetooth-like transmitter.

B. Sony could also sell color-coded gloves that an EyeToy could interpret.

C. Batteries or a cable. anon
 
David,

I saw no mention of any powerglove (wireless or otherwise) on the page you linked to... Anyway, 'Power Glove' is a term Mattel trademarked way back in the day (for the NES). Dunno if trademarks expire, but if not, they'd need to buy that trademark first before they could use it. :)
 
It's that thing they talked about before, that Minority Report-style interface.

Eye-Toy 2 + Sony-PowerGlove + PS3 =

[image: ph7.jpg]


:)

Fredi

Edit: It can't be a glove, but maybe something similar:

Thus, there is a need to solve the problems of the prior art to provide an input device convenient for a user to use that provides tactile/haptic feedback and does not have to be worn by a user.

SUMMARY OF THE INVENTION

[0009] Broadly speaking, the present invention fills these needs by providing an input device configured to be held by a user. It should be appreciated that the present invention can be implemented in numerous ways, including as an apparatus, a method, a process, a system, program instructions or a device. Several inventive embodiments of the present invention are described below.
 
LogisticX said:
This is exactly why I'm dropping out of computer science and moving into a different field of math... do you enjoy the work, Marco (possibly a stupid question, I guess)?
It's not a stupid question and yes, I do enjoy it.
Obviously I'm not working so hard all the time...

ciao,
Marco
 
Nexiss said:
One of those links I posted shows a 350 million polygon (triangle) model being raytraced with shading and shadowing at 1-3 fps on a 1.8GHz Opteron. Now that isn't a game situation, but it is still impressive. Raytracers certainly scale well.

They scale in the wrong direction, though :) I mean, we aren't really interested in aliased-to-shit renders of super-high-polygon scenes; useful for CAD, but not for us. Scenes from CGI movies are a lot closer to being relevant to us (where you don't have ridiculous tri-to-pixel ratios, so the subsampling ability of raytracers is less relevant).
 
It isn't like you can't do anti-aliasing with raytracers. Sure, you have to cast more rays, but AA doesn't seem particularly cheap in any situation.
Now, I am not saying we should just jump on the raytracing bandwagon (not yet at least). I'm just trying to dispel the idea that raytracers are too slow to be of any use.
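
For what it's worth, the brute-force version of that is just jittered supersampling: N rays per pixel at random sub-pixel offsets, averaged. A sketch, where Color, trace(), and makeCameraRay() are placeholders rather than anyone's actual API:

#include <cstdlib>

// Placeholder types/functions for illustration only.
struct Color {
    float r, g, b;
    Color operator+(const Color& o) const { return { r + o.r, g + o.g, b + o.b }; }
    Color operator*(float s) const { return { r * s, g * s, b * s }; }
};
struct Ray { /* origin, direction */ };
Ray makeCameraRay(float u, float v);   // primary ray through image point (u, v)
Color trace(const Ray& ray);           // whatever the tracer returns per ray

// Jittered supersampling: n rays per pixel, box-filter averaged.
Color shadePixel(int x, int y, int n)
{
    Color sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < n; ++i) {
        float u = x + rand() / (float)RAND_MAX;  // random offset within the pixel
        float v = y + rand() / (float)RAND_MAX;
        sum = sum + trace(makeCameraRay(u, v));
    }
    return sum * (1.0f / n);
}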
 
Guden Oden said:
David,

I saw no mention of any powerglove (wireless or otherwise) on the page you linked to... Anyway, 'Power Glove' is a term Mattel trademarked way back in the day (for the NES). Dunno if trademarks expire, but if not, they'd need to buy that trademark first before they could use it. :)

I understand trademark.
However, the easiest way to relate a headline is to use familiar terms.
Thus I said it was a paraphrased description.
The correct title is...
4 20040012557 Hand-held computer interactive device

It seemed novel enough, since the new patent simply solidifies what we already know.
To learn more about the memory operation...
5 20040003178 Methods and apparatus for controlling a cache memory
To learn more about the depth of field technique...
2 20040036687 Methods and apparatus for rendering an image with depth-of-field display

Y'all have probably already viewed all of these too.
The glove at least seemed entertaining.
Besides, I want radio wireless controllers to be standard in the future. So the hand-held computer interactive device (Sony PowerGlove :p) seemed most interesting.

Back to the subject. What makes the newest patent interesting is that it clearly confirms the use of on-chip video RAM, something we already knew about as "Image Cache", and gives us further details. Since they call it VRAM, it's a little more exciting, because by association we hope it is dual-ported and runs at least half the chip's clock speed.

A friend and I made bets a while back. We were guessing 32 megabytes of Image Cache in the event of a single VS on each PS3 visualizer (16 MB if there were two). As it looks, 1 visualizer with "8x 4 MB area DRAMs" seems to be the correct guess for PS3. (Rather than the big MCM visualizer others have talked about. Point for me. :) Maybe another point for recognizing more than a year ago that it would use a tiled rendering method. Though that was probably obvious.)

We also learn that it is indeed the intent to have 4 APUs in a visualizer, and that it is not limited by die size to two. (With the assumption that this is a PS3-relevant patent.)

Another thing to note is that the nature of the CPU is ambiguous in this patent. (Though that may be because it isn't relevant to the subject.) So at least that point remains open. My guess is no more than 2 PEs. (Including the VS, I then expect only one PE of 8 APUs.)

Looking forward to reading other ideas.
I'm dipping out for a concert now. Sfyre Rules! :)
 
LogisticX said:
This is exactly why I'm dropping out of computer science and moving into a different field of math...
Workaholism isn't isolated to comp.sci. fields alone. But yeah, I'd say the usual prerequisite for having the 'condition' is to enjoy your work. :)

Mfa said:
Rule of thumb ... if you can do an analytical intersection test with something, it is only good for demos. Outside of demos you will be using tris or tessellated surfaces even with raytracing.
It's the right Marco this time, right?! (Today I had some sleep for a change.)
[screenshots: Ecstatica05.jpg, Ecstatica08.jpg]

Ellipsoid-rendered, and it's not a tech demo :D

Not that I don't agree with you, btw; just that even obscure niches can every now and then turn up an actual product.
 
That's why it's a rule of thumb ;) (I shoulda said any surface you can do analytical intersections for other than a triangle anyway.)
 
...

Well well, I finally found the time to sit back and read the application in detail, to figure out what Kutaragi Ken was up to.

GS3 specification:

- 1 Grouping Unit (<- IMG will be screaming tile-accelerator patent infringement soon)
- 4 independent T&L units
- 4 independent rendering units
- 4 MB x 8 banks of eDRAM (32 MB in total)
- Up to 4 texels per pixel per cycle
- Number of pixel engines unspecified

My performance estimate, presuming 800 MHz:

- Unknown Grouping Unit FLOPS
- 25.6 GFLOPS for the T&L units
- 51.2 billion texels/s peak
- 12.8 billion pixels/s peak
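
For anyone checking those numbers, here's the arithmetic they imply. The per-unit rates are my assumptions (the application doesn't spell them out), chosen so the totals come out:

4 T&L units x 8 FLOPs/cycle x 800 MHz = 25.6 GFLOPS
4 rendering units x 4 pixels/cycle x 800 MHz = 12.8 billion pixels/s
12.8 billion pixels/s x 4 texels/pixel = 51.2 billion texels/s
8 banks x 4 MB = 32 MB eDRAM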
 
MfA said:
Seems a hybrid between sort-first and sort-last parallel rendering.
Sounds a bit like a proposal that a parallel SW processing company came to us with.
 