If NV30 uses tile-based rendering, will ATI convert too?

Johnathan256

Newcomer
If the NV30 is indeed a tiler, does anyone think ATI will follow with something similar, or will they stick with traditional tech with wide buses and super-fast memory?
 
Hmmmm let me rephrase then: if it should be a tiler (in the sense I understand it) I'm sold already :D
 
Johnathan256 said:
If the NV30 is indeed a tiler, does anyone think ATI will follow with something similar, or will they stick with traditional tech with wide buses and super-fast memory?

It probably depends on how effective this tiler turns out to be.
 
Johnathan256 said:
If the NV30 is indeed a tiler, does anyone think ATI will follow with something similar, or will they stick with traditional tech with wide buses and super-fast memory?

One couldn't even begin to guess that.

Just because one company does one thing doesn't mean the other has to do it; they may think totally differently about how to do things, and therefore will give us products that work completely differently while still trying to reach the same goal.

As to a 'tiler', are you referring to a deferred rendering card? Because tile-based architecture is not the only means to accomplish deferred rendering, I believe. I don't know the specifics of other methods, only that I know there ARE other methods to this.

If you think about it, I think the occlusion schemes HyperZ III uses are somewhat like tiling, right? Or am I off base here...
 
I have also been wondering about that, Brent. I am a little confused as to the advantages of a tiling architecture over hidden surface removal and occlusion schemes already commonly used. Maybe someone else can post a little feedback on that. Oh, and the only reason I point out a tiling architecture is due to GigaPixel's advanced tile-based rendering technology, which Nvidia gained with the acquisition of 3dfx. It sounds quite promising (anti-aliasing with zero performance hit).
 
Just one quick question about a tiler architecture that doesn't go hand in hand with (full) deferred rendering:

I assume that you have to receive all of the vertex data for a frame first (so that you can sort which polygons go into which tile), so what's the point of not going deferred rendering when you have to store all of the vertex data anyway? Me not understand...
 
Why would you need all vertex data for a tiler? You can split it up into tiles as you render the triangle.
 
Humus,

If you don't have ALL the vertex data, you can't fully eliminate (the chance of) overdraw, which is sort of the reason to go with a deferred rendering approach anyway...

Well, that's the way I see it anyway, but I'm just a layman so what do I know really? :)

*G*
 
If you think about it, I think the occlusion schemes HyperZ III uses are somewhat like tiling, right? Or am I off base here...

HyperZ may make use of some sort of tiling, but it's not really similar to a tile based renderer. HyperZ is just a feature; tile based rendering is an architecture.
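
Just to illustrate the difference, here's a loose sketch of the kind of coarse Z reject a HyperZ-style scheme is understood to do (the block size, names and update policy are my own guesses, not ATI's actual design). It works on small blocks of the screen, which is why it looks a bit like tiling, but the real Z-buffer and framebuffer still live in external memory and triangles are still rendered immediately, in submission order:

#include <vector>

const int BLOCK = 8;   // assumed 8x8 pixel blocks

struct CoarseZ {
    int blocksX;
    std::vector<float> farthest;   // farthest depth currently stored per block

    CoarseZ(int w, int h)
        : blocksX(w / BLOCK), farthest((w / BLOCK) * (h / BLOCK), 1.0f) {}

    // If the nearest depth the incoming triangle can reach inside this block
    // is still behind everything already drawn there, the whole block of
    // fragments is thrown away with one compare, without ever reading the
    // per-pixel Z-buffer out in memory.
    bool rejectBlock(int bx, int by, float triNearestZ) const {
        return triNearestZ > farthest[by * blocksX + bx];
    }

    // After pixels in a block are actually written, the block's summary is
    // refreshed (conservatively) from the fine per-pixel Z values.
    void refresh(int bx, int by, float newFarthestZ) {
        farthest[by * blocksX + bx] = newFarthestZ;
    }
};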

I am a little confused as to the advantages of a tiling architecture over hidden surface removal and occlusion schemes already commonly used. Maybe someone else can post a little feedback on that.

Hidden surface removal is only one part of tile based rendering. There are a lot of other advantages to a tile based renderer: for instance, no Z-buffer in main RAM, all blends done on-chip, an unlimited number of textures per pass, no extra-large buffers needed for FSAA (free MSAA), etc. Also, hidden surface removal is achievable basically 100% perfectly with a tile based renderer, which is a lot better than any occlusion culling scheme we've seen on a card with a traditional architecture so far.

I'm not going to go in depth on TBR because I'm pretty sure that if you need to know more about TBR there will be some topics discussing its technical merits on this forum; just use the search utility.

I assume that you have to receive all of the vertex data for a frame first (so that you can sort which polygons go into which tile), so what's the point of not going deferred rendering when you have to store all of the vertex data anyway?

What you just described there is actually deferred rendering :)

AFAICS, to be a full TBR you have to defer rendering; surely you cannot split the frame into tiles without first collecting the polygons for the frame.
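
To make the on-chip part a bit more concrete, here's a very rough sketch of how a deferred tile based renderer could resolve visibility for one tile before shading anything, and why the Z-buffer and blending never have to touch external memory while a tile is being worked on. The structure, tile size and names are my own illustration (with trivial stubs so it compiles), not any vendor's actual design:

#include <array>
#include <vector>

const int TILE_W = 32, TILE_H = 16;          // assumed tile size

struct Triangle { float z; unsigned color; };                    // stand-in: flat depth and color
bool     covers (const Triangle&, int, int)   { return true; }   // stub coverage test
float    depthAt(const Triangle& t, int, int) { return t.z; }    // stub interpolation
unsigned shade  (const Triangle& t, int, int) { return t.color; } // stub shading

void renderTile(const std::vector<Triangle>& bin,   // triangles binned to this tile
                int tileX, int tileY,
                unsigned* framebuffer, int screenW)
{
    std::array<float, TILE_W * TILE_H> tileZ;    // on-chip, never hits main RAM
    std::array<int,   TILE_W * TILE_H> winner;   // which triangle owns each pixel
    tileZ.fill(1.0f);
    winner.fill(-1);

    // Pass 1: resolve visibility for the whole tile. No texturing, no blending.
    for (int t = 0; t < (int)bin.size(); ++t)
        for (int y = 0; y < TILE_H; ++y)
            for (int x = 0; x < TILE_W; ++x) {
                if (!covers(bin[t], tileX + x, tileY + y)) continue;
                float z = depthAt(bin[t], tileX + x, tileY + y);
                int i = y * TILE_W + x;
                if (z < tileZ[i]) { tileZ[i] = z; winner[i] = t; }
            }

    // Pass 2: shade only the pixels that are actually visible ("perfect" HSR).
    // Blending and any AA downfilter would also happen here, still on chip;
    // only the finished colors ever go out to the framebuffer in memory.
    for (int y = 0; y < TILE_H; ++y)
        for (int x = 0; x < TILE_W; ++x) {
            int i = y * TILE_W + x;
            if (winner[i] >= 0)
                framebuffer[(tileY + y) * screenW + (tileX + x)] =
                    shade(bin[winner[i]], tileX + x, tileY + y);
        }
}

The point is just the traffic pattern: per-tile Z and color live in small on-chip buffers, the finished colors are written out once per tile, and because visibility is settled before shading, overdrawn pixels never cost texture or shader work.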
 
I think NV30 is a deferred renderer. I read the document "NV30_NDA_preview.pdf". On page 7 it says "Computing non-visible pixel is expensive" and "Most pixels most applications draw are NOT visible".
On page 13 it says "The most effective architecture in the world", "1.0 GHz data rate", "48GB/s effective bandwidth".
I think no Sherlock Holmes is needed to deduce that NV30 is a deferred renderer. The document was authentic.
 
When you have eliminated the impossible, whatever remains, however improbable, must be the truth.

But there's nothing impossible about a highly-efficient non-deferred renderer.
 
Can anyone tell how many subpixels they would use if they used the GigaPixel tech?
If an 8-pipeline chip of 120 million transistors used 4 subpixels, how many transistors would the same chip need with 16 subpixels, or 64, or maybe even 256 :) ?
 
Dio said:
When you have eliminated the impossible, whatever remains, however improbable, must be the truth.

But there's nothing impossible about a highly-efficient non-deferred renderer.

Oh so true....

Why do you guys exclude the possibility that an IMR can adopt many advantages of a TBR through clever combinations? Efficiency in such a hypothetical case remains to be seen.

What could be debatable afterwards (and probably will be) is the definition of the architecture once the specs get announced. Under strict terms I stick to my initial comment (is that ok Wavey? :) ).
 
Humus said:
Why would you need all vertex data for a tiler? You can split it up into tiles as you render the triangle.

Yes, but my point is that you cannot start to render the first tile before you have all the vertex data for that frame. Only then can you start to sort which tile each triangle goes into (let's call them display lists), and thus you're already one step away from IMR and closer to DR.
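
For what it's worth, that sorting step is easy to picture. Here's a minimal sketch of binning a frame's triangles into per-tile lists (the tile size, names and the simple bounding-box test are my own simplification, not how any real chip necessarily does it). Nothing can be drawn until the loop over the whole frame has finished, which is exactly the deferral being talked about:

#include <algorithm>
#include <vector>

const int TILE = 32;   // assumed tile size in pixels

struct Vec2     { float x, y; };
struct Triangle { Vec2 v[3]; };

std::vector<std::vector<Triangle>>
binFrame(const std::vector<Triangle>& frame, int screenW, int screenH)
{
    int tilesX = (screenW + TILE - 1) / TILE;
    int tilesY = (screenH + TILE - 1) / TILE;
    std::vector<std::vector<Triangle>> bins(tilesX * tilesY);   // the "display lists"

    for (const Triangle& t : frame) {
        // Which tiles does the triangle's screen-space bounding box touch?
        float minX = std::min({t.v[0].x, t.v[1].x, t.v[2].x});
        float maxX = std::max({t.v[0].x, t.v[1].x, t.v[2].x});
        float minY = std::min({t.v[0].y, t.v[1].y, t.v[2].y});
        float maxY = std::max({t.v[0].y, t.v[1].y, t.v[2].y});

        int tx0 = std::max(0, (int)minX / TILE);
        int tx1 = std::min(tilesX - 1, (int)maxX / TILE);
        int ty0 = std::max(0, (int)minY / TILE);
        int ty1 = std::min(tilesY - 1, (int)maxY / TILE);

        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * tilesX + tx].push_back(t);
    }
    // Only after the whole frame has been binned can the first tile be
    // rendered, which is why binning the full frame effectively means deferring.
    return bins;
}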
 
So does that mean that they get a 50% reduction in bandwidth requirements? And do you suspect that nVidia may have implemented a similar kind of system, so they may be able to reach that 48GB/sec bandwidth mark?
 
Wavey,

That doesn't contradict my assumptions; I'd say rather the contrary:

Patented deferred rendering technology to optimize silicon efficiency
*Within the framework of a traditional graphics pipeline
*Completely transparent to the driver and application software

As a user I care little what it is or should be called as long as it works ;)
 