PowerVR Series 5 is a DX9 chip?

That's a tiny part of all rendering, and I see no reason why hardware optimizations for these passes are any harder to implement in an IMR than in a TBR. It has already begun, with the two-sided stencil of the Radeon 9700 and GeForce FX.
 
Chalnoth,

Whether TBDR is a necessity is debatable; even so, since PowerVR has stuck to deferred rendering through all these years, it's high time they finally prove what they can or cannot do, preferably with a fully specced product.

Only then will you be able to prove your standpoints/claims, or see whether those corner cases have been addressed adequately or not.

I have severe doubts that you'll keep your current standpoints should NV opt for deferred rendering anytime in the future; so far I haven't seen an official or unofficial statement that rules that possibility out completely.
 
I don't know. I still don't like nV's sticking to half-supersampling FSAA modes. But you're right. It appears we should be approaching the point where TBR's will be having problems before long.

And I really don't think that either nVidia or ATI will go TBR unless somebody else (I suppose only PVR is remotely in the position) trounces their cards with a high-end TBR card.
 
It appears we should be approaching the point where TBR's will be having problems before long.

Been hearing that bubbleyum since the late 90's. So when are we going to see it? By the time IMR's hit the imaginary bandwidth wall?

And I really don't think that either nVidia or ATI will go TBR unless somebody else (I suppose only PVR is remotely in the position) trounces their cards with a high-end TBR card.

Whereby all the problems would magically disappear for those two? Now that's what I call confidence in one's opinion :D
 
Chalnoth said:
And I really don't think that either nVidia or ATI will go TBR unless somebody else (I suppose only PVR is remotely in the position) trounces their cards with a high-end TBR card.

If that doesn't happen, I'm moving to New Zealand!

(The trouncing bit, 'cos even if that did happen, ATI/nVidia will just bring out their next IMR card in 6 months' time... Only PowerVR is in the PC TBR game)
 
Ailuros said:
Been hearing that bubbleyum since the late 90's. So when are we going to see it? By the time IMR's hit the imaginary bandwidth wall?
When? I don't know, because a TBR that has high enough performance to make the problems apparent has yet to appear.

And I really don't think that either nVidia or ATI will go TBR unless somebody else (I suppose only PVR is remotely in the position) trounces their cards with a high-end TBR card.
Whereby all the problems would magically disappear for those two? Now that's what I call confidence in one's opinion :D
No, that's not what I meant, not in the least. In fact, this is the primary reason why I don't want TBR to catch on: these companies would run into the same problems at higher polycounts.
 
Chalnoth said:
Ailuros said:
Been hearing that bubbleyum since the late 90's. So when are we going to see it? By the time IMR's hit the imaginary bandwidth wall?
When? I don't know, because a TBR that has high enough performance to make the problems apparent has yet to appear.
So you are waiting for a system that has high enough performance so that it doesn't have high enough performance. Very Zen.
 
@Chalnoth:

I don't know what your problem is. Wait for Series 5 and we will see :)! But I fear you will be very surprised ;)! We could discuss PowerVR and Series 5 on and on, but as we don't know anything official about Series 5, it wouldn't get us anywhere.
BTW: I don't see any problems with TBDR which couldn't be, or haven't already been, solved :)! Who knows what those guys over at PowerVR have developed over these two years :)?

CU ActionNews
 
Simon F said:
So you are waiting for a system that has high enough performance so that it doesn't have high enough performance. Very Zen.
Well, in a way.
Remember what I've been stating all along: I feel TBRs' fillrate throughput will decrease at very high polycounts. That is, it would be useful to compare the performance drop at high resolution when drastically increasing polycounts vs. an IMR.

Actually, I think that a graph of the performance ratio between a TBR and an IMR (say, TBR fps divided by IMR fps) will rise through very low polycounts, peak somewhere, and then fall off at very high polycounts.
 
Wouldn't the ability to bin triangles with the same transistor count increase as clock speed went up? How expensive is it, transistor-wise, to broaden this functionality?

Doesn't an IMR have to spend equal or greater bandwidth fetching triangles than a TBDR does (don't forget geometry compression schemes... a bit easier and more effective in big batches)? So why, except for the theoretical worst case you mention (which people more familiar with TBDR design repeatedly indicate can be avoided or at least mitigated), would this result in a reduction in relative performance as polygon count increases?
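For readers unfamiliar with what "binning triangles" means here, a minimal sketch (not PowerVR's actual algorithm; the tile size, resolution, and data layout are all arbitrary assumptions) of the basic step a tile-based renderer performs before per-tile rasterisation:

```python
from collections import defaultdict

# Assumed parameters for illustration only.
TILE = 32                # tile size in pixels
WIDTH, HEIGHT = 640, 480 # screen resolution

def bin_triangle(tri, bins):
    """Append a triangle to every tile its bounding box overlaps."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    # Clamp the bounding box to the screen, then convert to tile indices.
    tx0 = max(0, int(min(xs)) // TILE)
    tx1 = min((WIDTH - 1) // TILE, int(max(xs)) // TILE)
    ty0 = max(0, int(min(ys)) // TILE)
    ty1 = min((HEIGHT - 1) // TILE, int(max(ys)) // TILE)
    for ty in range(ty0, ty1 + 1):
        for tx in range(tx0, tx1 + 1):
            bins[(tx, ty)].append(tri)

bins = defaultdict(list)
bin_triangle([(10, 10), (50, 10), (10, 50)], bins)  # spans tiles (0,0)-(1,1)
print(sorted(bins))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The per-triangle work is a handful of comparisons and divides, which is why the cost of binning itself tends to be small next to the bandwidth of writing the binned geometry out and reading it back per tile.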
 
PVR are good for their specs, but they always make low-cost, low-spec cards. I would love to see a medium/high-priced PVR card with tons of bandwidth and fillrate :D

Imagine a PVR with

300-400MHz core
300-400MHz DDR memory
256-bit bus
8 pipelines

Now add all of PVR's bandwidth-saving stuff, and what you've got is a killer GFX card. Just think how many shading operations you would be saving because there is no overdraw :D
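The overdraw point can be put in rough numbers. A toy illustration (the figures are assumptions, not measurements of any real card): an immediate-mode renderer that shades every covered fragment does work proportional to pixels times depth complexity, while a deferred renderer shades only the final visible surface at each pixel.

```python
# Assumed scene parameters, purely for illustration.
pixels = 1024 * 768
overdraw = 3.0  # assumed average depth complexity

# Worst case for the IMR: no early-Z, every covered fragment is shaded.
imr_shading_ops = pixels * overdraw
# TBDR: hidden-surface removal leaves one visible fragment per pixel.
tbdr_shading_ops = pixels * 1

savings = 1 - tbdr_shading_ops / imr_shading_ops
print(f"shading work saved: {savings:.0%}")  # -> shading work saved: 67%
```

Early-Z rejection narrows this gap on an IMR, so the 67% here is an upper bound for the assumed depth complexity, not a typical figure.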

I can dream; will they ever make a high-end GFX card? :p
 
demalion said:
Wouldn't the ability to bin triangles with the same transistor count increase as clock speed went up? How expensive is it transitor wise to broaden this functionality?

So cheap it isn't worth mentioning.
 
JohnH said:
Noo, I'd never do anything like that! I just got confused, must be an age thing.

Tell me how many of the 169 questions I have you can answer and I'll start shooting :D
 
Depends what those 169 questions are about: food, maybe the size of the carpet tiles under my desk?

Seriously guys, no tease intended!

John.
 