PowerVR Series 5 is a DX9 chip?

Ailuros,

whereby which of the two architectures comes out ahead on a global average remains to be seen (if ever), since we haven't really seen fully speced TBDRs yet.

That's a circular argument. I know it "remains to be seen". The question is, WHY is that the case? Why hasn't there been a "fully speced" TBDR on the market so we can settle this? You seem to think that it might be because it is questionable that "overall", TBDR will be more advantageous than IMR. That's no different from what I already said. ;)

Can you safely exclude the possibility that immediate mode rendering is simply the safer route for them to walk, and that they don't have enough experience to produce a TBDR that performs at least as well?

No, I did not exclude that at all. I already commented that the "extra investment" to "switch" to TBDR might seem like too much of a risk given what their "best guess" for the result might be.

"Hey we like to differentiate ourselves (or defer if you like...)" doesn't exactly make a good point sticking to a hypothetically "inferior" rendering approach.

Right. So why haven't 3rd parties been flocking to license Deferred rendering cores for the PC from IMG, for a hypothetically "superior" rendering approach?

Hypotheticals mean nothing. It's results that matter.

Apparently, IMG has so far failed to make a convincing case to their prospective customers that they could build a "fully specced TBDR" with their tech and be very successful (make money) with it.

Or do you really expect NV or ATI to come out and say "TBR is superior to what we do", or PVR to admit that IMRs are in fact better than TBR?

Don't understand your point.

I was saying that in the PC space, where games are already coded with IMRs in mind, the "theoretical advantages" of a TBDR may be significantly diminished. (Relative to, say, deciding which chip to put in a new console or PDA, where the software is yet to be developed.)
 
Joe DeFuria said:
The advantages may be there, but they are not as great as typically hyped, particularly in the PC space.

I think too many things get mixed together when one says "deferred rendering".

We could talk about:
1.) deferred shading - shading occurs only after the visibility of a pixel is determined (eliminates overdraw).
2.) on-chip framebuffer (EDRAM / SRAM) - framebuffer bandwidth can be very high while DRAM bandwidth is saved.
3.) tile based deferred rendering - rendering in tile order instead of primitive order (reduces the render target size so it can fit on chip).

1: It is possible on IMR cards as well, but determining pixel visibility costs fillrate/bandwidth. (If I understand correctly it has a cost on Kyro cards as well, but the Kyro does it at 16 pixels/clock and in parallel with rendering the previous tile, so it's rarely limiting.)
But the introduction of HyperZ and equivalent technologies works toward reducing this cost.

2: The question is: can the bandwidth of a DRAM solution satisfy an IMR card's fillrate? As long as the fillrate is limited by the available manufacturing process rather than by DRAM technology, the answer will be yes.
The on-chip solution (combined with TBR) can be cheaper - but so far that's only critical in the mainstream market, not the high end.
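
To put rough numbers on that question, here is a minimal back-of-envelope sketch; the pipe count, clock and bus width are assumptions picked purely for illustration, not the figures of any shipping part:

```cpp
// Back-of-envelope only: the pipe count, clock and bus width are assumptions.
#include <cstdio>

int main() {
    // Hypothetical IMR: 4 pixel pipes at 300 MHz.
    const double pixels_per_sec = 4.0 * 300e6;               // 1.2 Gpixel/s
    // Per pixel, uncompressed: 4 B colour write + 4 B Z read + 4 B Z write
    // (texture fetches and alpha-blend reads ignored).
    const double bytes_per_pixel = 12.0;
    const double demand = pixels_per_sec * bytes_per_pixel;  // ~14.4 GB/s

    // Hypothetical memory subsystem: 128-bit DDR at 300 MHz.
    const double supply = (128.0 / 8.0) * 300e6 * 2.0;       // ~9.6 GB/s

    std::printf("fill demand %.1f GB/s vs DRAM supply %.1f GB/s\n",
                demand / 1e9, supply / 1e9);
    return 0;
}
```

With numbers like these the raw fill demand already exceeds what the DRAM can supply, which is exactly why compression / HyperZ-style tricks or an on-chip framebuffer become attractive.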

3: This is actually what is defined as the opposite of IMR. It is required because in the PC space the amount of data that needs to be stored in the framebuffer is way too large to fit on chip. Once you work in DRAM only, there's really no point in doing this. (I'm not talking about drawing triangles in tile order, which most IMRs do.)
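
For what it's worth, here is a minimal sketch of the per-tile flow being described, assuming an invented 32x16 tile size and stub helpers (Triangle, rasterizeDepthOnly, shade); it only illustrates the ordering - bin, resolve visibility, shade once, write the tile out once - not any particular chip's design:

```cpp
// Minimal TBDR per-tile sketch with invented types and a made-up 32x16 tile.
#include <cstdint>
#include <vector>

struct Triangle { /* screen-space vertices, shading state ... */ };
using Pixel = std::uint32_t;

// Stubs standing in for real fixed-function units.
void rasterizeDepthOnly(const Triangle&, float (&z)[16][32],
                        const Triangle* (&vis)[16][32]) {}
Pixel shade(const Triangle&, int, int) { return 0; }
void writeTileToDram(int, int, const Pixel (&c)[16][32]) {}

void renderTile(const std::vector<const Triangle*>& bin, int tx, int ty) {
    float depth[16][32] = {};            // on-chip: no external Z traffic
    Pixel colour[16][32] = {};           // on-chip until the final write
    const Triangle* vis[16][32] = {};

    // 1. Resolve visibility for the whole tile first (hidden-surface removal).
    for (const Triangle* t : bin)
        rasterizeDepthOnly(*t, depth, vis);

    // 2. Shade each pixel once, only for the triangle that won the depth test.
    for (int y = 0; y < 16; ++y)
        for (int x = 0; x < 32; ++x)
            if (vis[y][x]) colour[y][x] = shade(*vis[y][x], x, y);

    // 3. One burst write of the finished tile to DRAM.
    writeTileToDram(tx, ty, colour);
}

int main() {
    std::vector<const Triangle*> emptyBin;  // a real driver fills this per tile
    renderTile(emptyBin, 0, 0);
    return 0;
}
```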
 
I never believed for a second though that a similarly speced TBDR would kill an IMR in performance and across the board.

You don't believe that despite seeing that very thing with Kyro II?
 
IIRC, that's not what the proponents like to say...they usually talk of much simpler operation and lower transistor counts.

To be more precise, I was actually trying to say that a tile-based deferred renderer (TBDR) is more complex to implement, not more complex in operation.

I would imagine VideoLogic's "IP" on deferred rendering isn't much different from every other company's "IP" related to IMR.

The 'standard' IMR we know and love is fundamentally based on work which took place at SGI. How many of the IHVs have engineers who formerly worked at SGI? I know 3Dfx, NVidia and ATI have a fair few people who were formerly SGI. Although the concept of a TBDR was undoubtedly thought of before Videologic introduced the PowerVR technology, had anyone else actually done much work on the technology? As far as I am aware, Videologic were the first to produce such hardware and were therefore the first to tackle any problems encountered and consequently solve them, leading to IP in the field.

Dreamcast was the first hardware that showed the potential benefits of a TBDR - there's little doubt that the chip used in Dreamcast was much more powerful than that which 3Dfx was trying to persuade Sega to take (based on what we have since heard). At the time, 3Dfx had the fastest chips in the marketplace so it seems reasonable to think that, had more work been done to bring the PC equivalent of the chip to market promptly, they could have had a winner on their hands. This failure was always blamed on NEC concentrating on the Dreamcast only.

As far as I am concerned, I'm with Teasy in that Kyro showed the benefits that could be gained with the use of TBDR as compared to a standard IMR. Kyro & Kyro II wiped the floor with the GeForce2 MX, which had similar specs, and Kyro II competed very well with the GF2 GTS - I seem to remember that they sold 1 million Kyro chips? Once again, the reason we are told a high-end chip was not produced was due to ST aiming for the budget arena and then pulling out of the market altogether.

I am sure that TBDR still has many benefits, and the onus is now on ImgTec to gain a licensee who is willing to 'go for the jugular', so to speak, and release a high-end chip. The alternative which has been mooted before is for them to go it alone and produce the chip themselves, but I think this unlikely as it doesn't really fit in with their current business plan.
 
Chalnoth said:
In other words, I'd rather see an architecture that can do 6x-8x FSAA at good framerates than one that can do 16x FSAA at good framerates, but whose performance drops drastically when a set geometry limit is reached. As a side note, there may also be other FSAA techniques, such as a version of the FAA technique seen on the Parhelia (though it may be impossible to completely detect all edges...).

With this kind of reasoning you can keep going. What if the number of textures increases rather than the geometry complexity? The number of textures requiring storage grows and grows, but you have these huge buffers allocated, so you run out of video memory, causing you to spill textures into AGP, and your performance drops drastically on your IMR but not on your TBDR, since the TBDR does not waste tens of MBs on buffers.
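
To illustrate the "tens of MBs" point with rough numbers (the resolution and AA settings below are assumptions, not measurements of any specific card):

```cpp
// Rough illustration only: resolution and AA settings are assumptions.
#include <cstdio>

int main() {
    const double w = 1600, h = 1200, bpp = 4;  // 32-bit colour, bytes per pixel

    // IMR with 4x multisampling keeps every sample in video memory:
    // colour samples + Z samples + the resolved front buffer.
    const double imr_buffers = w * h * 4 * bpp * 2 + w * h * bpp;   // ~69 MB

    // A TBDR can resolve the samples on chip, tile by tile, and store only the
    // final image plus a scene/bin buffer (assumed to be a few MB here).
    const double tbdr_buffers = w * h * bpp + 8e6;                  // ~16 MB

    std::printf("IMR buffers ~%.0f MB, TBDR buffers ~%.0f MB\n",
                imr_buffers / 1e6, tbdr_buffers / 1e6);
    return 0;
}
```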

All in all, the reasoning you have about a TBDR can be applied just as well to an IMR, just in different situations. If you now think "how about just adding more memory?" - well, adding more memory solves the problem for textures, but also for scene storage.

You can find a worst-case situation for any design; the issue is how they all perform in the real world. Will geometry get so complex that buffers overflow, or is the system that handles this so advanced that there is little impact? Will textures overflow on an IMR, and is AGP then fast enough? Or will all cards simply have so much memory (128 MB, 256 MB, etc.) that neither issue will ever be a real problem?

It's easy to sit there and point out a potential weakness of one architecture, but you then have to be fair and indicate all the potential weaknesses of the other architectures as well...

K-
 
To be more precise, I was actually trying to say that a tile-based deferred renderer (TBDR) is more complex to implement, not more complex in operation.

Gotcha. So then we would generally agree that "more R&D expense" is required for TBDR vs. IMR. Thus, from the beginning, you need a better price/performance part simply to recoup the additional R&D expense.

As far as I am aware, Videologic were the first to produce such hardware and were therefore the first to tackle any problems encountered and consequently solve them, leading to IP in the field.

You can look at it the other way: because there are relatively few TBDR hardware implementations out there, there are many avenues of specific implementations yet to be explored....meaning there is a lot of IP yet to be claimed.

there's little doubt that the chip used in Dreamcast was much more powerful than that which 3Dfx was trying to persuade Sega...

Sure, I wouldn't doubt that. But again, we're talking about a console that is not concerned with running legacy software - software written to avoid the pitfalls of IMRs rather than to cater to TBDR strengths.

I don't want to sound like a broken record....but I'm ONLY talking about success in the PC space!

As Kristof and Chalnoth are discussing...you can create software that runs a "best case" and "worst case" for either architecture. If all the software on PCs is designed with IMRs in mind, it is inherently trying to minimize the "worst case" scenarios for IMRs. And that can drastically impact the overall "advantage" that a TBDR brings to market.

As far as I am concerned, I'm with Teasy in that Kyro showed the benefits that could be gained with the use of TBDR as compared to a standard IMR.

It was a glimpse, for sure. But that was so long ago (relatively speaking), and against IMR architectures that were much less efficient than today's parts. Has PowerVR's technology / implementation increased in efficiency similarly?

And if someone builds a TBDR using 256 bits of 500 MHz DDR-II....can they design the chip itself fast enough to utilize it? Or will we be stuck with a part with about the same performance...just that its bottleneck is fillrate, not bandwidth?
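
As a rough sanity check on that question (the per-pixel byte count here is an assumed uncompressed colour + Z read/write, ignoring textures):

```cpp
// Quick check of what "256 bits of 500 MHz DDR-II" would supply; the
// per-pixel byte count is an assumed uncompressed colour + Z read/write.
#include <cstdio>

int main() {
    const double bandwidth = (256.0 / 8.0) * 500e6 * 2.0;  // 32 GB/s
    const double bytes_per_pixel = 12.0;
    std::printf("%.0f GB/s supports roughly %.1f Gpixel/s of raw fill\n",
                bandwidth / 1e9, bandwidth / (bytes_per_pixel * 1e9));
    return 0;
}
```

That is in the neighbourhood of 2-3 Gpixel/s of raw fill just to keep the bus busy, so the chip itself would indeed have to be very fast to become bandwidth limited.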

Once again, the reason we are told a high-end chip was not produced was due to ST aiming for the budget arena and then pulling out of the market altogether.

I'll refrain from commenting on that in hopes of evading 10 pages of battle with Teasy that would certainly ensue. ;)

The alternative which has been mooted before is for them to go it alone and produce the chip themselves, but I think this unlikely as it doesn't really fit in with their current business plan.

IMG's current "business strutcure" is the second reason why I think PowerVR tech will never amount to anything other than a niche in the PC space.
 
Joe DeFuria said:
The question is, WHY is that the case?

IP issues, inertia, NIH, unwillingness to take risks and offend developers who think they are better at determining how hardware should be designed than the hardware developers, etc.

No answer to this particular question is better than any other, since it will never be answered with authority except by direct proof. Of the few people with authority who frequent this board, none are unbiased; trying to determine which answer is more likely to be true is an even greater waste of time.

Right. So why haven't 3rd parties been flocking to license Deferred rendering cores for the PC from IMG, for a hypothetically "superior" rendering approach?

IP licensing is flawed; companies intent on being a dominant player in a market would DIY it anyway (they will not abide the inevitable mismatches and delays which result from splitting up the development). It is for companies that do something on the side, or that need your core as a small part of their bigger machine.
 
IP issues, inertia, NIH, unwillingness to take risks and offend developers who think they are better at determining how hardware should be designed than the hardware developers, etc.

To be fair, hardware is supposed to be a means to an end for developers....so perhaps it's the hardware guys who think they are better at determining how the software guys should work than the software guys themselves. ;)

No answer to this particular question is better than any other, since it will never be answered with authority except by direct proof...

Agreed. We can't know with any reasonable proof "why" no one has built a "super" TBDR. All we know is the fact that one does not exist. And there are only two logical "generalized" reasons for this:

1) Someone has tried to build one, just unsuccessfully.
2) "The guys making the decision" whether to build it or not simply have not been convinced "by the guys with the idea" that the potential rewards are high enough to outweigh the risk.

And to be clear...that is "up to this point in time."

IP licensing is flawed...

In the PC Graphics space, I obviously agree 100%.
 
Joe DeFuria said:
I may as well throw in my "standard" argument against deferred rendering:

If the benefits did in fact outweigh any drawbacks for the PC, we would have seen more than just a few low-to-mid-range parts from one vendor.

IHVs aren't "dumb", as much as we like to point the finger at them for being exactly that. This isn't to say they don't make mistakes, but when the most successful IHVs are still using IMR, there is only one reason for it:

The advantages of DR don't outweigh the disadvantages...at least not at this time.

What are all the disadvantages? Beats me. The proponents of deferred rendering would "shoot down" any apparent disadvantage, saying "nah...that's not a problem."

What I would like to hear from the proponents is what exactly they feel is the reason for the lack of PC implementations. If the practical advantages are clear and unambiguous, with few or no drawbacks, why isn't everyone jumping on the deferred rendering bandwagon? My thoughts:

1) There are some real disadvantages that tend to be show-stoppers, which we haven't discussed or thought about here. I remember reading something from an nVidia employee along the lines of "we haven't solved all the 'fringe' cases where DR breaks down...we may in the future, but not yet."

and/or

2) The advantages may be there, but they are not as great as typically hyped, particularly in the PC space. This makes it more risky for management to invest the R&D to make the "switch" to deferred rendering, because the return on investment is not a sure thing.

Sorry, but I think this is nonsense! If a TBDR had so many disadvantages, then I would have serious problems with my KyroII :)! Either I would have graphics errors or it would be rather slow! But my KyroII performs well, and no errors :)!

What I think is the reason why ATI or Nvidia don't change to TBDR is that it costs a lot of money and time to develop a well-performing and bug-free TBDR! Perhaps there are some difficulties you have to solve first when developing a TBDR (without infringing existing patents)! For ATi and Nvidia it is cheaper to develop their existing IMRs further and reduce overdraw through other methods! That's all!

CU ActionNews
 
Joe DeFuria said:
(...)
Agreed. We can't know with any reasonable proof "why" no one has built a "super" TBDR. All we know is the fact that one does not exist. And there are only two logical "generalized" reasons for this:

1) Someone has tried to build one, just unsuccessfully.
2) "The guys making the decision" whether to build it or not simply have not been convinced "by the guys with the idea" that the potential rewards are high enough to outweigh the risk.

And to be clear...that is "up to this point in time."

(...)

Sorry, but here I don't agree either!
PowerVR's licensees chose the market! Neither NEC nor STMicro was very interested in PC graphics! What STMicro wanted was to make fast money without a big investment, and NEC only needed a chip for their Dreamcast! For both purposes you don't need high-end!

Perhaps now PowerVR wants to show what's possible with TBDR and we'll really see a high-end TBDR :)!

CU ActionNews
 
DaveBaumann said:
There are all kinds of things coming along, such as MRTs, for which TBRs will have a much greater efficiency than IMRs.

In what way would TBDRs be better than IMRs for MRTs?

I might have misunderstood some things about MRTs, so please tell me if that's the case. But AFAIK, MRTs are multiple framebuffers for output, and if you want to use the output in a second pass, you'll have to use it as a texture - as opposed to a mechanism where the second pass is limited to just reading the MRT from the same pixel position. (Just as we suspect DeltaChrome's framebuffer reads work, but with multiple inputs/outputs.)

I can see how a TBDR would be more efficient with "pixel-locked" MRTs, but I don't think that's how it works in DX9. I don't see what's so special about TBDRs together with MRTs that have to be fed back as textures.
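
To make the distinction I'm drawing concrete, here is a small sketch with invented C++ types (it is not DX9 API code): the "texture" path lets the second pass sample the MRTs anywhere, while a hypothetical "pixel-locked" path only ever reads the current pixel's own values - the pattern a tile-based chip could keep entirely in on-chip tile memory.

```cpp
// Sketch with invented types to show the two access patterns; not DX9 API code.
#include <vector>

struct Vec4 { float x{}, y{}, z{}, w{}; };

struct Texture2D {
    int w{1}, h{1};
    std::vector<Vec4> texels{Vec4{}};
    // Free-form (u,v) fetch: address math, filtering, possible cache misses.
    Vec4 sample(float u, float v) const {
        int x = int(u * (w - 1)), y = int(v * (h - 1));
        return texels[y * w + x];
    }
};

Vec4 light(const Vec4& albedo, const Vec4& normal) { return albedo; }  // stub

// Pass 1 would write several outputs per pixel (the MRTs); omitted here.

// Pass 2, DX9 style: the MRTs come back as ordinary textures and may be
// sampled at ANY coordinate, so they have to live out in video memory.
Vec4 shadeFromTextures(const Texture2D& albedo, const Texture2D& normal,
                       float u, float v) {
    return light(albedo.sample(u, v), normal.sample(u, v));
}

// Hypothetical "pixel-locked" style: each pixel reads only its OWN values
// from pass 1 - an access pattern a tile-based chip could serve straight
// from on-chip tile memory without ever writing the MRTs to DRAM.
struct GBufferPixel { Vec4 albedo, normal; };
Vec4 shadePixelLocked(const GBufferPixel& p) { return light(p.albedo, p.normal); }

int main() {
    Texture2D albedo, normal;
    Vec4 a = shadeFromTextures(albedo, normal, 0.5f, 0.5f);
    GBufferPixel px{albedo.texels[0], normal.texels[0]};
    Vec4 b = shadePixelLocked(px);
    (void)a; (void)b;
    return 0;
}
```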
 
MfA said:
Which would still not represent proof ... you can't prove a negative.

I did not claim it would be any sort of proof. Again, all we have is the fact that one does not exist. I am not claiming in the slightest to have proof that it can't be built.

To be clear: I am saying that the fact that one does not exist is itself evidence to support the theory that the overall advantages of TBDR may not be there in practice...or not be there to an extent which would entice more risk-taking by IHVs to reap the rewards.
 
Sorry, but I think this is nonsense! If a TBDR had so many disadvantages, then I would have serious problems with my KyroII!

We're talking about high-end products.

If a TBDR had so many advantages across the board, then we would see more of them on the market, and in more product segments.

What I think is the reason why ATI or Nvidia don't change to TBDR is that it costs a lot of money and time to develop a well-performing and bug-free TBDR!

So what you're saying is that the reward is not worth the risk. Put another way, if the resulting TBDR were such a clear-cut, obvious advantage, "a lot of money and time" would be spent by ATI and nVidia to bring one to market.

For ATi and Nvidia it is cheaper to develop their existing IMRs further and reduce overdraw through other methods! That's all!

We're really not too different on this, you know. ;)

Again, companies are in this to make money. If ATI and nVidia could develop a clearly superior product with TBDR, one that would MAKE UP for the higher investment costs....they would do it.

Your theory effectively rests on one of two premises:

1) IMG Tech holds IP that makes it legally impossible for ATI / nVidia to make a competitive TBDR

and/or

2) ATI's and nVidia's engineering talent / resources aren't on par with IMG's.

No offense to IMG, but I'd say nVidia and ATI have them soundly beat in the resources department, and I'd have a hard time believing that the collective talent of either of those two companies could be that far behind IMG Tech.

PowerVR's licensees chose the market! Neither NEC nor STMicro was very interested in PC graphics!

Must....resist....asking the obvious question......
 
Joe, I can't keep up with endless cross-quoting; just one:

That's a circular argument. I know it "remains to be seen". The question is, WHY is that the case? Why hasn't there been a "fully speced" TBDR on the market so we can settle this? You seem to think that it might be because it is questionable that "overall", TBDR will be more advantageous than IMR. That's no different from what I already said.

Any answer out of a pool of countless possibilities falls rather under MfA's earlier post(s). Guess what - I don't really care that much either, as long as there is more than one alternative on the market.

My personal urge, or curiosity if you prefer, to finally see a high-end TBDR is a totally different story. I've said before that I don't care which vendor it originates from, yet the chances of seeing a TBDR from more than one vendor are currently zero.

Anyway, to come back to your quote above: no, I don't believe in miracles, if that's what you're asking, and I don't exclude the possibility that I might agree with you on a few points too.

Teasy,

You don't believe that despite seeing that very thing with Kyro II?

Depends how you define "killing" I guess.

edit: typos
 
Depends how you define "killing" I guess.

TNT2 had similar specs to Kyro II - you don't think Kyro II killed TNT2? Then there's the GeForce 2 MX, with higher specs; Kyro II also killed that, AFAICS.
 
Teasy said:
Depends how you define "killing" I guess.

TNT2 had similar specs to Kyro II - you don't think Kyro II killed TNT2? Then there's the GeForce 2 MX, with higher specs; Kyro II also killed that, AFAICS.


Had the K2 been released in the TNT2 timeframe, it would have killed it without a single doubt. I'm taking all factors into account here; of course the K2 was, to me, the obviously better choice compared to a 2MX, but there were obviously users who didn't mind 16bpp colour depth quality back then too.

My idea of "killing" just requires more potential that's all. Despite that my original comment wasn't exactly fixated on the past. Just about two years later, or to be more precise today.
 
Been reading the last few pages (didn't bother to read it all), and there's something I don't understand.
Many of you say IMG Tech has IP which might make nVidia/ATI unable to develop a TBDR.
But isn't most of that IP pretty much similar to GigaPixel tech, besides the TBR parts? So the problem for nVidia would be finding another way to do TBR, not finding another way to do DR, right?


Uttar
 
Ailuros

Whether both cards are released at the same time isn't important here. This is about looking at a TBR and an IMR with similar specs and seeing which one is faster; whether they're released at the same time or years apart is irrelevant.

My idea of "killing" just requires more potential that's all. Despite that my original comment wasn't exactly fixated on the past. Just about two years later, or to be more precise today.

So you're saying that you don't believe that, say, a Radeon 9700 would be killed by a similarly speced TBR? Well, that's certainly debatable with the improvements in efficiency since the GeForce 2. So I wouldn't argue with that, as it's not proven either way.

When you said "I never believed" that gives the impression that well.. you never believed it, as in now, 1,2,3,4 years ago ect.

I just have a hard time seeing how anyone could compare a TNT2 (which AFAICS had an identical basic spec to the Kyro II), or even a GeForce 2 MX, with a Kyro II a couple of years ago and not think "wow, a TBR really does kill a similarly speced IMR".

I mean, so what if the GeForce 2 MX was as fast as (or even slightly faster than) the Kyro II in a few limited conditions? Kyro II still beat it in the vast majority of cases, and beat it by a massive amount at higher resolutions in 32-bit and especially with FSAA. For instance, with FSAA in 32-bit, at any resolution the Kyro II was twice as fast. Saying that Kyro II didn't kill the GeForce 2 MX is like saying that a Radeon 9700 doesn't kill a GeForce 4.
 