PowerVR Serie 5 is a DX9 chip?

Discussion in 'Architecture and Products' started by ActionNews, Jan 9, 2003.

  1. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Ailuros,

That's a circular argument. I know it "remains to be seen". The question is, WHY is that the case? Why hasn't there been a "fully specced" TBDR on the market so we can settle this? You seem to think that it might be because it is questionable that "overall", TBDR will be more advantageous than IMR. That's no different than what I already said. ;)

    No, I did not exclude that at all. I already commented that the "extra investment" to "switch" to TBDR might seem like too much a risk given what their "best guess" for the result might be.

    Right. So why haven't 3rd parties been flocking to license Deferred rendering cores for the PC from IMG, for a hypothetically "superior" rendering approach?

    Hypotheticals mean nothing. It's results that matter.

Apparently, IMG has so far failed to make a convincing case to their prospective customers that they could build a "fully specced TBDR" with their tech and be very successful (make money) with it.

    Don't understand your point.

I was saying that in the PC space, where games are already coded with IMRs in mind, the "theoretical advantages" of a TBDR may be significantly diminished. (Relative to, say, deciding which chip to put in a new console or PDA, where the software is yet to be developed.)
     
  2. Hyp-X

    Hyp-X Irregular
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,170
    Likes Received:
    5
I think too many things get mixed together when one says "deferred rendering".

    We could talk about:
1.) deferred shading - shading occurs only after the visibility of a pixel is determined (eliminates overdraw).
2.) on-chip framebuffer (EDRAM / SRAM) - framebuffer bandwidth can be very high while DRAM bandwidth is saved.
3.) tile-based deferred rendering - rendering in tile order instead of primitive order (reduces the render target size so it can fit on chip).

1: it is possible on IMR cards as well, but the determination of pixel visibility costs fillrate/bandwidth. (If I understand correctly it costs on Kyro cards as well, but it runs at 16 pixels/clock and is done in parallel with rendering the previous tile, so it's rarely limiting.)
But the introduction of HyperZ and equivalent technologies works in the direction of saving on this cost.

2: the question is: can the bandwidth of a DRAM solution satisfy IMR cards' fillrate? As long as the fillrate is limited by the available manufacturing process instead of DRAM technology, the answer will be yes.
The on-chip solution (combined with TBR) can be cheaper - but so far that's only critical in the mainstream market - not the high end.

3: this is actually what is defined as the opposite of IMR. It is required because in the PC space the amount of data that needs to be stored in the framebuffer is way too high to fit on chip. Once you work in DRAM only, there's really no point doing this. (I'm not talking about drawing triangles in tile order - which most IMRs do.)
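The overdraw elimination from point 1 can be illustrated with a toy sketch (illustrative only, not how any real chip works; all names are made up). An immediate-mode renderer shades fragments as triangles arrive, so pixels later covered by nearer geometry still cost shading work, while a deferred renderer resolves visibility for a tile first and shades each visible pixel exactly once:

```python
def immediate_mode(fragments):
    """Shade every fragment that passes the depth test as it arrives.
    Since triangle order is arbitrary, shading work on pixels that are
    later covered by nearer geometry is wasted (overdraw)."""
    depth = {}
    shaded = 0
    for pixel, z in fragments:
        if pixel not in depth or z < depth[pixel]:
            depth[pixel] = z
            shaded += 1  # shading work spent, possibly wasted
    return shaded

def tile_deferred(fragments):
    """Bin all fragments for the tile, resolve visibility first,
    then shade only the final visible fragment per pixel."""
    nearest = {}
    for pixel, z in fragments:
        if pixel not in nearest or z < nearest[pixel]:
            nearest[pixel] = z
    return len(nearest)  # one shading op per visible pixel

# Three overlapping triangles covering the same pixel, drawn back to front:
frags = [(0, 0.9), (0, 0.5), (0, 0.1)]
print(immediate_mode(frags))  # 3 shading ops (2 wasted to overdraw)
print(tile_deferred(frags))   # 1 shading op
```

Note that if the same fragments arrive front to back, the immediate-mode depth test already rejects the hidden ones - which is exactly why HyperZ-style early rejection narrows the gap in practice.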
     
  3. Teasy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,563
    Likes Received:
    14
    Location:
    Newcastle
    You don't believe that despite seeing that very thing with Kyro II?
     
  4. Mariner

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,288
    Likes Received:
    1,055
    To be more precise, I was actually trying to say that a tile-based deferred renderer (TBDR) is more complex to implement, not more complex when working.

    The 'standard' IMR we know and love is fundamentally based on work which took place at SGI. How many of the IHVs have engineers who formerly worked at SGI? I know 3Dfx, NVidia and ATI have a fair few people who were formerly SGI. Although the concept of a TBDR was undoubtedly thought of before Videologic introduced the PowerVR technology, had anyone else actually done much work on the technology? As far as I am aware, Videologic were the first to produce such hardware and were therefore the first to tackle any problems encountered and consequently solve them, leading to IP in the field.

    Dreamcast was the first hardware that showed the potential benefits of a TBDR - there's little doubt that the chip used in Dreamcast was much more powerful than that which 3Dfx was trying to persuade Sega to take (based on what we have since heard). At the time, 3Dfx had the fastest chips in the marketplace so it seems reasonable to think that, had more work been done to bring the PC equivalent of the chip to market promptly, they could have had a winner on their hands. This failure was always blamed on NEC concentrating on the Dreamcast only.

As far as I am concerned, I'm with Teasy in that Kyro showed the benefits that could be gained with the use of TBDR as compared to a standard IMR. Kyro and Kyro II wiped the floor with the GeForce2 MX, which had similar specs, and Kyro II competed very well with the GF2 GTS - I seem to remember that they sold 1 million Kyro chips? Once again, the reason we are told a high-end chip was not produced was ST aiming for the budget arena and then pulling out of the market altogether.

I am sure that TBDR still has many benefits, and the onus is now on ImgTec to gain a licensee who is willing to 'go for the jugular', so to speak, and release a high-end chip. The alternative which has been mooted before is for them to go it alone and produce the chip themselves, but I think this unlikely as it doesn't really fit in with their current business plan.
     
  5. Kristof

    Regular Alpha

    Joined:
    Jan 30, 2002
    Messages:
    733
    Likes Received:
    1
    Location:
    Abbots Langley
With this kind of reasoning you can keep going. What if the number of textures increases rather than the geometry complexity? The number of textures requiring storage grows and grows, but you have these huge buffers allocated, so you run out of video memory, causing you to spill textures into AGP, and your performance drops drastically on your IMR but not on your TBDR, since it does not waste tens of MBs on buffers.

All in all, the reasoning you have about a TBDR can be applied just as well to an IMR, just in different situations. If you now think: how about just adding more memory? Well, adding more memory solves the problem of textures, but also of scene storage.

You can find a worst case situation for any design; the issue is how they all perform in the real world. Will geometry get so complex that buffers overflow, or is the system that handles this so advanced that there is little impact? Will textures overflow on an IMR, and is AGP then fast enough? Or will all cards simply have so much memory (128MB, 256MB, etc.) that neither issue will ever be a real problem?
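The memory tradeoff described above can be put into a back-of-envelope sketch (all numbers made up for illustration - real buffer layouts and per-triangle parameter sizes vary by chip): an IMR spends video memory on large colour/Z buffers that scale with resolution and AA samples, while a TBDR spends it on binned scene data that scales with geometry instead.

```python
def imr_buffer_mb(width, height, bytes_per_pixel=4, aa_samples=4):
    """Colour + depth buffers at full supersampled resolution.
    Grows with resolution and AA level, not with scene complexity."""
    samples = width * height * aa_samples
    return samples * bytes_per_pixel * 2 / 2**20  # colour + depth

def tbdr_scene_mb(triangles, bytes_per_triangle=32):
    """Binned scene (parameter) storage. Grows with geometry,
    not with resolution or AA level."""
    return triangles * bytes_per_triangle / 2**20

print(round(imr_buffer_mb(1024, 768), 1))  # 24.0 MB of buffers
print(round(tbdr_scene_mb(500_000), 1))    # 15.3 MB of scene data
```

Under these assumed numbers, either side can be made to look worse: crank up resolution and AA samples and the IMR's buffers balloon, crank up triangle counts and the TBDR's scene storage balloons - which is exactly the point about worst cases above.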

It's easy to sit there and point out a potential weakness of one architecture, but you then have to be fair and indicate all the potential weaknesses of the other architectures as well...

    K-
     
  6. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
Gotcha. So then we would generally agree that "more R&D expense" is required for TBDR vs. IMR. Thus, from the beginning, you need a better price/performance part simply to recoup the additional R&D expense.

    You can look at it the other way: because there are relatively few TBDR hardware implementations out there, there are many avenues of specific implementations yet to be explored....meaning there is a lot of IP yet to be claimed.

    Sure, I wouldn't doubt that. But again, we're talking about a console that is not concerned with running legacy software that avoids the pitfalls of IMR, and doesn't cater to TBDR strengths.

I don't want to sound like a broken record....but I'm ONLY talking about success in the PC space!

    As Kristof and Chalnoth are discussing...you can create software that runs a "best case" and "worst case" for either architecture. If all the software on PCs is designed with IMRs in mind, it is inherently trying to minimize the "worst case" scenarios for IMRs. And that can drastically impact the overall "advantage" that a TBDR brings to market.

It was a glimpse for sure. That was so long ago (relatively speaking) and with IMR architectures that are much less efficient than today's parts. Has PowerVR's technology / implementation increased in efficiency similarly?

And if someone builds a TBDR using a 256-bit bus of 500 MHz DDR-II....can they design the chip itself fast enough to utilize it? Or will we be stuck with a part with about the same performance, just that its bottleneck is fillrate, not bandwidth?

    I'll refrain from commenting on that in hopes of evading 10 pages of battle with Teasy that would certainly ensue. ;)

IMG's current "business structure" is the second reason why I think PowerVR tech will never amount to anything other than a niche in the PC space.
     
  7. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    IP issues, inertia, NIH, unwillingness to take risk and offend developers who think they are better at determining how hardware should be designed than the hardware developers etc.

No answer to this particular question is better than any other, since it will never be answered with authority except by direct proof. Of the few people with authority who frequent this board, none are unbiased; trying to determine which answer is more likely to be true is an even greater waste of time.

IP licensing is flawed; companies intent on being a dominant player in a market would DIY it anyway (they will not abide the inevitable mismatches and delays which result from splitting up the development). It is for companies which do something on the side, or which need your core as a small part of their bigger machine.
     
  8. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
To be fair, hardware is supposed to be a means to an end for developers....so perhaps it's the hardware guys who think they are better at determining how the software guys should work than the software guys themselves. ;)

Agreed. We can't know with any reasonable proof "why" no one has built a "super" TBDR. We only know the fact that one does not exist. And there are only two logical "generalized" reasons for this:

1) Someone has tried to build one, just unsuccessfully.
2) "The guys making the decision" to build it or not simply have not been convinced "by the guys with the idea" that the potential rewards are high enough to outweigh the risk.

    And to be clear...that is "up to this point in time."

    In the PC Graphics space, I obviously agree 100%.
     
  9. ActionNews

    Newcomer

    Joined:
    May 14, 2002
    Messages:
    59
    Likes Received:
    0
    Location:
    Germany
Sorry, but I think this is nonsense! If a TBDR had so many disadvantages, then I would have serious problems with my Kyro II :)! Either I would have graphics errors or it would be rather slow! But my Kyro II performs well, and no errors :)!

What I think the reason is why ATI or Nvidia don't change to TBDR is that it costs a lot of money and time to develop a well-performing and bug-free TBDR! Perhaps there are some difficulties when you develop a TBDR which you have to solve first (without infringing existing patents)! For ATI and Nvidia it is cheaper to develop their existing IMR further and reduce overdraw through other methods! That's all!

    CU ActionNews
     
  10. ActionNews

    Newcomer

    Joined:
    May 14, 2002
    Messages:
    59
    Likes Received:
    0
    Location:
    Germany
Sorry, but here I don't agree either!
PowerVR's licensees chose the market! Neither NEC nor STMicro were very interested in PC graphics! What STMicro wanted was to make fast money without big investment, and NEC only needed a chip for their Dreamcast! For both purposes you don't need high end!

Perhaps now PowerVR wants to show what's possible with TBDR and we'll really see a high-end TBDR :)!

    CU ActionNews
     
  11. Basic

    Regular

    Joined:
    Feb 8, 2002
    Messages:
    846
    Likes Received:
    13
    Location:
    Linköping, Sweden
    In what way would TBDRs be better than IMRs for MRTs?

I might have misunderstood some things about MRTs; please tell me if that's the case. But AFAIK, MRTs are multiple framebuffers for output. And if you want to use the output in a second pass, you'll have to use it as a texture, as opposed to a mechanism where the second pass is limited to just reading the MRT from the same pixel position. (Just as we suspect DeltaChrome's frame buffer reads work, but with multiple inputs/outputs.)

I can see how a TBDR would be more efficient with "pixel-locked" MRTs, but I don't think that's how it works in DX9. I don't see what's so special about TBDRs together with MRTs that need to be used as textures to feed them back.
     
  12. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
Which would still not represent proof ... you can't prove a negative.
     
  13. Aeros405

    Newcomer

    Joined:
    Feb 10, 2002
    Messages:
    15
    Likes Received:
    0
  14. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    MfA said:
I did not claim it would be any sort of proof. Again, all we have is the fact that one does not exist. I am not claiming in the slightest to have proof that it can't be built.

To be clear: I am saying that the fact that one does not exist is itself evidence to support the theory that the overall advantages of TBDR may not be there in practice...or be there to an extent which would entice more risk taking by IHVs to reap the rewards.
     
  15. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    We're talking about high-end products.

If a TBDR had so many advantages across the board, then we would see more of them on the market, and in more product segments.

So what you're saying is that the reward is not worth the risk. Put another way, if the resulting TBDR was such a clear-cut, obvious advantage, "a lot of money and time" would be spent by ATI and nVidia to bring one to market.

    We're really not too different on this, you know. ;)

Again, companies are in this to make money. If ATI and nVidia could develop a clearly superior product with TBDR, one that would MAKE UP for the higher investment costs....they would do it.

Your theory effectively rests on one of two premises:

1) IMG Tech holds tech IP that makes it legally impossible for ATI / nVidia to make a competitive TBDR

    and/or

    2) ATI and nVidia engineering talent /resources aren't on par with IMG.

No offense to IMG, but I'd say nVidia and ATI have them soundly beat in the resources department, and I'd have a hard time believing that the collective talent of either of those two companies could be that far behind IMG Tech.

    Must....resist....asking the obvious question......
     
  16. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Joe I can't keep up with endless cross-quoting; just one:

Any answer out of a pool of countless possibilities falls rather under MfA's earlier post(s). Guess what, I don't really care that much either, as long as there is more than one alternative on the market.

My personal urge, or curiosity if you prefer, to finally see a high-end TBDR is a totally different story. I've said it before that I don't care from which vendor it originates, yet the chances of seeing a TBDR from more than one vendor are currently zero.

Anyway, to come back to your above quote: no, I don't believe in miracles if that's what you're asking, and I don't exclude the possibility that I might agree with you on a few points too.

    Teasy,

    Depends how you define "killing" I guess.

    edit: typos
     
  17. Teasy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,563
    Likes Received:
    14
    Location:
    Newcastle
TNT2 had similar specs to Kyro II; you don't think Kyro II killed TNT2? Then there's the GeForce 2 MX, higher specs, and Kyro II also killed that AFAICS.
     
  18. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania

Had the K2 been released around the TNT2 timeframe it would have killed it without a single doubt. I'm counting all factors here; of course the K2 was to me the obviously better choice compared to a 2MX, but there were obviously users that didn't mind 16bpp colour depth quality back then too.

My idea of "killing" just requires more potential, that's all. Despite that, my original comment wasn't exactly fixated on the past - just about two years later, or to be more precise, today.
     
  19. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
Been reading the last few pages (didn't bother to read it all), and there's something I don't understand.
Many of you say IMG Tech has IP which might make nVidia/ATI unable to develop a TBDR.
But isn't most of that IP pretty much similar to GigaPixel tech, besides the TBR parts? So the problem for nVidia would be finding another way to do TBR, not finding another way to do DR, right?


    Uttar
     
  20. Teasy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,563
    Likes Received:
    14
    Location:
    Newcastle
    Ailuros

Whether both cards are released at the same time isn't important here. This is about looking at a TBR and an IMR with similar specs and seeing which one is faster; whether they're released at the same time or years apart is irrelevant.

So you're saying that you don't believe that, say, a Radeon 9700 would be killed by a similarly specced TBR? Well, that's certainly debatable with the improvements in efficiency since GeForce 2, so I wouldn't argue with that, as it's not proven either way.

When you said "I never believed", that gives the impression that, well... you never believed it, as in now, 1, 2, 3, 4 years ago etc.

I just have a hard time seeing how anyone could see a TNT2 (which AFAICS had an identical basic spec to Kyro II) or even a GeForce 2 MX against a Kyro II a couple of years ago and not think "wow, a TBR really does kill a similarly specced IMR".

I mean, so what if the GeForce 2 MX was as fast as (or even slightly faster than) Kyro II in a few limited conditions. Kyro II still beat it in the massive majority of cases, and beat it by a massive amount at higher res in 32-bit and especially with FSAA. For instance, with FSAA in 32-bit at any res, Kyro II was twice as fast. Saying that Kyro II didn't kill the GeForce 2 MX is like saying that the Radeon 9700 doesn't kill a GeForce 4.
     