IMR "Wall" Limits V PVR

Discussion in 'General 3D Technology' started by PVR_Extremist, May 21, 2002.

  1. PVR_Extremist

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    194
    Likes Received:
    1
    A number of years ago, the general consensus was that IMRs would hit some kind of "technological wall" with respect to memory bandwidth and clock speed. This was an argument that PowerVR fans (including myself, truth be told) used to tout PowerVR technology as the eventual winner in this race.

    Clearly, to date at least, this has not happened. Credit to nV and others, who have employed various techniques to use their available bandwidth more efficiently (whilst also managing to get hold of faster, more capable RAM).

    My question:

    Is there still some kind of "technological wall" which will hamper IMR performance in the future?

    My personal opinion is that nV and others will lean towards a hybrid configuration, further optimising their bandwidth-saving techniques, but will not fully move to a tile-based deferred rendering solution.

    Also,

    Are "other" companies actively pursuing Tile based deferred rendering in "future projects"? If they are how long do you think PowerVR hold the advantage with respect to experience and capability in that area of technology?

    Regards

    Tino
     
  2. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    922
    Likes Received:
    1
    Location:
    Germany
    See the thread "Intel's new graphics core" below for answers.
     
  3. PVR_Extremist

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    194
    Likes Received:
    1
    I saw that thread, but in all honesty I don't really consider Intel much of a competitor in the 3D market.

    And it doesn't answer any of my other questions :roll:
     
  4. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    922
    Likes Received:
    1
    Location:
    Germany

    Sorry, but I have no answers myself. I had hoped that you would see Intel as a "player" in the 3D-chipset arena. I'm not sure myself whether this new Intel chipset will help IMG, but it is better when two companies have a deferred rendering architecture instead of only one.
    If Intel had used this core in the 850E chipset too, they would have been a force to be reckoned with, simply because the 4.2 GB/sec bandwidth would have been enough for a deferred renderer to shine (and maybe even outshine the nForce chipset / MX400 class).

    I hope that Intel will use this graphics core in all new chipsets, improving their 3D performance across the board. If that happens, then IMG will have an easier time when new 3D features come up in new DirectX / OpenGL versions.
     
  5. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,830
    Likes Received:
    49
    Location:
    Brasil
    I think game developers also helped to avoid the "technology wall" by never developing a game that will not run well on current IMRs.

    Imagine the games we could have if developers had a good deferred renderer available.

    The trick is that we get used to the limitations imposed by IMRs.

    Sooner or later they (hardware and software developers) will have to think seriously about deferred rendering.
     
  6. Tahir2

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,978
    Likes Received:
    86
    Location:
    Earth
    We are approaching a level where polygons will become smaller than pixels... then even deferred rendering will become obsolete, surely?

    Is there still an advantage to using deferred rendering when polygons are that small?
     
  7. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,805
    Likes Received:
    473
    Rendering is not necessarily deferred with tiling; the scene is just rendered in tile order ... there's a difference.
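
    To make the difference concrete, here is a minimal sketch (all types and function names are invented for illustration, not any real hardware interface). Both variants work on fragments already binned into a screen tile; the question is whether shading happens as fragments arrive, or only after visibility is resolved for the whole tile.

    Code:
    #include <vector>

    // A rasterized fragment, already binned into its tile.
    struct Fragment { int pixel; float depth; int triangleId; };
    struct Tile { std::vector<Fragment> fragments; };

    // Tile-order *immediate* rendering: shade every fragment that passes the
    // depth test the moment it arrives -- overdrawn pixels get shaded again.
    int shadeImmediate(const Tile& t, std::vector<float>& zbuf) {
        int shaded = 0;
        for (const Fragment& f : t.fragments)
            if (f.depth < zbuf[f.pixel]) { zbuf[f.pixel] = f.depth; ++shaded; }
        return shaded;  // can exceed the tile's pixel count (overdraw)
    }

    // Tile-order *deferred* rendering: resolve visibility for the whole tile
    // first, then shade each covered pixel exactly once.
    // 'winner' must be sized to the tile and initialised to -1 by the caller.
    int shadeDeferred(const Tile& t, std::vector<float>& zbuf,
                      std::vector<int>& winner) {
        for (const Fragment& f : t.fragments)
            if (f.depth < zbuf[f.pixel]) {
                zbuf[f.pixel] = f.depth;
                winner[f.pixel] = f.triangleId;
            }
        int shaded = 0;
        for (int id : winner)
            if (id >= 0) ++shaded;  // one shading pass per visible surface
        return shaded;  // never exceeds the tile's pixel count
    }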
     
  8. PVR_Extremist

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    194
    Likes Received:
    1
    When? Why? Does this imply that the big two already have deferred rendering somewhere in their roadmaps?
     
  9. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,830
    Likes Received:
    49
    Location:
    Brasil
    I don't know anything about roadmaps; this is just a gamer's guess (or hope).

    In 2 or 3 years we will start to see some DX9 games (many levels of multipass), and hardware developers will have a good 0.09-micron process available (which means DX9 for the masses). The main competition will be the $80 to $150 DX9 card, which is cost-sensitive. To keep cost down while keeping performance high, deferred rendering will make sense: it will save bandwidth and fillrate.
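
    As a rough illustration of the fillrate/bandwidth saving (the numbers below are made up for the example, not measurements): with an average overdraw of 3, an IMR shades and writes roughly three fragments per visible pixel, while a deferred renderer shades close to one.

    Code:
    #include <cstdio>

    int main() {
        const double pixels        = 1024.0 * 768.0;  // framebuffer pixels
        const double fps           = 60.0;            // target frame rate
        const double overdraw      = 3.0;             // assumed average depth complexity
        const double bytesPerPixel = 4.0;             // 32-bit colour write

        // An IMR shades (and writes) roughly 'overdraw' fragments per pixel;
        // a deferred renderer shades ~1 per pixel after hidden-surface removal.
        // This ignores Z traffic, texture reads and early-Z rejection.
        double imr      = pixels * overdraw * bytesPerPixel * fps;
        double deferred = pixels * 1.0      * bytesPerPixel * fps;

        std::printf("IMR colour writes:      %.0f MB/s\n", imr / 1e6);       // ~566 MB/s
        std::printf("Deferred colour writes: %.0f MB/s\n", deferred / 1e6);  // ~189 MB/s
    }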
     
  10. Ty

    Ty Roberta E. Lee
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,448
    Likes Received:
    52
    Well, that's pretty much the line of reasoning Tino refers to, the one PVR fans have used in the past. And obviously, for one reason or another, it just has not come to pass.
     
  11. PVR_Extremist

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    194
    Likes Received:
    1

    I was just going to say that :lol:

    I can see tile-based deferred rendering making more sense from a cost point of view. Indeed, it doesn't cost a lot today. Then again, what we have available now from IMGTEC isn't a GF4 Ti4600 in performance either.

    So my original questions still stand....
     
  12. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,830
    Likes Received:
    49
    Location:
    Brasil
    Not exactly the same reasoning.

    What if the Kyro II had had DDR memory and 3 or 4 pipelines?
    I think it could still be cheap and much faster than any other card in the same price range.

    Edited: what I am trying to say is "the same price range with much better performance", not "the same performance in the same price range".
     
  13. Saem

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,532
    Likes Received:
    6
    Actually, Intel adopting a deferred + tiled architecture would do far more to bring the approach to the PC arena than IMG's efforts alone. One thing to note is that the i810 chipsets, with their integrated core, are in about 50% of desktops, IIRC. That came from a survey done not too long ago, and it was discussed in these forums as well.

    With that said, I think we'll see more of the same with the i845G. If that's the case, then guess where a lot of game makers will target their games?
     
  14. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
    OK, hit me, but I think that IMRs are the way to go. Not only can you get decently close speed-wise with HyperZ-like implementations, but you also get useful information in the form of depth values for shadow mapping and other effects. IMRs may also give you occlusion query information.
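
    Roughly how a HyperZ-style scheme claws the speed back - a toy sketch with invented names, since the real logic lives in hardware and is considerably more elaborate: a coarse per-block depth value lets whole blocks of fragments be thrown away without ever reading the full-resolution Z-buffer.

    Code:
    #include <vector>

    // Toy hierarchical-Z ("HyperZ-like") early rejection. Hypothetical
    // structure for illustration, not ATI's actual implementation.
    struct HiZ {
        std::vector<float> blockMaxZ;  // farthest depth per 8x8-pixel block

        explicit HiZ(int blocks) : blockMaxZ(blocks, 1.0f) {}

        // If an incoming fragment is farther than everything already drawn
        // in its block, reject it without touching the per-pixel Z-buffer,
        // saving the Z-read bandwidth for that fragment.
        bool earlyReject(int block, float fragZ) const {
            return fragZ > blockMaxZ[block];
        }

        // When a block's pixels change, the hardware recomputes (or
        // conservatively estimates) the block's new farthest depth.
        void refresh(int block, float recomputedMaxZ) {
            blockMaxZ[block] = recomputedMaxZ;
        }
    };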
     
  15. Reverend

    Banned

    Joined:
    Jan 31, 2002
    Messages:
    3,266
    Likes Received:
    24
    It's simple, really: faster memory arrived, and more games appear to be CPU-limited anyway.
     
  16. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,805
    Likes Received:
    473
    No one likes occlusion queries but software developers, and now that they have them, I've only seen them put down on the GDAlgorithms list :) (Demos are nice, but I prefer the impressions from game developers.) As long as you use immediate mode rendering in its present form it's the only way to do fine-grained occlusion culling, granted ... but it's a pretty damn sucky way; it would be much better if the hardware had access to bounding volume information itself, IMO.

    Getting Z values for a shadow buffer is hardly a problem for a tiler; it's just a variation on rendering to a texture.
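
    For what it's worth, this is the kind of coarse test I mean - a purely hypothetical interface, sketched only to show the idea, since today you would have to emulate it with a query round trip through the driver:

    Code:
    #include <vector>

    // Screen-space bounds of an object: its nearest depth and the range of
    // framebuffer pixels it covers (kept 1-D here to stay short).
    struct Box { float minZ; int firstPixel; int lastPixel; };

    // Conservative test: if the nearest point of the box is behind every
    // depth value it covers, nothing inside the box can be visible.
    bool boxOccluded(const Box& b, const std::vector<float>& zbuf) {
        for (int p = b.firstPixel; p <= b.lastPixel; ++p)
            if (b.minZ < zbuf[p]) return false;  // box may poke through here
        return true;
    }

    // Usage: skip an object's full geometry when its bounds are occluded.
    //   if (!boxOccluded(bounds, depthBuffer)) drawObject(object);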
     
  17. Ty

    Ty Roberta E. Lee
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,448
    Likes Received:
    52
    Well, the whole "what if?" line of argument has also been done to death previously. "What if the Neon250 had come out on time? What if memory prices hadn't dropped for the original 3Dfx Voodoo1? What if the Kyro had had the benefit of mass production? Etc., etc." We all know that on paper deferred renderers have great benefits. The problem is that, so far, no one has proved that one can come out at "the same price range with much better performance" - speaking of the top end here, where IMRs are supposedly hitting the bandwidth ceiling.
     
  18. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,805
    Likes Received:
    473
    If people would stop trying to prove the negative, others could stop presenting "what if" scenarios.
     
  19. Ty

    Ty Roberta E. Lee
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,448
    Likes Received:
    52
    There's no 'proving of the negative' here because you can't logically prove a negative. We're waiting for proof of the assertion that IMRs will not be able to keep up with bandwidth demands and that deferred renderers will take over.
     
  20. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    Still memory bandwidth. So let's take a look at what the maximum possible memory bandwidth into a single-chip GPU might be. This bandwidth is determined mainly by two factors:
    • Width of the memory bus, in number of pins
    • Data rate per pin of the memory bus
    For a GeForce4, you get 128 bits * 650 MHz = 10.4 GBytes/sec. But the ultimate maximum? The maximum bus width is obviously no less than 256 bits (P10, Parhelia) - with flip-chip packaging, pin counts of up to several thousand are possible, so I'd estimate that a 1024-bit (external!) bus is feasible (though certainly expensive as hell). For the data rate, Rambus QRSL signalling permits 1.6 GBit/s per pin - I'm not aware of any other scheme that offers comparable per-pin data rates.

    This amounts to a (rather hypothetical) maximum of about 200 GB/s, barring cost and signal-integrity issues - about 20 times what the GeForce4 has. So IMRs won't run into any hard limits anytime soon: that's about 6-7 years away, according to Moore's law. By which time eDRAM may have become cheap enough to take over from external memory solutions...
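
    Spelled out as arithmetic (same figures as above):

    Code:
    #include <cstdio>

    int main() {
        // GeForce4: 128-bit bus at an effective 650 MHz data rate.
        double gf4 = (128.0 / 8.0) * 650e6;  // bytes/sec
        // Hypothetical ceiling: 1024-bit external bus with QRSL-class
        // signalling at 1.6 Gbit/s per pin.
        double max = 1024.0 * 1.6e9 / 8.0;   // bytes/sec

        std::printf("GeForce4 today: %.1f GB/s\n", gf4 / 1e9);  // 10.4 GB/s
        std::printf("Ceiling:        %.1f GB/s\n", max / 1e9);  // 204.8 GB/s
        std::printf("Headroom:       ~%.0fx\n", max / gf4);     // ~20x
    }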
     