The New and Improved "G80 Rumours Thread" *DailyTech specs at #802*

Discussion in 'Pre-release GPU Speculation' started by Geo, Sep 11, 2006.

Thread Status:
Not open for further replies.
  1. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    I don't really think "guessing right" is the right terminology. At the end of the day, it is up to the game developers and how they achieve the desired results. There really isn't any way to predict how aggressive they will be in exploiting the advantages of a unified architecture.
     
  2. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Well, one of the points of it is that the developer shouldn't need to specifically exploit it.

    Of course, it does also allow the developer more freedom in how they distribute the resources such that they could make choices that they wouldn't necessarily on a traditional architecture.
     
  3. NocturnDragon

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    393
    Likes Received:
    17
    It's pretty clear that if developers use a PS/VS balance close to what the G80 will have, the match will be harder for ATI; no one outside NDAs knows yet who might win. But in all other scenarios (without even getting into the extreme PS- or VS-limited cases) the R600 will have an edge.

     
  4. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    But is that how developers engineer their games? As far as I can tell, developers have a baseline, and then loads of stuff you can turn on to get extra features and IQ if you have the horsepower to do so (i.e. more effects, higher resolutions, AA, AF, etc.). We've seen many triple-A games that simply could not run at their full capabilities when they were released, because they were able to outstrip even what the top-end graphics cards could offer.

    For a fixed PS/VS architecture like G80, you have a sweet spot that the developer may try to hit for his baseline. However, as soon as you start turning on all those extra features, chances are you will move away from that baseline, leaving some of your fixed architecture idle while waiting on wherever the bottleneck has been moved to.

    The advantage of the unified architecture, of course, is that it can effectively reconfigure itself to whatever balance the game needs (whatever settings you choose), in order to get the maximum performance out of all the transistors. In fact, the unified architecture could reconfigure itself on the fly depending on what the game is doing from scene to scene.

    Although there may be overhead for this kind of flexible, unified approach compared to a fixed architecture operating at its sweet spot, the picture gets very blurry when you consider that a game may easily move the fixed architecture away from its sweet spot as you enable all the extra eye candy.

    In this case the goalposts get moved, but the unified approach lets you "chase" those moving goalposts and dynamically create whatever new sweet spot is required, whereas the fixed architecture is left to suffer.
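    The fixed-vs-unified trade-off described above can be made concrete with a toy throughput model (entirely my own illustration with made-up unit counts, not either vendor's actual scheduler):

```python
# Toy model: a fixed 24 PS / 8 VS split vs. a unified pool of 32 units.
# Each unit does 1 unit of work per cycle; frame time is set by the
# slowest stage (fixed) or by total work over all units (unified).

def fixed_time(vs_work, ps_work, vs_units=8, ps_units=24):
    # The bottleneck stage determines frame time; the other stage idles.
    return max(vs_work / vs_units, ps_work / ps_units)

def unified_time(vs_work, ps_work, units=32):
    # A unified pool simply divides the combined work across all units.
    return (vs_work + ps_work) / units

# Sweet spot, PS-heavy drift, VS-heavy drift (hypothetical workloads).
for vs_work, ps_work in [(8, 24), (2, 30), (20, 12)]:
    f = fixed_time(vs_work, ps_work)
    u = unified_time(vs_work, ps_work)
    print(f"VS={vs_work:2d} PS={ps_work:2d}  fixed={f:.2f}  unified={u:.2f}")
```

    At the fixed design's sweet spot the two come out identical; as soon as the workload mix drifts either way, the unified pool wins because no units sit idle.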
     
    #224 Bouncing Zabaglione Bros., Sep 15, 2006
    Last edited by a moderator: Sep 15, 2006
  5. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    There really isn't "the balance" for a given game, or even a frame. Load distribution varies all the time. Just think about how objects tend to occupy fewer pixels as they move away from the viewpoint, and you'll surely see what I mean.

    Or think about how definitely not vertex bound a tone mapping pass is going to be, regardless of how many gazillions of polygons are visible.
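    A quick illustration of that point: under perspective projection an object's vertex count stays constant, but its screen coverage (and so its fragment count) falls off roughly with the square of its distance, so the PS:VS work ratio shifts per object and per frame (hypothetical numbers, just to show the trend):

```python
# Projected area of an object scales ~ 1/d^2 for a perspective camera,
# so fragment count shrinks quadratically while vertex count is fixed.

def fragments_at_distance(base_pixels, base_dist, dist):
    # Fragments covered at distance `dist`, given coverage at `base_dist`.
    return base_pixels * (base_dist / dist) ** 2

vertices = 10_000  # fixed per-object vertex workload (made-up figure)
for d in (1, 2, 4, 8):
    frags = fragments_at_distance(100_000, 1, d)
    print(f"dist={d}: {vertices} vertices, {frags:,.0f} fragments, "
          f"PS:VS work ratio = {frags / vertices:.2f}")
```

    The same mesh that is heavily pixel-shader-bound up close becomes vertex-bound in the distance, which is exactly why no single fixed PS/VS split fits a whole frame.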
     
  6. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Well, usually max settings are what we target at 30 fps on the fastest card available, or what next gen is expected to deliver. We usually target around a 50% increase in performance.

    This is why we see many games on different engines come out with around the same polys per frame; so far, other than Far Cry, no games really go above 200k per scene. Far Cry is an oddball, at least for outdoors, where pixel shaders weren't taxed much, so they could do that since the vertex calculation amounts were limited. 300-350k is the target for polys per frame for this gen of cards (7900 and X1900XTX), so vertex shaders won't be overly taxed on next gen. Actually, I don't think vertex shaders are taxed at all yet, so staying with fixed vertex shader counts really doesn't hurt. The more vertices that are drawn, the harder it will be for the pixel shader units to keep up, since there are a lot more calculations needed on this front (this was all the talk about FEAR with the 1 to 8 pixel shader ratio).

    Edit

    Using simple shaders like normal mapping, current gen cards could do about 3 million polys per scene with SM 3.0 and 4 lights per object in one pass. Adding in more complex shaders, that figure will drop considerably, mainly because the pixel shaders will be taxed greatly.
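    For what it's worth, here is the back-of-the-envelope arithmetic behind a 300k-polys-per-frame target (my own calculation using the figures above; real engines reuse transformed vertices via the post-transform cache, so the vertex count is a worst-case upper bound):

```python
# Rough vertex-throughput budget implied by the figures quoted above.
polys_per_frame = 300_000
fps = 30
tris_per_second = polys_per_frame * fps        # triangles per second
verts_per_second = tris_per_second * 3         # worst case: no vertex reuse
print(f"{tris_per_second:,} tris/s, up to {verts_per_second:,} VS invocations/s")
# -> 9,000,000 tris/s, up to 27,000,000 VS invocations/s
```

    Even the worst-case figure is well within what 2006-era vertex shader hardware could sustain, which supports the claim that vertex shaders aren't the bottleneck yet.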
     
    #226 Razor1, Sep 15, 2006
    Last edited by a moderator: Sep 15, 2006
  7. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    Is this right, that more vertices lead to more pixel shader calcs? And what was that ratio, VS:PS (surely not tex:math, given the context)?
     
  8. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    For each new vertex, normals have to be recalculated for the pixel shaders to do their thing, specifically for animated objects, and in some instances for static ones as well. So the pixel shaders will be used more as more vertices are drawn.

    edit: it's highly dependent on the shader being drawn; regular normal mapping and simple parallax is about 1 to 2 or 3, but once you get into POM, it will skyrocket.
     
    #228 Razor1, Sep 16, 2006
    Last edited by a moderator: Sep 16, 2006
  9. ants

    Newcomer

    Joined:
    Feb 10, 2006
    Messages:
    44
    Likes Received:
    3
    Which normals? Aren't they (usually) sent down with the vertex data (position, tex coords, etc.)?

    Do you mean the interpolation for the vertex data from the VS to the PS?

    Pixel shaders handle fragments; they don't care about vertices... They will be used more if those vertices generate more fragments.

    Please correct me if I am wrong.
     
  10. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Vertex normals, to calculate anything like normal maps: once a mesh moves, the vertex normals change, so these have to be fed into the pixel shader to get the final outcome. So let's look at regular lighting, Phong or Blinn, with normal mapping. All vertex normals are calculated at load time, and then where animated objects are concerned, vertex normals are recalculated as needed and sent to the pixel shader to spit out the output.

    In a Phong-type lighting system without any kind of bump mapping you are correct, there is no change, since we won't be using the vertex normal to calculate anything.

    But anyway, the goal of real-time lighting is to do all the vertex calculations one time and one time only, in one pass; to do this, vertex calculations are done in a post-processing stage (deferred shading and some other new lighting models coming out), but this fails with animated objects.

    Sorry, had to get a quick bite to eat ;), but now with newer games being much more interactive with their environments, more animated objects are going to be used. In Crysis, for example, all their trees are interactive, which just increases the vertex workload and in turn increases the pixel shader workload that much more (the ratio will stay the same, but the pixel shader limit will be hit first, since the ratio of pixel shaders is higher), and this is where the R600 "should" come out on top, but it all depends on the number of calculations that can be done on each GPU.
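    To make the "recalculated as needed" step concrete, here is a minimal sketch of recomputing smooth vertex normals for an animated mesh by averaging adjacent face normals (a pure-Python illustration of the standard technique, not any engine's actual code; a real engine would do this on the GPU or with SIMD):

```python
import math

def face_normal(a, b, c):
    # Cross product of two edge vectors gives the (unnormalized) face normal.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def vertex_normals(positions, triangles):
    # Accumulate each face's normal onto its three corner vertices,
    # then renormalize, giving smooth per-vertex normals.
    normals = [[0.0, 0.0, 0.0] for _ in positions]
    for i, j, k in triangles:
        fn = face_normal(positions[i], positions[j], positions[k])
        for idx in (i, j, k):
            for axis in range(3):
                normals[idx][axis] += fn[axis]
    out = []
    for n in normals:
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        out.append([x / length for x in n])
    return out

# Flat quad in the XY plane: every vertex normal should come out as +Z.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(vertex_normals(quad, tris))
```

    This whole loop has to be rerun every time the mesh deforms, which is why animated objects carry the extra per-frame cost described above.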
     
    #230 Razor1, Sep 16, 2006
    Last edited by a moderator: Sep 16, 2006
  11. MistaPi

    Regular

    Joined:
    Jun 12, 2002
    Messages:
    374
    Likes Received:
    13
    Location:
    Norway
    From someone I trust:

    - 40% dedicated to D3D10 (a little unclear if this means of the entire die).

    - NDA expires beginning of November.

    - Hard launch.

    - Aims for good availability of 3 SKUs for Christmas sales.
     
    #231 MistaPi, Sep 16, 2006
    Last edited by a moderator: Sep 16, 2006
  12. LeStoffer

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,262
    Likes Received:
    22
    Location:
    Land of the 25% VAT
    That sounds totally weird: dedicated? :???: VS and GS should be unified on G80, and the newly added integer instruction set shouldn't demand that much extra die space, especially not if their patent to combine FP and int in a single ALU is used?

    What am I missing here?
     
  13. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    There's so much more to D3D10/SM4.0 than integer support... a 40% increase would not surprise me at all.
     
  14. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    Shocka! :twisted:
     
  15. trumphsiao

    Regular

    Joined:
    Jan 31, 2006
    Messages:
    285
    Likes Received:
    11

    The 3DMark06 benchmark increase is very much the same as from the R9700 Pro to the X800XT.
    I also heard the G80 architecture supports an orthogonalized frame buffer, though exactly what that's good for, I dunno?
     
    #235 trumphsiao, Sep 16, 2006
    Last edited by a moderator: Sep 16, 2006
  16. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    It means that it does not matter what format your frame buffer is, all features (related to the frame buffer) are supported; basically, your FB is orthogonal (i.e. not related at all; the term comes from linear algebra..) to anything else....MAYBE ;)
     
  17. trumphsiao

    Regular

    Joined:
    Jan 31, 2006
    Messages:
    285
    Likes Received:
    11

    I already heard the 80nm SOI G80 will come out simultaneously with Vista/the new Office.
     
  18. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    to Uttar: ZRAMMM! LOL :)
     
  19. trumphsiao

    Regular

    Joined:
    Jan 31, 2006
    Messages:
    285
    Likes Received:
    11

    1. I wager this time we will see lots of G80 GTs.

    2. Nvidia just placed an order several days ago, which pretty much confirms we will at least see a batch of G80s by Nov or Dec.

    3. But the first batch of G80s is only 40K for sure. (Hope OEMs like Dell will not swallow too big a portion of the G80s.)
     
  20. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    How big of a deal is this? Besides FP16 AA support, what's the current situation with support for different FB formats?
     