The NEXT LAST R600 Rumours & Speculation Thread

Discussion in 'Pre-release GPU Speculation' started by Geo, Mar 1, 2007.

Thread Status:
Not open for further replies.
  1. dnavas

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    375
    Likes Received:
    7
    Well, in the first two images, I like the 2900's slightly thicker electric lines in the left middle section of the picture, but the ugly texturing in the bottom right is a clearer win for the G80.

    The blimps in the second two shots are different, though. The R600 shot looks like it has one narrow protrusion, whereas the G80's looks like one central and one off-center, knobbier structure. There are some slight aliasing differences on the top of the blimp as well. I would have to give that one to the R600.

    There are certainly sharper eyes out there, but, that's all I see.

    -Dave
     
  2. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
    A null GS could lead to register pressure, wasted ALU time (though that should be minimal), consumed thread buffer space, and memory used up for post-GS vertices. There may be other performance-wasting effects too.
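    The post-GS memory cost can be made concrete with a back-of-envelope sketch. This is purely illustrative (all numbers are assumptions, not R600 or G80 specifics): the point is that hardware must reserve worst-case output space per in-flight primitive, even for a pass-through ("null") GS that emits exactly what it receives.

    ```python
    # Hypothetical sketch: worst-case buffer space a scheduler must reserve for
    # geometry shader output, even when the GS just passes vertices through.

    def post_gs_buffer_bytes(primitives_in_flight, max_vertex_count,
                             attributes, bytes_per_attribute=16):
        """Worst-case post-GS output reservation.

        The hardware cannot know in advance how many vertices the GS emits,
        so it reserves max_vertex_count slots per input primitive.
        """
        per_vertex = attributes * bytes_per_attribute  # e.g. float4 attributes
        return primitives_in_flight * max_vertex_count * per_vertex

    # A pass-through GS on triangles: maxvertexcount = 3, say 8 float4
    # attributes per vertex, and (assumed) 512 primitives in flight to
    # hide latency.
    print(post_gs_buffer_bytes(512, 3, 8))  # 196608 bytes reserved doing "nothing"
    ```

    Even with these modest assumed numbers, a do-nothing GS ties up close to 200 KB of buffering that a pipeline with the GS stage disabled would not need.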
     
  3. Dalton Sleeper

    Newcomer

    Joined:
    May 1, 2007
    Messages:
    32
    Likes Received:
    0
    Location:
    SWEDEN
  4. Aerows

    Regular

    Joined:
    Nov 19, 2002
    Messages:
    317
    Likes Received:
    6
    ::Sigh:: I went to the store while on page 133 (I was catching up, 40 posts per page). Now that I am back, I started reading again... I just switched to page 134, and it's already up to page 139! LOL

    I'm hopelessly behind. Maybe I'll catch up to this post later this evening. Fascinating, though. So far, it seems that there are slight quality issues, perhaps due to drivers; that's what I have gathered from the screenshots I've seen (not just JPEG compression, but nothing awful). Maybe by the time I actually get to page 140, a new driver will have corrected it. LOL
     
  5. Rebel44

    Newcomer

    Joined:
    Feb 7, 2007
    Messages:
    65
    Likes Received:
    0
    It's so similar that I won't be able to tell the difference in games, so it doesn't matter whose IQ is 0.5% better.
     
  6. Dalton Sleeper

    Newcomer

    Joined:
    May 1, 2007
    Messages:
    32
    Likes Received:
    0
    Location:
    SWEDEN
    Typing error again? GTS on the first page, then they wrote GTX 640...
     
  7. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,320
    Likes Received:
    525
  8. Arnold Beckenbauer

    Veteran Subscriber

    Joined:
    Oct 11, 2006
    Messages:
    1,756
    Likes Received:
    722
    Location:
    Germany
  9. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    You don't really think a company like Nvidia wouldn't have gotten its hands on an R600 by this point, do you? Some are already leaking into the retail channel.
     
  10. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Like the previous generations, the R600 is optimized for calculation over texture fetches. Add to that that it is much easier to schedule work on the G80 to keep all ALUs filled, while the R600 has lots of potential scheduling hazards in its ring bus (and memory controller).

    If you go DX10 and calculate as much of your geometry and lighting as possible, the R600 will most likely do very well, if they get the compiler and resource hazards sorted out. And it seems that, for now, both of those things run quite a bit below par.

    On current-generation games, which use filtered textures for just about everything, it won't surpass the GTX. On next-generation games, which may use point sampling (data arrays) for much of the workload, it could. It has rather complex texture units that are very good at point sampling, but much less so at filtered samples, especially floating-point ones.
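    The point-sampling vs. filtered-sampling distinction can be sketched in a few lines. This is not vendor code, just an illustration of the work involved: a point sample is a single array read, while a bilinear-filtered sample is roughly four point samples plus weighting math, which is why hardware that is fast at raw fetches can still be comparatively slow at filtering.

    ```python
    # Illustrative sketch: cost difference between a point sample and a
    # bilinear-filtered sample of a tiny 2x2 "texture" (a list of rows).

    def point_sample(tex, x, y):
        """One fetch: just an array index, as with a data-array lookup."""
        return tex[y][x]

    def bilinear_sample(tex, u, v):
        """Four fetches plus three lerps per filtered sample."""
        x0, y0 = int(u), int(v)
        fx, fy = u - x0, v - y0
        x1 = min(x0 + 1, len(tex[0]) - 1)  # clamp at the texture edge
        y1 = min(y0 + 1, len(tex) - 1)
        top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
        bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
        return top * (1 - fy) + bot * fy

    tex = [[0.0, 1.0],
           [2.0, 3.0]]
    print(point_sample(tex, 1, 0))         # 1.0 -- one memory access
    print(bilinear_sample(tex, 0.5, 0.5))  # 1.5 -- four accesses plus math
    ```

    Done in FP16 or FP32 instead of 8-bit fixed point, the interpolation arithmetic gets correspondingly more expensive, which fits the observation about floating-point filtered samples.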
     
  11. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    If they have the right "contact" inside a shared partner, who knows?
    They may get one to play with even before ATI's lesser partners. :lol:

    But the other way around is also plausible, considering all the "leaks" before the G80 launch last November.
     
  12. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Oh, they have had it for some time, I'm sure :grin: . Everything kinda points to that: they knew the performance at the conference call, and the leaked emails were from last week.
     
  13. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    This is indeed the same story we heard when the R580 was introduced, which was supposedly more future-proof. I just don't understand why they keep targeting games that don't exist instead of focusing on what needs to be accelerated now. By the time those features become important (and they never really did within the lifetime of the R580), faster and better HW will be available anyway.

    I understand that it's possible to shift some of the texture filtering load to the shaders, and how that would benefit the R600. But I don't understand the incentive for a developer to do this (and would very much appreciate it if developers provided their thoughts on this).

    If it is currently doable to use the bandwidth of the texture units for all kinds of fancy stuff that would otherwise require shader calculation power, doesn't it also make sense to keep this functionality in the TUs and use the increased shader power for other things (like the GS)? It seems that memory bandwidth is about to increase significantly during the coming years, and this bandwidth has to be used for something: if ROP usage isn't increasing much, isn't it likely that it can be most efficiently used for texture operations?

    I went through the slides of the Cascade demo: it seemed to me that they were still relying a lot on clever texturing techniques in combination with the GS to get the best result. (Once again, some insight from a 3D programmer would be most helpful.)

    Also, even if there is a shift, isn't this something that will take multiple years to complete? Most engines will still need to run decently on DX9, so for the foreseeable future, DX10 will primarily be used to add nice effects here and there. And there's, of course, the simple fact that programmers can fall back on tons of well-documented existing techniques to make certain things happen, while they'll need to learn a bunch of new stuff to make full use of DX10.

    Edit: Regarding the human head demo that was recently posted on the Nvidia developers blog: I had the impression that it used an incredible number of textures to get just the right light behavior. I'd be surprised if these kinds of results can be obtained as efficiently with shaders.
     
  14. Subtlesnake

    Regular

    Joined:
    Mar 18, 2005
    Messages:
    347
    Likes Received:
    126
    Well, then why is there such performance variance from application to application? Wouldn't you expect texturing demands to be more consistent across games?

    Given the huge performance increases we've seen over the course of a couple of driver releases, and the fact that in certain instances the X2900 is close to the 8800 GTX (while in others it performs at X1950 XTX levels!), I think we can point to other factors.

    Do we even know how R600's texturing capabilities stack up to R580's?

    Isn't the 2600 intended for the sub $300 price points?
     
  15. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Creating geometry and lots of shader power are good things, as long as most of your potential game buyers have them. Creating geometry is needed for physics, clothing, hair, fluids, and deformable terrain, while lots of shading power is great for weather and dynamic lighting.
     
  16. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    But does it replace the need for texturing throughput in any way?
     
  17. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    It won't replace texturing throughput, but if there is a shift in the bottlenecks (which it seems probably won't happen, at least not to the degree the R600 is held back, if in fact the texture units are what is holding it back), then the extra shader power, and again the possible GS performance advantage, will come into play. I say possible because the GS problem seems to be something trivial; but take that last part with a grain of salt, as I'm not certain whether it's really trivial or not. That's the gist of what I hear.
     
  18. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Hm. What textures can we remove?

    Simple, colored ones? Perhaps, for materials like wood, leaves, rocks, plastic, etc., as long as we make sure each polygon uses only a single material. But that won't work for details or most objects.

    Normal maps and bump maps? That could be done in many cases, as long as the surface is reasonably smooth or has lots of straight lines. But probably not for (N)PCs and most objects. And I don't think artists would like that.

    Light maps? As long as you only use a single material for each polygon, you could remove most of them.

    So, you could theoretically get rid of many textures that are used for static terrain.
     
  19. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,320
    Likes Received:
    525
    Sorry for replying to myself, but after looking at this picture again, it seems that one is blurrier than the other. Not a lot, nothing compared to the JPEGed mess, but noticeable. Of course, there is a chance that after staring at the monitor for hours my eyes are playing tricks, so what does everyone else think?
     
  20. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,491
    Likes Received:
    978
    Location:
    en.gb.uk
    I think that your question is an answer in itself. After hours of staring you're starting to doubt your own eyes.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.