The NEXT LAST R600 Rumours & Speculation Thread

Discussion in 'Pre-release GPU Speculation' started by Geo, Mar 1, 2007.

Thread Status:
Not open for further replies.
  1. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
  2. TG01

    Newcomer

    Joined:
    Dec 18, 2006
    Messages:
    40
    Likes Received:
    1
    Ehm... because it's faster than accessing a hard drive, perhaps?
     
  3. pakotlar

    Banned

    Joined:
    Mar 19, 2004
    Messages:
    805
    Likes Received:
    17
    Kyle's not the sharpest tool in the shed. If the card actually used 300W, there's no way its power envelope would be 300W (6+8+PCIe). No engineer in the world would produce a product with zero tolerance.
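For reference, the "6+8+PCIe" arithmetic above comes from the standard PCI-SIG power-delivery limits. A minimal sketch of that budget (the helper function is hypothetical, purely for illustration):

```python
# Standard PCI-SIG power-delivery limits in watts; these are spec
# figures, not measured R600 numbers.
PCIE_SLOT_W = 75    # power available through the x16 slot itself
SIX_PIN_W = 75      # 6-pin auxiliary power connector
EIGHT_PIN_W = 150   # 8-pin auxiliary power connector

def board_power_budget(aux_connectors):
    """Total power (W) a card can draw from the slot plus its connectors."""
    return PCIE_SLOT_W + sum(aux_connectors)

# 6-pin + 8-pin + slot = 300 W: a card that actually consumed the full
# 300 W would leave zero engineering tolerance, which is the post's point.
print(board_power_budget([SIX_PIN_W, EIGHT_PIN_W]))  # 300
```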
     
  4. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,360
    Likes Received:
    1,377
    I think you had better explain exactly what you mean by this. :)
     
  5. pakotlar

    Banned

    Joined:
    Mar 19, 2004
    Messages:
    805
    Likes Received:
    17
    How sweet would that be, though. And with ~140GB/s of bandwidth and 24 ROPs @ 800MHz, it wouldn't be nearly as bottlenecked as the G80.
     
  6. TG01

    Newcomer

    Joined:
    Dec 18, 2006
    Messages:
    40
    Likes Received:
    1
    DX10 virtual memory means accessing your physical memory directly from the GPU.
    This can be used to preload all kinds of stuff so the GPU can access that data directly.

    or I could be wrong (again) .. :)
     
  7. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    4,047
    Likes Received:
    1,670
    Hi Pete, I got that bit; the thing is, was the demo actually demoed on the R600 or an Xbox?

    Pete, also note that since EA signed a contract with Epic to use the UE3 engine, Battlefield: Bad Company's Frostbite seems very much like UE3 (maybe an updated version?).

    If it is a completely new engine, then WOW! EA has actually gone and surprised me.

    US
     
  8. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,245
    Likes Received:
    4,465
    Location:
    Finland
    I believe it's their own, and thanks go to DICE rather than EA (to my understanding, they're still acting very much as their "own studio", similar to Crytek and so on, and not like the gazillion studios that were 100% integrated and now act with no name of their own).
     
  9. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    4,047
    Likes Received:
    1,670
    #249 Unknown Soldier, Mar 3, 2007
    Last edited by a moderator: Mar 3, 2007
  10. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    Ugh, not the inq for a source again.

    What a bunch of FUD that site is!
     
  11. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Whoever wrote that article is completely clueless.

    Sapphire will never make GeForce cards.
    The parent company (PC Partner) has an equivalent brand set up just for selling Nvidia cards, called "Zotac".
    I believe there are a few of those cards around.
     
  12. rwolf

    rwolf Rock Star
    Regular

    Joined:
    Oct 25, 2002
    Messages:
    968
    Likes Received:
    54
    Location:
    Canada

    - Geometry shader and the ability to create polygons.
    - Substantially reduced API object overhead.

    These two features alone will solve the problem of applications being completely CPU-bound.

    - No cap bits: all features in hardware, eliminating multiple code paths in games.
    - Virtual memory.
    - Unified instruction set (HLSL 10).
    - Shader Model 4.0.
    - Standard storage formats.

    These will make games easier to code and reduce development time.
     
  13. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Hmm, well, all DX10 GPUs have those features, and that wasn't what we were talking about. Which features of DX10 could be f*cked up or done better is what I'm talking about. If a GPU is a good DX9 performer, all of these features in DX10 will also run well, so additional features like the GS are really the only area I can think of that would make a difference. We already see that the load balancing for the unified shaders on the G80 is doing very well, but this is possibly another area AMD could have improved, since this is their second-generation unified shader design. Another possible area is branching: the G80 has very good branching performance, and this should carry over to DX10, but it's a comparative situation; the R600 might be better.
     
  14. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    They aren't talking about PC Partner and its daughter or sister companies; they probably used the wrong term. I'm pretty sure they mean AIB partners.
     
  15. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    A quick reality check will tell you that any GPU will have a hard time using even 8x MSAA at a decent resolution in upcoming games.
     
  16. turtle

    Regular

    Joined:
    Aug 20, 2005
    Messages:
    279
    Likes Received:
    8
    Why? It looks like 4x will be usable on the G80 for the time being in DX9 (and perhaps DX10), so why wouldn't the R600, with the possibility of literally twice the bandwidth (179+ GB/s vs 86.4) if using 2800MHz GDDR4, manage 8x?

    I mean, granted, new games are taxing, and that might be pushing it to the brink of what to expect, but it is at least possible... especially considering ATi has done a more efficient job with AA/AF in the past using the same amount of bandwidth as Nvidia; imagine them having 2x as much. :twisted:
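The bandwidth figures quoted in this thread follow from bus width times effective memory data rate. A quick sketch of that arithmetic (the 512-bit bus and GDDR4 clocks are the rumoured R600 specs under discussion, not confirmed data; the G80 GTX figures are its shipping 384-bit GDDR3 configuration):

```python
def bandwidth_gbs(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gbs(384, 1800))  # G80 GTX, 384-bit @ 1800MHz effective: 86.4 GB/s
print(bandwidth_gbs(512, 2200))  # rumoured R600, 512-bit @ 2200MHz GDDR4: 140.8 GB/s
print(bandwidth_gbs(512, 2800))  # rumoured R600, 512-bit @ 2800MHz GDDR4: 179.2 GB/s
```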
     
  17. Sound_Card

    Regular

    Joined:
    Nov 24, 2006
    Messages:
    936
    Likes Received:
    4
    Location:
    San Antonio, TX
    Jawed likes this.
  18. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Likely not to be used very much in the next two years, I predict.

    Irrelevant to the underlying HW, so not a factor in G80 DX10 performance vs R600 DX10 performance.

    Yes, but all of this is irrelevant to the original point, which is that DX9 performance is likely to predict DX10 performance for the most part. There is a sort of FUD myth going around that somehow the G80 isn't a true DX10 card and somehow its DX10 performance will suck, much like the NV30 was great at DX8 but sucked on real DX9 workloads. In other words, when benchmarks show the G80 and R600 doing about the same on DX9 workloads within margins (maybe +/- 15%), the rallying cry of f*nb*ys will be "but this is not really a fair comparison, wait until real DX10 games come out, which will really show the difference between the GPUs!" To which I say, it won't make a big difference.

    There are really only two DX10 features I can think of that don't have equivalents in DX9 and that can be implemented "badly", and hence kill performance if a game used them heavily: geometry shaders and stream out.

    But looking at the CUDA architecture, it is reasonable to expect that both of these will perform as expected on the G80, although I think first-generation geometry shaders will essentially remain a toy. There are limitations on the GS in DX10 that reduce its usefulness compared to developers using middleware to code to CUDA/CTM directly.

    IMHO, a lot of geometry amplification techniques dovetail with GPGPU techniques, and hence you'll see developers going the Brook/GPGPU route. For example, to do the kind of procedural geometry synthesis you see in something like SpeedTree, you're not going to do it in the GS.
     
    John Reynolds likes this.
  19. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
    I think this is why nVIDIA came up with what we know as CSAA. 8xQ (8x MSAA) may not be usable for next-generation games, but that doesn't mean 8xAA non-Q and 16xAA non-Q aren't usable either. Using Chris Ray's G80 image investigation found on Rage3D, you can clearly see that 16xAA non-Q has a performance hit almost similar to 8xAA non-Q, while both are very usable in almost ALL games, new (except one or two that I can think of right now) or old.

    I'm not sure how ATi will approach things, but by the looks of it, instead of what nVIDIA did (with their bandwidth-saving technique in the form of CSAA), they may go for the brute-force/more traditional approach. (May as well spend the 150GB/s of bandwidth on something.)

    Would it be possible for ATi to implement SuperAA on single GPUs with the R600?
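As a rough illustration of why high MSAA levels eat memory and bandwidth, here is a back-of-the-envelope sketch of uncompressed multisampled framebuffer sizes (assuming 4 bytes of colour plus 4 bytes of depth/stencil per sample; real GPUs compress these buffers, so treat the numbers as upper bounds):

```python
def msaa_framebuffer_mb(width, height, samples, bytes_per_sample=8):
    """Uncompressed colour+depth storage for a multisampled render target, in MiB."""
    return width * height * samples * bytes_per_sample / 2**20

print(round(msaa_framebuffer_mb(1600, 1200, 4)))  # ~59 MiB
print(round(msaa_framebuffer_mb(1600, 1200, 8)))  # ~117 MiB
```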
     
  20. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    That doesn't even make sense.
     