No 6800 Ultra Extreme, still the same 6800 Ultra

Discussion in 'Architecture and Products' started by ultragpu, May 10, 2004.

Thread Status:
Not open for further replies.
  1. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    Check out that logic. An NV30 with a 128-bit memory bus and 500MHz RAM would be faster, because of, I assume, more memory bandwidth, than an R300 with 325MHz RAM and a 256-bit memory bus.

    Geenyus. Do me a favor and ask Intel how well that logic works.
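    For reference, peak memory bandwidth is just bus width times effective data rate. A quick sketch, assuming the clocks quoted above are DDR command clocks (so the effective data rate is doubled):

    ```python
    def peak_bandwidth_gbps(bus_width_bits: int, mem_clock_mhz: float, ddr: bool = True) -> float:
        """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
        effective_mhz = mem_clock_mhz * (2 if ddr else 1)  # DDR transfers on both clock edges
        return (bus_width_bits / 8) * effective_mhz * 1e6 / 1e9

    nv30 = peak_bandwidth_gbps(128, 500)  # 128-bit bus, 500MHz DDR -> 16.0 GB/s
    r300 = peak_bandwidth_gbps(256, 325)  # 256-bit bus, 325MHz DDR -> 20.8 GB/s
    print(f"NV30: {nv30:.1f} GB/s, R300: {r300:.1f} GB/s")
    ```

    Even with the much higher memory clock, the narrower bus leaves the NV30 with less raw bandwidth, which is the point being made.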
     
  2. Rugor

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    221
    Likes Received:
    0
    However, I believe what he was saying was that, for any given core, increased clock speeds are tied to increased performance, rather than making a comment on how low-K can improve performance.
     
  3. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    Like I said before, he probably anticipated and expected that there can be performance increases on a core that's already bottlenecked by memory bandwidth.

    Just because there's a memory bandwidth bottleneck doesn't necessarily mean increasing the core won't improve performance. Heck, even my 8500 AIW DV, with its piddling 190MHz memory, receives frame rate improvements from increasing the core.

    Sure, it's not reaching its effective fill rate due to memory bottlenecks. But I haven't seen a memory configuration that can feed a core's complete effective fill rate either.

    Even the FX series, which doesn't have a high single-texturing fill rate but has lots of bandwidth, doesn't reach its effective fill rate. But increasing clock rates still yields marginal improvements to fill rates, even when bottlenecked.

    Sure, it's not ideal. But like I've speculated, it's there. The X800 and 6800 series are bottlenecked by their memory, but increasing the core still increases performance.
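    The fill-rate-versus-bandwidth point can be illustrated with rough numbers. This is a hypothetical sketch: the 4-pipe/400MHz part, the 16 GB/s figure, and the 12 bytes of memory traffic per pixel are illustrative assumptions, not measured values for any specific card:

    ```python
    def fill_rate_mpix(core_mhz: float, pixels_per_clock: int) -> float:
        """Theoretical pixel fill rate in Mpixels/s: core clock x pixels per clock."""
        return core_mhz * pixels_per_clock

    def bandwidth_limited_mpix(bandwidth_gbps: float, bytes_per_pixel: float) -> float:
        """Pixels/s the memory bus can actually feed, in Mpixels/s."""
        return bandwidth_gbps * 1e9 / bytes_per_pixel / 1e6

    # Hypothetical 4-pipe part at 400MHz with 16 GB/s of bandwidth,
    # assuming ~12 bytes of traffic per pixel (colour write + Z read/write).
    theoretical = fill_rate_mpix(400, 4)          # 1600 Mpix/s
    achievable = bandwidth_limited_mpix(16.0, 12) # ~1333 Mpix/s
    ```

    Under these assumptions the bus can only feed ~1333 Mpix/s against a 1600 Mpix/s theoretical peak, so the core never reaches its effective fill rate; yet a core overclock still helps whenever the per-pixel traffic is lighter than the worst case, which matches the behaviour described above.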
     
  4. radar1200gs

    Regular

    Joined:
    Nov 30, 2002
    Messages:
    900
    Likes Received:
    0
    I'm not even sure why you insist on bringing the memory bus topic up.

    It was never the real problem with NV30 - FP performance was, and that can be remedied on-die by increasing core speed.
     
  5. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    So, let's assume that you're talking about shader performance alone, which is silly because it'd still get creamed at higher resolutions because of insane memory bandwidth bottlenecks. But whatever. Shader performance only. Clock rates weren't going to remedy that problem. Have we all forgotten the FX's pipeline setup? 4x2?

    http://www.beyond3d.com/forum/viewtopic.php?t=8005&highlight=alu

    Compare that. The problems were a lot more significant than something a simple clock speed boost can remedy.
     
  6. radar1200gs

    Regular

    Joined:
    Nov 30, 2002
    Messages:
    900
    Likes Received:
    0
    Can you provide finalised specs for SM4.0? IIRC SM4.0 is a work in progress which makes it rather difficult for anyone to target at the moment, whereas SM3.0 has existed pretty much in its current form (with maybe a tiny tweak here and there) since DX9 was first released.
     
  7. FUDie

    Regular

    Joined:
    Sep 25, 2002
    Messages:
    581
    Likes Received:
    34
    Isn't that a performance boost?
    But the NV30 didn't run at twice the speed of the competition, and the memory didn't exist that would even allow such a thing to happen. Even now, a year after NV30's release, we still don't have large quantities of 600MHz RAM.

    You are living in a (green colored) fantasy world.

    -FUDie
     
  8. ChrisRay

    ChrisRay <span style="color: rgb(124, 197, 0)">R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    Umm, increasing clock rates does increase performance in shader apps too. I know you know this.

    Yes, the FX series has a 4x2 pipeline config. But faster clocks do mean faster performance in shaders...

    Going from 400/700 to 465/700 on my FX card yielded significant improvements in shader titles, 2.0 and 1.1 alike.

    I'm not being an apologist here. Making the big assumption that the NV30 could have, under some crazy circumstance, done 700MHz, everything would have benefited, including shaders.

    Now assume the FX 5900 series, which removed the integer units, had been running at 600MHz; there would have been significant improvements as well.

    Unless you're trying to tell us that increased clock rates don't improve its shader performance either.

    To be clear: I don't think 700MHz was ever possible, or could have been possible. But even on a 128-bit bus there would have been improvements. It just would have lost a lot of efficiency.
     
  9. radar1200gs

    Regular

    Joined:
    Nov 30, 2002
    Messages:
    900
    Likes Received:
    0
    The problems don't stop it from running Ruby, and they are (comparatively) minor details compared to the larger problem: lack of raw horsepower.

    Yes, eventually bandwidth would play a part, but there is nothing stopping you from also applying high clock rates to NV35, which does have a 256-bit bus.
     
  10. Crusher

    Crusher Aptitudinal Constituent
    Regular

    Joined:
    Mar 16, 2002
    Messages:
    869
    Likes Received:
    19
    If you guys want to complain about NVIDIA holding back pixel shader adoption, don't waste your breath on the GF4 Ti cards, because that's a baseless argument. The GF4 Ti cards had PS1.3, two versions higher than the GF3. They were released 3 months after DX8.1, so it's pretty absurd to say "well they should have added ATI's near-proprietary PS1.4 by then". You can't just throw in support for a new version 3 months before release, let alone a version that only existed because it happened to be what your competitors made. They also weren't going to delay a chip that was nearly complete just because ATI had slightly higher feature support.

    The real cards that have been holding the adoption of pixel shaders back are the GF4 MX series. For evidence, go look at the forums for Deus Ex: Invisible War, Thief: Deadly Shadows, or Splinter Cell Pandora Tomorrow. These require mere PS1.1 support, and there are still people complaining because their GF4 MXs don't support it. These consumer complaints are what discourage developers from implementing new features.

    While the Radeon 9000 and 9200 have a similar "lack of latest PS version" problem as the GF4 MX line, they aren't going to suffer the same disdain because, as Dave pointed out, there's not going to be a PS2.0-based console system. Without a console system like that, there won't be a flood of games requiring PS2.0, probably ever. Most games are going to be made for both the PC and Xbox, and as such will probably only require PS1.1 on the PC. Games will increasingly support PS2.0 for better features and faster execution, but they will still run on GF3/4 Ti and Radeon 8500/9000/9200s. By this same logic, the NV30's utterly horrible performance in PS2.0 won't be a complete disaster for the industry as a whole, either.

    We'll have to wait and see how well NVIDIA and ATI support SM3 in their value products in the next cycle to know what will really happen with SM3.0 adoption. If the trend follows from the R300/NV30 generation though, the ATI budget cards may only run PS2.0, and that would be as bad as the GF4 MX has been for the PC market. Meanwhile NVIDIA would release budget cards that support SM3.0, but perform like dogs. And of course, the people who buy sub-$100 graphics cards to play games can only hope that the games run, not that they run well.
     
  11. radar1200gs

    Regular

    Joined:
    Nov 30, 2002
    Messages:
    900
    Likes Received:
    0
    700MHz was technically possible, and nVidia achieved it in the labs, but the fab processes and yields available to nVidia at the time made it practically impossible with NV30. By NV35, nVidia had given up on further development of NV3x and was concentrating on NV40 instead.
     
  12. radar1200gs

    Regular

    Joined:
    Nov 30, 2002
    Messages:
    900
    Likes Received:
    0
    Yes, the GF4 MX is the real culprit on nVidia's side and something it should be ashamed of - hardware accelerators are meant to do exactly that, not emulate things on the CPU.

    Mind you, ATi was just as guilty at the time with a version of their Radeon (a 7500VE I think, don't know the exact model) that also omitted T&L functionality; it's just that, since it was a much more obscure card than the GF4 MX, it never received the same scrutiny.
     
  13. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    I could achieve 700MHz too with nitrogen cooling. 700MHz on air is, frankly, absurd for a chip with that many transistors, even with low-K.

    Supporting something very slowly is a lot easier than supporting less but having it run very quickly. I mean, hell, I can run SM3.0 apps right now on my Athlon XP.

    This whole topic is totally insane. Save me, Wavey Davey!

    And Jesus. Why are we talking about "The NV30 was actually really good but __________ (insert name of scapegoat here) made it really crappy!" or "The NV30 could have had awesome shader performance but (insert scapegoat here) ruined it by messing up (insert scapegoat reason here)!" THE NV30 WAS A CRAPPY ARCHITECTURE. (and to a lesser extent all of the NV3x cards) It was FUNDAMENTALLY FLAWED. Why are we still arguing this? Even with low-K and a 500MHz core clock (hey, waitaminute, DIDN'T IT COME WITH A 500MHZ CORE CLOCK ANYWAY?!), it couldn't compete with R300, in IQ or in performance. Turn up the res and it tanked. Use shaders, it tanked. Use AA, it tanked from lack of memory bandwidth. IT SUCKED! WHY are we justifying its shortcomings?

    Oh, and NV had given up on NV3x? Makes you wonder about that NV36, which was kinda like a NV31 but mixed with an NV35, and the NV38, except that just was an NV35. But NV36? That was new.
     
  14. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    And you don't think NV, ATI, and Microsoft (along with several game developers) are working on this specification right now? Funny, that.
     
  15. Eronarn

    Newcomer

    Joined:
    May 1, 2004
    Messages:
    247
    Likes Received:
    0
    The NV30 was REALLY good, but ATI made it really crappy by releasing the R300! They RUINED the NV30!
     
  16. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    SM2.0 may have been a waypoint on the road to SM3.0, but no longer. Intel supporting SM2.0 with its upcoming IGP indicates to me that SM2.0 is the destination. I don't see Intel bothering with FP32 and dynamic branching for an IGP--not until Longhorn, anyway.

    And, yes, ATi's X300 and X600 being SM2.0 further solidifies that as the baseline. I really doubt we're going to see solid "SM3.0" performance out of nV's cheaper offerings. How will nV compete with a SM3.0 part against a smaller, lower-transistor (110nm, SM2.0 < SM3.0) X300?

    As to whether I'd prefer SM3.0 or SM2.0 parts at the bottom end, obviously I'd want the former, but I don't see it happening. nV stuck with an NV1x architecture for quite some time, and the 5200 was still only a DX8 part WRT performance. The low end by necessity must sacrifice features for speed for the sake of the bottom line. I don't see nV bucking this trend by offering a decently speedy SM3.0 + HDR + FP blending "6200" at the low end, but they're welcome to surprise me. :)

    Is this tongue-in-cheek? B/c I thought PS1.3 was basically a relabelled PS1.1, a marketing answer that was nowhere near the (mostly unrealized in the 8500's lifetime) leap that PS1.4 was.
     
  17. FUDie

    Regular

    Joined:
    Sep 25, 2002
    Messages:
    581
    Likes Received:
    34
    "two versions higher"? More like ".2 versions higher". The jump from PS 1.1 to 1.3 was very small.

    -FUDie
     
  18. mozmo

    Newcomer

    Joined:
    Jul 19, 2002
    Messages:
    129
    Likes Received:
    0
    I agree with the comments on Xbox DX8 games being the dominant factor for games in the future. A large number of games will target Xbox DX8-style effects; if PS2.0/3.0 is used, it will be for performance and extra quality. Only PC-exclusive titles will use the latest features, but those types of games are slowly fading away as the more profitable multiplatform strategy takes over.
     
  19. radar1200gs

    Regular

    Joined:
    Nov 30, 2002
    Messages:
    900
    Likes Received:
    0
    Dave was trying to imply that SM3.0 is just another waypoint on the DX9 roadmap. I'm saying it is (and was) the end of the currently defined roadmap (of course the roadmap might get extended, but I'd expect that then you would have a DX9.1 and SM4.0 would be like PS1.4).
     
  20. jvd

    jvd
    Banned

    Joined:
    Feb 13, 2002
    Messages:
    12,724
    Likes Received:
    9
    Location:
    new jersey
    I don't agree.

    SM4.0 is obviously at the final stages. MS is going to want something much more advanced for the Xbox 2 than an already two-year-old spec.

    If anything, this DX9.0c is going to be like DX8.1: a small step that only one hardware producer is going to support.
     