No 6800 Ultra Extreme, still the same 6800 Ultra.

Check out that logic. An NV30 with a 128-bit memory bus and 500MHz RAM would be faster, because of, I assume, more memory bandwidth, than an R300 with 325MHz RAM and a 256-bit memory bus.

Geenyus. Do me a favor and ask Intel how well that logic works.
 
radar1200gs said:
Sage said:
radar1200gs said:
Who's doing Low-K for the performance boost? You do it to (...) enable faster clock speeds.

uhhhh.. correct me if I'm wrong, but aren't those two generally tied together?
There are other aspects of Low-K that can help boost performance; I suggest you read some of the links I and others have posted on the forum concerning Low-K.

However, I believe what he was saying was that for any given core, increased clock speeds are tied to increased performance, rather than making a comment on how low-K itself improves performance.
 
The Baron said:
Check out that logic. An NV30 with a 128-bit memory bus and 500MHz RAM would be faster, because of, I assume, more memory bandwidth, than an R300 with 325MHz RAM and a 256-bit memory bus.

Geenyus. Do me a favor and ask Intel how well that logic works.

Like I said before, he probably anticipated that there can still be performance increases on a core that's already bottlenecked by memory bandwidth.

Just because there's a memory bandwidth bottleneck doesn't necessarily mean increasing the core won't improve performance. Heck, even my 8500 AIW DV with its piddly 190MHz memory receives frame rate improvements from raising the core.

Sure, it's not reaching its effective fill rate due to memory bottlenecks. But I haven't seen a memory configuration that can feed a core's complete effective fill rate either.

Even the FX series, which doesn't have a high single-texturing fill rate but has lots of bandwidth, doesn't reach its effective fill rate. But increasing clock rates still yields marginal improvements to fill rates, even when bottlenecked.

Sure, it's not ideal. But like I've speculated, it's there. The X800 and 6800 series are bottlenecked by their memory, but increasing the core clock still increases their performance.
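To put rough numbers on the fill rate point above, here is a back-of-the-envelope sketch. The figures are purely illustrative R300-class assumptions (8 pipes, 325MHz core, 256-bit bus, 310MHz DDR memory), not numbers taken from anyone's post:

```cpp
// Back-of-the-envelope: can the memory bus feed the theoretical fill rate?
// All figures below are illustrative assumptions, not measured data.
#include <cstdio>

int main() {
    const double pipes    = 8;     // assumed pixel pipelines
    const double core_mhz = 325;   // assumed core clock
    const double bus_bits = 256;   // assumed memory bus width
    const double mem_mhz  = 310;   // assumed memory clock (DDR, hence the x2 below)

    double fill_gpix = pipes * core_mhz * 1e6 / 1e9;              // theoretical Gpixels/s
    double bw_gbs    = (bus_bits / 8) * mem_mhz * 1e6 * 2 / 1e9;  // GB/s
    double bytes_per_pixel = bw_gbs / fill_gpix;                  // bandwidth available per pixel

    std::printf("fill rate %.2f Gpix/s, bandwidth %.1f GB/s, %.1f bytes/pixel\n",
                fill_gpix, bw_gbs, bytes_per_pixel);
    // A 32-bit colour write plus a Z read and write already wants roughly 12
    // bytes per pixel before any texturing, so even a 256-bit bus falls short
    // of feeding the full theoretical fill rate - which is the point above.
    return 0;
}
```

With those assumed numbers you get about 2.6 Gpixels/s against roughly 19.8 GB/s, or under 8 bytes of bandwidth per theoretical pixel, so the core is never fully fed - but a core overclock can still help whenever a frame isn't purely bandwidth limited.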
 
The Baron said:
Check out that logic. An NV30 with a 128-bit memory bus and 500MHz RAM would be faster, because of, I assume, more memory bandwidth, than an R300 with 325MHz RAM and a 256-bit memory bus.

Geenyus. Do me a favor and ask Intel how well that logic works.
I'm not even sure why you insist on bringing the memory bus topic up.

It was never the real problem with NV30 - FP performance was, and that can be remedied on die by increasing the core speed.
 
So, let's assume that you're talking about shader performance alone, which is silly because it'd still get creamed at higher resolutions because of insane memory bandwidth bottlenecks. But whatever. Shader performance only. Clock rates weren't going to remedy that problem. Have we all forgotten the FX's pipeline setup? 4x2?

http://www.beyond3d.com/forum/viewtopic.php?t=8005&highlight=alu

Compare that. The problems were a lot more significant than something a simple clock speed boost can remedy.
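To see why a clock bump alone couldn't have closed that gap, here's a very rough peak-ALU comparison. It generously assumes one full-rate FP op per pipe per clock for both chips, which is an illustrative simplification rather than either architecture's real issue rate:

```cpp
// Rough peak shader-ALU comparison under a deliberately generous assumption:
// one full-rate FP op per pipe per clock for both parts (illustrative only).
#include <cstdio>

int main() {
    double nv30_500 = 4 * 500e6;  // 4 pipes x 500 MHz
    double nv30_700 = 4 * 700e6;  // the hypothetical 700 MHz part
    double r300_325 = 8 * 325e6;  // 8 pipes x 325 MHz

    std::printf("NV30 @500MHz: %.1f, NV30 @700MHz: %.1f, R300 @325MHz: %.1f Gops/s\n",
                nv30_500 / 1e9, nv30_700 / 1e9, r300_325 / 1e9);
    // Even the fantasy clock only nudges past R300 on paper, and NV30's real
    // PS2.0 throughput fell well short of its peak (register pressure, FP32
    // cost), so the shortfall wasn't something clock speed could fix.
    return 0;
}
```

That's 2.0 versus 2.6 Gops/s at the shipping clocks, and only 2.8 at the mythical 700MHz, before any of the architectural penalties discussed in the linked thread.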
 
DaveBaumann said:
radar1200gs said:
It would have been interesting to see NV30 as originally intended, on a Low-K process and clocked around 700MHz.

Unlikely that it could possibly have scaled to 700MHz. The announced speed of 500MHz was its targeted clock (although they wouldn't originally have gone for the coolers they did).

SM2.0 is just a waypoint on the DX9 map leading to SM3.0.

Wrong. SM3.0 is a waypoint to 4.0 - it's a teaser. Whatever has the widespread support consensus is what's most important, and I doubt you'll find Intel supporting SM3.0.

Can you provide finalised specs for SM4.0? IIRC SM4.0 is a work in progress which makes it rather difficult for anyone to target at the moment, whereas SM3.0 has existed pretty much in its current form (with maybe a tiny tweak here and there) since DX9 was first released.
 
radar1200gs said:
The Baron said:
It would have been interesting to see NV30 as originally intended, on a Low-K process and clocked around 700MHz.
Low-k doesn't give that big of a performance boost, first of all, and have we all forgotten about the God-awful 128-bit memory bus on NV30?
Who's doing Low-K for the performance boost? You do it to lower signal interference at high frequency and enable faster clock speeds (edit: through a lowering of the total heat budget in NV30's case).
Isn't that a performance boost?
The 128-bit bus really doesn't matter if you can run at twice the speed of your competitor (you effectively have his 256-bit bus anyway).
But the NV30 didn't run at twice the speed of the competition, and the memory that would even allow such a thing to happen didn't exist. Even now, a year after NV30's release, we still don't have large quantities of 600MHz RAM.

You are living in a (green colored) fantasy world.

-FUDie
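As a quick sanity check on the "run twice the speed and you effectively have his 256-bit bus" argument, here's the arithmetic with illustrative NV30/R300-class clocks (assumed figures, not quoted from this thread):

```cpp
// Bandwidth comparison: bus width x memory clock x 2 (DDR) / 8 bits per byte.
// Clock figures are illustrative assumptions for NV30- and R300-class boards.
#include <cstdio>

double gbs(double bus_bits, double mem_mhz) {
    return (bus_bits / 8) * mem_mhz * 2e6 / 1e9;  // two transfers per clock (DDR)
}

int main() {
    std::printf("128-bit @ 500 MHz DDR: %.1f GB/s\n", gbs(128, 500));  // ~16.0
    std::printf("256-bit @ 310 MHz DDR: %.1f GB/s\n", gbs(256, 310));  // ~19.8
    // Merely matching the 256-bit board on a 128-bit bus would need ~620 MHz
    // (1240 MHz effective) memory, which wasn't available in any volume.
    return 0;
}
```

So even 500MHz memory on the 128-bit bus comes up short of the slower-clocked 256-bit configuration, which is FUDie's point.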
 
The Baron said:
So, let's assume that you're talking about shader performance alone, which is silly because it'd still get creamed at higher resolutions because of insane memory bandwidth bottlenecks. But whatever. Shader performance only. Clock rates weren't going to remedy that problem. Have we all forgotten the FX's pipeline setup? 4x2?

http://www.beyond3d.com/forum/viewtopic.php?t=8005&highlight=alu

Compare that. The problems were a lot more significant than something a simple clock speed boost can remedy.

Umm, increasing clock rates does increase performance in shader apps too... I know you know this.

Yes, the FX series has a 4x2 pipeline config. But faster clocks do mean faster performance in shaders...

Going from 400/700 to 465/700 on my FX card yielded significant improvements in shader titles, 2.0 and 1.1 alike.

I'm not being an apologist here. Making the big assumption that the NV30 could have, under some crazy circumstance, done 700MHz, everything would have benefited, including shaders.

Now assume the FX 5900 series, which removed the integer units, had been running at 600MHz; there would have been significant improvements as well.

Unless you're trying to tell us that increased clock rates don't improve its shader performance either.

To be clear, I don't think 700MHz was ever possible. But even on a 128-bit bus there would have been improvements; it just would have lost a lot of efficiency.
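For what it's worth, 400 to 465 is about a 16% core clock increase, and that percentage is the ceiling on what a purely core-limited shader scene can gain from it. A trivial sketch of that upper bound, with a hypothetical frame rate:

```cpp
// Upper bound on the gain from a core overclock in a shader-limited scene.
// Assumes zero memory bottleneck, so real-world gains will be smaller.
#include <cstdio>

int main() {
    const double base_clock = 400.0, oc_clock = 465.0;  // MHz, as in the example above
    const double base_fps   = 30.0;                     // hypothetical shader-limited frame rate

    double scale = oc_clock / base_clock;                // ~1.16
    std::printf("clock scale %.3f -> best-case %.1f fps\n", scale, base_fps * scale);
    return 0;
}
```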
 
The Baron said:
So, let's assume that you're talking about shader performance alone, which is silly because it'd still get creamed at higher resolutions because of insane memory bandwidth bottlenecks. But whatever. Shader performance only. Clock rates weren't going to remedy that problem. Have we all forgotten the FX's pipeline setup? 4x2?

http://www.beyond3d.com/forum/viewtopic.php?t=8005&highlight=alu

Compare that. The problems were a lot more significant than something a simple clock speed boost can remedy.

The problems don't stop it from running Ruby, and they are (comparatively) minor details compared to the larger problem - lack of raw horsepower.

Yes, eventually bandwidth would play a part, but there is nothing stopping you from also applying high clock rates to NV35, which does have a 256-bit bus.
 
If you guys want to complain about NVIDIA holding back pixel shader adoption, don't waste your breath on the GF4 Ti cards, because that's a baseless argument. The GF4 Ti cards had PS1.3, two versions higher than the GF3. They were released 3 months after DX8.1, so it's pretty absurd to say "well they should have added ATI's near-proprietary PS1.4 by then". You can't just throw in support for a new version 3 months before release, let alone a version that only existed because it happened to be what your competitors made. They also weren't going to delay a chip that was nearly complete just because ATI had slightly higher feature support.

The real cards that have been holding the adoption of pixel shaders back are the GF4 MX series. For evidence, go look at the forums for Deus Ex: Invisible War, Thief: Deadly Shadows, or Splinter Cell Pandora Tomorrow. These require only meager PS1.1 support, and there are still people complaining because their GF4 MXs don't support it. These consumer complaints are what discourage developers from implementing new features.
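For context, this is roughly the kind of startup capability check a DX9 game performs. It's a hypothetical sketch, not code from any of the titles mentioned; a GF4 MX reports no pixel shader support at all, so a PS1.1 requirement like this fails on it:

```cpp
// Hypothetical PS1.1 capability check in Direct3D 9 (build with d3d9.lib).
#include <windows.h>
#include <d3d9.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")

bool SupportsPS11(IDirect3D9* d3d) {
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;
    // A GF4 MX reports PixelShaderVersion 0.0, so this comparison fails.
    return caps.PixelShaderVersion >= D3DPS_VERSION(1, 1);
}

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;
    std::printf(SupportsPS11(d3d) ? "PS1.1 supported\n"
                                  : "No PS1.1 - refuse to run or fall back\n");
    d3d->Release();
    return 0;
}
```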

While the Radeon 9000 and 9200 have a similar "lack of latest PS version" problem as the GF4 MX line, they aren't going to suffer the same disdain because, as Dave pointed out, there's not going to be a PS2.0-based console system. Without a console system like that, there won't be a flood of games requiring PS2.0, probably ever. Most games are going to be made for both the PC and Xbox, and as such will probably only require PS1.1 on the PC. Games will increasingly support PS2.0 for better features and faster execution, but they will still run on GF3/4Ti and Radeon 8500/9000/9200s. By this same logic, the NV30's utterly horrible performance in PS2.0 won't be a complete disaster for the industry as a whole, either.

We'll have to wait and see how well NVIDIA and ATI support SM3 in their value products in the next cycle to know what will really happen with SM3.0 adoption. If the trend follows from the R300/NV30 generation though, the ATI budget cards may only run PS2.0, and that would be as bad as the GF4 MX has been for the PC market. Meanwhile NVIDIA would release budget cards that support SM3.0, but perform like dogs. And of course, the people who buy sub-$100 graphics cards to play games can only hope that the games run, not that they run well.
 
ChrisRay said:
The Baron said:
So, let's assume that you're talking about shader performance alone, which is silly because it'd still get creamed at higher resolutions because of insane memory bandwidth bottlenecks. But whatever. Shader performance only. Clock rates weren't going to remedy that problem. Have we all forgotten the FX's pipeline setup? 4x2?

http://www.beyond3d.com/forum/viewtopic.php?t=8005&highlight=alu

Compare that. The problems were a lot more significant than something a simple clock speed boost can remedy.

Umm, increasing clock rates does increase performance in shader apps too... I know you know this.

Yes, the FX series has a 4x2 pipeline config. But faster clocks do mean faster performance in shaders...

Going from 400/700 to 465/700 on my FX card yielded significant improvements in shader titles, 2.0 and 1.1 alike.

I'm not being an apologist here. Making the big assumption that the NV30 could have, under some crazy circumstance, done 700MHz, everything would have benefited, including shaders.

Now assume the FX 5900 series, which removed the integer units, had been running at 600MHz; there would have been significant improvements as well.

Unless you're trying to tell us that increased clock rates don't improve its shader performance either.

To be clear, I don't think 700MHz was ever possible. But even on a 128-bit bus there would have been improvements; it just would have lost a lot of efficiency.

700MHz was technically possible and nVidia achieved it in the labs, but the fab processes and yields available at the time made it practically impossible with NV30, and by NV35 nVidia had given up on further development of NV3x and was concentrating on NV40 instead.
 
Crusher said:
If you guys want to complain about NVIDIA holding back pixel shader adoption, don't waste your breath on the GF4 Ti cards, because that's a baseless argument. The GF4 Ti cards had PS1.3, two versions higher than the GF3. They were released 3 months after DX8.1, so it's pretty absurd to say "well they should have added ATI's near-proprietary PS1.4 by then". You can't just throw in support for a new version 3 months before release, let alone a version that only existed because it happened to be what your competitors made. They also weren't going to delay a chip that was nearly complete just because ATI had slightly higher feature support.

The real cards that have been holding the adoption of pixel shaders back are the GF4 MX series. For evidence, go look at the forums for Deus Ex: Invisible War, Thief: Deadly Shadows, or Splinter Cell Pandora Tomorrow. These require only meager PS1.1 support, and there are still people complaining because their GF4 MXs don't support it. These consumer complaints are what discourage developers from implementing new features.

While the Radeon 9000 and 9200 have a similar "lack of latest PS version" problem as the GF4 MX line, they aren't going to suffer the same disdain because, as Dave pointed out, there's not going to be a PS2.0-based console system. Without a console system like that, there won't be a flood of games requiring PS2.0, probably ever. Most games are going to be made for both the PC and Xbox, and as such will probably only require PS1.1 on the PC. Games will increasingly support PS2.0 for better features and faster execution, but they will still run on GF3/4Ti and Radeon 8500/9000/9200s. By this same logic, the NV30's utterly horrible performance in PS2.0 won't be a complete disaster for the industry as a whole, either.

We'll have to wait and see how well NVIDIA and ATI support SM3 in their value products in the next cycle to know what will really happen with SM3.0 adoption. If the trend follows from the R300/NV30 generation though, the ATI budget cards may only run PS2.0, and that would be as bad as the GF4 MX has been for the PC market. Meanwhile NVIDIA would release budget cards that support SM3.0, but perform like dogs. And of course, the people who buy sub-$100 graphics cards to play games can only hope that the games run, not that they run well.

Yes, the GF4MX is the real culprit on nVidia's side and something it should be ashamed of - hardware accelerators are meant to do exactly that, accelerate, not emulate things on the CPU.

Mind you, ATi was just as guilty at the time with a version of their Radeon (a 7500VE I think, I don't know the exact model) that also omitted T&L functionality; it's just that since it was a much less well-known card than the GF4MX it never received the same scrutiny.
 
radar1200gs said:
700MHz was technically possible and nVidia achieved it in the labs, but the fab processes and yields available at the time made it practically impossible with NV30, and by NV35 nVidia had given up on further development of NV3x and was concentrating on NV40 instead.
I could achieve 700MHz too with nitrogen cooling. 700MHz on air is, frankly, absurd for a chip with that many transistors, even with low-K.

Supporting something very slowly is a lot easier than supporting less but having it run very quickly. I mean, hell, I can run SM3.0 apps right now on my Athlon XP.

This whole topic is totally insane. Save me, Wavey Davey!

And Jesus. Why are we talking about "The NV30 was actually really good but __________ (insert name of scapegoat here) made it really crappy!" or "The NV30 could have had awesome shader performance but (insert scapegoat here) ruined it by messing up (insert scapegoat reason here)!" THE NV30 WAS A CRAPPY ARCHITECTURE. (and to a lesser extent all of the NV3x cards) It was FUNDAMENTALLY FLAWED. Why are we still arguing this? Even with low-k and a 500MHz core clock (hey, waitaminute, DIDN'T IT COME WITH A 500MHZ CORE CLOCK ANYWAY?!), it couldn't compete with R300, in IQ or in performance. Turn up the res and it tanked. Use shaders, it tanked. Use AA, it tanked from lack of memory bandwidth. IT SUCKED! WHY are we justifying its shortcomings?

Oh, and NV had given up on NV3x? Makes you wonder about that NV36, which was kinda like an NV31 mixed with an NV35, and the NV38, except that was just an NV35. But NV36? That was new.
 
radar1200gs said:
Can you provide finalised specs for SM4.0? IIRC SM4.0 is a work in progress which makes it rather difficult for anyone to target at the moment, whereas SM3.0 has existed pretty much in its current form (with maybe a tiny tweak here and there) since DX9 was first released.
And you don't think NV, ATI, and Microsoft (along with several game developers) are working on this specification right now? Funny, that.
 
The Baron said:
The NV30 was actually really good but __________ (insert name of scapegoat here) made it really crappy!

The NV30 was REALLY good, but ATI made it really crappy by releasing the R300! They RUINED the NV30!
 
SM2.0 is just a waypoint on the DX9 map leading to SM3.0. In fact, I believe it was initially quite unimportant until nVidia walked out of the DX9 discussions and was subsequently unable to get fully to SM3.0 with NV30.
SM2.0 may have been a waypoint on the road to SM3.0, but no longer. Intel supporting SM2.0 with its upcoming IGP indicates to me that SM2.0 is the destination. I don't see Intel bothering with FP32 and dynamic branching for an IGP--not until Longhorn, anyway.

And, yes, ATi's X300 and X600 being SM2.0 further solidifies that as the baseline. I really doubt we're going to see solid "SM3.0" performance out of nV's cheaper offerings. How will nV compete with an SM3.0 part against a smaller, lower-transistor (110nm, SM2.0 < SM3.0) X300?

As to whether I'd prefer SM3.0 or SM2.0 parts at the bottom end, obviously I'd want the former, but I don't see it happening. nV stuck with an NV1x architecture for quite some time, and the 5200 was still only a DX8 part WRT performance. The low end by necessity must sacrifice features for speed for the sake of the bottom line. I don't see nV bucking this trend by offering a decently speedy SM3.0 + HDR + FP blending "6200" at the low end, but they're welcome to surprise me. :)

If you guys want to complain about NVIDIA holding back pixel shader adoption, don't waste your breath on the GF4 Ti cards, because that's a baseless argument. The GF4 Ti cards had PS1.3, two versions higher than the GF3.
Is this tongue-in-cheek? B/c I thought PS1.3 was basically a relabelled PS1.1, a marketing answer that was nowhere near the (mostly unrealized in the 8500's lifetime) leap that PS1.4 was.
 
Crusher said:
If you guys want to complain about NVIDIA holding back pixel shader adoption, don't waste your breath on the GF4 Ti cards, because that's a baseless argument. The GF4 Ti cards had PS1.3, two versions higher than the GF3. They were released 3 months after DX8.1, so it's pretty absurd to say "well they should have added ATI's near-proprietary PS1.4 by then". You can't just throw in support for a new version 3 months before release, let alone a version that only existed because it happened to be what your competitors made. They also weren't going to delay a chip that was nearly complete just because ATI had slightly higher feature support.
"two versions higher"? More like ".2 versions higher". The jump from PS 1.1 to 1.3 was very small.

-FUDie
 
I agree with the comments on Xbox DX8 games being the dominant factor for games in the future. A large number of games will target Xbox DX8-style effects; if PS2.0/3.0 is used, it will be for performance and extra quality. Only PC-only titles will use the latest features, but those types of games are slowly fading away as the more money-making multiplatform strategy takes over.
 
The Baron said:
radar1200gs said:
Can you provide finalised specs for SM4.0? IIRC SM4.0 is a work in progress which makes it rather difficult for anyone to target at the moment, whereas SM3.0 has existed pretty much in its current form (with maybe a tiny tweak here and there) since DX9 was first released.
And you don't think NV, ATI, and Microsoft (along with several game developers) are working on this specification right now? Funny, that.
Dave was trying to imply that SM3.0 is just another waypoint on the DX9 roadmap. I'm saying it is (and was) the end of the currently defined roadmap (of course the roadmap might get extended, but I'd expect you would then have a DX9.1, and SM4.0 would be like PS1.4).
 
radar1200gs said:
The Baron said:
radar1200gs said:
Can you provide finalised specs for SM4.0? IIRC SM4.0 is a work in progress which makes it rather difficult for anyone to target at the moment, whereas SM3.0 has existed pretty much in its current form (with maybe a tiny tweak here and there) since DX9 was first released.
And you don't think NV, ATI, and Microsoft (along with several game developers) are working on this specification right now? Funny, that.
Dave was trying to imply that SM3.0 is just another waypoint on the DX9 roadmap. I'm saying it is (and was) the end of the currently defined roadmap (of course the roadmap might get extended, but I'd expect you would then have a DX9.1, and SM4.0 would be like PS1.4).

I don't agree.

SM4.0 is obviously in the final stages. MS is going to want something much more advanced for the Xbox 2 than an already two-year-old spec.

If anything, this DX9.0c is going to be like DX8.1: a small step that only one hardware producer is going to support.
 