Anyone else wondering if 6800 Ultra is really the 6800?

surfhurleydude said:
Johnny Rotten said:
I wish people would stop looking at clock speeds as some form of performance metric. Why does 400MHz seem low? Because the 5950, a different architecture, was clocked higher? You shouldn't care, just look at the benchmarks. Hasn't the current CPU climate taught you anything? :)

It seems low because R420 clock speeds are rumored to be AT LEAST 500 MHz, with the same 16-pipe architecture.

Both architectures feature 16 pipes, yes, but that still shouldn't lead you to equate clock speeds between the two. Athlons and P4s have a lot of structural similarities too...
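
To put some toy numbers on why raw MHz is meaningless across architectures, here's a sketch using a simple pipes × clock × efficiency model. The figures are purely illustrative, and the efficiency term is exactly the unknown nobody has measured yet:

```python
# Toy theoretical fillrate: pipes * clock. Purely illustrative numbers;
# real throughput is pipes * clock * per-clock efficiency, and per-clock
# efficiency is exactly what differs between architectures.
def fillrate_mpix(pipes, clock_mhz, efficiency=1.0):
    return pipes * clock_mhz * efficiency

nv40_paper = fillrate_mpix(16, 400)   # 6400 Mpixels/s on paper
r420_paper = fillrate_mpix(16, 500)   # 8000 Mpixels/s on paper

# If (hypothetically) one chip did 15% more useful work per clock,
# most of the paper gap would evaporate:
nv40_adjusted = fillrate_mpix(16, 400, 1.15)  # 7360 Mpixels/s
print(nv40_paper, r420_paper, nv40_adjusted)
```

Same reason a 2GHz Athlon could hang with a 3GHz P4: the clock column of the spec sheet tells you almost nothing on its own.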
 
surfhurleydude said:
Johnny Rotten said:
I wish people would stop looking at clock speeds as some form of performance metric. Why does 400MHz seem low? Because the 5950, a different architecture, was clocked higher? You shouldn't care, just look at the benchmarks. Hasn't the current CPU climate taught you anything? :)

It seems low because R420 clock speeds are rumored to be AT LEAST 500 MHz, with the same 16-pipe architecture.

Yeah, I think this is the kicker for me. Don't get me wrong, nVidia have done a great job with NV40. It's just that I envisage R420 being pretty much the same, but with a higher clock speed. So in theory R420 should be faster.

I guess one of the major differences will be SM3.0 support, hence nVidia promoting it heavily. I don't know how much this will influence consumers' purchase decisions, though.
 
If anything, I would put the decreased clocks down purely to yields. That isn't to say Nvidia don't have a 6850 Ultra in the plans.
 
I could see a couple of explanations for the lower core speed:

1) They have some insight as to what ATI is going to be releasing for the R420, and realized they didn't need a 475MHz core clock to beat it. Lowering the clock speed means higher yields, cheaper heatsinks, and a lower power load, so overall it's a good decision so long as they still outclass ATI.

2) The heat/power issue makes 475MHz unreasonable for some reason. It could be artifacting problems, it could be that a more expensive heatsink would have been needed, or it could simply be that fewer power supplies were capable of dealing with the load and Nvidia wanted to minimize the problem as much as possible.
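
To give a feel for point 2, here's a rough sketch using the usual dynamic-power rule of thumb, P ≈ V² × f. The voltages below are made up for illustration; nobody outside Nvidia knows the real figures:

```python
# Rough dynamic-power scaling, P ~ V^2 * f (capacitance held constant).
# All voltages/frequencies here are invented illustrative numbers,
# not NV40 specs.
def relative_power(v, f, v_ref, f_ref):
    return (v / v_ref) ** 2 * (f / f_ref)

# Dropping the clock from 475 to 400 MHz alone cuts ~16%:
print(relative_power(1.0, 400, 1.0, 475))   # ~0.84

# If the lower clock also permits a small voltage drop (say 1.4 -> 1.3 V),
# the saving compounds to roughly a quarter off the power budget:
print(relative_power(1.3, 400, 1.4, 475))   # ~0.73
```

A quarter less power is the difference between demanding an exotic PSU and merely recommending a good one.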

For the memory speed, I don't think 550MHz is terribly unreasonable given memory availability. It looks like Micron might have a leg up on Samsung, and given that Micron seems to be supplying only ATI, it looks like Nvidia really didn't have much choice.

Nite_Hawk
 
digitalwanderer said:
What if that isn't the case? What if sometimes what would be a 3/4-quad card at 500MHz turns out to be a working 4/4-quad card at 400MHz?

Is that even possible? I would have thought that if one quad was stuffed, it would be broken at any clock speed.
 
DW, yes, you are fairly correct on yield.

There are two basic types of yield problems: 1) physical flaws, and 2) how fast the chip can be reliably clocked without malfunctioning (a problem that gets worse as the transistor count climbs).

When I was talking about Low-K and the probable yield situation, it was scenario 2 I mostly had in mind, though 1 still plays a large part (Low-K materials are fragile compared to the rest of the semiconductor).
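
As a toy illustration of how those two failure modes interact with binning (all probabilities invented; this is just to show why DW's 4/4-quads-at-400MHz scenario is plausible):

```python
import random
from collections import Counter

# Toy Monte Carlo of the two yield problems above. The probabilities
# are invented for illustration -- nobody outside TSMC/Nvidia knows
# the real numbers.
QUADS = 4
P_DEAD_QUAD = 0.10   # type 1: a physical flaw kills one quad outright
P_SLOW_DIE  = 0.30   # type 2: fully functional, but won't clock high

def bin_die():
    good = sum(random.random() > P_DEAD_QUAD for _ in range(QUADS))
    if good == 4:
        # DW's scenario: a fully working die that only validates at the
        # lower clock still ships as a 16-pipe part.
        return "16 pipes @ 400" if random.random() < P_SLOW_DIE else "16 pipes @ 475"
    if good == 3:
        return "12 pipes"   # fuse off the dead quad, sell as non-Ultra
    return "scrap"

print(Counter(bin_die() for _ in range(100_000)))
```

The point being: a die with a dead quad is a 12-pipe part at any clock, but a die that's merely slow is a perfectly good 16-pipe part at a lower clock, which is a second, independent reason to ship at 400MHz.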
 
6800XT! The one clocked at 475/650. ;)

(I made that up. It's not a real rumor. Get over it. Still, it wouldn't surprise me to see a speed-bumped A2 version of the 6800 when the X800 XT comes out.)
 
Couple of things to think about.

1) The power consumption was not as high as I expected; perhaps someone should try running it with only the primary Molex connector and see if there is a significant power draw and performance difference.

BTW

You just created a part that, clocked at 400MHz with 550MHz DDR (which you are sometimes overclocking, sometimes underclocking), doubles the performance of your last high-end part. Why spin it up any more if you don't have to? You increased IQ relative to your last product and are on par with your competitors, you're twice as fast as your last high-end part, and it costs you a bundle to manufacture. A lot of headroom enables you to do the following:

1) Enter the market conservatively and, during pre-release, evaluate what your competitor is doing (most likely targeting off of your release data) and make improvements before you ship your parts, i.e. the old sucker punch.

2) Leave plenty of headroom for refresh parts, to offset the very high development costs, maintain your higher street prices longer, and show improvement in the PCI Express parts that won't necessarily perform any better this generation than their AGP counterparts.

NVIDIA, based on the performance of this part, has nothing to lose and everything to gain by holding back a little. Whether they are doing it because of manufacturing issues or for some other reason is another interesting question that we will probably never know the answer to.
 
The Baron said:
6800XT! The one clocked at 475/650. ;)

(I made that up. It's not a real rumor. Get over it. Still, it wouldn't surprise me to see a speed-bumped A2 version of the 6800 when the X800 XT comes out.)

Based on rumors, the 6800 Ultra (as we know it) and the X800 XT should be out at about the same time.
 
I think the memory bandwidth is holding the card back. Until higher-clocked memory comes out, the increased cost of clocking the GPU higher doesn't outweigh the increased performance (since the bottleneck is the memory). Just what I think.
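
Some back-of-envelope numbers on that, assuming a crude 8 bytes written per pixel and ignoring compression and texture traffic entirely (so take it as a rough sketch, not a measurement):

```python
# Back-of-envelope check on the bandwidth-bottleneck argument.
# Assumes every pixel written costs 8 bytes (32-bit colour + 32-bit Z),
# ignoring colour/Z compression and texture reads -- crude numbers only.
def pixel_demand_gbps(pipes, core_mhz, bytes_per_pixel=8):
    return pipes * core_mhz * 1e6 * bytes_per_pixel / 1e9

def memory_supply_gbps(effective_mhz, bus_bits=256):
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

demand_400 = pixel_demand_gbps(16, 400)   # ~51.2 GB/s wanted
demand_475 = pixel_demand_gbps(16, 475)   # ~60.8 GB/s wanted
supply     = memory_supply_gbps(1100)     # 550 MHz DDR -> ~35.2 GB/s

# Raw pixel demand already outruns the memory at 400 MHz, so a 475 MHz
# core would mostly just wait on the same 35 GB/s.
print(demand_400, demand_475, supply)
```

Even with compression doing a lot of work in practice, the ratio suggests the core isn't the scarce resource at these clocks.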
 
Well, the next few weeks' scenario seems pretty obvious to me:
1) ATI, which uses both low-k and fewer transistors, realizes they can increase their core clock and still have margins as good as NVIDIA's (or maybe they already did).
2) ATI launches cards slightly outperforming the NV40.
3) NVIDIA realizes they're competing against a more powerful chip than they thought they would; they beef up their card, which is only possible via clocks (450/1200 at least). They then announce the change, send out new reference boards, and ship new drivers in the meantime.
4) ATI is in stores late May, not having changed their plan and respecting their one-month-from-launch-to-availability policy.
5) NVIDIA is in stores mid/late June, with higher clocks than expected and performance as good as or better than ATI's, but worse AA IQ.

But hey, that's just my own personal little speculation :)


Uttar
 
I'm pretty sure nVidia's aware that ATi's anxious to have the bar set for them, so they know exactly what needs to be done to beat the 6800 series.

nVidia would be smart to turn that strategy on its head by waiting for ATi's response and leaving enough headroom for their own counterattack.

Besides, these low (i.e. realistic) clock speeds all but guarantee the mass marketability of the cards -- we will likely see NV40s on store shelves in quantity sooner rather than later.
 
I don't believe this is the case, but seeing as we're in post-release crazy speculation mode...

The Ultra is 16 pipes, and the Non-Ultra is 12 pipes. The expectation, from everyone, was that both would be 16 pipes, with <16 pipes being for the other market segments.

With that in mind, what if the planned non-Ultra (which we assumed was 16 pipes) is now the Ultra, and the part below it (which we assumed would be 12 pipes or fewer) has been bumped up into the 6800 non-Ultra slot? It would certainly explain having 16-pipe and 12-pipe parts within the same number bracket.

As someone else mentioned, perhaps they're hoping for an A2 revision with slightly higher clocks for just after the 420XT launch? The whole "last-minute" tweaking of the card/settings/clocks for the 6800U would indicate they perhaps changed their thinking a little after finding out the XT card would turn up a month after the other ATi parts?

As I said, crazy speculation :D
 
Stryyder said:
Couple of things to think about.

1) The power consumption was not as high as I expected; perhaps someone should try running it with only the primary Molex connector and see if there is a significant power draw and performance difference.

BTW

You just created a part that, clocked at 400MHz with 550MHz DDR (which you are sometimes overclocking, sometimes underclocking), doubles the performance of your last high-end part. Why spin it up any more if you don't have to? You increased IQ relative to your last product and are on par with your competitors, you're twice as fast as your last high-end part, and it costs you a bundle to manufacture. A lot of headroom enables you to do the following:

1) Enter the market conservatively and, during pre-release, evaluate what your competitor is doing (most likely targeting off of your release data) and make improvements before you ship your parts, i.e. the old sucker punch.

2) Leave plenty of headroom for refresh parts, to offset the very high development costs, maintain your higher street prices longer, and show improvement in the PCI Express parts that won't necessarily perform any better this generation than their AGP counterparts.

NVIDIA, based on the performance of this part, has nothing to lose and everything to gain by holding back a little. Whether they are doing it because of manufacturing issues or for some other reason is another interesting question that we will probably never know the answer to.

I'm not overly sure about the so-called headroom. I suspect that current revisions may enable a 50-75MHz increase, but ultimately the NV40 will top out below the fabled 500 mark. That privilege will fall to NV45. What I do think Nvidia have created is a very good baseline for their DX9 line-up. With many features ticked off already, I'm sure the speed increases that come from revisions will enable them to follow the GTS route of marketing. I think we have a new GeForce256 on our hands, and the new GTS2 of its time will come via NV45, spawning massive success.

As for Nvidia reacting to ATI, I honestly believe (and I have no proof at all) that it's likely to be ATI on the back foot this time around. I can't help thinking that the scrapping of one concept chip (R400) and the rework of another, combined with XBOX2, has left ATI playing catch-up. The R420 simply won't be a long-term contender, and although fast, I do think its features are going to be limited. From my limited view of things, it appears R420 has simply delayed the transition to PS3.0, and that may catch ATI out. A couple of weeks will no doubt prove me wrong :)

Just like a badly written soap, however, I do think Nvidia have shown one area to ATI which may bail them out, and that's IQ. If ATI can introduce a new FSAA method or fantastic AF, then they may win people over. If they don't, then I see many an OEM and gamer jumping this round due to the feature-set difference.
 
Seiko said:
Just like a badly written soap, however....

Don't be so hard on yourself...I think what you've written is a pretty good soap opera. ;)

I do think Nvidia have shown one area to ATI which may bail them out, and that's IQ. If ATI can introduce a new FSAA method or fantastic AF, then they may win people over. If they don't, then I see many an OEM and gamer jumping this round due to the feature-set difference.

I see it a bit differently.

I don't see OEMs jumping on the nVidia parts based on any difference in feature set. PS 3.0, AFAIC, isn't something the OEMs will feel is a great marketing feature. Both parts will claim "DX9" compliance/support.

Developers may jump on nVidia parts, and to a lesser degree, gamers, based on the features...but as usual, it's going to be performance that largely dictates mind-share and likely drives demand.

I think the OEMs will view R420 and NV40 as largely "feature comparable", so they'll be more apt to go with one or the other based on other factors: performance (which might also be a wash), but more importantly cost, power consumption, the ability of ATI/nVidia to fill orders, etc.
 
onetwo said:

leaving enough headroom for their own counterattack.

The [H] review indicated they overclocked the core to 430MHz. Granted, this is a pre-production card, but reviewers typically get "golden" cards that OC particularly well.

I'm not sure how much headroom you think there is left in this core given the complexity, thermal dissipation properties and voltage requirements.
 
I haven't seen many previews that address how overclockable the card is. It would put some of these theories to the test.

IMO, I'll echo others and guess that they are playing the conservative route (which is good news for overclocking enthusiasts).

Having said that, this is a card that is somewhat akin to the 9800 at the time of its launch. It's basically straining current games to their fillrate limits, as you can see from the CPU scaling. There are only a few games at high resolutions and AA settings that seem to really grind it.

That's potentially good news for ATI. If they can provide a card that is almost as fast as Nvidia's, but offers better IQ (like I suspect they will), one could argue that the speed differential will only be seen in a select few games.

One thing that has actually surprised me is the low-res, CPU-limited benchmarks. The driver looks incredibly solid for a new-generation pre-release card, almost suspiciously so. I'm not sure how much performance benefit we will see over the course of its lifetime given how solid it already seems to be (indicating a lot of work by Nvidia). I suspect the largest gains will be in shader optimisations, as usual.
 
Stryyder said:
Couple of things to think about.

1) The power consumption was not as high as I expected; perhaps someone should try running it with only the primary Molex connector and see if there is a significant power draw and performance difference.
It has been done. If you connect only the bottom connector, the card doesn't work; if you do the same with the top connector, it works, with some artifacts.

As things currently stand, if you plug in only the bottom connector, the PC doesn't boot, but if you plug in only the top connector, it works, though a few bugs appear in 3D.
http://www.hardware.fr/articles/491/page11.html

Edit: As for OC: 440MHz for the core and 580MHz for the memory, it seems.
On the overclocking side, the A1 revision of the NV40 that we had on our test card wouldn't go above 440 MHz without bugs, which isn't staggering. As for the memory, although specified at 600 MHz, it wouldn't exceed 580 MHz, and performance was then actually lower than at 550 MHz.
http://www.hardware.fr/articles/491/page11.html
 
onetwo said:
nVidia would be smart to turn that strategy on its head by waiting for ATi's response and leaving enough headroom for their own counterattack.

Of course, ATI could do its own "counter-counter attack" by having initial reviews that beat the NV40, but keeping some in reserve... assuming nVidia doesn't have a counter-counter-counter attack...

...where does it end? ;)

Besides, these low (i.e. realistic) clock speeds all but guarantee the mass marketability of the cards -- we will likely see NV40s on store shelves in quantity sooner rather than later.

I don't see why.

The highest nVidia got the NV3X was what, 475MHz? Why does it seem conservative to ship a part that is much larger, yet on the same process, at "only" a slightly lower clock rate?
 