Next NV High-end

It depends on which route they choose. They could simply add another quad, or they could improve the memory controller and increase clocks. The latter seems like the better route, considering the G70 is largely bandwidth starved as it is.
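As a rough back-of-the-envelope illustration of what "bandwidth starved" means here (using the commonly quoted reference clocks, so treat the exact numbers loosely):

```python
# Back-of-the-envelope bandwidth-per-fillrate comparison, using the commonly
# quoted reference specs (assumed, not measured): 7800 GTX = 24 pipes @ 430 MHz
# with 256-bit GDDR3 at 1200 MHz effective; X1800 XT = 16 pipes @ 625 MHz with
# 256-bit GDDR3 at 1500 MHz effective.
def bytes_per_texel(pipes, core_mhz, mem_effective_mhz, bus_bits=256):
    fillrate = pipes * core_mhz * 1e6                    # texels per second
    bandwidth = mem_effective_mhz * 1e6 * bus_bits / 8   # bytes per second
    return bandwidth / fillrate

print("7800 GTX:", round(bytes_per_texel(24, 430, 1200), 2), "bytes/texel")  # ~3.72
print("X1800 XT:", round(bytes_per_texel(16, 625, 1500), 2), "bytes/texel")  # ~4.8
# The GTX has noticeably less bandwidth per unit of fillrate, which is the
# sense in which it looks bandwidth starved.
```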
 
Just looking at a couple of numbers now: at high-res AA I see about a 20-35% performance improvement for an X1800 XL relative to an X800 XT (same clocks).
 
trinibwoy said:
Another question that not many have asked - why is there such a large bandwidth disparity between ATi's first and second string parts in the first place?

Well, I have. ;) Comparing the two on a ratio basis (core clock to core clock, memory clock to memory clock), it seems clear that the XL should have a memory clock of 1200 rather than 1000.
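A quick sanity check of that ratio, assuming the usual reference clocks (625 MHz core / 1500 MHz effective memory on the XT, 500 MHz core on the XL):

```python
# Scale the XT's memory clock by the XL's core-clock deficit (clocks assumed:
# XT 625 MHz core / 1500 MHz effective memory, XL 500 MHz core).
xt_core, xt_mem_effective = 625, 1500
xl_core = 500

xl_mem_same_ratio = xt_mem_effective * xl_core / xt_core
print(xl_mem_same_ratio)   # 1200.0 -> an XL keeping the XT's ratio would sit
                           # at ~1200 effective, not the 1000 it shipped with
```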

You are implying that this was done to artificially create a marketing performance gap for the XT? It's a puzzlement to me, unless it's power-related for OEM specs, since I'd think that ownage on the GT would have been the priority there. It would have been for me, had I been making those decisions.
 
trinibwoy said:
The only other possibility I see, given current numbers, is that ATi was able to tweak the XT more than the XL, which I really doubt since they're essentially the same chip.
Since not even the XT has been tweaked by ATI (it looks like ATI tweaked R5xx for 3DMk05 and will come back to games later), I'd say it's very doubtful the XL has been tweaked at all.

Jawed
 
Jawed said:
Since not even the XT has been tweaked by ATI (it looks like ATI tweaked R5xx for 3DMk05 and will come back to games later), I'd say it's very doubtful the XL has been tweaked at all.

Jawed

Well yeah, I wasn't referring to the absolute level of tweakage. I was referring to a possible disparity in the level of tweakage between "cores" - not that I think there is.
 
geo said:
You are implying that this was done to artificially create a marketing performance gap for the XT? It's a puzzlement to me, unless it's power-related for OEM specs, since I'd think that ownage on the GT would have been the priority there. It would have been for me, had I been making those decisions.

I'm not really implying anything. Like you said, beating the GT should have been a priority IMO. Why bust your balls with the XT only to "disappoint" with the XL? If they can sell a 512MB/1500MHz part for $550, they sure as hell could have put 256MB/1200MHz on the XL and still sold it for $450, no?
 
trinibwoy said:
Well yeah, I wasn't referring to the absolute level of tweakage. I was referring to a possible disparity in the level of tweakage between "cores" - not that I think there is.

Me neither, AFAIK it's 100% the same chip? :???:
 
ANova said:
It depends on which route they choose. They could simply add another quad, or they could improve the memory controller and increase clocks. The latter seems like the better route, considering the G70 is largely bandwidth starved as it is.


Which is what I implied, though I do expect the G7X to contain some further pipeline modifications as well. The popular theory on the G7X (is the topic still right? :) ) seems to be essentially the same core with 32 pipes and largely increased memory speeds, but not core clocks, at least not much over the 500MHz edge that current third-party GTXs are scraping.

Also, I am curious about one thing that perhaps someone can clarify. On the current GTX I'm hearing that many can't break the 499MHz mark due to some jump in all the clocks: if you push the core to 500MHz it jumps to 520 or thereabouts. Something like that. Any insight as to what that is and how it may affect a G7X part?
 
trinibwoy said:
Have you taken a look at Ratchet's X1800 preview on Rage3D? I think it's pretty conclusive evidence that R520's advantage in the games tested shows up primarily in bandwidth-bound situations (high res + AA).

The one standout is Chaos Theory with all SM3.0 features turned on - the XT really struts its stuff there, even without AA.

So the way I see it, the XT has a lot of potential, but any "wins" so far in last-generation titles are down to the bandwidth advantage IMO. Hopefully we'll see it pull away more in shader-limited titles (FEAR, Oblivion, etc.) and put the matter to rest.

Where exactly does this conclusion come from when you're testing two cards with different framebuffer sizes? I won't pin it on the framebuffer alone, of course, but I'd say it's a combination of both more RAM and more bandwidth.

http://www.anandtech.com/video/showdoc.aspx?i=2556&p=2

BF2 is a known "ill" case when it comes to memory leaks. All other games tested - except the OGL games - don't show such a large difference in ultra high resolutions.

I'm still taking this with some caution, because it looks to me like there could still be a lot done in future Catalyst drivers, but so far I don't see the performance difference I'd expect at resolutions past 1600.

***edit: just for the record, I ran a couple of tests tonight with vidmemorytester, and the CoD2 demo consumes almost 410MB of overall texture memory at 1920*1440 with 4xAA/8xAF.
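For a sense of scale on the framebuffer side, here's a rough estimate of just the colour and Z buffers at that resolution (ignoring compression and any extra surfaces the driver allocates, so take it loosely):

```python
# Rough footprint of just the colour + Z buffers at 1920*1440 with 4xAA,
# assuming 32-bit colour, 32-bit Z/stencil and no compression. Textures,
# the front buffer chain and driver-side surfaces all come on top of this.
w, h, samples = 1920, 1440, 4
bpp_colour, bpp_z = 4, 4

msaa_buffers = w * h * samples * (bpp_colour + bpp_z)   # multisampled colour + Z
resolve      = w * h * bpp_colour                       # resolved back buffer
print(round((msaa_buffers + resolve) / 1024 ** 2), "MB")   # ~95 MB
```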
 
ANova said:
It depends on which route they choose. They could simply add another quad, or they could improve the memory controller and increase clocks. The latter seems like the better route, considering the G70 is largely bandwidth starved as it is.

Nothing is conclusive unless verified.

BF2
2048*1536
4xAA/16xAF
7800GTX @ 430/600MHz
Guru3d demo

430/685MHz vs. 430/600MHz = +5%
490/685MHz vs. 430/600MHz = +15%
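Restating those numbers as scaling ratios, to make the point explicit:

```python
# The clock bumps above expressed as ratios: the memory overclock alone is a
# ~14% bandwidth increase but only buys +5%, while a similar ~14% core bump on
# top takes the total to +15%. If the card were purely bandwidth limited, the
# memory bump alone should have done most of the work.
mem_gain  = 685 / 600 - 1    # ~0.14
core_gain = 490 / 430 - 1    # ~0.14
print(f"memory +{mem_gain:.0%}, core +{core_gain:.0%}")   # memory +14%, core +14%
```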
 
Dave seems to think the G70 is still being held back by a lack of registers, so that could be an improvement in the G7X.
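For anyone wondering what "held back by a lack of registers" means in practice, here's a toy sketch with made-up numbers (the real register file sizes aren't public):

```python
# Toy illustration of register pressure; every number here is made up, the
# point is only the mechanism. The more temporary registers a shader needs per
# pixel, the fewer pixels the hardware can keep in flight, and the less
# texture-fetch latency it can hide.
register_file_slots = 4096   # hypothetical register budget for one quad/batch

for regs_per_pixel in (2, 4, 8):
    pixels_in_flight = register_file_slots // regs_per_pixel
    print(f"{regs_per_pixel} regs/pixel -> {pixels_in_flight} pixels in flight")
# Doubling register usage halves the threads available to cover memory
# latency, which shows up as stalls in longer shaders.
```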
 
Ailuros said:
Where exactly does this conclusion come from when you're testing two cards with different framebuffer sizes? I won't pin it on the framebuffer alone, of course, but I'd say it's a combination of both more RAM and more bandwidth.

http://www.anandtech.com/video/showdoc.aspx?i=2556&p=2

BF2 is a known "ill" case when it comes to memory leaks. All other games tested - except the OGL games - don't show such a large difference in ultra high resolutions.

I'm still taking this with some caution, because it looks to me like there could still be a lot done in future Catalyst drivers, but so far I don't see the performance difference I'd expect at resolutions past 1600.

***edit: just for the record, I ran a couple of tests tonight with vidmemorytester, and the CoD2 demo consumes almost 410MB of overall texture memory at 1920*1440 with 4xAA/8xAF.
Holy shit:D
 
Well, this is why I brought up the concern in Neeyik's excellent FM interview piece that they are aimed at 256MB cards. It seems to me that boat has already sailed.
 
geo said:
Well, this is why I brought up the concern in Neeyik's excellent FM interview piece that they are aimed at 256MB cards. It seems to me that boat has already sailed.
Hell.. they should aim for 1GB cards:cool:
 
SugarCoat said:
Also, I am curious about one thing that perhaps someone can clarify. On the current GTX I'm hearing that many can't break the 499MHz mark due to some jump in all the clocks: if you push the core to 500MHz it jumps to 520 or thereabouts. Something like that. Any insight as to what that is and how it may affect a G7X part?
With 0.1V more on the core, 500MHz is easily reached. On stock volts you're right.
So 0.1V more and a dual-slot copper cooling solution should be enough to reach 500+ speeds. Power consumption will be much higher, though. I don't know if Nvidia wants to give up their single-slot, low power consumption advantage over ATI.
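On the power point, a rough f*V^2 estimate (the ~1.4V stock core voltage is an assumption on my part):

```python
# Rough dynamic-power scaling, P ~ f * V^2 (leakage ignored). The 1.4V stock
# core voltage here is an assumption on my part.
f0, v0 = 430, 1.4   # stock clock (MHz) and assumed stock voltage (V)
f1, v1 = 500, 1.5   # overclocked core with the extra 0.1V

scale = (f1 / f0) * (v1 / v0) ** 2
print(f"+{(scale - 1):.0%} dynamic power")   # roughly +33%
```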
 
I am a little confused...

Don't we simply need better or full-time texture compression, like VQ or something?

One that handles textures and one that handles normal maps, etc.

Am I off on this?
 
Hellbinder said:
I am a little confused...

Don't we simply need better or full-time texture compression, like VQ or something?

One that handles textures and one that handles normal maps, etc.

Am I off on this?
Not at all. The need for better resource compression is always there. The hard part is finding a new method, lobbying for its introduction and support in the big 3D APIs, getting all the IHVs to agree to build (maybe) silicon for it, and then waiting a generation or two for it to actually show up.
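For reference, VQ texture compression (as shipped in PowerVR hardware like the Dreamcast) boils down to a small codebook of texel blocks plus a per-block index; a toy decode sketch, with made-up sizes, to show why sampling it is cheap:

```python
# Toy VQ texture decode: the compressed texture is a small codebook of texel
# blocks plus one index per block position, so decoding is just a table
# lookup. Sizes here are illustrative, not any real format's layout.
import random

BLOCK = 2                # 2x2 texel blocks
CODEBOOK_ENTRIES = 256   # one byte of index per block

# Fake codebook: each entry is a 2x2 block of RGB texels.
codebook = [[[(random.randrange(256),) * 3 for _ in range(BLOCK)]
             for _ in range(BLOCK)]
            for _ in range(CODEBOOK_ENTRIES)]

def decode(indices, blocks_w, blocks_h):
    """Expand an index map back into a texel grid via codebook lookups."""
    tex = [[None] * (blocks_w * BLOCK) for _ in range(blocks_h * BLOCK)]
    for by in range(blocks_h):
        for bx in range(blocks_w):
            block = codebook[indices[by * blocks_w + bx]]
            for y in range(BLOCK):
                for x in range(BLOCK):
                    tex[by * BLOCK + y][bx * BLOCK + x] = block[y][x]
    return tex

# A 64x64 texture is 32x32 blocks: 1024 one-byte indices plus the codebook,
# against 64*64*3 bytes uncompressed.
indices = [random.randrange(CODEBOOK_ENTRIES) for _ in range(32 * 32)]
texels = decode(indices, 32, 32)
compressed_bytes = 32 * 32 + CODEBOOK_ENTRIES * BLOCK * BLOCK * 3
print(compressed_bytes, "bytes vs", 64 * 64 * 3, "bytes uncompressed")
```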
 
I doubt there will be any new high-end G7x. If there was one coming, I think we would have some more substantiated rumours by now.

Perhaps we'll see a G70 Ultra announced very soon, but beyond that I have a feeling NVidia will go for G80 this spring. And why not? It will be early with Vista/DX10 in mind, but the lack of DX9 didn't stop ATI with R300.
 
EasyRaider said:
I doubt there will be any new high-end G7x. If there was one coming, I think we would have some more substantiated rumours by now.

Perhaps we'll see a G70 Ultra announced very soon, but beyond that I have a feeling NVidia will go for G80 this spring. And why not? It will be early with Vista/DX10 in mind, but the lack of DX9 didn't stop ATI with R300.

And what goes up against R600 in December 2006?
 
trinibwoy said:
And what goes up against R600 in December 2006?

G80 refresh, of course.

That being said, I don't think nVidia would target G80 until the fall '06 cycle. (Likewise, I don't see ATI targeting R600 until the same time.) I just don't see ATI and nVidia doing much of anything but jockeying for SM 3.0 product-mix superiority until Vista hits. They both have respectable SM 3.0 hardware, so they are going to want to get as much out of pre-DX10 (or whatever it's called this month) hardware as they can... and that means not bringing out DX10 hardware until DX10 is on the verge of release.

I'm still betting that MS will not actually ship Vista in '06.
 