trinibwoy said: Another question that not many have asked - why is there such a large bandwidth disparity between ATi's first and second string parts in the first place?
trinibwoy said: The only other possibility I see given current numbers is that ATi was able to tweak the XT more than the XL, which I really doubt since they're essentially the same chip.
Jawed said: Since not even the XT has been tweaked by ATI (it looks like ATI tweaked R5xx for 3DMk05 and will come back to games later), I'd say it's very doubtful the XL has been tweaked at all.
Jawed
geo said: You are implying that this was done to artificially create a marketing performance gap for the XT? It is a puzzlement to me, unless it is power-related for OEM specs, since I'd think that ownage on the GT would have been the priority there. It would have been for me in making those decisions.
trinibwoy said: Well yeah, I wasn't referring to the absolute level of tweakage. I was referring to a possible disparity in the level of tweakage between "cores" - not that I think there is.
ANova said: It depends which route they choose to go. They could simply add another quad or they could improve the memory controller and increase clocks. The latter would seem like the better route considering the G70 is largely bandwidth starved as it is.
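For what it's worth, here's a rough back-of-envelope on the "bandwidth starved" point. The specs are the commonly quoted GTX launch numbers and the per-pixel byte count is my own crude assumption, so treat it as an illustration rather than a measurement:

Code:
# Back-of-envelope on the "bandwidth starved" claim, using the commonly quoted
# 7800 GTX launch specs (16 ROPs, 430MHz core, 600MHz GDDR3 on a 256-bit bus).
rops, core_mhz = 16, 430
bandwidth_gbs = 256 / 8 * 1.2              # bus bytes per clock * 1.2GHz effective = 38.4 GB/s

peak_fill = rops * core_mhz * 1e6          # 6.88 Gpixel/s
bytes_per_pixel = 4 + 4 + 4                # colour write + Z read + Z write; no AA,
                                           # no compression -- a deliberately crude assumption
needed_gbs = peak_fill * bytes_per_pixel / 1e9

print(f"available {bandwidth_gbs:.1f} GB/s vs {needed_gbs:.1f} GB/s to feed the ROPs flat out")
# -> available 38.4 GB/s vs 82.6 GB/s to feed the ROPs flat out

Even allowing for colour/Z compression and the fact that real shaders never run at peak fill, the raw ROP rate can outrun the memory bus once AA multiplies the traffic.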
trinibwoy said: Have you taken a look at Ratchet's X1800 preview on Rage3D? I think that is pretty conclusive evidence that R520's advantage in the games tested is apparent primarily in bandwidth-bound situations (high res + AA).
The one standout is Chaos Theory with all SM3.0 features turned on - the XT really struts its stuff there, even without AA.
So the way I see it, the XT has a lot of potential, but any "wins" so far in last-generation titles are down to the bandwidth advantage IMO. Hopefully we'll see it pull away more in more shader-limited titles - FEAR, Oblivion, etc. - and put the matter to rest.
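Rough numbers behind that advantage (and behind the XT/XL disparity question earlier), assuming the commonly quoted launch memory clocks and 256-bit buses on all three cards:

Code:
# Raw memory bandwidth from the commonly quoted launch memory clocks,
# all three cards on 256-bit buses (32 bytes per clock, DDR so 2x data rate).
cards_mhz = {"X1800 XT": 750, "X1800 XL": 500, "7800 GTX": 600}
for name, mem_mhz in cards_mhz.items():
    gbs = 32 * mem_mhz * 2 / 1000          # bytes/clock * effective MHz -> GB/s
    print(f"{name}: {gbs:.1f} GB/s")
# X1800 XT: 48.0 GB/s, X1800 XL: 32.0 GB/s, 7800 GTX: 38.4 GB/s
# i.e. roughly +25% for the XT over the GTX, and a full +50% over the XL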
Holy shit
Ailuros said: Where does this conclusion come from exactly when you test two cards with different sizes of framebuffers? I won't pin it on the framebuffer alone, of course, but I'd say it's a combination of both more RAM and more bandwidth.
http://www.anandtech.com/video/showdoc.aspx?i=2556&p=2
BF2 is a known "ill" case when it comes to memory leaks. All other games tested - except the OGL games - don't show such a large difference in ultra high resolutions.
I'm still taking this with some caution, because it looks to me like there could still be a lot done in future Catalyst drivers, but so far I don't see the performance difference I'd expect at resolutions past 1600.
***edit: just for the record, I ran a couple of tests tonight with vidmemorytester and the CoD2 demo consumes almost 410MB of overall texture memory at 1920*1440 with 4xAA/8xAF.
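To put that 410MB figure next to the framebuffer itself, here is a rough estimate of what the buffers alone cost at that setting - ignoring colour/Z compression and driver overhead, so my own numbers rather than anything measured:

Code:
# Rough framebuffer footprint at 1920x1440 with 4xAA (RGBA8 colour, 32-bit Z/stencil),
# ignoring colour/Z compression and any driver overhead.
w, h, samples = 1920, 1440, 4
msaa_target = w * h * samples * (4 + 4)    # multisampled colour + Z
display     = w * h * 4 * 2                # resolved back buffer + front buffer
print(f"buffers alone: ~{(msaa_target + display) / 2**20:.0f} MB")
# -> ~105 MB, leaving roughly 150MB of a 256MB card for textures and geometry,
#    against the ~410MB of textures CoD2 apparently wants resident

Which is why, at those settings, the 512MB vs 256MB difference is hard to separate from the bandwidth advantage.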
Hell.. they should aim for 1GB cards
geo said: Well, this is why I brought up the concern in Neeyik's excellent FM interview piece that they are aimed at 256MB cards. It seems to me that boat has already sailed.
SugarCoat said: Also, I am curious about one thing, perhaps someone can clarify. On the current GTX I'm hearing many are not able to break the 499MHz mark due to some jump in all the clocks, either having to then switch to 520 or around there if you increase the core to 500MHz. Something like that. Any insight as to what that is and how it may affect a G7X part?
With 0.1V more on the core, 500MHz is easily reached. On stock volts you're right.
Hellbinder said: I am a little confused...
Don't we simply need a better or full-time texture compression like VQ or something?
One that handles textures and one that handles normal maps etc.
Am I off on this?
Not at all. The need for better resource compression is always there. Finding a new method and lobbying for its introduction and support in the big 3D APIs, where all IHVs have to agree to build (maybe) silicon for it and then wait a generation or two for its introduction, is the hard part.
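For context on what the existing fixed-rate formats already buy - standard DXT/3Dc bit rates, my own arithmetic, not something anyone here posted:

Code:
# Footprint of a single 2048x2048 map under the existing fixed-rate formats
# (standard bit rates, no mipmaps).
texels = 2048 * 2048
formats_bits = {
    "RGBA8 (uncompressed)":          32,
    "DXT1 (opaque colour)":           4,
    "DXT5 (colour + alpha)":          8,
    "3Dc (two-channel normal map)":   8,
}
for name, bits in formats_bits.items():
    print(f"{name}: {texels * bits / 8 / 2**20:.1f} MB")
# RGBA8 16.0 MB, DXT1 2.0 MB, DXT5 4.0 MB, 3Dc 4.0 MB

Anything tighter than that (VQ-style, for instance) runs into exactly the API/IHV adoption problem described above.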
EasyRaider said: I doubt there will be any new high-end G7x. If there was one coming, I think we would have some more substantiated rumours by now.
Perhaps we'll see a G70 Ultra announced very soon, but beyond that I have a feeling NVidia will go for G80 this spring. And why not? It will be early with Vista/DX10 in mind, but the lack of DX9 didn't stop ATI with R300.
trinibwoy said: And what goes up against R600 in December 2006?