When on Tuesday does the G70 NDA expire?

dizietsma said:
digitalwanderer said:
Anyone brave the waters over at nVidia's forums or nVnews lately? I'm curious to hear about the reaction of the faithful to the news so far, but I can't bring meself to actually visit those places much anymore.

Not raving and not sobbing; people with 6800s seem to be thinking of passing. A lot of people think CPU limitations are much in evidence, so the upgrade isn't worthwhile.

Is the FX-57 out tomorrow? If so, it will surely help the CPU side out. An FX-57 at 3GHz and a 7800 GTX at 500/1300 would be a nice match.

http://www.gpumania.com/edicion/noticias/gpun1219.jpg
 
Rockster said:
No, I agree. But to that same point, long shaders typically don't need a MAD every cycle either. I just think it's more complex than simply saying 2x. Certainly you could write a shader that is, but for most code it's an incremental improvement. The compiler should certainly be able to schedule instructions more easily now!

That was my first impression too, that it's an efficiency improvement more than brute-force power. Would there be a significant benefit to two full-blown ALUs per pipe over the (expected) G70 configuration?
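To make the pairing point concrete, here's a toy Python sketch of the kind of peephole pass a shader compiler might run, fusing a MUL and a dependent ADD into one MAD so a MAD-capable ALU slot doesn't go to waste. The instruction encoding is invented for the example, not anything from NV's compiler:

# Toy peephole pass: fuse "mul d,a,b" followed by a dependent
# "add e,d,c" into a single "mad e,a,b,c" instruction.
def fuse_mad(instrs):
    out, i = [], 0
    while i < len(instrs):
        op = instrs[i]
        nxt = instrs[i + 1] if i + 1 < len(instrs) else None
        if (op[0] == "mul" and nxt is not None and nxt[0] == "add"
                and list(nxt[2:]).count(op[1]) == 1):
            # the add consumes the mul's result exactly once, so the
            # remaining add source becomes the MAD's addend
            other = [s for s in nxt[2:] if s != op[1]][0]
            out.append(("mad", nxt[1], op[2], op[3], other))
            i += 2
        else:
            out.append(op)
            i += 1
    return out

# r1 = r2*r3 ; r4 = r1+r5  ->  r4 = r2*r3 + r5 in one MAD
print(fuse_mad([("mul", "r1", "r2", "r3"), ("add", "r4", "r1", "r5")]))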
 
DemoCoder said:
I'm waiting to see how an X2 + multithreaded drivers + SLI GTX will do. :)

And I'm waiting to see how much that system costs :cry:

Anyway, I'm kinda surprised... the benchies are, are.... low!? I think ATI has a big smile right now!
 
I believe we are a bit too hasty in passing judgement; this isn't one of the best and most thorough reviews ever done. Wait until the big boys air theirs, then make up your mind, methinks.
 
Hellbinder said:
Unless I am crazy, it looks like, *basically*, in shader-intensive games an X800 XL beats it or is within 10 FPS of it in most cases. The exception is Doom 3.

It's obvious to me that ATi has a superior shader core

hrmm....

In Far Cry with no AA, the FPS for the 7800 are basically the same at 1024x768 and 1600x1200. This would indicate to me that there's headroom for a faster CPU paired with a 7800 (a quick sanity check for this is sketched below). The X800, on the other hand, loses 30 FPS in the resolution jump - it has hit its limit.

In Tomb Raider with no AA, the 7800 is nearly twice the speed of an X800 at 1600x1200.

The 7800 is ~170 FPS faster in COD at 1600x1200 with no AA/AF.

The 7800 has double the FPS in Doom 3 at 1600x1200 with no AA/AF.

It's 30 FPS faster at 1600x1200 in Colin McRae with no AA/AF.

Obviously memory bandwidth is affecting the AA/AF numbers. It would be interesting to see some AA-only and AF-only tests.
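For what it's worth, here's the flat-scaling logic above as a toy Python check. The FPS figures and the 5% threshold are illustrative assumptions, not numbers from the review:

# Toy check: if FPS barely drops when resolution rises, the GPU has
# headroom and the CPU (or driver) is the limiting factor.
def bottleneck(fps_low_res, fps_high_res, tolerance=0.05):
    drop = (fps_low_res - fps_high_res) / fps_low_res
    return "CPU-limited" if drop < tolerance else "GPU-limited"

print(bottleneck(120, 118))  # 7800-style flat scaling -> CPU-limited
print(bottleneck(120, 90))   # X800-style 30 FPS drop  -> GPU-limited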
 
Kombatant said:
I believe we are a bit too hasty in passing judgement; this isn't one of the best and most thorough reviews ever done. Wait until the big boys air theirs, then make up your mind, methinks.

Another reason the poker intended for the Chinese rears is turning white-hot in the fire over at NV. :) Not to mention however much we're missing simply because we can't read it very well.

I think we've had a lot of indications this time around that brute-force fps performance in older titles wasn't going to be the order of the day, including for R520 (we just got those indications somewhat later in the day for NV). So whatever we see tomorrow, we probably need to be careful about the assumptions we make on how it all works out relatively in the end. Of course I may have an "excitement relapse" tomorrow. :LOL:
 
Titanio said:
dizietsma said:
True. Also, Sampsa seems to have his up to 525MHz as well, so that is now 4 that have gone to or past 500MHz (out of 4).

Last I checked in one of the links here, someone had got it up to 549MHz with water cooling :)

Sampsa said something about the card having some problems with LN2 cooling and nVidia investigating it (i.e. at the moment LN2 is too cold for it for some reason).
 
The leaked PDF says:
an advanced texture unit incorporates new hardware algorithms to speed filtering and blending operations.
What is it? Faster FP16 filtering?
 
It seems that G70 offers very strong PS performance with relatively weak VS performance...

http://www.pconline.com.cn/diy/evalue/evalue/graphics/0506/644143_7.html

[attached benchmark chart: 050618tom06.jpg]
 
John Reynolds said:
JoshMST said:
I wish I could be a big boy. As it is, I sit at the children's table in the kitchen.

Awww, shut up, at least you got a pair of 6800 GTs for SLI testing. :p

Haha, I guess you have a point. I really shouldn't complain much... I mean, it is my own fault that my monthly page views aren't nearly as large as they could be.
 
OTOH, it might explain why they are so interested in kicking vertex work over to the CPU, and in multithreaded drivers.
 
Well, it could be more than just vertex work. DirectX and drivers have a large overhead (see DrawPrim*) between the time you submit the work and the time it is sent to the GPU for processing. Instead of being blocked on the GPU waiting for it to finish, you could be doing some preprocessing in another thread, pipelining the dataflow in the driver/OS more. You don't even need a dual-core CPU to benefit from this.
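As a toy illustration of that pipelining, here's a Python sketch with a producer/consumer split; the names and sleep() calls standing in for real work are made up, not anything from NV's actual driver:

import queue
import threading
import time

work = queue.Queue(maxsize=4)   # bounded handoff, like a command buffer
DONE = object()                 # sentinel to stop the submit thread

def submit_thread():            # "driver" side: validates and submits
    while True:
        batch = work.get()
        if batch is DONE:
            break
        time.sleep(0.01)        # stand-in for validation + GPU submit
        print("submitted frame", batch)

t = threading.Thread(target=submit_thread)
t.start()
for f in range(5):              # "app" side: keeps building frames
    time.sleep(0.01)            # stand-in for building frame f's batch
    work.put(f)                 # hand off, immediately start the next
work.put(DONE)
t.join()

Because the handoff queue is bounded, the app side naturally throttles if submission falls behind, and the two stages overlap whenever one of them blocks - which is the point: you pipeline the dataflow instead of serializing prep and submit.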
 
Vertex work was the one called out specifically in the TechReport article, though I will admit the special prominence could easily have been the author's bias rather than the NV reps', based on the wording.
 
Sampsa claims he got the G70 running at 520/1380 with stock cooling. If it overclocks this well, why are NVidia and its OEM partners being so conservative, releasing it at 430-460?
 