When on Tuesday does the G70 NDA expire?

Xmas said:
_xxx_ said:
Yeah, so it's 16 pipes when we think the old way, right?
That depends...
In the "old way", fragment pipelines ("pixel shaders") and ROPs were not separated, so depending on what you consider the main part of the "old pipelines", G70 has either "24 pipes", "16 pipes" or maybe "16 pipes plus 8 half-pipes". ;)

Or you just drop the "old pipes" and say 6 quad fragment pipes and 4 quad ROPs.
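To put that bookkeeping in one place, here's a minimal sketch of how those quad counts map onto the usual "pipe" numbers (plain Python; the 8 vertex shaders are the figure quoted later in the thread, the rest comes straight from the post above):

```python
# Counting G70 units the "quad" way vs. the "old pipeline" way.
# Quad counts (6 quad fragment pipes, 4 quad ROPs) are from the post above;
# the 8 vertex shaders are the figure mentioned later in the thread.

QUAD = 4  # a quad pipeline works on a 2x2 block of pixels per clock

quad_fragment_pipes = 6
quad_rops = 4
vertex_shaders = 8

fragment_pipelines = quad_fragment_pipes * QUAD  # 24 "pixel shader pipes"
rops = quad_rops * QUAD                          # 16 pixels written per clock

print(f"fragment pipelines: {fragment_pipelines}")  # 24
print(f"ROPs:               {rops}")                # 16
print(f"vertex shaders:     {vertex_shaders}")      # 8
```

Which is exactly why the "old way" gives you "24 pipes", "16 pipes" or "16 plus 8 half-pipes" depending on which unit you decide is the pipe.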

Unknown Soldier said:
Pete .. yeah .. I actually understand that, my bad for wording it so badly. What I was trying to get at was that they've never before separated the ALUs and said it's 24 pixel and 8 vertex (if they have .. then my bad .. I shouldn't really rush through the reviews). I'm not really an engine type of guy since I don't really understand all about ALUs, ROPs, etc. I'm getting there .. but am slow to capture the info.
ALUs are the core of the shader pipelines. So having separate vertex and fragment shader pipelines implies having separate ALUs.


So the G70 is therefore not a true 24-pixel-pipeline GPU, and it cannot get the fillrate of a 24-pixel-pipeline GPU, since the G70 outputs 16 pixels per clock (because it only has 16 ROPs) even though it has 24 pixel shader pipes internally?
 
Hellbinder said:
Unless I am crazy, it looks like *basically* in shader-intensive games an X800XL beats it or is within 10 FPS of it in most cases. The exception is Doom 3.

And Half-Life 2, which is completely CPU-bound in all of those benchmarks for the 7800 GTX, unlike the X800XL :? Tomb Raider has got its fair share of shaders too, hasn't it? It's ahead by 25-30 fps there.

Seems like you've been a bit quick on the trigger here.
 
No wonder NV is rumbling about "co-processors" --they must be grinding their teeth and contemplating suing AMD & Intel for non-support. :LOL:
 
geo said:
No wonder NV is rumbling about "co-processors" --they must be grinding their teeth and contemplating suing AMD & Intel for non-support. :LOL:

The co-processor should support the GPU, not the CPU.

Looks more like developers should think about offloading work from the CPU to the GPU.
 
So a bit more speed and a bit more IQ it seems, probably not enough to upgrade from a 6 series, but an upgrade to it from any FX would be very nice.
 
Interesting that Inq, who sneers at NDAs, hasn't jumped on pushing the Chinese review. nvnews apparently spent a good bit of the morning pulling posts with the link before giving up.
 
The "Fullspeed HDR" caught my attention aswell as the the new AA modes.
Does anyone have a clue how the D4/madd number that was inreased VERY mutch indicates of for ALU/shader optimizations in the fragment quad?
There was some bit on VS optimizations also.
 
dizietsma said:
So a bit more speed and a bit more IQ it seems, probably not enough to upgrade from a 6 series, but an upgrade to it from any FX would be very nice.

More like a GeForce 4 to a GeForce 3. :p
 
Hellbinder said:
It's obvious to me that ATi has a superior shader core for 90% of today's games, even with their current tech. Not even getting into what the R520 is going to do.

It's obvious that none of these games is in any way optimized or suited to take advantage of either G70 or R520, I'd say...wait for the games which ARE optimized and then you'll see a VASTLY different picture.
 
overclocked said:
The "Fullspeed HDR" caught my attention aswell as the the new AA modes.
Does anyone have a clue how the D4/madd number that was inreased VERY mutch indicates of for ALU/shader optimizations in the fragment quad?
There was some bit on VS optimizations also.

The NV4x had 1 ALU per pipe that was split into 2 half-ALUs. The G70 is supposed to have 2 full ALUs per pipe, not two half ALUs.
 
It seems to me that these NV40/G70 ALUs are very flexible and that a very smart compiler is needed to take advantage of them.
Dunno about the current nvidia shader compiler's quality..
 
DemoCoder said:
overclocked said:
The "Fullspeed HDR" caught my attention aswell as the the new AA modes.
Does anyone have a clue how the D4/madd number that was inreased VERY mutch indicates of for ALU/shader optimizations in the fragment quad?
There was some bit on VS optimizations also.

The NV4x had 1 ALU per pipe that was split into 2 half-ALUs. The G70 is supposed to have 2 full ALUs per pipe, not two half ALUs.

Dunno about that, but even if not it's 24 against 16.
 
_xxx_ said:
DemoCoder said:
overclocked said:
The "Fullspeed HDR" caught my attention aswell as the the new AA modes.
Does anyone have a clue how the D4/madd number that was inreased VERY mutch indicates of for ALU/shader optimizations in the fragment quad?
There was some bit on VS optimizations also.

The NV4x had 1 ALU per pipe that was split into 2 half-ALUs. The G70 is supposed to have 2 full ALUs per pipe, not two half ALUs.

Dunno about that, but even if not it's 24 against 16.

They're talking about 2x the floating-point power for each pixel shader vs NV40, which suggests 2 full ALUs (?)
 
DemoCoder said:
The NV4x had 1 ALU per pipe that was split into 2 half-ALUs. The G70 is supposed to have 2 full ALUs per pipe, not two half ALUs.

So pixel shaders went up by 50%, and then another what on top of that for doubling the ALUs? I know just barely enough to sense that doubling the ALUs won't have a straight-through performance impact. Yeah, somebody find something to push this sucker at what it's good at so we can really see why they spent their transistor budget the way they did. I'm pretty sure a dart board wasn't involved...
 
DemoCoder said:
overclocked said:
The "Fullspeed HDR" caught my attention aswell as the the new AA modes.
Does anyone have a clue how the D4/madd number that was inreased VERY mutch indicates of for ALU/shader optimizations in the fragment quad?
There was some bit on VS optimizations also.

The NV4x had 1 ALU per pipe that was split into 2 half-ALUs. The G70 is supposed to have 2 full ALUs per pipe, not two half ALUs.

NV40 already has two ALUs that can split into two sub-ALUs (2-2 or 3-1).
 
Not so much doubling the number of ALUs per pipe as adding an adder to ALU 1, meaning it's now capable of a MAD/dot instruction instead of just the MUL it could do before.
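If that's the right picture, a rough back-of-the-envelope for the per-pipe shader math looks something like this (a sketch only: the MAD/MUL issue per unit follows the posts above, while the vec4 width and the 400/430 MHz reference clocks are my assumptions, and the mini-ALUs, vertex shaders and texture co-issue restrictions are ignored):

```python
# Back-of-the-envelope peak pixel-shader arithmetic, per the discussion above:
#   NV40 pipe: one vec4 MAD-capable unit + one vec4 MUL-only unit
#   G70 pipe:  the MUL-only unit gains an adder, so two vec4 MAD-capable units
# Clocks are assumed reference clocks (6800 Ultra ~400 MHz, 7800 GTX ~430 MHz).

def flops_per_clock(mads, muls, width=4):
    # a MAD counts as 2 ops (multiply + add) per component, a MUL as 1 op
    return width * (2 * mads + muls)

chips = {
    "NV40": {"pipes": 16, "clock_mhz": 400, "per_pipe": flops_per_clock(mads=1, muls=1)},
    "G70":  {"pipes": 24, "clock_mhz": 430, "per_pipe": flops_per_clock(mads=2, muls=0)},
}

for name, c in chips.items():
    gflops = c["pipes"] * c["per_pipe"] * c["clock_mhz"] / 1000.0
    print(f"{name}: {c['per_pipe']} flops/pipe/clock, ~{gflops:.0f} GFLOPS peak")
# NV40: 12 flops/pipe/clock, ~77 GFLOPS peak
# G70:  16 flops/pipe/clock, ~165 GFLOPS peak
```

Which would answer geo's question above: ~50% more pipes, roughly a third more math per pipe (12 to 16 flops/clock), plus a small clock bump, so on paper around twice the peak pixel shader math, before any of the usual caveats about actually keeping the units fed.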
 
Megadrive1988 said:
So the G70 is therefore not a true 24-pixel-pipeline GPU, and it cannot get the fillrate of a 24-pixel-pipeline GPU, since the G70 outputs 16 pixels per clock (because it only has 16 ROPs) even though it has 24 pixel shader pipes internally?

So instead of the 16x1 design of the NV40, we might call the G70 a 16x1.5 design, in the traditional sense.
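And for the fillrate side of Megadrive1988's question, in the same back-of-the-envelope spirit (a sketch assuming the 430 MHz reference clock and one texture lookup per pipe per clock, both of which are my assumptions rather than anything stated above):

```python
# Pixel fillrate is bounded by the ROPs, not by the number of shader pipes.
# 430 MHz is the assumed 7800 GTX reference clock.

clock_hz = 430e6
rops = 16                 # pixels written to the framebuffer per clock
fragment_pipelines = 24   # pixels being shaded per clock

pixel_fillrate = rops * clock_hz                  # ROP-limited colour fill
texture_fillrate = fragment_pipelines * clock_hz  # assuming 1 texture per pipe per clock

print(f"pixel fillrate:      {pixel_fillrate / 1e9:.2f} Gpixels/s")   # ~6.88
print(f"texture fillrate:    {texture_fillrate / 1e9:.2f} Gtexels/s") # ~10.32
print(f"shader-to-ROP ratio: {fragment_pipelines / rops}")            # 1.5 -> the "16x1.5"
```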
 