NVIDIA GF100 & Friends speculation

It should be "free", which is why I don't understand why Fermi doesn't have it. But then again, why has it only just now appeared in ATI?

Jawed

I guess even if there's no dedicated instruction, one can always choose an appropriate mantissa and disguise int32 multiplication as fp32.
 
True enough, though granted, if there's going to be a mistake in a store's listing, it's likely going to be in overestimating the tech specs, not in getting the price wrong.

Not really. We are talking about an unreleased product, which if it appears in an e-store somewhere, will almost always be inflated in price. The wrong specs are clear though.
 
That sounds reasonable but it doesn't answer the question as to what Guru3D is reporting. I'm hoping the convention is that the scheduler clock becomes core clock and the ROP/L2 clock becomes a side attraction. The other way around might be too confusing. The 48-ROP/32-Raster discrepancy is already disconcerting for some folks.
 
I don't get the big fuss about the 650 MHz clock frequency for the GTX 470. Wasn't that what was being speculated for quite a while now? 650-700 MHz? Why is it so "bad" now when it was expected before?
 
[I am assuming 650MHz is half hot clock. ]

It was being speculated before, but now this sorta confirms it. The difference is that now we know that GF100 has missed its clock targets.
 
Funny piece from Hilbert.

On occasion, Hilbert too can't stop himself from repeating what Charlie is saying, only in different words. Yet he sings paeans to Fermi.

I never knew Guru3D was competing to outsmart SemiAccurate. It's true that Charlie has more misses than hits, but what's the point of publishing a piece like this on a site like Guru3D, hitting at Mr C with an argument that could very well have been (poorly) drafted by Nvidia's PR team?

You kept your mouth shut for so long and you are not going to share any concrete info, so why not keep mum for a few more weeks? Unless you were nudged by Nvidia.

Back to reality. We found out (and verified), surprisingly enough, that the GPU is already at revision A3, that's the third revision of the GPU, the GF100 already has had three builds. So yes, something was wrong, very wrong alongside the initial yield issues. But we know that the products right now are in volume production, will it be many weeks before we see good availability? Sure it will. Maybe April, or indeed May is where things will start to make a difference. Performance will be very good, however with the clocks I have seen (and only if they are final) I do believe they will not be brilliant though.

And this one is a gem...

NVIDIA has many trump cards though, they have an outstanding driver team which will driver performance upwards fast and soon.

:oops:
 
I guess even if there's no dedicated instruction, one can always choose an appropriate mantissa and disguise int32 multiplication as fp32.
I'm not sure that will work, as the multiplier will always try to create a normalised result - there's "magic" for the 24th bit, which is implicit in the final result. To do uint24 arithmetic it needs to be tweaked a bit, I think, which is why I put "free" in quotes.

The end result is really just a wrinkle for existing CUDA kernels that rely upon the workings of 24-bit.
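The limit Jawed is describing can be demonstrated on the host side. Below is an illustrative Python sketch (not CUDA, and `mul_via_fp32` is a made-up name): it assumes only that fp32 has a 24-bit significand (23 stored bits plus the implicit leading bit), so integer products stay exact only while everything fits in 24 bits.

```python
import numpy as np

def mul_via_fp32(a: int, b: int) -> int:
    """Multiply two non-negative integers through float32 arithmetic.

    Exact only while a, b and a*b all fit in fp32's 24-bit
    significand (i.e. are < 2**24); beyond that the result is
    rounded and low bits are silently lost.
    """
    return int(np.float32(a) * np.float32(b))

# 12-bit operands -> 24-bit product: still exact.
assert mul_via_fp32(4095, 4095) == 4095 * 4095

# 2**24 + 1 can't even be represented in fp32, so the "multiply"
# is already wrong before the product is formed.
assert mul_via_fp32(2**24 + 1, 1) != 2**24 + 1
```

Which is why a "free" 24-bit integer multiply falls out of the fp32 datapath, while full int32 needs extra work.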

Jawed
 
[I am assuming 650MHz is half hot clock. ]

It was being speculated before, but now this sorta confirms it. The difference is that now we know that GF100 has missed its clock targets.

I may have missed it, but must Fermi's GPU clock frequency be 1/2 of the hot clock? That wasn't the case in G80 or GT200.

And what were the target clocks for the GTX 470 ? Plus we still don't know the clocks for the GTX 480. The high-end model is historically clocked higher anyway.

I'm still betting on 650 MHz GPU and 1500 MHz for the Stream Processors in the GTX 480.
 
I may have missed it, but must Fermi's GPU clock frequency be 1/2 of the hot clock?

Yes. It was the same in G80/GT200 as well. The scheduler clock was half the shader clock; it's just that nobody marketed or talked much about the scheduler clock. Now that the texture units run at the scheduler clock, it's become more prominent.
 
I may have missed it, but must Fermi's GPU clock frequency be 1/2 of the hot clock? That wasn't the case in G80 or GT200.

Yes, you missed this bit.

[I am assuming 650MHz is half hot clock. ]

And what were the target clocks for the GTX 470 ? Plus we still don't know the clocks for the GTX 480. The high-end model is historically clocked higher anyway.

I'm still betting on 650 MHz GPU and 1500 MHz for the Stream Processors in the GTX 480.

The GTX 285 had a hot clock of 1476 MHz. Anything slower than that, especially by 100 MHz or more, is definitely missing the clock target.
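The arithmetic behind that claim is simple enough to spell out. The figures below are the ones circulating in this thread (the GTX 470 number is rumoured, not a confirmed spec), plus the assumption that Fermi's hot clock is fixed at twice the scheduler clock:

```python
# Figures from the thread -- the GTX 470 clock is rumoured, not confirmed.
GTX285_HOT_CLOCK = 1476   # MHz, shipping GT200b hot clock
GTX470_CORE_CLOCK = 650   # MHz, rumoured scheduler/"core" clock

# On Fermi the hot (shader) clock is twice the scheduler clock.
gtx470_hot_clock = 2 * GTX470_CORE_CLOCK         # 1300 MHz
shortfall = GTX285_HOT_CLOCK - gtx470_hot_clock  # 176 MHz

print(gtx470_hot_clock, shortfall)  # 1300 176
```

So even against the previous generation's shipping part, the rumoured GTX 470 hot clock comes in well over 100 MHz short.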
 
Yes. It was the same in G80/GT200 as well. The scheduler clock was half the shader clock, it's just that nobody marketed/talked much about the scheduler clock. Now that the texture units run at scheduler clock it's become more prominent.

As far as I recall, the GPU clock frequency in both G80 and GT200 wasn't 1/2 of the hot clock.

G80 was 575 MHz / 1.35 GHz
GT200 was ~600 MHz / ~1.3 GHz
 
The GTX 285 had a hot clock of 1476 MHz. Anything slower than that, especially by 100 MHz or more, is definitely missing the clock target.

Ah ok, you're using the GTX 285's clocks as the baseline. But still, we don't know the clocks for the GTX 480, which is what we should be using for that comparison. Not the salvage part, which is historically clocked lower than the high-end model using the full chip.
 
Yes. It was the same in G80/GT200 as well. The scheduler clock was half the shader clock, it's just that nobody marketed/talked much about the scheduler clock. Now that the texture units run at scheduler clock it's become more prominent.

Considering only ROPs/L2 seem to run at core clock this time, I guess it's safe to say that the minority of the chip runs at core clock.
 
As far as I recall, the GPU clock frequency in both G80 and GT200 wasn't 1/2 of the hot clock.

G80 was 575 MHz / 1.35 GHz
GT200 was ~600 MHz / ~1.3 GHz

The scheduler doesn't run at GPU clock, it runs at half the shader clock. It has to be synchronous since AFAIK there's no buffer or staging area for issued instructions / operands.
 
The first GTX470 & 480 prices have appeared. Image of the website in case it disappears:

[Image: Nvidia_Fermi_prijzen_duiken_op_01.jpg]


The GTX480 must really rock to justify a $679.99 price.

A bit pricey, but around $600 seems right for a GTX 480 with 2 GB of VRAM. Anything less than 2 GB of VRAM is a crime nowadays.
 