NVIDIA Fermi: Architecture discussion

Sure, but if the '380' is 250 W, it may not make much sense to have a 300 W dual-GPU card.
Could even be that it couldn't beat the 5970, even if the 380 trounces the 180 W 5870 - it'll be all about power efficiency (at 300 W) and not absolute performance.
To many, yes, efficiency will determine their choice.
 
At least according to that supposed leak the 104 is faster.

NVIDIA's AIBs said the 104 was "high end", which is a bit ambiguous. Typically "high end" sits below "enthusiast", so who knows. Either way, my guess is that the 14-pin beast is the faster config and likely dual-GPU. That would imply a single GPU that would be 5-10% faster than the 5870?

Of course we really know nothing. :)

Doesn't the word "end" in "high end" imply it is one of the two ends of a complete market offering, where the other end is covered by the "low end" segment?

"Enthusiast" is a synonym of "High End", IMAO :)
 
Videos of GF100-based hardware running the Unigine Heaven bench are up on YouTube. The tessellation pattern seems different than on ATI - perhaps (obviously) a different methodology, perhaps something a little more programmable?
 
Videos of GF100-based hardware running the Unigine Heaven bench are up on YouTube. The tessellation pattern seems different than on ATI - perhaps (obviously) a different methodology, perhaps something a little more programmable?
DX11 defines the tessellation patterns so it should be the same.
 
DX11 defines the tessellation patterns so it should be the same.

You're right, it should be, but as near as I can tell, it isn't. Have a look at the bench running on both vendors' hardware and it sticks out like a sore thumb - something different is happening between the two, but I'm not sure what. The mesh pattern is different somehow.
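For reference, the pattern isn't something the driver gets to choose freely: in D3D11 the partitioning scheme is declared in the hull shader, and the spec fixes where the tessellator places points for a given scheme and set of factors. A minimal sketch of where that declaration lives (Windows plus d3dcompiler.lib assumed; the shader itself is made up for the example, not taken from Heaven):

```cpp
#include <cstdio>
#include <cstring>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Illustrative D3D11 hull shader: the partitioning attribute below is
// where the tessellation pattern is pinned down by the API.
static const char kHull[] = R"(
struct CtrlPt { float3 pos : POSITION; };
struct HsConst {
    float edges[3] : SV_TessFactor;        // per-edge tess factors
    float inside   : SV_InsideTessFactor;  // interior tess factor
};
HsConst PatchConst() {
    HsConst o;
    o.edges[0] = o.edges[1] = o.edges[2] = 8.0f;
    o.inside = 8.0f;
    return o;
}
[domain("tri")]
[partitioning("fractional_odd")]   // DX11 defines the point placement
[outputtopology("triangle_cw")]    // per scheme, not per vendor
[outputcontrolpoints(3)]
[patchconstantfunc("PatchConst")]
CtrlPt main(InputPatch<CtrlPt, 3> p, uint i : SV_OutputControlPointID) {
    return p[i];
}
)";

int main() {
    ID3DBlob *code = nullptr, *errors = nullptr;
    // Compile as a Shader Model 5.0 hull shader.
    HRESULT hr = D3DCompile(kHull, strlen(kHull), nullptr, nullptr, nullptr,
                            "main", "hs_5_0", 0, 0, &code, &errors);
    if (FAILED(hr)) {
        printf("compile failed: %s\n",
               errors ? (const char*)errors->GetBufferPointer() : "?");
        return 1;
    }
    printf("hull shader compiled: %zu bytes\n", (size_t)code->GetBufferSize());
    return 0;
}
```

So if the meshes genuinely differ between the two vendors for the same scheme and factors, either one tessellator isn't placing points where the spec says it should, or the two demo runs aren't actually using the same shader and factors.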
 
You may excuse the sarcasm, but that's obviously just a "byproduct" of tessellation running entirely in "software" on the GF100 :LOL:
 
You're right, it should be, but as near as I can tell, it isn't. Have a look at the bench running on both vendors' hardware and it sticks out like a sore thumb - something different is happening between the two, but I'm not sure what. The mesh pattern is different somehow.

Most likely NV is cheating to get decent performance out of the Furby.

*I claim to be the first to have discovered that they are cheating again - just in case.
 
GF104 as a dual-board chip is still based on that unverified 'source', a laid-off engineer. I wouldn't put too much effort into arguing over it.

He/she never claimed it was a dual-GPU card... only that its performance was higher than GF100's.
 
Saw some rumblings about screenshots of ray tracing with Fermi...

Do you think Fermi will finally be the point where NVIDIA starts actually accelerating the core rendering engine of Mental Ray?
 
Sure, but if the '380' is 250 W, it may not make much sense to have a 300 W dual-GPU card.
Could even be that it couldn't beat the 5970, even if the 380 trounces the 180 W 5870 - it'll be all about power efficiency (at 300 W) and not absolute performance.

Not really. It is about absolute performance. In the enthusiast market, power efficiency is not really at the top of the list of priorities. If they can deliver a single card that is quite a bit faster than the current "king" of performance (the HD 5970), that's what they will deliver.
If GF100 alone is capable of keeping up with Hemlock (even if not beating it in all cases), then yes, NVIDIA may not need an X2 card with two fully enabled Fermi chips in it, but they will definitely make an X2 card anyway, with two mid-range chips (or even salvage parts from the full Fermi chip), as I speculated in the past and as someone else mentioned in the last couple of pages.
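To put rough numbers on the 300 W argument: a back-of-envelope C++ sketch, using only the rumoured figures from this thread and a crude P ∝ f·V² scaling assumption (with voltage tracking frequency, so P ∝ f³); none of these figures are confirmed, and real chips leak and don't scale this cleanly.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double single_gpu_w = 250.0;            // rumoured '380' board power
    const double dual_limit_w = 300.0;            // PCIe ceiling for one board
    const double per_gpu_w    = dual_limit_w / 2; // 150 W per GPU, ignoring
                                                  // shared fan/VRM overhead
    // P ~ f^3 under the crude assumption above, so f scales as cbrt(P).
    const double clock_scale  = std::cbrt(per_gpu_w / single_gpu_w);
    printf("each GPU gets %.0f W -> roughly %.0f%% of full clocks\n",
           per_gpu_w, clock_scale * 100.0);       // ~84%
    return 0;
}
```

Under those assumptions each GPU in an X2 gets about 150 W, i.e. very roughly 84% of full clocks before counting shared-board overhead - which is essentially the Hemlock recipe too (two Cypress chips clocked well under the 5870's), and why downclocked or salvage parts look like the plausible X2 route.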
 
Do you think Fermi will finally be the point where NVIDIA starts actually accelerating the core rendering engine of Mental Ray?
I'm expecting it to be a killer feature. Ray tracing has been a poor fit for GPUs so far, and I think multiple kernels and the cache hierarchy in Fermi will allow a more finely-grained implementation. The key problem is keeping the GPU working instead of stalled on incoherency - and I'm guessing that the architectural improvements in Fermi will help considerably, seeing as fine-grained execution of kernels and cached global memory cut out the hundreds or thousands of cycles of latency that older GPUs suffer.
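One concrete shape for a "more finely-grained implementation": instead of one megakernel that stalls whenever rays diverge, split the tracer into small stages joined by compact ray queues, so every pass starts densely packed. A host-side C++ sketch of that structure (all names and the faked intersection test are illustrative, not anyone's actual renderer; on Fermi each stage would map to its own kernel, with the caches absorbing the queue traffic):

```cpp
#include <cstdio>
#include <utility>
#include <vector>

struct Ray { float o[3], d[3]; int pixel; };
struct Hit { int prim; float t; int pixel; };

// Stage 1: trace a batch; hits go to a queue, misses simply terminate.
void traverse(const std::vector<Ray>& in, std::vector<Hit>& hits) {
    for (const Ray& r : in)
        if (r.pixel % 2 == 0)                 // stand-in for a real BVH test
            hits.push_back({0, 1.0f, r.pixel});
}

// Stage 2: shade hits; secondary rays go into a fresh, compact queue so
// the next traversal pass starts fully occupied again.
void shade(const std::vector<Hit>& hits, std::vector<Ray>& secondary) {
    for (const Hit& h : hits)
        secondary.push_back({{0, 0, 0}, {0, 1, 0}, h.pixel});  // e.g. shadow ray
}

int main() {
    std::vector<Ray> rays;
    for (int p = 0; p < 8; ++p)
        rays.push_back({{0, 0, 0}, {0, 0, 1}, p});  // primary rays
    for (int bounce = 0; bounce < 2 && !rays.empty(); ++bounce) {
        std::vector<Hit> hits;
        std::vector<Ray> next;
        traverse(rays, hits);
        shade(hits, next);
        printf("bounce %d: %zu rays in, %zu continued\n",
               bounce, rays.size(), next.size());
        rays = std::move(next);
    }
    return 0;
}
```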

Jawed
 
Videos of GF100-based hardware running the Unigine Heaven bench are up on YouTube. The tessellation pattern seems different than on ATI - perhaps (obviously) a different methodology, perhaps something a little more programmable?

Don't think so. There are other videos where tessellation is activated; it was simply not active in some of them. ;)

Am I wrong, or do I see some intense stuttering when the wireframe is on?
With three GF100s under the hood? :LOL:
 
"Not optimized for this view" the guy in the video is saying.

What does that mean? :asd:
I mean, why show something that is not optimized? It also seems that the demo crashed at the end of the video...
I wonder if this is somehow linked to clocks set too high for the GPU... maybe to counterbalance the fact that the GPUs shown are salvage parts (fewer CUDA cores active)...
 
Cooling shouldn't be the problem.

For the rest of your questions, ask NVIDIA. ^^

/edit
Wonder if Charlie will take this picture and write that Fermi has massive heat problems. :D
 