NVIDIA GF100 & Friends speculation

What consumer cares about die size or transistor count? People care about performance and cost. When I buy a plasma TV, I couldn't care less how many rejects came off the assembly line, or what expenses went into making my Honda Civic; all I care about is its performance record and cost. Anything else is spin.

The average consumer doesn't even care about power consumption; otherwise, they'd stop using halogen/incandescent lights and buy more efficient cars, refrigerators, etc.
Indeed. But at the moment we are talking about neither; we are talking about architectural differences and possible performance.
 
It would be nice if the Charlie haters/NV fanboys could stop jacking each other off and get back to the thread at hand, IMO...
 
Are you at liberty to disclose when we will get your take on this reveal?
When NV's embargo expires, as always, despite the fact that I haven't signed anything legally binding restricting when and what I can say about the data I get for many years now :p
 
It would be nice if the Charlie haters/NV fanboys could stop jacking each other off and get back to the thread at hand, IMO...

Why is it that not liking Charlie instantly turns you into an Nvidia fanboy? It isn't for no reason that Charlie has so many detractors, and not every single one is a fanboy or a hater. After 3-4 months of Charlie predicting doom for GF100 and Nvidia, it turns out GF100 looks highly competitive and Nvidia have one of the best CES showings of the lot with Tegra and non-discrete stuff. Surely we can all say that Charlie has been misleading us about the doom and gloom if the benchmarks tonight are in line with the rumours and the architecture is sound and scalable.

Anyway, away from the subject of Charlie, it looks like GF100 will be what Nvidia need to survive the industry as a non-x86-CPU producer, which is why all this speculation started...

Hopefully it comes in at less than £350/370 for a GTX 380, but I feel like I might be in for a wait.
 
I suspect he's more right than wrong on this issue. I bet they have a little hardware for the most fixed-function, specific parts of the tessellator, and the rest is just some kind of shader emulation. With all the talk of higher triangle rates when it seems they've given out a 1 tri/clock rate, I wonder if maybe they have done something completely different, or maybe it's just that the setup/tri rate is on the hot clock, as some have speculated.

I really just wish it was 9; the curiosity is burning a hole in my head. ;)
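To put some very rough numbers on the hot-clock idea (a back-of-envelope sketch in Python; the ~700 MHz core clock and ~1.4 GHz hot clock below are my assumptions, not confirmed specs):

    # Peak triangle setup throughput at 1 tri/clock, with speculative clocks.
    core_clock_hz = 700e6    # assumed core clock, ~700 MHz
    hot_clock_hz = 1400e6    # assumed hot clock, roughly 2x core
    tris_per_clock = 1       # the 1 tri/clock figure being quoted around

    print(f"core clock: {tris_per_clock * core_clock_hz / 1e6:.0f} Mtris/s")  # 700
    print(f"hot clock:  {tris_per_clock * hot_clock_hz / 1e6:.0f} Mtris/s")   # 1400

Running setup at the hot clock would roughly double the peak triangle rate without changing the headline 1 tri/clock figure, which would square the two claims.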
 
When NV's embargo expires, as always, despite the fact that I haven't signed anything legally binding restricting when and what I can say about the data I get for many years now :p
Really? I thought everyone came to an understanding that B3D took some extra time with their pieces :p

Thanks btw. :)
 
I suspect he's more right than wrong on this issue. I bet they have a little hardware for the most fixed-function, specific parts of the tessellator, and the rest is just some kind of shader emulation. With all the talk of higher triangle rates when it seems they've given out a 1 tri/clock rate, I wonder if maybe they have done something completely different, or maybe it's just that the setup/tri rate is on the hot clock, as some have speculated.

I really just wish it was 9; the curiosity is burning a hole in my head. ;)

Really, really can't wait till the NDA expires so these types of posts come to an end.
 
I'm asking because the previous leaked info/videos suggested that the GTX 360 was the only one being shown, and since the launch is still around March, NVIDIA may still want to keep the performance of the full Fermi chip in the 380 under NDA.

I don't think anyone out there has credibly suggested that the video was the GTX 360. And either way, why would they want the 360 to be announced but not the flagship? The term flagship is there for a reason...
 
I suspect he's more right than wrong on this issue. I bet they have a little hardware for the most fixed-function, specific parts of the tessellator, and the rest is just some kind of shader emulation. With all the talk of higher triangle rates when it seems they've given out a 1 tri/clock rate, I wonder if maybe they have done something completely different, or maybe it's just that the setup/tri rate is on the hot clock, as some have speculated.

I really just wish it was 9; the curiosity is burning a hole in my head. ;)
Since the hot clock can't be used without a major change in the "hardware" pipeline, I think it's more likely to be shader-based.

The problem I see with it being shader-based is that it would be blazing fast with no other shaders running but would slow down when heavy shaders are running, and since Fermi's peak ALU throughput is already not that high, the better theoretical performance could end up... useless.

In fact, it could probably be done on any GPU, and Cypress, having a higher peak ALU throughput, could even benefit more from this.
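To put a rough number on that throughput gap (a sketch; the Fermi figures, 512 ALUs at a ~1.4 GHz hot clock, are assumptions since final clocks aren't public, while the Cypress figures are the shipping HD 5870):

    # Peak single-precision throughput, counting FMA/MAD as 2 flops per ALU per clock.
    def peak_gflops(alus, clock_ghz, flops_per_alu_per_clock=2):
        return alus * clock_ghz * flops_per_alu_per_clock

    print(f"Fermi (assumed): {peak_gflops(512, 1.4):.0f} GFLOPS")    # ~1434
    print(f"Cypress (5870):  {peak_gflops(1600, 0.85):.0f} GFLOPS")  # ~2720

If tessellation expansion runs on the shader ALUs, it has to share that budget with everything else in the frame, and Cypress has roughly twice the peak to spare.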
 
Really, really can't wait till the NDA expires so these types of posts come to an end.
In cases like this I am glad not to be in the know; biting your own tongue is quite difficult.
I don't think anyone out there has credibly suggested that the video was the GTX 360. And either way, why would they want the 360 to be announced but not the flagship? The term flagship is there for a reason...
Razor did suggest it was the stripped-down ES.. (page 16)
It was a stripped-down version; those are engineering samples they used, it's common practice.
 
In cases like this I am glad not to be in the know; biting your own tongue is quite difficult.

And I've been doing it for nearly 3 months now, actively avoiding the speculation threads :) Anyway, it's not much longer. After waiting 3 months, the next bit is nothing.
 
It depends on the game/engine and how it loads the hardware. Some show much higher performance, others less, and then there's the question of whether the CPU was limiting.

Of course. 40% was an average; that should be understood.


There are already games with DX11 support and many more to come (both exclusives and multiplatform titles). Early adoption now, with better and cleverer use as time passes. Something like tessellation will make for an even more drastic difference than we already see, by highlighting the gap between 'Lego brick' characters and CGI-like ones.

Also, DX11 is far, far better than the DX9->DX10 transition in terms of ease of use, freedom, etc.

DX11 != tessellation. Also, the only game I've seen getting much DX11 publicity so far is Dirt 2, to little effect; it looks about the same as the DX10 version.

I'm just wondering what games will make a big tessellation push. In order to do that, a developer would have to deviate in a major way from the console version, and I don't see any current examples of that, e.g. Assassin's Creed 2, Batman: Arkham Asylum, Modern Warfare 2, etc.
 
There are a lot of interesting things that can be done with tessellation with a little creativity. The hair demo you've seen is actually PhysX + tessellation.
 
The real question is whether NVidia will launch a 512-ALU GPU based on the A3 chip. I have a feeling that the "ultra", the card with 512 ALUs, won't be arriving until B1 or later. Should be here in time for Christmas, eh?...

Jawed
 
Since the hot clock can't be used without a major change in the "hardware" pipeline, I think it's more likely to be shader-based.

The problem I see with it being shader-based is that it would be blazing fast with no other shaders running but would slow down when heavy shaders are running, and since Fermi's peak ALU throughput is already not that high, the better theoretical performance could end up... useless.

In fact, it could probably be done on any GPU, and Cypress, having a higher peak ALU throughput, could even benefit more from this.

That middle part is what makes me wonder... People in the know in this thread have been saying something like "with tessellation it's going to be crazy fast and beat ATI by a mile, but in games it's only about 1.X times faster." This leads me to believe the shader-based theory, because it would probably show just that: in certain controlled benches with tessellation it could show itself to be way faster, but really it's only 1.X faster, ending up more of a benchmark bragging feature. PhysX in 3DMark says hi. ;)
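That intuition can be put into a quick Amdahl's-law sketch (the 10% tessellation share and 8x speedup below are made-up numbers, purely for illustration):

    # Amdahl's law: if only a fraction f of frame time is tessellation-bound,
    # even a huge tessellation speedup s barely moves the whole-frame number.
    def frame_speedup(f, s):
        return 1.0 / ((1.0 - f) + f / s)

    # Hypothetical game frame: tessellation is 10% of the time, done 8x faster.
    print(f"{frame_speedup(0.10, 8.0):.2f}x overall")  # ~1.10x

A synthetic bench that is nearly 100% tessellation-bound shows close to the full 8x, which is exactly the benchmark-bragging scenario described above.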

Originally Posted by ChrisRay
Really, really can't wait till the NDA expires so these types of posts come to an end.

Me too. Care to share anything? ;)
 
Specifically, I said Fermi's tessellation engine is impressive. I think it's the biggest investment Nvidia has put into accelerating new API features in a very long time. And what I mean by that is that Nvidia's tessellation engine is certainly not implemented in a half-assed way. And I stand by that statement :) It's not long now until everyone has all the information about it.

I hope so. :) Tessellation is one of the DX11 features I am most excited about, especially after seeing the videos of AvP and using Unigine in free mode. I definitely want to see more of it in games.
 