NVIDIA GF100 & Friends speculation

Do LAN Parties even still exist now that broadband connections are so widespread?

I think the worst part of this marketing scheme is the big ugly characters with giant guns…

They are even bigger.

Dreamhack etc.
 
So they're now comparing it to a 5750, not a 5770 anymore? :/

Also I am wondering: the GTS250 has fewer shaders (128 vs. 192) and a lower clock rate, but it still holds strong against the GTS450 in some cases?? Is it the memory bandwidth? Or have nVidia invested almost all the new transistors in GPGPU and DX11 features, leaving little for actual speed improvements?
 
Do LAN Parties even still exist now that broadband connections are so widespread?
Err, PAX is on right at this very moment, I believe. QuakeCon was earlier in the summer, and in Europe there are others like Dreamhack, as mentioned by the previous poster, etc.
 
So they're now comparing it to a 5750, not a 5770 anymore? :/

Also I am wondering: the GTS250 has fewer shaders (128 vs. 192) and a lower clock rate, but it still holds strong against the GTS450 in some cases?? Is it the memory bandwidth? Or have nVidia invested almost all the new transistors in GPGPU and DX11 features, leaving little for actual speed improvements?

The GTS250 has a higher shader clock and more than twice the texture fillrate. It also has 16 ROPs at 16 pixels/clock.
 
Err, PAX is on right at this very moment, I believe. QuakeCon was earlier in the summer, and in Europe there are others like Dreamhack, as mentioned by the previous poster, etc.

Sorry, I meant "amateur" LAN Parties, where a bunch of friends get their PCs into a cramped garage with cables running everywhere…
 
Sorry, I meant "amateur" LAN Parties, where a bunch of friends get their PCs into a cramped garage with cables running everywhere…

I was in one last week.

I have one once a month (or so) at my apartment. With modern components (e.g. LCDs) it's nothing like the LAN parties of old. Plenty of space and minimal exposed cabling. My roommate's computer is already in the living room, as is my HTPC (which also happens to be my most up-to-date system), along with all the networking equipment, including the 8-port gigabit switch. Setup is pretty simple: clear off the table, bring out the UPS and the other PC from my bedroom, and voila!
 
The GTS250 has a higher shader clock and more than twice the texture fillrate. It also has 16 ROPs at 16 pixels/clock.
Small nitpick: the GTS250 doesn't have more than twice the texture fillrate, but slightly less than twice :).
The GTS450 also has 16 ROPs - though yes, if the chip isn't otherwise changed from GF104, rasterizer and color fillrate would be limited to 8 pixels/clock, which indeed might limit performance a bit. The lower bandwidth certainly doesn't help either, though the cards released so far didn't really seem very bandwidth demanding (which might have something to do with the fact that they can't push that many pixels anyway). So all told it looks OK to me - if only the die were a bit smaller...
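For anyone who wants to check that nitpick, here's a rough sketch of the fillrate math in Python, using the commonly quoted reference clocks and unit counts (assumed here; vendor boards vary):

```python
# Rough fillrate comparison using assumed reference specs.
# GTS250 (G92b): 64 TMUs, 16 ROPs, 738 MHz core.
# GTS450 (GF106): 32 TMUs, 16 ROPs, 783 MHz core, rasterizer-limited to 8 px/clk.

def texel_rate(tmus, core_mhz):          # GTexels/s
    return tmus * core_mhz / 1000.0

def pixel_rate(px_per_clk, core_mhz):    # GPixels/s
    return px_per_clk * core_mhz / 1000.0

gts250_tex, gts450_tex = texel_rate(64, 738), texel_rate(32, 783)
print(f"texture: {gts250_tex:.1f} vs {gts450_tex:.1f} GT/s "
      f"(x{gts250_tex / gts450_tex:.2f})")       # ~47.2 vs ~25.1, i.e. ~1.9x, not 2x

gts250_pix, gts450_pix = pixel_rate(16, 738), pixel_rate(8, 783)
print(f"color fill: {gts250_pix:.1f} vs {gts450_pix:.1f} GP/s")  # ~11.8 vs ~6.3
```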
 
[attached image: 334szeg.png]
 
So it all comes down to that 64-bit link between the SMs and the ROPs? Quite a convincing argument that Fermi was not meant to be a gaming architecture, I think. :(

What if a Fermi refresh featured a widened link - how much would that improve the performance and efficiency of the whole design, in your estimates?
 
What if a Fermi refresh featured a widened link - how much would that improve the performance and efficiency of the whole design, in your estimates?

Well, we don't know how much of a limitation it is in the first place. The GTS450 has a ~30% flops advantage over the GTS250 if you don't count the latter's extra MUL. In most other metrics it's slower - 53% of the texture rate, 82% of the bandwidth, 53% of the fillrate. Yet it's faster than the 250 in several titles and on par otherwise, based on the numbers released so far. Part of that would be due to more efficient ROPs, TMUs etc.
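Quick sanity check on the flops and bandwidth figures, again with assumed reference specs (1836/1566 MHz shader clocks, 2200/3608 MT/s memory):

```python
# Back-of-the-envelope GTS450 vs GTS250, assumed reference specs:
# GTS250: 128 SPs @ 1836 MHz (MAD only, extra MUL ignored), 256bit GDDR3 @ 2200 MT/s.
# GTS450: 192 SPs @ 1566 MHz, 128bit GDDR5 @ 3608 MT/s.

def gflops(sps, shader_mhz):               # MAD = 2 flops/clock
    return sps * shader_mhz * 2 / 1000.0

def bandwidth(bus_bits, mts):              # GB/s
    return bus_bits / 8 * mts / 1000.0

print(f"flops:     {gflops(192, 1566) / gflops(128, 1836):.0%}")      # ~128%, i.e. ~30% ahead
print(f"bandwidth: {bandwidth(128, 3608) / bandwidth(256, 2200):.0%}")  # ~82%
```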
 
What I find interesting is that some of the 4xxM parts have 192 SPs and a 192bit memory interface. So either GF104 has such bad yields that Nvidia is putting chips with half of the SPs disabled into the mobile space, or GF106 in fact has 192bit/24 ROPs, as was originally rumored (there were even PCB blueprints supporting that rumor).

If the latter is the case, I wonder why they went with 128bit for the GTS450? With 50% more bandwidth and fillrate, it might have been able to compete with the 5770, not just the 5750.
 
The 144 SP 445M also has a 192-bit interface. Besides, Nvidia's product images sort of confirm that the 460M is GF106-based, so it does in fact have a 192-bit interface. Whether that will ever appear on the desktop is a mystery.
 
If the latter is the case, I wonder why they went with 128bit for the GTS450? With 50% more bandwidth and fillrate, it might have been able to compete with the 5770, not just the 5750.
Those PCB pictures showing 6 (12) RAM chip locations are enough proof for me.
As for why they didn't go with 192bit for the GTS450, that's a good question - maybe yields increase a bit, but probably not by much. According to my theory, they went with 128bit because performance wouldn't change one bit and 192bit would just increase cost for the additional memory chips. Unless the rasterizer and the SMs are changed (or it has less capable ROPs per memory partition, which given the die size seems unlikely), color fillrate wouldn't increase at all, since the ROPs can already handle twice as many pixels per clock as the rest of the chip is able to deliver. Z fill might be different (or not), but the chip has more than enough of that already. Bandwidth could still help a tiny bit (though the major consumers are the ROPs), but don't forget that the performance difference between 192bit and 256bit GF104 is very small - and that chip has twice as many SMs/GPCs (OK, not quite twice, since one SM is disabled).

Of course, that leaves the question why the chip is 192bit in the first place... Apparently it's used for the mobile chips, and it also gives some additional flexibility in how much memory cards could be outfitted with in theory (512MB, 768MB, 1GB and 1.5GB are all possible, whereas Juniper is basically restricted to 512MB/1GB). And the fact that the reference PCB has spots for more RAM chips could be an indication that we might eventually see cards with different amounts of memory, I guess.

Still, I think 192bit memory on that chip is a mistake (that is, the additional transistors/die size it costs are not justified), though maybe I'm missing something - surely Nvidia must have reasons for doing it... It should also be possible to build DDR3-based cards (with 1.5GB) which don't suck (much), but I'm not sure that's really a sensible option either.
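To illustrate the capacity-flexibility point, here's my own little sketch, assuming the 1 Gbit (128 MB) and 2 Gbit (256 MB) GDDR5 densities that were common at the time:

```python
# Each 32-bit channel carries one GDDR5 chip; densities of 128 MB and 256 MB assumed.

def capacities_mb(bus_bits, densities_mb=(128, 256)):
    channels = bus_bits // 32
    return sorted(channels * d for d in densities_mb)

print(capacities_mb(192))   # [768, 1536] -> 768MB or 1.5GB fully populated
print(capacities_mb(128))   # [512, 1024] -> the Juniper (and 128bit GTS450) options
# Populating only 4 of GF106's 6 channels gives the 512MB/1GB configurations too,
# hence all four capacities being possible in theory.
```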
 
Those PCB pictures showing 6 (12) RAM chip locations are enough proof for me.
As for why they didn't go with 192bit for the GTS450, that's a good question - maybe yields increase a bit, but probably not by much. According to my theory, they went with 128bit because performance wouldn't change one bit and 192bit would just increase cost for the additional memory chips.
They could've gone for 768MB, so cost may actually have gone down a little. You're right that performance gains would probably have been minor, but I don't think they'd be 0% across the board.

Though with Nvidia, I wouldn't be surprised if they decided to go for 128bit/1GB for marketing reasons as well (there are enough consumers out there who think 1GB=faster than 768MB).

Of course, that leaves the question why the chip is 192bit in the first place...
Yeah, that's the question of the day. If those die-size rumors (~240mm²) are correct, the chip is far too expensive to make for its performance. All the perf./mm² improvements GF104 introduced are made a mockery of by this chip. It's over 40% bigger than Juniper and performs slightly worse than a 5770.
In terms of perf./mm², they basically wasted a full die shrink; even RV770 was better in that respect.
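A quick check on those claims, taking the rumored ~240mm² for GF106 at face value and the commonly cited figures for the others:

```python
# Die sizes in mm^2: GF106 is a rumored figure, Juniper and RV770 are the usual numbers.
gf106, juniper, rv770 = 240.0, 166.0, 256.0
print(f"GF106 vs Juniper: {gf106 / juniper - 1:.0%} larger")   # ~45%
print(f"GF106 vs RV770:   {gf106 / rv770:.0%} of the area")    # ~94%
# So if the GTS450 only lands around HD 5750/5770 performance, its perf/mm^2 is
# roughly in (or below) 55nm RV770 territory despite being built on 40nm.
```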
 