NVIDIA GF100 & Friends speculation

Depends on which rumours are true. I've seen everything from a simple respin all the way to a smaller die with 300m fewer transistors. If it's the former then yeah it will most likely find a home in Quadro/Tesla too. But if they did silly things like reduce the amount of L2, number of ROPs or number of schedulers per SM then it's less likely. They could also reduce the GPC count from 4 to 2, remove the artificial limits on rasterization throughput and end up in about the same ballpark. Would they bother fiddling with all that? IMO they would if they cared at all about making a play for the dual-GPU crown.
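To put rough numbers on that "same ballpark" idea, here's a toy sketch; the 8 pixels/clock per GPC and the size of the artificial cap are illustrative assumptions, not confirmed GF1xx figures:

```python
# Toy model: halving the GPC count while removing an assumed artificial
# cap on rasterization throughput can land at the same peak rate.

def raster_px_per_clock(gpcs: int, px_per_gpc: int = 8, cap: float = 1.0) -> float:
    """Peak rasterizer throughput in pixels per clock (illustrative)."""
    return gpcs * px_per_gpc * cap

gf100_like = raster_px_per_clock(gpcs=4, cap=0.5)  # 4 GPCs, throttled (assumed)
slim_part  = raster_px_per_clock(gpcs=2, cap=1.0)  # 2 GPCs, uncapped

print(gf100_like, slim_part)  # 16.0 16.0 -> same ballpark, fewer transistors
```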

It's hard to say; it depends on whether they intended a 32nm refresh this quickly or whether it is simply a fixed GF100 taking advantage of better yields. If it is indeed the planned 32nm refresh backported to 40nm instead, it could be different again. It could obviously go either way, and unless you're privy to some insider information it's hard to call.
 
I normally don't take part in these threads, but I don't think that a graphics card will have much, if any, effect on the electric bill due to heat generation. Usually a thermostat is placed somewhat centrally in a house, mounted about 5 feet up. I have never seen a computer set up anywhere near a thermostat; they are usually in a bedroom, office, or the basement, not in the hallway. For example, in my house, which is a two-story, the thermostat is located centrally on the first floor between the dining area and the living room. The computers are now set up in the basement, but at one time they were set up on the second floor in one of the bedrooms. I do not think that either location would have any effect on the AC. The in-laws' house down in 'Bama is a ranch with the thermostat located in the hallway on the way to the bedrooms. I do not think the PC set up in one of the bedrooms would have any effect either.

Disclaimer: My desktop has a GTS250, my son's desktop has a 4350. The old Gateway has the original Mobility Radeon (I believe), and the eMachines laptop has Intel integrated graphics, a 4500M.

Jim

The size of the location would also come into it: whether you live in an apartment or flat, or a large house like yours. Noise also has to include the noise from the additional cooling fans required. It is something which has 'it depends' written all over it.
 
http://www.nordichardware.com/news/71-graphics/41570-the-geforce-gtx-580-mystery-is-clearing.html

Some compilation of the rumours: clocks upped to 775 MHz, no mysterious unlocked TMUs as the given tex fillrate is what you'd expect from the extra clocks and SM, 300 million transistors saved by cutting unspecified HPC features... apparently enough to beat Cayman? :rolleyes:
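For what it's worth, the fillrate arithmetic checks out if you assume GF100's 4 TMUs per SM carry over unchanged (an assumption, since nothing official is out yet):

```python
# Sanity check: is the rumoured fillrate explained by clocks + the 16th SM
# alone, with no "unlocked" TMUs? Assumes 4 TMUs per SM as on GF100.

def tex_fillrate_gtexels(sms: int, tmus_per_sm: int, core_mhz: int) -> float:
    return sms * tmus_per_sm * core_mhz / 1000.0

gtx480 = tex_fillrate_gtexels(sms=15, tmus_per_sm=4, core_mhz=700)  # 42.0
gtx580 = tex_fillrate_gtexels(sms=16, tmus_per_sm=4, core_mhz=775)  # 49.6

print(gtx480, gtx580)  # the ~18% bump needs no extra TMUs per SM
```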

Cutting 'HPC features' from a flagship GPU just to beat Cayman at gaming!? Sounds fishy and unlike nV to me.
 
I'm not the one calling people loonies for having a certain preference; perf/watt is a valid metric no matter which way you slice it.

Power consumption matters to me as well from a pure GPU comparison standpoint. It's when people try to imply that the higher power consumption results in significantly higher real-world power bills that I call them on that nonsense.
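To put a number on it, a back-of-envelope calculation; the wattage delta, usage pattern and electricity price below are assumptions, so pick your own:

```python
# What a 100 W load delta between two cards actually costs per year.
delta_w     = 100    # assumed extra draw under gaming load
hours_daily = 2      # assumed gaming time per day
usd_per_kwh = 0.12   # assumed electricity price

kwh_per_year = delta_w / 1000 * hours_daily * 365  # 73 kWh
usd_per_year = kwh_per_year * usd_per_kwh          # ~$8.76

print(f"{kwh_per_year:.0f} kWh/yr -> ${usd_per_year:.2f}/yr")
```

A few dollars a year, i.e. noise next to the price difference between the cards themselves.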
 
AMD can market HD5830 as a part with 1600 SPs. It would be fair, wouldn't it? It has 1600 SPs at GPU level. Maybe some of them are switched off at product level, but who cares? ;)
You do realize you're comparing a graphics chip, which is just one part of a graphics card, against a whole product, clocks and all? Well…

Cut them all the slack you want, give them all the excuses you want. Nvidia just couldn't make a viable product with the features they promised. That's why they had to cut Fermi down and volt the living daylights out of what was left, why it needs to be revamped into a new product line straight away, and why it's killing their profits.
I can see it being cut down from what the chip would be able to deliver and I don't deny that. Btw, that's why I've opted against buying one, as I already said.

I cannot see them putting excess voltage on the chip. In fact, I believe most of AMD's high-end GPUs use a higher voltage. But that's not comparable anyway, since different ASICs have different power characteristics wrt voltage.

I can see it killing their profits, as was the case with GT200 also, after AMD forced them into a price war. But that's not my concern from a consumer standpoint - I go to the shop, I select what I want and then I pay the bill.

Conspiracy theory: The GTX580 uses the same chip as the GTX480. No respin.
Perhaps NVidia collected all the best chips from the past 6 months of GF100 production, the ones where all 16 SMs worked without voltage boosts. There were too few of these to release at launch, but now they have a 6-month inventory and can have a non-paper launch under a new model number. Hopefully the 6 months of chip production improved their defect rate enough that they can now keep the 16 SM GTX580 in stock.

Just a wild theory. But it'd explain why it's pin-compatible.
Actually, I've been wondering about that myself. It would make all the sense in the world to have a midlife kicker, as JHH called it, ready. But you'd need to make sure it's excessively priced in order to keep demand low, so you can have at least a few etailers per continent ready to provide links where they are in stock, thus negating any paper launch accusations. :D

I would do it! ;)
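For what it's worth, a toy yield model shows why the binning theory isn't crazy; the defect density, die area and wafer numbers below are made up for illustration, not TSMC 40nm data:

```python
# Toy model of why fully-enabled dies are rare but pile up over time.
# Assumes Poisson-distributed defects with made-up illustrative numbers.
import math

die_area_cm2    = 5.3   # ~530 mm^2, GF100-class die
defects_per_cm2 = 0.4   # assumed average defect density

# Probability a die has zero defects anywhere (all 16 SMs usable):
p_perfect = math.exp(-die_area_cm2 * defects_per_cm2)   # ~12%

wafers_per_month, dies_per_wafer = 1000, 94  # assumed volumes
full_sm_dies_per_month = wafers_per_month * dies_per_wafer * p_perfect
print(f"{p_perfect:.1%} perfect dies -> ~{full_sm_dies_per_month:.0f}/month")
# Six months of stockpiling such bins could support a real launch.
```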
 
Conspiracy theory: The GTX580 uses the same chip as the GTX480. No respin.
Perhaps NVidia collected all the best chips from the past 6 months of GF100 production, the ones where all 16 SMs worked without voltage boosts. There were too few of these to release at launch, but now they have a 6-month inventory and can have a non-paper launch under a new model number. Hopefully the 6 months of chip production improved their defect rate enough that they can now keep the 16 SM GTX580 in stock.

Just a wild theory. But it'd explain why it's pin-compatible.
A more impressive GF110 design would be based on the 48 SP per SM design seen in GF104, which gives better performance for the area. Why not scale that up to, say, 576 SPs in 12 SMs, which would use a comparable die area to the 512 SPs of GF100? (Hard to judge, since die area isn't 100% SPs, but it's something in the GF100 530 mm^2 range.)

I don't think the above theory is true, but it's almost a little plausible. One fact that argues against it: if 512 SP chips were rare to start, why not use those chips for Tesla and make the Tesla cards more powerful than the GTX480? That'd make the compute guys happy, and you'd still skim off a couple of the rare crazy rich gamers buying 4x priced Teslas just to play Battlefield BC2 at one extra FPS.

By God, I think you've got it. It really makes sense and sounds like something Nvidia would do! Honestly, it wouldn't surprise me if there was no respin or anything like that and the GTX580 just ended up being held-back GF100 units that had a full configuration. Nvidia have really gone off the boil for some reason, and I don't see them getting their mojo back until they part ways with Jen-Hsun Huang, similar to how AMD had to part ways with Hector, who seemed to be stuck in the Athlon era, not wanting to move on.
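A quick check of the quoted SP arithmetic, with the area reasoning spelled out; the area budget is a rough assumption:

```python
# Quick check of the quoted SP math. Area figures are rough assumptions.
gf100_sms, gf100_sps_per_sm = 16, 32   # 512 SPs total
gf104_style_sms, gf104_sps  = 12, 48   # 576 SPs total

print(gf100_sms * gf100_sps_per_sm)   # 512
print(gf104_style_sms * gf104_sps)    # 576, ~12.5% more SPs

# If a 48-SP SM is less than 4/3 the size of a 32-SP SM (plausible,
# since it shares scheduler/TMU plumbing across more ALUs), the 12-SM
# layout fits roughly the same ~530 mm^2 budget with more throughput.
```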
 
Conspiracy theory: The GTX580 uses the same chip as the GTX480. No respin.
Perhaps NVidia collected all the best chips from the past 6 months of GF100 production, the ones where all 16 SMs worked without voltage boosts. There were too few of these to release at launch, but now they have a 6-month inventory and can have a non-paper launch under a new model number. Hopefully the 6 months of chip production improved their defect rate enough that they can now keep the 16 SM GTX580 in stock.

Just a wild theory. But it'd explain why it's pin-compatible.
A more impressive GF110 design would be based on the 48 SP per SM design seen in GF104, which gives better performance for the area. Why not scale that up to, say, 576 SPs in 12 SMs, which would use a comparable die area to the 512 SPs of GF100? (Hard to judge, since die area isn't 100% SPs, but it's something in the GF100 530 mm^2 range.)

I don't think the above theory is true, but it's almost a little plausible. One fact that argues against it: if 512 SP chips were rare to start, why not use those chips for Tesla and make the Tesla cards more powerful than the GTX480? That'd make the compute guys happy, and you'd still skim off a couple of the rare crazy rich gamers buying 4x priced Teslas just to play Battlefield BC2 at one extra FPS.

The thought has crossed my mind also, but I really don't think it is possible. The one thing we know is that it is named GF110, not GF100. A respin or even a die shrink would possibly make it a GF100b, but binned devices are still GF100 any way you look at it.

The new chip designation GF110 still indicates either added or subtracted functionality at the chip level - or both. Nvidia has never changed the name of the same chip to my knowledge, only the cards.
 
The thought has crossed my mind also, but I really don't think it is possible. The one thing we know is that it is named GF110, not GF100. A respin or even a die shrink would possibly make it a GF100b, but binned devices are still GF100 any way you look at it.

The new chip designation GF110 still indicates either added or subtracted functionality at the chip level - or both. Nvidia has never changed the name of the same chip to my knowledge, only the cards.

That really doesn't mean anything. And NVIDIA is quite famous for rebranding products, so why not chips?
 
That really doesn't mean anything. And NVIDIA is quite famous for rebranding products, so why not chips?

Think of the possible consequences:

Rebranding cards really only applies to consumers, and may be complained about from a marketing ethics perspective. So what?

To claim that there exists a new chip, with the related development and fabrication costs, when they have only used old ones: that's serious misconduct towards the owners. They can't tell the truth to the owners but keep it from the consumers, therefore they will not do something as stupid as this.

The small possible gain from doing such a thing would only be in recognition of the product.
The potential loss is a real business scandal with legal consequences. You don't make up products for your owners.
 
Think of the possible consequences:

Rebranding cards really only applies to consumers, and may be complained about from a marketing ethics perspective. So what?

To claim that there exists a new chip, with the related development and fabrication costs, when they have only used old ones: that's serious misconduct towards the owners. They can't tell the truth to the owners but keep it from the consumers, therefore they will not do something as stupid as this.

The small possible gain from doing such a thing would only be in recognition of the product.
The potential loss is a real business scandal with legal consequences. You don't make up products for your owners.

I think you're over-thinking this. NVIDIA doesn't need to say "here, that's a new chip, with the related development and fabrication costs". All they need to do is say "here is GF110, it has characteristics X, Y, and Z, and it's awesome".

That would be technically true, except the awesome part, which is subjective anyway.

I'm not saying this is what's going to happen and that GF110 is just GF100-A3. Actually, I don't think it's likely, because according to rumors NVIDIA hasn't made more than one batch of GF100s. But it's not impossible.
 
In HD6800 also? :eek: If yes, any chance we'll be seeing an upgrade for the HD5k series any time soon?
In case it wasn't clear, Catalyst 10.10 already enabled HDMI 1.4a on Evergreen.

Evergreen cannot support overlays in windowed mode while displaying 3D stereoscopic content, so MVC playback needs to be fullscreen (and of course, software decoded), while NI has hardware differences that allow overlays in a windowed mode. However, HDMI 1.4a itself is a small and specific subset of HDMI 1.4 that dictates a few frame-packing modes for 3D stereoscopic content which can be supported by the PHY speeds of most current HDMI receivers and transmitters - that's how and why you see things like the PS3 getting updated with HDMI 1.4a support.
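To make the PHY-speed point concrete, here's the pixel-clock arithmetic for the mandatory frame-packed 1080p24 mode; the timing totals are the standard CEA-861/HDMI figures, though worth double-checking against the spec:

```python
# Why HDMI 1.4a 3D fits existing PHYs: frame-packed 1080p24 needs the
# same pixel clock as plain 1080p60.

def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: int) -> float:
    return h_total * v_total * refresh_hz / 1e6

plain_1080p60 = pixel_clock_mhz(2200, 1125, 60)        # 148.5 MHz
# Frame packing stacks two 1080p images (plus active space), doubling
# the effective vertical total of the 1080p24 timing (2750 x 1125):
packed_1080p24 = pixel_clock_mhz(2750, 1125 * 2, 24)   # 148.5 MHz

print(plain_1080p60, packed_1080p24)  # identical -> no faster PHY needed
```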
 
The small possible gain from doing such a thing would only be in recognition of the product.
The potential loss is a real business scandal with legal consequences. You don't make up products for your owners.

Companies do it all the time. Make a small change, and call it a new product. Marketing-led companies need new products to keep their marketing cycles going, so all they need to do is stick to their own interpretation of what they consider "new".

Pharmaceutical companies do it in particular to keep patents beyond their natural lifespan, i.e. change a small thing, call it a "new product" and hey presto, new patent, new marketing campaign. Car companies, electronics companies, etc. - just about every company does it.

Even if Nvidia just do a rare-binned-GF100s-with-all-512-cores-and-a-different-clock-speed vanity edition, it can be justified by pointing out the differences to show why it gets a new model number. It's even easier if they actually did a respin and a few tweaks to make it work properly.
 
In case it wasn't clear, Catalyst 10.10 already enabled HDMI 1.4a on Evergreen.

Evergreen cannot support overlays in windowed mode while displaying 3D stereoscopic content, so MVC playback needs to be fullscreen (and of course, software decoded), while NI has hardware differences that allow overlays in a windowed mode. However, HDMI 1.4a itself is a small and specific subset of HDMI 1.4 that dictates a few frame-packing modes for 3D stereoscopic content which can be supported by the PHY speeds of most current HDMI receivers and transmitters - that's how and why you see things like the PS3 getting updated with HDMI 1.4a support.
Thanks for pointing that out (my bold), Dave, because I really didn't catch that. :)
 
Companies do it all the time. Make a small change, and call it a new product. Marketing-led companies need new products to keep their marketing cycles going, so all they need to do is stick to their own interpretation of what they consider "new".

Pharmaceutical companies do it in particular to keep patents beyond their natural lifespan, i.e. change a small thing, call it a "new product" and hey presto, new patent, new marketing campaign. Car companies, electronics companies, etc. - just about every company does it.

Even if Nvidia just do a rare-binned-GF100s-with-all-512-cores-and-a-different-clock-speed vanity edition, it can be justified by pointing out the differences to show why it gets a new model number. It's even easier if they actually did a respin and a few tweaks to make it work properly.

Small change - new envelope - new product. I know this is how it works, but the chip designations are not model numbers. The chip names are what is used for communication with the owners and the board.

You don't do this with products in the spotlight.
What I'm saying is that if GF110 is binned GF100 chips, JHH will not go in front of his owners, hold up a card and say: "This is our new flagship chip, GF110. This is the top model in the second generation of our Fermi engine. We improved it 20% in performance and energy consumption. We think this will beat the new offerings from AMD."

Any small change in the architecture is enough to pull it off, but binned chips from old batches that have been sold as GF100? That's just unnecessary.

I'm just saying that it would be stupid and bring limited gain. I understand if some people think that this is not a reason to believe it would not happen.
 