How come there is no next generation Nvidia speculation?

So the GTX 280 was a rebranding before launch, because it wasn't called the 1x000?

That's kind of a short-sighted argument now, isn't it? It doesn't add much to the post you're answering.
If you first present it to partners as the GTX 380 and launch it a year later as the GTX 480, then yes, you have renamed it.
 
Looks like the rest of the Fermi lineup is almost here, just in time to run into AMD's refresh.

Code:
 - NVIDIA_DEV.0E23.01 = "NVIDIA GeForce GTS 455 "
 - NVIDIA_DEV.0DC4.01 = "NVIDIA GeForce GTS 450 "
 - NVIDIA_DEV.0DC5.01 = "NVIDIA GeForce GTS 450 "
 - NVIDIA_DEV.0DC0.01 = "NVIDIA GeForce GT 440 "
 - NVIDIA_DEV.0DE1.01 = "NVIDIA GeForce GT 430 "
 - NVIDIA_DEV.0DE2.01 = "NVIDIA GeForce GT 420 "
 - NVIDIA_DEV.0E30.01 = "NVIDIA GeForce GTX 470M "
 - NVIDIA_DEV.0DD1.01 = "NVIDIA GeForce GTX 460M "
 - NVIDIA_DEV.0DD2.01 = "NVIDIA GeForce GT 445M "
 - NVIDIA_DEV.0DD3.01 = "NVIDIA GeForce GT 435M "
 - NVIDIA_DEV.0DF2.01 = "NVIDIA GeForce GT 435M " 
 - NVIDIA_DEV.0DF0.01 = "NVIDIA GeForce GT 425M "
 - NVIDIA_DEV.0DF3.01 = "NVIDIA GeForce GT 420M "
 - NVIDIA_DEV.0DF1.01 = "NVIDIA GeForce GT 420M "
 - NVIDIA_DEV.0DEE.01 = "NVIDIA GeForce GT 415M "
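For what it's worth, device strings like these are easy to pull out of a driver INF programmatically. A minimal sketch; the line format is assumed from the leaked list itself, and `parse_dev_strings` is just an illustrative helper name:

```python
import re

# Match lines like: NVIDIA_DEV.0E23.01 = "NVIDIA GeForce GTS 455 "
LINE_RE = re.compile(r'NVIDIA_DEV\.([0-9A-F]{4})\.\d+\s*=\s*"([^"]+)"')

def parse_dev_strings(text):
    """Return {hex device id: trimmed product name} for each matching line."""
    return {m.group(1): m.group(2).strip() for m in LINE_RE.finditer(text)}

sample = '''NVIDIA_DEV.0E23.01 = "NVIDIA GeForce GTS 455 "
NVIDIA_DEV.0DC4.01 = "NVIDIA GeForce GTS 450 "'''
print(parse_dev_strings(sample))
# {'0E23': 'NVIDIA GeForce GTS 455', '0DC4': 'NVIDIA GeForce GTS 450'}
```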
 
Looks like they're about to launch quite a few SKUs! Hopefully that will be enough to make the mainstream DX11 market more appealing, price-wise.
 
Looks like they're about to launch quite a few SKUs! Hopefully that will be enough to make the mainstream DX11 market more appealing, price-wise.

Looking at the GTS lineup, I think it's like this:
GTS 455 = GF104 (GF106 = 192 CC, exactly half of GF104; it wouldn't seem logical for them not to cut it exactly in half), so a 240/288 CC part.
GTS 450/440 = GF106 (192 CC, confirmed? and 144 CC)
GT 430/420 = GF108 (96 and 48 shaders?)
 
People still ignore dual-GPU boards. Many reviewers marched the GTX 480 against the HD 5870, even though the HD 5970 was closer in price and power consumption...

This, plus the following...

Yup, I can't see NV releasing anything other than a dual-GF104 board to (try to!) counter R9xx. I wonder how high they could clock it while remaining below 300W with the full 384 SPs - probably not very high.

Makes me wonder how many websites that were fine benching the GTX 295 versus the 5870, then refused to benchmark the 5970 versus the GTX 480, will flip-flop yet again and decide to bench the GTX 495 (or whatever the dual-GPU card will be called) versus the 6870.

Regards,
SB
 
Makes me wonder how many websites that were fine benching the GTX 295 versus the 5870, then refused to benchmark the 5970 versus the GTX 480, will flip-flop yet again and decide to bench the GTX 495 (or whatever the dual-GPU card will be called) versus the 6870.

My bets:
Hardwarecanucks
TPU
 
Looking at the GTS lineup, I think it's like this:
GTS 455 = GF104 (GF106 = 192 CC, exactly half of GF104; it wouldn't seem logical for them not to cut it exactly in half), so a 240/288 CC part.
GTS 450/440 = GF106 (192 CC, confirmed? and 144 CC)
GT 430/420 = GF108 (96 and 48 shaders?)

That seems probable, except that the GTS 450 PCB shot shows six memory slots, indicating a 192-bit bus. If the 450 is 128-bit, the GTS 455 could be GF106-based. Charlie's already started beating his chest, though, so this could all be academic on Nvidia's part.
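The bus-width inference here is simple arithmetic: each GDDR5 device exposes a 32-bit interface, so (absent clamshell mode, which doubles chips without widening the bus) the chip count times 32 gives the bus width:

```python
def bus_width_bits(memory_chip_count, bits_per_chip=32):
    # Each GDDR5 device has a 32-bit interface, so without clamshell
    # mode the chip count on the PCB implies the total bus width.
    return memory_chip_count * bits_per_chip

print(bus_width_bits(6))  # 192 -> consistent with a 192-bit bus
print(bus_width_bits(4))  # 128 -> a 128-bit bus
```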
 
Nvidia seems to be trying everything they can to sell 480s. Most 480s at Newegg now have price cuts, rebates, and a bundled copy of Mafia 2 or Metro 2033. Quite a few 470s too. They must be having a hard time of it, probably made worse by the release of the 460.
 
The job is made a bit easier, as I'm somewhat convinced that their inventory write-off was based on not being able to sell the GTX 480/470 at current prices.

Now that much of the inventory has been written off, they are free to drop prices, hoping to drive sales enough that they end up selling the inventory that was written off prior to the price drops. Normally a price drop would put downward pressure on your product margins in the quarter in which the cut is enacted, but in this case, if they can move enough non-written-off cards that they suddenly have a "need" for more inventory... Well, it'll be downward pressure initially and then a large margin swing upwards if they can pull it off.

In effect, they've moved the negative margin effects to a quarter other than the one in which the price cut is enacted - and that just coincidentally happens to be the holiday quarter, when it's absolutely critical to show your stockholders an upward trend. :) In many ways this mirrors both Nvidia's and AMD's inventory write-offs during the RV770 vs. G200 battle.
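The mechanics can be sketched with made-up numbers. Every figure below is invented purely for illustration and reflects nothing about Nvidia's actual books; it just shows how expensing inventory in one quarter and selling it in the next shifts the margin hit:

```python
def gross_margin(revenue, cogs):
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

# Quarter 1 (hypothetical): write off $50M of GTX 480/470 inventory.
# The write-off is expensed immediately, depressing that quarter's margin.
q1_revenue, q1_cogs, writeoff = 800.0, 560.0, 50.0
q1 = gross_margin(q1_revenue, q1_cogs + writeoff)

# Quarter 2 (hypothetical): price cuts stimulate demand and the
# written-off cards sell after all. Their cost was already expensed in
# Q1, so that revenue arrives at nearly 100% margin, lifting the
# holiday quarter.
q2_revenue, q2_cogs = 900.0, 560.0
writeoff_sales = 60.0  # revenue from cards already expensed in Q1
q2 = gross_margin(q2_revenue + writeoff_sales, q2_cogs)

print(f"Q1 margin (write-off quarter): {q1:.1%}")  # 23.8%
print(f"Q2 margin (holiday quarter):   {q2:.1%}")  # 41.7%
```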

Regards,
SB
 
The rumours for GF106/GF108 are very confusing, but presumably the die sizes of ~240/~130mm² are correct, in which case Charlie's claims of 256 shaders/192-bit and 128 shaders/128-bit probably aren't too far from the truth. I'd bet on 240 shaders/40 TMUs/1 GPC/192-bit GDDR5 and 96 shaders/24 TMUs/1 GPC/128-bit GDDR5 (direct GT215 replacement). There's also the annoying "2 pixels per SM" output limitation which they should at least double.

Silent_Buddha: That might be part of it, but I suspect much of the write-off was for 55nm parts actually.
 
There's no NV speculation because speculating about new stickers ain't fun. :)


The job is made a bit easier, as I'm somewhat convinced that their inventory write-off was based on not being able to sell the GTX 480/470 at current prices.

Now that much of the inventory has been written off, they are free to drop prices, hoping to drive sales enough that they end up selling the inventory that was written off prior to the price drops. Normally a price drop would put downward pressure on your product margins in the quarter in which the cut is enacted, but in this case, if they can move enough non-written-off cards that they suddenly have a "need" for more inventory... Well, it'll be downward pressure initially and then a large margin swing upwards if they can pull it off.

In effect, they've moved the negative margin effects to a quarter other than the one in which the price cut is enacted - and that just coincidentally happens to be the holiday quarter, when it's absolutely critical to show your stockholders an upward trend. :) In many ways this mirrors both Nvidia's and AMD's inventory write-offs during the RV770 vs. G200 battle.

Regards,
SB
Uhm, what? Could you explain that a bit slower and with littler words please for those of us thinking impaired? :oops:
 
I think we have a ways to go before we see any radical reworking of the high end from Nvidia. Unfortunately, TSMC's woes in coming up with a 32nm process mean that any interesting die-shrink/respin stuff is now a ways away - definitely not this year. And even then, I wonder if the high-end part will be the first one to see a new version (G92b-style).
 
The rumours for GF106/GF108 are very confusing, but presumably the die sizes of ~240/~130mm² are correct, in which case Charlie's claims of 256 shaders/192-bit and 128 shaders/128-bit probably aren't too far from the truth. I'd bet on 240 shaders/40 TMUs/1 GPC/192-bit GDDR5 and 96 shaders/24 TMUs/1 GPC/128-bit GDDR5 (direct GT215 replacement).
So you think that GF108 has only 32 shaders per SM, making it more similar to GF100 than to GF104/GF106? Or how else would you end up with 24 TMUs with 96 shaders?
There's also the annoying "2 pixels per SM" output limitation which they should at least double.
There's other stuff which is still a bit unclear, imho. Not only do you have the 2-pixel-per-SM limit, but with only one rasterizer there'd also be an 8-pixel rasterization limit, which doesn't sound like much for a 240-shader part. GF104 already has a ridiculous number of ROPs considering what the other parts of the chip can do, and with 3/4 of that (if it's a 192-bit bus with the same ROP arrangement as GF104/GF100) it would only get more ridiculous with GF106...
 
So you think that GF108 has only 32 shaders per SM, making it more similar to GF100 than to GF104/GF106? Or how else would you end up with 24 TMUs with 96 shaders?
Yup, that's what I think is most likely, although I edited my post several times hesitating between 16 and 24 TMUs ;) So I wouldn't be surprised either way.

There's other stuff which is still a bit unclear, imho. Not only do you have the 2-pixel-per-SM limit, but with only one rasterizer there'd also be an 8-pixel rasterization limit, which doesn't sound like much for a 240-shader part. GF104 already has a ridiculous number of ROPs considering what the other parts of the chip can do, and with 3/4 of that (if it's a 192-bit bus with the same ROP arrangement as GF104/GF100) it would only get more ridiculous with GF106...
Hmm, yeah. I guess an easy way to 'fix' that part is to have 2 GPCs on GF106 and 16 TMUs on GF108 (i.e. 24 pixels in the ROPs, 16 in the rasteriser, 20 in SM output for GF106, and 8 pixels in the ROPs, 8 in the rasteriser, 8 in SM output for GF108). But this introduces the problem that the 5 SMs on GF106 would have to be unevenly divided 3-2 between the GPCs, ugh. And otherwise they would indeed have to revise the architecture more.

Some of these limitations are genuinely bizarre; I find it insane that NVIDIA implemented full-speed FP16 on GF104 but kept those minuscule buses between units. Then again given the (slightly depressing) conversation I had with John Nickolls about floating point texturing in 2008 (he seemed to massively overestimate its importance) I probably shouldn't act so surprised.
 
Based on what evidence?

What says the 6870 will be any sort of significant performance bump over the 5870? We know ATI are loath to design large chips these days, and the only way to add a major amount of performance, barring a process shrink (which they don't have at this time), is to increase die size - which is already on the largish side of things.

Of course we're free to speculate, but it seems to me there's no actual fact to support such speculation at this time, making said speculation even more dubious than regular speculation... ;)

So you're saying a new architecture can't be even more efficient per mm² than Evergreen?
 
So you're saying a new architecture can't be even more efficient per mm² than Evergreen?
Well if he does, I think he's wrong.

Juniper has exactly half of the performance-relevant parts of Cypress (except for the tessellator, which remained the same, of course), and stuff like display controllers obviously wasn't cut in half, and yet Juniper has less than half the transistor count of Cypress and exactly half the die size.

I believe some people on this forum calculated that ~200-300 million transistors in Cypress are DP/GPGPU-specific and were removed for Juniper. Also, reviews pitting 5850 CF against the 5970, as well as overclocking the 5850 to 5870 clocks, indicate that the 10th SIMD in each of the two SIMD arrays doesn't do much for performance, and the same might hold true for the 9th as well.

So if you took Cypress, removed all the DP/GPGPU-specific stuff just like they did for Juniper, and removed 2 SIMDs per block, you'd most likely end up with a chip that has an RV770-like die size but, clocked at 5870 speeds, outperforms a 5850 by 5-10% (maybe even more).

Then we'd be talking about a chip that outperforms the GTX 460 1GB by 15-20% with a die area closer to GF106's.
And that doesn't even take architectural efficiency improvements into account.

That's why I fully expect the Barts/6700 cards to make GF106 and even GF104 look silly in terms of perf./mm².
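A back-of-envelope check of that claim. The die sizes below are commonly cited figures and the performance numbers are just the thread's guesses (GTX 460 1GB normalized to 1.0, the hypothetical Barts at the midpoint of the speculated 15-20% lead), not benchmarks:

```python
gf104_area = 367.0   # GF104 (GTX 460), commonly cited ~367 mm^2
barts_area = 256.0   # hypothetical RV770-like die, ~256 mm^2
gf104_perf = 1.0     # normalize GTX 460 1GB performance to 1.0
barts_perf = 1.175   # midpoint of the speculated 15-20% lead

# Ratio of perf-per-area figures: how much more performance per mm^2
# the hypothetical chip would deliver under these assumptions.
advantage = (barts_perf / barts_area) / (gf104_perf / gf104_area)
print(f"perf/mm^2 advantage: {advantage:.2f}x")  # ~1.68x
```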
 
Silent_Buddha: That might be part of it, but I suspect much of the write-off was for 55nm parts actually.

That's a good point, but it wouldn't line up quite as well with them projecting a rather massive quarter-to-quarter margin increase. And yes, I realize that their margin projections are not supposed to take into account sales of written-off inventory (I think). But then again, it's just as likely their margin predictions weren't taking into account possibly going into a price war with AMD, which leads to...

If they plan on discounting GF100-based cards heavily in order to stimulate demand, it'll help with revenue but drive down margins. And I don't see GF104 or any of the upcoming Fermi derivatives not only making up that downward pressure but actually driving margins up into the 45%+ range from last quarter's ~16% (IIRC).

There's always the prospect that GF100 has already been EOL'd and there's a super-secret replacement soon to be unveiled with potential for larger margins. If that's the case, then a fire sale on GF100 products might not hurt them too much.

I still think it's probably a majority of GF100 inventory being written off, however. But we'll see how this plays out over the next quarter. :)

Uhm, what? Could you explain that a bit slower and with littler words please for those of us thinking impaired? :oops:

Hmmmm, how to condense? :)

OK, start with the fact that most companies will want to show a good Holiday quarter if possible.

Now, we have Nvidia, which took a hit to its margins (~16% is scandalously low) last quarter by writing off inventory that they "predict" they can't sell at the then-current price of whatever it was they wrote off. NOTE - this isn't to say that the ~16% margin (IIRC) is entirely due to inventory write-offs. It's not, but it is a significant part of it.

From this point it's purely speculation time.

Now we have Nvidia apparently slashing prices on GF100-based consumer products in order to stimulate demand. Normally, this is going to tank margins on a product that probably wasn't getting good margins in the first place.

But if we assume that GF100-based consumer (not professional) products were written off, then we have a situation where those cut-rate GF100s will only put downward pressure on margins until the non-written-off inventory has sold out. Assuming demand is stimulated enough, there's a good possibility they'll suddenly "need" additional inventory. And now they have a handy-dandy pile of written-off inventory they can sell for pure margin, minus rather negligible costs like shipping and handling.

In other words, they've shifted the burden of reduced margins from the Holiday quarter to the one prior.

Note - this doesn't have to be limited to GF100-based products; it could, as Arun noted, be for other lines of products.

Now the fly in the ointment is any possible statements made to investors about the inventory write-off. But they were very careful to use language that wouldn't come back to bite them if they did end up selling significant quantities of written-off inventory, as they have done in the past.

Both Nvidia and AMD have done this in the past. You'll notice that in some quarters they'll have revenue from sales of inventory write-offs. It's not an uncommon practice, and it's not unexpected for companies to do it. It isn't necessarily "wrong" either, as the costs are accounted for either way. It could be argued to be a bit deceptive depending on how it is used, but that's left for the stockholders to judge.

Regards,
SB
 