It's called GTX 480 while everyone was expecting GTX 380. So actually, it was rebranded before launch!
So the GTX 280 was a rebranding before launch because it wasn't called the 1x000?
- NVIDIA_DEV.0E23.01 = "NVIDIA GeForce GTS 455 "
- NVIDIA_DEV.0DC4.01 = "NVIDIA GeForce GTS 450 "
- NVIDIA_DEV.0DC5.01 = "NVIDIA GeForce GTS 450 "
- NVIDIA_DEV.0DC0.01 = "NVIDIA GeForce GT 440 "
- NVIDIA_DEV.0DE1.01 = "NVIDIA GeForce GT 430 "
- NVIDIA_DEV.0DE2.01 = "NVIDIA GeForce GT 420 "
- NVIDIA_DEV.0E30.01 = "NVIDIA GeForce GTX 470M "
- NVIDIA_DEV.0DD1.01 = "NVIDIA GeForce GTX 460M "
- NVIDIA_DEV.0DD2.01 = "NVIDIA GeForce GT 445M "
- NVIDIA_DEV.0DD3.01 = "NVIDIA GeForce GT 435M "
- NVIDIA_DEV.0DF2.01 = "NVIDIA GeForce GT 435M "
- NVIDIA_DEV.0DF0.01 = "NVIDIA GeForce GT 425M "
- NVIDIA_DEV.0DF3.01 = "NVIDIA GeForce GT 420M "
- NVIDIA_DEV.0DF1.01 = "NVIDIA GeForce GT 420M "
- NVIDIA_DEV.0DEE.01 = "NVIDIA GeForce GT 415M "
Looks like they're about to launch quite a few SKUs! Hopefully that will be enough to make the mainstream DX11 market more appealing, price-wise.
People still ignore dual-GPU boards. Many reviewers pitted the GTX 480 against the HD 5870, even though the HD 5970 was closer in price and power consumption...
Yup, I can't see NV releasing anything other than a dual-GF104 board to (try to!) counter R9xx. I wonder how high they could clock it while staying below 300W with the full 384 SPs - probably not very high.
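To put a very rough number on that - this is purely a back-of-envelope sketch of my own, assuming power scales more or less linearly with core clock and SP count, and using the public GTX 460 1GB figures (336 SPs, 675 MHz, ~160W TDP); none of this is anything NVIDIA has confirmed:

```python
# Back-of-envelope estimate of how far a dual full-GF104 board might have
# to be downclocked to stay under 300 W. Assumes power scales roughly
# linearly with core clock and with SP count - crude assumptions, so treat
# the result as a ballpark guess, not a prediction.

GTX460_SPS = 336         # shipping GF104 configuration
GTX460_CLOCK_MHZ = 675
GTX460_TDP_W = 160

FULL_GF104_SPS = 384     # full chip, as speculated above
BOARD_LIMIT_W = 300
PER_GPU_BUDGET_W = BOARD_LIMIT_W / 2   # ignores shared board/memory overhead

# Estimated power of a full GF104 at GTX 460 clocks
full_chip_at_675_w = GTX460_TDP_W * FULL_GF104_SPS / GTX460_SPS   # ~183 W

# Clock that would bring it back down to the per-GPU budget
est_clock_mhz = GTX460_CLOCK_MHZ * PER_GPU_BUDGET_W / full_chip_at_675_w
print(f"~{est_clock_mhz:.0f} MHz per GPU to fit two full GF104s in 300 W")
# -> roughly 550 MHz, i.e. well below GTX 460 clocks
```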
Makes me wonder how many websites that were fine benching the GTX 295 against the 5870, but then refused to benchmark the 5970 against the GTX 480, will flip-flop yet again and decide to bench the GTX 495 (or whatever the dual-GPU card ends up being called) against the 6870.
Looking at the GTS lineup, I think it's like this:
GTS 455 = GF104 (GF106 should be 192 CC, exactly half of GF104 - it wouldn't seem logical for them not to cut it exactly in half), so a 240/288 CC part.
GTS 450/440 = GF106 (192 CC, confirmed? and 144 CC)
GT 430/420 = GF108 (96 and 48 shaders?)
There's no NV speculation, because speculating about new stickers ain't fun.
Uhm, what? Could you explain that a bit slower and with littler words please for those of us thinking impaired?
The job is made a bit easier as I'm somewhat convinced that their inventory write-off was based on not being able to sell GTX 480/470 at current prices.
Now that much of the inventory has been written off, they are free to drop prices, hoping to drive sales enough that they end up selling the inventory that was written off prior to the price drops. Normally a price drop would put downward pressure on your product margins in the quarter in which the cut is enacted, but in this case, if they can move enough non-written-off cards that they suddenly have a "need" for more inventory... Well, it'll be downward pressure initially and then a large margin swing upwards if they can pull it off.
In effect, you move the negative margin effects to a quarter other than the one in which you enact the price cut - and that quarter just happens to be the Holiday quarter, when it's absolutely critical to show your stockholders an upward trend. In many ways this mirrors both Nvidia's and AMD's inventory write-offs during the RV770 vs. G200 battle.
Regards,
SB
So you think that GF108 has only 32 shaders per SM, making this more similar to GF100 rather than GF104/GF106? Or how else would you end up with 24 TMUs with 96 shaders?
The rumours for GF106/GF108 are very confusing, but presumably the die sizes of ~240/~130mm² are correct, in which case Charlie's claims of 256 shaders/192-bit and 128 shaders/128-bit probably aren't too far from the truth. I'd bet on 240 shaders/40 TMUs/1 GPC/192-bit GDDR5 and 96 shaders/24 TMUs/1 GPC/128-bit GDDR5 (a direct GT215 replacement).
There's other stuff which is still a bit unclear imho. Not only do you have the 2-pixel-per-SM limit, but with only one rasterizer there'd also be an 8-pixel rasterization limit, which doesn't sound like much for a 240-shader part. GF104 already has a ridiculous amount of ROPs considering what the other parts of the chip can do, and with 3/4 of that (if that's a 192-bit bus with the same ROP arrangement as GF104/GF100) it would only get more ridiculous with GF106...
There's also the annoying "2 pixels per SM" output limitation which they should at least double.
Yup, that's what I think is most likely, although I edited my post several times hesitating between 16 and 24 TMUs, so I wouldn't be surprised either way.
So you think that GF108 has only 32 shaders per SM, making this more similar to GF100 rather than GF104/GF106? Or how else would you end up with 24 TMUs with 96 shaders?
Hmm, yeah. I guess an easy way to 'fix' that part is to have 2 GPCs on GF106 and 16 TMUs on GF108 (i.e. 24 pixels in the ROPs, 16 in the rasteriser, 20 in SM output for GF106, and 8 pixels in the ROPs, 8 in the rasteriser, 8 in SM output for GF108). But this introduces the problem that the 5 SMs on GF106 would have to be unevenly divided 3-2 between the GPCs, ugh. Otherwise they would indeed have to revise the architecture more.
There's other stuff which is still a bit unclear imho. Not only do you have the 2 pixel per SM limit, but with only one rasterizer there'd also be an 8 pixel rasterization limit, which doesn't sound like much for a 240 shader part. GF104 already has a ridiculous amount of ROPs considering what other parts of the chip can do, with 3/4 of that (if that's a 192-bit bus with the same ROP arrangement as GF104/GF100) it would only get more ridiculous with GF106...
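For what it's worth, here's a quick sketch of the per-clock pixel limits being discussed, using the configs guessed at above (the SM/GPC/ROP counts are this thread's speculation, not confirmed specs):

```python
# Per-clock pixel throughput under the limits discussed above: ROP width,
# rasteriser output (8 px/clk per rasteriser, one per GPC) and SM output
# (2 px/clk per SM today, 4 if it were doubled). The config numbers below
# are this thread's speculation, not confirmed specs.

def pixel_limits(rops, gpcs, sms, px_per_sm):
    limits = {
        "ROPs": rops,
        "rasteriser": 8 * gpcs,
        "SM output": px_per_sm * sms,
    }
    # the lowest of the three is the effective per-clock pixel ceiling
    return limits, min(limits, key=limits.get)

# Speculated GF106: 24 ROPs (192-bit), 1 GPC, 5 SMs of 48 CC (240 shaders)
print(pixel_limits(rops=24, gpcs=1, sms=5, px_per_sm=2))
# -> rasteriser (8 px/clk) is the ceiling, SM output only 10 px/clk

# Same chip with 2 GPCs and doubled SM output, as suggested above
print(pixel_limits(rops=24, gpcs=2, sms=5, px_per_sm=4))
# -> 24/16/20 px/clk, so the rasterisers are still the tightest limit
```

Either way the rasteriser ends up as the tightest per-clock limit in these guesses, which is why doubling the SM output alone wouldn't be enough.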
Based on what evidence?
What says the 6870 will be any sort of significant performance bump on top of the 5870? We know ATI are loath to design large chips these days, and barring a process shrink (which they don't have at this time), the only way to add a major amount of performance is to increase die size - which is already on the largish side of things.
Of course we're free to speculate, but seems to me there's no actual fact to support such speculation at this time, making said speculation even more dubious than regular speculation...
Well if he does, I think he's wrong.
So you're saying a new architecture can't be even more efficient per mm^2 than Evergreen?
Silent_Buddha: That might be part of it, but I suspect much of the write-off was for 55nm parts actually.
Uhm, what? Could you explain that a bit slower and with littler words please for those of us thinking impaired? :oops:
Hmmmm, how to condense? :)
OK, start with the fact that most companies will want to show a good Holiday quarter if possible.
Now, we have Nvidia that took a hit to their margins (~16% margins is scandalously low) last quarter by writing off inventory that they "predict" they can't sell at the then-current price of whatever it was they wrote off. NOTE - this isn't to say that the ~16% margins (IIRC) are entirely due to inventory write-offs. They're not, but write-offs are a significant part of it.
From this point it's purely speculation time.
Now we have Nvidia apparently slashing prices on GF100-based consumer products in order to stimulate demand. Normally, this is going to tank margins on a product that is probably not getting good margins in the first place.
But, if we assume that GF100-based consumer (not professional) products were written off, then we have a situation where those cut-rate GF100s will only put downward pressure on margins until they've sold out of non-written-off inventory. Assuming demand is stimulated enough, there's a good possibility they'll suddenly "need" additional inventory. And now they have a handy dandy pile of written-off inventory they can sell for pure margin, minus rather negligible costs like shipping and handling.
In other words, they've shifted the burden of reduced margins from the Holiday quarter to the one prior.
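A toy example with invented numbers (nothing here comes from Nvidia's actual financials) to show the mechanism:

```python
# Toy illustration of the margin-shifting effect described above.
# Every figure here is made up purely to show the mechanism - nothing is
# taken from Nvidia's actual filings.

UNIT_COST = 300          # hypothetical cost to build each card
WRITTEN_OFF_UNITS = 100  # inventory written down to zero last quarter
NORMAL_UNITS = 100       # inventory still carried at full cost
NEW_PRICE = 350          # hypothetical post-cut selling price

# Quarter of the write-off: the cost hit is taken up front
writeoff_quarter_hit = WRITTEN_OFF_UNITS * UNIT_COST      # 30,000 against margins

# Holiday quarter: normal inventory sells at thin margins...
normal_margin = NORMAL_UNITS * (NEW_PRICE - UNIT_COST)    # 5,000
# ...but written-off inventory has no book cost left, so it sells at ~full margin
written_off_margin = WRITTEN_OFF_UNITS * NEW_PRICE        # 35,000

print(f"write-off quarter margin hit: -{writeoff_quarter_hit}")
print(f"Holiday quarter gross margin:  {normal_margin + written_off_margin}")
```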
Note - this doesn't have to be limited to GF100-based products; as Arun noted, it could be other product lines as well.
Now the fly in the ointment is any possible statements made to investors about the inventory writeoff. But they were very careful to use language that wouldn't come back to bite them if they did end up selling significant quantities of written off inventory as they have done in the past.
Both Nvidia and AMD have done this in the past. You'll notice in some quarters they'll have revenue from sales of inventory write-offs. It's not an uncommon practice, and not unexpected for companies to do it. It isn't necessarily "wrong" either as the costs are accounted for either way. It could be argued to be a bit deceptive depending on how it is used, but that's left for the stockholders to judge.
Regards,
SB