NVIDIA GF100 & Friends speculation

That's awfully conspiracy theorist of you.

And for that matter, if they did leak it, why would it get taken down so soon? Wouldn't leaving it up during the launch day of the NI line have much more of an effect?

I see articles around the web; it got the result they wanted in terms of exposure, without NV having to "officially" talk about it. It's marketing, not a conspiracy.

Removal of the items only leads to "exclusivity" and "ooh, we shouldn't have seen that" feelings, i.e. more buzz.
 
Yeah, I don't buy the "accidental leak" theory either. Not when their major competitor is just about to launch a new product. Sounds like part of an overall marketing push, along with the price cuts to get some focus back on Nv. Use the price cuts to fight Barts, and the shadow of the 580 to give some uncertainty to Cayman buyers. Pretty basic marketing stuff.
 
Sorry, but it's not 512-bit/2GB

[Attached image: gtx500.jpg]
 
This 6-month-old April Fools' Day announcement of the GTX 580 almost sounds plausible now.

NVIDIA has announced that the next series of graphics cards based on the Fermi architecture is already in the works - the 500 Series. The NVIDIA GeForce GTX 580 will be the first to launch and will feature 580 CUDA cores to go along with its "580" moniker and have a whopping 2560MB of 384-bit GDDR5 memory. The rest of the specifications are unknown at this time, but NVIDIA is promising the fastest single-GPU card on the market.

"Although we're very proud of the GTX 480," NVIDIA President and CEO Jen-Hsun Huang said, "the 400 series is merely a tease for what Fermi can really accomplish. When we release the 500 series later this year, I think everyone will be pleasantly surprised."

After the rather lukewarm reception of the GTX 480, this is certainly welcome news to fans of NVIDIA. Even more welcoming is that the GTX 580 is expected to launch before the end of the year. ... A separate NVIDIA source who will remain anonymous for obvious reasons let us in on a little secret - the GTX 580 will launch with the GTX 480's current price. The best part of that tidbit may be the obvious implication - a price drop on the 400 series!

About the only obvious failure is that 580 cores are impossible (not a multiple of 48, 32, 16 or even 8). 576 cores in 12 SMs... now that's reasonable and would be manufacturable.
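
A quick back-of-the-envelope check bears that out (a throwaway Python sketch; the candidate SM widths are just the divisors named above, and for reference Fermi shipped 32 cores/SM on GF100 and 48 on GF104):

```python
# Sanity check: which rumored core counts could actually be built
# out of whole SMs?
for cores in (580, 576, 512, 480):
    fits = [(per_sm, cores // per_sm)
            for per_sm in (48, 32, 16, 8) if cores % per_sm == 0]
    if fits:
        configs = ", ".join(f"{n} SMs x {w}" for w, n in fits)
        print(f"{cores} cores: buildable, e.g. {configs}")
    else:
        print(f"{cores} cores: not a multiple of 48, 32, 16 or 8")
```

580 fails every divisor (remainder 4 each time), while 576 factors cleanly as 12 SMs of 48 or 18 SMs of 32.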
 
Along with selectively reducing functional blocks from Cypress and removing FP64 support, AMD made one other major change to improve efficiency for Barts: they’re using Redwood’s memory controller. In the past we’ve talked about the inherent complexities of driving GDDR5 at high speeds, but until now we’ve never known just how complex it is. It turns out that Cypress’s memory controller is nearly twice as big as Redwood’s! By reducing their desired memory speeds from 4.8GHz to 4.2GHz, AMD was able to reduce the size of their memory controller by nearly 50%. Admittedly we don’t know just how much space this design choice saved AMD, but from our discussions with them it’s clearly significant. And it also perfectly highlights just how hard it is to drive GDDR5 at 5GHz and beyond, and why both AMD and NVIDIA cited their memory controllers as some of their biggest issues when bringing up Cypress and GF100 respectively.
http://www.anandtech.com/show/3987/...renewing-competition-in-the-midrange-market/2

So 512-bit @ ~4Gbps might be cheaper than 384-bit @ ~4.8Gbps on the die side.
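
For what it's worth, the raw bandwidth math on those two configs (a minimal Python sketch using the pin rates quoted above; the function name is just for illustration):

```python
# Raw bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def bandwidth_gb_s(bus_bits, pin_rate_gbps):
    return bus_bits / 8 * pin_rate_gbps

print(bandwidth_gb_s(512, 4.0))  # 256.0 GB/s: wide bus, relaxed GDDR5
print(bandwidth_gb_s(384, 4.8))  # 230.4 GB/s: narrow bus, fast GDDR5
```

So the wide-and-slow option even ends up with a bit more raw bandwidth; the question is purely whether the extra pads and PHY lanes cost less die area than the high-speed controller logic does.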
 
http://www.anandtech.com/show/3987/...renewing-competition-in-the-midrange-market/2

So 512-bit @ ~4Gbps might be cheaper than 384-bit @ ~4.8Gbps on the die side.

Doubtful. They are just talking about changes to the PHY and some of the related areas (training, etc.). I'm not entirely sure they could really save 50% of the memory controller's area just by lowering the target frequency. Maybe they could cut the PHY in half...

Also, remember that if you double your memory controller width, you now need to double all the internal plumbing leading to it.


DK
 
Well, next spring would be about the right time for a refresh part from nVidia. We shouldn't expect earth-shattering changes from the current GF1xx, but it wouldn't be too unreasonable to expect some significant changes.
 
Well, next spring would be about the right time for a refresh part from nVidia. We shouldn't expect earth-shattering changes from the current GF1xx, but it wouldn't be too unreasonable to expect some significant changes.
I would imagine they will pull out all the stops and have some cool features. This will be a good time for their S/W guys to fix the performance/price gap perception, though the PR guys will f-it-up for sure. Lack of OEM wins will be hard to overcome.
 
I think Nvidia will stick with what they've got, with some fixes, but mostly marketing spin and renaming. Generally, they will be treading water until 28nm because they've spent so much time and money on Fermi and not got much back from it. Pouring more money into the black hole seems pointless when they can wind up for 28nm in twelve months' time.

If your competitors keep kicking you in the nuts, you should stop hanging them out there and do something different. Unless Nvidia magic up the fabled HPC market they are trying to lifeboat the company with, Fermi isn't cutting it, and will be in even worse shape when Fusion takes the low end.
 
I think Nvidia will stick with what they've got, with some fixes, but mostly marketing spin and renaming. Generally, they will be treading water until 28nm because they've spent so much time and money on Fermi and not got much back from it. Pouring more money into the black hole seems pointless when they can wind up for 28nm in twelve months' time.
The last time they put out a sub-par architecture they did much worse (the GF FX), and their refresh part was also probably one of the most significant refresh changes they've put out. So I wouldn't be surprised at all if they really went all-out to improve things for their refresh. We'll see.
 
I wouldn't call GF100 anywhere near as bad as the FX, though, so I doubt we'll see such a radical refresh.
Agreed, but it does seem to indicate that if nVidia follows past behavior, they will be more focused on the refresh part due to GF100's sub-par performance than they would otherwise have been, not less.

Edit: And I'd also like to add that in the FX era, not only did they have a significant refresh, but they also accelerated the schedule for their next-gen architecture. So it wouldn't be terribly surprising if we see nVidia really put the rubber to the road and push for both an improved refresh and a better next-gen.
 
The last time they put out a sub-par architecture they did much worse (the GF FX), and their refresh part was also probably one of the most significant refresh changes they've put out. So I wouldn't be surprised at all if they really went all-out to improve things for their refresh. We'll see.

I'm not saying it isn't possible, but you really can't compare it to the FX situation. Back then, GPU die size/complexity was much lower and design cycles of 6 months weren't unheard of.

With the current trend of huge die sizes and >12-month design cycles (e.g. G80 -> GT200, or GT200 -> GF100), I really can't see them making significant changes. As Ujesh Desai very succinctly put it, "Designing big GPUs is f***ing hard!" Also, don't forget that 28nm GPUs are due about a year from now, so if they expend a lot of resources on current 40nm GPUs, it might very well delay the transition to 28nm.
 