NVIDIA GF100 & Friends speculation

I'm not saying that it isn't possible, but you really can't compare it to the FX situation. Back then the GPU die size/complexity was much lower and design cycles of 6 months weren't unheard of.

If you take the development time of GF100 and the troubles it had, releasing a refresh part (GF100B/GF110) that works 100% is actually an achievement.
 
If you take the development time of GF100 and the troubles it had, releasing a refresh part (GF100B/GF110) that works 100% is actually an achievement.

But a B-spec chip is still considerably less complex than an entirely new chip, or even a refresh/redesign. Surely GF100's refresh isn't an FX 5800 -> FX 5900 type redesign?
 
Agreed, but it does seem to indicate that if nVidia follows past behavior, GF100's sub-par performance will make them more focused on the refresh part than they would otherwise have been, not less.

In order to do that, Nvidia have to effectively cut loose the current Fermi architecture and focus their resources on the next generation. They would have to take resources away from trying to fix the current Fermi with sticking plasters, and concentrate on the next architecture and process.
 
In order to do that, Nvidia have to effectively cut loose the current Fermi architecture and focus their resources on the next generation. They would have to take resources away from trying to fix the current Fermi with sticking plasters, and concentrate on the next architecture and process.
Why?

They could alternatively just borrow some money and hire a few more workers. Or just use it as an incentive to get their current workers to work that much harder.

Productivity isn't a constant, after all. nVidia doesn't have a finite amount of resources that they have to judiciously distribute. They do have concurrent production teams working on different architectures at the same time, and there is not necessarily any need to sacrifice on either of them.

That said, the economic downturn may cause nVidia to be less aggressive than they might otherwise be, but that's a somewhat separate issue.
 
Why?

They could alternatively just borrow some money and hire a few more workers. Or just use it as an incentive to get their current workers to work that much harder.

Nine women cannot make a baby in one month, etc. You can't just bring a couple of hundred people into the team and make everything all right. I bet the people at Nvidia are already working totally flat out. You can't just wave more money at them and expect them to suddenly work without sleep. Do you think the problems with Fermi are just because the design team wasn't really trying for the last couple of years, or that they weren't being paid enough bonuses?

Productivity isn't a constant, after all. nVidia doesn't have a finite amount of resources that they have to judiciously distribute. They do have concurrent production teams working on different architectures at the same time, and there is not necessarily any need to sacrifice on either of them.

Are you saying that Nvidia has and is willing to spend infinite resources? They must be the first company in the history of the world that has infinite resources.

How are they going to get people in the door to fix Fermi when they already have a team most of the way through a 28nm design for this time next year?

That said, the economic downturn may cause nVidia to be less aggressive than they might otherwise be, but that's a somewhat separate issue.

Why would they even keep pouring time and money into a chip that should be done and dusted, is not really fit for purpose (i.e. is costing them money), and needs to be replaced ASAP with a fixed version on a new process in 12 months?

I don't believe that Fermi is fixable as a viable product for Nvidia. It does not fit in the current market, with AMD undercutting and outperforming it. It's not the right product for today's market, and there's nothing that can be done to change that. Nvidia have to start with a clean sheet, move away from the monolithic designs they built in the hope of creating an HPC market, and design something different for the home/gaming market.
 
No real info / all speculation / no idea how NVDA runs their divisions.
That's just it: you haven't a clue. But if you use what is available (history, the best software guys in the business on the payroll, the slimmest PR team), plus lots of cash still and still the larger market share, that will keep them cranking it out until it evens out.

But I don't have a clue either.
 
Nine women cannot make a baby in one month, etc. You can't just bring a couple of hundred people into the team and make everything all right. I bet the people at Nvidia are already working totally flat out. You can't just wave more money at them and expect them to suddenly work without sleep. Do you think the problems with Fermi are just because the design team wasn't really trying for the last couple of years, or that they weren't being paid enough bonuses?
Part of the problem was just that ATI did it better, not necessarily that nVidia did anything wrong. The competition from ATI gives nVidia a stronger focus point against which to measure their upcoming parts, and that increased focus can improve efficiency. Furthermore, as you mention, designing a modern GPU is just a lot of work, and hiring more workers, provided they are qualified and well-managed, is always going to be a winning strategy for improving the result for a sufficiently complex and difficult engineering task.

Are you saying that Nvidia has and is willing to spend infinite resources? They must be the first company in the history of the world that has infinite resources.
Sorry, I meant they don't have a fixed amount of resources. They can increase manpower by hiring more workers. They can borrow money to fund the expansion. There is no fundamental need to sacrifice members of one development team to strengthen another.

How are they going to get people in the door to fix Fermi when they already have a team most of the way through a 28nm design for this time next year?
If any changes were to be made, they would have been made around a year ago, when it would have become apparent within nVidia how GF100 would stack up against ATI's parts. This isn't so terribly late that they couldn't have made some changes to the team working on the 28nm parts, or even the refresh parts that may be coming early next year.

Why would they even keep pouring time and money into a chip that should be done and dusted, is not really fit for purpose (i.e. is costing them money), and needs to be replaced ASAP with a fixed version on a new process in 12 months?
I think you're making some unwarranted assumptions there about nVidia's ability to improve Fermi's performance/watt.
 
That's just it: you haven't a clue. But if you use what is available (history, the best software guys in the business on the payroll, the slimmest PR team), plus lots of cash still and still the larger market share, that will keep them cranking it out until it evens out.

But I don't have a clue either.

I've worked in big tech companies that do software and hardware. I know that no company has infinite resources. I know that throwing more people (if you can even get enough people who are actually worth their money) at a problematic project doesn't work if the project has fundamental issues, least of all something that's going to be replaced in 12 months. I know that people at companies like Nvidia don't deliberately make poor products, so you can't just wave money at them and fix things. I also know that big companies know when to cut loose a poor project instead of continually pouring money into something that can never be fixed.

Given that Nvidia is right now working on next gen 28nm parts for this time next year, they won't be foolish enough to do anything more than minor tweaks to Fermi. Sure, fix the yields, get all 512 stream processors working, improve power usage, but a big redesign? No chance until 28nm.
 
Why?

They could alternatively just borrow some money and hire a few more workers. Or just use it as an incentive to get their current workers to work that much harder.

Productivity isn't a constant, after all. nVidia doesn't have a finite amount of resources that they have to judiciously distribute. They do have concurrent production teams working on different architectures at the same time, and there is not necessarily any need to sacrifice on either of them.

.
Exactly, Nvidia has the Kepler architecture (2011) and the Maxwell architecture (2013) concurrently in development right now, never mind refresh/mid-life kickers and tweaks.

Nvidia badly needs to move away from massive monolithic GPUs and go to ultra-efficient upper-midrange GPUs, where two GPUs can reside on a single card to make the high-end product, like AMD/ATI has done since the RV770 / Radeon HD 4800 back in 2008. Hopefully we shall see just that in Kepler next year.
 
Nvidia badly needs to move away from massive monolithic GPUs and go to ultra-efficient upper-midrange GPUs, where two GPUs can reside on a single card to make the high-end product, like AMD/ATI has done since the RV770 / Radeon HD 4800 back in 2008. Hopefully we shall see just that in Kepler next year.

I don't think they are in that position, actually. They have to keep pushing further and further with their compute capabilities, and this will inevitably lead to large GPUs, at least for the performance market. So it's rather a war on two fronts simultaneously.

Of course, they do have more to gain from this strategy in my opinion, but the price to pay is a loss in the perf/watt and perf/area metrics. For gaming workloads, that is. That price probably isn't all that significant either.
 
I don't think they are in that position, actually. They have to keep pushing further and further with their compute capabilities, and this will inevitably lead to large GPUs, at least for the performance market. So it's rather a war on two fronts simultaneously.

....
Then you would give up the OEM market, which would then be taken by Fusion/Larrabee-esque parts. They will make smaller chips. They HAVE to.
 
Exactly, Nvidia has the Kepler architecture (2011) and the Maxwell architecture (2013) concurrently in development right now, never mind refresh/mid-life kickers and tweaks.

Nvidia badly needs to move away from massive monolithic GPUs and go to ultra-efficient upper-midrange GPUs, where two GPUs can reside on a single card to make the high-end product, like AMD/ATI has done since the RV770 / Radeon HD 4800 back in 2008. Hopefully we shall see just that in Kepler next year.

Actually, since the RV670 / HD 3800 (HD 3870 X2).
 
http://article.pchome.net/content-1218188.html


 
Fudzilla says the price cut for the GTX 460 1GB and GTX 470 is not permanent and will only last until November 14th.

Either their 5x0 successors are closer than we thought, or the cards are too expensive to make to maintain these prices.
 
Hardware Canucks contacted Nvidia, and they responded saying that the price cuts are permanent:

This news seems to have ruffled some feathers over at AMD. In a carefully worded email to editors, they have spelled out their belief that any price drops on NVIDIA’s part are temporary. There have indeed been more than a handful of times where “new” prices have been announced in this industry only to rebound a month or two afterwards. So we were inclined to believe AMD, especially considering they included an internal memo (in French) supposedly sent from NVIDIA’s sales team to a retailer in the EU. It states that any shipments currently in transit or shipped between October 21st and November 14th would be subject to reduced pricing but it doesn’t mention that the reduction would not be carried on after those dates. Since we cannot confirm the validity of this memo, we won’t post it for the time being.
With questions aplenty we went to the source: major retailers and distributors in Canada and the US. Our contacts’ responses were unanimous: NVIDIA’s new prices will take effect within the next 12-24 hours and have no cut-off date. NVIDIA’s own response was clear enough as well:


“These new prices are permanent” – Brandon Bell, NVIDIA


We have largely avoided posting about this “he said, she said” game being played these days, but in this case we have finally seen that it has led to lower prices for consumers.

http://www.hardwarecanucks.com/news/video/nvidia-drops-prices-amd-responds/
 