NVIDIA GF100 & Friends speculation

I think most of us suspected this. Extremely limited launch quantities, with real availability a few months down the line.


Looks like ATI may have gotten themselves an 8-10 month lead on DX11 this time around.

Nvidia's Q1 ends on the 30th of April. So even if they say Q1, it may very well be early Q2 for the rest of us.
 
So now it seems Fermi will be available in Q2 and not Q1 as previously stated, according to the xbitlabs news story. So we're likely to be able to get them easily around May-June. At this rate, Northern Ireland from AMD will be out just after it :D

In that story, JHH seemed to suggest that GF100 derivatives will appear within weeks of GF100, so it might not be so bad after all.
 
Nvidia's Q1 ends on the 30th of April. So even if they say Q1, it may very well be early Q2 for the rest of us.

Well, it was originally set for late March, from what I understand. So if we don't see more than extremely limited quantities until their second quarter, which runs May through July, we may not even see mass quantities until June.

The news keeps getting worse imo.


I hope the lower-end parts come out quicker and ramp faster.
 
Are you guys not aware there are features like Genlock (which is a must in real-time broadcasting) that aren't available outside of Quadro cards? I don't see why you guys are debating this at all; it's not about gaming performance, it's about features and support for professional software and needs. That's pretty simple to understand. The market for such cards exists because, when you're working with multi-million-dollar budgets, you might have the same core, but software that is customized and supported for the needs of the business is much more important. The same goes for Matrox: why are they still alive? Because they carved out a niche where they excel; a lot of imaging software and many companies rely on Matrox's stability in imaging.

Doesn't take a page and a half of posts to see that.
 
So now it seems Fermi will be available in Q2 and not Q1 as previously stated, according to the xbitlabs news story. So we're likely to be able to get them easily around May-June. At this rate, Northern Ireland from AMD will be out just after it :D

Ah, no?!

“Q2 [of FY 2011] is going to be the quarter when Fermi is hitting the pull stride. It will not just be one Fermi product, there will be a couple of Fermi products to span many different price ranges, but also the Fermi products will span GeForce Quadro and Tesla. So, we are going to be ramping now on Fermi architecture products through Q2 and we are building a lot of it. I am really excited about the upcoming launch of Fermi and I think it will more than offset the seasonality that we usually see in Q2,” said Jen-Hsun Huang, chief executive officer of Nvidia, during the most recent conference call with financial analysts.
http://www.xbitlabs.com/news/video/...e_Pull_Stride_Next_Quarter_CEO_of_Nvidia.html

He is clearly speaking about the GeForce family and not the GTX470/GTX480.
 
What's the difference between this "ramping now" and the "we're in full production" from CES?
 
aaronspink's current belittling of the Quadro lineup is similar in tone to his previous belittling of the Tesla lineup. What a great way to crap on the efforts of hundreds of engineers by completely ignoring the enormous time and effort it takes NVIDIA to support Quadro and Tesla customers (on both the hardware and software side), simply because the hardware development platform is shared with consumer-based GeForce GPUs. Armchair MBA indeed, and borderline trollish behavior too.

You might want to actually follow the discussion more closely. The whole discussion started because people were insisting that Quadro/Tesla were separate products from a silicon-design standpoint, which isn't true.

And yes, I have little faith that Tesla will be a savior for Nvidia if they are uncompetitive in the gaming market; same for Quadro, as neither of those SKUs can exist without the R&D being paid for by the much larger gaming market.

And if I'm crapping on hundreds of engineers by saying the only reason Quadro outperforms GeForce in professional graphics benchmarks is that Nvidia purposely cripples GeForce performance, then so be it. Sometimes the truth hurts.

As far as Tesla goes, the number of engineers required to support that SKU is minimal: a handful of additional board designs, big deal. All the software is paid for by GeForce marketing efforts.
 
He is clearly speaking about the GeForce family and not the GTX470/GTX480.

GTX470/480 are members of the coming GeForce family. There's a "rest" missing in that sentence ;)

What's the difference between this "ramping now" and the "we're in full production" from CES?

There are a couple of ways to interpret it. Too bad the Oracle of Delphi has been dysfunctional for countless centuries now, otherwise I'd get a clearer indication than that one from it. If he meant:

So, we are going to be ramping now on [other] Fermi architecture products through Q2 and we are building a lot of it.

...then fine.
 
What a great way to crap on the efforts of hundreds of engineers by completely ignoring the enormous time and effort it takes NVIDIA to support Quadro and Tesla customers (on both the hardware and software side), simply because the hardware development platform is shared with consumer-based GeForce GPUs. Armchair MBA indeed, and borderline trollish behavior too.

And you think that the support for pro applications isn't much, much bigger? Anyway, they always code them for OpenGL so they work on multiple platforms. I don't think that making both drivers run equally fast in OpenGL takes more resources than the support for many games on much cheaper gaming cards.
Quite a few years back, overpriced 3Dlabs cards were used in pro workstations, and then GeForce cards began to get drivers that beat those pro cards for much less. Now it's the same again.

They will fall on their nose the same way 3Dlabs did if a third card manufacturer appears and sells cheap cards with real OpenGL support. Intel, for example, could build a good reputation with such a move and increase sales with Larrabee, if it comes out in the future.
 
You might want to actually follow the discussion more closely. The whole discussion started because people were insisting that Quadro/Tesla were separate products from a silicon-design standpoint, which isn't true.

And yes, I have little faith that Tesla will be a savior for Nvidia if they are uncompetitive in the gaming market; same for Quadro, as neither of those SKUs can exist without the R&D being paid for by the much larger gaming market.

And if I'm crapping on hundreds of engineers by saying the only reason Quadro outperforms GeForce in professional graphics benchmarks is that Nvidia purposely cripples GeForce performance, then so be it. Sometimes the truth hurts.

As far as Tesla goes, the number of engineers required to support that SKU is minimal: a handful of additional board designs, big deal. All the software is paid for by GeForce marketing efforts.

Actually, that's a lot of guessing on your part. Software and driver development is a much bigger expense on the Quadro side; the cost of the driver team is actually larger for professional cards, partially because of the licensing aspects of what's needed in the professional market. And it's not crippling of the cards' performance that they're getting at; you tell me how features like Genlock, which isn't even a silicon portion of a Quadro card, amount to crippling a card.

As far as Tesla is concerned, I can guesstimate that the support needed for the software that will utilize Tesla is a hell of a lot more than for Quadro. Sh*t, Tesla is going to be used in pretty data-sensitive areas where errors in software are not tolerated at all; an error could be disastrous.
 
You might want to actually follow the discussion more closely. The whole discussion started because people were insisting that Quadro/Tesla were separate products from a silicon-design standpoint, which isn't true.

Agreed.

And if I'm crapping on hundreds of engineers by saying the only reason Quadro outperforms GeForce in professional graphics benchmarks is that Nvidia purposely cripples GeForce performance, then so be it. Sometimes the truth hurts.

As far as Tesla goes, the number of engineers required to support that SKU is minimal: a handful of additional board designs, big deal. All the software is paid for by GeForce marketing efforts.

AMD has, IMO, the potential to bite off more and more professional market share from NVIDIA over time. Why AMD's market share hasn't increased radically so far is another truth that hurts just as much. I'd rather hear what AMD intends to do in the future than just sit back and point fingers.
 
Actually, that's a lot of guessing on your part. Software and driver development is a much bigger expense on the Quadro side; the cost of the driver team is actually larger for professional cards, partially because of the licensing aspects of what's needed in the professional market. And it's not crippling of the cards' performance that they're getting at; you tell me how features like Genlock, which isn't even a silicon portion of a Quadro card, amount to crippling a card.

It's quite likely that Nvidia has significantly more resources concentrated on driver development for games than for professional applications.

As far as Tesla is concerned, I can guesstimate that the support needed for the software that will utilize Tesla is a hell of a lot more than for Quadro. Sh*t, Tesla is going to be used in pretty data-sensitive areas where errors in software are not tolerated at all; an error could be disastrous.

You mean CUDA and the application development environment for CUDA that was developed FOR GeForce? Or do you mean the applications and APIs that Nvidia supports FOR GeForce? The only thing they are doing specially for Tesla is designing a separate board with additional memory. The software stack is based around GeForce because that's what actually makes it all possible, and that's what it was developed for, because if GeForce fails, Tesla fails and Quadro fails. GeForce is what makes it all possible, and that's where the vast majority of the R&D is directed.
 
Sigh, guys, I am trying to sleep here; can't we agree that there has been enough Tesla/Quadro/GeForce banter already? I'll clean up this thread after I wake up tomorrow.

Willard, I understand where you are coming from, but I think this discussion is actually pretty central to the GF100 speculation. One of the central questions is whether Nvidia took their eye off the prize and designed for the Tesla/Quadro space at the expense of the GeForce space, leading to a lot of their current problems.
 
It's quite likely that Nvidia has significantly more resources concentrated on driver development for games than for professional applications.

That is partially true when you look only at drivers. You can talk about TWIMTBP, but beyond that, Quadro teams, and I mean teams, are actually sent out to studios on big projects, not just one or two engineers. There are a lot of resources spent when you look at overall software development.

You mean CUDA and the application development environment for CUDA that was developed FOR GeForce? Or do you mean the applications and APIs that Nvidia supports FOR GeForce? The only thing they are doing specially for Tesla is designing a separate board with additional memory. The software stack is based around GeForce because that's what actually makes it all possible, and that's what it was developed for, because if GeForce fails, Tesla fails and Quadro fails. GeForce is what makes it all possible, and that's where the vast majority of the R&D is directed.

Not only CUDA; CUDA only started about two years back. R&D is a different category, and there will be overlapping costs in R&D since all three chips are pretty much identical, but the software used in the end is quite different. For Tesla, a lot of development did go into CUDA, and it was targeted at that. There is going to be some crippling on the GeForce end on the performance side, since there is really no way to cut CUDA out without a redesign of the chip. But when you have banks that are going to be using Tesla for their trading computers (and btw, I already know of two banks that are heavily invested in Tesla), you are looking at customized software that uses Fortran and other languages that don't map directly to CUDA.
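To make that gap concrete: the usual bridge is to hide the CUDA launch behind a plain C entry point that legacy Fortran code can call through ISO_C_BINDING. A minimal sketch, purely illustrative (the saxpy_gpu name and the kernel are mine, not anything NVIDIA ships):

// Hypothetical bridge: expose a CUDA kernel behind an unmangled C ABI
// so legacy Fortran code can call it via a bind(c) interface.
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // y = a*x + y, one element per thread
}

// extern "C" gives the symbol an unmangled name that a Fortran
// `bind(c, name="saxpy_gpu")` interface declaration can reference.
extern "C" void saxpy_gpu(int n, float a, const float* x_host, float* y_host) {
    float *x_dev, *y_dev;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&x_dev, bytes);
    cudaMalloc(&y_dev, bytes);
    cudaMemcpy(x_dev, x_host, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(y_dev, y_host, bytes, cudaMemcpyHostToDevice);
    saxpy<<<(n + 255) / 256, 256>>>(n, a, x_dev, y_dev);
    cudaMemcpy(y_host, y_dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(x_dev);
    cudaFree(y_dev);
}

Every one of those wrappers is glue code that has to be written, validated, and supported per customer, which is exactly the kind of per-deployment effort being argued about here.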
 
Willard, I understand where you are coming from, but I think this discussion is actually pretty central to the GF100 speculation. One of the central questions is whether Nvidia took their eye off the prize and designed for the Tesla/Quadro space at the expense of the GeForce space, leading to a lot of their current problems.

Architecturally, no: compute is required by D3D. In terms of execution, clearly NVidia has serious problems.

NVidia's old architecture has a severe setup bottleneck, theoretically, though one that was never seen in games in practice. NVidia seemingly had no choice but to make the kind of significant architectural change we see in order to implement decent tessellation performance. Do we believe the engineers who say that the distributed setup system was a ball breaker?...

There's very little about compute in Fermi that's beyond D3D spec: virtual function support is required, so we're looking at a few percent spent on double-precision and ECC as the CUDA 3.0 tax.
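As a rough illustration of the ECC half of that tax at runtime (rather than in die area): Fermi's ECC is in-band, so enabling it reserves a slice of DRAM for the check bits, reportedly 12.5% on the Fermi Tesla boards. For a 3 GB Tesla C2050 that works out to

\[
(1 - 0.125) \times 3\,\text{GB} = 2.625\,\text{GB usable}.
\]

Take those figures as approximate; they're from NVIDIA's Fermi disclosures as I read them.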

NVidia even attempted to pre-empt 40nm by moving "early", originally planning to release 40nm chips in autumn 2008 before AMD. But NVidia's execution is generally so bad (see the string of GPUs before this) that that came to naught. Charlie's argument that the architecture (G80-derived, essentially) is unmanufacturable appears to hold some water, because G80 is the only large chip that appeared in a timely fashion.

GDDR5 might have been the straw that broke the camel's back: leaving implementation until the 40nm chips seems like a mistake, but the execution quagmire drags that whole thing down anyway.

Jawed
 
I like all the armchair MBAs here who just know (from their guts) how NVIDIA should run its business.

Heh, not me. I'm an officially licensed armchair MBA :p The argument is getting silly. Yes, aaron, we know it's the same hardware, and Nvidia could enable certain features on GeForce if they wanted to. But you keep avoiding the only important question: why should they?
 
Architecturally, no: compute is required by D3D. In terms of execution, clearly NVidia has serious problems.

NVidia's old architecture has a severe setup bottleneck, theoretically, though one that was never seen in games in practice. NVidia seemingly had no choice but to make the kind of significant architectural change we see in order to implement decent tessellation performance. Do we believe the engineers who say that the distributed setup system was a ball breaker?...

If NV could do 1 tri/clk pre-Fermi, then why is it any bigger a bottleneck than for AMD, which also does 1 tri/clk and doesn't claim any significant benefit from improving setup rates?
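Back-of-the-envelope for scale (illustrative clock, not a vendor spec): at 1 tri/clk the peak setup rate is simply the setup clock, so

\[
1\,\tfrac{\text{tri}}{\text{clk}} \times 850\,\text{MHz} = 850\ \text{Mtri/s} \;\Rightarrow\; \frac{850\ \text{Mtri/s}}{60\ \text{fps}} \approx 14\ \text{Mtri per frame}.
\]

No non-tessellated game gets anywhere near that per-frame budget, which is why the bottleneck never showed up in practice; crank a tessellator up, though, and the fixed setup rate becomes the ceiling for either vendor, no matter how much shader throughput sits behind it.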

There's very little about compute in Fermi that's beyond D3D spec: virtual function support is required, so we're looking at a few percent spent on double-precision and ECC as the CUDA 3.0 tax.

I am curious about the kind of virtual function support required for D3D11. I was under the impression that the support needed was a little below the full generality of C++.
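For comparison, the "full generality" end of that spectrum is what Fermi-class CUDA exposes: true runtime dispatch through a base-class pointer in device code. A minimal sketch, assuming nvcc and compute capability 2.0 or later (the class names are mine, purely illustrative):

#include <cstdio>
#include <cuda_runtime.h>

struct Shape {
    __device__ virtual float area() const = 0;  // resolved at runtime via vtable
};

struct Square : Shape {
    float s;
    __device__ Square(float s_) : s(s_) {}
    __device__ float area() const { return s * s; }
};

struct Circle : Shape {
    float r;
    __device__ Circle(float r_) : r(r_) {}
    __device__ float area() const { return 3.14159f * r * r; }
};

__global__ void areas(float* out) {
    // Objects are constructed on the device so their vtable pointers
    // refer to device code.
    Square sq(3.0f);
    Circle ci(1.0f);
    Shape* shapes[2] = { &sq, &ci };
    for (int i = 0; i < 2; ++i)
        out[i] = shapes[i]->area();  // dynamic dispatch on the GPU
}

int main() {
    float h[2];
    float* d;
    cudaMalloc(&d, sizeof(h));
    areas<<<1, 1>>>(d);
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("square: %f, circle: %f\n", h[0], h[1]);
    cudaFree(d);
    return 0;
}

D3D11's dynamic shader linkage, as I understand it, resolves interface instances when the shader is bound rather than per object at runtime, which fits "a little below full C++ generality".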

NVidia even attempted to pre-empt 40nm by moving "early", originally planning to release 40nm chips in autumn 2008 before AMD. But NVidia's execution is generally so bad (see the string of GPUs before this) that that came to naught. Charlie's argument that the architecture (G80-derived, essentially) is unmanufacturable appears to hold some water, because G80 is the only large chip that appeared in a timely fashion.

Could it be that designing reticle-sized chips for a cutting-edge process is what's causing the extra problems, not just normal execution issues? Similar doubts have been expressed about LRB before.

http://forum.beyond3d.com/showpost.php?p=1395490&postcount=5517

GDDR5 might have been the straw that broke the camel's back: leaving implementation until the 40nm chips seems like a mistake, but the execution quagmire drags that whole thing down anyway.

Didn't they have working GDDR5 chips in the form of the GT21x parts on 40 nm? Shouldn't that have relieved the Fermi design effort? Or are you implying that the delay in GT21x rippled over to Fermi?
 