NVIDIA GF100 & Friends speculation

That's what I am concerned about as a consumer: NV's HPC focus.

And why should you be worried? If you invest in NVIDIA, you should actually be glad they did.

rpg.314 said:
Longer term, it is actually nv that has nothing in HPC market.

I'll be waiting to see that one :)

rpg.314 said:
In my case, I'd say the HPC market NV is so hot on is a dead duck in the water for them if they can't arrange for permission from the powers that be to touch the x86 socket.

I'll agree with that. The lack of an x86 license is certainly not good for them.

rpg.314 said:
The window of opportunity for NV in the HPC market is limited to the time before multi-socket fusion chips arrive. My guess: 2012 is when they arrive, and 2013 is when Tesla's growth begins to sublimate away. NV knows this window is *very* short and has hence shot for almost all the features the HPC crowd asks for.

LOL, a fusion chip as powerful as Fermi? That will be a long walk for both Intel and AMD...long enough for NVIDIA to reap the benefits from that market and come along with something else eventually.
 
Why should nVidia have any interest in that? Their parts are basically billed as co-processors, and ones that can easily be upgraded to boot. There's been quite a lot of interest in building clusters with nVidia hardware among people I work with (this is for the Planck satellite, and we have a number of supercomputers dotted around Europe and the US for data crunching).
For now, yes. Fermi is a *very* good advance on the state of GPU hardware for HPC. Longer term, the issues Patterson cited with Fermi immediately come to mind. Simply put, NV is stuck on the wrong side of the PCIe bus. See this and the link therein.

http://forum.beyond3d.com/showpost.php?p=1367032&postcount=1

GPU+ARM cores, flash, 10GbE (IMO, not mentioned but quite likely by then). They are thinking of literally growing a separate "compute system" on the other side of the bus. Why?

Those aren't going to be anywhere near the performance of discrete graphics for a long time to come.

I am very sure there will be parts with much larger die area devoted to GPU cores in that time frame. Bulldozer was supposedly designed to allow quite some flexibility in that regard. Even if you say "AMD can't execute on its CPUs to save its life", there is still the question of Intel. Good luck betting against their resources/talent/execution. (No, Larrabee is not a valid counterexample here.) Larrabee sitting on an x86 socket (in whatever state it was) would have nuked Tesla out of orbit.
 
Let's get cryptic and stuff

The closer it gets, the warmer the air.

It gets there fast, for what it's worth.
 
Well..., you should, if only because it is in YOUR best interest as a consumer to have parts competitive in perf/$ from both sides.

That argument only holds water in the low end and mainstream...for the high end it's irrelevant; there, performance is the "benchmark" :p
 
I would have thought that those people would have already seen how this thing performs.

Hmmm, why would they? Not all the revellers have the inside scoop. The party is getting started based on the early leaks / rumours, but if those are confirmed, things will really get going. :)

The closer it gets, the warmer the air.

It gets there fast, for what it's worth.

But...but...can you HEAR it coming!?
 
I'll be waiting to see that one :)
You don't need to wait. Just try to port a shared-memory parallel code over to a distributed-memory one. ;)

If you manage to climb that hill, try pushing stuff over PCIe to GPUs from your 10-year-old code, especially the parts that need I/O over the network to feed the GPU.

Or you could just add -mavx to your Makefile (talking about 2011-2012 of course). :p
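To make the contrast concrete, here's a minimal toy sketch (my own hypothetical code and names, nothing from the thread): the legacy loop just gets recompiled with -mavx, while a GPU port of the very same loop has to stage its working set across PCIe on every call.

// Toy sketch (hypothetical names) of the two upgrade paths:
// recompile the old loop for AVX vs. port it to the GPU over PCIe.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Path 1: the 10-year-old loop stays as-is; adding -mavx to the host compiler
// flags (e.g. in the Makefile) lets it auto-vectorize without code changes.
void scale_cpu(float* y, const float* x, float a, int n) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i];
}

// Path 2: the GPU version. The kernel itself is trivial...
__global__ void scale_gpu(float* y, const float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float* x = (float*)malloc(bytes);
    float* y = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    scale_cpu(y, x, 2.0f, n);  // legacy path: no data movement at all

    // ...but the GPU path has to drag its data across PCIe both ways,
    // which is exactly the restructuring/I/O cost being argued about above.
    float *dx = NULL, *dy = NULL;
    cudaMalloc((void**)&dx, bytes);
    cudaMalloc((void**)&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);      // host -> device
    scale_gpu<<<(n + 255) / 256, 256>>>(dy, dx, 2.0f, n);
    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);      // device -> host
    cudaFree(dx);
    cudaFree(dy);
    free(x);
    free(y);
    printf("done\n");
    return 0;
}

The exact API calls aside, the point is the extra cudaMemcpy traffic and the restructuring around it: the old code never had either.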

For a CPU that gives even half of its die to the GPU bit (and has commensurate bandwidth), I'll buy it even if it costs 3x the price of a co-processor that has 2x the perf.

And in case you haven't noticed, >50% of the discrete GPU market is going to vaporize once GPUs are on the socket across the board.
 
That argument only holds water in the low end and mainstream...for the high end it's irrelevant; there, performance is the "benchmark" :p
Wall Street suits won't be too happy if you win benches by 10% but cost, say, 50% more to produce.

And yes, they do matter, no matter how you feel about them.
 
Not really. GF100 vs Cypress is a better situation for NV than GT200 vs RV770 was. They now have a lead in features, they have a clearly more future-proof product, and they are still basically the only company with GPU-based products for HPC markets. And although it hasn't really been revealed yet, I'd guess that AMD's transistor density advantage is lost on 40G.
The only advantage the 5800 has here, in my opinion, is noticeably lower power consumption. But that's just not something that matters to me in any way.

How exactly can you prove, even ahead of release, that GF100 will be more future-proof in any meaningful way relative to the price/performance of the two chips? We won't find out for at least another 6 months how these chips scale with time, and as always it's a circular argument, with new products continually coming in from both companies. You could just as easily say that one 5850 (which held the significant feature advantage of being one of the first DX11 chips) plus one 6850 at $250 x 2 over two years is a more future-proof purchasing strategy than, say, one $500 chip in between.

I hope it isn't lost on you that the HD5xxx series actually has two meaningful advantages aside from power: it can natively support 3 monitors, and it's cheaper, probably with a better price/performance ratio. Multi-GPU nVidia setups also aren't as viable anymore now that Intel's 1156-pin platform doesn't support dual GPUs particularly well, especially with USB3 support in the mix.
 
So much HPC hype from nVidia, and yet I see AMD in the #5 machine on the Top500, from China.

Then again, let's see if they manage the Oak Ridge deal. Although I doubt it now, since Fermi is a power hog.
 
For now, yes. Fermi is a *very* good advance on the state of GPU hardware for HPC. Longer term, the issues Patterson cited with Fermi immediately come to mind. Simply put, NV is stuck on the wrong side of the PCIe bus. See this and the link therein.

http://forum.beyond3d.com/showpost.php?p=1367032&postcount=1

GPU+ARM cores, flash, 10GbE (IMO, not mentioned but quite likely by then). They are thinking of literally growing a separate "compute system" on the other side of the bus. Why?
This is quite long-term stuff, like 10 years out or so. A lot could change in 10 years.

I am very sure there will be parts with much larger die area devoted to GPU cores in that time frame. Bulldozer was supposedly designed to allow quite some flexibility in that regard. Even if you say "AMD can't execute on its CPUs to save its life", there is still the question of Intel. Good luck betting against their resources/talent/execution. (No, Larrabee is not a valid counterexample here.) Larrabee sitting on an x86 socket (in whatever state it was) would have nuked Tesla out of orbit.
The problem isn't so much making a big enough GPU on the CPU die as it is memory bandwidth. GPGPU applications already tend to be rather bandwidth-starved on today's GPUs. Things will only get worse for them if they're forced to share much lower memory bandwidth with the CPU.
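As a rough, illustrative back-of-the-envelope (the GB/s figures below are assumptions picked for the argument, not specs of any particular part): a streaming kernel like SAXPY moves ~12 bytes of DRAM traffic for every 2 flops, so its throughput ceiling is set almost entirely by memory bandwidth.

// Illustrative roofline-style arithmetic for a streaming kernel (SAXPY).
// The bandwidth numbers are assumptions, not measurements of any real GPU/APU.
#include <cstdio>

int main() {
    // SAXPY: y[i] = a * x[i] + y[i]
    // Per element: 2 flops, ~12 bytes of DRAM traffic (read x, read y, write y).
    const double flops_per_byte = 2.0 / 12.0;

    const double discrete_bw_gbs = 150.0;  // assumed discrete GDDR5 board
    const double shared_bw_gbs   = 25.0;   // assumed shared 128-bit DDR3 bus

    // Bandwidth bound: GFLOP/s <= (GB/s) * (flops per byte)
    printf("discrete GPU ceiling:      ~%.1f GFLOP/s\n", discrete_bw_gbs * flops_per_byte);
    printf("bandwidth-sharing APU cap: ~%.1f GFLOP/s\n", shared_bw_gbs * flops_per_byte);
    return 0;
}

Under those assumptions, the same bandwidth-bound code that manages ~25 GFLOP/s on a discrete board tops out around ~4 GFLOP/s on a shared DDR3 bus, regardless of how many ALUs sit on the die.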
 
You don't need to wait. Just try to port a shared-memory parallel code over to a distributed-memory one. ;)

If you manage to climb that hill, try pushing stuff over PCIe to GPUs from your 10-year-old code, especially the parts that need I/O over the network to feed the GPU.

Or you could just add -mavx to your Makefile (talking about 2011-2012 of course). :p

For a CPU that gives even half of its die to the GPU bit (and has commensurate bandwidth), I'll buy it even if it costs 3x the price of a co-processor that has 2x the perf.

And in case you haven't noticed, >50% of the discrete GPU market is going to vaporize once GPUs are on the socket across the board.

And that's a given, isn't it? "Fusion" chips will take on the integrated graphics market at first (which currently gets...what...~50% of the market already?), because that's what most of them will be able to provide, performance-wise.

Until they catch up with the performance offered by GPUs today, discrete GPUs will evolve, probably to address some of the caveats that were pointed out as problematic for the future of GPUs post-Fermi.
 
How does a company go from being almost twice as fast as their competitor (8800 Ultra vs 2900) to this farcical mess with Fermi in 3 years?

I think it is important to consider when the overall design of GT300 was "set in stone". I believe it was before Nvidia knew what RV770 was. It was also back when Larrabee was a very real "possible product". I suppose Nvidia envisioned a world where Intel was more of a threat than AMD was. The focus of Fermi might have been performance with flexibility (to counter Larrabee) and not absolute performance (with added flexibility only to conform to new API standards). I don't think Nvidia's engineers are any less capable than AMD's engineers. But sometimes it comes down to being able to predict what the market conditions will be in X amount of time.
 
Until they catch up with the performance offered by GPUs today, discrete GPUs will evolve, probably to address some of the caveats that were pointed out as problematic for the future of GPUs post-Fermi.

In view of Bill Dally's speculations, it's kinda hard to see any worthwhile solution evolving unless they can touch the socket. And don't forget that NV has to fight *both* Intel and AMD. Fighting Intel is never a good proposition. With ATI in the mix, and the current efficiency handicap, it's doubly not good.
 
LOL, a fusion chip as powerful as Fermi? That will be a long walk for both Intel and AMD...long enough for NVIDIA to reap the benefits from that market and come along with something else eventually.
For a workstation that needs GPGPU, I can see high end discrete GPUs ruling for a long time. However, for HPC clusters it's more about perf/watt. That might some day mean lots of fusion APUs will be better than discrete GPUs.
 
This is quite long-term stuff, like 10 years out or so. A lot could change in 10 years.
IMHO, the solutions are farther off than the problems. Not to mention that they are more disruptive to the ecosystem.


The problem isn't so much making a big enough GPU on the CPU die as it is memory bandwidth. GPGPU applications already tend to be rather bandwidth-starved on today's GPUs. Things will only get worse for them if they're forced to share much lower memory bandwidth with the CPU.
AFAIK, Llano has a 128-bit DDR3 interface. If you must use DDRx, then Fusion+DRAM MCMs are the most feasible option.
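Rough arithmetic on what such an interface buys you (DDR3-1600 and a 256-bit, 4.8 GT/s GDDR5 board are my assumptions for comparison, not confirmed Llano or board specs):

// Quick bandwidth arithmetic: assumed 128-bit DDR3 vs. an assumed discrete GDDR5 board.
#include <cstdio>

int main() {
    const double ddr3_bytes_per_transfer  = 128.0 / 8.0;  // 128-bit bus
    const double ddr3_transfers_per_sec   = 1600e6;       // DDR3-1600 (assumed)
    const double gddr5_bytes_per_transfer = 256.0 / 8.0;  // 256-bit bus (assumed)
    const double gddr5_transfers_per_sec  = 4800e6;       // 4.8 GT/s effective

    printf("shared DDR3:    ~%.1f GB/s\n",
           ddr3_bytes_per_transfer * ddr3_transfers_per_sec / 1e9);    // ~25.6 GB/s
    printf("discrete GDDR5: ~%.1f GB/s\n",
           gddr5_bytes_per_transfer * gddr5_transfers_per_sec / 1e9);  // ~153.6 GB/s
    return 0;
}

That's roughly a 6x gap in raw bandwidth, and the DDR3 side still has to feed the CPU cores too, which is why MCM or on-package DRAM options keep coming up.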
 
In view of Bill Dally's speculations, it's kinda hard to see any worthwhile solution evolving unless they can touch the socket. And don't forget that NV has to fight *both* Intel and AMD. Fighting Intel is never a good proposition. With ATI in the mix, and the current efficiency handicap, it's doubly not good.
Alternatively, they could work with other CPU manufacturers. In the HPC space, x86, though popular, is much less important.
 
For a workstation that needs GPGPU, I can see high end discrete GPUs ruling for a long time. However, for HPC clusters it's more about perf/watt. That might some day mean lots of fusion APUs will be better than discrete GPUs.

Also, GPU blades need another CPU blade just to run. With APUs, you can have two identical blades for quite a bit better perf/$/W.

Also, AFAIK the workstation market has much lower volumes, leading to its own set of problems.
 
I think it is important to consider when the overall design of GT300 was "set in stone". I believe it was before Nvidia knew what RV770 was. It was also back when Larrabee was a very real "possible product". I suppose Nvidia envisioned a world where Intel was more of a threat than AMD was. The focus of Fermi might have been performance with flexibility (to counter Larrabee) and not absolute performance (with added flexibility only to conform to new API standards). I don't think Nvidia's engineers are any less capable than AMD's engineers. But sometimes it comes down to being able to predict what the market conditions will be in X amount of time.


Yes, I understand that; it was more of a rhetorical question.

My personal belief is that nVidia stopped innovating on gaming GPUs the very second AMD bought ATI. A full-on rebranding scheme got them into this position while ATI went back to basics and delivered on time. I believe nVidia thought they had much more time than they did.

You could say that their rebranding almost worked, in fact, because they only started to lose badly with GT200. My issue is with Fermi, however: this had to be a lot better than what it is.

At some point nVidia must have made the decision to forgo a dual-chip Fermi card. They must have known it was going to be at least 250 W or higher, and that it would not be a viable candidate for an X2. To me, they aimed even higher and made the conscious decision that a maxed-out single Fermi chip would be faster than any X2 ATI could deliver. I just do not believe that nVidia would give up the halo: they believed a 300 W GTX 480 would beat the 5970, and by the looks of things they've fallen well short of that mark.

I think we all know that ATI simply has to go with a slightly bigger chip again next time around, and it will be game, set and match.
 