NVIDIA Fermi: Architecture discussion

They've been doing that for the past two years. CUDA is even taught at universities. The "killer apps" are slowly but surely appearing and GPGPU computing is gaining a lot of strength. Most people already know that GPUs can do a lot more than just graphics.

The HPC market is a clear target, now more than ever, because of the high profit margins it provides.

I would not be surprised if OpenCL replaces CUDA completely at universities.

Edit: I think there is a good possibility that AMD and their ATI video cards actually have a better position going forward if OpenCL takes off. The reason is that they have both the CPU and the GPU to offer to HPC systems, on top of past history in the Jaguar and Roadrunner supercomputers that use the Opteron.
 
That's forgetting the main benefit of CS 5.

As NV themselves like to say, a GPU is no longer "just a gaming chip", and even for games, CS could have a noticeable effect on playability if used right (see HDAO in STALKER CoP... a 40% framerate increase versus the pixel shader path, at a lower final cost than medium SSAO).

Yeah, tessellation seems to be the feature everyone wants to throw away, but that's too much of a shortcut.

When we are talking about low-end cards, that certainly doesn't matter. They aren't able to run many games, even on the lowest settings.
 
I would not be surprised if OpenCL replaces CUDA completely at universities.

Probably, but that doesn't concern NVIDIA too much, since they support everything else, including OpenCL, from the start.

compres said:
Edit: I think there is a good possibility that AMD and their ATI video cards actually have a better position going forward if OpenCL takes off. The reason is that they have both the CPU and the GPU to offer to HPC systems, on top of past history in the Jaguar and Roadrunner supercomputers that use the Opteron.

We'll see. You're thinking way ahead, since OpenCL still has a long way to go.
 
Probably, but that doesn't concern NVIDIA too much, since they support everything else, including OpenCL, from the start.
Really? Losing support for proprietary tech can never be a good thing, unless of course the proprietary tech in question really sucks, which is not the case here. There are some things that you can't spin.
 
Will OpenCL support Fermi features (pointers, the memory hierarchy and whatnot)?

For HPC, I believe OpenCL doesn't matter much; any code will be custom, and OpenCL code would need to be rewritten anyway if run on another architecture. We don't hear of Radeons in 1U racks or with ECC memory either.

I believe it will matter more in the consumer space, especially with AMD Fusion. Maybe we'll even see code running on the Sandy Bridge IGP, even if reluctantly :), and on a future Fermi-based NVIDIA Tegra.
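
On the pointers and memory-hierarchy question above: core OpenCL already exposes a good part of that through address-space qualifiers (__global, __local, __constant, __private) and pointers within those spaces, with work-group local memory mapping fairly directly onto the on-chip shared memory CUDA exposes. Here's a minimal sketch of a work-group reduction leaning on that hierarchy (the kernel name and structure are purely illustrative, not taken from any real code):

Code:
/* Illustrative OpenCL C sketch only -- the kernel name and layout are made up.
   The __global/__local qualifiers and the work-group barrier are the pieces
   of the memory hierarchy that core OpenCL already exposes. */
__kernel void block_sum(__global const float *in,
                        __global float *out,
                        __local  float *scratch)   /* on-chip shared memory */
{
    size_t lid = get_local_id(0);
    size_t gid = get_global_id(0);

    /* Stage one element per work-item into fast local memory. */
    scratch[lid] = in[gid];
    barrier(CLK_LOCAL_MEM_FENCE);

    /* Tree reduction within the work-group (assumes a power-of-two size). */
    for (size_t stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    /* Work-item 0 writes the group's partial sum back to global memory. */
    if (lid == 0)
        out[get_group_id(0)] = scratch[0];
}

Whether the Fermi-specific extras (a unified address space for pointers, the configurable L1/shared split, ECC) get surfaced is more a question of vendor extensions than of core OpenCL.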
 
Really? Losing support for proprietary tech can never be a good thing, unless of course the proprietary tech in question really sucks, which is not the case here. There are some things that you can't spin.

Spin?... Ok... :rolleyes:

CUDA will not disappear because of OpenCL or any other abstraction layer. CUDA is part of NVIDIA's architecture and that won't change anytime soon. Support for those other abstraction layers will be provided through translation in the drivers.
 
When we are talking about low-end cards, that certainly doesn't matter. They aren't able to run many games, even on the lowest settings.
Instead of shifting to the low end to argue that derivatives are useless, how about some analysis?

Quite fun to see the low end downplayed and emphasized in the same post, just to distract from the real point.

EVEN IF we ignore the low end, mainstream GPUs DO exist and ARE targeted by game devs.

In the red corner, Juniper is here and the low-end/value parts will follow soon.
In the green corner, where are those Fermi derivatives everyone is waiting for?

Juniper has already sold almost twice as well as Cypress, even with RV770 and RV790 still on the shelves, so one might think that even with a reasonably priced high end, the mainstream sells twice the volume and thus has a bigger impact on how fast devs will shift to DX11, and how they'll do it.
 
Spin?... Ok... :rolleyes:

CUDA will not disappear because of OpenCL or any other abstraction layer. CUDA is part of NVIDIA's architecture and that won't change anytime soon. Support for those other abstraction layers will be provided through translation in the drivers.

If OpenCL ever catches up with CUDA, I don't see why the latter shouldn't disappear or at least diminish (as a dev platform). CUDA as an architecture is irrelevant to this discussion, since devs won't care whether it's CUDA or whatever other stuff is underneath if they write an app under OCL.
 
Instead of shifting to the low end to argue that derivatives are useless, how about some analysis?

Quite fun to see the low end downplayed and emphasized in the same post, just to distract from the real point.

EVEN IF we ignore the low end, mainstream GPUs DO exist and ARE targeted by game devs.

In the red corner, Juniper is here and the low-end/value parts will follow soon.
In the green corner, where are those Fermi derivatives everyone is waiting for?

Juniper has already sold almost twice as well as Cypress, even with RV770 and RV790 still on the shelves, so one might think that even with a reasonably priced high end, the mainstream sells twice the volume and thus has a bigger impact on how fast devs will shift to DX11, and how they'll do it.

Let's not forget that low-end cards get faster each generation. The DX11 low-end cards will be faster than what the DX10 cards could do. If DX11 mode can increase performance over DX10 mode, like in STALKER, then devs will target it more. What dev wouldn't want a 20% increase in performance at the low end? That's when it matters the most.
 
Instead of shifting to the low end to argue that derivatives are useless, how about some analysis?

Quite fun to see the low end downplayed and emphasized in the same post, just to distract from the real point.

EVEN IF we ignore the low end, mainstream GPUs DO exist and ARE targeted by game devs.

In the red corner, Juniper is here and the low-end/value parts will follow soon.
In the green corner, where are those Fermi derivatives everyone is waiting for?

Juniper has already sold almost twice as well as Cypress, even with RV770 and RV790 still on the shelves, so one might think that even with a reasonably priced high end, the mainstream sells twice the volume and thus has a bigger impact on how fast devs will shift to DX11, and how they'll do it.

You are way too touchy about this subject it seems...

When I replied to you, the discussion was focused on the low end. Juniper is not low-end. But since you included it NOW, I'll just go ahead and agree with you. Mainstream GPUs are much more important to game devs than low-end GPUs and yes, there's no Fermi-based mainstream graphics card... yet.
And if you'd read the last few pages of this thread, you would have seen that I've speculated myself that NVIDIA may very well release a GeForce 340/350 with the release of the GeForce 380, instead of the usual GeForce 380/360, to have something in the most profitable section of the consumer graphics market: the mid-range.

As for your Juniper sales numbers, your sources are...?
 
If OpenCL ever catches up with CUDA, I don't see why the latter shouldn't disappear or at least diminish (as a dev platform). CUDA as an architecture is irrelevant to this discussion, since devs won't care whether it's CUDA or whatever other stuff is underneath if they write an app under OCL.

That was precisely the point I made in the post you quoted. CUDA will always be "under the hood", if you will. It will be transparent to any developer that wants to use any other abstraction layer, since the translation will exist in the drivers.
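
To make the "transparent to the developer" point concrete, here's a minimal host-side sketch in plain C (the file name, array sizes and error handling are my own, purely illustrative): an OpenCL app only enumerates whatever platforms and devices the installed drivers expose, so it neither knows nor cares whether the implementation underneath translates to CUDA's hardware, AMD's stack, or a CPU runtime.

Code:
/* probe.c -- hypothetical example; build with something like: gcc probe.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS
        || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found.\n");
        return 1;
    }
    if (num_platforms > 8)
        num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(pname), pname, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devices, &num_devices) != CL_SUCCESS)
            continue;
        if (num_devices > 8)
            num_devices = 8;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dname), dname, NULL);
            /* The same kernel source would be handed to whichever device
               shows up here; the vendor driver does the translation. */
            printf("platform: %s, device: %s\n", pname, dname);
        }
    }
    return 0;
}

From the app's point of view, that is all the "translation in drivers" amounts to: the code doesn't change, only the driver stack underneath it does.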
 
You are way too touchy about this subject it seems...

When I replied to you, the discussion was focused on the low end. Juniper is not low-end. But since you included it NOW, I'll just go ahead and agree with you. Mainstream GPUs are much more important to game devs than low-end GPUs and yes, there's no Fermi-based mainstream graphics card... yet.
And if you'd read the last few pages of this thread, you would have seen that I've speculated myself that NVIDIA may very well release a GeForce 340/350 with the release of the GeForce 380, instead of the usual GeForce 380/360, to have something in the most profitable section of the consumer graphics market: the mid-range.

As for your Juniper sales numbers, your sources are...?
How would they do this with a non-existent tape-out?
Or are you implying a drastically cut-down version of GF100? They would have a very slim window for the MSRP, especially depending on competitive performance: they'd have to price it higher than ~$250, but what if it can't beat a 5850?
What do they do then: spend resources on producing a card only to lose that time and money, or just take the loss? It's the lesser of two evils.
 
How would they do this with a non-existent tape-out?
Or are you implying a drastically cut-down version of GF100? They would have a very slim window for the MSRP, especially depending on competitive performance: they'd have to price it higher than ~$250, but what if it can't beat a 5850?
What do they do then: spend resources on producing a card only to lose that time and money, or just take the loss? It's the lesser of two evils.

No cut-down version. My speculation involved a new chip with half of everything that the full Fermi chip has, but it then evolved into something like 256 ALUs, 32 ROPs and 80 TMUs.
Pretty much a Fermi-based version of the GTX 285 with more ALUs and a 256-bit memory interface with GDDR5.

But whatever it ends up being, whether or not it's released at the same time as the high-end graphics card, it must beat the competition. Matching performance, when it's already late, is not enough. Less performance and it's a total failure.
 
Or are you implying a drastically cut-down version of GF100?

No cut-down version. My speculation involved a new chip with half of everything that the full Fermi chip has, but it then evolved into something like 256 ALUs, 32 ROPs and 80 TMUs.
Pretty much a Fermi-based version of the GTX 285 with more ALUs and a 256-bit memory interface with GDDR5.

But whatever it ends up being, whether or not it's released at the same time as the high-end graphics card, it must beat the competition. Matching performance, when it's already late, is not enough. Less performance and it's a total failure.

Excuse my intrusion... I'm a bit confused. The question asked was whether NV would use "a drastically cut-down version of GF100" to compete against ATI's mainstream (Juniper), and Silus responds with "No cut-down version"... "256 ALUs, 32 ROPs and 80 TMUs.
Pretty much a Fermi-based version of the GTX 285 with more ALUs and a 256-bit memory interface with GDDR5."

Isn't that exactly what a "drastically cut-down version of GF100" would entail? (Though I'm confused as to how NV would create and execute a Fermi-based version of the GTX 285 when they are two entirely different architectures.)

Just looking for a bit of clarification.
 
Isn't that exactly what a "drastically cut-down version of GF100" would entail? (Though I'm confused as to how NV would create and execute a Fermi-based version of the GTX 285 when they are two entirely different architectures.)

Just looking for a bit of clarification.

I assumed that by "cut-down version" he meant a full Fermi chip with units disabled. I'm talking about a new chip, probably with the specs I speculated.

And I mentioned the GTX 285 because this new chip would essentially have the same ROP and TMU counts as the GTX 285, with roughly the same number of ALUs too, and it would be competing with the HD 5850.
 
I assumed that by "cut-down version" he meant a full Fermi chip with units disabled. I'm talking about a new chip, probably with the specs I speculated.

And I mentioned the GTX 285 because this new chip would essentially have the same ROP and TMU counts as the GTX 285, with roughly the same number of ALUs too, and it would be competing with the HD 5850.

You said you expect this to launch with the GTX 380?
How can you honestly expect that with no word on the tape-out of any derivatives?
 
You said you expect this to launch with the GTX 380?
How can you honestly expect that with no word on the tape-out of any derivatives?

I did? You're putting words in my posts now?

No, I did not say that. I did, however, speculate that it might be the case, given how late Fermi is and the fact that the mid-range market is where the big profits are.
 