Replace "GPU" with "Graphics Card" then and my point stands.
OK, maybe most people have a vague idea of what a graphics card is, but they certainly don't know about GPGPU.
I don't want to sound rude or intrusive, but could any of you guys join me in this thread:
http://forum.beyond3d.com/showthread.php?t=55873
I would appreciate the help.
Sure! The question is: is it worth the effort? Probably not.
Certainly a crazy idea, but is it possible to offload things like post-processing and tessellation to a dedicated low-end/mainstream card, just like PhysX?
They've been doing that for the past two years. CUDA is even taught at universities. The "killer apps" are slowly but surely appearing and GPGPU computing is gaining a lot of strength. Most people already know that GPUs can do a lot more than just graphics.
The HPC market is a clear target, now more than ever, because of the high profit margins it provides.
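To make the GPGPU point above concrete, here is a minimal CUDA sketch (a hypothetical example, not something from this thread): a plain data-parallel vector addition, with no graphics pipeline involved at all.

Code:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one pair of elements: ordinary data-parallel compute, no graphics involved.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);   // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}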
That's forgetting the main benefit of CS 5.
As NV themselves like to say, a GPU is no longer "just a gaming chip", and even for games, CS could have a noticeable effect on playability if used right (see HDAO in STALKER CoP: a 40% framerate increase vs the PS version, at a lower final cost than medium SSAO); a rough sketch of why is below.
Yeah, tessellation seems to be the thing to throw away, but that's too much of a shortcut.
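To sketch the mechanism behind the CS vs PS point above: a compute shader lets a whole thread group stage its neighbourhood of samples in on-chip shared memory once and then reuse it, instead of every pixel re-fetching everything the way a pixel shader does. Here is a rough CUDA analogue of that pattern (names, tile size and filter are made up for illustration; this is not the HDAO code):

Code:
#define TILE   16   // threads per block side (made-up size)
#define RADIUS  2   // filter radius in pixels (made-up size)

// Rough analogue of a compute-shader post-process pass: the block stages a padded
// tile of the input in shared memory once, then every thread reads its whole
// neighbourhood from that tile instead of issuing redundant off-chip fetches.
__global__ void boxFilter(const float *in, float *out, int width, int height)
{
    __shared__ float tile[TILE + 2 * RADIUS][TILE + 2 * RADIUS];

    int baseX = blockIdx.x * TILE;
    int baseY = blockIdx.y * TILE;

    // Cooperative load of the padded tile (some threads load more than one texel).
    for (int ty = threadIdx.y; ty < TILE + 2 * RADIUS; ty += TILE)
        for (int tx = threadIdx.x; tx < TILE + 2 * RADIUS; tx += TILE)
        {
            int sx = min(max(baseX + tx - RADIUS, 0), width  - 1);  // clamp to the image
            int sy = min(max(baseY + ty - RADIUS, 0), height - 1);
            tile[ty][tx] = in[sy * width + sx];
        }
    __syncthreads();

    int gx = baseX + threadIdx.x;
    int gy = baseY + threadIdx.y;
    if (gx >= width || gy >= height)
        return;

    // All neighbour reads now hit shared memory instead of DRAM.
    float sum = 0.0f;
    for (int dy = -RADIUS; dy <= RADIUS; ++dy)
        for (int dx = -RADIUS; dx <= RADIUS; ++dx)
            sum += tile[threadIdx.y + RADIUS + dy][threadIdx.x + RADIUS + dx];

    out[gy * width + gx] = sum / ((2 * RADIUS + 1) * (2 * RADIUS + 1));
}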
I would not be surprised if OpenCL replaces CUDA completely at universities.
compres said: Edit: I think there is a good possibility that AMD and their ATI video cards are actually better positioned going forward if OpenCL takes off. The reason is that they have both the CPU and the GPU to offer to HPC systems, on top of their history in the Jaguar and Roadrunner supercomputers that use Opterons.
Really? Losing support from proprietary tech can never be a good thing, unless of course the proprietary tech in question really sucks, which is not the case here. There are some things that you can't spin.
Probably, but that doesn't concern NVIDIA too much, since they support everything else, including OpenCL, from the start.
Instead of shifting to the low end to say derivatives are useless, what about some analysis?
When we are talking about low-end cards, that certainly doesn't matter. They aren't able to run many games, even on the lowest settings.
Spin? ... OK...
CUDA will not disappear because of OpenCL or any other abstraction layer. CUDA is part of NVIDIA's architecture and that won't change anytime soon. Support for these other abstraction layers will be provided through translation in the drivers.
Instead of shifting to the low end to say derivatives are useless, what about some analysis?
Quite fun to see the low end downplayed and emphasized in the same post, just to distract from the real point.
EVEN IF we ignore the low end, mainstream GPUs DO exist and ARE targeted by game devs.
In the red corner, Juniper is here and low end/value parts will be there soon.
In the green corner, where are those Fermi derivatives everyone is waiting for?
Juniper has already sold almost twice as many units as Cypress, even with RV770 and RV790 still on the shelves, so one might think that even with a reasonably priced high end, mainstream sells twice the volume and thus has a bigger impact on how fast devs will shift to DX11, and on how they'll do it.
If OpenCL ever catches up with CUDA, I don't see why it should not disappear/diminish (as a dev platform). CUDA as an architecture is irrelevant to this discussion, as devs won't care whether it's CUDA or whatever else is underneath, if they write an app under OCL.
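As a rough illustration of why the app developer won't care what sits underneath (a hypothetical snippet, not from this thread): the kernel you actually write is nearly line-for-line the same in CUDA C and OpenCL C, and picking the runtime and the hardware below it is the driver's job.

Code:
// CUDA C version of SAXPY (y = a*x + y), one element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

/* The OpenCL C equivalent differs only in qualifiers and the index intrinsic:

   __kernel void saxpy(int n, float a, __global const float *x, __global float *y)
   {
       int i = get_global_id(0);
       if (i < n)
           y[i] = a * x[i] + y[i];
   }

   Either way, the vendor's driver compiles it down to whatever architecture
   sits underneath. */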
How would they do this with a non-existent tapeout?
You are way too touchy about this subject, it seems...
When I replied to you, the discussion was focused on the low end. Juniper is not low end. But since you included it NOW, I'll just go ahead and agree with you. Mainstream GPUs are much more important to game devs than low-end GPUs, and yes, there's no Fermi-based mainstream graphics card... yet.
And if you'd read the last few pages of this thread, you would have seen that I myself speculated that NVIDIA may very well release a GeForce 340/350 alongside the GeForce 380, instead of the usual GeForce 380/360, to have something in the most profitable section of the consumer graphics market: the mid-range.
As for your Juniper sales numbers, your sources are...?
How would they do this with a non-existent tapeout?
Or are you implying a drastically cut down version of GF100? They would have a very slim window for the MSRP, especially depending on competitive performance: they would have to price it higher than ~$250, but what if it can't beat a 5850?
What do they do then: spend resources on producing a card only to lose that time and money, or just take the loss? It's the lesser of two evils.
Or are you implying a drastically cut down version of GF100?
No cut down version. My speculation involved a new chip with half of everything the full Fermi chip offers, but it then evolved into something like: 256 ALUs, 32 ROPs and 80 TMUs.
Pretty much a Fermi-based version of the GTX 285 with more ALUs, on a 256-bit memory interface with GDDR5 (which gives roughly the same bandwidth as the GTX 285's 512-bit GDDR3 bus).
But whatever it ends up being, released alongside the high-end graphics card or not, it must beat the competition. Matching performance, when it's already late, is not enough; less performance and it's a total failure.
Isn't that exactly what a "drastically cut down version of GF100" would entail?? (Though I'm confused as to how NV would create and execute a Fermi-based version of the GTX 285 when they are two entirely different architectures.)
Just looking for a bit of clarification.
I assumed that by "cut down version" he meant a full Fermi chip with units disabled. I'm talking about a new chip, probably with the specs I speculated.
And I mentioned the GTX 285 because this new chip would essentially have the same number of ROPs and TMUs as the GTX 285, roughly the same number of ALUs too, and would be competing with the HD 5850.
You said you expect this to launch alongside the GTX 380?
How can you honestly expect that with no word on a tapeout of any derivatives?