GeForce NV50 Canceled [Inq]

If they really cancelled it, that can only mean they have an alternative which blew the management away completely. I'd really like to know what's going on.

Any other blurb?
 
tEd said:
the spies came back
:LOL:
 
Is there any reason to believe this is true?

On a scale from 1 to 10, how would you rate this? 1 = total BS, 10 = set in stone.
 
_xxx_ said:
If they really cancelled it, that can only mean they have an alternative which blew the management away completely. I'd really like to know what's going on.

Any other blurb?

I don't know about that. It's not conventional business practice to just forfeit significant R&D investments unless you're worried about marketing a product that's inferior to your competition's. Of course, that's assuming the rumour is well-founded. :LOL:
 
We don’t have any idea as yet what lead to such a decision, but Nvidia does apparently think it's now time to make its next breakthrough chip.

More dodgy English from Fuad? Because otherwise that makes no sense. The NV50 WAS supposed to be their "breakthrough" chip; he's talking like it's just a standard refresh.
 
While I can't comment on this, there have been a few things I've had trouble reconciling in the roadmap rumours I'd heard.

For instance, if NV50 was supposed to arrive in late 2005, that would put it in a bit of no-man's land as far as API support goes: before Longhorn, so not worth doing Shader 4.0, but probably without much more to sell on top of NV40. Longhorn's date appears to have varied from early 2006 to 2007; if it were 2007, then NV50 coming out in late 2005 would have suited quite well, as NV60 with DX Next support could then come in time for Longhorn. However, Longhorn appears to be firming up for 2006 (with some features missing), so it's important to have something ready for that.

The wildcard is: what is NV47? Perhaps NV47 is (or was) NV50, or at least very similar, i.e. alter the roadmap to cut back on some of the NV50 features and blend them into the NV40 architecture, making it a little more orthogonal (e.g. make MSAA operate with FP blending), speed up the shader performance (more quads, better branch handling), and possibly make a few other architectural changes (memory bus, etc.).

That's a whole bunch of speculation, but there may be some things worth considering in there.
 
All I have read is hearsay, and I have not listened to the Credit Suisse FB technology presentation, but these announcements of NV48 and NV50 being canned make little sense to me. If you were, in fact, canning two consecutive products and announced this, you would have to make damn sure you had something else to offer. You can't just go up there and say "no, not gonna do that one" and "nope, we can't do that one either." To avoid the ensuing lynch mob you had better have something very positive to say about it, and I have heard no such commentary.
 
Reverend said:
DaveBaumann said:
While I can't comment on this <snip>
OIC. Longish "no comment" post though :rolleyes:

Yes, but as you will observe, Mr Baumann kept his promise and didn't actually shed much light on it. It's like he has fun posting his free thoughts just like everyone else... sheesh... :p
 
In 2002, the information I heard regarding the NV50 was that it was quite a world away from the NV40. It would have been - at least according to sources - a very different approach to shading.
Let's put it this way: since the original R400, ATI has wanted to unify the VS and PS. That most certainly is an interesting feature, but David Kirk - when asked about it in a 2003 interview - said that NVIDIA was, at least for the time being, not interested in such a technology. They estimated that it wasn't worth the trouble, and that it would cause significant difficulties for caching unless that was radically modified.
But obviously, NVIDIA also needed to improve efficiency somehow. And, from my information... ATI licensing Fast14 simply was their answer to NVIDIA's projects. A good one at that, I'll admit. The basic NVIDIA idea is... yes, I know I've already repeated this word a billion times in the past, but... ILDP.
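Just to make the load-balancing argument concrete - this is purely my own toy illustration, with made-up unit counts and abstract "work" units, not anything from either IHV - here's why a unified pool looks attractive on paper:

Code:
# Toy model: fixed VS/PS partitions versus one unified shader pool.
# A "work item" is just an abstract unit of shading; counts are invented.

def fixed_split(vs_work, ps_work, vs_units=6, ps_units=16):
    """Separate VS and PS pools: the frame waits on the slower side."""
    vs_time = vs_work / vs_units
    ps_time = ps_work / ps_units
    return max(vs_time, ps_time)

def unified(vs_work, ps_work, units=22):
    """Unified pool: any unit can run any shader type."""
    return (vs_work + ps_work) / units

# Geometry-heavy frame: the fixed split leaves pixel units idle.
print(fixed_split(vs_work=600, ps_work=400))  # 100.0
print(unified(vs_work=600, ps_work=400))      # ~45.5

# Pixel-heavy frame: the fixed split leaves vertex units idle.
print(fixed_split(vs_work=50, ps_work=1600))  # 100.0
print(unified(vs_work=50, ps_work=1600))      # 75.0

The unified pool never idles, but feeding that many units with mixed instruction streams from one scheduler is precisely the caching headache Kirk was pointing at.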

The "real' NV50 was much more of a long term project for NVIDIA, just like the "real" R400 in fact. That made each part of the design much more important, at least for NVIDIA; the ALUs - and more importantly their architecture, the way they work together - are expected to be much more polished. Scalability and power efficiency have been considered as of the very first days of the design, because of the NV30 debacle.
From my point of view - and that pretty much is speculation I'll admit - the NV40 is much more of a transitional chip between NV30 and NV50. What the NV30 is, besides a flop, I don't know though. I guess it'd be reasonable to say that the NV30 was to give a featureset that would be fast on the NV40/50, so that those would get a headstart. Obviously, that was assuming ATI wouldn't do anything worthwhile in that timeframe...

So to make a potentially very long post only moderately long... Just like Dave, I expect there to be an NV40-like chip using NV50 architectural improvements, to a certain extent. I'd expect the idea to be to replace the clock bottlenecks in the NV40 with their NV50 equivalents. This would potentially increase clockrate and/or reduce transistor count. Fundamentally you'd be replacing the units, but not the scheduler/organisation/featureset. MSAA comes to mind too, as Dave mentioned; making it work with RTT would be important as well, but such ideas are problematic because they involve changing many steps of the pipeline. It seems unlikely to me, sadly, and the NV47 codename would make me think of an improved NV45, not a nerfed NV50.

Replacing the NV40 ALUs with the NV50's would also be quite interesting for NVIDIA, since they'd most likely be the clock bottleneck; this would help them improve the design further for the real NV50, at least in theory.

Another, completely different possibility is also worth mentioning: the renaming of the real NV50 to NV60, kinda like what ATI did with the real R400 (although a bit differently). But that would imply the NV47 most likely wouldn't be a high-end refresh (or not a major one), or that it'd be due to be renamed soon.
Ah, where are the days when IHVs actually kept a product's codename from first rumours to announcement? *grins*


Uttar
 
Last I had heard, NVIDIA were still pressuring MS not to have anything in the API that forces a unified approach at the hardware level (i.e. whilst the VS and PS have a unified instruction set, the hardware could still be distinct if they wanted to).
 
Hmmmmmm, what are the chances of Nvidia rolling out another 3dfx implementation? With SLI out of the way, the next board could be the GeForce 7800 RAMPAGE. :oops:

It would be really interesting if the 512MB boards were pushed back so that Nvidia could work out a new memory controller. It could be a nice single-card alternative to SLI. Mmmmmmm, 2-4 6800U GPUs and 512MB-1GB of DDR3 or DDR4 memory. If they manage to get the heat under control, it could be done without needing much more PCB area or a heatsink from hell...
 
DaveBaumann said:
Last I had heard, NVIDIA were still pressuring MS not to have anything in the API that forces a unified approach at the hardware level (i.e. whilst the VS and PS have a unified instruction set, the hardware could still be distinct if they wanted to).
Yeah, that's what I meant. I probably wasn't too explicit about that here, as I'd already mentioned it in the second locked thread on this subject, sorry about that.
My point basically is this: ATI decided to unify the VS and PS. NVIDIA decided to increase parallelism inside the PS and VS, but not unify them. It seems likely their caching and scheduling approach is designed with that in mind, and that they're going in a direction that would complicate unifying all shader types. I don't really know what they're doing there though, so that's obviously speculation. I'd get into my beliefs about the details of the architecture, but the odds that my theories are right are just too slim to bother ;)

Uttar
P.S.: Regarding SLI: As others have said in other threads, I don't believe multichip is going to work too nicely nowadays. The only theory that would work then is ONE uber geometry chip feeding several PS chips, à la Realizm, but I doubt there's much use in that, despite the rumours of the NV30 originally having had a separate geometry chip, iirc. It seems likely they'd try to improve alternate-line rendering though, since AFR is currently the preferred method, quite ironically.
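For reference, here's a trivial sketch - my own illustration, not anything either IHV has described - of the difference between the two work splits in question:

Code:
# Two classic multi-GPU splits: alternate-frame rendering (AFR) versus
# 3dfx-style line interleaving. Each function answers: who renders this bit?

def afr_gpu(frame_index, num_gpus=2):
    # AFR: whole frames round-robin between GPUs. Great parallelism, but
    # it adds latency and breaks when frame N+1 depends on render-to-texture
    # results produced by frame N on the other chip.
    return frame_index % num_gpus

def interleave_gpu(scanline, num_gpus=2):
    # Line interleaving: each GPU renders every Nth scanline of the SAME
    # frame. No inter-frame dependency problem, but every chip still has
    # to process all the geometry, so vertex work isn't shared at all.
    return scanline % num_gpus

print([afr_gpu(f) for f in range(4)])         # frames: [0, 1, 0, 1]
print([interleave_gpu(y) for y in range(8)])  # lines:  [0, 1, 0, 1, ...]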
 
DaveBaumann said:
Last I had heard, NVIDIA were still pressuring MS not to have anything in the API that forces a unified approach at the hardware level (i.e. whilst the VS and PS have a unified instruction set, the hardware could still be distinct if they wanted to).
You mean for DXnext, or whatever they call the Longhorn DX distro...right? :|

EDITED BITS: Thanks Uttar, lots of great insight to chew on there....I've missed ya.
 
Uttar! ILDP! Wheeeeeee!

*dies from sleep deprivation and the mention of ILDP*

I get the feeling that NV47 is going to be the weirdest refresh ever.
 
DaveBaumann said:
Last I had heard, NVIDIA were still pressuring MS not to have anything in the API that forces a unified approach at the hardware level (i.e. whilst the VS and PS have a unified instruction set, the hardware could still be distinct if they wanted to).
I don't think there would be any technical reason to force a unified hardware implementation.
 
Operation Mindcrime said:
Hmmmmmm, what are the chances of Nvidia rolling out another 3dfx implementation? With SLI out of the way the next board could be the GeForce 7800 RAMPAGE. :oops:

It would be really interesting if the 512MB boards were pushed back so that Nvidia could work out a new memory controller. It could be a nice, single card alternative to SLI. Mmmmmmm, 2-4 6800U GPUs and 512mb-1gb of DDR3 or DDR4 memory. If they manage to get the heat under control it could be done without needing much more PCB area or a heatsink from hell...

I have been wondering about that too. Heat production isn't linear with clock speed, is it? So maybe by decreasing the clock 20% and using two chips you could have the same thermal output; that would be pretty cool, eh...
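Quick back-of-envelope on that - my own numbers, assuming the usual CMOS dynamic power model P ~ C * V^2 * f, and that voltage can be dropped roughly in proportion to clock:

Code:
# If voltage tracks clock, dynamic power scales roughly with f^3;
# at fixed voltage it scales only linearly with f.

def relative_power(clock_scale, chips=1, voltage_tracks_clock=True):
    if voltage_tracks_clock:
        return chips * clock_scale ** 3  # V ~ f  =>  P ~ f^3
    return chips * clock_scale           # fixed V  =>  P ~ f

# Two chips at 80% clock, if the voltage can also come down:
print(relative_power(0.8, chips=2))                            # ~1.02x heat
# ...but if the voltage can't come down:
print(relative_power(0.8, chips=2, voltage_tracks_clock=False))  # 1.6x heat

So the intuition only holds if the voltage comes down with the clock: in that case two chips at 80% dissipate about the same as one chip at 100%. At fixed voltage, dynamic power really is roughly linear in clock, and you'd be looking at ~1.6x the heat - though also ~1.6x the throughput, ignoring multi-GPU overhead.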
 