NVIDIA GF100 & Friends speculation

They're at 180W (and, going by gaming consumption, much less)
IIRC the 5850 uses less than a GTS 250 when gaming... and that's comparing NV's TDP against AMD's max board power

What I'd really like to see AMD build, besides the eventual 5890, is a 5870 with reduced length and power draw via a minor clock cut and voltage drop, oh and of course a price cut.

"Look you're using ~2x the power to be slightly faster!" might be even more damning than winning 15% on the halo.

Which brings us back to GF100. This had better not all be true, especially the performance metrics; otherwise the 1/2-cut card, even when pumped to screaming clocks, wouldn't manage to do much.

Gonna make the same mistake I made on R600 - assume it's drivers for now :p

I'm pretty certain all 5870s produced nowadays come in under the 180W thermal design power: the chips and processes have improved for both the RAM and the GPU itself, so they probably don't have to run as hot as at launch. I'm not exactly sure what makes the 5870/50 as long as they currently are if Nvidia is able to make shorter boards with hotter GPUs.

I suspect 50% more power and slower would be more damaging personally, but this is just speculation at this point and I would love to see these chips in action against each other if only to reveal the truth.

One concept I cannot get out of my head is the idea that Dave Baumann has been sitting in his office for the last few months surfing the internet and eating cake. I suspect Charlie is also getting fat from cake, as I'm sure Dave will share the 3, 4, 5, 6, 7, 8, 9, 10 etc million shipped cakes with him too. What's there to do for a product manager for a product which simply sells itself? I say he's eating cake until confirmed otherwise!
 
The "max board power" was never even near the usual gaming situation anyway, in real world gaming HD5870 consumes usually less than HD4870 even though the "TDP" or max board power is higher
 
If he was talking about the 448 SP version, I might think that the 5% figure is barely plausible. But that claim seems very fishy to me unless he was testing at CPU limited resolutions.

OTOH, I was quite skeptical that 64 texture units would be enough for NVidia to get the performance they desired, regardless of efficiency improvements.

Well, considering the handpicked test got the 448 SP version around 20-30% faster than the HD5870, I don't see the 5% being necessarily much off in some other title
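A single hand-picked 20-30% win is arithmetically compatible with a ~5% overall lead. A toy sketch with invented per-title numbers (only the 1.25 entry mirrors the hand-picked result mentioned in the thread; the rest are made up):

```python
# Invented per-title results, GF100 relative to HD 5870 = 1.0.
# Only the 1.25 entry reflects the "hand-picked" 20-30% win from
# the thread; the rest are fabricated to show how one outlier can
# coexist with a small average lead.
per_title = [1.25, 1.02, 0.98, 1.04, 1.01, 1.06, 0.99, 1.05]

average = sum(per_title) / len(per_title)
print(f"best single title: +{max(per_title) - 1:.0%}")
print(f"average lead:      +{average - 1:.1%}")
```

One cherry-picked title at +25% still leaves the average across this hypothetical suite at +5%, so neither figure rules out the other.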
 

Maybe they like having a lot of room for the power circuitry and the fan; overheating of the power stage was a weak point of previous Radeons.

I wouldn't mind a long midrange card personally. My beige tower accepts full-length cards, but I'm concerned with heat and price.
I loved the Voodoo5: it had a good length for the price, and I ran it passive.
 

That could be it. The only minor complaints anyone ever makes are about the length, and given the price, I suspect that for a lot of people the psychological boost of having a lot of card for the money makes it more a pro than a con. The stock design is really good for noise too, so one can say it's a pretty fair tradeoff considering the various compromises and improvements.

I'm going to get myself a Sapphire Vapor-X 5870 in about a week. I wasn't holding off out of any desire for a Fermi card, but because I was hoping competition would drive the price down!
 
I'll wager that this, this and this are actually the same length as this.

They're doing it wrong! :devilish:

None of those would fit snugly into my case, and I'm on a very common Antec 300. Yes, it could be made to fit eventually, but probably by violent yet creative methods. :cry:

Though I do question the need for components being spaced that far apart. Fewer PCB layers as a consequence, too?
 
It's funny that Charlie thinks using the "GPGPU shader" for tessellation (second stage) is a good strategy after this:


http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture
Again double-dipping into Charlie's articles after the mods advised not to go off-track? This is after YOU called Cypress another NV30. :LOL:

Anyway, purely taking Charlie's 5% average lead over Cypress at face value, it would be the final nail in GF100's coffin, making it a certain failure. I hope it's not true. But Charlie going balls-out on this piece of information makes it all the more disheartening..
 

Apparently it fits if you don't have an HDD inline with the HD 5870. :)

Just got confirmation.
"Hi guys!

I've registered in here to give you a final answer, because yesterday I bought the HD 5870:

YES, it will fit in the Antec 300. It's pretty close, however.

Have a good time with your HD 5870 (it's a very nice card)"
 
SemiAccurate gets some GTX480 scores
http://www.semiaccurate.com/2010/02/...gtx480-scores/

No, not the thermal cap of the chip, but the tessellation performance in Heaven. On that synthetic benchmark, the numbers were more than twice as fast as the HD5870/Cypress, and would likely beat even a dual-chip HD5970/Hemlock.

Well, the resolution of Fermi's paradox is near. Within a month we'll know whether it is closer to NV30 or G80.

File under Microprocessors and Graphics and Channel and Reviews and Humor and Desktop and Gaming
:)
 
Anyone know what the "chip plans that didn't work out" at NVIDIA were? Was there some other DX11 design on the drawing board, or was that just Charlie speculating? GF100's intensely GPGPU-oriented design has worried me with regard to its 3D rendering efficiency, and it sounds like those worries are on the mark.
 

Oh dear.... :rolleyes:

Looks like Jawed was right. Somebody needs to put "Graphics and Compute/GPGPU are joined at the hip :yep2:" in his sig.

Any volunteers? ;)
 
If you think this is nonsense, could you please provide some cogent arguments in your favor? After all, more than one person on B3D thinks building reticle-sized dies on a cutting-edge process is a bad idea...
I don't see the relevance of your link.
I don't see the relevance of your link.

Jawed was suggesting that the whole G80-based architecture was fundamentally unmanufacturable because only G80 was supposedly on time. The fact that G80 was a seriously large die itself already contradicts that very statement, but never mind: the idea that the smaller to very small versions (G84, G86, G98) were late because of the architecture is laughable and a poster child for a 'correlation doesn't mean causation' argument. (Charlie is way better at this, though; see the R&D story, which must have been the most embarrassing article ever on his website.)

We don't know why they were late (if they were: do you know the internal roadmap?), but unless there are serious process issues (and 40nm is the only one in recent history where this was the case), I'd put all my money on the issues that delay chips at all companies: feature creep first and foremost, ordinary design bugs, backend timing mistakes such as hold-time violations and incorrect multi-cycle paths, noise problems, etc.

The list is endless. Frankly, I wouldn't even know how to design a chip with an architecture that's somehow fundamentally unmanufacturable even though the first (large) version comes out flawless. I would love to hear specific details from Jawed about exactly what would make an architecture unmanufacturable. And how GDDR5 fits in that picture is a similar mystery.

(BTW: I don't buy the GF100 via story from Charlie either. He's very reliable about tape-out dates, but the moment he steps into anything technical he's a loose cannon who really has no clue. I've given up correcting him, it's pointless.)
 
My argument against GF100 stems from the fact that even companies with a widely acknowledged process advantage, like Intel, don't build reticle-sized chips on a cutting-edge process. That could very well be affecting yields. GT200 being late only adds to this, though it wasn't on a problematic or new process.

Although Fermi is essentially a G80 derivative, the difference between them is huge. And I am neutral on the "architectural things affecting manufacturability" issue.

Regarding feature creep, I have a feeling it was delayed after the first Larrabee paper in fall 2008, to achieve feature parity.

As for GDDR5, I think it has been said earlier by neliz too that NV is having trouble designing the memory controllers for GDDR5 on 40nm. The delay of their first GDDR5 part only adds to it.

I agree armchair silicon experts do a lot of "look for correlation; if true, then it's probably the cause", but often it comes out right too.
 