It hasn't been soft-launched yet.
And, judging from the latest reports, the HD 4870 has been delayed further into July.
GDDR5 supply issues?
However, Peddie gives Nvidia kudos for aggressively pursuing the emerging market for technical computing on graphics chips. The company is launching two board-level products for such high-end apps, including a four-GPU system in a 1U rack-mounted device that delivers up to 4 TFlops at 700W. It sells for $7,995.
Why are the 9800GTX, GX2, and GTX280 so bunched up in Vantage's Cloth sim (here, too)? Tridam shows that GT200 keeps up with G80, so what's the hold-up?
See Nvidia's cloth demo white paper for a description of how the method works.
Found these rather interesting detail diagrams and a die analysis at pc.watch.impress.co.jp.
The original article is in Japanese, if you can read it. Here's the URL:
http://pc.watch.impress.co.jp/docs/2008/0617/kaigai446.htm
You could probably get a 4850 CF for that money.
How much higher would GTX 280's graphics core clock need to go, beyond 602 MHz, to reach the 1 teraflop and 1.2 teraflop milestones AMD has apparently hit with the 4850 and 4870?
This was mentioned in the article, but is it really that much of a limitation? 602M triangles per second is a lot of triangles, ~10M per frame at 60fps. Are we expecting games to require that level of polygon detail?
It's very possible. You have cascaded shadow maps, which have to run through the entire scene's geometry multiple times (though good object culling will reduce that to 1.x times), and need to render more than what's visible, too. You sometimes have local shadow maps. You have environment maps - cube or planar - for reflections. You have Z-only passes and other multipassing depending on the specifics of the render engine.
All these things take a scene's polygon count and multiply it to a much bigger number that's fed to the GPU. Shadow map rendering is particularly setup limited. Say GT200 can do 30 GPix/s uncompressed z-only fillrate (BW limited). The pixels in a large 2048x2048 shadow map with 3x net overdraw could be rendered in 0.4ms. You can't even set up a quarter million polys in that time.
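The back-of-the-envelope numbers above can be checked with a quick script. The 30 GPix/s z-only fillrate is the figure assumed in this thread, and 602 Mtri/s follows from GT200's 602 MHz core at one triangle set up per clock:

```python
# Rough check of the shadow-map fill vs. triangle-setup numbers above.
# Assumed figures from the thread: ~30 GPix/s uncompressed z-only
# fillrate, ~602 Mtri/s setup rate (GT200 at 602 MHz, 1 tri/clock).

fillrate = 30e9          # z-only pixels per second
setup_rate = 602e6       # triangles per second

shadow_map_pixels = 2048 * 2048 * 3   # 2048x2048 map, 3x net overdraw
fill_time = shadow_map_pixels / fillrate

print(f"fill time: {fill_time * 1e3:.2f} ms")                       # ~0.42 ms
print(f"polys set up in that time: {fill_time * setup_rate:,.0f}")  # ~250k
```

So rendering the shadow map's pixels takes about 0.42 ms, in which time setup can only push through roughly a quarter million triangles.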
It's also impossible for the pixel:triangle ratio to stay constant, even accounting for post-transform caches/FIFOs, so in reality you can probably only do 5M total polys at 60fps if you want to be setup-limited less than, say, 30% of the time.
Setup limitations are especially relevant at lower resolutions. GT200 may let you play Crysis at twice the resolution of your old card at the same FPS, but at the same resolution you won't get close to a 100% increase in FPS. A lot of people would like the latter.
This is also a reason that consoles can sort of keep up with faster PC hardware, because they generally render at lower resolution while having similar setup rates.
Assuming a 2.5 shader:core clock ratio, it would take a ~670 MHz core to hit 1.2 teraflops.
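Working that out, with GT200's 240 SPs, its quoted 3 flops per SP per shader clock (dual-issue MAD + MUL), and the 2.5 shader:core ratio assumed above:

```python
# Core clock needed for GT200 (240 SPs, MAD + MUL = 3 flops/SP/clock)
# to reach a given flop target, assuming a 2.5 shader:core clock ratio.

sps = 240
flops_per_clock = 3        # dual-issue MAD + MUL
ratio = 2.5                # assumed shader:core clock ratio

for target in (1.0e12, 1.2e12):
    shader_mhz = target / (sps * flops_per_clock) / 1e6
    core_mhz = shader_mhz / ratio
    print(f"{target / 1e12:.1f} TFLOPS -> "
          f"shader {shader_mhz:.0f} MHz, core {core_mhz:.0f} MHz")
```

That gives roughly 556 MHz core for 1 teraflop and 667 MHz for 1.2 teraflops, which is where the ~670 MHz figure comes from.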
Hopefully an Ultra version of the 55nm GT200b will hit 670-700 MHz.
Any particular reason why they don't improve triangle setup significantly? Is it expensive?
I'm not too sure.
Hopefully an Ultra version of the 55nm GT200b will hit 670-700 MHz.
No wonder you'd rather back out of that one. I could also take a dead mule instead, but why would anyone do something like that? Serious question.