No, this was xbitlabs and a lot of AMD fanboys.
That doesn't make sense to me, though. . . you'd think AMD fanboys would be touting tessellation months ago and downplaying it now in the face of PolyMorph. Not that I give a monkey's ass either way.
That would make sense, though I wonder how the situation really looks in the games currently used by reviewers.
The texturing rate isn't that low; in fact it's very comparable to what AMD has - AMD has 4 TMUs per 80 ALUs, Nvidia now 4 per 64 (clock-corrected) - factor in that AMD doesn't get 100% utilization of their ALUs and it's very, very similar.
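For what it's worth, here's a back-of-the-envelope version of that ratio comparison. It's a minimal sketch only; the per-SIMD unit counts, the 2x hot-clock factor, and the ~3.5-of-5 VLIW utilization figure are assumptions along the lines discussed above, not measured values.

```python
# Rough clock-corrected ALU:TEX comparison (sketch, assumptions as stated above).
# Assumption: Fermi's 32 SPs per SM run at 2x the TMU clock ("hot clock"),
# while Cypress' ALUs and TMUs share one clock.

cypress = {"alus": 80, "tmus": 4, "alu_clock_mult": 1.0}   # per SIMD
fermi   = {"alus": 32, "tmus": 4, "alu_clock_mult": 2.0}   # per SM

def alu_tex_ratio(chip):
    """Clock-corrected ALU throughput per TMU."""
    return chip["alus"] * chip["alu_clock_mult"] / chip["tmus"]

print("Cypress ALU:TEX ~", alu_tex_ratio(cypress))              # 20.0
print("Fermi   ALU:TEX ~", alu_tex_ratio(fermi))                # 16.0

# Assume only ~3.5 of Cypress' 5 VLIW slots are filled on average:
print("Cypress effective ~", alu_tex_ratio(cypress) * 3.5 / 5)  # 14.0
```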
I remain to be convinced based purely on a small part of a single benchmark supplied by Nvidia PR. I'll wait to see it running in a game and compared to competing hardware. After all, wasn't it Nvidia telling us just a few months back how DX11 tessellation wasn't that important?
Maybe, but the bar for Fermi isn't to be similar to Cypress. Nvidia claims Fermi's texturing units can achieve 40-70% higher throughput than GT200's depending on the application. Assuming that's true, there isn't much cause for concern. It's the downmarket parts that might have a bit more trouble, as the discrepancy there would be larger.
Geometry performance was a major focus of this new architecture. As much as you want to wait for "independent" views on that, they showed enough demos, synthetics, etc. at CES and editors' days to really show the potential of the PolyMorph engine. The whole point of starting this off now was to show everyone they haven't forgotten about gaming, about a month away from seeing these cards in action.
March? HAHAHA, that's what they WANT you to think. In Feb, it will be April, in March it will be May, in April - June, in May - July, in June - Aug, in July - Sep, in Aug - Oct, in Sep - Nov, and then they will finally release it in time for xmas and their master plan will be fulfilled.
See, it's all really a bar bet that Jen-Hsun made: that the power of Nvidia is so strong they don't even need to release a product to prevent ATI's success, just endless reveals.
I've come to be sceptical when it comes to a company's PR department showing the "potential" of a new architecture. I'd rather see what a gamer sees in an actual game.
Yep, they wanted to highlight their advantage in tessellation heavy scenes. Dastardly, ain't it?
I haven't seen anyone call out Xbit; new low indeed.
Nvidia: DirectX 11 Will Not Catalyze Sales of Graphics Cards.
DirectX 11 - Not Important
Nvidia believes that special-purpose software that relies on GPGPU technologies will drive people to upgrade their graphics processing units (GPUs), not advanced visual effects in future video games or increased raw performance of DirectX 11-compliant graphics processors.
http://www.xbitlabs.com/news/video/...ill_Not_Catalyze_Sales_of_Graphics_Cards.html

"DirectX 11 by itself is not going be the defining reason to buy a new GPU. It will be one of the reasons. This is why Microsoft is in work with the industry to allow more freedom and more creativity in how you build content, which is always good, and the new features in DirectX 11 are going to allow people to do that. But that no longer is the only reason, we believe, consumers would want to invest in a GPU," said Mike Hara, vice president of investor relations at Nvidia, at Deutsche Bank Securities Technology Conference on Wednesday.
"I remain to be convinced based purely on a small part of a single benchmark supplied by Nvidia PR."

I'm not talking about the benchmarks. I'm talking about the architecture. The move to multiple parallel geometry execution units is a huge change, and is according to nVidia the entire reason the product was delayed. Of course, if all of that hard work didn't pay off for nVidia, it would really suck for them. But it is nevertheless entirely clear that geometry performance is the primary thing this video card is designed to have over its predecessors.
"After all, wasn't it Nvidia telling us just a few months back how DX11 tessellation wasn't that important?"

That was when they didn't have a product that did a good job at it. Now that they're touting that it's extremely important, this seems to indicate that they are very confident that this is where their GF100 truly shines in real performance.
No, but it is quite telling. When a company is confident of their product, they don't have to cherry-pick best cases for 'sneak previews'. This tells me that the numbers we are 'seeing' won't be anything close to the real disparity between the cards.
Nvidia said: "DirectX 11 by itself is not going be the defining reason to buy a new GPU. It will be one of the reasons."
"Maybe, but the bar for Fermi isn't to be similar to Cypress. Nvidia claims Fermi's texturing units can achieve 40-70% higher throughput than GT200's depending on the application."

Yes, but keep in mind g92/gt200 never achieved close to theoretical rate (for whatever reason), not even in 3DMark's texture fill rate tests. So 40-70% probably just means they now indeed get to their theoretical rate (that would account for maybe 30%), plus larger L1 caches make them even more efficient. Maybe all that makes them more efficient than RV870's TMUs, but I doubt it's much. Hence the actual ALU:TEX ratio is still similar for Cypress and Fermi.
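A quick sanity check on that "maybe 30%" step; the sustained-efficiency figures for G92/GT200 below are assumed for illustration, not measured:

```python
# If older parts only sustained ~70-80% of their theoretical bilinear texel
# rate, then simply hitting 100% of theory already looks like a sizeable gain
# before any real per-TMU improvement. (Efficiency figures are assumptions.)
for achieved in (0.70, 0.75, 0.80):
    gain = 1.0 / achieved - 1.0
    print(f"{achieved:.0%} of theory sustained -> {gain:.0%} gain just from reaching 100%")
```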
AnarchX said: "The possibility seems high that filtering runs at hot clock, while addressing is half hot clock."

I thought that was already debunked?
"So we are looking at 1.4GHz on 44.8Gtex/s bilinear, trilinear and 2x bi-AF (G80 style). HD 5870 would be in comparison 68/34/34."

Yeah, IF filtering ran at hot clock, that would indeed probably make any filtering cheats rather unnecessary.
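Working those quoted numbers through, as a sketch only: it assumes 64 TMUs on GF100, a 1.4 GHz hot clock with addressing at half hot clock and filtering at full hot clock (so trilinear / 2x bilinear-AF come "for free", G80 style), and 80 TMUs at 850 MHz for the HD 5870, where trilinear costs two bilinear cycles.

```python
# Back-of-the-envelope check of the quoted fill-rate figures (assumptions above).

def gtex(tmus, clock_ghz):
    return tmus * clock_ghz   # Gtexels/s

hot = 1.4                                             # assumed GF100 hot clock
gf100_address = gtex(64, hot / 2)                     # 44.8 Gtex/s addressing
gf100_filter  = gtex(64, hot)                         # 89.6 bilinear filter ops/s
gf100_bilin   = min(gf100_address, gf100_filter)      # 44.8
gf100_trilin  = min(gf100_address, gf100_filter / 2)  # 44.8: still address-limited

hd5870_bilin  = gtex(80, 0.850)                       # 68 Gtex/s
hd5870_trilin = hd5870_bilin / 2                      # 34 Gtex/s

print(gf100_bilin, gf100_trilin, hd5870_bilin, hd5870_trilin)
```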
Silus said: "I'm extremely interested in the mainstream parts, if as was discussed in the past, NVIDIA can simply disable GPCs and end up with chips with: 512 SPs, 384 SPs, 256 SPs and 128 SPs. It would be extremely interesting to see a 128 SPs part in the mid-low end market. Could this part be roughly 1/4 the size of the GF100 die (maybe a bit bigger, like ~150 mm2)?"

I think a bit more, if that's 128-bit / 16 ROPs. Either way, that gets quite close to Juniper die size, and it doesn't look very competitive to me. More like a serious Redwood competitor (though for that it probably wouldn't need the 16 ROPs).
"On second thought, the 384 SPs chip may not be very practical, since it would probably be a considerably large die as well (maybe ~400 mm2)."

And 384 SPs is also the "natural" salvage part of a 512 SP part (either disable a full GPC or one SM within each GPC). Still wondering how the 448 SP part deals with the asymmetries...
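A very rough area-scaling sketch for those hypothetical cut-down parts; the ~530 mm2 full-die figure and the 25% non-scaling "uncore" share are assumptions chosen purely for illustration:

```python
# Crude die-area scaling for hypothetical GF100 derivatives (all figures assumed).
GF100_AREA  = 530.0   # mm^2, assumed full-die area
UNCORE_FRAC = 0.25    # assumed share that doesn't scale with SP count
                      # (memory controllers, ROPs/L2, display, I/O)

def est_area(sps, full_sps=512, uncore_scale=1.0):
    uncore = GF100_AREA * UNCORE_FRAC * uncore_scale
    shader = GF100_AREA * (1 - UNCORE_FRAC) * sps / full_sps
    return uncore + shader

for sps in (512, 384, 256, 128):
    print(sps, "SPs ->", round(est_area(sps)), "mm^2")

# If the 128 SP part also halves its uncore (128-bit bus, fewer ROPs),
# it lands closer to the ~150-170 mm^2 guess in the post above:
print("128 SPs, halved uncore ->", round(est_area(128, uncore_scale=0.5)), "mm^2")
```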
Regarding these raw silicon costs, it doesn't make sense to me that they would simply divide the wafer cost by the number of dies. Would they not realistically expect a higher proportion of the wafer cost to be ascribed to the higher-bin parts, to keep their margins relatively stable throughout the range of bins?

In addition to this, with Hemlock (sorry Dave, I did think about writing R800), wouldn't they also expect the highest-bin Cypress parts to be ascribed the highest proportion of the per-wafer costs?

I know at this point Frankenstein has a greater chance of a fully functional brain than a Fermi board, but even so, the higher-bin chips are actually worth more, so I don't understand why they aren't priced as such in an analysis.
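A toy illustration of that allocation point, with entirely made-up numbers: splitting the wafer cost evenly per die versus in proportion to each bin's selling price, where the latter keeps gross margin roughly constant across bins.

```python
# Even vs. value-weighted wafer-cost allocation (all numbers invented).
wafer_cost = 5000.0
bins = {                    # bin: (good dies per wafer, selling price per die)
    "full part":    (40, 400.0),
    "salvage part": (60, 250.0),
}

total_dies    = sum(n for n, _ in bins.values())
total_revenue = sum(n * p for n, p in bins.values())

for name, (n, price) in bins.items():
    even_cost     = wafer_cost / total_dies              # naive equal split
    weighted_cost = wafer_cost * price / total_revenue   # value-weighted split
    print(f"{name}: even ${even_cost:.0f}/die (margin {1 - even_cost/price:.0%}), "
          f"weighted ${weighted_cost:.0f}/die (margin {1 - weighted_cost/price:.0%})")
```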
"Yes, but keep in mind g92/gt200 never achieved close to theoretical rate (for whatever reason), not even in 3dmarks texture fill rate tests."

These 3DMark tests with pure bilinear filtering were bound by interpolation performance, and GT200 saw an increase here through the additional 8 SPs per TPC.
"I thought that was already debunked?"

Nvidia said that the TMUs run at 1/2 hot clock but did not give more details.
"Yes, but keep in mind g92/gt200 never achieved close to theoretical rate (for whatever reason), not even in 3dmarks texture fill rate tests. So 40-70% probably just means they now indeed get to their theoretical rate (that would account for maybe 30%) plus larger l1 caches make them even more efficient."

Fermi's L1 texture cache is 12KB, just like GT200's.
Do you know that GF100 will be sold in professional and HPC markets where the margins are sky high and AMD has next to 0% market share, despite having great consumer products?
GF100 will make a lot of money for NV, as the R&D for the mainstream consumer market has been paid off already, and *profits* in the Quadro and Tesla markets are worth a LOT.
The videos leaked; someone found them on PCPerspective's website while they were preparing their article.
No one in the press community has the cards in hand yet; they will soon, though. And the benchmarks they have shown so far are actually not their "best advantages" in game situations.