Semi Accurate's 4XX views

It's really a great product aside from the heat and power usage. Its performance is impressive, and if the laws of physics would stop getting in everyone's way, we'd have unbelievable hardware. :D

It'd be disappointing to me even if the power issues and availability weren't there. It's just not enough of an improvement over the GTX 285. The Radeon HD 5870 is in the same situation. I expected (until the rumors on the GTX 480 hit, anyway) closer to a 70-80% improvement with both. I figured, since it was a new generation, we'd see these improvements, and we didn't.
 
Well, the B3D crowd should be happy, since only 13 people here are interested in the cards :LOL: Everybody should be able to get one if they're quick enough. There were 480s and 470s in stock at Newegg today for 10-20 minutes at a time on several occasions.

It'd be disappointing to me even if the power issues and availability weren't there. It's just not enough of an improvement over the GTX 285. The Radeon HD 5870 is in the same situation. I expected (until the rumors on the GTX 480 hit, anyway) closer to a 70-80% improvement with both. I figured, since it was a new generation, we'd see these improvements, and we didn't.

Yeah, the gains just aren't there. Too bad we don't have good profilers, because some games show massive gains while others are much less impressive. It would be nice to understand why.

Take this bench for example @ 1920x1200. http://www.computerbase.de/artikel/...6/#abschnitt_world_in_conflict_soviet_assault

The HD 5870 leads the 4890 by 53% and 47% with 4xAA off and on, respectively. The GTX 480 leads the 285 by 83% and 47%. Note the big difference in the AA-off scenario. Different titles will exhibit completely different behavior. It's unfortunate that we don't have much more than framerates to go on.
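For anyone wondering how those leads are derived: they're just framerate ratios. A trivial sketch in Python, with made-up FPS numbers rather than the actual ComputerBase results:

    # "Leads by X%" is just a framerate ratio minus one. The FPS values
    # below are invented for illustration, not taken from the review.
    def lead_pct(new_fps, old_fps):
        return (new_fps / old_fps - 1.0) * 100.0

    print(round(lead_pct(55.0, 30.0)))  # ~83, an AA-off-sized gap
    print(round(lead_pct(44.0, 30.0)))  # ~47, an AA-on-sized gap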
 
It'd be disappointing to me even if the power issues and availability weren't there. It's just not enough of an improvement over the GTX 285. The Radeon HD 5870 is in the same situation. I expected (until the rumors on the GTX 480 hit, anyway) closer to a 70-80% improvement with both. I figured, since it was a new generation, we'd see these improvements, and we didn't.
Clearly we've found the realm of diminishing returns here in 2010. But also, Fermi is quite obviously down-clocked and down-specced due to manufacturing issues. The 480 appears to be clocked and specced just well enough to slightly beat the 5870. Just enough to get by without being a failure and ending up on the HD 2900 XT path of making no money.
 
He said that Cypress would be faster than GT300 in every DX10/11 application.

I can't see him saying DX10; that doesn't make a lot of sense. I can believe he might have said it for DX11, since the general belief a while back was that DX11 was perhaps an afterthought for Nvidia, plus his own assumption that Fermi uses shaders for tessellation. Both assumptions were false.
 
I meant that it's not doing anything different than what Ati does in regards to what work gets done in shaders versus fixed function hardware.
 
Just think about how we used to have GPUs that didn't even need heatsinks, and then you realize how manufacturing just isn't keeping up.
The chips were a lot smaller back then, though.

I meant that it's not doing anything different than what Ati does in regards to what work gets done in shaders versus fixed function hardware.
No one has proved what each vendor is doing, so you can't confirm whether Charlie's right or wrong.
 
Don't you think the benefit of the doubt should go to the people who designed the hardware and not an internet blogger who has often failed to demonstrate even a superficial understanding of GPU architecture? It's their word against his and on this topic his word counts for a bit less than nought.
 
It's not about believing Charlie; it's about not believing everything a PR/marketing person says and deciding for myself. In Nvidia's case I have yet to see evidence of the 15 "polymorph engines" being anything more than 15 multiprocessors, so I remain undecided.

If you read between the lines, Charlie likely took this bit of speculation and translated it into "tessellation will suck". If the speculation is accurate, it was obviously the wrong leap to make.
 
It'd be disappointing to me even if the power issues and availability weren't there. It's just not enough of an improvement over the GTX 285. The Radeon HD 5870 is in the same situation. I expected (until the rumors on the GTX 480 hit, anyway) closer to a 70-80% improvement with both. I figured, since it was a new generation, we'd see these improvements, and we didn't.

This is the first new API we've had in, what, three years? So supporting it most likely sapped performance. Just another DX10 part would have been much faster, I'd say.
 
Score another point for Charlie.

Today is GTX 480 launch day. As Charlie crowed, there aren't any boards available. None. Not on Newegg, or Amazon, or TigerDirect, or Mwave, or ZipZoomfly, or Fry's, or CDW, or evga.com.
Even on eBay you'd expect a couple, but the listings there aren't for cards the sellers have in their hands; they're for preorders.

I think Charlie is having a schadenfreude orgasm today.
Honestly, I don't think anyone in their right mind expected more than 50k boards. Simply put, it's some nice smoke pumped out by PR, and idiots are munching it up like gluttons.
 
neliz, no. Let's wait to see what additional review sites state. I doubt the 5870 ever has to run at 100%, let alone anywhere near that level. I think they're going for some self-justification of their shiny and expensive new purchase.

That's kind of what I meant; I don't think a 5870 at 100% even touches 60 degrees operating temperature. Power use is also ignored, etc. So yes, the second batch of reviews will be most interesting.
 
He did, in an article he wrote back in the Dec/Jan time frame. Wouldn't surprise me at all if he has deleted it.
A ~225 W, full-die card would indeed have ended up slower than Cypress XT, if it were manufacturable.

Frequency is not an issue, and a substantial part of the power draw comes from static leakage; the RAM controller seems to be problematic too.

Charlie made a lot of false claims, but in the end his claims were broadly correct. The article you refer to is still there, but he was speculating about a 500 MHz "GT300", which he optimistically described as "a tad slower than Cypress", while the end products indicate it would have been in HD 5850 territory.
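As a rough back-of-the-envelope (my assumptions, not Charlie's numbers: performance scaling linearly with shader count times clock, and a ~10% average GTX 480 lead over the HD 5870):

    # Crude sketch only; real scaling is never this linear.
    shipping   = 480 * 700   # GTX 480 as sold: 480 shaders, 700 MHz core
    speculated = 512 * 500   # the hypothetical full-die 500 MHz part
    ratio = speculated / shipping              # ~0.76 of a GTX 480
    gtx480_over_5870 = 1.10                    # assumed average lead; varies by review
    print(round(ratio * gtx480_over_5870, 2))  # ~0.84 of an HD 5870 -> HD 5850 territory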
 
It's not about believing Charlie; it's about not believing everything a PR/marketing person says and deciding for myself. In Nvidia's case I have yet to see evidence of the 15 "polymorph engines" being anything more than 15 multiprocessors, so I remain undecided.

Sorry, but that makes no sense. First of all, you will never see "evidence" of most architectural features. Secondly, why in the world wouldn't Nvidia milk the removal of fixed-function hardware if that were the case? That would be a much bigger PR win than what they're claiming now. I don't see the PR benefit of having fixed-function hardware. Just because Charlie made it sound like doing work in the shaders is inherently inferior, people have jumped on that bandwagon.

The article you refer to is still there, but he was speculating about a 500 MHz "GT300", which he optimistically described as "a tad slower than Cypress", while the end products indicate it would have been in HD 5850 territory.

In other words, if the facts had turned out to match his speculation, then he would have been right. Brilliant: even when he's completely wrong, it's not his fault; it's only because the facts didn't cooperate.
 
Well, in this sea of debate about SemiAccurate's views and their accuracy, it seems Charlie made one mistake yesterday, about OpenGL 4.0 support, here: http://www.semiaccurate.com/2010/04/12/nvidia-misses-opengl-40-promises/

If you want the Nvidia version, click here and hit F5 a lot. Given how long you might have to wait, SemiAccurate does not recommend holding your breath while doing so. SemiAccurate also disclaims any liability for worn or broken F5 keys. You've been warned.

A few hours later, the drivers supporting OpenGL 4.0 were released here: http://developer.nvidia.com/object/opengl_driver.html.
It seems they were delayed a bit because of a bug, but they should work fine now. But NOOOO, Charlie had to predict doom for nVIDIA as usual :rolleyes:.
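If anyone wants to check what their driver actually exposes, here's a minimal sketch using the Python glfw and PyOpenGL packages (my choice of tooling, nothing Nvidia-specific); requesting a 4.0 context simply fails on drivers that don't support it:

    # Minimal OpenGL version probe; requires the glfw and PyOpenGL packages.
    import glfw
    from OpenGL.GL import glGetString, GL_VERSION

    glfw.init()
    glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 4)  # request an OpenGL 4.0 context
    glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 0)
    win = glfw.create_window(64, 64, "gl4-check", None, None)  # falsy if 4.0 unsupported
    if win:
        glfw.make_context_current(win)
        print(glGetString(GL_VERSION).decode())  # driver-reported version string
    else:
        print("No OpenGL 4.0 context available")
    glfw.terminate()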

And now that the Dear Leader of an army of nVIDIA haters acknowledges that he was wrong, guess what? He still hasn't changed the article, hours after it was debunked by an nVIDIA driver release, which the users of his own forum have already pointed out to him.

Was SemiAccurate accurate about many things regarding Fermi? You bet it was. But does it have an agenda against nVIDIA? You bet it does. That is not good for credibility, despite it being right about some things.
 
Was SemiAccurate accurate about many things regarding Fermi? You bet it was. But does it have an agenda against nVIDIA? You bet it does. That is not good for credibility, despite it being right about some things.

What Nvidia lovers seem to find hard to swallow is the fact that, while openly biased, he was no more wrong than the sources they love. What I find laughable is people who were openly fooled by Nvidia officials these past six months still acting as if that were fine.
 
Sorry, but that makes no sense. First of all, you will never see "evidence" of most architectural features. Secondly, why in the world wouldn't Nvidia milk the removal of fixed-function hardware if that were the case? That would be a much bigger PR win than what they're claiming now. I don't see the PR benefit of having fixed-function hardware. Just because Charlie made it sound like doing work in the shaders is inherently inferior, people have jumped on that bandwagon.
I agree we will never see evidence of most architectural features, but that's no reason to blindly take what they say as fact. This is probably a dead-end discussion, but what doesn't yet make sense to me is why you'd have 16 fixed-function units when 4 would have been sufficient. Maybe it comes down to ease of implementation, and the tessellator is small. Maybe it's marketing not telling us the real number of fixed-function units. Who knows.
 
A 16-way split would hint that the tessellator is performing part of its work on the SMs.
The PolyMorph engine would be a secondary portion of the logic.
Elsewhere, the tessellator's workload (the fixed-function portion between the HS and DS) has been described as some simple fixed-point math and table lookups.
The PolyMorph engine, or the hardware portion of it, could contain the needed lookups and supply setup information for an internal program that runs the fixed-point math on the SM.

Cypress may have some ALUs set aside for this outside of the SIMDs, or its weaker-than-expected tessellation throughput is a sign that there is contention for this resource that a more distributed scheme might have avoided.
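To make the "simple fixed-point math and table lookups" description concrete, here's a toy sketch (plain floats and integer partitioning for readability; all names are mine) of the stage between the hull and domain shaders. It only emits barycentric sample points; the expensive per-vertex surface evaluation happens afterwards in the domain shader, wherever that runs:

    # Toy model of the fixed-function tessellation stage between HS and DS.
    # Real hardware uses fixed-point increments and small lookup tables.
    def tessellate_tri(level):
        points = []
        for i in range(level + 1):
            for j in range(level + 1 - i):
                u = i / level
                v = j / level
                points.append((u, v, 1.0 - u - v))  # barycentric domain point
        return points

    # Each (u, v, w) point is handed to the domain shader, which does the
    # heavy surface math -- the part speculated above to run on the SMs.
    print(len(tessellate_tri(4)))  # 15 domain points at tess level 4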
 
What Nvidia lovers seem to find hard to swallow is the fact that, while openly biased, he was no more wrong than the sources they love. What I find laughable is people who were openly fooled by Nvidia officials these past six months still acting as if that were fine.

Well, I'm not an nVIDIOT either, so go preach someplace else. This laptop has an ATI chip in it, and I don't intend to buy new hardware anytime soon, much less the GTX 400 series. But I hate it when I see unjustified bias, like this recent event, where he's literally creating "news" out of thin air just to badmouth a company.
 