NVIDIA GF100 & Friends speculation

With HD5770 out-pacing HD4890, which is a nice surprise, there won't be much excuse if GTX480 isn't considerably faster than HD5870.

The G92-owning horde will certainly want to upgrade as it's pretty miserable. Shame that GF100's looking like it's going to be harder to get than HD5970 was upon launch.

9600GT trounces HD3870, laying to rest any doubt about which is more future-proof :LOL:

Jawed


fixed that for ya /wink
 
You really think the Xbox 360 is running settings even remotely close to the PC version? Let alone the resolution difference?
Of course not... but the idea is still silly. At the very least, HD5870 should be more than enough to run the game with all possible quality settings SMOOTHLY.

I am not holding my breath for GTX 480 though; at best it could be 20% better than HD5870 in this game, which is still not enough.
 
Of course not... but the idea is still silly. At the very least, HD5870 should be more than enough to run the game with all possible quality settings SMOOTHLY.

I am not holding my breath for GTX 480 though; at best it could be 20% better than HD5870 in this game, which is still not enough.

Did you say the same thing when Crysis was first released? IMO there's nothing wrong with a game that pushes the limits of current hardware. 3k shadow maps aren't exactly needed....
 
VLIW, but the increase in IPC you can get from that is far more limited than from vectorization.

More limited? Could you give an example of what a vec4 could do that a VLIW with the same 4 instruction slots couldn't?


I know, there is one more slot in AMD chips.
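
To put the question in concrete terms, here's a toy C sketch of my own (not any real GPU ISA): a vec4 unit issues the same operation across four components, while a 4-slot VLIW bundle can issue four different, independent operations.

[code]
/* Toy C sketch (my own example, not real GPU ISA) of the difference being
   argued about. Both groups below contain four independent operations.   */
#include <stdio.h>

int main(void)
{
    float a = 1.0f, b = 2.0f, c = 3.0f, d = 4.0f;

    /* vec4-style work: the SAME op on four components.
       A vec4 SIMD unit handles this as one instruction, and so can a
       4-slot VLIW bundle (one multiply per slot).                     */
    float v0 = a * 2.0f, v1 = b * 2.0f, v2 = c * 2.0f, v3 = d * 2.0f;

    /* VLIW-style work: four DIFFERENT independent ops.
       A 4-slot VLIW can still pack these into a single bundle; a pure
       vec4 unit would have to issue them one component at a time.     */
    float w0 = a + b;
    float w1 = c * d;
    float w2 = a - d;
    float w3 = b / c;

    printf("%f %f %f %f | %f %f %f %f\n", v0, v1, v2, v3, w0, w1, w2, w3);
    return 0;
}
[/code]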
 
Did you say the same thing when Crysis was first released? IMO there's nothing wrong with a game that pushes the limits of current hardware. 3k shadow maps aren't exactly needed....
Crysis' graphics were phenomenal, and this game is not Crysis; its graphics are barely better, if not worse.
 
The STALKER CoP benchmark seems to have such an issue in its 4th scene: activating AA incurs a considerable drop when combined with CS Ultra HDAO, somewhat comparable to what Heaven shows when tessellation is turned on.

That's something else. Reading their presentations about how they're doing both AA and sunshaft rendering indicates why that is the case.
 
I haven't checked into this thread in a while and it looks like we still aren't at the point where we can discuss actual benchmarks only 10 days away from release day. Surely Nvidia can't expect people to buy this without seeing how well (badly) it performs...

Can anyone explain how we can be so close to release and not have concrete benchmarks from reputable sources?!?
 
I haven't checked into this thread in a while and it looks like we still aren't at the point where we can discuss actual benchmarks only 10 days away from release day. Surely Nvidia can't expect people to buy this without seeing how well (badly) it performs...

Can anyone explain how we can be so close to release and not have concrete benchmarks from reputable sources?!?

Well, from what I can tell, they won't have any cards to sell at "release" anyway (but will 10 days or so after that?). Only available to press for reviews. So yes, you'll be able to see benchmarks and reviews before you decide to purchase.
 
Doesn't make a whole lot of sense to me. GTX480 with faster RAM, but clocked the same? And nearly the same core clock? The difference in performance would be quite small.

And 2000 MHz doesn't really make sense, does it? Shouldn't it be classed as either 1000 MHz or 4000 MHz RAM?
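
If I have GDDR5 right, the three figures are just different clocks of the same memory: the command clock, the write clock at twice that, and the per-pin data rate at twice that again. Rough arithmetic below; the 1000 MHz base and 384-bit bus are assumed purely for illustration, not a spec claim.

[code]
/* Back-of-the-envelope GDDR5 arithmetic. The 1000 MHz command clock and
   384-bit bus are assumed purely for illustration, not a spec claim.    */
#include <stdio.h>

int main(void)
{
    double ck_mhz  = 1000.0;        /* command clock: the "1000 MHz" figure        */
    double wck_mhz = 2.0 * ck_mhz;  /* write clock at 2x CK: the "2000 MHz" figure */
    double mtps    = 2.0 * wck_mhz; /* data on both WCK edges: "4000 MHz" effective */

    double bus_bits = 384.0;        /* assumed bus width, for illustration only    */
    double gbps = mtps * 1e6 * bus_bits / 8.0 / 1e9;

    printf("%.0f MT/s per pin -> %.0f GB/s on a %.0f-bit bus\n",
           mtps, gbps, bus_bits);
    return 0;
}
[/code]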
 
Well, from what I can tell, they won't have any cards to sell at "release" anyway (but will 10 days or so after that?). Only available to press for reviews. So yes, you'll be able to see benchmarks and reviews before you decide to purchase.

Press (EU at least) will get the cards in two days.
 
I haven't checked into this thread in a while and it looks like we still aren't at the point where we can discuss actual benchmarks only 10 days away from release day. Surely Nvidia can't expect people to buy this without seeing how well (badly) it performs...

Can anyone explain how we can be so close to release and not have concrete benchmarks from reputable sources?!?
Surely the benchmarks on launch day will be enough to make a purchasing decision. Unless I’m missing something.
 
Doesn't make a whole lot of sense to me. GTX480 with faster RAM, but clocked the same? And nearly the same core clock? The difference in performance would be quite small.

Well, there's no reference to the hot clock there, which will probably matter more than the core clock.
 
VLIW, but the increase in IPC you can get from that is far more limited than from vectorization.

At least in the CPU compiler world, vectorization refers to extracting vector ILP from loops. In that sense, the only thing vectorization could mean here is that 4 work items are packed together into a single one. Branching will be a pain then.

AMD's compiler, however, is very good at extracting ILP into vec2/3/4 operations and packing them to minimize instruction slot usage.

In shaders, AFAICS, there isn't enough loop-level parallelism within a single work item to make it worth going from the existing VLIW scheme to complete reliance on intra-work-item vectorization.
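
A quick toy sketch of the two kinds of parallelism I mean (plain C with made-up function names, nothing to do with any real shader compiler):

[code]
/* Toy C sketch (assumed names, not shader code) of the two kinds of
   parallelism being contrasted above.                                */
#include <stddef.h>

/* (a) Loop vectorization in the CPU-compiler sense: iterations are
   independent, so four of them (think: four work items) can be packed
   into one 4-wide SIMD operation. If the body branched per item,
   packing four items together becomes painful.                        */
void scale_all(float *out, const float *x, const float *y, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = x[i] * y[i];
}

/* (b) Intra-work-item ILP, which is what the VLIW scheme exploits: the
   three operations below are independent of each other, so a VLIW
   compiler can pack them into a single instruction bundle.            */
void shade_one(float out[3], float x, float y, float z, float w)
{
    float a = x * y;
    float b = z + w;
    float c = x - w;
    out[0] = a;
    out[1] = b;
    out[2] = c;
}
[/code]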
 