NVIDIA GF100 & Friends speculation

Guys, you are talking about art direction, and at the end of it all that has nothing to do with the technical merits of the engines :smile:. It's kind of like asking which was better, Far Cry or Doom 3: well, tell you what, Doom 3 looked a hell of a lot better, but Far Cry had the better engine.
 
Let me get this straight. The NDA expires and the launch is the 26th, but reviewers don't get cards until 1 week later? So basically all the benchmarks for one week will be Nvidia's own? That's a pretty controlled launch, even better than having reviewers that follow guidelines.


It can't even be called a 'paper launch' if that is true. I doubt such a crazy PR move will happen.
 
Erhmm...people did all sorts of wacky hacks to get Crysis to perform better?
(e.g. they ran it in DX9 mode and tweaked ini files...for the same reason...that they thought the DX10 mode ran too slowly)

But if you are surprised that extra features impact performance...oh well...

Yes, but Crysis had a lot of tuning options in the menus. Reports state there is no such thing in Metro.

I have no problem with having to tweak things. I only have a 4850. However, I do have a problem with having to go directly into the ini files to get any kind of fine tuning at all.

You should read up on the game and the problems many are having, and get off your high horse.

I bet we will see proper tweak menus when the Fermi reviews hit the web though.
 
The problem with Metro is the lack of adjustable options.

It's silly that you can't set the tessellation level; even a low, medium and high setting would render it a lot more playable on current hardware.

This is where Crysis succeeded and Metro has failed.

There are reports of people editing files and turning off DOF in DX11 for massive boosts, or turning off tessellation for massive boosts.
If editing ini files is possible, then there are adjustable options, no?
Maybe the developers will add a GUI to manipulate some settings, but in no way will it allow the same level of editing as directly accessing the ini files.
 
Add some "in 1920x1200" and I totally agree with him.

Why the hell should a game running on a three-year-old console struggle to run correctly on an all-new $1000+ PC using a mainstream screen res?

CF/SLI are there for extreme cases (30" monitors, multi monitors, quad-HD screens, stereo...), not for an "optimal" result.

I agree. I think devs are getting sloppy or something. In ages past, scaling in games was way better than it is now. We get double the GPU power, but not double the frame rate. I don't know if games are too CPU dependent and CPUs have slowed in progress, or if devs just are not forward-looking enough, or what.

PS: On a random side note, now everyone is marketed to in the same way. I heard someone saying that to have a good gaming system you had to go all AMD; Intel i7 CPUs were apparently slow for gaming when combined with AMD GPUs, at least in that person's fantasy land. (And yes, they stubbornly clung to the notion.)
 
I never understood the majority of complaints against Crysis interiors.

I'm not saying Crysis interiors are bad at all. I'm asking if they're as good/better than what we're seeing in Metro 2033. 4A seems to have crammed a lot of detail, high resolution textures and many light sources into their environments.

Can we, maybe.. discuss Crysis and Metro in their respective threads?

Yep, agreed. Though it's a more interesting topic than which billionth Fermi rumour is fake or not.
 
Yes, but Crysis had a lot of tuning options in the menus. Reports state there is no such thing in Metro.


People didn't use the menu settings, they tweaked the ini files...do a Google search.

I have no problem with having to tweak things. I only have a 4850. However, I do have a problem with having to go directly into the ini files to get any kind of fine tuning at all.

I repeat:
The DX9 high-I.Q. hack wasn't done in the menu...it was done in the ini files.

You should read up on the game and the problems many are having, and get off your high horse.

Pot, kettle, black:
http://www.tweakguides.com/Crysis_8.html

Bottom line:
You whine because Metro 2033 is a TWIMTBP game...and not because of the ini files.
Otherwise you would have flamed Crysis likewise :LOL:

I bet we will see proper tweak menus when the Fermi reviews hit the web though.

Bookmarked for future reference...
 
Let me get this straight. The NDA expires and the launch is the 26th, but reviewers don't get cards until 1 week later? So basically all the benchmarks for one week will be Nvidia's own? That's a pretty controlled launch, even better than having reviewers that follow guidelines.

We'll have independent reviews on the 26th, I think. The scenario you are positing is a PowerPoint PR launch, not even a paper launch. :cool: NV's reputation would be murdered if they attempted that.
 
Yeah, especially when they mention 4000 MHz for the HD 58xx.
That's true; it certainly makes no sense to use one kind of clock for one card and a different kind for the other card. Otherwise, though, you see mention of all of these; they refer to command/address/data rate, I think, though I'm not sure which of them can actually be called a "clock".

Well, there's no reference to the hot clock there, which will probably matter more than the core clock.
Yes, but 1) I'm not sure that "core clock" here isn't actually the half-hot-clock, and 2) I'd suspect that, just like with older GPUs, they'll keep a very similar core/hot clock ratio.
 
PowerPoint launch is a paper launch.

In a paper launch, you have real reviews from independent (and hopefully, unbiased) 3rd parties.

In a Powerpoint launch, you have powerpoint slides provided by the vendor.

To avoid confusion, maybe I should rename my definition of a PowerPoint launch to a vapor/smoke launch. :smile:
 
Wait a minute. I thought the definition of a paper launch was based on the availability or unavailability of the product at retail? What do reviews have to do with it? If we're talking about PowerPoint slides, then technically Nvidia has launched Fermi about a dozen times to date.
 
Technically two "launches" so far: the white paper launch and the Fermi architecture launch. In a way, you have to think that Nvidia knew all along it'd be this late; otherwise, why bother with the trickle launches trying to keep mind-share?
 
People didn't use the menu settings, they tweaked the ini files...do a Google search.

No? They didn't use the menu settings at all? That sounds silly to me if you're really going to go down that route.

I used the in-game settings and then I further tweaked the game through the ini files. With Metro you have to use the ini files because there are no tweaks available in-game at all.


I repeat:
The DX9 high-I.Q. hack wasn't done in the menu...it was done in the ini files.

I repeat: there are no tuning options at all for Metro aside from low, medium and high. Crysis let you tweak many more settings.

I repeat: why are you arguing when you're wrong?


http://forum.beyond3d.com/showpost.php?p=1408014&postcount=187

This game has 4 presets and only 4 presets to change. It also doesn't support widescreen gaming properly.



That has nothing to do with the discussion, or are you claiming that Crysis only has 4 in-game presets to choose from and no further tweaking in the game menus?


Bottom line:
You whine because Metro 2033 is a TWIMTBP game...and not because of the ini files.
Otherwise you would have flamed Crysis likewise :LOL:

I whine because Metro doesn't offer any ability to customise the game for my tastes. Crysis offered a multitude of settings; Metro offers 4 presets. Crysis supported widescreen gaming; Metro does not.

Yes, Metro is an Nvidia game. But the game should still offer more than 4 presets for its options. That is more akin to a console game than a PC game.



Let me ask you a question. In Crysis, when you chose DX10 mode, could you not change the texture quality independently of the DX mode used? Because that's what this game is doing.



Bookmarked for future reference...

Good job. Proud of you
 
Yes, but 1) I'm not sure that "core clock" here isn't actually the half-hot-clock, and 2) I'd suspect that, just like with older GPUs, they'll keep a very similar core/hot clock ratio.

Not for Fermi. Actually not since GT200, IIRC from what was mentioned a while back in this thread.
It's the scheduler clock that matters, and it's half of the hot clock. The core clock is different.
 
I like this "launch" because both companies are delivering equal performance. It's not as lop-sided as 9700Pro vs. Ti4600/GeforceFX or 8800GTX vs. 2900xt/hd3870. Now that the playing field has somewhat leveled out, it should be exciting to see who is the first one to release legendary next generation technology on the next go around. Regarding GF100 design, could anyone help out and make an assessment of those 3dmark06 Feature tests explaining the scores relating to the architecture? I've read this from Neeyik at futuremark thus far:
Let's assume that the results are genuine - i.e. it was a real GTX 480 being used and that the odd readings in GPU-Z are just because the program's database hasn't been updated yet. The VS results make sense: the simple test is a measurement of brute vertex throughput, which the 480 would be better at, whereas the complex test is about shader strength, favouring the 5870. The PS test is all about shader (5870 clear favourite then), whereas the shader particles test is a combination of shader, texturing and vertex throughput (hence why it's almost even). The fill rate figures would actually be a reasonable fit for a GTX 480 running at 650MHz: multitexturing peak fill rate = 39,000 Mtexels/s for 60 TUs @ 650MHz. 48 ROPs at the same clock speed would give an output of 31,200 Mtexels/s for the single texturing fill rate test, but that doesn't match the indicated 14,493 - it does, though, if the 480 only has one blender per ROP pair.
All of the results seem reasonable apart from the single texturing figure - I don't understand why NVIDIA would go backwards with their ROP designs but considering little else of the Fermi design makes a lot of sense to me, I wouldn't be surprised if this really was a genuine set of results...
When performing alpha blending, primitives that have already been rendered to the back buffer need to be sampled and then blended with the next primitive that's overlaid. This is all done by the ROPs, but some chips, such as the G80, only have one blending unit for each pair of ROPs - so although the chip can read/write 24 pixels per cycle, it can only blend and output 12 pixels per cycle. Thus the fill rate test in 3DMark06 ends up giving results almost half of what the chip is theoretically capable of.
Seems difficult to understand this design.
edit: (numbers)
GTX 480 / HD 5870
14,500 / 17,000 single texturing fillrate
36,600 / 72,000 multitexturing fillrate
630 / 1,100 pixel shader
512 / 430 vertex shader (simple)
340 / 420 vertex shader (complex)
308 / 310 shader particles
10,400 / 9,700 SM 2.0 score
11,800 / 10,600 SM 3.0 score
Please excuse me if this is an irrelevant comparison to make, I am trying to gain a better understanding of these numbers.
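For what it's worth, here is a minimal sketch of the fill-rate arithmetic from the quoted post, in Python, assuming the 650MHz core clock, 60 TMUs, 48 ROPs and the G80-style "one blender per ROP pair" idea from the speculation above (none of which are confirmed GF100 specs):

core_clock_mhz = 650   # assumed core clock
texture_units = 60     # assumed TMU count
rops = 48              # assumed ROP count

# Multitexturing test: limited by texel throughput of the TMUs.
multi_tex_mtexels = texture_units * core_clock_mhz   # 39,000 Mtexels/s

# The single-texturing test uses alpha blending, so it is limited by the
# blenders rather than the raw ROP count. With one blender per ROP pair,
# only half the ROPs can blend and write out a pixel each clock.
blenders = rops // 2                                  # 24 blenders
single_tex_mpixels = blenders * core_clock_mhz        # 15,600 Mpixels/s

print(multi_tex_mtexels, single_tex_mpixels)          # 39000 15600

Both theoretical numbers land within roughly 10% of the leaked 36,600 and 14,500 figures, which is about the efficiency you would expect a synthetic fill-rate test to reach.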
 
BTW, just wondering ... are any of the people here going to PAX with the intention of buying one if it's available for sale? (As has been hinted at.) If so, any chance of posting benchmarks on the 27th? :) (I don't buy into all the benchmarking guidelines noise, but internet news sites have a huge conflict of interest even without explicit shenanigans ... so user reviews are always nice.)
 