NVIDIA Fermi: Architecture discussion

Yeah, wouldn't May 2010 make this the most belated launch in the history of the graphics market so far? From what I recall, it's never been more than six or seven months.

May? No ... February (probably, with some proper high tides and incense burning) ... though "Fermi" might launch earlier (and won't follow a GT200b-like schedule)

By your reasoning, NVIDIA only needs a DX11 demo similar to Unigine's to have a benchmark answer to another benchmark, and the happy hair-pulling over which one is more accurate (at evaluating exactly what?) can start from there.

As far as I know, the Unigine engine used in older games always heavily favoured nVidia.
 
As far as I know, the Unigine engine used in older games always heavily favoured nVidia.

So what? I responded to a notion that suggested raising Unigine's DX11 demo to an unquestionable industry standard. In that regard I don't care which IHV a tech demo favours; I still won't recognize it as any sort of standard and will judge by real game performance, not just a bunch of selected titles that favour IHV A or B.

What exactly is so hard to comprehend about that?
 
As far as I know, the Unigine engine used in older games always heavily favoured nVidia.
Do you have an example for me?

I just looked up the list of licensees on Unigine's homepage, but there's not a single game listed that I've even heard of so far. In fact, most of them are "unannounced" or "in development".
http://unigine.com/clients/#games
Or is there yet another list?
 
Over at nvnews, the "Tropics" demo saw 4870s (x2) perform just above the 9800GTX; even a GTX260 with AA wasn't much slower than a 4870 without it.
If their previous demos were favouring NVIDIA what makes you think that their new demo (made without NV's DX11 h/w in sight) isn't favouring AMD?
 
I think the real question, when we finally see the Unigine benchmark running on more than one kind of DX11 hardware, will not be which IHV it "favors", but rather whether it's actually benchmarking tessellation performance or rasterizer performance for very small polygons (or maybe something else entirely).

In any case I'm sure it will be interesting.
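
To give a feel for why that distinction matters, here's a rough back-of-the-envelope sketch (the quad-count model is an illustrative assumption, not a measurement of any real hardware): GPUs shade pixels in 2x2 quads, so once tessellation shrinks triangles down to a few pixels, an increasing share of shaded lanes lands in partially covered quads.

Code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical triangle sizes as tessellation gets cranked up. */
    double pixels_per_triangle[] = { 64.0, 16.0, 4.0, 1.0 };

    for (int i = 0; i < 4; ++i) {
        double covered = pixels_per_triangle[i];
        /* Crude model: a roughly square triangle covering 'covered' pixels
           touches about (sqrt(covered)/2 + 1)^2 2x2 quads, because the quads
           straddling its edges are only partially filled. */
        double side   = sqrt(covered);
        double quads  = (side / 2.0 + 1.0) * (side / 2.0 + 1.0);
        double shaded = quads * 4.0;  /* every touched quad shades 4 lanes */

        printf("%5.1f px/tri -> ~%5.1f lanes shaded, utilisation ~%3.0f%%\n",
               covered, shaded, 100.0 * covered / shaded);
    }
    return 0;
}

With those made-up numbers, utilisation drops from roughly two thirds at 64 pixels per triangle to barely over 10% at one pixel per triangle, which is exactly the sort of thing a "tessellation" benchmark could end up measuring instead.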
 
Over at nvnews, the "Tropics" demo saw 4870s (x2) perform just above the 9800GTX; even a GTX260 with AA wasn't much slower than a 4870 without it.
Sorry, I was under the impression you were talking about real games, not benchmark demos. :)

For those: I simply don't know, since I regard them as almost as useless as a 3DMark score, because you never know who sponsored this or that demo. That's seemingly a concern for a lot of people even when talking about real games - the titles publishers and developers intend to make money with, and therefore cannot afford to lock a double-digit percentage of their potential customer base out of.

edit:
FWIW: Did you try to turn on MSAA in furmark and see how it affects framerates?
 
Sorry, I was under the impression you were talking about real games, not benchmark demos. :)

For those: I simply don't know, since I regard them as almost as useless as a 3DMark score, because you never know who sponsored this or that demo. That's seemingly a concern for a lot of people even when talking about real games - the titles publishers and developers intend to make money with, and therefore cannot afford to lock a double-digit percentage of their potential customer base out of.

edit:
FWIW: Did you try to turn on MSAA in furmark and see how it affects framerates?

Unfortunately, we can't trust games either. You don't know who sponsored a given game, either. Remember the Batman: Arkham Asylum fiasco, in which you have to force AA on ATI cards, making them far slower.

Now a shining new example that favors Nvidia cards has surfaced: Borderlands! This industry is funny. I wonder why Microsoft isn't doing something about it?
 
Now a shining new example that favors Nvidia cards has surfaced: Borderlands! This industry is funny. I wonder why Microsoft isn't doing something about it?

Microsoft? What's it to them? Why doesn't ATI do something about it?
 
Now a shining new example that favors Nvidia cards has surfaced: Borderlands! This industry is funny. I wonder why Microsoft isn't doing something about it?

Whoa!!

I think that's the most egregious display of architecture affinity I've ever seen in a game. :???:

They wouldn't have to do much if the developer played it a bit more straight in the first place.

How do you know it's not an ATI driver issue? They aren't exactly known for great launch day support of new games.
 
Normally, you have boards ready, drivers ready and the rest, and you plug the GPU in/solder it down, and run your tests to see if everything comes up OK/correctly. This shouldn't take long. If you have bugs, the test itself should narrow down where it is in silicon. If not, you didn't prep right. Finding, fixing and verifying the fix should be fairly quick since it is likely a pretty specific change.
Maybe GPUs live in an entirely different place, but whatever it is that you're describing is a complete fiction in my world.

The problems you tend to hit during silicon validation are not the ones that are easy to narrow down with a targeted test. Those are usually found and fixed during module simulations. You will typically rerun your chip-level verification suite on the real silicon, but that's the easy part: those tests are supposed to pass because they already did in RTL or on some kind of emulation device. The only reason you rerun them is to make sure that basic functionality is sane.

The things you run into on the bench are hard-to-find corner cases. Something that hangs every 10 minutes. Or a corrupted bit that shows up every so often. They are triggered when you run the real applications: system tests that are impossible to run even on emulation, because it just takes too long. In telecom, this may be a prolonged data transfer that suddenly errors out, or a connection that loses sync. In an SoC, buffers that should never overflow suddenly do, or a crossbar locks up. A video decoder may hang after playing a DVD for 30 minutes. When these things trigger, a complicated dance starts to try to isolate the bugs. It can take days to do so. Very often there is a software fix, by setting configuration bits that, e.g., lower performance just a little and disable a local optimization, but sometimes there is not. These kinds of problems are present in every chip - and even if they're not, you need wall-clock time to make sure they are not. Did I mention that sometimes you need to work around one such bug first before you can run other system tests?
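
As an aside, that "set a configuration bit" kind of workaround usually boils down to flipping a so-called chicken bit in a control register from the driver or firmware. A minimal sketch of the idea, where the register name, bit position and the plain variable standing in for a memory-mapped register are all invented for illustration:

Code:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical chicken-bit workaround: disable a local optimisation in a
 * block that corrupts data under a rare corner case.  Register name, bit
 * position and value are made up; a real driver would poke a memory-mapped
 * register instead of a plain variable. */
#define XBAR_CTRL_PREFETCH_DISABLE  (1u << 7)

static uint32_t xbar_ctrl = 0x000000a5u;     /* stand-in for the MMIO register */

static void apply_errata_workaround(void)
{
    uint32_t val = xbar_ctrl;                /* read */
    val |= XBAR_CTRL_PREFETCH_DISABLE;       /* set the disable (chicken) bit */
    xbar_ctrl = val;                         /* write back */
}

int main(void)
{
    printf("before: 0x%08x\n", xbar_ctrl);
    apply_errata_workaround();
    printf("after:  0x%08x\n", xbar_ctrl);   /* optimisation off, small perf cost */
    return 0;
}

When such an escape hatch exists, the bug becomes an erratum with a footnote about a small performance loss; when it doesn't, you're looking at a respin.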

And then there's the validation of the analog interfaces: making sure that your interface complies with all specifications under all PVT corners takes weeks even if everything goes right. (Corner lots usually arrive a week after the first hot lot, so your imaginary two-week window has already gone down by half.) You need to verify the slew rates and drive strengths of the drivers, the input levels of all comparators, the short-term and long-term jitter of all PLLs, etc.

The whole idea that you can do all that quickly and then do a respin in two weeks is laughable (and you'd be an idiot to do so anyway: you know there are stones left unturned if you do it too quickly). If everything goes well, five weeks is the bare minimum.

And what about the claim that you can fix logic bugs with just 2 metal layers (the real smoking gun, if ever there was one, that you really don't know what you're talking about, thank you)? This would mean you're only changing the two upper metal layers out of 7 or 8, which is surprising because, as I'm sure you're aware, the wire density of M7 and M8 is low. Not a lot of interesting stuff happens at that level, so the chance of fixing anything remotely useful is very, very low.

You usually get away with not having to touch M1, but in 99% of the cases, you don't even bother looking for a fix that doesn't touch M2. Respin time is 100% a function of the lowest layer you change. If you change M2, it doesn't matter that you also modify M3-M8. The majority of backup wafers are parked before M2 or V1.

You may or may not have great insider info about tape-out dates and other dirty laundry. It's very entertaining, but please stay out of the kitchen?
 
In all seriousness, won't Nvidia have a bit of catching up to do? It's possible that they're getting builds of DX11 titles in development, but surely there aren't any A1 samples in developers' hands currently...
 
As a gamer, I'll be happy enough if Fermi support for existing DX9/10/10.1 titles is solid enough at release. TWIMTBP is a really large program compared to ATI's DX11 list, and it tends to cover a lot more of the AAA release titles, regardless of which tech they're on. While I want to see DX11 titles as much as the next guy, as a gamer I'd rather have solid day-one support for games I actually want to play than for a few select games that I only ever run as benchmarks.

That said, I would expect Nvidia developer support to rapidly build DX11 support once they ship their DX11 hardware - I just don't think Nvidia needs to be as selective as ATI on supporting just a few titles - they have a lot more developer support resources to spread around.
 
Unfortunately, we can't trust games either. You don't know who sponsored a given game, either. Remember the Batman: Arkham Asylum fiasco, in which you have to force AA on ATI cards, making them far slower.

Now a shining new example that favors Nvidia cards has surfaced: Borderlands! This industry is funny. I wonder why Microsoft isn't doing something about it?

Tech demos/benchmarks undoubtedly have their own value and deliver useful data. They shouldn't, however, influence someone's buying decision more than a long list of game performance results (and yes, if the list of games is long enough, you can include an equal number of titles that favour IHV X and an equal number that favour IHV Y) - and in that I'm not excluding any benchmark, Futuremark's applications included.

After all, I have the awkward tendency, as a mainstream consumer, to buy a GPU to play games on it and not to endlessly masturbate over benchmark results.
 
In all seriousness, won't Nvidia have a bit of catching up to do? It's possible that they're getting builds of DX11 titles in development, but surely there aren't any A1 samples in developers' hands currently...

Unless there's something fundamentally wrong with their DX11 hardware, I don't really see this as a problem. Especially if it's built to the DX11 specifications.

I am positive Nvidia will support DX11 in their devrel at the very least.

After all, I have the awkward tendency, as a mainstream consumer, to buy a GPU to play games on it and not to endlessly masturbate over benchmark results.

Thanks Ail. I really needed that mental image.
 
Well, you and I aren't immune to such sports in the least. You should start to worry if your palm starts growing hair. :LOL:
 