NVIDIA Fermi: Architecture discussion

Let me get this straight: ATI, feature-wise, had better "paper" specs than Nvidia in the RV770 vs GT200 round, yet Nvidia had better real-world performance. ATI has added a little, doubled everything else, and NOW they will be faster in gaming overall vs Fermi? Interesting, considering Fermi is pretty much a doubling and a completely new design vs GT200.

And I really wish I knew why people felt OpenCL was/is going to save ATI in GPGPU computing; at best I see a 30% improvement for them in GPGPU tasks. Hey, at least ATI can always say: look, we own synthetic GPGPU benchmarks, pay no attention to those real-world numbers over there, just look here.

It's pretty ironic for you to talk about "real world performance" whilst holding up a Fermi part that's about six months away from release where we have no performance figures, against an existing AMD part where we actually have real world performance figures.

And real world performance can be easily manipulated. Nvidia have a poor reputation for identifying and changing shaders and game benchmark modes in order to provide better "real world numbers" than you actually get in the real world.
 
It's pretty ironic for you to talk about "real world performance" whilst holding up a Fermi part that's about six months away from release where we have no performance figures, against an existing AMD part where we actually have real world performance figures.

And real world performance can be easily manipulated. Nvidia have a poor reputation for identifying and changing shaders and game benchmark modes in order to provide better "real world numbers" than you actually get in the real world.

WOW! Going all the way back to the FX 5800 days on that one with 3DMark03, huh? And when was that a game? One where the picture used in the "shader" scene was shown to be rendered more accurately on the Nvidia card than on ATI's. And yes, real world: ATI has long since beaten up Nvidia in the professional 3D synthetic apps, but in the ones actually used by pros, it was just the opposite. The 4870 and GTX 285 were no different: on paper, ATI had the better specs for most things, yet got beaten in actual apps. I doubt that is going to change this time around either.
 
Let me get this straight: ATI, feature-wise, had better "paper" specs than Nvidia in the RV770 vs GT200 round, yet Nvidia had better real-world performance.

With far more die area and power.

ATI has added a little, doubled everything else, and NOW they will be faster in gaming overall vs Fermi? Interesting, considering Fermi is pretty much a doubling and a completely new design vs GT200.

Fermi isn't a straightforward doubling. Each core is 2X wider and there are fewer cores. The overall system balance has changed considerably, which has implications for tuning code.

RV870 is a simpler doubling of core count, rather than substantially changing the internal microarchitecture of a core.

Obviously both made other changes (e.g. caching, features in an ALU), but the way they improved is rather different.
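A rough numeric sketch of that difference (the unit counts below are the commonly reported configurations for these chips, used here as assumptions rather than vendor-confirmed figures):

```python
# Back-of-envelope comparison of the two scaling strategies.
# Unit counts are the commonly reported configurations (approximate).

def total_lanes(cores: int, lanes_per_core: int) -> int:
    """Total scalar ALU lanes = number of cores x lanes per core."""
    return cores * lanes_per_core

# Nvidia: Fermi uses fewer but much wider cores than GT200.
gt200 = total_lanes(cores=30, lanes_per_core=8)    # 240 lanes
fermi = total_lanes(cores=16, lanes_per_core=32)   # 512 lanes

# ATI: RV870 is essentially RV770 with the SIMD core count doubled.
rv770 = total_lanes(cores=10, lanes_per_core=80)   # 800 lanes (16-wide, 5-way VLIW)
rv870 = total_lanes(cores=20, lanes_per_core=80)   # 1600 lanes

print(gt200, fermi, rv770, rv870)
```

Both vendors end up with roughly double the lanes, but Fermi gets there by reorganizing the core (about half as many cores, each 4X wider), while RV870 leaves the core's internals alone, which is why the tuning implications differ.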

And I really wish I knew why people felt OpenCL was/is going to save ATI in GPGPU computing; at best I see a 30% improvement for them in GPGPU tasks. Hey, at least ATI can always say: look, we own synthetic GPGPU benchmarks, pay no attention to those real-world numbers over there, just look here.

The point is that nobody wants to develop with goofy proprietary programming models, so OpenCL is likely to end up as the standard. Not many people used ATI GPUs before because the programming was difficult, and OpenCL will make it a heck of a lot easier.

It's not about performance, it's about what developers use.

David
 
It's pretty ironic for you to talk about "real world performance" whilst holding up a Fermi part that's about six months away from release where we have no performance figures, against an existing AMD part where we actually have real world performance figures.

And real world performance can be easily manipulated. Nvidia have a poor reputation for identifying and changing shaders and game benchmark modes in order to provide better "real world numbers" than you actually get in the real world.


Oh, it's not six months away from retail, dude.

You and the general public don't have performance figures :LOL: yet, but that will change soon enough.

And real-world numbers? Well, at this point, again, those are just your ideas!

The point is that nobody wants to develop with goofy proprietary programming models, so OpenCL is likely to end up as the standard. Not many people used ATI GPUs before because the programming was difficult, and OpenCL will make it a heck of a lot easier.

It's not about performance, it's about what developers use.


Oh no no no. The competition has nothing at the moment to compete with PhysX, CUDA, etc. in terms of full-function programs. Don't think it's as easy as saying that because something is an open standard it will be adopted, especially when you are talking about niche industries with deep pockets.
 
WOW! Going all the way back to the FX 5800 days on that one with 3DMark03, huh? And when was that a game? One where the picture used in the "shader" scene was shown to be rendered more accurately on the Nvidia card than on ATI's. And yes, real world: ATI has long since beaten up Nvidia in the professional 3D synthetic apps, but in the ones actually used by pros, it was just the opposite. The 4870 and GTX 285 were no different: on paper, ATI had the better specs for most things, yet got beaten in actual apps. I doubt that is going to change this time around either.

Way to dodge everything I said, and you didn't disagree with anything either. Try looking at consistently higher frame rates for ATI cards, or the higher peaks and lower troughs of Nvidia cards creating unstable frame rates - but getting the benchmark wins. Or the "reviewer edition" cards that were pretty much cherry picked and only available for benchmarking in order to get high benchmarks.

You can't talk about "real world performance", let alone on an unfinished part still six months from release, and claim it's going to categorically beat the card we do have figures on. That's just dreaming about something we know nothing about and have no figures on at all.

ATI cards are the only products we now have real world performance numbers on, and they are beating Nvidia at the moment according to those real world figures.
 
Let me get this straight: ATI, feature-wise, had better "paper" specs than Nvidia in the RV770 vs GT200 round, yet Nvidia had better real-world performance.
What? No it didn't. I just told you the exact opposite. NVidia has much higher specs in several areas.

ATI only won the FLOP spec, and I showed you a link to the most FLOP intensive task out there - dense matrix multiplication - and RV770 destroys GT200. Then I explained to you how missing features and a poorer API handicapped it in some applications (GPU folding). Those are no longer the case. Shared memory may now be an ATI strength. Concurrent kernel support was always there.
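For context on why dense matrix multiplication is about the most FLOP-intensive task out there: it performs on the order of 2N³ floating-point operations on only 3N² values, so arithmetic intensity grows with N and raw FLOPS, not bandwidth, becomes the limiter. A minimal sketch of that ratio (idealized, ignoring cache behavior):

```python
# Arithmetic intensity of dense N x N matrix multiplication:
# ~2*N^3 FLOPs (one multiply + one add per inner-product term)
# over ~3*N^2 elements touched (read A and B, write C).

def matmul_arithmetic_intensity(n: int, bytes_per_element: int = 4) -> float:
    flops = 2 * n ** 3
    bytes_moved = 3 * n ** 2 * bytes_per_element
    return flops / bytes_moved

print(matmul_arithmetic_intensity(1024))  # ~170.7 FLOPs per byte
```

With intensity that high, whichever chip has the bigger FLOP number wins, which is the point being made about RV770 vs GT200 here.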
ATI has added a little, doubled everything else, and NOW they will be faster in gaming overall vs Fermi? Interesting, considering Fermi is pretty much a doubling and a completely new design vs GT200.
Again, WTF are you reading? I just told you that the top end Fermi will be faster than RV870, so why are you putting the opposite words in my mouth?

ATI is going to win for a given BOM.
And I really wish I knew why people felt OpenCL was/is going to save ATI in GPGPU computing; at best I see a 30% improvement for them in GPGPU tasks.
It's not about performance, it's about compatibility. Now the same algorithm can be used for both IHVs. Using GPU folding as an example, their previous architecture required repeating each force calculation due to shared memory limitations. So RV870 could possibly quadruple RV770's throughput there.
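The repeated-force-calculation point comes down to pair counting: with f_ij = -f_ji, each pair only needs one evaluation, but if a core cannot scatter the result to the partner particle (e.g. due to shared memory limits), every particle recomputes its own copy of each interaction, doubling the force work. A toy illustration (the function name is mine, not from any folding code):

```python
# Toy count of pairwise force evaluations for N particles.

def force_evaluations(n: int, can_reuse_pairs: bool) -> int:
    if can_reuse_pairs:
        # Newton's third law: evaluate each pair once, apply to both particles.
        return n * (n - 1) // 2
    # No way to scatter the result to the partner particle:
    # each particle independently evaluates all of its own interactions.
    return n * (n - 1)

n = 1000
print(force_evaluations(n, True), force_evaluations(n, False))  # 499500 999000
```

That factor of two in force work, on top of RV870's doubled unit count, is where the "possibly quadruple" estimate comes from.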
 
WOW! Going all the way back to the FX 5800 days on that one with 3DMark03, huh? And when was that a game? One where the picture used in the "shader" scene was shown to be rendered more accurately on the Nvidia card than on ATI's. And yes, real world: ATI has long since beaten up Nvidia in the professional 3D synthetic apps, but in the ones actually used by pros, it was just the opposite. The 4870 and GTX 285 were no different: on paper, ATI had the better specs for most things, yet got beaten in actual apps. I doubt that is going to change this time around either.

If we go back to the GeForce FX days, there was also Tomb Raider and its DX9 path, as well as Far Cry and its DX9 path. Flash forward to modern times and you get the shady practice of Batman with PhysX and FSAA, and then of course there was, I believe, that Ubisoft Assassin's Creed game with a DX10.1 path that gave ATI cards up to a 20% performance increase, which suddenly had to be taken out of the game because it was an Nvidia partner game.
 
Looks at Razor's sig..

looks at that quote...

Looks at Razor's sig.

Is there anything you would like to tell us?


And you think Charlie was right? His info was so off base it was funny as hell :LOL:. He was making "informed" performance claims well before A1 taped out; he must be a genius to know those things. The 40nm process was in trouble from the beginning, when it was first used for mass production, not just now; it's stupid to think TSMC magically screwed themselves by accident after they were capable of getting good product yields. It's Charlie's luck that it's because of TSMC that the G100 got pushed to January of next year for retail sales.
 
ATI had the better specs for most things
Like what? There's a FLOPS advantage (highly evident whenever an app is math limited much of the time) and a slight one in triangle setup. OTOH, RV770 has less bandwidth and is blown away on paper in colour fillrate, Z fillrate, and texturing rate.
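The FLOPS advantage being referenced can be reproduced from the shipping clocks, treating the commonly quoted peak rates as assumptions (MAD = 2 FLOPs/clock per ATI lane; MAD+MUL dual issue = 3 FLOPs/clock per GT200 lane):

```python
# Peak single-precision GFLOPS = lanes x FLOPs-per-lane-per-clock x shader MHz / 1000.

def peak_gflops(lanes: int, flops_per_clock: int, shader_mhz: int) -> float:
    return lanes * flops_per_clock * shader_mhz / 1000.0

# RV770 (HD 4870): 800 lanes, MAD (2 FLOPs/clock), 750 MHz core clock.
rv770 = peak_gflops(800, 2, 750)     # 1200.0 GFLOPS

# GT200 (GTX 280): 240 lanes, MAD+MUL (3 FLOPs/clock), 1296 MHz shader clock.
gt200 = peak_gflops(240, 3, 1296)    # ~933 GFLOPS

print(rv770, gt200)
```

So on paper ATI's FLOP lead was real, but it was a lead in exactly one column of the spec sheet, which is the point here.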

Your whole argument is based on a BS assumption.
 
And you think Charlie was right? His info was so off base it was funny as hell :LOL:. He was making "informed" performance claims well before A1 taped out; he must be a genius to know those things. The 40nm process was in trouble from the beginning, when it was first used for mass production, not just now; it's stupid to think TSMC magically screwed themselves by accident after they were capable of getting good product yields. It's Charlie's luck that it's because of TSMC that the G100 got pushed to January of next year for retail sales.
:rolleyes:

As much as I hate Charlie's frothing for a disaster at Nvidia, he's been the most accurate "rumor" source so far. Convenient that luck is now brought into play. :LOL::rolleyes:

Oh I'm under no illusion that this is not coincidence. Like I said. Nvidia have succeeded in the thing they wanted most. People to talk about it.
I'm not sure how they succeeded. This photo is not getting any positive reaction, nor is it making people wait or anything of that sort. The 5870 reviews are up at most sites and that news is very mainstream. Even their stock gained 10% in one day.
 
It's hilarious seeing a certain camp now downplaying their past flagship so badly in order to justify their arguments.

As for the pic, no need to bicker about it though.

ATI kinda did the same thing with Juniper (6 months ago) and nobody was asking if it worked. Sure it was playing Wolfenstein and there were demos with EVERGREEN BROADWAY EG stamped on deviceID, but you could argue it's easily faked.

Let's stick to the more tangible stuff and crucify people that deserve to be crucified first. :LOL:
 
I'm not sure how they succeeded. This photo is not getting any positive reaction, nor is it making people wait or anything of that sort. The 5870 reviews are up at most sites and that news is very mainstream. Even their stock gained 10% in one day.


Why? Because a few people at Beyond3D are complaining about it? I've looked at the nZone, Rage3D, NvNews, HardOCP, and Futuremark forums, and I have seen nowhere near the negative reaction I see here from the vocal minority. Yes, I see a few people complaining about it, but I see just as many people who appreciated seeing the card.

Nvidia is not going to please everyone. No matter what they do.
 
Why? Because a few people at Beyond3D are complaining about it? I've looked at the nZone, Rage3D, NvNews, HardOCP, and Futuremark forums, and I have seen nowhere near the negative reaction I see here from the vocal minority. Yes, I see a few people complaining about it, but I see just as many people who appreciated seeing the card.

Nvidia is not going to please everyone. No matter what they do.
Apart from those (what else can I expect there :LOL:), the other two places just have a few posts of related discussion where people are trying hard to prove it fake. I don't see how that's got "people talking"; it's like a few hundred compared to a few hundred thousand for the 5970. Again, fighting real product launches with photos of prototype products is not going to sway buyers, except maybe the green hardcore.
 
Heh, not really. Every forum (this one included) has 3 or 4 "extremists" who revel in mocking Nvidia. Most people just want more info.
 
Apart from those (what else can I expect there :LOL:), the other two places just have a few posts of related discussion where people are trying hard to prove it fake. I don't see how that's got "people talking"; it's like a few hundred compared to a few hundred thousand for the 5970. Again, fighting real product launches with photos of prototype products is not going to sway buyers, except maybe the green hardcore.

I think you are overestimating the intention behind it. Obviously this is not going to stop the mass press of a hardware launch.

It was a teaser. Not a launch. Don't give it more credit than due.
 
It's hilarious seeing a certain camp now downplaying their past flagship so badly in order to justify their arguments.

As for the pic, no need to bicker about it though.

ATI kinda did the same thing with Juniper (6 months ago) and nobody was asking if it worked. Sure it was playing Wolfenstein and there were demos with EVERGREEN BROADWAY EG stamped on deviceID, but you could argue it's easily faked.

Let's stick to the more tangible stuff and crucify people that deserve to be crucified first. :LOL:

Yeah, but when they showed the Juniper wafer, they had live cards running a few feet away, multiple ones at that. By the time they showed that wafer, developers had been seeded for a long time.

Big difference, one was revealing what was in place, the other is a PR stunt.

-Charlie
 