R360 != 0.13 process ?

Chalnoth, do you even need a link? Don't be ridiculous; it's not exactly hidden or anything. But since you asked, I will provide the links.

http://www.digit-life.com/articles2/gffx/gffx-13.html
http://www.digit-life.com/articles2/gffx/gffx-16.html

Shit, I can't find the ShaderMark one right now, though; I can only find GeForce FX 5800 Ultra scores. Hellbinder, ChrisW, WaltC, if any of you know where it is, could you post a link to the scores of the Radeon 9800 Pro significantly outperforming the NV35 in ShaderMark? But here's something that can hold you over: 5800 Ultra unoptimized ShaderMark scores:

http://www.hardocp.com/article.html?art=NDM5LDk=

I seriously doubt that a legit NV35 score would be 4 to 5x faster than an NV30...Perhaps in your dream land, but not in reality.

Read about NV35 Shadermark scandal here:

http://www.beyond3d.com/forum/viewtopic.php?p=132574&highlight=#132574

Yeah, sorry bub, but you are the only one living in a self-deluded dream land here.
 
More Shadermark:

http://www.3dvelocity.com/reviews/5900u/5900u_4.htm

Go there to check out the scores BEFORE the ShaderMark anti-cheat. Notice that the 5800 Ultra is actually FASTER than the 5900 Ultra.

Now check here for the scores AFTER the ShaderMark anti-cheat:

http://www.3dvelocity.com/reviews/5900u/5900u_16.htm

What a joke.

Thread with lots of info here:

http://www.nvnews.net/vbulletin/showthread.php?s=&threadid=13494&highlight=shadermark


Do you need any more proof of how much faster the Radeon 9800 Pro is?
 
surfhurleydude said:
Chalnoth, do you even need a link? Don't be ridiculous; it's not exactly hidden or anything. But since you asked, I will provide the links.

http://www.digit-life.com/articles2/gffx/gffx-13.html
http://www.digit-life.com/articles2/gffx/gffx-16.html
Well, the first link only shows 3DMark03 scores. Those are useless.
The second link only shows the FX 5900 significantly behind in one test, the Unreal II test with anisotropic filtering. One isolated game doesn't mean much. The rest did pretty well.

Shit, I can't find the ShaderMark one right now, though; I can only find GeForce FX 5800 Ultra scores. Hellbinder, ChrisW, WaltC, if any of you know where it is, could you post a link to the scores of the Radeon 9800 Pro significantly outperforming the NV35 in ShaderMark? But here's something that can hold you over: 5800 Ultra unoptimized ShaderMark scores:
Synthetic, unoptimized. Real games are more important. Information coming from developers so far seems positive. Also of key importance is the fact that the FX 5900 has up to three times the FP performance per clock of the FX 5800 (when partial precision is used in at least 2/3rds of the instructions).
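
For anyone unsure what "partial precision" actually gives up, here is a minimal C sketch of a float -> half -> float round trip. It is illustrative only (not any vendor's actual conversion routine) and ignores NaN/Inf/denormal handling and real hardware rounding modes, but it shows roughly how much a 16-bit register keeps compared with a full 32-bit float.

```c
/* Simplified float -> half -> float round trip, ignoring NaN/Inf/denormals,
   to show the precision a 16-bit (partial precision) value keeps vs. FP32. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

static uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t sign = (bits >> 16) & 0x8000u;
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127 + 15;   /* rebias exponent */
    uint32_t mant = bits & 0x007FFFFFu;
    if (exp <= 0)  return (uint16_t)sign;                  /* flush tiny values to 0 */
    if (exp >= 31) return (uint16_t)(sign | 0x7C00u);      /* overflow -> infinity   */
    return (uint16_t)(sign | ((uint32_t)exp << 10) | (mant >> 13)); /* drop 13 mantissa bits */
}

static float half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    int32_t  exp  = (h >> 10) & 0x1F;
    uint32_t mant = h & 0x03FFu;
    uint32_t bits = sign;
    if (exp != 0)
        bits |= ((uint32_t)(exp - 15 + 127) << 23) | (mant << 13);
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float values[] = { 1.0f / 3.0f, 100.5f, 0.001f, 2048.13f };
    int n = (int)(sizeof values / sizeof values[0]);
    for (int i = 0; i < n; ++i) {
        float v = values[i];
        float r = half_to_float(float_to_half(v));
        printf("fp32 %.7g  ->  fp16 %.7g  (error %.3g)\n", v, r, fabsf(v - r));
    }
    return 0;
}
```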

Anyway, I'll end with a quote from one of the digit-life articles you linked:
Also remember that the FX 5900 is one of the fastest runners today.

...

1. Excellent speed in 3D (especially with the AA and anisotropy enabled)
Seems I'm not the only one who doesn't think the FX 5900 is half the speed (per clock) of the R3xx cores.
 
Chalnoth said:
3DMark2003 is a useless benchmark. Real games have completely different goals than a benchmark like 3DMark03 has.
It is not useless. It definitely tells us a good deal about the performance characteristics of the card. Stop spouting nVidia propaganda and come up with an original thought.

So? The NV35 is hard to program for. That's why nVidia made Cg, and that's why nVidia's got a rather expansive developer relations department. In the real world, nVidia's developer relations will make sure that every developer has the chance to optimize their game for nVidia's graphics cards.
Well, actually, if they use FP, the performance just sucks anyway. Besides, your fairytale-world scenario still relies on external influences which are NOT guaranteed, no matter how you try to make them seem so.

Non-optimized scores are therefore meaningless.
Yeah, we'd hate to know how the card could end up performing should a developer not spend the time to optimize for nVidia. We'd hate to know the technical merits of the card (ones not hidden by CHEATS). :rolleyes: :rolleyes:
 
Althornin said:
It is not useless. It definitely tells us a good deal about the performance characteristics of the card. Stop spouting nVidia propaganda and come up with an original thought.
This isn't nVidia propaganda. I've been saying 3DMark is essentially useless since 3DMark2000. If you remember, 3DMark2000 favored nVidia's GeForce line quite strongly.

Granted, there may be some benefit gained from the purely synthetic tests, but even most of the "synthetic" portions of 3DMark tests aren't very good as synthetic benchmarks.

And finally, the GeForce FX 5900 does better than the Radeon 9800 Pro in 3DMark2003. I still say it's a meaningless benchmark.

Well, actually, if they use FP, the performance just sucks anyway. Besides, your fairytale-world scenario still relies on external influences which are NOT guaranteed, no matter how you try to make them seem so.
Not on the 5900. That's what we're talking about here.

Yeah, we'd hate to know how the card could end up performing should a developer not spend the time to optimize for nVidia.
Except, from what I've heard, it's mostly nVidia spending the time to help developers optimize for their hardware. Besides, the installed base of nVidia hardware virtually guarantees optimization (particularly at the low end... nVidia currently has the only low-end DX9 card).
 
noko said:
How in the world could Nvidia get away with that or even hide it??? :?:

I have a feeling that the R360 is more than just another tweaked R300-series part at 0.15 micron. The designs are run in parallel with different variations. The threat of Nvidia with a killer 0.13 process was ever present, so I would think ATI would have been developing a 0.13 version along with the 0.15 version. Since the 0.13 process was immature, ATI did the right thing and ran with the 0.15. Now that the 0.13 process is more mature, it is time for ATI to make the jump. Just my thoughts, no real inside scoop.

Noko, I hope you're right. Even when ATI has a superior card to Nvidia, there appear to be some people (even on these boards) who want to go down fighting with Nvidia and insist it isn't. :rolleyes: Let's hope that one of the IHVs can open up a very clear lead again so these kinds of clutching-at-straws arguments to justify the weaker card's performance stop.

You can see why it's going to be a tough battle for ATI to win over the NVidia mob. Now I'm reading that it's fine and dandy to require developers to heavily optimise for a particular card just to make it comparable? Jeez, as a developer the last thing I want to do is bother with highly optimised paths. Even JC will no doubt tire of this approach and reduce the optimisations required to make a card tick.
 
Chalnoth said:
How they compare in advanced shaders will depend on games, and we don't have the advanced shader-capable games to examine just yet.

With the full PS 2.0 effects enabled in TR:AOD, the 9800 PRO currently looks to be about 2.5x-3x faster than the 5900 Ultra.
 
DaveBaumann said:
With the full PS 2.0 effects enabled in TR:AOD, the 9800 PRO currently looks to be about 2.5x-3x faster than the 5900 Ultra.
Seems to be in line with the synthetic PS 2.0 benchmarks?
 
DaveBaumann said:
Chalnoth said:
How they compare in advanced shaders will depend on games, and we don't have the advanced shader-capable games to examine just yet.

With the full PS 2.0 effects enabled in TR:AOD, the 9800 PRO currently looks to be about 2.5x-3x faster than the 5900 Ultra.

Interesting. And IMO information that should have been included when introducing this brand new benchmark.

I know y'all don't like to do shoot-outs (and in certain cases I agree), but it seems to me that at the introduction of a new benchmark you sort of have to. Otherwise, what context are we supposed to have for the scores? You did a shoot-out of sorts (along with much in-depth explanation and analysis) with the introduction of 3DMark03, and your coverage was the better for it.

If possible, a similar round-up with this new TR benchmark would be much appreciated IMO. You've (by which I mean at least Rev) obviously put a lot of great work into it; let's see what it has to tell us!

(As for NV3x's poor PS 2.0 performance, can't say that's at all unexpected. Just because it's a real game doesn't mean the obvious record of every synthetic PS 2.0 benchmark will suddenly fly out the window.)
 
It's something that we're looking at doing, and we may yet do it. We do have a couple of issues with the way the boards handle the DOF effects, though - the Radeon boards do it in a very controlled fashion, and it's only really seen where you would expect to see it; the FXs are much more indiscriminate with its use and blur the scene all over the place, including areas where it just plain isn't needed. We are trying to understand why the FXs are doing this and what the impact on performance is (we've heard from one source that the excessive blurring would actually improve performance, although we want to understand that).
 
DaveBaumann said:
It's something that we're looking at doing, and we may yet do it. We do have a couple of issues with the way the boards handle the DOF effects, though - the Radeon boards do it in a very controlled fashion, and it's only really seen where you would expect to see it; the FXs are much more indiscriminate with its use and blur the scene all over the place, including areas where it just plain isn't needed. We are trying to understand why the FXs are doing this and what the impact on performance is (we've heard from one source that the excessive blurring would actually improve performance, although we want to understand that).

The FX seems to blur the scene without looking at the depth. In fact, the screen seems divided into strips. A strip with DOF, a strip without, a strip with DOF, a strip without...
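
For what it's worth, here is a toy C sketch of how a depth-driven DOF pass usually decides how much to blur. It is purely illustrative - the constants and function names are made up, and this is not TR:AOD's actual shader - but it shows why the blur radius should be a function of each pixel's depth relative to the focal plane, rather than of which screen strip the pixel happens to sit in.

```c
/* Toy depth-of-field weight: blur radius derived from per-pixel depth,
   sharp at the focal plane and increasingly blurred away from it. */
#include <stdio.h>
#include <math.h>

static float blur_radius(float depth, float focal_depth,
                         float focal_range, float max_radius)
{
    float d = fabsf(depth - focal_depth) / focal_range;  /* 0 at focus, grows away from it */
    if (d > 1.0f) d = 1.0f;
    return d * max_radius;                               /* radius in pixels */
}

int main(void)
{
    /* Sample a few depths with the focal plane at 10 units (made-up numbers). */
    const float focal_depth = 10.0f, focal_range = 8.0f, max_radius = 6.0f;
    float depths[] = { 2.0f, 8.0f, 10.0f, 14.0f, 30.0f };
    for (int i = 0; i < 5; ++i)
        printf("depth %5.1f -> blur radius %.2f px\n",
               depths[i], blur_radius(depths[i], focal_depth, focal_range, max_radius));
    return 0;
}
```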
 
Chalnoth said:
Also of key importance is the fact that the FX 5900 has up to three times the FP performance per clock of the FX 5800...

At least we're now acknowledging that the 5800 sucks-ass.

(when partial precision is used in at least 2/3rds of the instructions)...

Are you sure the moon doesn't also have to be full? I'm sure developers love having to keep instruction counts and precision control under a stringent microscope to ensure some kind of acceptable performance.

1. Excellent speed in 3D (especially with the AA and anisotropy enabled)

So, does that include shading performance, which is entirely what's at issue with the FX cores?
 
Dave H said:
(As for NV3x's poor PS 2.0 performance, can't say that's at all unexpected. Just because it's a real game doesn't mean the obvious record of every synthetic PS 2.0 benchmark will suddenly fly out the window.)

That's just it. If you listened to Kyle, for example, poor PS 2.0 performance for the FX in a real game IS unexpected. Whereas if you "listened to" 3DMark pre-optimizations/cheats, one wouldn't be surprised.
 
Chalnoth said:
Also of key importance is the fact that the FX 5900 has up to three times the FP performance per clock of the FX 5800 (when partial precision is used in at least 2/3rds of the instructions)

You assume that FX12 units have been upgraded to FP16 units... AFAIK it's just not the case.
 
That's just it. If you listened to Kyle, for example, poor PS 2.0 performance for the FX in a real game IS unexpected. Whereas if you "listened to" 3DMark pre-optimizations/cheats, one wouldn't be surprised.

Totally, as I think it would be a complete joke to believe that nVidia would include replacement shader codes for every single game that comes out in their drivers. Oh, but never mind, "even most of the 'synthetic' portions of 3DMark tests aren't very good as synthetic benchmarks". So basically, God himself has participated in this thread and said that the GeForce FX 5900 Ultra's shader speed is perfectly fine, so we should drop it. :LOL:
 
Joe DeFuria said:
Chalnoth said:
Also of key importance is the fact that the FX 5900 has up to three times the FP performance per clock of the FX 5800...
At least we're now acknowledging that the 5800 sucks-ass.
I still say it's only because of the way Microsoft chose to write PS 2.0.
 
Tridam said:
Chalnoth said:
Also of key importance is the fact that the FX 5900 has up to three times the FP performance per clock of the FX 5800 (when partial precision is used in at least 2/3rds of the instructions)
You assume that FX12 units have been upgraded to FP16 units... AFAIK it's just not the case.
Well, from what I've seen, changing FX12 instructions to FP16 instructions results in a minimal performance hit on the FX 5900. What other explanation is there?
 
surfhurleydude said:
Totally, as I think it would be a complete joke to believe that nVidia would include replacement shader codes for every single game that comes out in their drivers. Oh, but never mind, "even most of the 'synthetic' portions of 3DMark tests aren't very good as synthetic benchmarks". So basically, God himself has participated in this thread and said that the GeForce FX 5900 Ultra's shader speed is perfectly fine, so we should drop it. :LOL:
1. I don't expect replacement shader codes. I expect optimized shaders in the games themselves. This is the way that software is going. OpenGL is just beginning to support architecture-specific compiling, and I expect (well, mostly hope, I suppose) DirectX to have architecture-specific compiling soon as well. This should take care of most of the optimization (see the sketch after this list).

2. 3DMark is useless. Always has been. It just means you should look elsewhere for your evidence that the FX "sucks."
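
As a rough illustration of the kind of architecture-specific compiling mentioned above, here is a hedged C sketch using NVIDIA's Cg runtime. The shader source is a placeholder, error handling is omitted, and a working OpenGL context is assumed to already exist; the point is only that the same source gets compiled at run time to whichever fragment profile the installed card reports (e.g. the generic ARBFP1 profile, or an NV3x-specific one).

```c
/* Compile one shader source to the best fragment profile the current GPU exposes. */
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <stdio.h>

static const char *fragment_source =
    "float4 main(float2 uv : TEXCOORD0, uniform sampler2D tex) : COLOR\n"
    "{ return tex2D(tex, uv); }\n";   /* placeholder shader */

void load_best_fragment_program(void)
{
    CGcontext ctx = cgCreateContext();

    /* Ask the runtime which fragment profile this GPU/driver supports best. */
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(profile);   /* profile-specific compiler options */

    printf("Compiling for profile: %s\n", cgGetProfileString(profile));

    /* The same source, compiled differently depending on the architecture. */
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, fragment_source,
                                     profile, "main", NULL);
    cgGLLoadProgram(prog);
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);
}
```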

As for the Tomb Raider game, that is interesting, but one game is not enough. We would need multiple games showing the same thing for it to be of any interest.
 