R300 clocked at 315 MHz!!!

Joe DeFuria said:
I was personally expecting roughly the same clock speeds as the Parhelia...low 200's, with 250 being the absolute max.

Parhelia (as well as the GF4) is limited by the AGP power supply and not by transistor size/manufacturing process. With more power, you could get higher clock speeds.
 
Parhelia (as well as the GF4) is limited by the AGP power supply and not by transistor size/manufacturing process. With more power, you could get higher clock speeds.

I wouldn't be so sure about that. First off, if that was their only limitation, then why not take the route of putting a hard-disk/floppy power connector on the board? If power was the problem, it would give them an easy route to higher speeds.

Second, the P10 is a very similarly sized chip to Parhelia on a similar process, and it can run at considerably higher clock speeds (although 3Dlabs were not releasing clocks, I did eyeball a brown tag with the clock speed on one in use), and it doesn't appear to have any of the fancy power regulation stuff that the GF4 does.

I doubt that power is the only limitation with respect to Parhelia.
 
Bjorn said:
Look at the scores for different cards in 3D Mark, then look at the same cards in UT2003.

See any similarities?
Sure, things look alright at the top of the spectrum, but keep comparing all the way down. So, NOW can we stop using 3DMark to compare unique 3D cards?

http://www.anandtech.com/video/showdoc.html?i=1647&p=8

Top 3DMark scores
----------------------
16604 GeForce 4 Ti 4600
14300 GeForce 4 Ti 4400
14085 GeForce 4 Ti 4200
13916 Radeon 8500
12097 GeForce 3 Ti 500
10729 GeForce 4 MX 440
9029 GeForce 4 MX 460
9020 GeForce 2 MX 400
7610 GeForce 2 Ultra
7230 GeForce 2 Pro
6892 Radeon 7500
4112 Kyro II
3103 GeForce 2 MX 200

Top UT2003 Scores
-----------------
94.5 NVIDIA GeForce4 Ti 4600 (128MB)
83.6 NVIDIA GeForce4 Ti 4400 (128MB)
75.3 NVIDIA GeForce4 Ti 4200 (64MB)
70.9 NVIDIA GeForce4 Ti 4200 (128MB)
57.6 ATI Radeon 8500 (128MB)
55.6 NVIDIA GeForce3 Ti 500 (64MB)
55.0 ATI Radeon 8500 (64MB)
54.4 Matrox Parhelia (128MB)
52.4 ATI Radeon 8500LE (128MB)
49.1 NVIDIA GeForce3 (64MB)
42.9 NVIDIA GeForce3 Ti 200 (64MB)
42.9 NVIDIA GeForce4 MX 460 (64MB)
39.2 ATI Radeon 7500 (64MB)
36.5 NVIDIA GeForce4 MX 440 (64MB)
32.4 ST Micro Kyro II (64MB)
30.4 NVIDIA GeForce2 Ultra (64MB)
25.9 NVIDIA GeForce2 Pro (64MB)
14.8 NVIDIA GeForce2 MX 400 (32MB)
7.3 NVIDIA GeForce2 MX 200 (32MB)

By the same rule, any benchmark using anything but the game you want to play is useless. Even games using the same engine (e.g. Quake3) can give different results, so even those comparisons aren't foolproof.

Not totally useless, but they should still only be used as a guide, just as I was suggesting for 3DMark. There are no set standards for what a developer wants to do, how he wants to do it, or how much he is going to use it. Therefore each engine is unique and will give unique performance.
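Just to put a rough number on how well the two lists line up, here's a quick pure-Python sketch that pairs up the thirteen cards appearing in both tables above and computes a Spearman rank correlation. The pairing of 64 MB vs 128 MB UT2003 entries is my own choice, so treat it as an illustration rather than anything rigorous:

```python
# Spearman rank correlation between the two score lists above, using only the
# cards that appear in both (64 MB UT2003 figures where available). Scores are
# copied straight from the lists; the card name abbreviations are mine.

scores = {  # card: (3DMark2001, UT2003 fps)
    "GF4 Ti 4600": (16604, 94.5), "GF4 Ti 4400": (14300, 83.6),
    "GF4 Ti 4200": (14085, 75.3), "Radeon 8500": (13916, 55.0),
    "GF3 Ti 500":  (12097, 55.6), "GF4 MX 440":  (10729, 36.5),
    "GF4 MX 460":  (9029, 42.9),  "GF2 MX 400":  (9020, 14.8),
    "GF2 Ultra":   (7610, 30.4),  "GF2 Pro":     (7230, 25.9),
    "Radeon 7500": (6892, 39.2),  "Kyro II":     (4112, 32.4),
    "GF2 MX 200":  (3103, 7.3),
}

def ranks(values):
    # rank 1 = highest score; there happen to be no ties in this data
    order = sorted(values, reverse=True)
    return [order.index(v) + 1 for v in values]

marks = [v[0] for v in scores.values()]
ut = [v[1] for v in scores.values()]
r1, r2 = ranks(marks), ranks(ut)

n = len(scores)
d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(f"Spearman rank correlation: {rho:.2f}")
```

On these numbers it comes out around 0.86: the ordering agrees well overall, but the big rank swaps further down (GeForce 2 MX 400 versus Radeon 7500 and Kyro II, for instance) are exactly the kind of disagreement that makes 3DMark a rough guide rather than a direct predictor.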
 
DaveBaumann said:
I wouldn't be so sure about that. First off, if that was their only limitation, then why not take the route of putting a hard-disk/floppy power connector on the board? If power was the problem, it would give them an easy route to higher speeds.

Hard to say if it is the only limitation, but AGP power really IS a limitation today. You can't get more than 42 watts out of the AGP slot, and in order to achieve that, your board design has to be really complicated, using three different voltages as well as not going over the current spec for each voltage. Nvidia did quite a good job with their Ti4600 board design; they can use up to 40 watts from AGP.
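For reference, a back-of-the-envelope check of where that ~42 W comes from, using the per-rail current limits usually quoted for a plain (non-Pro) AGP slot (treat the exact amps as approximate):

```python
# Rough AGP slot power budget. The point is that the board has to draw from
# three separate rails and stay under each rail's limit individually, not just
# under the total, which is what makes high-draw board designs complicated.

rails = {  # voltage (V): max current (A) commonly quoted for AGP
    3.3: 6.0,
    5.0: 2.0,
    12.0: 1.0,
}

total = sum(v * a for v, a in rails.items())
for v, a in rails.items():
    print(f"{v:>4} V rail: up to {a} A = {v * a:.1f} W")
print(f"Total from the slot: about {total:.1f} W")  # ~41.8 W, i.e. the "42 W"
```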

Dunno why Matrox thinks external power is bad for them; maybe because Matrox is mainly an OEM company and OEMs don't like hooking up more cables when they're building PCs? Nvidia didn't go that route either (except Canopus with their Spectra series).

Second, the P10 is a very similarly sized chip to Parhelia on a similar process, and it can run at considerably higher clock speeds (although 3Dlabs were not releasing clocks, I did eyeball a brown tag with the clock speed on one in use), and it doesn't appear to have any of the fancy power regulation stuff that the GF4 does.

Well, was it a production board? Was it an AGP or AGP Pro board? But basically yes, different chips can have different power requirements even with a similar process and transistor count. The Radeon 8500 also requires much less power than a GF3 or a Ti4200.
 
LittlePenny said:
Sure, things look alright at the top of the spectrum, but keep comparing all the way down. So, NOW can we stop using 3DMark to compare unique 3D cards?

Well, there aren't any huge problems with the scores, are there?

And, afaik, the Kyro doesn't render cubemaps, so it has an advantage there compared to the cards that do. Another thing (which I think Anand also mentions) is that the Kyro doesn't have a T&L engine, which will probably lead to even worse performance when actually playing the game (compared to the benchmarks).
 
How does the lack of a T&L engine affect gameplay but not benchmarks?

If LOD and other things can be tweaked and MadOnion will still compare those scores to other, untweaked scores, it's time 3DM incorporated a visual check if they want to be considered relevant. The whole point of benchmarking is not to see who gets the highest numbers, but who gets the highest numbers with the same image quality. We all bitched out ATi for their Q3 mipmap "mistake"; I don't see how 3DM numbers with altered LOD can be compared--unless the difference is clearly noted (as with ATi's aniso and nV's MSAA).
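Something like that visual check could be as simple as comparing a frame grabbed mid-run against a reference rendered at the default LOD/filtering settings, and rejecting the score if they diverge. The sketch below is entirely hypothetical (the function name, the tolerance, and the idea that frames arrive as raw RGB buffers are all my own assumptions), just to show the shape of it:

```python
# Hypothetical "visual check": compare a captured benchmark frame against a
# reference frame and only accept the score if they are close enough.
# Frame capture itself is outside the scope of this sketch; assume both
# frames arrive as flat RGB byte sequences of identical size.

def frames_match(reference: bytes, captured: bytes, tolerance: float = 4.0) -> bool:
    """Return True if the mean per-channel difference is within tolerance."""
    if len(reference) != len(captured):
        raise ValueError("frames must have identical dimensions")
    diff = sum(abs(r - c) for r, c in zip(reference, captured))
    return diff / len(reference) <= tolerance

# A score would only be accepted for comparison if frames_match(...) held at a
# handful of fixed checkpoints in each test scene.
```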
 
iRCAFAIK said:
ATi PR subsequently said they were aiming for 300 MHz (at the time of writing) but eventually got 275 MHz - OEM boards were clocked at 250 and were subsequently renamed LEs.

So that's exactly what they did: hyped it higher than it shipped. Seems I was wrong about the review samples. What about the first Radeon? You guys aren't really saying that ATi hasn't fiddled with their frequencies more than they should have?
 
Pete said:
How does the lack of a T&L engine affect gameplay but not benchmarks?

that depends...

for example, in some games my AIW Radeon slows down with HW T&L, but then on my second computer (Celeron 500A), the GeForce DDR's HW T&L turns Rally Trophy from totally unplayable (without HW T&L) into a very enjoyable experience.

Mostly, how much it affects things is up to your CPU...
 
NTD said:
iRCAFAIK said:
ATi PR subsequently said they were aiming for 300 MHz (at the time of writing) but eventually got 275 MHz - OEM boards were clocked at 250 and were subsequently renamed LEs.

So that's exactly what they did: hyped it higher than it shipped. Seems I was wrong about the review samples. What about the first Radeon? You guys aren't really saying that ATi hasn't fiddled with their frequencies more than they should have?

I recall them announcing 250 then shipping 275.

It's not as if other IHVs, especially the one where no criticism is brooked, haven't shipped a product with a much lower clock than expected or hyped.
 
What about the first Radeon? You guys aren't really saying that ATi hasn't fiddled with their frequencies more than they should have?

Are you saying no-one else has 'fiddled with their frequencies' more than they should either? i.e. TNT2s being shown at higher than their release frequencies, or GeForce 4s being reviewed with faster RAM than the retail boards 'because this option is available to OEMs and some may use it'?

At least ATi put out their R8500 preview samples clocked lower than the final retail speeds.
 
Nappe1, I meant I find it hard to believe the Kyro II would benchmark better than it played, while other boards wouldn't. I understand it's always SW TnL; I just don't see how it would differ between benchmarking and gameplay, while a GF4Ti wouldn't--unless the lack of TnL severely affects minimum fps.

For example, does your second system benchmark better with SW TnL than HW TnL, but play better with HW TnL in Rally Trophy? I wouldn't think so.

I'd love to see fps over time graphs for UT2K3, to make sure GF4Tis aren't scoring higher simply because they reach higher occasional peaks--I wonder why Anand doesn't include them? (For instance, my Voodoo 2 gives me 42 fps avg. in CS, but that's because it counters its frequent dips into the low teens with 60+ fps in empty hallways--I don't consider it playable/enjoyable. I wonder if the 8500 and GF4Ti hit the same lows, but the latter merely shoots up to ridiculously high scores in lulls in the action. Why haven't we seen more benchmarks adopting a suggestion I saw here, to cap the max fps at slightly above the avg fps achieved in the first benchmarking run, thus emphasizing minimum, not max, performance?)
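Here's a rough sketch of that capping suggestion with made-up per-frame numbers: clamp each frame to slightly above the uncapped average, so peaks in empty hallways can't offset dips in the heavy scenes. In practice you'd cap the frame rate during a second benchmark run rather than clamp recorded samples, but the arithmetic illustrates the same idea:

```python
# Score a run with every frame clamped to slightly above the uncapped average,
# which rewards steady minimums instead of occasional huge peaks.

def capped_average(fps_samples, headroom=1.1):
    uncapped_avg = sum(fps_samples) / len(fps_samples)
    cap = uncapped_avg * headroom
    capped = [min(f, cap) for f in fps_samples]
    return uncapped_avg, sum(capped) / len(capped)

# Two hypothetical cards with the same uncapped average but very different lows:
steady = [40, 42, 38, 41, 39, 40]   # avg 40, never dips badly
spiky  = [95, 12, 90, 14, 15, 14]   # avg 40, ugly minimums

print(capped_average(steady))  # capped score stays at 40
print(capped_average(spiky))   # capped score drops to roughly 24
```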
 
Pete: I understand your point. Getting a good frame rate all the time is much more important than getting the same average frame rate with very low lows and very high highs. I second adding frame rate distribution graphs to tests.

Once more about Rally Trophy (excellent game with a pretty nice gfx engine, btw...): I don't even know what fps the GeForce DDR is giving in Rally Trophy, in either SW T&L or HW T&L mode. But I can surely say what's unplayable and what's enjoyable.

And I don't even want to know those fps numbers... So in general, you can play and enjoy quite a lot of games (and yes, there are good games beyond the first/third person shooter genre) running at 30-45 fps, but after you see the fps figure, you suddenly think it's not enough, even though just a few minutes earlier you were saying how smoothly it runs...

In Finland we have a saying for this: "what a human doesn't know, he doesn't need either." ;)
 
iRC said:
Are you saying no-one else has 'fiddled with their frequencies' more than they should either? i.e. TNT2s being shown at higher than their release frequencies, or GeForce 4s being reviewed with faster RAM than the retail boards 'because this option is available to OEMs and some may use it'?

I remember the trouble with the first TNT (125MHz dropped to 100MHz) but can't recall if they had the same thing with the next one.

I guess I'm wrong if nobody agrees with me, but IMHO ATi has had a lot more frequency troubles than others (nVidia seemed to learn their lesson).

The only thing I'm trying to say here is that I'm rather sceptical about that high a clockspeed. Hope they prove me wrong.
 
NTD said:
iRC said:
Are you saying no-one else has 'fiddled with their frequencies' more than they should either? i.e. TNT2s being shown at higher than their release frequencies, or GeForce 4s being reviewed with faster RAM than the retail boards 'because this option is available to OEMs and some may use it'?

I remember the trouble with the first TNT (125MHz dropped to 100MHz) but can't recall if they had the same thing with the next one.

I guess I'm wrong if nobody agrees with me, but IMHO ATi has had a lot more frequency troubles than others (nVidia seemed to learn their lesson).

The only thing I'm trying to say here is that I'm rather sceptical about that high a clockspeed. Hope they prove me wrong.

Whenever ATi discusses the specs, they usually give a clock range (e.g. 150-250 MHz) rather than something definitive. Due to that rather smart approach, the last time I heard anyone complain about them not making an announced clock speed was way back when the original Radeon was released.
 
Nappe1 said:
In Finland we have a saying for this: "what a human doesn't know, he doesn't need either." ;)
You Finns screwed it up! ;) It is supposed to be the AMERICAN saying: "What you don't know can't hurt you".
 
John Carmack mentioned at one point that he moved some code from the CPU to the vertex shaders in Doom3 and went from 100% CPU usage to 5% on a GF3.
On the other hand, it also went from, I think it was, 30 fps to 25.
Now, the vertex shader of e.g. a GF4 is a lot faster, so it should end up being faster than the CPU.

Anyway, what happens when you start playing and all the other parts (AI...) of the engine have to be executed by the CPU?
Maybe the 30 fps becomes 20 and maybe the 25 fps stays at 25.

Just speculation of course, but I just want to show that benchmarks might not be entirely representative (it highly depends on the game engine, of course) when comparing SW T&L and HW T&L.
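To make that speculation concrete, here's a toy frame-time budget. All numbers are invented, and it crudely assumes CPU and GPU work can overlap so the slower side sets the frame time. In a flyby-style benchmark with almost no game logic the two T&L paths score the same, but once the CPU also has to run AI and the rest of the engine, only the software T&L path slows down:

```python
# Toy frame-time model: with software T&L the CPU does both game logic and
# vertex work; with hardware T&L the vertex work moves to the GPU and we
# assume the fill-rate-bound rendering hides it.

def fps(cpu_ms, gpu_ms):
    # crude model: CPU and GPU overlap, so the slower side sets the frame time
    return 1000.0 / max(cpu_ms, gpu_ms)

game_logic_ms = 20.0  # AI, physics, game code per frame (invented)
sw_tnl_ms = 13.0      # vertex transform done on the CPU (invented)
raster_ms = 25.0      # fill-rate-bound GPU work, assumed to hide HW T&L (invented)

# Timedemo-style benchmark: almost no game logic running on the CPU
print("bench, SW T&L:", round(fps(sw_tnl_ms, raster_ms)))                  # 40 fps
print("bench, HW T&L:", round(fps(1.0, raster_ms)))                        # 40 fps
# Actual gameplay: the CPU also has to run the rest of the engine
print("game,  SW T&L:", round(fps(game_logic_ms + sw_tnl_ms, raster_ms)))  # 30 fps
print("game,  HW T&L:", round(fps(game_logic_ms, raster_ms)))              # 40 fps
```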
 