R420 Hints from people under NDA!

Re: No indications of valid info on Rage3d

Sxotty said:
I find a certain thing ironic. The people who are saying don't buy the 6800 for Unreal 3 (btw they are right) should have said (and maybe they did) don't buy the 9700/NV30 for HL2 or Doom 3. I cannot comprehend the number of people who bought cards like this specifically, in their words, for those two games, and we are still waiting and waiting.

You're absolutely right; the best thing to do is to wait until the actual game comes out, unless you're already hurting badly and need the new viddy card for current titles too.

I would not compare UE3 to D3/HL2 with regard to preemptive buying, though. You see, there's a tiny difference there: UE3 is known to be MIA for many years to come, whereas both D3 and HL2 should have been here for quite some time now. I already made this analogy a few posts back, but let me re-emphasize this bit: UE3 is now what D3 was back in 2001, not what it was in fall 2003, or what HL2 was in summer 2003 (i.e. "soon to be released"). Buying a few months in advance versus buying a few years in advance is NOT the same level of... erm, stupidity.
 
martrox said:
DoS, There has been a real rise in the volume of fanboys in the last 2-3 months here.....and there has been an equal rise in responses..... It's happened before, many times...... all the way back to the original GeForce/Voodoo3 arguments. Nothing is going to stop it, so why add to the noise level, heh? There's no need to get personal about any of this..... It would be very nice IF everyone - yes, John, even ME! ;) - would stick to the subject.

Copy that.
I am really on the edge of my seat regarding the R420. I can't wait to see it head to head with the NV40. This is probably the first time I am going to get one of the high-end pieces, and not only that, but as soon as they are available too. I have even quit playing FarCry (I have only played until mission 4) because I want to play it at a higher res with all the eye candy and more fps.
Regarding bandwidth efficiency and the R420: if the leaked memory clock speeds for the XT part are true, then one can imagine that it has very good bandwidth utilisation and doesn't need the extra speed in order to be competitive with the 6800 Ultra; otherwise they would go for higher-clocked chips (I hear 600 MHz chips have acceptable levels of availability).
 
By the way (and judging by some of his latest comments), I wouldn't be surprised if Carmack had a "brand" new engine out as well by the time Unreal3 ships. And of course by that time Crytek will have upgraded their own engine, etc. Which means that I doubt that Unreal 3 is going to be "THE" next-gen game, just because we got to see it first :)
 
DoS said:
martrox said:
DoS, There has been a real rise in the volume of fanboys in the last 2-3 months here.....and there has been an equal rise in responses..... It's happened before, many times...... all the way back to the original GeForce/Voodoo3 arguments. Nothing is going to stop it, so why add to the noise level, heh? There's no need to get personal about any of this..... It would be very nice IF everyone - yes, John, even ME! ;) - would stick to the subject.

Copy that.
I am really on the edge of my seat regarding the R420. I can't wait to see it head to head with the NV40. This is probably the first time I am going to get one of the high-end pieces, and not only that, but as soon as they are available too. I have even quit playing FarCry (I have only played until mission 4) because I want to play it at a higher res with all the eye candy and more fps.
Regarding bandwidth efficiency and the R420: if the leaked memory clock speeds for the XT part are true, then one can imagine that it has very good bandwidth utilisation and doesn't need the extra speed in order to be competitive with the 6800 Ultra; otherwise they would go for higher-clocked chips (I hear 600 MHz chips have acceptable levels of availability).

Do we know if the 16-pipe version is going to be available for benchmarking in the first round (April 26th or May 4th, whichever it is)? I know we've heard that the 16-pipe version will be available later, end of May-ish. The reason I ask is that I think it is a mistake for ATI to put a 12-pipe card out against the 16-pipe NV40 as their first move. Sure, a certain amount of extrapolating can be done to try to get an apples-to-apples result for what the 16-pipe R420 can do, but first impressions are important and lasting, so why would you want to send out the B team against the other guys' first string?
 
Maybe ATi doesn't believe this is their first string.
Maybe ATi is having trouble with 4 quad yields, and/or speed binning enough quality chips.
Maybe ATi is trying to recoup $ quickly with a less expensive part that is comparable.
It MAY BE none of the above!! heheh
 
Bry said:
hstewarth said:
All I am really saying is, I am glad that this time there are no longer "Dustbuster" jokes and stupid cheating on benchmarks and other stuff.

Just think what would happen if NVidia came up short and was out of the race... ATI would be the only game in town, and that means ATI could charge any price they want. Do you seriously want this?

It's better for the customers if ATI and NVidia are neck and neck in performance - so there is no true winner. That means customers have a choice.

I personally choose NVidia for another reason. The support is better in professional 3D graphics programs. I don't consider myself a <bleep>, but one who has had great success with NVidia cards on multiple machines.

That's the point these gentlemen have been trying to make to you. The R420 has yet to be released, yet we already see NV's FUD campaign of lies with the FarCry shots. When the R420 is released, and if it beats the NV40, will NVidia return to their same old driver cheats and lies? The verdict is still out on that, and will be for a few months.

Don't prod and incite the guy who is kicking your butt; that's a lesson it seems Nvidia will have to learn the hard way.
 
Clearly R300's memory bus was very efficient but not high speed - will R420's still maintain good efficiency whilst being higher performance?

If ATi keeps to the same design philosophy for the R420 that they had for the R300, I'd say yes.

so why would you want to send out the B team against the other guys' first string?

Maybe they figure the B team is all they need. Leave the 'A' team for any comeback nvidia comes up with.

btw: Has anyone done a per clock comparison between the NV40 and R3** pixel shader pipes?
 
Kombatant said:
By the way (and judging by some of his latest comments), I wouldn't be surprised if Carmack had a "brand" new engine out as well by the time Unreal3 ships. And of course by that time Crytek will have upgraded their own engine, etc. Which means that I doubt that Unreal 3 is going to be "THE" next-gen game, just because we got to see it first :)

That's of course always the case. The Unreal Engine 3 is here today, but not the games based on it. By the time Unreal Engine 3 games start to ship, it'll be "current gen".
 
Snarfy said:
here's what pixel shader 2.0 really looks like:
http://www.azuretwilight.org/gallery/albums/2004_04/FarCry0015.sized.jpg

as you can see, pixel shader 2.0 is NOT as big a step down from PS3.0 as nvidia would like us to believe =)

And here's what PS1.3 looks like:

FarCry0004.jpg


FarCry0006.jpg


The above were taken on a GeForce4 Ti4400 (56.72 drivers) with all settings at the highest, but with no AA and AF at 1. They were taken at 1024x768x32 and resized to 800x600 (to save bandwidth). More shots can be found at http://www.diplo.nildram.co.uk/farcry/

No real point to make, just for your interest, guys :)
 
Bjorn said:
That's of course always the case. The Unreal Engine 3 is here today, but not the games based on it. By the time Unreal Engine 3 games start to ship, it'll be "current gen".

Heh, it's not even here; it's still a work in progress. Hence my belief that it's invalid to talk about the said game/engine right now, apart from commenting on how cool the gfx will look in 2-3 years' time :)

As for that site, if my German hasn't failed me, it's about Carmack giving the NV40 his blessing.
 
digitalwanderer said:
Translation, please, by someone more fluent in German than Babelfish?

Sorry, the gist of it is :
"Since the development of Doom3 is about to be wrapped up, my job is now working on the next generation rendering technology - for wich my platform of choice is NV40 [(film at eleven)] because of support for very long fragment programs, generalised floating point blending and filtering and extreme high performance."

The news blurb also speculates that this engine will use GLSL.
 
DW, that is an older statement from JC. I mean that it came out right when the NV40 did; he basically said he would use it as the minimum spec for his next engine... surprise, not really... the same way he used the GeForce as the base for D3.

"As DOOM 3 development winds to a close, my work has turned to development of the next generation rendering technology. The NV40 is my platform of choice due to its support of very long fragment programs, generalized floating point blending and filtering, and the extremely high performance,"
 
FYI, my father, who does a fair bit of buying on ebay.de for paintings, found Google's translator MUCH better for German than Babelfish. So maybe try that instead of Babelfish if you want a better translation in the future :)
(no idea about other languages)


Uttar
 
DaveBaumann said:
Both systems have a finite amount of compute ability, and so regardless of what happens some operations for both systems are done on the CPU. One advantage that VideoShader has is that the quantity of compute ability scales with the 3D performance.

Yes, but the question is *how much* has to be done on the CPU. The NV40 can run both VideoShader-style processing and their video processor. Simply accelerating deblocking and DCT/IDCT isn't enough. Motion estimation is one of the most expensive parts of the codec. That's why people buy external encoder cards, and why TiVo can encode multiple streams with a 27-56 MHz PowerPC processor + playback + serving up streaming content over a LAN.

Moving motion estimation from the CPU to the GPU enables one of the biggest gains in encoding performance outside of accelerating (I)DCT in hardware. That performance gap only goes up as you move to MPEG-4/H.264 and hi-def (4x4 blocks, SAD, 1/4-pixel interpolation vs 1/2-pixel interpolation, plus 4x as many pixels).
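
To put a rough number on that (purely an illustration on my part, not code from any actual encoder): a naive full-search motion estimator computes a 16x16 SAD at every candidate offset in a search window, something like this C sketch:

[code]
/* Minimal full-search block matching with SAD - illustrative only. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 16   /* macroblock size */
#define RANGE 16   /* search +/- RANGE pixels around the block position */

/* Sum of absolute differences between a block in the current frame and a
 * candidate block in the reference frame (8-bit luma, row stride = width). */
static unsigned sad_block(const unsigned char *cur, const unsigned char *ref,
                          int stride)
{
    unsigned sad = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sad += abs((int)cur[y * stride + x] - (int)ref[y * stride + x]);
    return sad;
}

/* Full-search motion estimation for the macroblock at (bx, by): try every
 * offset in [-RANGE, +RANGE]^2 and keep the one with the lowest SAD. */
static void full_search(const unsigned char *cur, const unsigned char *ref,
                        int width, int height, int bx, int by,
                        int *best_dx, int *best_dy)
{
    unsigned best = UINT_MAX;
    *best_dx = *best_dy = 0;

    for (int dy = -RANGE; dy <= RANGE; dy++) {
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            int rx = bx + dx, ry = by + dy;
            if (rx < 0 || ry < 0 || rx + BLOCK > width || ry + BLOCK > height)
                continue; /* candidate block falls outside the frame */
            unsigned sad = sad_block(cur + by * width + bx,
                                     ref + ry * width + rx, width);
            if (sad < best) {
                best = sad;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
    }
}

int main(void)
{
    enum { W = 64, H = 64 };
    static unsigned char cur[W * H], ref[W * H];

    /* Synthetic test: fill the current frame with pseudo-random pixels and
     * make the reference frame the same image shifted left by 3 pixels, so
     * the best offset for an interior block should come out as (-3, 0). */
    srand(1);
    for (int i = 0; i < W * H; i++)
        cur[i] = (unsigned char)(rand() & 0xFF);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            ref[y * W + x] = cur[y * W + (x + 3 < W ? x + 3 : W - 1)];

    int dx, dy;
    full_search(cur, ref, W, H, 24, 24, &dx, &dy);
    printf("best motion vector: (%d, %d)\n", dx, dy);
    return 0;
}
[/code]

With a +/-16 window that's roughly 1,089 candidate positions x 256 pixels, so about 280,000 pixel differences per macroblock, times roughly 8,000 macroblocks per hi-def frame, times the frame rate. Real encoders use much smarter search patterns than this, but it's easy to see why this step dwarfs the DCT/IDCT and why moving it off the CPU matters so much.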

For MPEG-2 decoding, 95% of the work is offloaded. For encoding, well over 50%. This means a 3 GHz Pentium box could encode two hi-def streams with marginal impact on system performance.

This doesn't mean the NV40 is doing stuff that is impossible for the R300 to do, just that it is doing stuff with a lot less CPU overhead. After all, a PC with no hardware acceleration at all (e.g. the crappiest integrated video) can do MPEG-2 encoding. I do MPEG-2 encoding on a Voodoo3-equipped system.

Of course, they could always buy a third party encoder chip and put it on the PCB as well.
 
Yes, but the question is *how much* has to be done on the CPU. The NV40 can run both VideoShader-style processing and their video processor. Simply accelerating deblocking and DCT/IDCT isn't enough.

Err, yeah, I think you'd be underestimating things if you thought that was all that was going to happen.

This doesn't mean the NV40 is doing stuff that is impossible for the R300 to do, just that it is doing stuff with a lot less CPU overhead.

I would wait for the testing myself.

Of course, they could always buy a third party encoder chip and put it on the PCB as well.

Given ATI's background, do you think that's likely?
 
Why not use an external encoder?

I dunno - ATI isn't hesitant about using external components on their AIW products. I don't know why they would hesitate to use an external encoder solution if they thought it was in their customers' best interests.

However, you do have a point in that ATI would prefer to use their own, and they have no hesitation about developing their own.

One thing I always wonder is how ATI does all the stuff they do considering the size of their company. It seems they have more diverse interests than nV does.
 
DaveBaumann said:
Err, yeah, I think you'd be underestimating things if you thought that was all that was going to happen.

I'm not saying anything, Dave, because I don't know the specs of the R420. On the other hand, to say that "ATI already has a programmable video processor, they use the shaders" (so can any other DX9 card) as a response to NVidia's VOP is just misleading. The two solutions do not help solve the same problems in the video codec pipeline.

I would wait for the testing myself.
I've already seen live tests on a 3 GHz P4. It beats my current R300 + 3 GHz P4 system in CPU efficiency on DVD-sized video. All I'm saying is, the NV40 raised the bar. I'm not saying it is better than an unknown R420. Simply put, I view a dedicated encoder + pixel shaders as superior to pixel shaders alone for video processing. The less CPU, the better for me.


Of course, they could always buy a third party encoder chip and put it on the PCB as well.

Given ATI's background, do you think that's likely?

I don't know. They could either choose to make the Theatre chip programmable, or add fixed function motion estimation, or they could simply buy a cheap encoder IC from the dozens of companies selling them.

I have no doubt they'll offer similar functionality. I'll wait to see how it performs CPU-wise against the NV40 when both of them have mature drivers. But the early data for me already shows the NV40 is an improvement over previous-generation video solutions.
 