NV-30 to have...

Any way you slice it, if they use 256 Mbit chips and DDR II with their current rumored design...

This spells real trouble for the Radeon 9700. I don't see how the 9700 will survive the comparative benchmark wars that will ensue, especially considering that the NV30 will in all likelihood be clocked quite a bit higher...

ATi had better get cards out to customers in August. If Nvidia has a production sample or two out to key sites to bench against the 9700...

Ouch..

Of course, if this turns out to be nothing more than another unethical PR move by them... I would not be surprised.
 
Guys and Gals,

Samsung posted their DDR II RAM speeds a few weeks ago (as I noted here in a clearly titled thread): http://www.beyond3d.com/forum/viewtopic.php?t=1814

Basically they are producing 128 Mbit chips capable of 4 GBytes/sec of transfer along a 32-bit bus. So 8 * 4 = 32 GB/sec, on 8 * 128 Mbit = 128 MBytes, provided you have an 8 * 32 bit = 256-bit bus.

And as DDR II is clock-doubled, reaching an effective 1 GHz means the chips are running at 500 MHz.
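The arithmetic above can be sketched as a quick back-of-envelope check (the helper below is purely illustrative, using the figures quoted from Samsung: 500 MHz clock, two transfers per clock, 32-bit interface and 128 Mbit capacity per chip, 8 chips total):

```python
# Back-of-envelope DDR II bandwidth math from the Samsung figures above.
# Assumed numbers: 500 MHz clock, double data rate (2 transfers per clock),
# 32-bit interface and 128 Mbit capacity per chip, 8 chips on the board.

def chip_bandwidth_gb(clock_mhz, bus_bits, transfers_per_clock=2):
    """Per-chip bandwidth in GB/s (1 GB = 10**9 bytes)."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

CHIPS = 8
per_chip = chip_bandwidth_gb(500, 32)   # 4.0 GB/s per chip
total_bandwidth = per_chip * CHIPS      # 32.0 GB/s aggregate
total_bus_width = 32 * CHIPS            # 256-bit bus
total_memory_mb = 128 * CHIPS // 8      # 128 MBytes (128 Mbit per chip)

print(per_chip, total_bandwidth, total_bus_width, total_memory_mb)
```

which reproduces the 32 GB/sec on a 256-bit bus with 128 MBytes quoted above.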
 
Read this presentation:
http://developer.nvidia.com/docs/IO/3122/ATT/sigcourse2002-hardware.pdf


"CineFX" is based on third generation shading features. NV30 goes further than the DX9 specification / Pixel Shader 2.0. Because of highly complex shading techniques that can be made possible the NV30 has an increased instruction management subsystem that can handle shader programs up to 1024 instructions long, and these can contain loops, branches and call & return. NVIDIA will be pushing the CG programming language to write these highly complex shading programs that previously had to be wrote in assembly language.


The SIGGRAPH presentation says 256 static VS instructions, not 1024.
 
More memory bandwidth does not mean higher performance, necessarily - look at Parhelia.

But if ATI needs the extra bandwidth they can have it, since the chip supports it. I don't really see any point in using this as an R300 vs. NV30 thread.
 
tEd said:
The SIGGRAPH presentation says 256 static VS instructions, not 1024.

There's a disparity between the marketing materials they've used; it's already been mentioned in the 'CineFX' thread. I'm not sure which one is correct; I've quoted the one that was given to me.
 
ATi will have a DDR-II 9700 card out by 2003. I'm assuming not long after the NV30 gets released.

ddr2.jpg



http://www.ati.com/vortal/r300/educational/main.html

Woot, we got some exciting video cards coming up, both from Nvidia & ATI.:)
 
just a few things picked up over at rage3d
Apparently the NV40 is going to be able to display images better than real-life pictures at frame rates faster than real time. Not only that, but its display engine will use a source better than real light for its effects! :LOL:
 
Dolemite said:
X-Reaper said:
ATi will have a DDR-II 9700 card out by 2003. I'm assuming not long after the NV30 gets released.

ddr2.jpg



http://www.ati.com/vortal/r300/educational/main.html

Woot, we got some exciting video cards coming up, both from Nvidia & ATI.:)

Dude, an 800x600 desktop? I hope you're on a 486 or something... :rolleyes:

LOL, 800x600 is fine with me. I don't like my letters too small, as I'm only on a 17-inch monitor (16-inch viewable). Hehe, I remember owning a 486. :p

Edit: Okay, you convinced me, I've got my desktop at 1024x768. :oops:
 
from Basic's link ...
davepermen wrote:

about the simultaneous buffers.
About the simultaneous buffers: we can have up to 4 buffers to render into. Does that mean we can have up to 128 bits at max, and can do that as four 32-bit color buffers, two 64-bit color buffers or one 128-bit color buffer, or do we always have 4 as a max? I don't know this yet; I've never found clear documentation about it in an NVIDIA or ATI paper (they render to two 64-bit buffers simultaneously, making me believe it's 128 bits max, not 4 buffers of independent format).

128 bits per pixel max. You can use the pack/unpack instructions to fill it any way you like: 4x32 floating point colors, 8x16 floating point colors, 16x8 fixed point colors. Any bizarre combination you can pack/unpack to is also allowed.

This is confusing... is he saying that the maximum amount of data per pixel is 128bits, regardless of how many render targets you are writing to?
(I.e. you cannot write to 4 different 128bit render targets)?
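If that reading is right, the constraint reduces to a single bit budget shared across all render targets. A toy sketch of that interpretation (the format combinations are illustrative, not a real API):

```python
# Toy illustration of the "128 bits per pixel total" reading discussed above:
# channels * bits-per-channel may be packed any way you like, as long as the
# combined total stays within the per-pixel budget.

PIXEL_BUDGET_BITS = 128

def fits(channels, bits_per_channel, budget=PIXEL_BUDGET_BITS):
    """True if this packing stays within the per-pixel bit budget."""
    return channels * bits_per_channel <= budget

assert fits(4, 32)        # 4x FP32 colors (one 128-bit target)
assert fits(8, 16)        # 8x FP16 colors (e.g. two 64-bit targets)
assert fits(16, 8)        # 16x fixed-point 8-bit colors
assert not fits(4, 128)   # four separate 128-bit targets would blow the budget
```

Under this model, four 128-bit render targets are ruled out, which is exactly the question being raised.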
 
Basic said:
Either way, he seems trustworthy, and he says it's 256 static instructions. The 1024 figure is an error.

Well, that answers it for the Vertex Shaders - what about the Pixel Shaders? Is the 1024 number in error there?
 
http://www.nvmax.com/Articles/Previews/NV30_SNEAK_PREVIEW/

CineFX" is based on third generation shading features. NV30 goes further than the DX9 specification / Pixel Shader 2.0. Because of highly complex shading techniques that can be made possible the NV30 has an increased instruction management subsystem that can handle shader programs up to 1024 instructions long, and these can contain loops, branches and call & return


The 2.0 spec increases this to 16, and with the complexity of shading offered by the NV30's programmability, texture maps can be replaced by real-time effects. The ATI R300 can process 160 instructions in one go whilst the NV30 does 1024.

Did any of you get confirmation from Nvidia that it's 256 in the VS and not 1024? The paper states 1024, and they have already changed the paper once (it was 500 KB and later became 300 KB), so they had time to fix this if it were wrong - which they didn't, so presumably nothing is wrong.

So for now, it's 1024 in both PS & VS.
 
psurge:
That wasn't the first time I saw that.
NV30 up to 4 "pixels", max 128 bit total
R300 up to 4 "pixels", max 128 bit each

I get the feeling that NV30 stores all the pixels together as a "superpixel". But I don't know whether R300 does it that way or in four separate buffers.


DaveBaumann:
Doesn't all info say 1024 instructions for PS? [<< Edit: clarification]

Btw, I have been wondering if there will be a gamer/pro split on these chips. With GeForce/Quadro you had two chips (that were probably identical, just with different features enabled). To make sure they could sell the pro boards to professionals at a premium, nvidia made the gamer version by disabling some features that didn't make much sense for a gamer but were essential for a pro.

Now the same could be applied here. Most would agree that really large shaders aren't very useful for games, but could be essential for DCC. So why not differentiate the products by program length (plus probably other things)?


alexok:
The last change to the PDF is dated 2002-07-22; cass@nvidia wrote his post on 2002-07-26. So that would explain why it isn't in there yet.
 
Basic said:
psurge:
That wasn't the first time I saw that.
NV30 up to 4 "pixels", max 128 bit total
R300 up to 4 "pixels", max 128 bit each

I get the feeling that NV30 stores all the pixels together as a "superpixel". But I don't know whether R300 does it that way or in four separate buffers.

DaveBaumann:
Doesn't all info say 1024 instructions for that one?

Btw, I have been wondering if there will be a gamer/pro split on these chips. With GeForce/Quadro you had two chips (that were probably identical, just with different features enabled). To make sure they could sell the pro boards to professionals at a premium, nvidia made the gamer version by disabling some features that didn't make much sense for a gamer but were essential for a pro.

Now the same could be applied here. Most would agree that really large shaders aren't very useful for games, but could be essential for DCC. So why not differentiate the products by program length (plus probably other things)?

Precisely!

That's what I meant! All the info EVERYWHERE (people under NDA whom I talked to who confirmed this to me, various sites under NDA (like in yesterday's NVMAX article), Nvidia official papers and whatnot) states that it's indeed 1024 in both PS & VS, so I have no reason to believe otherwise!
 
I've only seen 1024 for the PS, yes.
I've seen 256/1024 for the VS, in documents from nvidia. And the guy who gives out detailed information about Cg, and seems to be from nvidia, says it's 256 and that 1024 is an error.

So as I see it, it's 1024 for the PS and probably 256 for the VS. Maybe it's 256/1024 for the VS in the gamer/pro versions.
 
Basic said:
I've only seen 1024 for the PS, yes.
I've seen 256/1024 for the VS, in documents from nvidia. And the guy who gives out detailed information about Cg, and seems to be from nvidia, says it's 256 and that 1024 is an error.

So as I see it, it's 1024 for the PS and probably 256 for the VS. Maybe it's 256/1024 for the VS in the gamer/pro versions.

Yeah, I just noticed that...

Oh well, 1024 is unnecessary, Nvidia knows better! :D
 
Basic said:
DaveBaumann:
Doesn't all info say 1024 instructions for that one?

I don’t know. My docs do, but I’m just wondering whether or not there is a discrepancy.

Basic said:
Now the same could be applied here. Most would agree that really large shaders aren't very useful for games, but could be essential for DCC. So why not differentiate the products by program length (plus probably other things)?

I’ve wondered about this: are they really different chips? I seem to remember someone saying that the NV25 launch was the first time they would have actually different chips for the workstation market. And with the levels of features crossing over from consumer to workstation and vice versa, is there really much need to risk getting things wrong in your 120+ million transistor products? It seems to me that, from a cost and risk point of view, at the levels we are talking about it would be best just to build them into both (but that’s just MO).

alexsok said:
That's what I meant! All the info EVERYWHERE (people under NDA whom I talked to who confirmed this to me, various sites under NDA (like in yesterday's NVMAX article), Nvidia official papers and whatnot) states that it's indeed 1024 in both PS & VS, so I have no reason to believe otherwise!

The same thing has been said on these forums before, from people I would generally say ‘would know’. I’ve put in a request for clarification to their PR however.
 