"leaked" NV30 & NV35 specs.

Nappe1

http://forum.oc-forums.com/vb/showt...e089251&threadid=86626&highlight=NV35
NV30;
400MHz GPU - 8 pipeline engine - .13Micron
512-bit architecture
800~1000MHz DDR / QDR memory
400MHz Ramdac
LMA3 on 128MB~256MB Video Memory.
DirectX9 - OpenGL 1.3

NV35;
500MHz GPU - 8 pipeline engine - .13Micron
512-bit architecture
1000~1200MHz DDR / QDR memory
400MHz Ramdac
LMA3 on 128MB~256MB Video Memory.
TT&L Technology
DirectX 9.1 - OpenGL 2.0

I'll go through these GPUs in a bit of detail with everyone:

-First, the NV30 is a brand new architecture. That means no more GeForce branding, and none of the internal guts that were carried through the whole GeForce line.
-The GeForce 4 was 256-bit, while the NV30 is 512-bit.
-The GeForce 4 was built on .15Micron technology while the NV30 is built on .13Micron Technology.
-The GeForce4 is limited to 4096x4096 textures, while the NV30 has an infinite maximum texture size, meaning 16384x16384 and higher should now be possible.
-The NV30 also supports future resolutions of up to 2560x1920 with 64-bit colour (these are 100% rumors, so don't be let down if the final product doesn't support 64-bit colour). The whole 64-bit rumor started almost two years ago, when John Carmack opened his mouth about Doom3's 64-bit support through the help of next-generation Nvidia architecture.
-GeForce 4 features AGP4X while NV30 has full AGP8X support.
-NV30 supports memory bandwidth up to 16GB/s (1GHz).
-PCI-X support (1GB/s), the advanced PCI interface that's an alternative to AGP8X (2GB/s).
-This product is expected to be announced in August 2002 and is expected to hit store shelves by late September or early October.

As the CEO announces this wonderful new revolutionary GPU advancement, consumers continue to purchase inferior products such as the GeForce4.


Now, I'll go through the key new features that NV35 will bring:
-NV35 will hit the 500MHz barrier on .13Micron technology. To do this, Nvidia is removing the T&L unit from the GPU and moving it to an external chip. This allows for a larger heat spreader and better heat transfer from the GPU.
-By removing the T&L unit, Nvidia is building a new technology called TT&L, which runs at 3/2 the GPU speed (e.g. if your GPU ran at 200MHz, the T&L unit would run at 300MHz). In the NV35's case, the T&L unit will now run at a dedicated 758MHz (see the quick check after this list).
-Expected memory bandwidth of the NV35 is between 20~24GB/s.
-3DFX SLI technology will be implemented in higher models (a.k.a. workstation models).
-This product is expected to be announced in January 2003, shipping by March.
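
Quick sanity check of that 3/2 TT&L claim (my own arithmetic on the rumored numbers, nothing official):

```python
# Rumored rule: the dedicated TT&L unit runs at 3/2 of the GPU clock.
# All inputs come straight from the "leak" above, so treat them as rumor.

def ttnl_clock_mhz(gpu_mhz, ratio=1.5):
    return gpu_mhz * ratio

print(ttnl_clock_mhz(200))  # 300.0 -- the post's own example (200MHz GPU -> 300MHz T&L)
print(ttnl_clock_mhz(500))  # 750.0 -- rumored NV35 core; note the leak quotes 758MHz,
                            #          which doesn't quite match the claimed 3/2 ratio
```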

I'm stopping here. There's too much new info to pass along, but the TT&L technology dazzles me the most.

At first glance those memory speeds aren't from this world; there must be a mistake. DDR800 or QDR1000 could be possible, though. Still, I don't get how they get that memory bandwidth: DDR800 gives about 25GB/s on a 256-bit bus and about 13GB/s on a 128-bit bus.
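
Here's the arithmetic, if someone wants to plug in other numbers (peak bandwidth = effective data rate x bus width):

```python
# Peak memory bandwidth = effective transfer rate * bus width.
# "DDR800" = 400MHz real clock, 800 MT/s effective.

def peak_bw_gb_s(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

print(peak_bw_gb_s(800, 256))  # ~25.6 GB/s on a 256-bit bus
print(peak_bw_gb_s(800, 128))  # ~12.8 GB/s on a 128-bit bus
```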

EDIT: after another look, I didn't find anything sensible, so IMO that's probably complete BS. :)

T&L unit running at 758MHz?? Yeah, right.
 
I haven't read the link to the forum, but nothing in the specs above says anything about a 256-bit width to the memory interface. It just says "512 bit" chip, which likely refers to some internal bit-width.

Just like the GeForce 256 is a "256 bit chip", whether it's on a 128-bit SDR or 128-bit DDR memory bus. Also, the mention of 16 GB/sec bandwidth "at 1 GHz" supports a 128-bit wide DDR bus (assuming 1 GHz means DDR memory clocked at a real 500 MHz).
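
Running that backwards (my own arithmetic, assuming the "1 GHz" really is the effective DDR rate):

```python
# If the rumored 16 GB/s figure is taken at face value, what bus does it imply?
# Assumption: "1 GHz" is the effective rate, i.e. a 500MHz real DDR clock.

claimed_gb_s = 16.0
effective_gt_s = 1.0                       # 1 GT/s effective
bus_bytes = claimed_gb_s / effective_gt_s  # GB/s divided by GT/s = bytes per transfer
print(bus_bytes * 8)                       # 128.0 -> a 128-bit bus, not 256-bit
```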

The only thing to me that screams pure speculation / fake is the mention of QDR memory. I have not heard of any QDR type of SDRAM memory. It may be some reference to DDR-II though, which some people might "translate" into QDR.

I would love to see the T&L / vertex unit separated out in order to gain higher clock speeds for the rasterizer and offer a more modular design.

I can definitely see a "budget" implementation that simply omits the vertex processor altogether and relegates that work to the CPU. Interestingly, the complexity added for the needed bus between the T&L unit and the rasterizer could explain nVidia's apparent reluctance to go with 256-bit DDR, when it seems everyone else is doing that for the high end.
 
hide this page from Basic :)

DDR memory, OK, but QDR memory (as he has explained quite often) is SRAM, so this seems to be BS.

Also, why do all these "new" specs show up shortly after the Parhelia? It seems a "few" nvidiotics can't stand the "fact" that Matrox has surpassed their beloved nvidia.

The only "practical" quad data rate memory I know is the new RDRAM in development / developed(?) from Rambus.
 
Blah. This is BS. QDR memory, LMA3 and a separate T&L chip don't make any sense, of course.

Just pick this from Anandtech's Parhelia preview:
The next-generation Radeon and NVIDIA's NV30 will both have extremely sophisticated forms of occlusion culling built into the hardware.

DX9-generation cards will have more trouble getting enough fillrate/Pixel Shader processing power than mem bandwidth. The above doesn't suggest how that should be solved besides the "8 pipeline" - which I want to see [on a DX9 part!] before I believe it. This must be pure BS...

... back to the Parhelia previews for me! 8)
 
Besides... I don't think that the NV30 has 8 pipelines. That was something that nvidians started spewing once the R300 specs were leaked. The NV30 was initially leaked to have 6 pipelines, and that's what I think it has....

I am really really TIRED of Nvidiocy. IT is frikking LAME.

Of course, I don't really know... I just know that every time anything is learned about a competitor's product, these jackasses post some stupid gibberish where suddenly the NV30 has a nuclear reactor core....

Whatever..
 
That was something that nvidians started spewing once the R300 specs were leaked.

Of course, I don't really know...

Make up your mind...Either you know, or you don't know. In all your previous threads, you made it sound like your word is gospel...and in the end, you know...just as well as I...that you really don't know, so why do you insist on saying ridiculous things like "nvidians started spewing once the R300...etc" ?

Until both R300 and NV30 are released, you're only guessing...so let's just leave it at that.
 
My point is that Hellbinder has a tendency to talk about things in definitive terms, when he himself doesn't have any basis for such claims.
 
mboller:
Too ... late ... mouth ... foaming ... can't ... stop ...
;)

Nah, I'm calm now.

While what Joe said could make sense of some of that rumour, I still don't believe it for a second. And while I think that many people confuse DDR-II and QDR, that mix-up shouldn't happen in a rumor that came from nvidia, since nvidia's own people surely know what memory they use.

Any similarities between this and the real NV30/NV35 is purely coincidental.
 
Ignoring the rumors, I think separating the T&L and rasterization chips makes a lot of sense.

Wouldn't it be possible to link the rasterizer and T&L chip with a standard bus like HT? I don't know what HT scales to, but the upcoming AMD Opterons have 6.4GB/s HT links. I think for a truly next-gen triangle rate (let's say 500 million :D) you would probably need more...
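
Rough numbers (the bytes-per-vertex figure is pure guesswork on my part):

```python
# Very rough estimate of the chip-to-chip bandwidth a 500 Mtri/s rate would need.
# bytes_per_vertex is a guess (position + colour + a few texcoords, post-transform).

tris_per_sec = 500e6
bytes_per_vertex = 64     # guess
verts_per_tri = 1.0       # assume decent stripping/indexing: ~1 new vertex per triangle

needed_gb_s = tris_per_sec * verts_per_tri * bytes_per_vertex / 1e9
print(needed_gb_s)        # 32.0 GB/s -- way past a 6.4GB/s HT link
```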

Plus it would allow separate memory for geometry and the frame-buffer + textures...
 
Typedef Enum,

It's simple....

Some things I feel strongly about... Some things I am not sure about... Other things I feel Strongly about but still am not sure...

Isn't that normal???

In this case, I feel strongly that this is just more Nvidiocy... However, I am not positive enough to make it gospel. I still have strong irritations with the obvious Nvidiocy patterns on the internet. Something, I might add, that I don't see from ANY of the other fanboy camps. The R300 rumors never mirrored anything resembling the NV30 rumors. Suddenly, when I and others posted information about the leaked internal ATI PowerPoint presentation... Nvidia fanboys started posting claims that the NV30 has 8 pixel pipelines...
 
A separate T&L chip doesn't sound like anything Nvidia has done in the past - it sounds very much like the 3dfx Rampage+Sage solution, though ... if we assume 200 MPolys/sec with 64 bytes per (transformed) vertex (which I'd assume that a next-generation T&L unit would be perfectly capable of), we end up at about 12.8 GBytes/second for the chip-to-chip interconnect. Which would max out the widest supported configuration (64 bits @ 1.6 GHz) of Hypertransport - at the cost of a rather large pin count (HT uses differential pairs, so a 64-bit link takes 128 pins).
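
Spelling that arithmetic out:

```python
# 200 MPolys/s at 64 bytes per transformed vertex, versus the widest HT link.

polys_per_sec = 200e6
bytes_per_vertex = 64
needed_gb_s = polys_per_sec * bytes_per_vertex / 1e9
print(needed_gb_s)               # 12.8 GB/s for the chip-to-chip interconnect

ht_widest_gb_s = (64 / 8) * 1.6  # 64-bit HT link at 1.6 GT/s effective
print(ht_widest_gb_s)            # 12.8 GB/s -- exactly maxed out

print(64 * 2)                    # 128 data pins per direction (differential pairs)
```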

Also, a note on the clock speeds: When Nvidia has launched brand new designs before, they have generally failed to reach the clock speeds of their previous designs. E.g. TNT2->Geforce1: 150->120 MHz, Geforce2U->Geforce3: 250->200 MHz.
 
agreed. it doesn't sound like something they've done in the past, but...

it is one way of drastically upping the total # of transistors (and clock speed) without having to wait for TSMC to perfect another process.

Where exactly do you think the pipeline should be split (if at all)? For instance - the T&L chip could include a HZ buffer and rasterize tris into
tiles...
 
I'm skeptical of nvidia putting T&L on a separate chip. Two chips will probably increase the cost quite a bit; of course, the same has been said of a 256-bit bus, and now two announced chips have it. Another reason I'm skeptical is that 3dlabs has until now used separate geometry and raster chips, and they are now integrating them with the P10. They should know the tradeoffs better than anyone.
 
The cost would mostly increase due to chip packaging cost. If the T&L and the renderer chip both have local memory (which they will need for decent performance), the chips might also easily have to be nearly as large as today's GPUs due to pin counts, which would add to the cost as well.

The natural place to split the pipeline would probably be immediately after T&L for immediate-mode rendering, and after polygon binning for tiled rendering (IOW, where the CPU-renderer split is normally placed in T&L-less architectures); this would keep features like displacement mapping, subdivision surface support, and binning (if they do tiling) squarely within the domain of the T&L chip.
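
A toy sketch of the two hand-off points I mean (all names invented, obviously not how either chip would actually be organized):

```python
# Toy model: the geometry chip owns T&L, displacement mapping, subdivision and
# (for a tiler) binning; only the hand-off data crosses the chip-to-chip bus.

def transform_and_light(scene):
    return [("xformed", v) for v in scene]             # stand-in for real T&L output

def bin_into_tiles(verts, tiles=4):
    return {t: verts[t::tiles] for t in range(tiles)}  # stand-in for polygon binning

def geometry_chip(scene, tiled=False):
    verts = transform_and_light(scene)
    return bin_into_tiles(verts) if tiled else verts   # this is what crosses the interconnect

def raster_chip(work):
    return len(work)                                   # pretend to rasterize whatever arrived

print(raster_chip(geometry_chip(range(12))))              # immediate-mode split: a vertex stream
print(raster_chip(geometry_chip(range(12), tiled=True)))  # tiled split: per-tile bins
```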

Unless TSMC continues to have severe yield problems with each new process, or Nvidia is somehow convinced that a Rampage+Sage-like multi-chip setup (with SLI?) really is the future, this setup makes very little economic sense for Nvidia.
 
Although I actually doubt that NVidia will use a two-chip solution, if you were to split the existing GF4 pipeline directly after primitive assembly, placing a largish FIFO between the two chips (I assume a two-chip design would allow this), you'd probably see a significant performance increase in high-stress real-world situations.
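
A toy model of why the FIFO helps (numbers completely made up, just to show the decoupling effect):

```python
# The front end emits up to 1 primitive/cycle unless the FIFO is full; the back
# end alternates between stalled (0/cycle) and catching up (2/cycle) every 100
# cycles. A deeper FIFO lets the front end keep working through the stalls.

from collections import deque

def throughput(fifo_depth, cycles=10_000):
    fifo, retired = deque(), 0
    for cycle in range(cycles):
        if len(fifo) < fifo_depth:                  # front end stalls only when FIFO is full
            fifo.append(cycle)
        rate = 0 if (cycle // 100) % 2 == 0 else 2  # bursty back end
        for _ in range(rate):
            if fifo:
                fifo.popleft()
                retired += 1
    return retired / cycles

for depth in (4, 32, 256):
    print(depth, round(throughput(depth), 2))       # gets closer to 1.0 as the FIFO deepens
```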
 