NV30 RUMORED specs from Guru3D

dbeard

Newcomer
http://www.guru3d.com/forum/read_msg.php?tid=5320&forumid=ubb2

Ok, this rumored TT&L engine sounds a lot like what the Rampage would have had. It sure is interesting that they'd be supporting Glide and blur effects at the same time. Any insights? Any chance of this being correct?

Omen Card

AGP 4x and 8x

Copper-core GPU, 0.13 µm process, default clock 450 MHz

73 million transistors

8 pipes

Quad Vertex Shaders

Dual Pixel Shaders

85-bone capable

Lightspeed Memory Architecture III

Quad Cache memory caching subsystem

Dual Z-buffers for greater compression without losses

visibility subsystem

NvAuto advanced pre-charge

Advanced Hardware Anisotropic Filtering:
A new 12nvx mode that delivers improved subpixel coverage, texture placement, and quality.
The 12nvx mode should deliver 80% better subpixel coverage and placement than previous modes.

A separate second copper processor, also running at 450 MHz on 0.13 µm,
shall handle all T&L, now called TT&L (True Time and Lighting).
It will also handle NVAutoShaper, a new precached and precacheable system
that has precached shapes: circles, squares, triangles, and other shapes.
Programmers can also add shapes to be temporarily stored in its precache to speed things up.
This should speed things up and make standard shapes more accurate!
Backward Glide game capable thanks to
NvBlur (yes, a new version of Glide!)
 
Four vertex shaders sounds a bit weird, IMO. Would it really be practical for this generation if the chip is already running at 450 MHz?

Heh I can't wait for the official release of NV30...
 
Erm, one of my friends also sent me these specs (which are supposedly from the NVMax forum); it just contained a little more :p ... looks like total nonsense to me

A separate second copper processor, also running at 450 MHz on 0.13 µm,
shall handle all T&L, now called TT&L (True Time and Lighting).
It will also handle NVAutoShaper, a new precached and precacheable system
that has precached shapes: circles, squares, triangles, and other shapes.
Programmers can also add shapes to be temporarily stored in its precache to speed things up.
This should speed things up and make standard shapes more accurate!

128 MB DDR SDRAM @ 375 MHz × 2 = 750 MHz effective, and 256-bit (yes, 256-bit!)
370 MHz RAMDAC
DirectX 9.x compatible
Latest OpenGL
Backward Glide game capable thanks to
NvBlur (yes, a new version of Glide!)
nView Multi-display Technology
Built-in reprogrammable DVD and TV decoder chip
There may be a new form of cooling for the GPUs,
where the GPU has a hole in the middle and through the board,
so the hottest area of the GPU gets more efficient cooling.
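
Just for fun, here's the memory claim run through a quick sanity check (the arithmetic and the GeForce4 Ti 4600 comparison figures are mine, not part of the rumour). A 256-bit bus at 750 MHz effective works out to roughly 24 GB/s of peak bandwidth, well over double a Ti 4600:

```python
# Peak theoretical memory bandwidth: effective clock x bus width in bytes.
# The rumoured NV30 numbers come from the spec list above; the GeForce4
# Ti 4600 figures (325 MHz DDR, 128-bit) are my comparison point.

def bandwidth_gb_s(clock_mhz: float, bus_bits: int, ddr: bool = True) -> float:
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    effective_mhz = clock_mhz * (2 if ddr else 1)
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(f"Rumoured NV30:    {bandwidth_gb_s(375, 256):.1f} GB/s")  # 24.0
print(f"GeForce4 Ti 4600: {bandwidth_gb_s(325, 128):.1f} GB/s")  # 10.4
```

That would be a huge jump for this generation of memory, which doesn't make the list any more believable to me.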
 
This is not the first time we've heard of NVIDIA adopting their own API - there was talk of it last year. Going this route would also make it easier for NVIDIA to score 'exclusives'; however, MS may not be too happy about this type of action (although NVIDIA have bargaining power with MS and the ongoing price of NV2A).
 
I find nVidia adopting their own API EXTREMELY unlikely. The developers probably won't support it.

Separate T&L unit? Doubt it. nVidia has always been a single-chip company.
 
nvidia's API

DaveBaumann said:
This is not the first time we've heard of NVIDIA adopting their own API - there was talk of it last year. Going this route would also make it easier for NVIDIA to score 'exclusives'; however, MS may not be too happy about this type of action (although NVIDIA have bargaining power with MS and the ongoing price of NV2A).

Wasn't it nvidia who claimed that D3D was their API? I'll have to look around for the source of the quote, but I think I am correct... (the quote was made back when 3dfx was supporting GLIDE and S3 was supporting METAL, etc.)
 
Re: nvidia's API

OpenGL guy said:
Wasn't it nvidia who claimed that D3D was their API? I'll have to look around for the source of the quote, but I think I am correct... (the quote was made back when 3dfx was supporting GLIDE and S3 was supporting METAL, etc.)

Yep. Back then they said that D3D was their API of choice. I remember this very well, because I went through the hassle of nVidia's alpha/beta OpenGL drivers with my Riva128. :-?

OpenGL eventually came through for them, but back in the hardcore days of 3D (Riva128!) they were betting their money on D3D. Quake II of course changed this, as nVidia realized that they just had to get OpenGL drivers.
 
jkl

all those specs... probably false... glide compatible api... i know that isn't true, but as far as nvidia making their own api, it wouldn't surprise me a bit... there were lots of rumors about nvidia developing their own api BEFORE they got into the dispute with ms and ati over dx 8.1... that might have just been the icing on the cake
 
Why? I think that part is quite possible, but not some of the other things mentioned...
 
I believe 73 million transistors is way too low for a spec sheet of that magnitude. When we went from GF3 to GF4, we gained 6 million transistors (from 57 to 63), with one extra vertex shader taking the largest part of that.

For a spec sheet with four vertex shaders alone, this is already too low.
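
A rough back-of-envelope along those lines (only the 57M and 63M counts are from this post; treating their whole delta as the cost of a single vertex shader is my assumption):

```python
# Crude transistor-budget estimate. The GF3 (57M) and GF4 (63M) counts
# are from the post above; assuming the whole delta is one vertex shader.

GF4_TRANSISTORS = 63_000_000                   # GeForce4: two vertex shaders
VERTEX_SHADER_COST = 63_000_000 - 57_000_000   # ~6M, the GF3 -> GF4 delta

# The rumoured NV30 has four vertex shaders, i.e. two more than GF4.
estimate = GF4_TRANSISTORS + 2 * VERTEX_SHADER_COST
print(f"~{estimate / 1e6:.0f}M transistors")   # ~75M, already above 73M
# ...and that's before doubling the pixel pipelines or adding the caches.
```

So the extra vertex shaders alone would eat the entire rumoured 73 million.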

BTW, these specs are fake IMO, as you probably would have guessed...

And ehm, does a dual Z-buffer increase compression without losses? That's a first for me...

Avé...
.PGN.iNERTiA.
 