Did Digitlife jump the gun announcing NV40?

dan2097 said:
RGBA stands for red, green, blue, alpha right?

What does the alpha channel actually do?
The alpha channel is generally used for transparency, though there are other uses.
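The transparency use can be illustrated with the standard "over" blend, where the alpha value weights the source colour against whatever is already in the framebuffer. A minimal sketch in plain Python (not any particular graphics API; `alpha_blend` is just an illustrative name):

```python
def alpha_blend(src, dst, alpha):
    """'Over' compositing of two RGB tuples (components in 0..1):
    alpha = 1.0 means a fully opaque source, 0.0 fully transparent."""
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst))

# Half-transparent red over a white background:
print(alpha_blend((1.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5))
# (1.0, 0.5, 0.5)
```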
 
Hellbinder said:
8 shader units per pipeline and 16 pipelines...
The characteristics table says:

Pixel Pipelines - 16
Pixel shader operations/pixel - 8
Pixel shader operations/clock - 128

All of that using two shader units per pipe. (total 32)

cu

incurable
 
Ah now I see! The pictures are all saved to a subdirectory with the same name as the article - I have ALL those files, about 22KB each * 6 or so piccys - I'll send them to you too!

/done!
 
Hellbinder said:
8 shader units per pipeline and 16 pipelines...

Someone tell me how Nvidia gets away with telling one lie after the next and everyone still thinks they are cool?

Do you have any idea how many transistors it would take to actually make that statement true?

Keep in mind they have 222 million transistors, and the design is SIMD/MIMD, so they can have one set of logic operate on many sets of data.
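The SIMD point above can be shown with a toy example: one decoded "instruction" drives many data elements at once, so the control logic is shared rather than duplicated per pipeline. A rough sketch (plain Python, illustrative names only):

```python
def simd_mul(lanes, scalar):
    """One 'instruction' (multiply by scalar) applied across all lanes.
    The list comprehension stands in for hardware lanes in lockstep."""
    return [x * scalar for x in lanes]

# 4 data lanes, one instruction:
print(simd_mul([1, 2, 3, 4], 2))  # [2, 4, 6, 8]
```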
 
I noticed Abit wasn't on the list of vendors providing 6800 Ultra cards. Maybe Nvidia is a bit mad at them for going ATI?
 
g__day said:
Ah now I see! The pictures are all saved to a subdirectory with the same name as the article - I have ALL those files, about 22KB each * 6 or so piccys - I'll send them to you too!

/done!
No worries, someone took care of it. Just consider mine 56k friendly 8)
 
So, the GeForce 6800 GPU family, codenamed NV40, today officially entered the distribution stage. Initially it will include two chips, GeForce 6800 Ultra and GeForce 6800, with the same architecture.

These are the key innovations introduced in NVIDIA's new chips:
16-pipeline superscalar architecture with 6 vertex modules, DDR3 support and real 32-bit pipelines
PCI Express x16, AGP 8x support
222 million transistors
400MHz core clock
Chips made by IBM
0.13µm process

40x40mm FCBGA (flip-chip ball grid array) package
ForceWare 60+ series
Supports 256-bit GDDR3 with over 550MHz (1.1GHz DDR) clock rates
NVIDIA CineFX 3.0 supporting Pixel Shader 3.0, Vertex Shader 3.0; real-time Displacement Mapping and Tone Mapping; up to 16 textures/pass, 16-bit and 32-bit FP formats, sRGB textures, DirectX and S3TC compression; 32bpp, 64bpp and 128bpp rendering; lots of new visual effects
NVIDIA HPDR (High-Precision Dynamic-Range) on OpenEXR technology supporting FP filtering, texturing, blending and AA
Intellisample 3.0 for extended 16xAA, improved compression performance; HCT (High-resolution compression), new lossless compression algorithms for colors, textures and Z buffer in all modes, including hi-res high-frequency, fast Z buffer clear
NVIDIA UltraShadow II for 4 times the performance in highly shadowed games (e.g. Doom III) compared to older GPUs

Extended temperature monitoring and management features
Extended display and video output features, including int. videoprocessor, hardware MPEG decoder, WMV9 accelerator, adaptive deinterlacing, video signal scaling and filtering, int. NTSC/PAL decoder (up to 1024x768), Macrovision copy protection; DVD/HDTV to MPEG2 decoding at up to 1920x1080i; dual int. 400MHz RAMDAC for up to 2048x1536 @ 85Hz; 2 x DVO for external TMDS transmitters and TV decoders; Microsoft Video Mixing Renderer (VMR); VIP 1.1 (video input); NVIDIA nView
NVIDIA Digital Vibrance Control (DVC) 3.0 for color and image clarity management
Supports Windows XP/ME/2000/9X; MacOS, Linux
Supports the latest DirectX 9.0, OpenGL 1.5

Also diagrams:
Pixel pipe:
loop:
shader1 or Tex
shader2
-------------
also
"co-issue - 2 independant instruction executing on same shader unit as 3/1 or 2/2 (R3xx can execute at 3/1)" <- that's funny :)
" GF6800 dual-issue - 2 instructions in same cicle on different shader units"
 
My original comment was based on a statement in the thread made by another poster. Now that I see the linked page it makes more sense. ;)
 
nVidia PR said:
Extended display and video output features, including int. videoprocessor, hardware MPEG decoder, WMV9 accelerator, adaptive deinterlacing, video signal scaling and filtering, int. NTSC/PAL decoder (up to 1024x768), Macrovision copy protection; DVD/HDTV to MPEG2 decoding at up to 1920x1080i; dual int. 400MHz RAMDAC for up to 2048x1536 @ 85Hz; 2 x DVO for external TMDS transmitters and TV decoders; Microsoft Video Mixing Renderer (VMR); VIP 1.1 (video input); NVIDIA nView

*claps*

Well done nVidia. Give me a quiet and less power hungry beast (? NV41) and it'll go into my HTPC.
 
Hellbinder said:
Gotta love how they are telling everyone how the "R300" from two years ago "can't do that"... :LOL:

Maybe because the R3xx line-up is the only competition for the NV40? Remember, the R420 has not been announced yet!
 
I guess they got confused about the time zone, because in the digilife post they said exactly 17:00. So I'm guessing that at 17:00, whatever time zone that is, we are going to see a lot of NV40 posts :)
 
Hmm, from the diagrams it seems there are actually 2 shader units per pipe, each handling 4 components (?). So, an upper limit would be 32 unique operations per clock (?) performed on up to 128 (?) components. (???)
 
bloodbob said:
I guess they got confused about the time zone, because in the digilife post they said exactly 17:00. So I'm guessing that at 17:00, whatever time zone that is, we are going to see a lot of NV40 posts :)

Perhaps they thought the NDA was up at 5:00 PM, when it was really 5:00 AM... I don't know how they confused it since they're on the 24-hour clock... Anyways, it is less than 7 hours until 5 AM where they are, so perhaps we'll see some shit then.
 
Extended temperature monitoring and management features

Might be needed... ;)

Anyway, they don't say anything about how many vertex shader units there are, or indeed give any vertex shader performance numbers at all... Maybe the pixel pipes double as vertex shaders too? :oops: (Ok, ok. Baseless speculation, I admit, but why else wouldn't they say anything on the subject??)

And that die shot was VEEERY interesting. Not nearly as regular as I expected, though most pixel pipes seem to be clearly distinguishable as a square block. I didn't bother to try and find all 16, as they get more and more staggered as one proceeds from top left towards bottom right; up to around 12 is fairly easy though.
 
Guden Oden said:
Anyway, they don't say anything about how many vertex shader units there are, or indeed give any vertex shader performance numbers at all... Maybe the pixel pipes double as vertex shaders too? :oops: (Ok, ok. Baseless speculation, I admit, but why else wouldn't they say anything on the subject??)
16-pipeline superscalar architecture with 6 vertex modules, DDR3 support and real 32-bit pipelines

Looks like 6 VS to me
 
I think they just mean 8 total fp32 ops per clock per pipe, i.e. two vector4 ops per pipe per clock. R300 can do a texture access and a vector4 op per clock, so that's also 8 ops per clock using similar terminology.

8 * 16 = 128, NVidia's magic number.

NV40, however, can do another arithmetic vector4 op instead of a texture op if it wants to, which will pay dividends down the line, and they can also split vector4 ops into vector3 + scalar (like R300) or vector2 + vector2.

In other words, FAR more throughput per pipe than NV30, and as games use more arithmetic in their shaders, significantly more than R300 per pipe as well.

Of course, all this hinges on me correctly interpreting those diagrams.
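Under that interpretation the arithmetic can be written out explicitly: assuming two vector4-capable units per pipe (one shared with texturing) and 16 pipes, the component-op counts land exactly on NVIDIA's quoted numbers.

```python
# Back-of-the-envelope count of FP32 component ops per clock, under
# the interpretation above (two vector4 units per pipe, 16 pipes).
PIPES = 16
UNITS_PER_PIPE = 2        # shader1 (shared with the texture unit) + shader2
COMPONENTS_PER_UNIT = 4   # one vector4 op per unit per clock

ops_per_pipe = UNITS_PER_PIPE * COMPONENTS_PER_UNIT   # 8
ops_per_clock = ops_per_pipe * PIPES                  # 128, the magic number
print(ops_per_pipe, ops_per_clock)  # 8 128
```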
 