ELSA hints at GT206 and GT212

Why is that chip so big, btw? Die size identical to RV740 (no way it keeps up with that...), transistor count almost the same as G92 while having way fewer functional units (half the ROPs and texture units, 3/4 the ALUs, and unlike GT200 it can't claim wasted transistors for DP).
Because it comes from the GT200 architecture. It's a step back from G9x in performance density.

The groundwork for GT2xx was probably laid while ATI still had R600. They wanted to add features for CUDA (DP, larger register file, maybe more), and felt they had room. When RV670 came out, they probably thought it was a decent improvement over R600 in competitiveness, but this was the best that ATI could optimize and it was still a decent step back from G9x. They probably also expected higher clock speeds.

If GT215 were fabbed at 65nm, it would probably be around 250 mm². Now consider how big GT200 is compared to G92 and the limited performance gain you get. Given these two pieces of information, it's not surprising that GT215 is a bit below 9600GT performance. Transistor numbers are largely PR, so don't get too caught up in that.
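As a rough sketch of where an estimate like that comes from: area ideally scales with the square of the feature size, but real shrinks achieve much less. All numbers below are illustrative assumptions, with the real-world factor picked simply to land near the ~250 mm² figure above.

```python
# Back-of-envelope node scaling, all numbers hypothetical/illustrative.
die_40nm = 144.0                # mm^2, a commonly cited GT215 die size
ideal_factor = (40 / 65) ** 2   # ~0.38: ideal square-law area scaling
realistic_factor = 0.58         # assumed real-world shrink, chosen to
                                # roughly match the ~250 mm^2 estimate

print(f"ideal 65nm size: {die_40nm / ideal_factor:.0f} mm^2")       # ~380
print(f"realistic 65nm size: {die_40nm / realistic_factor:.0f} mm^2")  # ~248
```

The gap between the two outputs is the point: naive square-law scaling flatters a shrink considerably.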
 
It's nvidia's 4670 - only it needs to be priced like the GT220 DDR3 currently is, plus we need a dual-slot or passive model rather than that OEM garbage.
Note that the frequencies look a bit conservative and power usage is low overall, so it might be o/c-able a bit. (I don't consider o/c-ing the big guzzling cards.)


Whoops, I didn't see the techpowerup review, with a dual-slot MSI model. Weird comments about the noise. Sure, there's no fan control, but I don't get how 35 dB is that noisy - e.g. the 4850 is 10 dB quieter when idle but 5 dB louder under load.
Isn't that a constant slight hum, versus a fan-controlled card spinning up when gaming or even watching fullscreen YouTube? I downgraded to an o/c'd 8400GS to get rid of that annoyance. Playing the waiting game, and the 8400GS is actually brilliant at running older games.
 
Because it comes from the GT200 architecture. It's a step back from G9x in performance density.
Does GT215 have double precision support?

The groundwork for GT2xx was probably laid while ATI still had R600. They wanted to add features for CUDA (DP, larger register file, maybe more), and felt they had room.

http://www.techreport.com/articles.x/14934/5

Another enhancement in GT200 is the doubling of the size of the register file for each SM. The aim here is, by adding more on-chip storage, to allow more complex shaders to run without overflowing into memory. Nvidia cites improvements of 35% in 3DMark Vantage's parallax occlusion mapping test, 6% in GPU cloth, 5% in Perlin noise, and 15% overall with Vantage's Extreme presets due to the larger register file.

Graphics benefits from the increased register file (much like the gain from increased ALU:TEX) - the CUDA gains are riding on the back of that. It was silly small to start with in G80. It's increasing again in Fermi; that might appease the CUDA crowd.

GT200 also gained improvements in texturing throughput - presumably that costs some area, but we have no idea what.

The D3D10.1 changes (doubling the per vertex attributes, cubemap features, "fp32 everywhere", gather4 etc.) will also have cost an uncomfortable amount, I expect.

If GT215 were fabbed at 65nm, it would probably be around 250 mm². Now consider how big GT200 is compared to G92 and the limited performance gain you get. Given these two pieces of information, it's not surprising that GT215 is a bit below 9600GT performance. Transistor numbers are largely PR, so don't get too caught up in that.
I think the real reason GT215 is so poor is that it seems to have only 8 ROPs (same as the HD4670). The very high Z rate compensates somewhat, as it's double ATI's per-ROP rate (47% faster in absolute theoretical terms), and it has 70% more bandwidth. It should have 16 ROPs to compete against RV740, but that would make it significantly larger.
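The 47% and 70% figures above follow from the clocks involved. A quick sketch, using the commonly cited reference clocks (550MHz core / 3400 MT/s GDDR5 for GT215, 750MHz core / 2000 MT/s GDDR3 for the HD 4670) - treat these clock assumptions as mine, not the poster's:

```python
# Theoretical Z-rate and bandwidth comparison, GT215 vs RV730 (HD 4670).
GT215_CORE_MHZ = 550   # assumed GT 240 reference core clock
RV730_CORE_MHZ = 750   # assumed HD 4670 reference core clock
ROPS = 8               # both chips have 8 ROPs

# GT215's ROPs do 2 Z samples/clock vs 1 on RV730 (per the post above)
gt215_z_rate = ROPS * 2 * GT215_CORE_MHZ / 1000   # 8.8 GZ/s
rv730_z_rate = ROPS * 1 * RV730_CORE_MHZ / 1000   # 6.0 GZ/s
print(f"Z-rate advantage: {gt215_z_rate / rv730_z_rate - 1:.0%}")   # 47%

# Both have 128-bit buses; GDDR5 at 3.4 GT/s vs GDDR3 at 2.0 GT/s
gt215_bw = 128 / 8 * 3.4   # 54.4 GB/s
rv730_bw = 128 / 8 * 2.0   # 32.0 GB/s
print(f"Bandwidth advantage: {gt215_bw / rv730_bw - 1:.0%}")        # 70%
```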

In my view NVidia has made a GDDR3 budget chip to replace the 9500GT - low-power and cool for OEM systems - and it happens to have GDDR5 as a testbed for later chips.

Jawed
 
GT21x GPUs also have PureVideo HD VP4, which adds bitstream decoding of VC-1, MPEG-2 and MPEG-4 ASP in addition to H.264, compared to VP2, which only supports H.264 bitstream decoding and partially accelerated decoding of VC-1 and MPEG-2.

Also, an HD audio processor with 8-channel LPCM support is built in. None of the previous GeForce 8/9 GPUs, nor GT200, had one.

Both of these features will take up slightly more transistors & die space than the GeForce 8/9 design.

The only disappointing things are the low shader clock speed (1340MHz vs 1625MHz on the 9600 GT) and half the ROPs instead of the expected 16. If it had, say, a 1500MHz or higher shader clock and 16 ROPs, it should easily beat the 9600 GT. The GT 220 has pretty large headroom: Gigabyte sells a 720MHz core / 1566MHz shader overclocked version of the 625MHz core / 1360MHz shader stock GT 220.
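A hedged back-of-envelope on those shader clocks, assuming 96 SPs for GT215 vs 64 for the 9600 GT's G94, counting 2 flops per SP per clock (MAD only, ignoring the contested extra MUL):

```python
# Peak MAD shader throughput under the clocks quoted in the post.
def mad_gflops(sps, shader_mhz):
    # one multiply-add = 2 flops per SP per clock
    return sps * 2 * shader_mhz / 1000

gt240    = mad_gflops(96, 1340)   # ~257 GFLOPS at the stock 1340 MHz
g94_9600 = mad_gflops(64, 1625)   # ~208 GFLOPS for the 9600 GT
gt240_hi = mad_gflops(96, 1500)   # ~288 GFLOPS at the hypothetical 1500 MHz
print(gt240, g94_9600, gt240_hi)
```

So even at stock clocks the ALU throughput is nominally ahead of the 9600 GT; the deficit elsewhere (ROPs, fillrate) is what drags the average down.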
 
GT21x GPUs also have PureVideo HD VP4, which adds bitstream decoding of VC-1, MPEG-2 and MPEG-4 ASP in addition to H.264, compared to VP2, which only supports H.264 bitstream decoding and partially accelerated decoding of VC-1 and MPEG-2.

Also, an HD audio processor with 8-channel LPCM support is built in. None of the previous GeForce 8/9 GPUs, nor GT200, had one.

Both of these features will take up slightly more transistors & die space than the GeForce 8/9 design.
This functionality is also found in the lowest-end products, so I doubt it takes up a significant amount of area. Sure, it will use more area than in the old cards, but how much? 2 mm², or what?
The only disappointing things are the low shader clock speed (1340MHz vs 1625MHz on the 9600 GT) and half the ROPs instead of the expected 16. If it had, say, a 1500MHz or higher shader clock and 16 ROPs, it should easily beat the 9600 GT. The GT 220 has pretty large headroom: Gigabyte sells a 720MHz core / 1566MHz shader overclocked version of the 625MHz core / 1360MHz shader stock GT 220.
Yes, lower clocks are certainly a trend we're seeing with GT200-architecture cards. That said, they are really low here, especially if you consider that you can get the same chip as a mobile part, with lower power draw and actually slightly higher clocks (GTS 260M)!
With clocks around 650/1550 it should (even with 8 ROPs) be able to beat the 9600GT quite solidly, though it would need the additional power connector and still couldn't touch the HD 4770, so I guess nvidia went with the OEM-friendly version instead. I'm wondering why they didn't come up with two cards, though: the large quantities will be DDR3 versions anyway, so they could just as well have sold a cheap DDR3 version with no PCIe power connector and a higher-clocked GDDR5 version (with a different name - but they just love to sell different cards under the same name and the same card under different names, I know...).

And actually I take the "fastest card without pcie power connector" back. That title would have to go to the 9800GT Green Edition, I suppose.
 
That title would have to go to the 4850 Green Edition, I suppose.

Fixed.
Chinese manufacturers are scarier than you thought.

Wow @ the 4670, actually - 128-bit, GDDR3, and on a bigger process, yet it doesn't get bulldozed (75% and below). Let's see how the future mobile Junipers do - whether they scale down with clocks and voltage.

For now, this is quite an impressive mobile chip for the GTS 250M SKU, I have to say.
 
Fixed.
Chinese manufacturers are scarier than you thought.
Well, yeah, but that's not an official part. No doubt RV740/Juniper parts would also fit into a power envelope not requiring additional power with different clocks, but those don't exist either.

trinibwoy said:
Doesn't make too bad of a showing here vs 9600GT
Depends on how you look at it. It's a far more complex chip, with more shader ALUs (which are supposedly faster, too) and similar memory bandwidth, but it's still 10% slower (on average) with AA/AF at least (and the 9600GT is using an older driver there, so it might even gain a little with a newer one). Though maybe it'll at least end up cheaper - currently it's not.
 
Depends on how you look at it. It's a far more complex chip, with more shader ALUs (which are supposedly faster, too) and similar memory bandwidth, but it's still 10% slower (on average) with AA/AF at least (and the 9600GT is using an older driver there, so it might even gain a little with a newer one). Though maybe it'll at least end up cheaper - currently it's not.

Yeah, it does depend on how you look at it. For example, how does it fare compared to other 8-ROP cards? In fillrate-limited cases the 9600GT has a major advantage. I agree it does seem complex for its specs - maybe DX10.1 was expensive.
 
Fudzilla's review sample card overclocks very well: 650MHz core, 1590MHz shader and 2200MHz memory, practically maxed out. All this without the need for external power.

http://www.fudzilla.com/content/view/16492/1/1/5/

Techpowerup's sample got up to 670MHz core, 1670MHz shader and 2190MHz memory. Again without external power.

http://www.techpowerup.com/reviews/Palit/GeForce_GT_240_Sonic/33.html

The other Techpowerup review sample managed 620MHz core, 1610MHz shader and 1960MHz memory.

http://www.techpowerup.com/reviews/MSI/GeForce_GT_240/33.html

So there is headroom. Perhaps Nvidia is being conservative with the core and shader clock speeds, since the 40nm TSMC process has issues - and of course this is a more OEM-friendly SKU.
 
Fudzilla said:
Geforce GT 240, Geforce GT 220 and Geforce G210 were launched on the market to keep Nvidia's name in the press loop...We doubt that these GPUs will maintain any high market share in the long run.

Thanks Fudo, but we knew all of this months ago. What I want info on is Fermi mainstream and low end variants, assuming they even exist at this point in time.
 
Fudzilla's review sample card overclocks very well: 650MHz core, 1590MHz shader and 2200MHz memory, practically maxed out. All this without the need for external power.
It's somewhat likely, though, that it'll violate the PCIe spec and draw more than 75W at these settings. No doubt it will still work, but it probably wouldn't be a very good idea for stock settings...
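A first-order sketch of why the 75W slot limit is at risk: dynamic power scales roughly as f·V², so at constant voltage an overclock scales power about linearly with clock. The ~69W stock board power is the commonly quoted GT 240 figure; both numbers here are rough assumptions, not measurements.

```python
# Linear-with-frequency power estimate at constant voltage (P ~ f * V^2).
STOCK_POWER_W = 69.0    # assumed GT 240 stock board power (quoted TDP)
STOCK_CORE_MHZ = 550    # assumed stock core clock
OC_CORE_MHZ = 650       # the overclock reached in the reviews above

oc_power = STOCK_POWER_W * OC_CORE_MHZ / STOCK_CORE_MHZ
print(f"estimated board power at {OC_CORE_MHZ} MHz: {oc_power:.0f} W")  # ~82 W
print("exceeds 75 W PCIe slot limit:", oc_power > 75)                   # True
```

Any voltage bump would push it further still, since power scales with V² on top of the clock.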
 
NVIDIA GeForce 310


http://h10010.www1.hp.com/wwpc/uk/en/sm/WF06c/A1-329290-64268-348724-348724-4015767-4015770.html
 
Given that it's appearing some 18 months after that, on a new process node, with about 50% more transistors and a heftier price tag, "not too bad of a showing" is a bit less than one should be allowed to expect.

That was obviously in relation to previous, more horrible showings not to any reasonable expectations. ;)
 

Yay! GT218 rebranding arrives! .. How many months did that take since its launch?

Specs:
Geforce 310 said:
CUDA Cores 16
Graphics Clock (MHz) 589 MHz
Processor Clock (MHz) 1402 MHz
Memory Specs:
Memory Clock (MHz) 500
Standard Memory Config 512 MB DDR2
Memory Interface Width 64-bit
Memory Bandwidth (GB/sec) 8.0
Feature Support:
NVIDIA PureVideo® Technology* yes
NVIDIA CUDA™ Technology yes
Microsoft DirectX 10.1
OpenGL 3.1
Bus Support PCI-E 2.0
Certified for Windows Vista yes
Display Support:
Maximum Digital Resolution 2560x1600
Maximum VGA Resolution 2048x1536
Standard Display Connectors DVI
VGA
DisplayPort
Multi Monitor yes
HDCP* yes
HDMI* Via dongle (DVI-HDMI or DP-HDMI)
Audio Input for HDMI Internal
Standard Graphics Card Dimensions:
Height 2.731 inches
Length 6.60 inches
Width Single-slot
Thermal and Power Specs:
Maximum GPU Temperature (in C) 105 C
Maximum Graphics Card Power (W) 30.5 W
Minimum System Power Requirement (W) 300 W

Specs:
Geforce 210 said:
CUDA Cores 16
Graphics Clock (MHz) 589 MHz
Processor Clock (MHz) 1402 MHz
Memory Specs:
Memory Clock (MHz) 500
Standard Memory Config 512 MB DDR2
Memory Interface Width 64-bit
Memory Bandwidth (GB/sec) 8.0
Feature Support:
NVIDIA PureVideo® Technology* yes
NVIDIA CUDA™ Technology yes
Microsoft DirectX 10.1
OpenGL 3.1
Bus Support PCI-E 2.0
Certified for Windows Vista yes
Display Support:
Maximum Digital Resolution 2560x1600
Maximum VGA Resolution 2048x1536
Standard Display Connectors DVI
VGA
DisplayPort
Multi Monitor yes
HDCP* yes
HDMI* Via dongle (DVI-HDMI or DP-HDMI)
Audio Input for HDMI Internal
Standard Graphics Card Dimensions:
Height 2.731 inches
Length 6.60 inches
Width Single-slot
Thermal and Power Specs:
Maximum GPU Temperature (in C) 105 C
Maximum Graphics Card Power (W) 30.5 W
Minimum System Power Requirement (W) 300 W
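The quoted 8.0 GB/s bandwidth follows directly from the bus width and memory clock in the spec sheet; a quick sanity check:

```python
# 64-bit bus, DDR2 at 500 MHz -> 1000 MT/s effective, 8 bytes per transfer.
bus_bits = 64
effective_mts = 500 * 2                          # DDR: 2 transfers per clock
bandwidth_gbs = bus_bits / 8 * effective_mts / 1000
print(bandwidth_gbs)                             # 8.0 GB/s, matching the spec
```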
 
I have to buy one. 310 is the biggest three-digit-number for nvidia cards, so it has to be better than a 220 or a 260?

310 is like 15 more than a 295! .. wait.. I made that joke before..

How soon before someone tries to run anything DX11 related on this card? "But the websidez told me it waz the new GF300 sseries! man! Firmy!"
 
Yay! GT218 rebranding arrives! .. How many months did that take since its launch?

Specs:


Specs:

Jesus Christ, not again! And what the hell are they going to do when they release their Fermi derivatives? Call them GT4xx?
 