Old 04-Aug-2010, 22:14   #6426
Silent_Buddha
Regular
 
Join Date: Mar 2007
Posts: 10,173

Quote:
Originally Posted by NathansFortune View Post
Would it be on par with Juniper or worse? Very interested for HTPC; I'd like a nice card for it, plus casual games and Flash acceleration for Flash games.
For HTPC it'll be interesting to see if they finally support bitstreaming. Also, it's still questionable whether they can approach the perf/watt of the AMD chips.

GF104 compares quite well in its price segment because it's in direct competition with a salvage chip (the 5830), which often has worse power consumption than the 5850 or even the 5870.

Once you dip below that, you're going up against more power-efficient chips once again.

Regards,
SB
Old 04-Aug-2010, 22:30   #6427
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,948

G92 and G94 both have the same GDDR bus width, 256-bit. Seems possible that GF104 and GF106 are the same, both with 256 bits. Fillrate is still king.

Then GF106 would have a single GPC, maximum of 192 MADs per clock.

GF108 would be 128-bit with a single GPC, too, I suppose, perhaps with only 2 SMs.
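The MAD counts above follow directly from the assumed SM configuration; a quick sketch (48 cores per GF104-style SM is from the thread's context, and the hot clock in the GFLOPS example is an illustrative guess, not a known spec):

```python
# Rough theoretical throughput for hypothetical GF106/GF108 configs.
# Assumes GF104-style SMs with 48 CUDA cores each; clocks are guesses.
def mads_per_clock(sms, cores_per_sm=48):
    return sms * cores_per_sm

def gflops(sms, hot_clock_mhz, cores_per_sm=48):
    # 1 MAD = 2 floating-point operations per core per hot clock
    return sms * cores_per_sm * 2 * hot_clock_mhz / 1000

print(mads_per_clock(4))  # GF106 with one full GPC (4 SMs): 192 MADs/clock
print(mads_per_clock(2))  # GF108 with only 2 SMs: 96 MADs/clock
print(gflops(4, 1350))    # at a guessed 1350 MHz hot clock: 518.4 GFLOPS
```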
__________________
Can it play WoW?
Old 04-Aug-2010, 23:32   #6428
Sontin
Naughty Boy!
 
Join Date: Dec 2009
Posts: 399

Quote:
Originally Posted by Silent_Buddha View Post
For HTPC it'll be interesting to see if they finally support bitstreaming. Also, it's still questionable whether they can approach the perf/watt of the AMD chips.
What does perf/watt mean for an HTPC card?
The GTX460 needs less power at idle (on the 5770's level) and during Blu-ray playback (lower than a 5770): http://ht4u.net/reviews/2010/zotac_g...60/index12.php
http://ht4u.net/reviews/2010/zotac_g...60/index13.php
I think nobody cares that a GTX460 needs 10 watts more while being 6% slower...
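The metric itself is simple enough; a quick sketch with purely illustrative numbers (the fps and wattage figures below are made up, loosely echoing the "10 watts more, 6% slower" framing, and are not measurements of any actual card):

```python
# perf/watt = performance / power draw. Numbers are illustrative only.
def perf_per_watt(fps, watts):
    return fps / watts

card_a = perf_per_watt(60.0, 110.0)         # hypothetical baseline card
card_b = perf_per_watt(60.0 * 0.94, 120.0)  # ~6% slower, 10 W more

print(f"A: {card_a:.3f} fps/W, B: {card_b:.3f} fps/W")
```

For an HTPC, of course, idle and playback power arguably matter more than this load-time ratio, which is the point being made here.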
Old 05-Aug-2010, 02:58   #6429
mczak
Senior Member
 
Join Date: Oct 2002
Posts: 2,657

Quote:
Originally Posted by Jawed View Post
G92 and G94 both have the same GDDR bus width, 256-bit. Seems possible that GF104 and GF106 are the same, both with 256 bits. Fillrate is still king.

Then GF106 would have a single GPC, maximum of 192 MADs per clock.
I think that makes zero sense. GF104 isn't particularly fillrate/ROP limited to begin with (there's not really that much difference between the 768MB and 1024MB versions), and if you only have 4 SMs you can only output 8 (color) pixels per clock anyway.
I totally don't understand why a 128-bit / 1 GPC / 4×48-core-SM chip would be that big, though...
Old 05-Aug-2010, 09:29   #6430
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,948

Quote:
Originally Posted by mczak View Post
I think that makes zero sense. GF104 isn't particularly fillrate/ROP limited to begin with (there's not really that much difference between the 768MB and 1024MB versions),
Yes, 4-10% margin in favour of the latter is not much. Maybe running both at ~800MHz would show more difference?

Quote:
and if you only have 4 SMs you can only output 8 (color) pixels per clock anyway.
GF100 is a bit strange like this...

Quote:
I totally don't understand why a 128-bit / 1 GPC / 4×48-core-SM chip would be that big, though...
I think the ROPs/MCs (and related stuff that only appears 3 times) in the central area of GF100 are only about 25% of that central area, i.e. the central area (excluding ROPs/MCs etc.) in GF104 is really costly, roughly as much as a GPC.
__________________
Can it play WoW?
Old 05-Aug-2010, 11:11   #6431
DavidGraham
Member
 
Join Date: Dec 2009
Posts: 893

Quote:
Thank you for your feedback.

In one of our previous articles several years ago, we carried out an image-quality investigation and found that ATI and Nvidia use different methods of transparency antialiasing (TAA). Due to those differences, the actual quality of ATI's quality (super-sampling) setting is closer to Nvidia's multi-sampling. In fact, ATI's TAA SS is not pure super-sampling.

We also found that no matter what test you run, the worst performance hit from enabling TAA is around 2%. Modern graphics cards have a very advanced, AA-optimized architecture, which makes not only TAA but also some FSAA modes have minimal impact on performance. Since not all scenes feature alpha textures in quantities substantial enough to tangibly impact performance, the TAA mode is hardly crucial at all, which is why we simply tend to set image quality to similar levels.

We stay committed to our previous findings. Since it is impossible to make games look the same on ATI Radeon and Nvidia GeForce, we simply tend to choose driver settings so that image quality is as close as possible to the maximum.

Xbitlabs' official response from the comment section:

http://www.xbitlabs.com/discussion/6404.html

Maybe they were referring to this one:
http://www.xbitlabs.com/articles/vid...ossfire_5.html

The link compares GT200 vs. HD4xxx; they should have made a contemporary comparison, because HD5xxx supposedly improved SSAA quality.

Also:
Quote:
We also found that no matter what test you run, the worst performance hit from enabling TAA is around 2%.
Could anyone validate this claim?

Old 05-Aug-2010, 13:45   #6432
mczak
Senior Member
 
Join Date: Oct 2002
Posts: 2,657

Quote:
Originally Posted by Jawed View Post
Yes, 4-10% margin in favour of the latter is not much. Maybe running both at ~800MHz would show more difference?
Likely, but probably not by that much. Besides, probably nothing stops Nvidia from upping the memory clock to 1 GHz on GF106-based products (if they want to use the same RAM chips; apparently faster ones can't be that much more expensive, considering AMD is using them on the HD5770). Of course that won't increase ROP throughput, but that's already overkill anyway IMHO.
Quote:
I think the ROPs/MCs (and related stuff that only appears 3 times) in the central area of GF100 are only about 25% of that central area, i.e. the central area (excluding ROPs/MCs etc.) in GF104 is really costly, roughly as much as a GPC.
Hmm, so what's taking up all the space? In theory, scaling back GF104 to GF106 should shrink the size a bit more than Cypress -> Juniper did, since there's less shared logic. But maybe that's not the case... If, however, GF106 is larger because it's more than half a GF104, my bet would be an additional SM, not a 256-bit memory interface.
Old 05-Aug-2010, 13:54   #6433
Space Giraffe
Junior Member
 
Join Date: Jun 2010
Posts: 16

Maybe that 2% figure came from a game with no foliage? Because AAA can cause big drops on my HD 5770. It's less of a hit than on my HD 4670, but it can still make quite a difference. I fired up some GPU-limited scenes in Mass Effect, and there was definitely a difference: with 8x MSAA I got 60 fps almost everywhere at 1440x900; with AAA added I dropped to 30, 20, even 15 in places.
Old 05-Aug-2010, 15:45   #6434
trinibwoy
Meh
 
Join Date: Mar 2004
Location: New York
Posts: 9,943

http://www.xtremesystems.org/forums/...d.php?t=256740

__________________
What the deuce!?
Old 05-Aug-2010, 15:49   #6435
Chalnoth
 
Join Date: May 2002
Location: New York, NY
Posts: 12,681

I don't know if it's just me, but that board image just seems off.
Old 05-Aug-2010, 16:13   #6436
Alexko
Senior Member
 
Join Date: Aug 2009
Posts: 2,678

Quote:
Originally Posted by trinibwoy View Post
OK, so the variable part of the PCI-E connector is supposed to be 71.65mm long according to Wikipedia, and in this picture I measured it at 973 pixels. So that's 1 mm = 13.58 pixels.

And I measured the die at 151 × 154 pixels, or 11.2 × 11.34 mm = 127mm². That's pretty big, 27% over Redwood (probably closer to ~23% without the packaging, and accounting for a slight distortion of the image). I guess NVIDIA doesn't intend to compete with Cedar at all.
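The pixel-ruler arithmetic can be reproduced directly (all input numbers are taken from the post; only the rounding differs slightly):

```python
# Estimate die size from a photo, using the PCI-E edge connector as a ruler.
ref_mm = 71.65     # variable part of the PCI-E connector (per Wikipedia)
ref_px = 973       # its measured length in the photo, in pixels

px_per_mm = ref_px / ref_mm           # ~13.58 px/mm
die_px = (151, 154)                   # measured die dimensions in pixels
die_mm = tuple(p / px_per_mm for p in die_px)
area = die_mm[0] * die_mm[1]          # ~126 mm² (the post rounds to 127)

print(f"{px_per_mm:.2f} px/mm, die ≈ {die_mm[0]:.1f} × {die_mm[1]:.1f} mm ≈ {area:.0f} mm²")
```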
Old 05-Aug-2010, 16:29   #6437
mczak
Senior Member
 
Join Date: Oct 2002
Posts: 2,657

Quote:
Originally Posted by trinibwoy View Post
The memory chips are interesting: 2 Gbit DDR3, 800 MHz, 16-bit. Either this board has another 4 of these chips on the back (and 2GB of memory, which is certainly total overkill for this performance class), or it's going to be very bandwidth-constrained (a 64-bit DDR3 interface; with such a memory interface it certainly couldn't be more than a Cedar competitor, no matter the die size...). Maybe the chip would support much faster GDDR5, though, and this is just the low-end board.
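The bandwidth consequence of the two layouts can be sketched quickly (assuming the "800 MHz" marking is the DDR3 I/O clock, i.e. 1600 MT/s effective; that reading is an assumption):

```python
# Peak DRAM bandwidth = bytes per transfer * transfers per second.
# Assumption: "800 MHz" DDR3 chips transfer twice per clock (1600 MT/s).
def bandwidth_gbs(bus_bits, clock_mhz, transfers_per_clock=2):
    return bus_bits / 8 * clock_mhz * transfers_per_clock / 1000

print(bandwidth_gbs(64, 800))    # 64-bit board as pictured: 12.8 GB/s
print(bandwidth_gbs(128, 800))   # with 4 more chips on the back: 25.6 GB/s
```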
Old 05-Aug-2010, 16:48   #6438
mczak
Senior Member
 
Join Date: Oct 2002
Posts: 2,657

Quote:
Originally Posted by Alexko View Post
OK, so the variable part of the PCI-E connector is supposed to be 71.65mm long according to Wikipedia, and in this picture I measured it at 973 pixels. So that's 1 mm = 13.58 pixels.

And I measured the die at 151 × 154 pixels, or 11.2 × 11.34 mm = 127mm². That's pretty big, 27% over Redwood (probably closer to ~23% without the packaging, and accounting for a slight distortion of the image). I guess NVIDIA doesn't intend to compete with Cedar at all.
Well, that would be smaller than GT215. So if this can indeed compete with Redwood (read: it needs to be a bit faster than GT215 in the fastest possible configuration), I think that would be very good: more features, more performance, smaller. A ~25% or ~20mm² die-size difference compared to the competition isn't going to cost that much more.
The board shown, though, certainly isn't an HD5670 competitor...
Old 05-Aug-2010, 19:59   #6439
fbuffer
Naughty Boy!
 
Join Date: Apr 2010
Posts: 90

Quote:
Originally Posted by mczak View Post
Well, that would be smaller than GT215. So if this can indeed compete with Redwood (read: it needs to be a bit faster than GT215 in the fastest possible configuration), I think that would be very good: more features, more performance, smaller. A ~25% or ~20mm² die-size difference compared to the competition isn't going to cost that much more.
The board shown, though, certainly isn't an HD5670 competitor...
With the update to GDDR5 for the 5500 series, I wonder if it would compete with that either in such a configuration. Anyway, maybe it's just me and my aging eyes, but the text on the GPU ("GF108-200-A1 QUAL SAMPLE") looks a bit odd even considering the image distortion; the shadow-like effect doesn't look like it's at the correct perspective given the camera lens position. But as I said, it could be my eyes. In particular, "QUAL SAMPLE" shouldn't have much, if any, apparent shadow (impression/etching), as it would be almost directly under the lens.
Old 05-Aug-2010, 20:10   #6440
no-X
Senior Member
 
Join Date: May 2005
Posts: 2,086

It's in the image center, so it's not affected by barrel distortion.
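That follows from the usual radial-distortion model, in which displacement grows with distance from the optical centre; a minimal one-coefficient sketch (the coefficient k is arbitrary):

```python
# Simple radial (barrel) distortion model: r' = r * (1 + k * r**2).
# At the optical centre (r = 0) the displacement vanishes entirely.
def distorted_radius(r, k=-0.2):
    # negative k gives barrel distortion (image bulges outward)
    return r * (1 + k * r * r)

for r in (0.0, 0.5, 1.0):   # normalised distance from the image centre
    print(r, distorted_radius(r))
```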
__________________
Sorry for my English. But I hope it's better than your Czech
Old 05-Aug-2010, 20:30   #6441
fbuffer
Naughty Boy!
 
Join Date: Apr 2010
Posts: 90

Quote:
Originally Posted by no-X View Post
It's in the image center, so it's not affected by barrel distortion.
That's my point: there is distortion where there shouldn't be any.
Old 05-Aug-2010, 21:08   #6442
no-X
Senior Member
 
Join Date: May 2005
Posts: 2,086

Maybe it's even more complex. The card isn't aligned with the image, the photo isn't taken perfectly upright, and the lens is very poor (it probably exhibits more than just barrel distortion).

I think that because we were fooled by the puppy boards, we now see fakes everywhere, even though there are none.

I find it more interesting that several pictures of various nVidia products were leaked at almost the same time. I can't believe this is just an accident.
__________________
Sorry for my English. But I hope it's better than your Czech
Old 06-Aug-2010, 11:36   #6443
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,948

Quote:
Originally Posted by mczak View Post
Hmm so what's taking up all the space?
Apart from the one bit of the central area which doesn't have ROPs/MCs (and so is probably analogue/digital outputs plus PCI Express), a lot of it repeats 4 times (i.e. once per GPC). Then there's that nice square thing occupying the centre.

Quote:
In theory, scaling back GF104 to GF106 should shrink the size a bit more than Cypress -> Juniper did, since there's less shared logic. But maybe that's not the case... If, however, GF106 is larger because it's more than half a GF104, my bet would be an additional SM, not a 256-bit memory interface.
Honestly, I'm too lazy to measure stuff off the GF100 die.

I think it's possible to measure the ROPs/MCs plus the DVI/PCI-Express in the centre. Then the remainder of the central section could be scaled in one or two ways according to GPC-count. The physical stuff around the perimeter of GF100 can be scaled fairly easily too.

Then you just have to decide on the size of a GPC in GF104, versus those in GF100...

GF100 appears to have quite a bit of dead space (much like GT200 has a lot of dead space, bordering logic blocks).

Jawed
__________________
Can it play WoW?
Old 06-Aug-2010, 17:08   #6444
Ethatron
Member
 
Join Date: Jan 2010
Posts: 422

Quote:
Originally Posted by Jawed View Post
GF100 appears to have quite a bit of Dead Space (much like GT200 has a lot of Dead Space, bordering logic blocks).
Jawed
I knew it, it's the aliens all along!
Old 06-Aug-2010, 18:09   #6445
Man from Atlantis
Member
 
Join Date: Jul 2010
Location: Istanbul
Posts: 728

Quote:
Originally Posted by mczak View Post
The memory chips are interesting: 2 Gbit DDR3, 800 MHz, 16-bit. Either this board has another 4 of these chips on the back (and 2GB of memory, which is certainly total overkill for this performance class), or it's going to be very bandwidth-constrained (a 64-bit DDR3 interface; with such a memory interface it certainly couldn't be more than a Cedar competitor, no matter the die size...). Maybe the chip would support much faster GDDR5, though, and this is just the low-end board.
It's 2GB... 128-bit DDR3.

http://www.hynix.com/datasheet/eng/g...&m=3&s=2&RK=27
Old 07-Aug-2010, 03:09   #6446
mczak
Senior Member
 
Join Date: Oct 2002
Posts: 2,657

Quote:
Originally Posted by Man from Atlantis View Post
It's 2GB... 128-bit DDR3.
I don't know what you're trying to say. Sure, the chips are 2 Gbit, 800 MHz, 16-bit; bog-standard DDR3. I don't get, though, how you infer that there are another 4 chips on the back and hence that it's 128-bit...
Old 07-Aug-2010, 08:49   #6447
DavidGraham
Member
 
Join Date: Dec 2009
Posts: 893

Again, but this time from Tom's Hardware, using the latest drivers, 4x AA:

1- The GTX480 has a 36% advantage over the HD5870 in STALKER: CoP @1920x1080, increasing to 58% @2560x1600 (FB limit for certain).

2- The GTX480 has a 26% advantage over the HD5870 in Dirt 2 @1920x1080, decreasing to 13% @2560x1600.

3- The GTX480 has a 21% advantage over the HD5870 in AvP @1920x1080 (but they used the benchmark, not the actual game), and @2560x1600 the lead increases to 22%.

4- The GTX480 has a 12% advantage over the HD5870 in Crysis (Very High) @1920x1080, increasing to 40% @2560x1600 (FB limit again).

5- In COD4 the GTX480 is 29% faster @1920x1080, and 18% @2560x1600.

http://www.tomshardware.com/reviews/...-480,2694.html

And from AnandTech, GTX480 vs. HD5870:

STALKER: CoP: 16% @1920x1080, increasing to 29% @2560x1600 (FB limit again)

Dirt 2: 25% @1920x1080, decreasing to 9% @2560x1600

BF:BC2: only 9% @1920x1080, decreasing to 2% @2560x1600; however, they used another test area from the game representing a worst-case scenario (the waterfall bench), where the GTX480 came out 64% faster @2560x1600 with 4x AA (FB limit again, I think).

http://www.anandtech.com/show/3836/m...-gtx-470-sli/5

In 4 of the 8 tests at 2560x1600, the GTX480 owed its lead to the frame-buffer limitation of the HD5870; however, that doesn't explain the advantage at 1920x1080 (maybe driver enhancements).
Old 07-Aug-2010, 10:07   #6448
Man from Atlantis
Member
 
Join Date: Jul 2010
Location: Istanbul
Posts: 728




http://vga.zol.com.cn/190/1903529.html
Old 07-Aug-2010, 10:53   #6449
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,948

So, only 128-bit and only 4 SMs?

Maybe it's so huge because there are two GPCs?

Why is it showing D3D10.1?
__________________
Can it play WoW?
Old 07-Aug-2010, 11:00   #6450
Alexko
Senior Member
 
Join Date: Aug 2009
Posts: 2,678

The clocks are surprisingly high, too…
